Column         Type           Length range
in_source_id   stringlengths  13-58
issue          stringlengths  3-241k
before_files   listlengths    0-3
after_files    listlengths    0-3
pr_diff        stringlengths  109-107M
ansible__ansible-32912
packet_device not working with ipxe_script_url set ##### ISSUE TYPE - Bug Report ##### COMPONENT NAME modules/cloud/packet ##### ANSIBLE VERSION ``` ansible 2.4.1.0 config file = /home/krist/.ansible.cfg configured module search path = [u'/home/krist/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python2.7/site-packages/ansible executable location = bin/ansible python version = 2.7.14 (default, Nov 2 2017, 18:42:05) [GCC 7.2.1 20170915 (Red Hat 7.2.1-2)] ``` ##### CONFIGURATION ``` ANSIBLE_PIPELINING(/home/krist/Work/LAB/pakket/ansible.cfg) = True DEFAULT_HOST_LIST(/home/krist/Work/LAB/pakket/ansible.cfg) = [u'/home/krist/Work/LAB/pakket/inventory'] DEFAULT_ROLES_PATH(/home/krist/Work/LAB/pakket/ansible.cfg) = [u'/home/krist/Work/LAB/pakket/roles'] DEFAULT_VAULT_PASSWORD_FILE(/home/krist/Work/LAB/pakket/ansible.cfg) = /home/krist/.ansible/password ``` ##### OS / ENVIRONMENT Fedora 26, Ansible from git devel branch. ##### SUMMARY packet_device: Creating a packet host with ipxeboot does not work. ##### STEPS TO REPRODUCE Install the packet.net CLI tools. Create a group_vars/all/main.yaml with correct values for location, api key and project id. Try to provision a host with a playbook: ```yaml - name: create rhev lab hosts: localhost tasks: - packet_sshkey: key_file: "{{ lookup('env','HOME') + '/.ssh/id_rsa.pub' }}" label: default key - packet_device: project_id: "{{ project_id }}" hostnames: "{{ item }}" operating_system: custom_ipxe ipxe_script_url: http://boot.example.com/rhvh/boot.ipxe plan: baremetal_0 facility: "{{ location }}" auth_token: "{{ api_key }}" with_items: - rhvh1 ``` ##### EXPECTED RESULTS Host is provisioned and attempts a ipxe boot with the URL provided. ##### ACTUAL RESULTS ``` task path: /home/krist/Work/LAB/pakket/create_rhvh_lab.yaml:9 Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/packet/packet_device.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: krist <127.0.0.1> EXEC /bin/sh -c '/usr/bin/python && sleep 0' failed: [localhost] (item=rhvh1) => { "changed": false, "failed": true, "invocation": { "module_args": { "auth_token": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "facility": "ams1", "hostnames": "rhvh1", "ipxe_script_url": "http://boot.example.com/rhvh/boot.ipxe", "operating_system": "custom_ipxe", "plan": "baremetal_0", "project_id": "be6b7156-3c89-447c-b46e-ee376809a3d2" } }, "item": "rhvh1", "msg": "parameters are mutually exclusive: ('ipxe_script_url', 'operating_system')" ``` I assumed that this just meant that I should not define operating_system when ipxe_script_url is set. So I also tested with a playbook where I had removed the operating_system parameter. There the result was: ``` task path: /home/krist/Work/LAB/pakket/create_rhvh_lab.yaml:9 Using module file /usr/lib/python2.7/site-packages/ansible/modules/cloud/packet/packet_device.py <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: krist <127.0.0.1> EXEC /bin/sh -c '/usr/bin/python && sleep 0' The full traceback is: Traceback (most recent call last): File "/tmp/ansible_qyWyXe/ansible_module_packet_device.py", line 640, in main module.exit_json(**act_on_devices(module, packet_conn, state)) File "/tmp/ansible_qyWyXe/ansible_module_packet_device.py", line 575, in act_on_devices for n in create_hostnames] File "/tmp/ansible_qyWyXe/ansible_module_packet_device.py", line 445, in create_single_device % param) Exception: operating_system parameter is required for new device. 
failed: [localhost] (item=rhvh1) => { "changed": false, "failed": true, "invocation": { "module_args": { "always_pxe": false, "auth_token": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "count": 1, "count_offset": 1, "device_ids": null, "facility": "ams1", "features": null, "hostnames": [ "rhvh1" ], "ipxe_script_url": "http://home.kri.st/rhvh/boot.ipxe", "locked": false, "operating_system": null, "plan": "baremetal_0", "project_id": "be6b7156-3c89-447c-b46e-ee376809a3d2", "state": "present", "user_data": null, "wait_for_public_IPv": null, "wait_timeout": 900 } }, "item": "rhvh1", "msg": "failed to set device state present, error: operating_system parameter is required for new device." } ``` I think that packet_device should either allow both operating_system and ipxe_script_url to be set, or otherwise just automatically set operating_system to custom_ipxe when ipxe_script_url is set. Fix Packet guide to comply with latest version of the packet module ##### SUMMARY This PR fixes the Packet Guide doc to follow the latest merged changes in the packet_device module. ##### ISSUE TYPE - Docs Pull Request ##### COMPONENT NAME docs/docsite/rst/guide_packet.rst ##### ANSIBLE VERSION <!--- Paste verbatim output from "ansible --version" between quotes below --> ``` ansible 2.5.0 (fix-packet-guide-to-comply-with-latest-device-module acdda6f020) last updated 2017/10/06 15:03:35 (GMT +300) config file = None configured module search path = [u'/home/tomk/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules'] ansible python module location = /home/tomk/ansible/lib/ansible executable location = /home/tomk/ansible/bin/ansible python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609] ```
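The second suggestion in the report (defaulting `operating_system` to `custom_ipxe` whenever `ipxe_script_url` is given) could look roughly like the sketch below. This is only an illustration of the reporter's proposal, not the module's actual behaviour; the parameter names are taken from the module's argument spec shown in the files that follow.

```python
# Hypothetical sketch of the proposed defaulting; not packet_device's
# actual logic. Parameter names (ipxe_script_url, operating_system)
# match the module's argument spec.
def resolve_operating_system(params):
    ipxe_script_url = params.get('ipxe_script_url')
    operating_system = params.get('operating_system')
    if ipxe_script_url and not operating_system:
        # iPXE provisioning implies the custom_ipxe OS slug
        return 'custom_ipxe'
    return operating_system


# Example: the reporter's failing invocation would then resolve to custom_ipxe
print(resolve_operating_system(
    {'ipxe_script_url': 'http://boot.example.com/rhvh/boot.ipxe'}))
```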
[ { "content": "#!/usr/bin/python\n# (c) 2016, Tomas Karasek <[email protected]>\n# (c) 2016, Matt Baldwin <[email protected]>\n# (c) 2016, Thibaud Morel l'Horset <[email protected]>\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\nDOCUMENTATION = '''\n---\nmodule: packet_device\n\nshort_description: Manage a bare metal server in the Packet Host.\n\ndescription:\n - Manage a bare metal server in the Packet Host (a \"device\" in the API terms).\n - When the machine is created it can optionally wait for public IP address, or for active state.\n - This module has a dependency on packet >= 1.0.\n - API is documented at U(https://www.packet.net/developers/api/devices).\n\nversion_added: \"2.3\"\n\nauthor:\n - Tomas Karasek (@t0mk) <[email protected]>\n - Matt Baldwin <[email protected]>\n - Thibaud Morel l'Horset <[email protected]>\n\noptions:\n auth_token:\n description:\n - Packet api token. You can also supply it in env var C(PACKET_API_TOKEN).\n\n count:\n description:\n - The number of devices to create. Count number can be included in hostname via the %d string formatter.\n default: 1\n\n count_offset:\n description:\n - From which number to start the count.\n default: 1\n\n device_ids:\n description:\n - List of device IDs on which to operate.\n\n facility:\n description:\n - Facility slug for device creation. See Packet API for current list - U(https://www.packet.net/developers/api/facilities/).\n\n features:\n description:\n - Dict with \"features\" for device creation. See Packet API docs for details.\n\n hostnames:\n description:\n - A hostname of a device, or a list of hostnames.\n - If given string or one-item list, you can use the C(\"%d\") Python string format to expand numbers from I(count).\n - If only one hostname, it might be expanded to list if I(count)>1.\n aliases: [name]\n\n locked:\n description:\n - Whether to lock a created device.\n default: false\n version_added: \"2.4\"\n aliases: [lock]\n\n operating_system:\n description:\n - OS slug for device creation. See Packet API for current list - U(https://www.packet.net/developers/api/operatingsystems/).\n\n plan:\n description:\n - Plan slug for device creation. 
See Packet API for current list - U(https://www.packet.net/developers/api/plans/).\n\n project_id:\n description:\n - ID of project of the device.\n required: true\n\n state:\n description:\n - Desired state of the device.\n - If set to C(present) (the default), the module call will return immediately after the device-creating HTTP request successfully returns.\n - If set to C(active), the module call will block until all the specified devices are in state active due to the Packet API, or until I(wait_timeout).\n choices: [present, absent, active, inactive, rebooted]\n default: present\n\n user_data:\n description:\n - Userdata blob made available to the machine\n\n wait:\n description:\n - Whether to wait for the instance to be assigned IP address before returning.\n - This option has been deprecated in favor of C(wait_for_public_IPv).\n default: false\n\n wait_for_public_IPv:\n description:\n - Whether to wait for the instance to be assigned a public IPv4/IPv6 address.\n - If set to 4, it will wait until IPv4 is assigned to the instance.\n - If set to 6, wait until public IPv6 is assigned to the instance.\n choices: [4,6]\n version_added: \"2.4\"\n\n wait_timeout:\n description:\n - How long (seconds) to wait either for automatic IP address assignment, or for the device to reach the C(active) I(state).\n - If I(wait_for_public_IPv) is set and I(state) is C(active), the module will wait for both events consequently, applying the timeout twice.\n default: 900\n ipxe_script_url:\n description:\n - URL of custom iPXE script for provisioning.\n - More about custome iPXE for Packet devices at U(https://help.packet.net/technical/infrastructure/custom-ipxe).\n version_added: \"2.4\"\n always_pxe:\n description:\n - Persist PXE as the first boot option.\n - Normally, the PXE process happens only on the first boot. Set this arg to have your device continuously boot to iPXE.\n default: false\n version_added: \"2.4\"\n\n\nrequirements:\n - \"packet-python >= 1.35\"\n\nnotes:\n - Doesn't support check mode.\n\n'''\n\nEXAMPLES = '''\n# All the examples assume that you have your Packet api token in env var PACKET_API_TOKEN.\n# You can also pass it to the auth_token parameter of the module instead.\n\n# Creating devices\n\n- name: create 1 device\n hosts: localhost\n tasks:\n - packet_device:\n project_id: 89b497ee-5afc-420a-8fb5-56984898f4df\n hostnames: myserver\n operating_system: ubuntu_16_04\n plan: baremetal_0\n facility: sjc1\n\n# Create the same device and wait until it is in state \"active\", (when it's\n# ready for other API operations). 
Fail if the devices in not \"active\" in\n# 10 minutes.\n\n- name: create device and wait up to 10 minutes for active state\n hosts: localhost\n tasks:\n - packet_device:\n project_id: 89b497ee-5afc-420a-8fb5-56984898f4df\n hostnames: myserver\n operating_system: ubuntu_16_04\n plan: baremetal_0\n facility: sjc1\n state: active\n wait_timeout: 600\n\n- name: create 3 ubuntu devices called server-01, server-02 and server-03\n hosts: localhost\n tasks:\n - packet_device:\n project_id: 89b497ee-5afc-420a-8fb5-56984898f4df\n hostnames: server-%02d\n count: 3\n operating_system: ubuntu_16_04\n plan: baremetal_0\n facility: sjc1\n\n- name: Create 3 coreos devices with userdata, wait until they get IPs and then wait for SSH\n hosts: localhost\n tasks:\n - name: create 3 devices and register their facts\n packet_device:\n hostnames: [coreos-one, coreos-two, coreos-three]\n operating_system: coreos_stable\n plan: baremetal_0\n facility: ewr1\n locked: true\n project_id: 89b497ee-5afc-420a-8fb5-56984898f4df\n wait_for_public_IPv: 4\n user_data: |\n #cloud-config\n ssh_authorized_keys:\n - {{ lookup('file', 'my_packet_sshkey') }}\n coreos:\n etcd:\n discovery: https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3\n addr: $private_ipv4:4001\n peer-addr: $private_ipv4:7001\n fleet:\n public-ip: $private_ipv4\n units:\n - name: etcd.service\n command: start\n - name: fleet.service\n command: start\n register: newhosts\n\n - name: wait for ssh\n wait_for:\n delay: 1\n host: \"{{ item.public_ipv4 }}\"\n port: 22\n state: started\n timeout: 500\n with_items: \"{{ newhosts.devices }}\"\n\n\n# Other states of devices\n\n- name: remove 3 devices by uuid\n hosts: localhost\n tasks:\n - packet_device:\n project_id: 89b497ee-5afc-420a-8fb5-56984898f4df\n state: absent\n device_ids:\n - 1fb4faf8-a638-4ac7-8f47-86fe514c30d8\n - 2eb4faf8-a638-4ac7-8f47-86fe514c3043\n - 6bb4faf8-a638-4ac7-8f47-86fe514c301f\n'''\n\nRETURN = '''\nchanged:\n description: True if a device was altered in any way (created, modified or removed)\n type: bool\n sample: True\n returned: success\n\ndevices:\n description: Information about each device that was processed\n type: list\n sample: '[{\"hostname\": \"my-server.com\", \"id\": \"2a5122b9-c323-4d5c-b53c-9ad3f54273e7\",\n \"public_ipv4\": \"147.229.15.12\", \"private-ipv4\": \"10.0.15.12\",\n \"tags\": [], \"locked\": false, \"state\": \"provisioning\",\n \"public_ipv6\": \"\"2604:1380:2:5200::3\"}]'\n returned: success\n''' # NOQA\n\n\nimport os\nimport re\nimport time\nimport uuid\nimport traceback\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils._text import to_native\n\nHAS_PACKET_SDK = True\ntry:\n import packet\nexcept ImportError:\n HAS_PACKET_SDK = False\n\nfrom ansible.module_utils.basic import AnsibleModule\n\n\nNAME_RE = '({0}|{0}{1}*{0})'.format('[a-zA-Z0-9]', '[a-zA-Z0-9\\-]')\nHOSTNAME_RE = '({0}\\.)*{0}$'.format(NAME_RE)\nMAX_DEVICES = 100\n\nPACKET_DEVICE_STATES = (\n 'queued',\n 'provisioning',\n 'failed',\n 'powering_on',\n 'active',\n 'powering_off',\n 'inactive',\n 'rebooting',\n)\n\nPACKET_API_TOKEN_ENV_VAR = \"PACKET_API_TOKEN\"\n\n\nALLOWED_STATES = ['absent', 'active', 'inactive', 'rebooted', 'present']\n\n\ndef serialize_device(device):\n \"\"\"\n Standard represenation for a device as returned by various tasks::\n\n {\n 'id': 'device_id'\n 'hostname': 'device_hostname',\n 'tags': [],\n 'locked': false,\n 'state': 'provisioning',\n 'ip_addresses': [\n {\n \"address\": \"147.75.194.227\",\n \"address_family\": 4,\n 
\"public\": true\n },\n {\n \"address\": \"2604:1380:2:5200::3\",\n \"address_family\": 6,\n \"public\": true\n },\n {\n \"address\": \"10.100.11.129\",\n \"address_family\": 4,\n \"public\": false\n }\n ],\n \"private_ipv4\": \"10.100.11.129\",\n \"public_ipv4\": \"147.75.194.227\",\n \"public_ipv6\": \"2604:1380:2:5200::3\",\n }\n\n \"\"\"\n device_data = {}\n device_data['id'] = device.id\n device_data['hostname'] = device.hostname\n device_data['tags'] = device.tags\n device_data['locked'] = device.locked\n device_data['state'] = device.state\n device_data['ip_addresses'] = [\n {\n 'address': addr_data['address'],\n 'address_family': addr_data['address_family'],\n 'public': addr_data['public'],\n }\n for addr_data in device.ip_addresses\n ]\n # Also include each IPs as a key for easier lookup in roles.\n # Key names:\n # - public_ipv4\n # - public_ipv6\n # - private_ipv4\n # - private_ipv6 (if there is one)\n for ipdata in device_data['ip_addresses']:\n if ipdata['public']:\n if ipdata['address_family'] == 6:\n device_data['public_ipv6'] = ipdata['address']\n elif ipdata['address_family'] == 4:\n device_data['public_ipv4'] = ipdata['address']\n elif not ipdata['public']:\n if ipdata['address_family'] == 6:\n # Packet doesn't give public ipv6 yet, but maybe one\n # day they will\n device_data['private_ipv6'] = ipdata['address']\n elif ipdata['address_family'] == 4:\n device_data['private_ipv4'] = ipdata['address']\n return device_data\n\n\ndef is_valid_hostname(hostname):\n return re.match(HOSTNAME_RE, hostname) is not None\n\n\ndef is_valid_uuid(myuuid):\n try:\n val = uuid.UUID(myuuid, version=4)\n except ValueError:\n return False\n return str(val) == myuuid\n\n\ndef listify_string_name_or_id(s):\n if ',' in s:\n return s.split(',')\n else:\n return [s]\n\n\ndef get_hostname_list(module):\n # hostname is a list-typed param, so I guess it should return list\n # (and it does, in Ansible 2.2.1) but in order to be defensive,\n # I keep here the code to convert an eventual string to list\n hostnames = module.params.get('hostnames')\n count = module.params.get('count')\n count_offset = module.params.get('count_offset')\n if isinstance(hostnames, str):\n hostnames = listify_string_name_or_id(hostnames)\n if not isinstance(hostnames, list):\n raise Exception(\"name %s is not convertible to list\" % hostnames)\n\n # at this point, hostnames is a list\n hostnames = [h.strip() for h in hostnames]\n\n if (len(hostnames) > 1) and (count > 1):\n _msg = (\"If you set count>1, you should only specify one hostname \"\n \"with the %d formatter, not a list of hostnames.\")\n raise Exception(_msg)\n\n if (len(hostnames) == 1) and (count > 0):\n hostname_spec = hostnames[0]\n count_range = range(count_offset, count_offset + count)\n if re.search(\"%\\d{0,2}d\", hostname_spec):\n hostnames = [hostname_spec % i for i in count_range]\n elif count > 1:\n hostname_spec = '%s%%02d' % hostname_spec\n hostnames = [hostname_spec % i for i in count_range]\n\n for hn in hostnames:\n if not is_valid_hostname(hn):\n raise Exception(\"Hostname '%s' does not seem to be valid\" % hn)\n\n if len(hostnames) > MAX_DEVICES:\n raise Exception(\"You specified too many hostnames, max is %d\" %\n MAX_DEVICES)\n return hostnames\n\n\ndef get_device_id_list(module):\n device_ids = module.params.get('device_ids')\n\n if isinstance(device_ids, str):\n device_ids = listify_string_name_or_id(device_ids)\n\n device_ids = [di.strip() for di in device_ids]\n\n for di in device_ids:\n if not is_valid_uuid(di):\n raise Exception(\"Device 
ID '%s' does not seem to be valid\" % di)\n\n if len(device_ids) > MAX_DEVICES:\n raise Exception(\"You specified too many devices, max is %d\" %\n MAX_DEVICES)\n return device_ids\n\n\ndef create_single_device(module, packet_conn, hostname):\n\n for param in ('hostnames', 'operating_system', 'plan'):\n if not module.params.get(param):\n raise Exception(\"%s parameter is required for new device.\"\n % param)\n project_id = module.params.get('project_id')\n plan = module.params.get('plan')\n user_data = module.params.get('user_data')\n facility = module.params.get('facility')\n operating_system = module.params.get('operating_system')\n locked = module.params.get('locked')\n ipxe_script_url = module.params.get('ipxe_script_url')\n always_pxe = module.params.get('always_pxe')\n device = packet_conn.create_device(\n project_id=project_id,\n hostname=hostname,\n plan=plan,\n facility=facility,\n operating_system=operating_system,\n userdata=user_data,\n locked=locked)\n return device\n\n\ndef refresh_device_list(module, packet_conn, devices):\n device_ids = [d.id for d in devices]\n new_device_list = get_existing_devices(module, packet_conn)\n return [d for d in new_device_list if d.id in device_ids]\n\n\ndef wait_for_devices_active(module, packet_conn, watched_devices):\n wait_timeout = module.params.get('wait_timeout')\n wait_timeout = time.time() + wait_timeout\n refreshed = watched_devices\n while wait_timeout > time.time():\n refreshed = refresh_device_list(module, packet_conn, watched_devices)\n if all(d.state == 'active' for d in refreshed):\n return refreshed\n time.sleep(5)\n raise Exception(\"Waiting for state \\\"active\\\" timed out for devices: %s\"\n % [d.hostname for d in refreshed if d.state != \"active\"])\n\n\ndef wait_for_public_IPv(module, packet_conn, created_devices):\n\n def has_public_ip(addr_list, ip_v):\n return any([a['public'] and a['address_family'] == ip_v and\n a['address'] for a in addr_list])\n\n def all_have_public_ip(ds, ip_v):\n return all([has_public_ip(d.ip_addresses, ip_v) for d in ds])\n\n address_family = module.params.get('wait_for_public_IPv')\n\n wait_timeout = module.params.get('wait_timeout')\n wait_timeout = time.time() + wait_timeout\n while wait_timeout > time.time():\n refreshed = refresh_device_list(module, packet_conn, created_devices)\n if all_have_public_ip(refreshed, address_family):\n return refreshed\n time.sleep(5)\n\n raise Exception(\"Waiting for IPv%d address timed out. 
Hostnames: %s\"\n % (address_family, [d.hostname for d in created_devices]))\n\n\ndef get_existing_devices(module, packet_conn):\n project_id = module.params.get('project_id')\n return packet_conn.list_devices(\n project_id, params={\n 'per_page': MAX_DEVICES})\n\n\ndef get_specified_device_identifiers(module):\n if module.params.get('device_ids'):\n device_id_list = get_device_id_list(module)\n return {'ids': device_id_list, 'hostnames': []}\n elif module.params.get('hostnames'):\n hostname_list = get_hostname_list(module)\n return {'hostnames': hostname_list, 'ids': []}\n\n\ndef act_on_devices(module, packet_conn, target_state):\n specified_identifiers = get_specified_device_identifiers(module)\n existing_devices = get_existing_devices(module, packet_conn)\n changed = False\n create_hostnames = []\n if target_state in ['present', 'active', 'rebooted']:\n # states where we might create non-existing specified devices\n existing_devices_names = [ed.hostname for ed in existing_devices]\n create_hostnames = [hn for hn in specified_identifiers['hostnames']\n if hn not in existing_devices_names]\n\n process_devices = [d for d in existing_devices\n if (d.id in specified_identifiers['ids']) or\n (d.hostname in specified_identifiers['hostnames'])]\n\n if target_state != 'present':\n _absent_state_map = {}\n for s in PACKET_DEVICE_STATES:\n _absent_state_map[s] = packet.Device.delete\n\n state_map = {\n 'absent': _absent_state_map,\n 'active': {'inactive': packet.Device.power_on,\n 'provisioning': None, 'rebooting': None\n },\n 'inactive': {'active': packet.Device.power_off},\n 'rebooted': {'active': packet.Device.reboot,\n 'inactive': packet.Device.power_on,\n 'provisioning': None, 'rebooting': None\n },\n }\n\n # First do non-creation actions, it might be faster\n for d in process_devices:\n if d.state == target_state:\n continue\n if d.state in state_map[target_state]:\n api_operation = state_map[target_state].get(d.state)\n if api_operation is not None:\n api_operation(d)\n changed = True\n else:\n _msg = (\n \"I don't know how to process existing device %s from state %s \"\n \"to state %s\" %\n (d.hostname, d.state, target_state))\n raise Exception(_msg)\n\n # At last create missing devices\n created_devices = []\n if create_hostnames:\n created_devices = [create_single_device(module, packet_conn, n)\n for n in create_hostnames]\n if module.params.get('wait_for_public_IPv'):\n created_devices = wait_for_public_IPv(\n module, packet_conn, created_devices)\n changed = True\n\n processed_devices = created_devices + process_devices\n if target_state == 'active':\n processed_devices = wait_for_devices_active(\n module, packet_conn, processed_devices)\n\n return {\n 'changed': changed,\n 'devices': [serialize_device(d) for d in processed_devices]\n }\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n auth_token=dict(default=os.environ.get(PACKET_API_TOKEN_ENV_VAR),\n no_log=True),\n count=dict(type='int', default=1),\n count_offset=dict(type='int', default=1),\n device_ids=dict(type='list'),\n facility=dict(),\n features=dict(type='dict'),\n hostnames=dict(type='list', aliases=['name']),\n locked=dict(type='bool', default=False, aliases=['lock']),\n operating_system=dict(),\n plan=dict(),\n project_id=dict(required=True),\n state=dict(choices=ALLOWED_STATES, default='present'),\n user_data=dict(default=None),\n wait_for_public_IPv=dict(type='int', choices=[4, 6]),\n wait_timeout=dict(type='int', default=900),\n ipxe_script_url=dict(default=''),\n always_pxe=dict(type='bool', 
default=False),\n ),\n required_one_of=[('device_ids', 'hostnames',)],\n mutually_exclusive=[\n ('always_pxe', 'operating_system'),\n ('ipxe_script_url', 'operating_system'),\n ('hostnames', 'device_ids'),\n ('count', 'device_ids'),\n ('count_offset', 'device_ids'),\n ]\n )\n\n if not HAS_PACKET_SDK:\n module.fail_json(msg='packet required for this module')\n\n if not module.params.get('auth_token'):\n _fail_msg = (\"if Packet API token is not in environment variable %s, \"\n \"the auth_token parameter is required\" %\n PACKET_API_TOKEN_ENV_VAR)\n module.fail_json(msg=_fail_msg)\n\n auth_token = module.params.get('auth_token')\n\n packet_conn = packet.Manager(auth_token=auth_token)\n\n state = module.params.get('state')\n\n try:\n module.exit_json(**act_on_devices(module, packet_conn, state))\n except Exception as e:\n module.fail_json(msg='failed to set device state %s, error: %s' %\n (state, to_native(e)), exception=traceback.format_exc())\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/cloud/packet/packet_device.py" } ]
[ { "content": "#!/usr/bin/python\n# (c) 2016, Tomas Karasek <[email protected]>\n# (c) 2016, Matt Baldwin <[email protected]>\n# (c) 2016, Thibaud Morel l'Horset <[email protected]>\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\nDOCUMENTATION = '''\n---\nmodule: packet_device\n\nshort_description: Manage a bare metal server in the Packet Host.\n\ndescription:\n - Manage a bare metal server in the Packet Host (a \"device\" in the API terms).\n - When the machine is created it can optionally wait for public IP address, or for active state.\n - This module has a dependency on packet >= 1.0.\n - API is documented at U(https://www.packet.net/developers/api/devices).\n\nversion_added: \"2.3\"\n\nauthor:\n - Tomas Karasek (@t0mk) <[email protected]>\n - Matt Baldwin <[email protected]>\n - Thibaud Morel l'Horset <[email protected]>\n\noptions:\n auth_token:\n description:\n - Packet api token. You can also supply it in env var C(PACKET_API_TOKEN).\n\n count:\n description:\n - The number of devices to create. Count number can be included in hostname via the %d string formatter.\n default: 1\n\n count_offset:\n description:\n - From which number to start the count.\n default: 1\n\n device_ids:\n description:\n - List of device IDs on which to operate.\n\n facility:\n description:\n - Facility slug for device creation. See Packet API for current list - U(https://www.packet.net/developers/api/facilities/).\n\n features:\n description:\n - Dict with \"features\" for device creation. See Packet API docs for details.\n\n hostnames:\n description:\n - A hostname of a device, or a list of hostnames.\n - If given string or one-item list, you can use the C(\"%d\") Python string format to expand numbers from I(count).\n - If only one hostname, it might be expanded to list if I(count)>1.\n aliases: [name]\n\n locked:\n description:\n - Whether to lock a created device.\n default: false\n version_added: \"2.4\"\n aliases: [lock]\n\n operating_system:\n description:\n - OS slug for device creation. See Packet API for current list - U(https://www.packet.net/developers/api/operatingsystems/).\n\n plan:\n description:\n - Plan slug for device creation. 
See Packet API for current list - U(https://www.packet.net/developers/api/plans/).\n\n project_id:\n description:\n - ID of project of the device.\n required: true\n\n state:\n description:\n - Desired state of the device.\n - If set to C(present) (the default), the module call will return immediately after the device-creating HTTP request successfully returns.\n - If set to C(active), the module call will block until all the specified devices are in state active due to the Packet API, or until I(wait_timeout).\n choices: [present, absent, active, inactive, rebooted]\n default: present\n\n user_data:\n description:\n - Userdata blob made available to the machine\n\n wait:\n description:\n - Whether to wait for the instance to be assigned IP address before returning.\n - This option has been deprecated in favor of C(wait_for_public_IPv).\n default: false\n\n wait_for_public_IPv:\n description:\n - Whether to wait for the instance to be assigned a public IPv4/IPv6 address.\n - If set to 4, it will wait until IPv4 is assigned to the instance.\n - If set to 6, wait until public IPv6 is assigned to the instance.\n choices: [4,6]\n version_added: \"2.4\"\n\n wait_timeout:\n description:\n - How long (seconds) to wait either for automatic IP address assignment, or for the device to reach the C(active) I(state).\n - If I(wait_for_public_IPv) is set and I(state) is C(active), the module will wait for both events consequently, applying the timeout twice.\n default: 900\n ipxe_script_url:\n description:\n - URL of custom iPXE script for provisioning.\n - More about custome iPXE for Packet devices at U(https://help.packet.net/technical/infrastructure/custom-ipxe).\n version_added: \"2.4\"\n always_pxe:\n description:\n - Persist PXE as the first boot option.\n - Normally, the PXE process happens only on the first boot. Set this arg to have your device continuously boot to iPXE.\n default: false\n version_added: \"2.4\"\n\n\nrequirements:\n - \"packet-python >= 1.35\"\n\nnotes:\n - Doesn't support check mode.\n\n'''\n\nEXAMPLES = '''\n# All the examples assume that you have your Packet api token in env var PACKET_API_TOKEN.\n# You can also pass it to the auth_token parameter of the module instead.\n\n# Creating devices\n\n- name: create 1 device\n hosts: localhost\n tasks:\n - packet_device:\n project_id: 89b497ee-5afc-420a-8fb5-56984898f4df\n hostnames: myserver\n operating_system: ubuntu_16_04\n plan: baremetal_0\n facility: sjc1\n\n# Create the same device and wait until it is in state \"active\", (when it's\n# ready for other API operations). 
Fail if the devices in not \"active\" in\n# 10 minutes.\n\n- name: create device and wait up to 10 minutes for active state\n hosts: localhost\n tasks:\n - packet_device:\n project_id: 89b497ee-5afc-420a-8fb5-56984898f4df\n hostnames: myserver\n operating_system: ubuntu_16_04\n plan: baremetal_0\n facility: sjc1\n state: active\n wait_timeout: 600\n\n- name: create 3 ubuntu devices called server-01, server-02 and server-03\n hosts: localhost\n tasks:\n - packet_device:\n project_id: 89b497ee-5afc-420a-8fb5-56984898f4df\n hostnames: server-%02d\n count: 3\n operating_system: ubuntu_16_04\n plan: baremetal_0\n facility: sjc1\n\n- name: Create 3 coreos devices with userdata, wait until they get IPs and then wait for SSH\n hosts: localhost\n tasks:\n - name: create 3 devices and register their facts\n packet_device:\n hostnames: [coreos-one, coreos-two, coreos-three]\n operating_system: coreos_stable\n plan: baremetal_0\n facility: ewr1\n locked: true\n project_id: 89b497ee-5afc-420a-8fb5-56984898f4df\n wait_for_public_IPv: 4\n user_data: |\n #cloud-config\n ssh_authorized_keys:\n - {{ lookup('file', 'my_packet_sshkey') }}\n coreos:\n etcd:\n discovery: https://discovery.etcd.io/6a28e078895c5ec737174db2419bb2f3\n addr: $private_ipv4:4001\n peer-addr: $private_ipv4:7001\n fleet:\n public-ip: $private_ipv4\n units:\n - name: etcd.service\n command: start\n - name: fleet.service\n command: start\n register: newhosts\n\n - name: wait for ssh\n wait_for:\n delay: 1\n host: \"{{ item.public_ipv4 }}\"\n port: 22\n state: started\n timeout: 500\n with_items: \"{{ newhosts.devices }}\"\n\n\n# Other states of devices\n\n- name: remove 3 devices by uuid\n hosts: localhost\n tasks:\n - packet_device:\n project_id: 89b497ee-5afc-420a-8fb5-56984898f4df\n state: absent\n device_ids:\n - 1fb4faf8-a638-4ac7-8f47-86fe514c30d8\n - 2eb4faf8-a638-4ac7-8f47-86fe514c3043\n - 6bb4faf8-a638-4ac7-8f47-86fe514c301f\n'''\n\nRETURN = '''\nchanged:\n description: True if a device was altered in any way (created, modified or removed)\n type: bool\n sample: True\n returned: success\n\ndevices:\n description: Information about each device that was processed\n type: list\n sample: '[{\"hostname\": \"my-server.com\", \"id\": \"2a5122b9-c323-4d5c-b53c-9ad3f54273e7\",\n \"public_ipv4\": \"147.229.15.12\", \"private-ipv4\": \"10.0.15.12\",\n \"tags\": [], \"locked\": false, \"state\": \"provisioning\",\n \"public_ipv6\": \"\"2604:1380:2:5200::3\"}]'\n returned: success\n''' # NOQA\n\n\nimport os\nimport re\nimport time\nimport uuid\nimport traceback\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils._text import to_native\n\nHAS_PACKET_SDK = True\ntry:\n import packet\nexcept ImportError:\n HAS_PACKET_SDK = False\n\nfrom ansible.module_utils.basic import AnsibleModule\n\n\nNAME_RE = '({0}|{0}{1}*{0})'.format('[a-zA-Z0-9]', '[a-zA-Z0-9\\-]')\nHOSTNAME_RE = '({0}\\.)*{0}$'.format(NAME_RE)\nMAX_DEVICES = 100\n\nPACKET_DEVICE_STATES = (\n 'queued',\n 'provisioning',\n 'failed',\n 'powering_on',\n 'active',\n 'powering_off',\n 'inactive',\n 'rebooting',\n)\n\nPACKET_API_TOKEN_ENV_VAR = \"PACKET_API_TOKEN\"\n\n\nALLOWED_STATES = ['absent', 'active', 'inactive', 'rebooted', 'present']\n\n\ndef serialize_device(device):\n \"\"\"\n Standard represenation for a device as returned by various tasks::\n\n {\n 'id': 'device_id'\n 'hostname': 'device_hostname',\n 'tags': [],\n 'locked': false,\n 'state': 'provisioning',\n 'ip_addresses': [\n {\n \"address\": \"147.75.194.227\",\n \"address_family\": 4,\n 
\"public\": true\n },\n {\n \"address\": \"2604:1380:2:5200::3\",\n \"address_family\": 6,\n \"public\": true\n },\n {\n \"address\": \"10.100.11.129\",\n \"address_family\": 4,\n \"public\": false\n }\n ],\n \"private_ipv4\": \"10.100.11.129\",\n \"public_ipv4\": \"147.75.194.227\",\n \"public_ipv6\": \"2604:1380:2:5200::3\",\n }\n\n \"\"\"\n device_data = {}\n device_data['id'] = device.id\n device_data['hostname'] = device.hostname\n device_data['tags'] = device.tags\n device_data['locked'] = device.locked\n device_data['state'] = device.state\n device_data['ip_addresses'] = [\n {\n 'address': addr_data['address'],\n 'address_family': addr_data['address_family'],\n 'public': addr_data['public'],\n }\n for addr_data in device.ip_addresses\n ]\n # Also include each IPs as a key for easier lookup in roles.\n # Key names:\n # - public_ipv4\n # - public_ipv6\n # - private_ipv4\n # - private_ipv6 (if there is one)\n for ipdata in device_data['ip_addresses']:\n if ipdata['public']:\n if ipdata['address_family'] == 6:\n device_data['public_ipv6'] = ipdata['address']\n elif ipdata['address_family'] == 4:\n device_data['public_ipv4'] = ipdata['address']\n elif not ipdata['public']:\n if ipdata['address_family'] == 6:\n # Packet doesn't give public ipv6 yet, but maybe one\n # day they will\n device_data['private_ipv6'] = ipdata['address']\n elif ipdata['address_family'] == 4:\n device_data['private_ipv4'] = ipdata['address']\n return device_data\n\n\ndef is_valid_hostname(hostname):\n return re.match(HOSTNAME_RE, hostname) is not None\n\n\ndef is_valid_uuid(myuuid):\n try:\n val = uuid.UUID(myuuid, version=4)\n except ValueError:\n return False\n return str(val) == myuuid\n\n\ndef listify_string_name_or_id(s):\n if ',' in s:\n return s.split(',')\n else:\n return [s]\n\n\ndef get_hostname_list(module):\n # hostname is a list-typed param, so I guess it should return list\n # (and it does, in Ansible 2.2.1) but in order to be defensive,\n # I keep here the code to convert an eventual string to list\n hostnames = module.params.get('hostnames')\n count = module.params.get('count')\n count_offset = module.params.get('count_offset')\n if isinstance(hostnames, str):\n hostnames = listify_string_name_or_id(hostnames)\n if not isinstance(hostnames, list):\n raise Exception(\"name %s is not convertible to list\" % hostnames)\n\n # at this point, hostnames is a list\n hostnames = [h.strip() for h in hostnames]\n\n if (len(hostnames) > 1) and (count > 1):\n _msg = (\"If you set count>1, you should only specify one hostname \"\n \"with the %d formatter, not a list of hostnames.\")\n raise Exception(_msg)\n\n if (len(hostnames) == 1) and (count > 0):\n hostname_spec = hostnames[0]\n count_range = range(count_offset, count_offset + count)\n if re.search(\"%\\d{0,2}d\", hostname_spec):\n hostnames = [hostname_spec % i for i in count_range]\n elif count > 1:\n hostname_spec = '%s%%02d' % hostname_spec\n hostnames = [hostname_spec % i for i in count_range]\n\n for hn in hostnames:\n if not is_valid_hostname(hn):\n raise Exception(\"Hostname '%s' does not seem to be valid\" % hn)\n\n if len(hostnames) > MAX_DEVICES:\n raise Exception(\"You specified too many hostnames, max is %d\" %\n MAX_DEVICES)\n return hostnames\n\n\ndef get_device_id_list(module):\n device_ids = module.params.get('device_ids')\n\n if isinstance(device_ids, str):\n device_ids = listify_string_name_or_id(device_ids)\n\n device_ids = [di.strip() for di in device_ids]\n\n for di in device_ids:\n if not is_valid_uuid(di):\n raise Exception(\"Device 
ID '%s' does not seem to be valid\" % di)\n\n if len(device_ids) > MAX_DEVICES:\n raise Exception(\"You specified too many devices, max is %d\" %\n MAX_DEVICES)\n return device_ids\n\n\ndef create_single_device(module, packet_conn, hostname):\n\n for param in ('hostnames', 'operating_system', 'plan'):\n if not module.params.get(param):\n raise Exception(\"%s parameter is required for new device.\"\n % param)\n project_id = module.params.get('project_id')\n plan = module.params.get('plan')\n user_data = module.params.get('user_data')\n facility = module.params.get('facility')\n operating_system = module.params.get('operating_system')\n locked = module.params.get('locked')\n ipxe_script_url = module.params.get('ipxe_script_url')\n always_pxe = module.params.get('always_pxe')\n device = packet_conn.create_device(\n project_id=project_id,\n hostname=hostname,\n plan=plan,\n facility=facility,\n operating_system=operating_system,\n userdata=user_data,\n locked=locked,\n ipxe_script_url=ipxe_script_url,\n always_pxe=always_pxe)\n return device\n\n\ndef refresh_device_list(module, packet_conn, devices):\n device_ids = [d.id for d in devices]\n new_device_list = get_existing_devices(module, packet_conn)\n return [d for d in new_device_list if d.id in device_ids]\n\n\ndef wait_for_devices_active(module, packet_conn, watched_devices):\n wait_timeout = module.params.get('wait_timeout')\n wait_timeout = time.time() + wait_timeout\n refreshed = watched_devices\n while wait_timeout > time.time():\n refreshed = refresh_device_list(module, packet_conn, watched_devices)\n if all(d.state == 'active' for d in refreshed):\n return refreshed\n time.sleep(5)\n raise Exception(\"Waiting for state \\\"active\\\" timed out for devices: %s\"\n % [d.hostname for d in refreshed if d.state != \"active\"])\n\n\ndef wait_for_public_IPv(module, packet_conn, created_devices):\n\n def has_public_ip(addr_list, ip_v):\n return any([a['public'] and a['address_family'] == ip_v and\n a['address'] for a in addr_list])\n\n def all_have_public_ip(ds, ip_v):\n return all([has_public_ip(d.ip_addresses, ip_v) for d in ds])\n\n address_family = module.params.get('wait_for_public_IPv')\n\n wait_timeout = module.params.get('wait_timeout')\n wait_timeout = time.time() + wait_timeout\n while wait_timeout > time.time():\n refreshed = refresh_device_list(module, packet_conn, created_devices)\n if all_have_public_ip(refreshed, address_family):\n return refreshed\n time.sleep(5)\n\n raise Exception(\"Waiting for IPv%d address timed out. 
Hostnames: %s\"\n % (address_family, [d.hostname for d in created_devices]))\n\n\ndef get_existing_devices(module, packet_conn):\n project_id = module.params.get('project_id')\n return packet_conn.list_devices(\n project_id, params={\n 'per_page': MAX_DEVICES})\n\n\ndef get_specified_device_identifiers(module):\n if module.params.get('device_ids'):\n device_id_list = get_device_id_list(module)\n return {'ids': device_id_list, 'hostnames': []}\n elif module.params.get('hostnames'):\n hostname_list = get_hostname_list(module)\n return {'hostnames': hostname_list, 'ids': []}\n\n\ndef act_on_devices(module, packet_conn, target_state):\n specified_identifiers = get_specified_device_identifiers(module)\n existing_devices = get_existing_devices(module, packet_conn)\n changed = False\n create_hostnames = []\n if target_state in ['present', 'active', 'rebooted']:\n # states where we might create non-existing specified devices\n existing_devices_names = [ed.hostname for ed in existing_devices]\n create_hostnames = [hn for hn in specified_identifiers['hostnames']\n if hn not in existing_devices_names]\n\n process_devices = [d for d in existing_devices\n if (d.id in specified_identifiers['ids']) or\n (d.hostname in specified_identifiers['hostnames'])]\n\n if target_state != 'present':\n _absent_state_map = {}\n for s in PACKET_DEVICE_STATES:\n _absent_state_map[s] = packet.Device.delete\n\n state_map = {\n 'absent': _absent_state_map,\n 'active': {'inactive': packet.Device.power_on,\n 'provisioning': None, 'rebooting': None\n },\n 'inactive': {'active': packet.Device.power_off},\n 'rebooted': {'active': packet.Device.reboot,\n 'inactive': packet.Device.power_on,\n 'provisioning': None, 'rebooting': None\n },\n }\n\n # First do non-creation actions, it might be faster\n for d in process_devices:\n if d.state == target_state:\n continue\n if d.state in state_map[target_state]:\n api_operation = state_map[target_state].get(d.state)\n if api_operation is not None:\n api_operation(d)\n changed = True\n else:\n _msg = (\n \"I don't know how to process existing device %s from state %s \"\n \"to state %s\" %\n (d.hostname, d.state, target_state))\n raise Exception(_msg)\n\n # At last create missing devices\n created_devices = []\n if create_hostnames:\n created_devices = [create_single_device(module, packet_conn, n)\n for n in create_hostnames]\n if module.params.get('wait_for_public_IPv'):\n created_devices = wait_for_public_IPv(\n module, packet_conn, created_devices)\n changed = True\n\n processed_devices = created_devices + process_devices\n if target_state == 'active':\n processed_devices = wait_for_devices_active(\n module, packet_conn, processed_devices)\n\n return {\n 'changed': changed,\n 'devices': [serialize_device(d) for d in processed_devices]\n }\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n auth_token=dict(default=os.environ.get(PACKET_API_TOKEN_ENV_VAR),\n no_log=True),\n count=dict(type='int', default=1),\n count_offset=dict(type='int', default=1),\n device_ids=dict(type='list'),\n facility=dict(),\n features=dict(type='dict'),\n hostnames=dict(type='list', aliases=['name']),\n locked=dict(type='bool', default=False, aliases=['lock']),\n operating_system=dict(),\n plan=dict(),\n project_id=dict(required=True),\n state=dict(choices=ALLOWED_STATES, default='present'),\n user_data=dict(default=None),\n wait_for_public_IPv=dict(type='int', choices=[4, 6]),\n wait_timeout=dict(type='int', default=900),\n ipxe_script_url=dict(default=''),\n always_pxe=dict(type='bool', 
default=False),\n ),\n required_one_of=[('device_ids', 'hostnames',)],\n mutually_exclusive=[\n ('always_pxe', 'operating_system'),\n ('ipxe_script_url', 'operating_system'),\n ('hostnames', 'device_ids'),\n ('count', 'device_ids'),\n ('count_offset', 'device_ids'),\n ]\n )\n\n if not HAS_PACKET_SDK:\n module.fail_json(msg='packet required for this module')\n\n if not module.params.get('auth_token'):\n _fail_msg = (\"if Packet API token is not in environment variable %s, \"\n \"the auth_token parameter is required\" %\n PACKET_API_TOKEN_ENV_VAR)\n module.fail_json(msg=_fail_msg)\n\n auth_token = module.params.get('auth_token')\n\n packet_conn = packet.Manager(auth_token=auth_token)\n\n state = module.params.get('state')\n\n try:\n module.exit_json(**act_on_devices(module, packet_conn, state))\n except Exception as e:\n module.fail_json(msg='failed to set device state %s, error: %s' %\n (state, to_native(e)), exception=traceback.format_exc())\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/cloud/packet/packet_device.py" } ]
diff --git a/lib/ansible/modules/cloud/packet/packet_device.py b/lib/ansible/modules/cloud/packet/packet_device.py index b2879cb04a2c49..3ae8065e42e26b 100644 --- a/lib/ansible/modules/cloud/packet/packet_device.py +++ b/lib/ansible/modules/cloud/packet/packet_device.py @@ -458,7 +458,9 @@ def create_single_device(module, packet_conn, hostname): facility=facility, operating_system=operating_system, userdata=user_data, - locked=locked) + locked=locked, + ipxe_script_url=ipxe_script_url, + always_pxe=always_pxe) return device
open-telemetry__opentelemetry-python-1813
OpenTelemetry distro as a default distro for OpenTelemetry Instrumentation

The `opentelemetry-instrumentation` auto-instrumentation doesn't work without installing `opentelemetry-distro`, because the component initialisation is done in the distro package. How would a regular user know about this? Shouldn't the OpenTelemetry distro be the default, with an option to let users pick a different one?
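The documentation added in the linked PR (see the diff further down) addresses exactly this gap. As a minimal sketch taken from that example, assuming a local Collector is listening on the default OTLP port: install `opentelemetry-distro[otlp]` together with `opentelemetry-instrumentation`, then run an unmodified script under `opentelemetry-instrument`.

```python
# no_configuration.py -- reproduced from the distro example added in the PR.
# Prerequisites (per those docs):
#   pip install opentelemetry-distro[otlp] opentelemetry-instrumentation
# Run with:
#   opentelemetry-instrument python no_configuration.py
from opentelemetry import trace

with trace.get_tracer(__name__).start_as_current_span("foo"):
    with trace.get_tracer(__name__).start_as_current_span("bar"):
        print("baz")
```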
[ { "content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# otcollector.py\nimport time\n\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (\n OTLPSpanExporter,\n)\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor\n\nspan_exporter = OTLPSpanExporter(\n # optional\n # endpoint:=\"myCollectorURL:4317\",\n # credentials=ChannelCredentials(credentials),\n # headers=((\"metadata\", \"metadata\")),\n)\ntracer_provider = TracerProvider()\ntrace.set_tracer_provider(tracer_provider)\nspan_processor = BatchSpanProcessor(span_exporter)\ntracer_provider.add_span_processor(span_processor)\n\n# Configure the tracer to use the collector exporter\ntracer = trace.get_tracer_provider().get_tracer(__name__)\n\nwith tracer.start_as_current_span(\"foo\"):\n print(\"Hello world!\")\n", "path": "docs/getting_started/otlpcollector_example.py" } ]
[ { "content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# otcollector.py\nimport time\n\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (\n OTLPSpanExporter,\n)\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor\n\nspan_exporter = OTLPSpanExporter(\n # optional\n # endpoint=\"myCollectorURL:4317\",\n # credentials=ChannelCredentials(credentials),\n # headers=((\"metadata\", \"metadata\")),\n)\ntracer_provider = TracerProvider()\ntrace.set_tracer_provider(tracer_provider)\nspan_processor = BatchSpanProcessor(span_exporter)\ntracer_provider.add_span_processor(span_processor)\n\n# Configure the tracer to use the collector exporter\ntracer = trace.get_tracer_provider().get_tracer(__name__)\n\nwith tracer.start_as_current_span(\"foo\"):\n print(\"Hello world!\")\n", "path": "docs/getting_started/otlpcollector_example.py" } ]
diff --git a/docs/examples/auto-instrumentation/README.rst b/docs/examples/auto-instrumentation/README.rst index 9298c9bef2f..23fb47b3964 100644 --- a/docs/examples/auto-instrumentation/README.rst +++ b/docs/examples/auto-instrumentation/README.rst @@ -45,7 +45,7 @@ Manually instrumented server return "served" Server not instrumented manually -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ``server_uninstrumented.py`` @@ -57,7 +57,7 @@ Server not instrumented manually return "served" Prepare ------------ +------- Execute the following example in a separate virtual environment. Run the following commands to prepare for auto-instrumentation: @@ -69,7 +69,7 @@ Run the following commands to prepare for auto-instrumentation: $ source auto_instrumentation/bin/activate Install ------------- +------- Run the following commands to install the appropriate packages. The ``opentelemetry-instrumentation`` package provides several @@ -90,7 +90,7 @@ a server as well as the process of executing an automatically instrumented server. Execute a manually instrumented server -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Execute the server in two separate consoles, one to run each of the scripts that make up this example: @@ -145,7 +145,7 @@ similar to the following example: } Execute an automatically instrumented server -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Stop the execution of ``server_instrumented.py`` with ``ctrl + c`` and run the following command instead: @@ -208,7 +208,7 @@ You can see that both outputs are the same because automatic instrumentation doe exactly what manual instrumentation does. Instrumentation while debugging -=============================== +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The debug mode can be enabled in the Flask app like this: @@ -226,3 +226,11 @@ reloader. To run instrumentation while the debug mode is enabled, set the if __name__ == "__main__": app.run(port=8082, debug=True, use_reloader=False) + + +Additional resources +~~~~~~~~~~~~~~~~~~~~ + +In order to send telemetry to an OpenTelemetry Collector without doing any +additional configuration, read about the `OpenTelemetry Distro <../distro/README.html>`_ +package. diff --git a/docs/examples/distro/README.rst b/docs/examples/distro/README.rst new file mode 100644 index 00000000000..f58680609ab --- /dev/null +++ b/docs/examples/distro/README.rst @@ -0,0 +1,104 @@ +OpenTelemetry Distro +==================== + +In order to make using OpenTelemetry and auto-instrumentation as quick as possible without sacrificing flexibility, +OpenTelemetry distros provide a mechanism to automatically configure some of the more common options for users. By +harnessing their power, users of OpenTelemetry can configure the components as they need. The ``opentelemetry-distro`` +package provides some defaults to users looking to get started, it configures: + +- the SDK TracerProvider +- a BatchSpanProcessor +- the OTLP ``SpanExporter`` to send data to an OpenTelemetry collector + +The package also provides a starting point for anyone interested in producing an alternative distro. The +interfaces implemented by the package are loaded by the auto-instrumentation via the ``opentelemetry_distro`` +and ``opentelemetry_configurator`` entry points to configure the application before any other code is +executed. 
+ +In order to automatically export data from OpenTelemetry to the OpenTelemetry collector, installing the +package will setup all the required entry points. + +.. code:: sh + + $ pip install opentelemetry-distro[otlp] opentelemetry-instrumentation + +Start the Collector locally to see data being exported. Write the following file: + +.. code-block:: yaml + + # /tmp/otel-collector-config.yaml + receivers: + otlp: + protocols: + grpc: + http: + exporters: + logging: + loglevel: debug + processors: + batch: + service: + pipelines: + traces: + receivers: [otlp] + exporters: [logging] + processors: [batch] + +Then start the Docker container: + +.. code-block:: sh + + docker run -p 4317:4317 \ + -v /tmp/otel-collector-config.yaml:/etc/otel-collector-config.yaml \ + otel/opentelemetry-collector:latest \ + --config=/etc/otel-collector-config.yaml + +The following code will create a span with no configuration. + +.. code:: python + + # no_configuration.py + from opentelemetry import trace + + with trace.get_tracer(__name__).start_as_current_span("foo"): + with trace.get_tracer(__name__).start_as_current_span("bar"): + print("baz") + +Lastly, run the ``no_configuration.py`` with the auto-instrumentation: + +.. code-block:: sh + + $ opentelemetry-instrument python no_configuration.py + +The resulting span will appear in the output from the collector and look similar to this: + +.. code-block:: sh + + Resource labels: + -> telemetry.sdk.language: STRING(python) + -> telemetry.sdk.name: STRING(opentelemetry) + -> telemetry.sdk.version: STRING(1.1.0) + -> service.name: STRING(unknown_service) + InstrumentationLibrarySpans #0 + InstrumentationLibrary __main__ + Span #0 + Trace ID : db3c99e5bfc50ef8be1773c3765e8845 + Parent ID : 0677126a4d110cb8 + ID : 3163b3022808ed1b + Name : bar + Kind : SPAN_KIND_INTERNAL + Start time : 2021-05-06 22:54:51.23063 +0000 UTC + End time : 2021-05-06 22:54:51.230684 +0000 UTC + Status code : STATUS_CODE_UNSET + Status message : + Span #1 + Trace ID : db3c99e5bfc50ef8be1773c3765e8845 + Parent ID : + ID : 0677126a4d110cb8 + Name : foo + Kind : SPAN_KIND_INTERNAL + Start time : 2021-05-06 22:54:51.230549 +0000 UTC + End time : 2021-05-06 22:54:51.230706 +0000 UTC + Status code : STATUS_CODE_UNSET + Status message : + diff --git a/docs/getting_started/otlpcollector_example.py b/docs/getting_started/otlpcollector_example.py index 48c0d32a594..71f9ed97541 100644 --- a/docs/getting_started/otlpcollector_example.py +++ b/docs/getting_started/otlpcollector_example.py @@ -24,7 +24,7 @@ span_exporter = OTLPSpanExporter( # optional - # endpoint:="myCollectorURL:4317", + # endpoint="myCollectorURL:4317", # credentials=ChannelCredentials(credentials), # headers=(("metadata", "metadata")), ) diff --git a/opentelemetry-distro/README.rst b/opentelemetry-distro/README.rst index 4189131fc26..80952839104 100644 --- a/opentelemetry-distro/README.rst +++ b/opentelemetry-distro/README.rst @@ -14,9 +14,10 @@ Installation pip install opentelemetry-distro -This package provides entrypoints to configure OpenTelemetry +This package provides entrypoints to configure OpenTelemetry. References ---------- * `OpenTelemetry Project <https://opentelemetry.io/>`_ +* `Example using opentelemetry-distro <https://opentelemetry-python.readthedocs.io/en/latest/examples/distro/README.html>`_
translate__translate-3683
setcontext is not working correctly for mounit

Calling setcontext on mounit currently does nothing, as it inherits this code from the base class:

``` python
def setcontext(self, context):
    """Set the message context"""
    pass
```

I'd expect it to properly update the context, as it does for other storages.
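A minimal sketch of the kind of override one might expect, assuming it should mirror how `mounit.getcontext()` joins `self.msgctxt` (see `translate/storage/mo.py` below); the project's actual fix may differ.

``` python
# Hypothetical, self-contained illustration of the missing override;
# not the project's actual patch.
class UnitSketch(object):
    def __init__(self):
        self.msgctxt = []  # mounit.__init__ initialises msgctxt the same way

    def getcontext(self):
        # Same joining behaviour as mounit.getcontext()
        if self.msgctxt is None:
            return None
        return "".join(self.msgctxt)

    def setcontext(self, context):
        """Set the message context"""
        # Store the context as a one-element list so getcontext()
        # returns it unchanged.
        self.msgctxt = [context]


unit = UnitSketch()
unit.setcontext("menu|file")
assert unit.getcontext() == "menu|file"
```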
[ { "content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2007 Zuza Software Foundation\n#\n# the function \"serialize\" was derived from Python v2.4\n# (Tools/i18n/msgfmt.py - function \"generate\"):\n# Written by Martin v. Löwis <[email protected]>\n# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation.\n# All rights reserved.\n# original license: Python Software Foundation (version 2)\n#\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n#\n\n\"\"\"Module for parsing Gettext .mo files for translation.\n\nThe coding of .mo files was produced from `Gettext documentation\n<http://www.gnu.org/software/gettext/manual/gettext.html#MO-Files>`_,\nPythons msgfmt.py and by observing and testing existing .mo files in the wild.\n\nThe hash algorithm is implemented for MO files, this should result in\nfaster access of the MO file. The hash is optional for Gettext\nand is not needed for reading or writing MO files, in this implementation\nit is always on and does produce sometimes different results to Gettext\nin very small files.\n\"\"\"\n\nimport array\nimport re\nimport six\nimport struct\n\nfrom translate.misc.multistring import multistring\nfrom translate.storage import base, poheader\n\n\nMO_MAGIC_NUMBER = 0x950412de\n\n\ndef mounpack(filename='messages.mo'):\n \"\"\"Helper to unpack Gettext MO files into a Python string\"\"\"\n with open(filename, 'rb') as fh:\n s = fh.read()\n print(\"\\\\x%02x\" * len(s) % tuple(map(ord, s)))\n\n\ndef my_swap4(result):\n c0 = (result >> 0) & 0xff\n c1 = (result >> 8) & 0xff\n c2 = (result >> 16) & 0xff\n c3 = (result >> 24) & 0xff\n\n return (c0 << 24) | (c1 << 16) | (c2 << 8) | c3\n\n\ndef hashpjw(str_param):\n HASHWORDBITS = 32\n hval = 0\n g = None\n s = str_param\n for s in str_param:\n hval = hval << 4\n hval += ord(s) if six.PY2 else s\n g = hval & 0xf << (HASHWORDBITS - 4)\n if (g != 0):\n hval = hval ^ g >> (HASHWORDBITS - 8)\n hval = hval ^ g\n return hval\n\n\ndef get_next_prime_number(start):\n # find the smallest prime number that is greater or equal \"start\"\n\n def is_prime(num):\n # special small numbers\n if (num < 2) or (num == 4):\n return False\n if (num == 2) or (num == 3):\n return True\n # check for numbers > 4\n for divider in range(2, num // 2):\n if num % divider == 0:\n return False\n return True\n\n candidate = start\n while not is_prime(candidate):\n candidate += 1\n return candidate\n\n\nclass mounit(base.TranslationUnit):\n \"\"\"A class representing a .mo translation message.\"\"\"\n\n def __init__(self, source=None, **kwargs):\n self.msgctxt = []\n self.msgidcomments = []\n super(mounit, self).__init__(source)\n\n def getcontext(self):\n \"\"\"Get the message context\"\"\"\n # Still need to handle KDE comments\n if self.msgctxt is None:\n return None\n return \"\".join(self.msgctxt)\n\n def isheader(self):\n \"\"\"Is this a header entry?\"\"\"\n return self.source == u\"\"\n\n def 
istranslatable(self):\n \"\"\"Is this message translateable?\"\"\"\n return bool(self.source)\n\n\nclass mofile(poheader.poheader, base.TranslationStore):\n \"\"\"A class representing a .mo file.\"\"\"\n\n UnitClass = mounit\n Name = \"Gettext MO file\"\n Mimetypes = [\"application/x-gettext-catalog\", \"application/x-mo\"]\n Extensions = [\"mo\", \"gmo\"]\n _binary = True\n\n def __init__(self, inputfile=None, **kwargs):\n super(mofile, self).__init__(**kwargs)\n self.filename = ''\n if inputfile is not None:\n self.parsestring(inputfile)\n\n def serialize(self, out):\n \"\"\"Output a string representation of the MO data file\"\"\"\n # check the header of this file for the copyright note of this function\n\n def add_to_hash_table(string, i):\n V = hashpjw(string)\n # Taken from gettext-0.17:gettext-tools/src/write-mo.c:408-409\n S = hash_size <= 2 and 3 or hash_size\n hash_cursor = V % S\n orig_hash_cursor = hash_cursor\n increment = 1 + (V % (S - 2))\n while True:\n index = hash_table[hash_cursor]\n if (index == 0):\n hash_table[hash_cursor] = i + 1\n break\n hash_cursor += increment\n hash_cursor = hash_cursor % S\n assert (hash_cursor != orig_hash_cursor)\n\n def lst_encode(lst, join_char=b''):\n return join_char.join([i.encode('utf-8') for i in lst])\n\n # hash_size should be the smallest prime number that is greater\n # or equal (4 / 3 * N) - where N is the number of keys/units.\n # see gettext-0.17:gettext-tools/src/write-mo.c:406\n hash_size = get_next_prime_number(int((len(self.units) * 4) / 3))\n if hash_size <= 2:\n hash_size = 3\n MESSAGES = {}\n for unit in self.units:\n # If the unit is not translated, we should rather omit it entirely\n if not unit.istranslated():\n continue\n if isinstance(unit.source, multistring):\n source = (lst_encode(unit.msgidcomments) +\n lst_encode(unit.source.strings, b\"\\0\"))\n else:\n source = lst_encode(unit.msgidcomments) + unit.source.encode('utf-8')\n if unit.msgctxt:\n source = lst_encode(unit.msgctxt) + b\"\\x04\" + source\n if isinstance(unit.target, multistring):\n target = lst_encode(unit.target.strings, b\"\\0\")\n else:\n target = unit.target.encode('utf-8')\n if unit.target:\n MESSAGES[source] = target\n # using \"I\" works for 32- and 64-bit systems, but not for 16-bit!\n hash_table = array.array(\"I\", [0] * hash_size)\n # the keys are sorted in the .mo file\n keys = sorted(MESSAGES.keys())\n offsets = []\n ids = strs = b''\n for i, id in enumerate(keys):\n # For each string, we need size and file offset. 
Each string is\n # NUL terminated; the NUL does not count into the size.\n # TODO: We don't do any encoding detection from the PO Header\n add_to_hash_table(id, i)\n string = MESSAGES[id] # id already encoded for use as dictionary key\n offsets.append((len(ids), len(id), len(strs), len(string)))\n ids = ids + id + b'\\0'\n strs = strs + string + b'\\0'\n output = ''\n # The header is 7 32-bit unsigned integers\n keystart = 7 * 4 + 16 * len(keys) + hash_size * 4\n # and the values start after the keys\n valuestart = keystart + len(ids)\n koffsets = []\n voffsets = []\n # The string table first has the list of keys, then the list of values.\n # Each entry has first the size of the string, then the file offset.\n for o1, l1, o2, l2 in offsets:\n koffsets = koffsets + [l1, o1 + keystart]\n voffsets = voffsets + [l2, o2 + valuestart]\n offsets = koffsets + voffsets\n output = struct.pack(\"Iiiiiii\",\n MO_MAGIC_NUMBER, # Magic\n 0, # Version\n len(keys), # # of entries\n 7 * 4, # start of key index\n 7 * 4 + len(keys) * 8, # start of value index\n hash_size, # size of hash table\n 7 * 4 + 2 * (len(keys) * 8)) # offset of hash table\n # additional data is not necessary for empty mo files\n if (len(keys) > 0):\n output = output + array.array(\"i\", offsets).tostring()\n output = output + hash_table.tostring()\n output = output + ids\n output = output + strs\n return out.write(output)\n\n def parse(self, input):\n \"\"\"parses the given file or file source string\"\"\"\n if hasattr(input, 'name'):\n self.filename = input.name\n elif not getattr(self, 'filename', ''):\n self.filename = ''\n if hasattr(input, \"read\"):\n mosrc = input.read()\n input.close()\n input = mosrc\n little, = struct.unpack(\"<L\", input[:4])\n big, = struct.unpack(\">L\", input[:4])\n if little == MO_MAGIC_NUMBER:\n endian = \"<\"\n elif big == MO_MAGIC_NUMBER:\n endian = \">\"\n else:\n raise ValueError(\"This is not an MO file\")\n magic, version_maj, version_min, lenkeys, startkey, \\\n startvalue, sizehash, offsethash = struct.unpack(\"%sLHHiiiii\" % endian,\n input[:(7 * 4)])\n if version_maj >= 1:\n raise base.ParseError(\"\"\"Unable to process version %d.%d MO files\"\"\" % (version_maj, version_min))\n for i in range(lenkeys):\n nextkey = startkey + (i * 2 * 4)\n nextvalue = startvalue + (i * 2 * 4)\n klength, koffset = struct.unpack(\"%sii\" % endian,\n input[nextkey:nextkey + (2 * 4)])\n vlength, voffset = struct.unpack(\"%sii\" % endian,\n input[nextvalue:nextvalue + (2 * 4)])\n source = input[koffset:koffset + klength]\n context = None\n if b\"\\x04\" in source:\n context, source = source.split(b\"\\x04\")\n # Still need to handle KDE comments\n if source == \"\":\n charset = re.search(b\"charset=([^\\\\s]+)\",\n input[voffset:voffset + vlength])\n if charset:\n self.encoding = charset.group(1)\n source = multistring([s.decode(self.encoding)\n for s in source.split(b\"\\0\")])\n target = multistring([s.decode(self.encoding)\n for s in input[voffset:voffset + vlength].split(b\"\\0\")])\n newunit = mounit(source)\n newunit.target = target\n if context is not None:\n newunit.msgctxt.append(context)\n self.addunit(newunit)\n", "path": "translate/storage/mo.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2007 Zuza Software Foundation\n#\n# the function \"serialize\" was derived from Python v2.4\n# (Tools/i18n/msgfmt.py - function \"generate\"):\n# Written by Martin v. Löwis <[email protected]>\n# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation.\n# All rights reserved.\n# original license: Python Software Foundation (version 2)\n#\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n#\n\n\"\"\"Module for parsing Gettext .mo files for translation.\n\nThe coding of .mo files was produced from `Gettext documentation\n<http://www.gnu.org/software/gettext/manual/gettext.html#MO-Files>`_,\nPythons msgfmt.py and by observing and testing existing .mo files in the wild.\n\nThe hash algorithm is implemented for MO files, this should result in\nfaster access of the MO file. The hash is optional for Gettext\nand is not needed for reading or writing MO files, in this implementation\nit is always on and does produce sometimes different results to Gettext\nin very small files.\n\"\"\"\n\nimport array\nimport re\nimport six\nimport struct\n\nfrom translate.misc.multistring import multistring\nfrom translate.storage import base, poheader\n\n\nMO_MAGIC_NUMBER = 0x950412de\n\n\ndef mounpack(filename='messages.mo'):\n \"\"\"Helper to unpack Gettext MO files into a Python string\"\"\"\n with open(filename, 'rb') as fh:\n s = fh.read()\n print(\"\\\\x%02x\" * len(s) % tuple(map(ord, s)))\n\n\ndef my_swap4(result):\n c0 = (result >> 0) & 0xff\n c1 = (result >> 8) & 0xff\n c2 = (result >> 16) & 0xff\n c3 = (result >> 24) & 0xff\n\n return (c0 << 24) | (c1 << 16) | (c2 << 8) | c3\n\n\ndef hashpjw(str_param):\n HASHWORDBITS = 32\n hval = 0\n g = None\n s = str_param\n for s in str_param:\n hval = hval << 4\n hval += ord(s) if six.PY2 else s\n g = hval & 0xf << (HASHWORDBITS - 4)\n if (g != 0):\n hval = hval ^ g >> (HASHWORDBITS - 8)\n hval = hval ^ g\n return hval\n\n\ndef get_next_prime_number(start):\n # find the smallest prime number that is greater or equal \"start\"\n\n def is_prime(num):\n # special small numbers\n if (num < 2) or (num == 4):\n return False\n if (num == 2) or (num == 3):\n return True\n # check for numbers > 4\n for divider in range(2, num // 2):\n if num % divider == 0:\n return False\n return True\n\n candidate = start\n while not is_prime(candidate):\n candidate += 1\n return candidate\n\n\nclass mounit(base.TranslationUnit):\n \"\"\"A class representing a .mo translation message.\"\"\"\n\n def __init__(self, source=None, **kwargs):\n self.msgctxt = []\n self.msgidcomments = []\n super(mounit, self).__init__(source)\n\n def getcontext(self):\n \"\"\"Get the message context\"\"\"\n # Still need to handle KDE comments\n if self.msgctxt is None:\n return None\n return \"\".join(self.msgctxt)\n\n def setcontext(self, context):\n self.msgctxt = [context]\n\n def isheader(self):\n \"\"\"Is this a 
header entry?\"\"\"\n return self.source == u\"\"\n\n def istranslatable(self):\n \"\"\"Is this message translateable?\"\"\"\n return bool(self.source)\n\n\nclass mofile(poheader.poheader, base.TranslationStore):\n \"\"\"A class representing a .mo file.\"\"\"\n\n UnitClass = mounit\n Name = \"Gettext MO file\"\n Mimetypes = [\"application/x-gettext-catalog\", \"application/x-mo\"]\n Extensions = [\"mo\", \"gmo\"]\n _binary = True\n\n def __init__(self, inputfile=None, **kwargs):\n super(mofile, self).__init__(**kwargs)\n self.filename = ''\n if inputfile is not None:\n self.parsestring(inputfile)\n\n def serialize(self, out):\n \"\"\"Output a string representation of the MO data file\"\"\"\n # check the header of this file for the copyright note of this function\n\n def add_to_hash_table(string, i):\n V = hashpjw(string)\n # Taken from gettext-0.17:gettext-tools/src/write-mo.c:408-409\n S = hash_size <= 2 and 3 or hash_size\n hash_cursor = V % S\n orig_hash_cursor = hash_cursor\n increment = 1 + (V % (S - 2))\n while True:\n index = hash_table[hash_cursor]\n if (index == 0):\n hash_table[hash_cursor] = i + 1\n break\n hash_cursor += increment\n hash_cursor = hash_cursor % S\n assert (hash_cursor != orig_hash_cursor)\n\n def lst_encode(lst, join_char=b''):\n return join_char.join([i.encode('utf-8') for i in lst])\n\n # hash_size should be the smallest prime number that is greater\n # or equal (4 / 3 * N) - where N is the number of keys/units.\n # see gettext-0.17:gettext-tools/src/write-mo.c:406\n hash_size = get_next_prime_number(int((len(self.units) * 4) / 3))\n if hash_size <= 2:\n hash_size = 3\n MESSAGES = {}\n for unit in self.units:\n # If the unit is not translated, we should rather omit it entirely\n if not unit.istranslated():\n continue\n if isinstance(unit.source, multistring):\n source = (lst_encode(unit.msgidcomments) +\n lst_encode(unit.source.strings, b\"\\0\"))\n else:\n source = lst_encode(unit.msgidcomments) + unit.source.encode('utf-8')\n if unit.msgctxt:\n source = lst_encode(unit.msgctxt) + b\"\\x04\" + source\n if isinstance(unit.target, multistring):\n target = lst_encode(unit.target.strings, b\"\\0\")\n else:\n target = unit.target.encode('utf-8')\n if unit.target:\n MESSAGES[source] = target\n # using \"I\" works for 32- and 64-bit systems, but not for 16-bit!\n hash_table = array.array(\"I\", [0] * hash_size)\n # the keys are sorted in the .mo file\n keys = sorted(MESSAGES.keys())\n offsets = []\n ids = strs = b''\n for i, id in enumerate(keys):\n # For each string, we need size and file offset. 
Each string is\n # NUL terminated; the NUL does not count into the size.\n # TODO: We don't do any encoding detection from the PO Header\n add_to_hash_table(id, i)\n string = MESSAGES[id] # id already encoded for use as dictionary key\n offsets.append((len(ids), len(id), len(strs), len(string)))\n ids = ids + id + b'\\0'\n strs = strs + string + b'\\0'\n output = ''\n # The header is 7 32-bit unsigned integers\n keystart = 7 * 4 + 16 * len(keys) + hash_size * 4\n # and the values start after the keys\n valuestart = keystart + len(ids)\n koffsets = []\n voffsets = []\n # The string table first has the list of keys, then the list of values.\n # Each entry has first the size of the string, then the file offset.\n for o1, l1, o2, l2 in offsets:\n koffsets = koffsets + [l1, o1 + keystart]\n voffsets = voffsets + [l2, o2 + valuestart]\n offsets = koffsets + voffsets\n output = struct.pack(\"Iiiiiii\",\n MO_MAGIC_NUMBER, # Magic\n 0, # Version\n len(keys), # # of entries\n 7 * 4, # start of key index\n 7 * 4 + len(keys) * 8, # start of value index\n hash_size, # size of hash table\n 7 * 4 + 2 * (len(keys) * 8)) # offset of hash table\n # additional data is not necessary for empty mo files\n if (len(keys) > 0):\n output = output + array.array(\"i\", offsets).tostring()\n output = output + hash_table.tostring()\n output = output + ids\n output = output + strs\n return out.write(output)\n\n def parse(self, input):\n \"\"\"parses the given file or file source string\"\"\"\n if hasattr(input, 'name'):\n self.filename = input.name\n elif not getattr(self, 'filename', ''):\n self.filename = ''\n if hasattr(input, \"read\"):\n mosrc = input.read()\n input.close()\n input = mosrc\n little, = struct.unpack(\"<L\", input[:4])\n big, = struct.unpack(\">L\", input[:4])\n if little == MO_MAGIC_NUMBER:\n endian = \"<\"\n elif big == MO_MAGIC_NUMBER:\n endian = \">\"\n else:\n raise ValueError(\"This is not an MO file\")\n magic, version_maj, version_min, lenkeys, startkey, \\\n startvalue, sizehash, offsethash = struct.unpack(\"%sLHHiiiii\" % endian,\n input[:(7 * 4)])\n if version_maj >= 1:\n raise base.ParseError(\"\"\"Unable to process version %d.%d MO files\"\"\" % (version_maj, version_min))\n for i in range(lenkeys):\n nextkey = startkey + (i * 2 * 4)\n nextvalue = startvalue + (i * 2 * 4)\n klength, koffset = struct.unpack(\"%sii\" % endian,\n input[nextkey:nextkey + (2 * 4)])\n vlength, voffset = struct.unpack(\"%sii\" % endian,\n input[nextvalue:nextvalue + (2 * 4)])\n source = input[koffset:koffset + klength]\n context = None\n if b\"\\x04\" in source:\n context, source = source.split(b\"\\x04\")\n # Still need to handle KDE comments\n if source == \"\":\n charset = re.search(b\"charset=([^\\\\s]+)\",\n input[voffset:voffset + vlength])\n if charset:\n self.encoding = charset.group(1)\n source = multistring([s.decode(self.encoding)\n for s in source.split(b\"\\0\")])\n target = multistring([s.decode(self.encoding)\n for s in input[voffset:voffset + vlength].split(b\"\\0\")])\n newunit = mounit(source)\n newunit.target = target\n if context is not None:\n newunit.msgctxt.append(context)\n self.addunit(newunit)\n", "path": "translate/storage/mo.py" } ]
diff --git a/translate/storage/mo.py b/translate/storage/mo.py index ad20515162..2a538fcc72 100644 --- a/translate/storage/mo.py +++ b/translate/storage/mo.py @@ -118,6 +118,9 @@ def getcontext(self): return None return "".join(self.msgctxt) + def setcontext(self, context): + self.msgctxt = [context] + def isheader(self): """Is this a header entry?""" return self.source == u"" diff --git a/translate/storage/test_mo.py b/translate/storage/test_mo.py index 9c14681198..a03911b213 100644 --- a/translate/storage/test_mo.py +++ b/translate/storage/test_mo.py @@ -9,6 +9,11 @@ class TestMOUnit(test_base.TestTranslationUnit): UnitClass = mo.mounit + def test_context(self): + unit = self.UnitClass("Message") + unit.setcontext('context') + assert unit.getcontext() == 'context' + posources = [ r''' @@ -124,6 +129,14 @@ def test_language(self): store.updateheader(add=True, Language="zu") assert store.gettargetlanguage() == "zu" + def test_context(self): + store = self.StoreClass() + unit = self.StoreClass.UnitClass('source') + unit.target = 'target' + unit.setcontext('context') + store.addunit(unit) + assert b'context' in store.__bytes__() + def test_output(self): for posource in posources: print("PO source file")
mit-ll-responsible-ai__hydra-zen-615
Bump actions/upload-artifact from 3 to 4 Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3 to 4. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/actions/upload-artifact/releases">actions/upload-artifact's releases</a>.</em></p> <blockquote> <h2>v4.0.0</h2> <h2>What's Changed</h2> <p>The release of upload-artifact@v4 and download-artifact@v4 are major changes to the backend architecture of Artifacts. They have numerous performance and behavioral improvements.</p> <p>For more information, see the <a href="https://github.com/actions/toolkit/tree/main/packages/artifact"><code>@​actions/artifact</code></a> documentation.</p> <h2>New Contributors</h2> <ul> <li><a href="https://github.com/vmjoseph"><code>@​vmjoseph</code></a> made their first contribution in <a href="https://redirect.github.com/actions/upload-artifact/pull/464">actions/upload-artifact#464</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/actions/upload-artifact/compare/v3...v4.0.0">https://github.com/actions/upload-artifact/compare/v3...v4.0.0</a></p> <h2>v3.1.3</h2> <h2>What's Changed</h2> <ul> <li>chore(github): remove trailing whitespaces by <a href="https://github.com/ljmf00"><code>@​ljmf00</code></a> in <a href="https://redirect.github.com/actions/upload-artifact/pull/313">actions/upload-artifact#313</a></li> <li>Bump <code>@​actions/artifact</code> version to v1.1.2 by <a href="https://github.com/bethanyj28"><code>@​bethanyj28</code></a> in <a href="https://redirect.github.com/actions/upload-artifact/pull/436">actions/upload-artifact#436</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/actions/upload-artifact/compare/v3...v3.1.3">https://github.com/actions/upload-artifact/compare/v3...v3.1.3</a></p> <h2>v3.1.2</h2> <ul> <li>Update all <code>@actions/*</code> NPM packages to their latest versions- <a href="https://redirect.github.com/actions/upload-artifact/issues/374">#374</a></li> <li>Update all dev dependencies to their most recent versions - <a href="https://redirect.github.com/actions/upload-artifact/issues/375">#375</a></li> </ul> <h2>v3.1.1</h2> <ul> <li>Update actions/core package to latest version to remove <code>set-output</code> deprecation warning <a href="https://redirect.github.com/actions/upload-artifact/issues/351">#351</a></li> </ul> <h2>v3.1.0</h2> <h2>What's Changed</h2> <ul> <li>Bump <code>@​actions/artifact</code> to v1.1.0 (<a href="https://redirect.github.com/actions/upload-artifact/pull/327">actions/upload-artifact#327</a>) <ul> <li>Adds checksum headers on artifact upload (<a href="https://redirect.github.com/actions/toolkit/pull/1095">actions/toolkit#1095</a>) (<a href="https://redirect.github.com/actions/toolkit/pull/1063">actions/toolkit#1063</a>)</li> </ul> </li> </ul> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/actions/upload-artifact/commit/c7d193f32edcb7bfad88892161225aeda64e9392"><code>c7d193f</code></a> Merge pull request <a href="https://redirect.github.com/actions/upload-artifact/issues/466">#466</a> from actions/v4-beta</li> <li><a href="https://github.com/actions/upload-artifact/commit/13131bb095770b4070a7477c3cd2d96e1c16d9f4"><code>13131bb</code></a> licensed cache</li> <li><a href="https://github.com/actions/upload-artifact/commit/4a6c273b9834f66a1d05c170dc3f80f9cdb9def1"><code>4a6c273</code></a> Merge branch 'main' into v4-beta</li> <li><a 
href="https://github.com/actions/upload-artifact/commit/f391bb91a3d3118aeca171c365bb319ece276b37"><code>f391bb9</code></a> Merge pull request <a href="https://redirect.github.com/actions/upload-artifact/issues/465">#465</a> from actions/robherley/v4-documentation</li> <li><a href="https://github.com/actions/upload-artifact/commit/9653d03c4b74c32144e02dae644fea70e079d4b3"><code>9653d03</code></a> Apply suggestions from code review</li> <li><a href="https://github.com/actions/upload-artifact/commit/875b63076402f25ef9d52c294c86ba4f97810575"><code>875b630</code></a> add limitations section</li> <li><a href="https://github.com/actions/upload-artifact/commit/ecb21463e93740a6be75c3116242169bfdbcb15a"><code>ecb2146</code></a> add compression example</li> <li><a href="https://github.com/actions/upload-artifact/commit/5e7604f84a055838f64ed68bb9904751523081ae"><code>5e7604f</code></a> trim some repeated info</li> <li><a href="https://github.com/actions/upload-artifact/commit/d6437d07581fe318a364512e6cf6b1dca6b4f92c"><code>d6437d0</code></a> naming</li> <li><a href="https://github.com/actions/upload-artifact/commit/1b561557037b4957d7d184e9aac02bec86c771eb"><code>1b56155</code></a> s/v4-beta/v4/g</li> <li>Additional commits viewable in <a href="https://github.com/actions/upload-artifact/compare/v3...v4">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=actions/upload-artifact&package-manager=github_actions&previous-version=3&new-version=4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. [//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) </details>
[ { "content": "# Copyright (c) 2023 Massachusetts Institute of Technology\n# SPDX-License-Identifier: MIT\n# pyright: strict\nfrom dataclasses import MISSING\nfrom functools import partial\nfrom typing import TYPE_CHECKING, Any, Type, Union\n\nfrom typing_extensions import TypeGuard\n\nfrom hydra_zen.funcs import get_obj, zen_processing\nfrom hydra_zen.structured_configs._utils import safe_name\nfrom hydra_zen.typing import Builds, Just, PartialBuilds\nfrom hydra_zen.typing._implementations import DataClass_, HasTarget\n\nfrom ._globals import (\n JUST_FIELD_NAME,\n PARTIAL_FIELD_NAME,\n TARGET_FIELD_NAME,\n ZEN_PARTIAL_FIELD_NAME,\n ZEN_PROCESSING_LOCATION,\n ZEN_TARGET_FIELD_NAME,\n)\n\n__all__ = [\"is_partial_builds\", \"uses_zen_processing\", \"is_dataclass\"]\n\n# We need to check if things are Builds, Just, PartialBuilds to a higher\n# fidelity than is provided by `isinstance(..., <Protocol>)`. I.e. we want to\n# check that the desired attributes *and* that their values match those of the\n# protocols. Failing to heed this would, for example, lead to any `Builds` that\n# happens to have a `path` attribute to be treated as `Just` in `get_target`.\n#\n# The following functions perform these desired checks. Note that they do not\n# require that the provided object be a dataclass; this enables compatibility\n# with omegaconf containers.\n#\n# These are not part of the public API for now, but they may be in the future.\n\n\ndef safe_getattr(obj: Any, field: str, *default: Any) -> Any:\n # We must access slotted class-attributes from a dataclass type\n # via its `__dataclass_fields__`. Otherwise we will get a member\n # descriptor\n\n assert len(default) < 2\n if (\n hasattr(obj, \"__slots__\")\n and isinstance(obj, type)\n and is_dataclass(obj)\n and field in obj.__slots__ # type: ignore\n ):\n try:\n _field = obj.__dataclass_fields__[field]\n if _field.default_factory is not MISSING or _field.default is MISSING:\n raise AttributeError\n\n return _field.default\n\n except (KeyError, AttributeError):\n if default:\n return default[0]\n\n raise AttributeError(\n f\"type object '{safe_name(obj)}' has no attribute '{field}'\"\n )\n\n return getattr(obj, field, *default)\n\n\ndef _get_target(x: HasTarget) -> Any:\n return safe_getattr(x, TARGET_FIELD_NAME)\n\n\ndef is_builds(x: Any) -> TypeGuard[Builds[Any]]:\n return hasattr(x, TARGET_FIELD_NAME)\n\n\ndef is_just(x: Any) -> TypeGuard[Just[Any]]:\n if is_builds(x) and hasattr(x, JUST_FIELD_NAME):\n attr = _get_target(x)\n if attr == _get_target(Just) or attr is get_obj:\n return True\n else:\n # ensures we convert this branch in tests\n return False\n return False\n\n\nif TYPE_CHECKING: # pragma: no cover\n\n def is_dataclass(obj: Any) -> TypeGuard[Union[DataClass_, Type[DataClass_]]]:\n ...\n\nelse:\n from dataclasses import is_dataclass\n\n\ndef is_old_partial_builds(x: Any) -> bool: # pragma: no cover\n # We don't care about coverage here.\n # This will only be used in `get_target` and we'll be sure to cover that branch\n if is_builds(x) and hasattr(x, \"_partial_target_\"):\n attr = _get_target(x)\n if (attr == \"hydra_zen.funcs.partial\" or attr is partial) and is_just(\n safe_getattr(x, \"_partial_target_\")\n ):\n return True\n else:\n # ensures we cover this branch in tests\n return False\n return False\n\n\ndef uses_zen_processing(x: Any) -> TypeGuard[Builds[Any]]:\n \"\"\"Returns `True` if the input is a targeted structured config that relies on\n zen-processing features during its instantiation process. 
See notes for more details\n\n Parameters\n ----------\n x : Any\n\n Returns\n -------\n uses_zen : bool\n\n Notes\n -----\n In order to support zen :ref:`meta-fields <meta-field>` and\n :ref:`zen wrappers <zen-wrapper>`, hydra-zen redirects Hydra to an intermediary\n function – `hydra_zen.funcs.zen_processing` – during instantiation; i.e.\n `zen_processing` is made to be the `_target_` of the config and `_zen_target`\n indicates the object that is ultimately being configured for instantiation.\n\n Examples\n --------\n >>> from hydra_zen import builds, uses_zen_processing, to_yaml\n >>> ConfA = builds(dict, a=1)\n >>> ConfB = builds(dict, a=1, zen_partial=True)\n >>> ConfC = builds(dict, a=1, zen_wrappers=lambda x: x)\n >>> ConfD = builds(dict, a=1, zen_meta=dict(hidden_field=None))\n >>> ConfE = builds(dict, a=1, zen_meta=dict(hidden_field=None), zen_partial=True)\n >>> uses_zen_processing(ConfA)\n False\n >>> uses_zen_processing(ConfB)\n False\n >>> uses_zen_processing(ConfC)\n True\n >>> uses_zen_processing(ConfD)\n True\n >>> uses_zen_processing(ConfE)\n True\n\n Demonstrating the indirection that is used to facilitate zen-processing features.\n\n >>> print(to_yaml(ConfE))\n _target_: hydra_zen.funcs.zen_processing\n _zen_target: builtins.dict\n _zen_partial: true\n _zen_exclude:\n - hidden_field\n a: 1\n hidden_field: null\n \"\"\"\n if not is_builds(x) or not hasattr(x, ZEN_TARGET_FIELD_NAME):\n return False\n\n attr = _get_target(x)\n if attr != ZEN_PROCESSING_LOCATION and attr is not zen_processing:\n return False\n return True\n\n\ndef is_partial_builds(x: Any) -> TypeGuard[PartialBuilds[Any]]:\n \"\"\"\n Returns `True` if the input is a targeted structured config that entails partial\n instantiation, either via `_partial_=True` [1]_ or via `_zen_partial=True`.\n\n Parameters\n ----------\n x : Any\n\n Returns\n -------\n is_partial_config : bool\n\n References\n ----------\n .. [1] https://hydra.cc/docs/advanced/instantiate_objects/overview/#partial-instantiation\n\n See Also\n --------\n uses_zen_processing\n\n Examples\n --------\n >>> from hydra_zen import is_partial_builds\n\n An example involving a basic structured config\n\n >>> from dataclasses import dataclass\n >>> @dataclass\n ... class A:\n ... _target_ : str = 'builtins.int'\n ... _partial_ : bool = True\n >>> is_partial_builds(A)\n True\n >>> is_partial_builds(A(_partial_=False))\n False\n\n An example of a config that leverages partial instantiation via zen-processing\n\n >>> from hydra_zen import builds, uses_zen_processing, instantiate\n >>> Conf = builds(int, 0, zen_partial=True, zen_meta=dict(a=1))\n >>> hasattr(Conf, \"_partial_\")\n False\n >>> uses_zen_processing(Conf)\n True\n >>> is_partial_builds(Conf)\n True\n >>> instantiate(Conf)\n functools.partial(<class 'int'>, 0)\n \"\"\"\n if is_builds(x):\n return (\n # check if partial'd config via Hydra\n safe_getattr(x, PARTIAL_FIELD_NAME, False)\n is True\n ) or (\n # check if partial'd config via `zen_processing`\n uses_zen_processing(x)\n and (safe_getattr(x, ZEN_PARTIAL_FIELD_NAME, False) is True)\n )\n return False\n", "path": "src/hydra_zen/structured_configs/_type_guards.py" } ]
[ { "content": "# Copyright (c) 2023 Massachusetts Institute of Technology\n# SPDX-License-Identifier: MIT\n# pyright: strict\nfrom dataclasses import MISSING\nfrom functools import partial\nfrom typing import TYPE_CHECKING, Any, Type, Union\n\nfrom typing_extensions import TypeGuard\n\nfrom hydra_zen.funcs import get_obj, zen_processing\nfrom hydra_zen.structured_configs._utils import safe_name\nfrom hydra_zen.typing import Builds, Just, PartialBuilds\nfrom hydra_zen.typing._implementations import DataClass_, HasTarget\n\nfrom ._globals import (\n JUST_FIELD_NAME,\n PARTIAL_FIELD_NAME,\n TARGET_FIELD_NAME,\n ZEN_PARTIAL_FIELD_NAME,\n ZEN_PROCESSING_LOCATION,\n ZEN_TARGET_FIELD_NAME,\n)\n\n__all__ = [\"is_partial_builds\", \"uses_zen_processing\", \"is_dataclass\"]\n\n# We need to check if things are Builds, Just, PartialBuilds to a higher\n# fidelity than is provided by `isinstance(..., <Protocol>)`. I.e. we want to\n# check that the desired attributes *and* that their values match those of the\n# protocols. Failing to heed this would, for example, lead to any `Builds` that\n# happens to have a `path` attribute to be treated as `Just` in `get_target`.\n#\n# The following functions perform these desired checks. Note that they do not\n# require that the provided object be a dataclass; this enables compatibility\n# with omegaconf containers.\n#\n# These are not part of the public API for now, but they may be in the future.\n\n\ndef safe_getattr(obj: Any, field: str, *default: Any) -> Any:\n # We must access slotted class-attributes from a dataclass type\n # via its `__dataclass_fields__`. Otherwise we will get a member\n # descriptor\n\n assert len(default) < 2\n if (\n hasattr(obj, \"__slots__\")\n and isinstance(obj, type)\n and is_dataclass(obj)\n and field in obj.__slots__ # type: ignore\n ):\n try:\n _field = obj.__dataclass_fields__[field]\n if _field.default_factory is not MISSING or _field.default is MISSING:\n raise AttributeError\n\n return _field.default\n\n except (KeyError, AttributeError):\n if default:\n return default[0]\n\n raise AttributeError(\n f\"type object '{safe_name(obj)}' has no attribute '{field}'\"\n )\n\n return getattr(obj, field, *default)\n\n\ndef _get_target(x: HasTarget) -> Any:\n return safe_getattr(x, TARGET_FIELD_NAME)\n\n\ndef is_builds(x: Any) -> TypeGuard[Builds[Any]]:\n return hasattr(x, TARGET_FIELD_NAME)\n\n\ndef is_just(x: Any) -> TypeGuard[Just[Any]]:\n if is_builds(x) and hasattr(x, JUST_FIELD_NAME):\n attr = _get_target(x)\n if attr == _get_target(Just) or attr is get_obj:\n return True\n else:\n # ensures we convert this branch in tests\n return False\n return False\n\n\nif TYPE_CHECKING: # pragma: no cover\n\n def is_dataclass(obj: Any) -> TypeGuard[Union[DataClass_, Type[DataClass_]]]:\n ...\n\nelse:\n from dataclasses import is_dataclass\n\n\ndef is_old_partial_builds(x: Any) -> bool: # pragma: no cover\n # We don't care about coverage here.\n # This will only be used in `get_target` and we'll be sure to cover that branch\n if is_builds(x) and hasattr(x, \"_partial_target_\"):\n attr = _get_target(x)\n if (attr == \"hydra_zen.funcs.partial\" or attr is partial) and is_just(\n safe_getattr(x, \"_partial_target_\")\n ):\n return True\n else: # pragma: no cover\n return False\n return False\n\n\ndef uses_zen_processing(x: Any) -> TypeGuard[Builds[Any]]:\n \"\"\"Returns `True` if the input is a targeted structured config that relies on\n zen-processing features during its instantiation process. 
See notes for more details\n\n Parameters\n ----------\n x : Any\n\n Returns\n -------\n uses_zen : bool\n\n Notes\n -----\n In order to support zen :ref:`meta-fields <meta-field>` and\n :ref:`zen wrappers <zen-wrapper>`, hydra-zen redirects Hydra to an intermediary\n function – `hydra_zen.funcs.zen_processing` – during instantiation; i.e.\n `zen_processing` is made to be the `_target_` of the config and `_zen_target`\n indicates the object that is ultimately being configured for instantiation.\n\n Examples\n --------\n >>> from hydra_zen import builds, uses_zen_processing, to_yaml\n >>> ConfA = builds(dict, a=1)\n >>> ConfB = builds(dict, a=1, zen_partial=True)\n >>> ConfC = builds(dict, a=1, zen_wrappers=lambda x: x)\n >>> ConfD = builds(dict, a=1, zen_meta=dict(hidden_field=None))\n >>> ConfE = builds(dict, a=1, zen_meta=dict(hidden_field=None), zen_partial=True)\n >>> uses_zen_processing(ConfA)\n False\n >>> uses_zen_processing(ConfB)\n False\n >>> uses_zen_processing(ConfC)\n True\n >>> uses_zen_processing(ConfD)\n True\n >>> uses_zen_processing(ConfE)\n True\n\n Demonstrating the indirection that is used to facilitate zen-processing features.\n\n >>> print(to_yaml(ConfE))\n _target_: hydra_zen.funcs.zen_processing\n _zen_target: builtins.dict\n _zen_partial: true\n _zen_exclude:\n - hidden_field\n a: 1\n hidden_field: null\n \"\"\"\n if not is_builds(x) or not hasattr(x, ZEN_TARGET_FIELD_NAME):\n return False\n\n attr = _get_target(x)\n if attr != ZEN_PROCESSING_LOCATION and attr is not zen_processing:\n return False\n return True\n\n\ndef is_partial_builds(x: Any) -> TypeGuard[PartialBuilds[Any]]:\n \"\"\"\n Returns `True` if the input is a targeted structured config that entails partial\n instantiation, either via `_partial_=True` [1]_ or via `_zen_partial=True`.\n\n Parameters\n ----------\n x : Any\n\n Returns\n -------\n is_partial_config : bool\n\n References\n ----------\n .. [1] https://hydra.cc/docs/advanced/instantiate_objects/overview/#partial-instantiation\n\n See Also\n --------\n uses_zen_processing\n\n Examples\n --------\n >>> from hydra_zen import is_partial_builds\n\n An example involving a basic structured config\n\n >>> from dataclasses import dataclass\n >>> @dataclass\n ... class A:\n ... _target_ : str = 'builtins.int'\n ... _partial_ : bool = True\n >>> is_partial_builds(A)\n True\n >>> is_partial_builds(A(_partial_=False))\n False\n\n An example of a config that leverages partial instantiation via zen-processing\n\n >>> from hydra_zen import builds, uses_zen_processing, instantiate\n >>> Conf = builds(int, 0, zen_partial=True, zen_meta=dict(a=1))\n >>> hasattr(Conf, \"_partial_\")\n False\n >>> uses_zen_processing(Conf)\n True\n >>> is_partial_builds(Conf)\n True\n >>> instantiate(Conf)\n functools.partial(<class 'int'>, 0)\n \"\"\"\n if is_builds(x):\n return (\n # check if partial'd config via Hydra\n safe_getattr(x, PARTIAL_FIELD_NAME, False)\n is True\n ) or (\n # check if partial'd config via `zen_processing`\n uses_zen_processing(x)\n and (safe_getattr(x, ZEN_PARTIAL_FIELD_NAME, False) is True)\n )\n return False\n", "path": "src/hydra_zen/structured_configs/_type_guards.py" } ]
diff --git a/.github/workflows/pypi_publish.yml b/.github/workflows/pypi_publish.yml index 578201ed8..2c220e552 100644 --- a/.github/workflows/pypi_publish.yml +++ b/.github/workflows/pypi_publish.yml @@ -22,7 +22,7 @@ jobs: pip install build python -m build - name: Upload artifacts - uses: actions/upload-artifact@v3 + uses: actions/upload-artifact@v4 with: name: dist path: dist @@ -36,7 +36,7 @@ jobs: id-token: write # IMPORTANT: this permission is mandatory for trusted publishing steps: - name: Download artifacts - uses: actions/download-artifact@v3 + uses: actions/download-artifact@v4 with: name: dist path: dist diff --git a/src/hydra_zen/structured_configs/_type_guards.py b/src/hydra_zen/structured_configs/_type_guards.py index b7adb4f9c..6ae1a289f 100644 --- a/src/hydra_zen/structured_configs/_type_guards.py +++ b/src/hydra_zen/structured_configs/_type_guards.py @@ -103,8 +103,7 @@ def is_old_partial_builds(x: Any) -> bool: # pragma: no cover safe_getattr(x, "_partial_target_") ): return True - else: - # ensures we cover this branch in tests + else: # pragma: no cover return False return False
kivy__python-for-android-2399
Pymunk,kivy apk crashing on Android 5.1 <!-- The issue tracker is a tool to address bugs NOT a support platform. Please use the Discord community or Stack Overflow for support questions, more information at https://github.com/kivy/python-for-android#support --> ### Checklist - [ ] the issue is indeed a bug and not a support request - [ ] issue doesn't already exist: https://github.com/kivy/python-for-android/issues - [ ] I have a short, runnable example that reproduces the issue - [ ] I reproduced the problem with the latest development version (`p4a.branch = develop`) - [ ] I used the grave accent (aka backticks) to format code or logs when appropriated ### Versions - Python:3.8.1 - OS:Android 5.1 - Kivy:2.0.2 - Cython: - OpenJDK:8 ### Description pymunk,kivy apk crashing on Android 5.1 // REPLACE ME: What are you trying to get done, what has happened, what went wrong, and what did you expect? ### buildozer.spec [app] # (str) Title of your application title = Tone # (str) Package name package.name = tone # (str) Package domain (needed for android/ios packaging) package.domain = org.test # (str) Source code where the main.py live source.dir = . # (list) Source files to include (let empty to include all the files) source.include_exts = py,png,jpg,kv,atlas # (list) List of inclusions using pattern matching #source.include_patterns = assets/*,images/*.png # (list) Source files to exclude (let empty to not exclude anything) #source.exclude_exts = spec # (list) List of directory to exclude (let empty to not exclude anything) source.exclude_dirs = tests, bin # (list) List of exclusions using pattern matching #source.exclude_patterns = license,images/*/*.jpg # (str) Application versioning (method 1) version = 0.1 # (str) Application versioning (method 2) # version.regex = __version__ = ['"](.*)['"] # version.filename = %(source.dir)s/main.py # (list) Application requirements # comma separated e.g. requirements = sqlite3,kivy requirements = python3,kivy==2.0.0,plyer,android,pyjnius,pymunk,cffi,pycparser,setuptools # (str) Custom source folders for requirements # Sets custom source for any requirements with recipes # requirements.source.kivy = ../../kivy # (list) Garden requirements #garden_requirements = # (str) Presplash of the application #presplash.filename = %(source.dir)s/data/presplash.png # (str) Icon of the application #icon.filename = %(source.dir)s/data/icon.png # (str) Supported orientation (one of landscape, sensorLandscape, portrait or all) orientation = portrait # (list) List of service to declare #services = NAME:ENTRYPOINT_TO_PY,NAME2:ENTRYPOINT2_TO_PY # # OSX Specific # # # author = © Copyright Info # change the major version of python used by the app osx.python_version = 3 # Kivy version to use osx.kivy_version = 1.9.1 # # Android specific # # (bool) Indicate if the application should be fullscreen or not fullscreen = 0 # (string) Presplash background color (for new android toolchain) # Supported formats are: #RRGGBB #AARRGGBB or one of the following names: # red, blue, green, black, white, gray, cyan, magenta, yellow, lightgray, # darkgray, grey, lightgrey, darkgrey, aqua, fuchsia, lime, maroon, navy, # olive, purple, silver, teal. #android.presplash_color = #FFFFFF # (list) Permissions android.permissions = INTERNET # (int) Target Android API, should be as high as possible. #android.api = 27 # (int) Minimum API your APK will support. 
#android.minapi = 21 # (int) Android SDK version to use #android.sdk = 20 # (str) Android NDK version to use #android.ndk = 19b # (int) Android NDK API to use. This is the minimum API your app will support, it should usually match android.minapi. #android.ndk_api = 21 # (bool) Use --private data storage (True) or --dir public storage (False) #android.private_storage = True # (str) Android NDK directory (if empty, it will be automatically downloaded.) #android.ndk_path = # (str) Android SDK directory (if empty, it will be automatically downloaded.) #android.sdk_path = # (str) ANT directory (if empty, it will be automatically downloaded.) #android.ant_path = # (bool) If True, then skip trying to update the Android sdk # This can be useful to avoid excess Internet downloads or save time # when an update is due and you just want to test/build your package # android.skip_update = False # (bool) If True, then automatically accept SDK license # agreements. This is intended for automation only. If set to False, # the default, you will be shown the license when first running # buildozer. # android.accept_sdk_license = False # (str) Android entry point, default is ok for Kivy-based app #android.entrypoint = org.renpy.android.PythonActivity # (str) Android app theme, default is ok for Kivy-based app # android.apptheme = "@android:style/Theme.NoTitleBar" # (list) Pattern to whitelist for the whole project #android.whitelist = # (str) Path to a custom whitelist file #android.whitelist_src = # (str) Path to a custom blacklist file #android.blacklist_src = # (list) List of Java .jar files to add to the libs so that pyjnius can access # their classes. Don't add jars that you do not need, since extra jars can slow # down the build process. Allows wildcards matching, for example: # OUYA-ODK/libs/*.jar #android.add_jars = foo.jar,bar.jar,path/to/more/*.jar # (list) List of Java files to add to the android project (can be java or a # directory containing the files) #android.add_src = # (list) Android AAR archives to add (currently works only with sdl2_gradle # bootstrap) #android.add_aars = # (list) Gradle dependencies to add (currently works only with sdl2_gradle # bootstrap) #android.gradle_dependencies = # (list) add java compile options # this can for example be necessary when importing certain java libraries using the 'android.gradle_dependencies' option # see https://developer.android.com/studio/write/java8-support for further information # android.add_compile_options = "sourceCompatibility = 1.8", "targetCompatibility = 1.8" # (list) Gradle repositories to add {can be necessary for some android.gradle_dependencies} # please enclose in double quotes # e.g. android.gradle_repositories = "maven { url 'https://kotlin.bintray.com/ktor' }" #android.add_gradle_repositories = # (list) packaging options to add # see https://google.github.io/android-gradle-dsl/current/com.android.build.gradle.internal.dsl.PackagingOptions.html # can be necessary to solve conflicts in gradle_dependencies # please enclose in double quotes # e.g. android.add_packaging_options = "exclude 'META-INF/common.kotlin_module'", "exclude 'META-INF/*.kotlin_module'" #android.add_gradle_repositories = # (list) Java classes to add as activities to the manifest. #android.add_activities = com.example.ExampleActivity # (str) OUYA Console category. Should be one of GAME or APP # If you leave this blank, OUYA support will not be enabled #android.ouya.category = GAME # (str) Filename of OUYA Console icon. It must be a 732x412 png image. 
#android.ouya.icon.filename = %(source.dir)s/data/ouya_icon.png # (str) XML file to include as an intent filters in <activity> tag #android.manifest.intent_filters = # (str) launchMode to set for the main activity #android.manifest.launch_mode = standard # (list) Android additional libraries to copy into libs/armeabi #android.add_libs_armeabi = libs/android/*.so #android.add_libs_armeabi_v7a = libs/android-v7/*.so #android.add_libs_arm64_v8a = libs/android-v8/*.so #android.add_libs_x86 = libs/android-x86/*.so #android.add_libs_mips = libs/android-mips/*.so # (bool) Indicate whether the screen should stay on # Don't forget to add the WAKE_LOCK permission if you set this to True #android.wakelock = False # (list) Android application meta-data to set (key=value format) #android.meta_data = # (list) Android library project to add (will be added in the # project.properties automatically.) #android.library_references = # (list) Android shared libraries which will be added to AndroidManifest.xml using <uses-library> tag #android.uses_library = # (str) Android logcat filters to use #android.logcat_filters = *:S python:D # (bool) Copy library instead of making a libpymodules.so #android.copy_libs = 1 # (str) The Android arch to build for, choices: armeabi-v7a, arm64-v8a, x86, x86_64 android.arch = armeabi-v7a # (int) overrides automatic versionCode computation (used in build.gradle) # this is not the same as app version and should only be edited if you know what you're doing # android.numeric_version = 1 # # Python for android (p4a) specific # # (str) python-for-android fork to use, defaults to upstream (kivy) #p4a.fork = kivy # (str) python-for-android branch to use, defaults to master #p4a.branch = master # (str) python-for-android git clone directory (if empty, it will be automatically cloned from github) #p4a.source_dir = # (str) The directory in which python-for-android should look for your own build recipes (if any) #p4a.local_recipes = # (str) Filename to the hook for p4a #p4a.hook = # (str) Bootstrap to use for android builds # p4a.bootstrap = sdl2 # (int) port number to specify an explicit --port= p4a argument (eg for bootstrap flask) #p4a.port = # # iOS specific # # (str) Path to a custom kivy-ios folder #ios.kivy_ios_dir = ../kivy-ios # Alternately, specify the URL and branch of a git checkout: ios.kivy_ios_url = https://github.com/kivy/kivy-ios ios.kivy_ios_branch = master # Another platform dependency: ios-deploy # Uncomment to use a custom checkout #ios.ios_deploy_dir = ../ios_deploy # Or specify URL and branch ios.ios_deploy_url = https://github.com/phonegap/ios-deploy ios.ios_deploy_branch = 1.7.0 # (str) Name of the certificate to use for signing the debug version # Get a list of available identities: buildozer ios list_identities #ios.codesign.debug = "iPhone Developer: <lastname> <firstname> (<hexstring>)" # (str) Name of the certificate to use for signing the release version #ios.codesign.release = %(ios.codesign.debug)s [buildozer] # (int) Log level (0 = error only, 1 = info, 2 = debug (with command output)) log_level = 2 # (int) Display warning if buildozer is run as root (0 = False, 1 = True) warn_on_root = 1 # (str) Path to build artifact storage, absolute or relative to spec file # build_dir = ./.buildozer # (str) Path to build output (i.e. .apk, .ipa) storage # bin_dir = ./bin # ----------------------------------------------------------------------------- # List as sections # # You can define all the "list" as [section:key]. 
# Each line will be considered as a option to the list. # Let's take [app] / source.exclude_patterns. # Instead of doing: # #[app] #source.exclude_patterns = license,data/audio/*.wav,data/images/original/* # # This can be translated into: # #[app:source.exclude_patterns] #license #data/audio/*.wav #data/images/original/* # # ----------------------------------------------------------------------------- # Profiles # # You can extend section / key with a profile # For example, you want to deploy a demo version of your application without # HD content. You could first change the title to add "(demo)" in the name # and extend the excluded directories to remove the HD content. # #[app@demo] #title = My Application (demo) # #[app:source.exclude_patterns@demo] #images/hd/* # # Then, invoke the command line with the "demo" profile: # #buildozer --profile demo android debug Command: ```sh // REPLACE ME: buildozer command ran? e.g. buildozer android debug // Keep the triple grave accent (aka backquote/backtick) to have the code formatted ``` Spec file: ``` // REPLACE ME: Paste your buildozer.spec file here ``` ### Logs I/python (17703): [INFO ] [GL ] Backend used <sdl2> I/python (17703): [INFO ] [GL ] OpenGL version <b'OpenGL ES 2.0'> I/python (17703): [INFO ] [GL ] OpenGL vendor <b'ARM'> I/python (17703): [INFO ] [GL ] OpenGL renderer <b'Mali-400 MP'> I/python (17703): [INFO ] [GL ] OpenGL parsed version: 2, 0 I/python (17703): [INFO ] [GL ] Texture max size <4096> I/python (17703): [INFO ] [GL ] Texture max units <8> I/python (17703): [INFO ] [Window ] auto add sdl2 input provider I/python (17703): [INFO ] [Window ] virtual keyboard not allowed, single mode, not docked I/python (17703): [INFO ] [Text ] Provider: sdl2 I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBNewForExtents' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBNewForCircle' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBIntersects' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBContainsBB' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBContainsVect' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBMerge' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: 
UserWarning: Global variable '_cpBBExpand' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBCenter' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBArea' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBMergedArea' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBSegmentQuery' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBIntersectsSegment' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): /home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/cparser.py:162: UserWarning: Global variable '_cpBBClampVect' in cdef(): for consistency with C it should have a storage class specifier (usually 'extern') I/python (17703): Loading chipmunk for Linux (32bit) [/data/data/org.test.tone/files/app/_python_bundle/site-packages/pymunk/libchipmunk.so] I/python (17703): Failed to load Pymunk library. I/python (17703): This error usually means that you don't have a compiled version of Chipmunk in I/python (17703): the correct spot where Pymunk can find it. If you tried to run Pymunk without I/python (17703): installing it properly this can be the result. I/python (17703): The good news is that it is usually enough (at least on *nix and OS X) to I/python (17703): run the build command: I/python (17703): You compile Chipmunk with I/python (17703): > python setup.py build_ext --inplace I/python (17703): and then verify with I/python (17703): > python -m pymunk.test I/python (17703): (for complete instructions please see the readme file) I/python (17703): Another cause of this problem could be if you didnt included the Chipmunk I/python (17703): library when using a freeze tool such as Py2exe or PyInstaller. Please see the I/python (17703): examples for how to include the library file when freezing a binary. I/python (17703): If it still doesnt work, please report as a bug on the issue tracker at I/python (17703): https://github.com/viblo/pymunk/issues I/python (17703): Remember to include information about your OS, which version of python you use I/python (17703): and the version of pymunk you tried to run. A description of what you did to I/python (17703): trigger the error is also good. Please include the exception traceback if any I/python (17703): (usually found below this message). 
I/python (17703): Traceback (most recent call last): I/python (17703): File "/home/sahil/app_test_kivy/.buildozer/android/app/main.py", line 33, in <module> I/python (17703): File "/home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/pymunk/__init__.py", line 58, in <module> I/python (17703): File "/home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/pymunk/_chipmunk_cffi.py", line 3, in <module> I/python (17703): File "/home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/pymunk/_chipmunk_cffi_abi.py", line 1475, in <module> I/python (17703): File "/home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/pymunk/_libload.py", line 50, in load_library I/python (17703): File "/home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/api.py", line 146, in dlopen I/python (17703): File "/home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/api.py", line 828, in _make_ffi_library I/python (17703): File "/home/sahil/app_test_kivy/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/tone/cffi/api.py", line 823, in _load_backend_lib I/python (17703): OSError: cannot load library '/data/data/org.test.tone/files/app/_python_bundle/site-packages/pymunk/libchipmunk.so': dlopen failed: cannot locate symbol "__sF" referenced by "libchipmunk.so".... Additionally, ctypes.util.find_library() did not manage to locate a library called '/data/data/org.test.tone/files/app/_python_bundle/site-packages/pymunk/libchipmunk.so' I/python (17703): Python for android ended. ``` // REPLACE ME: Paste the build output containing the error // Keep the triple grave accent (a.k.a. backquote/backtick) to have the code formatted ```
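For context, the crash above is a dlopen failure on the bundled `libchipmunk.so` (`cannot locate symbol "__sF"`). The recipe change recorded further below addresses it by linking `libm` explicitly in the pymunk recipe; a sketch of the relevant part of `get_recipe_env`, mirroring that fix (the extra `-lm` flag is reported to let the library load on older Android versions such as 5.1):

```python
def get_recipe_env(self, arch):
    env = super().get_recipe_env(arch)
    env["LDFLAGS"] += " -llog"  # used by Chipmunk's cpMessage
    env["LDFLAGS"] += " -lm"    # needed on older versions of Android
    return env
```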
[ { "content": "from pythonforandroid.recipe import CompiledComponentsPythonRecipe\n\n\nclass PymunkRecipe(CompiledComponentsPythonRecipe):\n name = \"pymunk\"\n version = \"6.0.0\"\n url = \"https://pypi.python.org/packages/source/p/pymunk/pymunk-{version}.zip\"\n depends = [\"cffi\", \"setuptools\"]\n call_hostpython_via_targetpython = False\n\n def get_recipe_env(self, arch):\n env = super().get_recipe_env(arch)\n env[\"LDFLAGS\"] += \" -llog\"\n return env\n\n\nrecipe = PymunkRecipe()\n", "path": "pythonforandroid/recipes/pymunk/__init__.py" } ]
[ { "content": "from pythonforandroid.recipe import CompiledComponentsPythonRecipe\n\n\nclass PymunkRecipe(CompiledComponentsPythonRecipe):\n name = \"pymunk\"\n version = \"6.0.0\"\n url = \"https://pypi.python.org/packages/source/p/pymunk/pymunk-{version}.zip\"\n depends = [\"cffi\", \"setuptools\"]\n call_hostpython_via_targetpython = False\n\n def get_recipe_env(self, arch):\n env = super().get_recipe_env(arch)\n env[\"LDFLAGS\"] += \" -llog\" # Used by Chipmunk cpMessage\n env[\"LDFLAGS\"] += \" -lm\" # For older versions of Android\n return env\n\n\nrecipe = PymunkRecipe()\n", "path": "pythonforandroid/recipes/pymunk/__init__.py" } ]
diff --git a/pythonforandroid/recipes/pymunk/__init__.py b/pythonforandroid/recipes/pymunk/__init__.py index bf7cb5541c..a982098f26 100644 --- a/pythonforandroid/recipes/pymunk/__init__.py +++ b/pythonforandroid/recipes/pymunk/__init__.py @@ -10,7 +10,8 @@ class PymunkRecipe(CompiledComponentsPythonRecipe): def get_recipe_env(self, arch): env = super().get_recipe_env(arch) - env["LDFLAGS"] += " -llog" + env["LDFLAGS"] += " -llog" # Used by Chipmunk cpMessage + env["LDFLAGS"] += " -lm" # For older versions of Android return env
hpcaitech__ColossalAI-2777
[tensor] fix some unittests [BUG]: Wrong import in `zero/sharded_optim/_utils.py` ### 🐛 Describe the bug In issue #2774, thanks to @malfet for pointing out that we should not use `torch._six` to import `inf` and should import `inf` from `torch` instead. However, there is a small mistake in PR #2775: it uses an invalid `torch.six` module to import `inf`. We should fix this typo. ### Environment _No response_
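A minimal sketch of the corrected import is shown below; the authoritative change is the one-line diff further down in this record, and this snippet only illustrates why `torch` (rather than `torch._six` or the non-existent `torch.six`) is the right place to import `inf` from. It assumes a PyTorch build that exposes `torch.inf`; the `math.inf` fallback for older builds is my assumption, not part of the ColossalAI patch.

```python
# Sketch only, not ColossalAI's exact patch (see the diff in this record).
# Assumes torch.inf is available; math.inf is the same float value and serves
# as a hypothetical fallback for builds without it.
try:
    from torch import inf
except ImportError:  # hypothetical fallback for old/missing torch
    from math import inf

norm_type = float(2)
print(norm_type == inf)          # False
print(inf == float("inf"))       # True; behaves exactly like float("inf") in comparisons
```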
[ { "content": "import math\nfrom typing import Optional\n\nimport torch\nimport torch.distributed as dist\nfrom torch.six import inf\nfrom torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors\n\nfrom colossalai.tensor import ColoParameter\nfrom colossalai.utils import is_model_parallel_parameter\n\n\ndef flatten(input_):\n return _flatten_dense_tensors(input_)\n\n\ndef unflatten(flat, tensors):\n return _unflatten_dense_tensors(flat, tensors)\n\n\ndef count_numel(tensor_list):\n res = 0\n for tensor in tensor_list:\n res += tensor.numel()\n return res\n\n\ndef calculate_padding(numel, unit_size):\n remainder = numel % unit_size\n return unit_size - remainder if remainder else remainder\n\n\ndef shuffle_by_round_robin(tensor_list, num_partitions):\n partitions = dict()\n\n for tensor_idx, tensor in enumerate(tensor_list):\n partition_to_go = tensor_idx % num_partitions\n if partition_to_go not in partitions:\n partitions[partition_to_go] = []\n partitions[partition_to_go].append(dict(tensor=tensor, index=tensor_idx))\n\n partitions_count = len(partitions)\n new_tensor_list = []\n tensor_index_mapping = dict()\n\n for partition_id in range(partitions_count):\n partition_tensors = partitions[partition_id]\n for item in partition_tensors:\n tensor_index_mapping[item['index']] = len(new_tensor_list)\n new_tensor_list.append(item['tensor'])\n\n return new_tensor_list, tensor_index_mapping\n\n\n# create a flat tensor aligned at the alignment boundary\ndef flatten_dense_tensors_with_padding(tensor_list, unit_size):\n num_elements = count_numel(tensor_list)\n padding = calculate_padding(num_elements, unit_size=unit_size)\n\n if padding > 0:\n pad_tensor = torch.zeros(padding, device=tensor_list[0].device, dtype=tensor_list[0].dtype)\n padded_tensor_list = tensor_list + [pad_tensor]\n else:\n padded_tensor_list = tensor_list\n\n return flatten(padded_tensor_list)\n\n\ndef is_nccl_aligned(tensor):\n return tensor.data_ptr() % 4 == 0\n\n\ndef get_grad_accumulate_object(tensor):\n \"\"\"\n Return the AccumulateGrad of the input tensor\n \"\"\"\n\n # grad_fn reference:\n # https://discuss.pytorch.org/t/in-the-grad-fn-i-find-a-next-functions-but-i-dont-understand-the-meaning-of-the-attribute/24463\n # expand_as reference: https://pytorch.org/docs/stable/generated/torch.Tensor.expand.html#torch.Tensor.expand\n #\n # `next_functions` will return the backward graph where\n # the first element is the AccumulateGrad of the leaf nodes.\n # we want to get the AccumulateGrad of the input tensor instead of the leaf\n # node in the whole computation graph.\n # Therefore, we call expand_as to create a dummy graph\n # where tensor_tmp and tensor indeed point to the same object.\n # You can check this by print(tensor.data_ptr() == tensor_tmp.data_ptr())\n tensor_tmp = tensor.expand_as(tensor)\n grad_acc_obj = tensor_tmp.grad_fn.next_functions[0][0]\n return grad_acc_obj\n\n\ndef split_half_float_double(tensor_list):\n dtypes = [\"torch.cuda.HalfTensor\", \"torch.cuda.FloatTensor\", \"torch.cuda.DoubleTensor\", \"torch.cuda.BFloat16Tensor\"]\n buckets = []\n for i, dtype in enumerate(dtypes):\n bucket = [t for t in tensor_list if t.type() == dtype]\n if bucket:\n buckets.append(bucket)\n return buckets\n\n\ndef reduce_tensor_dp_group(tensor: torch.Tensor,\n dtype: Optional[torch.dtype] = None,\n dst_local_rank: Optional[int] = None,\n dst_global_rank: Optional[int] = None,\n group: Optional[dist.ProcessGroup] = None):\n \"\"\"\n Reduce the tensor in the data parallel process group\n\n :param tensor: A 
tensor object to reduce/all-reduce\n :param dtype: The data type used in communication\n :param dst_rank: The source rank for reduce. If dst_rank is None,\n :param parallel_mode: Communication parallel mode\n all-reduce will be used instead of reduce. Default is None.\n\n :type tensor: torch.Tensor\n :type dtype: torch.dtype, optional\n :type dst_rank: int, optional\n :type pg: ProcessGroup, optional\n \"\"\"\n # use the original dtype\n if dtype is None:\n dtype = tensor.dtype\n\n # cast the data to specified dtype for reduce/all-reduce\n if tensor.dtype != dtype:\n tensor_to_reduce = tensor.to(dtype)\n else:\n tensor_to_reduce = tensor\n\n world_size = dist.get_world_size(group=group)\n tensor_to_reduce.div_(world_size)\n\n # if rank is None, all reduce will be used\n # else, reduce is used\n use_all_reduce = dst_local_rank is None\n\n if use_all_reduce:\n dist.all_reduce(tensor_to_reduce, group=group)\n else:\n dist.reduce(tensor=tensor_to_reduce, dst=dst_global_rank, group=group)\n\n # recover the original dtype\n if tensor.dtype != dtype and tensor is not tensor_to_reduce:\n local_rank = dist.get_rank(group=group)\n if use_all_reduce or dst_local_rank == local_rank:\n tensor.copy_(tensor_to_reduce)\n\n return tensor\n\n\ndef has_inf_or_nan(tensor):\n try:\n # if tensor is half, the .float() incurs an additional deep copy, but it's necessary if\n # Pytorch's .sum() creates a one-element tensor of the same type as tensor\n # (which is true for some recent version of pytorch).\n tensor_sum = float(tensor.float().sum())\n # More efficient version that can be used if .sum() returns a Python scalar\n # tensor_sum = float(tensor.sum())\n except RuntimeError as instance:\n # We want to check if inst is actually an overflow exception.\n # RuntimeError could come from a different error.\n # If so, we still want the exception to propagate.\n if \"value cannot be converted\" not in instance.args[0]:\n raise\n return True\n else:\n if tensor_sum == float('inf') or tensor_sum == -float('inf') or tensor_sum != tensor_sum:\n return True\n return False\n\n\ndef release_param_grad(tensor_list):\n for tensor in tensor_list:\n tensor.grad = None\n\n\ndef calculate_global_norm_from_list(norm_list):\n \"\"\" Compute total from a list of norms\n \"\"\"\n total_norm = 0.0\n for norm in norm_list:\n total_norm += norm**2.0\n return math.sqrt(total_norm)\n\n\ndef compute_norm(gradients, params, dp_group, mp_group, norm_type=2):\n \"\"\"Clips gradient norm of an iterable of parameters.\n This is adapted from torch.nn.utils.clip_grad.clip_grad_norm_ and\n added functionality to handle model parallel parameters. Note that\n the gradients are modified in place.\n Arguments:\n parameters (Iterable[Tensor] or Tensor): an iterable of Tensors or a\n single Tensor that will have gradients normalized\n max_norm (float or int): max norm of the gradients\n norm_type (float or int): type of the used p-norm. 
Can be ``'inf'`` for\n infinity norm.\n Returns:\n Total norm of the parameters (viewed as a single vector).\n \"\"\"\n\n if mp_group is None:\n mp_rank = 0\n else:\n mp_rank = dist.get_rank(mp_group)\n\n norm_type = float(norm_type)\n if norm_type == inf:\n total_norm = max(g.data.abs().max() for g in gradients)\n total_norm_cuda = torch.cuda.FloatTensor([float(total_norm)])\n dist.all_reduce(total_norm_cuda, op=torch.distributed.ReduceOp.MAX, group=dp_group)\n\n # Take max across all GPUs.\n if mp_group is not None:\n dist.all_reduce(tensor=total_norm_cuda, op=torch.distributed.ReduceOp.MAX)\n total_norm = total_norm_cuda[0].item()\n else:\n total_norm = 0.0\n # if dist.get_rank() == 0:\n # logger.info(f\"Total Norm beginning {total_norm}\")\n\n for g, p in zip(gradients, params):\n # Pipeline parallelism may replicate parameters. Avoid multi-counting.\n tp_param_flag = False\n if is_model_parallel_parameter(p) or (isinstance(p, ColoParameter) and not p.is_replicate()):\n tp_param_flag = True\n if tp_param_flag or mp_rank == 0:\n param_norm = g.data.double().norm(2)\n total_norm += param_norm.item()**2\n\n # Sum across all model parallel GPUs.\n total_norm_cuda = torch.cuda.FloatTensor([float(total_norm)])\n torch.distributed.all_reduce(total_norm_cuda, op=torch.distributed.ReduceOp.SUM, group=dp_group)\n\n if mp_group is not None:\n dist.all_reduce(tensor=total_norm_cuda, op=torch.distributed.ReduceOp.SUM, group=mp_group)\n\n total_norm = total_norm_cuda[0].item()**(1. / norm_type)\n\n if total_norm == float('inf') or total_norm == -float('inf') or total_norm != total_norm:\n total_norm = -1\n\n return total_norm\n\n\ndef sync_param(flat_tensor, tensor_list):\n \"\"\"\n Synchronize the flattened tensor and unflattened tensor list. When\n a list of tensor are flattened with `torch._utils._unflatten_dense_tensors`,\n a new tensor is created. Thus, the flat tensor and original tensor list do not\n share the same memory space. This function will update the tensor list so that\n they point to the same value.\n\n :param flat_tensor: A flat tensor obtained by calling `torch._utils._unflatten_dense_tensors` on a tensor lsit\n :param tensor_list: A list of tensors corresponding to the flattened tensor\n :type flat_tensor: torch.Tensor\n :type tensor_list: List[torch.Tensor]\n \"\"\"\n updated_params = unflatten(flat_tensor, tensor_list)\n\n # update the tensor data\n for p, q in zip(tensor_list, updated_params):\n p.data = q.data\n", "path": "colossalai/zero/sharded_optim/_utils.py" } ]
[ { "content": "import math\nfrom typing import Optional\n\nimport torch\nimport torch.distributed as dist\nfrom torch import inf\nfrom torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors\n\nfrom colossalai.tensor import ColoParameter\nfrom colossalai.utils import is_model_parallel_parameter\n\n\ndef flatten(input_):\n return _flatten_dense_tensors(input_)\n\n\ndef unflatten(flat, tensors):\n return _unflatten_dense_tensors(flat, tensors)\n\n\ndef count_numel(tensor_list):\n res = 0\n for tensor in tensor_list:\n res += tensor.numel()\n return res\n\n\ndef calculate_padding(numel, unit_size):\n remainder = numel % unit_size\n return unit_size - remainder if remainder else remainder\n\n\ndef shuffle_by_round_robin(tensor_list, num_partitions):\n partitions = dict()\n\n for tensor_idx, tensor in enumerate(tensor_list):\n partition_to_go = tensor_idx % num_partitions\n if partition_to_go not in partitions:\n partitions[partition_to_go] = []\n partitions[partition_to_go].append(dict(tensor=tensor, index=tensor_idx))\n\n partitions_count = len(partitions)\n new_tensor_list = []\n tensor_index_mapping = dict()\n\n for partition_id in range(partitions_count):\n partition_tensors = partitions[partition_id]\n for item in partition_tensors:\n tensor_index_mapping[item['index']] = len(new_tensor_list)\n new_tensor_list.append(item['tensor'])\n\n return new_tensor_list, tensor_index_mapping\n\n\n# create a flat tensor aligned at the alignment boundary\ndef flatten_dense_tensors_with_padding(tensor_list, unit_size):\n num_elements = count_numel(tensor_list)\n padding = calculate_padding(num_elements, unit_size=unit_size)\n\n if padding > 0:\n pad_tensor = torch.zeros(padding, device=tensor_list[0].device, dtype=tensor_list[0].dtype)\n padded_tensor_list = tensor_list + [pad_tensor]\n else:\n padded_tensor_list = tensor_list\n\n return flatten(padded_tensor_list)\n\n\ndef is_nccl_aligned(tensor):\n return tensor.data_ptr() % 4 == 0\n\n\ndef get_grad_accumulate_object(tensor):\n \"\"\"\n Return the AccumulateGrad of the input tensor\n \"\"\"\n\n # grad_fn reference:\n # https://discuss.pytorch.org/t/in-the-grad-fn-i-find-a-next-functions-but-i-dont-understand-the-meaning-of-the-attribute/24463\n # expand_as reference: https://pytorch.org/docs/stable/generated/torch.Tensor.expand.html#torch.Tensor.expand\n #\n # `next_functions` will return the backward graph where\n # the first element is the AccumulateGrad of the leaf nodes.\n # we want to get the AccumulateGrad of the input tensor instead of the leaf\n # node in the whole computation graph.\n # Therefore, we call expand_as to create a dummy graph\n # where tensor_tmp and tensor indeed point to the same object.\n # You can check this by print(tensor.data_ptr() == tensor_tmp.data_ptr())\n tensor_tmp = tensor.expand_as(tensor)\n grad_acc_obj = tensor_tmp.grad_fn.next_functions[0][0]\n return grad_acc_obj\n\n\ndef split_half_float_double(tensor_list):\n dtypes = [\"torch.cuda.HalfTensor\", \"torch.cuda.FloatTensor\", \"torch.cuda.DoubleTensor\", \"torch.cuda.BFloat16Tensor\"]\n buckets = []\n for i, dtype in enumerate(dtypes):\n bucket = [t for t in tensor_list if t.type() == dtype]\n if bucket:\n buckets.append(bucket)\n return buckets\n\n\ndef reduce_tensor_dp_group(tensor: torch.Tensor,\n dtype: Optional[torch.dtype] = None,\n dst_local_rank: Optional[int] = None,\n dst_global_rank: Optional[int] = None,\n group: Optional[dist.ProcessGroup] = None):\n \"\"\"\n Reduce the tensor in the data parallel process group\n\n :param tensor: A 
tensor object to reduce/all-reduce\n :param dtype: The data type used in communication\n :param dst_rank: The source rank for reduce. If dst_rank is None,\n :param parallel_mode: Communication parallel mode\n all-reduce will be used instead of reduce. Default is None.\n\n :type tensor: torch.Tensor\n :type dtype: torch.dtype, optional\n :type dst_rank: int, optional\n :type pg: ProcessGroup, optional\n \"\"\"\n # use the original dtype\n if dtype is None:\n dtype = tensor.dtype\n\n # cast the data to specified dtype for reduce/all-reduce\n if tensor.dtype != dtype:\n tensor_to_reduce = tensor.to(dtype)\n else:\n tensor_to_reduce = tensor\n\n world_size = dist.get_world_size(group=group)\n tensor_to_reduce.div_(world_size)\n\n # if rank is None, all reduce will be used\n # else, reduce is used\n use_all_reduce = dst_local_rank is None\n\n if use_all_reduce:\n dist.all_reduce(tensor_to_reduce, group=group)\n else:\n dist.reduce(tensor=tensor_to_reduce, dst=dst_global_rank, group=group)\n\n # recover the original dtype\n if tensor.dtype != dtype and tensor is not tensor_to_reduce:\n local_rank = dist.get_rank(group=group)\n if use_all_reduce or dst_local_rank == local_rank:\n tensor.copy_(tensor_to_reduce)\n\n return tensor\n\n\ndef has_inf_or_nan(tensor):\n try:\n # if tensor is half, the .float() incurs an additional deep copy, but it's necessary if\n # Pytorch's .sum() creates a one-element tensor of the same type as tensor\n # (which is true for some recent version of pytorch).\n tensor_sum = float(tensor.float().sum())\n # More efficient version that can be used if .sum() returns a Python scalar\n # tensor_sum = float(tensor.sum())\n except RuntimeError as instance:\n # We want to check if inst is actually an overflow exception.\n # RuntimeError could come from a different error.\n # If so, we still want the exception to propagate.\n if \"value cannot be converted\" not in instance.args[0]:\n raise\n return True\n else:\n if tensor_sum == float('inf') or tensor_sum == -float('inf') or tensor_sum != tensor_sum:\n return True\n return False\n\n\ndef release_param_grad(tensor_list):\n for tensor in tensor_list:\n tensor.grad = None\n\n\ndef calculate_global_norm_from_list(norm_list):\n \"\"\" Compute total from a list of norms\n \"\"\"\n total_norm = 0.0\n for norm in norm_list:\n total_norm += norm**2.0\n return math.sqrt(total_norm)\n\n\ndef compute_norm(gradients, params, dp_group, mp_group, norm_type=2):\n \"\"\"Clips gradient norm of an iterable of parameters.\n This is adapted from torch.nn.utils.clip_grad.clip_grad_norm_ and\n added functionality to handle model parallel parameters. Note that\n the gradients are modified in place.\n Arguments:\n parameters (Iterable[Tensor] or Tensor): an iterable of Tensors or a\n single Tensor that will have gradients normalized\n max_norm (float or int): max norm of the gradients\n norm_type (float or int): type of the used p-norm. 
Can be ``'inf'`` for\n infinity norm.\n Returns:\n Total norm of the parameters (viewed as a single vector).\n \"\"\"\n\n if mp_group is None:\n mp_rank = 0\n else:\n mp_rank = dist.get_rank(mp_group)\n\n norm_type = float(norm_type)\n if norm_type == inf:\n total_norm = max(g.data.abs().max() for g in gradients)\n total_norm_cuda = torch.cuda.FloatTensor([float(total_norm)])\n dist.all_reduce(total_norm_cuda, op=torch.distributed.ReduceOp.MAX, group=dp_group)\n\n # Take max across all GPUs.\n if mp_group is not None:\n dist.all_reduce(tensor=total_norm_cuda, op=torch.distributed.ReduceOp.MAX)\n total_norm = total_norm_cuda[0].item()\n else:\n total_norm = 0.0\n # if dist.get_rank() == 0:\n # logger.info(f\"Total Norm beginning {total_norm}\")\n\n for g, p in zip(gradients, params):\n # Pipeline parallelism may replicate parameters. Avoid multi-counting.\n tp_param_flag = False\n if is_model_parallel_parameter(p) or (isinstance(p, ColoParameter) and not p.is_replicate()):\n tp_param_flag = True\n if tp_param_flag or mp_rank == 0:\n param_norm = g.data.double().norm(2)\n total_norm += param_norm.item()**2\n\n # Sum across all model parallel GPUs.\n total_norm_cuda = torch.cuda.FloatTensor([float(total_norm)])\n torch.distributed.all_reduce(total_norm_cuda, op=torch.distributed.ReduceOp.SUM, group=dp_group)\n\n if mp_group is not None:\n dist.all_reduce(tensor=total_norm_cuda, op=torch.distributed.ReduceOp.SUM, group=mp_group)\n\n total_norm = total_norm_cuda[0].item()**(1. / norm_type)\n\n if total_norm == float('inf') or total_norm == -float('inf') or total_norm != total_norm:\n total_norm = -1\n\n return total_norm\n\n\ndef sync_param(flat_tensor, tensor_list):\n \"\"\"\n Synchronize the flattened tensor and unflattened tensor list. When\n a list of tensor are flattened with `torch._utils._unflatten_dense_tensors`,\n a new tensor is created. Thus, the flat tensor and original tensor list do not\n share the same memory space. This function will update the tensor list so that\n they point to the same value.\n\n :param flat_tensor: A flat tensor obtained by calling `torch._utils._unflatten_dense_tensors` on a tensor lsit\n :param tensor_list: A list of tensors corresponding to the flattened tensor\n :type flat_tensor: torch.Tensor\n :type tensor_list: List[torch.Tensor]\n \"\"\"\n updated_params = unflatten(flat_tensor, tensor_list)\n\n # update the tensor data\n for p, q in zip(tensor_list, updated_params):\n p.data = q.data\n", "path": "colossalai/zero/sharded_optim/_utils.py" } ]
diff --git a/colossalai/zero/sharded_optim/_utils.py b/colossalai/zero/sharded_optim/_utils.py index 68928b232660..9ca2fdf5aa06 100644 --- a/colossalai/zero/sharded_optim/_utils.py +++ b/colossalai/zero/sharded_optim/_utils.py @@ -3,7 +3,7 @@ import torch import torch.distributed as dist -from torch.six import inf +from torch import inf from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors from colossalai.tensor import ColoParameter
systemd__mkosi-1956
[Meta] declare a policy about adding new distributions Before people start creating issues asking to support their favorite distribution, I think that mkosi should declare its policy regarding support for new distributions. The policy should state on what terms (if any) you would be willing to support a new distribution.
[ { "content": "# SPDX-License-Identifier: LGPL-2.1+\n\nimport enum\nimport importlib\nimport re\nfrom collections.abc import Sequence\nfrom typing import TYPE_CHECKING, Optional, cast\n\nfrom mkosi.architecture import Architecture\nfrom mkosi.util import StrEnum, read_os_release\n\nif TYPE_CHECKING:\n from mkosi.state import MkosiState\n\n\nclass PackageType(StrEnum):\n none = enum.auto()\n rpm = enum.auto()\n deb = enum.auto()\n pkg = enum.auto()\n ebuild = enum.auto()\n\n\nclass DistributionInstaller:\n @classmethod\n def pretty_name(cls) -> str:\n raise NotImplementedError\n\n @classmethod\n def setup(cls, state: \"MkosiState\") -> None:\n raise NotImplementedError\n\n @classmethod\n def install(cls, state: \"MkosiState\") -> None:\n raise NotImplementedError\n\n @classmethod\n def install_packages(cls, state: \"MkosiState\", packages: Sequence[str]) -> None:\n raise NotImplementedError\n\n @classmethod\n def remove_packages(cls, state: \"MkosiState\", packages: Sequence[str]) -> None:\n raise NotImplementedError\n\n @classmethod\n def filesystem(cls) -> str:\n return \"ext4\"\n\n @staticmethod\n def architecture(arch: Architecture) -> str:\n return str(arch)\n\n @classmethod\n def package_type(cls) -> PackageType:\n return PackageType.none\n\n @classmethod\n def default_release(cls) -> str:\n return \"\"\n\n @classmethod\n def default_tools_tree_distribution(cls) -> Optional[\"Distribution\"]:\n return None\n\n @classmethod\n def tools_tree_repositories(cls) -> list[str]:\n return []\n\n @classmethod\n def tools_tree_packages(cls) -> list[str]:\n return []\n\n\nclass Distribution(StrEnum):\n fedora = enum.auto()\n debian = enum.auto()\n ubuntu = enum.auto()\n arch = enum.auto()\n opensuse = enum.auto()\n mageia = enum.auto()\n centos = enum.auto()\n rhel_ubi = enum.auto()\n openmandriva = enum.auto()\n rocky = enum.auto()\n alma = enum.auto()\n gentoo = enum.auto()\n custom = enum.auto()\n\n def is_centos_variant(self) -> bool:\n return self in (Distribution.centos, Distribution.alma, Distribution.rocky)\n\n def is_dnf_distribution(self) -> bool:\n return self in (\n Distribution.fedora,\n Distribution.mageia,\n Distribution.centos,\n Distribution.rhel_ubi,\n Distribution.openmandriva,\n Distribution.rocky,\n Distribution.alma,\n )\n\n def is_apt_distribution(self) -> bool:\n return self in (Distribution.debian, Distribution.ubuntu)\n\n def setup(self, state: \"MkosiState\") -> None:\n return self.installer().setup(state)\n\n def install(self, state: \"MkosiState\") -> None:\n return self.installer().install(state)\n\n def install_packages(self, state: \"MkosiState\", packages: Sequence[str]) -> None:\n return self.installer().install_packages(state, packages)\n\n def remove_packages(self, state: \"MkosiState\", packages: Sequence[str]) -> None:\n return self.installer().remove_packages(state, packages)\n\n def filesystem(self) -> str:\n return self.installer().filesystem()\n\n def architecture(self, arch: Architecture) -> str:\n return self.installer().architecture(arch)\n\n def package_type(self) -> PackageType:\n return self.installer().package_type()\n\n def default_release(self) -> str:\n return self.installer().default_release()\n\n def default_tools_tree_distribution(self) -> Optional[\"Distribution\"]:\n return self.installer().default_tools_tree_distribution()\n\n def tools_tree_repositories(self) -> list[str]:\n return self.installer().tools_tree_repositories()\n\n def tools_tree_packages(self) -> list[str]:\n return self.installer().tools_tree_packages()\n\n def 
installer(self) -> type[DistributionInstaller]:\n modname = str(self).replace('-', '_')\n mod = importlib.import_module(f\"mkosi.distributions.{modname}\")\n installer = getattr(mod, \"Installer\")\n assert issubclass(installer, DistributionInstaller)\n return cast(type[DistributionInstaller], installer)\n\n\ndef detect_distribution() -> tuple[Optional[Distribution], Optional[str]]:\n try:\n os_release = read_os_release()\n except FileNotFoundError:\n return None, None\n\n dist_id = os_release.get(\"ID\", \"linux\")\n dist_id_like = os_release.get(\"ID_LIKE\", \"\").split()\n version = os_release.get(\"VERSION\", None)\n version_id = os_release.get(\"VERSION_ID\", None)\n version_codename = os_release.get(\"VERSION_CODENAME\", None)\n extracted_codename = None\n\n if version:\n # extract Debian release codename\n m = re.search(r\"\\((.*?)\\)\", version)\n if m:\n extracted_codename = m.group(1)\n\n d: Optional[Distribution] = None\n for the_id in [dist_id, *dist_id_like]:\n d = Distribution.__members__.get(the_id, None)\n if d is not None:\n break\n\n if d in {Distribution.debian, Distribution.ubuntu} and (version_codename or extracted_codename):\n version_id = version_codename or extracted_codename\n\n return d, version_id\n", "path": "mkosi/distributions/__init__.py" } ]
[ { "content": "# SPDX-License-Identifier: LGPL-2.1+\n\nimport enum\nimport importlib\nimport re\nfrom collections.abc import Sequence\nfrom typing import TYPE_CHECKING, Optional, cast\n\nfrom mkosi.architecture import Architecture\nfrom mkosi.util import StrEnum, read_os_release\n\nif TYPE_CHECKING:\n from mkosi.state import MkosiState\n\n\nclass PackageType(StrEnum):\n none = enum.auto()\n rpm = enum.auto()\n deb = enum.auto()\n pkg = enum.auto()\n ebuild = enum.auto()\n\n\nclass DistributionInstaller:\n @classmethod\n def pretty_name(cls) -> str:\n raise NotImplementedError\n\n @classmethod\n def setup(cls, state: \"MkosiState\") -> None:\n raise NotImplementedError\n\n @classmethod\n def install(cls, state: \"MkosiState\") -> None:\n raise NotImplementedError\n\n @classmethod\n def install_packages(cls, state: \"MkosiState\", packages: Sequence[str]) -> None:\n raise NotImplementedError\n\n @classmethod\n def remove_packages(cls, state: \"MkosiState\", packages: Sequence[str]) -> None:\n raise NotImplementedError\n\n @classmethod\n def filesystem(cls) -> str:\n return \"ext4\"\n\n @staticmethod\n def architecture(arch: Architecture) -> str:\n return str(arch)\n\n @classmethod\n def package_type(cls) -> PackageType:\n return PackageType.none\n\n @classmethod\n def default_release(cls) -> str:\n return \"\"\n\n @classmethod\n def default_tools_tree_distribution(cls) -> Optional[\"Distribution\"]:\n return None\n\n @classmethod\n def tools_tree_repositories(cls) -> list[str]:\n return []\n\n @classmethod\n def tools_tree_packages(cls) -> list[str]:\n return []\n\n\nclass Distribution(StrEnum):\n # Please consult docs/distribution-policy.md and contact one\n # of the mkosi maintainers before implementing a new distribution.\n fedora = enum.auto()\n debian = enum.auto()\n ubuntu = enum.auto()\n arch = enum.auto()\n opensuse = enum.auto()\n mageia = enum.auto()\n centos = enum.auto()\n rhel_ubi = enum.auto()\n openmandriva = enum.auto()\n rocky = enum.auto()\n alma = enum.auto()\n gentoo = enum.auto()\n custom = enum.auto()\n\n def is_centos_variant(self) -> bool:\n return self in (Distribution.centos, Distribution.alma, Distribution.rocky)\n\n def is_dnf_distribution(self) -> bool:\n return self in (\n Distribution.fedora,\n Distribution.mageia,\n Distribution.centos,\n Distribution.rhel_ubi,\n Distribution.openmandriva,\n Distribution.rocky,\n Distribution.alma,\n )\n\n def is_apt_distribution(self) -> bool:\n return self in (Distribution.debian, Distribution.ubuntu)\n\n def setup(self, state: \"MkosiState\") -> None:\n return self.installer().setup(state)\n\n def install(self, state: \"MkosiState\") -> None:\n return self.installer().install(state)\n\n def install_packages(self, state: \"MkosiState\", packages: Sequence[str]) -> None:\n return self.installer().install_packages(state, packages)\n\n def remove_packages(self, state: \"MkosiState\", packages: Sequence[str]) -> None:\n return self.installer().remove_packages(state, packages)\n\n def filesystem(self) -> str:\n return self.installer().filesystem()\n\n def architecture(self, arch: Architecture) -> str:\n return self.installer().architecture(arch)\n\n def package_type(self) -> PackageType:\n return self.installer().package_type()\n\n def default_release(self) -> str:\n return self.installer().default_release()\n\n def default_tools_tree_distribution(self) -> Optional[\"Distribution\"]:\n return self.installer().default_tools_tree_distribution()\n\n def tools_tree_repositories(self) -> list[str]:\n return 
self.installer().tools_tree_repositories()\n\n def tools_tree_packages(self) -> list[str]:\n return self.installer().tools_tree_packages()\n\n def installer(self) -> type[DistributionInstaller]:\n modname = str(self).replace('-', '_')\n mod = importlib.import_module(f\"mkosi.distributions.{modname}\")\n installer = getattr(mod, \"Installer\")\n assert issubclass(installer, DistributionInstaller)\n return cast(type[DistributionInstaller], installer)\n\n\ndef detect_distribution() -> tuple[Optional[Distribution], Optional[str]]:\n try:\n os_release = read_os_release()\n except FileNotFoundError:\n return None, None\n\n dist_id = os_release.get(\"ID\", \"linux\")\n dist_id_like = os_release.get(\"ID_LIKE\", \"\").split()\n version = os_release.get(\"VERSION\", None)\n version_id = os_release.get(\"VERSION_ID\", None)\n version_codename = os_release.get(\"VERSION_CODENAME\", None)\n extracted_codename = None\n\n if version:\n # extract Debian release codename\n m = re.search(r\"\\((.*?)\\)\", version)\n if m:\n extracted_codename = m.group(1)\n\n d: Optional[Distribution] = None\n for the_id in [dist_id, *dist_id_like]:\n d = Distribution.__members__.get(the_id, None)\n if d is not None:\n break\n\n if d in {Distribution.debian, Distribution.ubuntu} and (version_codename or extracted_codename):\n version_id = version_codename or extracted_codename\n\n return d, version_id\n", "path": "mkosi/distributions/__init__.py" } ]
diff --git a/docs/distribution-policy.md b/docs/distribution-policy.md new file mode 100644 index 000000000..458b5d57d --- /dev/null +++ b/docs/distribution-policy.md @@ -0,0 +1,32 @@ +# Adding new distributions + +Merging support for a new distribution in mkosi depends on a few +factors. Not all of these are required but depending on how many of +these requirements are satisfied, the chances of us merging support for +your distribution will improve: + +1. Is the distribution somewhat popular? mkosi's goal is not to support + every distribution under the sun, the distribution should have a + substantial amount of users. +2. Does the distribution differentiate itself somehow from the + distributions that are already supported? We're generally not + interested in supporting distributions that only consist of minimal + configuration changes to another distribution. +3. Is there a long-term maintainer for the distribution in mkosi? When + proposing support for a new distribution, we expect you to be the + maintainer for the distribution and to respond when pinged for + support on distribution specific issues. +4. Does the distribution use a custom package manager or one of the + already supported ones (apt, dnf, pacman, zypper)? Supporting new + package managers in mkosi is generally a lot of work. We can support + new ones if needed for a new distribution, but we will insist on the + package manager having a somewhat sane design, with official support + for building in a chroot and running unprivileged in a user namespace + being the bare minimum features we expect from any new package + manager. + +We will only consider new distributions that satisfy all or most of +these requirements. However, you can still use mkosi with the +distribution by setting the `Distribution` setting to `custom` and +implementing either providing the rootfs via a skeleton tree or base +tree, or by providing the rootfs via a prepare script. diff --git a/mkosi/distributions/__init__.py b/mkosi/distributions/__init__.py index 8169983ae..839d6ec13 100644 --- a/mkosi/distributions/__init__.py +++ b/mkosi/distributions/__init__.py @@ -72,6 +72,8 @@ def tools_tree_packages(cls) -> list[str]: class Distribution(StrEnum): + # Please consult docs/distribution-policy.md and contact one + # of the mkosi maintainers before implementing a new distribution. fedora = enum.auto() debian = enum.auto() ubuntu = enum.auto()
activeloopai__deeplake-1447
[BUG] hub.read failing with h265 videos Hi, hub.read() fails to decompress h265 videos correctly. I get unreadable videos with roughly 10x the expected number of frames. When converting them to h264 first, hub.read() seems to work fine.
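The workaround mentioned above (re-encoding to H.264 before ingesting) can be scripted; the sketch below is only an illustration under stated assumptions — the file names are placeholders, and it assumes ffmpeg with libx264 is installed and on PATH. It is not part of the hub codebase and does not address the underlying decompression bug.

```python
# Hypothetical workaround sketch: transcode H.265/HEVC to H.264 before hub.read().
# "clip_h265.mp4" / "clip_h264.mp4" are placeholder paths.
import subprocess

import hub

subprocess.run(
    ["ffmpeg", "-y", "-i", "clip_h265.mp4", "-c:v", "libx264", "-pix_fmt", "yuv420p", "clip_h264.mp4"],
    check=True,
)

sample = hub.read("clip_h264.mp4")
print(sample.shape)  # expected (nframes, height, width, 3), matching _read_video_shape below
```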
[ { "content": "import hub\nfrom hub.util.exceptions import (\n SampleCompressionError,\n SampleDecompressionError,\n UnsupportedCompressionError,\n CorruptedSampleError,\n)\nfrom hub.compression import (\n get_compression_type,\n BYTE_COMPRESSION,\n IMAGE_COMPRESSION,\n VIDEO_COMPRESSION,\n AUDIO_COMPRESSION,\n)\nfrom typing import Union, Tuple, Sequence, List, Optional, BinaryIO\nimport numpy as np\nfrom pathlib import Path\nfrom PIL import Image, UnidentifiedImageError # type: ignore\nfrom io import BytesIO\nimport mmap\nimport struct\nimport sys\nimport re\nimport numcodecs.lz4 # type: ignore\nimport lz4.frame # type: ignore\nimport os\nimport subprocess as sp\nimport tempfile\nfrom miniaudio import ( # type: ignore\n mp3_read_file_f32,\n mp3_read_f32,\n mp3_get_file_info,\n mp3_get_info,\n flac_read_file_f32,\n flac_read_f32,\n flac_get_file_info,\n flac_get_info,\n wav_read_file_f32,\n wav_read_f32,\n wav_get_file_info,\n wav_get_info,\n)\nfrom numpy.core.fromnumeric import compress # type: ignore\nimport math\n\n\nif sys.byteorder == \"little\":\n _NATIVE_INT32 = \"<i4\"\n _NATIVE_FLOAT32 = \"<f4\"\nelse:\n _NATIVE_INT32 = \">i4\"\n _NATIVE_FLOAT32 = \">f4\"\n\nif os.name == \"nt\":\n _FFMPEG_BINARY = \"ffmpeg.exe\"\n _FFPROBE_BINARY = \"ffprobe.exe\"\nelse:\n _FFMPEG_BINARY = \"ffmpeg\"\n _FFPROBE_BINARY = \"ffprobe\"\n\nDIMS_RE = re.compile(rb\" ([0-9]+)x([0-9]+)\")\nFPS_RE = re.compile(rb\" ([0-9]+) fps,\")\nDURATION_RE = re.compile(rb\"Duration: ([0-9]{2}:[0-9]{2}:[0-9]{2}\\.[0-9]{2}),\")\nINFO_RE = re.compile(rb\"([a-z]+)=([0-9./]+)\")\n\n_JPEG_SOFS = [\n b\"\\xff\\xc0\",\n b\"\\xff\\xc2\",\n b\"\\xff\\xc1\",\n b\"\\xff\\xc3\",\n b\"\\xff\\xc5\",\n b\"\\xff\\xc6\",\n b\"\\xff\\xc7\",\n b\"\\xff\\xc9\",\n b\"\\xff\\xca\",\n b\"\\xff\\xcb\",\n b\"\\xff\\xcd\",\n b\"\\xff\\xce\",\n b\"\\xff\\xcf\",\n b\"\\xff\\xde\",\n # Skip:\n b\"\\xff\\xcc\",\n b\"\\xff\\xdc\",\n b\"\\xff\\xdd\",\n b\"\\xff\\xdf\",\n # App: (0xFFE0 - 0xFFEF):\n *map(lambda x: x.to_bytes(2, \"big\"), range(0xFFE0, 0xFFF0)),\n # DQT:\n b\"\\xff\\xdb\",\n # COM:\n b\"\\xff\\xfe\",\n # Start of scan\n b\"\\xff\\xda\",\n]\n\n_JPEG_SKIP_MARKERS = set(_JPEG_SOFS[14:])\n_JPEG_SOFS_RE = re.compile(b\"|\".join(_JPEG_SOFS))\n_STRUCT_HHB = struct.Struct(\">HHB\")\n_STRUCT_II = struct.Struct(\">ii\")\n\n_HUB_MKV_HEADER = b\"HUB_MKV_META\"\n\n_FFMPEG_EXISTS = None\n\n\ndef ffmpeg_exists():\n global _FFMPEG_EXISTS\n if _FFMPEG_EXISTS is None:\n _FFMPEG_EXISTS = True\n try:\n retval = sp.run(\n [_FFMPEG_BINARY, \"-h\"], stdout=sp.PIPE, stderr=sp.PIPE\n ).returncode\n except FileNotFoundError as e:\n _FFMPEG_EXISTS = False\n return _FFMPEG_EXISTS\n\n\ndef ffmpeg_binary():\n if ffmpeg_exists():\n return _FFMPEG_BINARY\n raise FileNotFoundError(\n \"FFMPEG not found. Install FFMPEG to use hub's video features\"\n )\n\n\ndef ffprobe_binary():\n if ffmpeg_exists():\n return _FFPROBE_BINARY\n raise FileNotFoundError(\n \"FFMPEG not found. 
Install FFMPEG to use hub's video features\"\n )\n\n\ndef to_image(array: np.ndarray) -> Image:\n shape = array.shape\n if len(shape) == 3 and shape[0] != 1 and shape[2] == 1:\n # convert (X,Y,1) grayscale to (X,Y) for pillow compatibility\n return Image.fromarray(array.squeeze(axis=2))\n\n return Image.fromarray(array)\n\n\ndef _compress_apng(array: np.ndarray) -> bytes:\n if array.ndim == 3:\n # Binary APNG\n frames = list(\n map(Image.fromarray, (array[:, :, i] for i in range(array.shape[2])))\n )\n elif array.ndim == 4 and array.shape[3] <= 4:\n # RGB(A) APNG\n frames = list(map(Image.fromarray, array))\n else:\n raise SampleCompressionError(array.shape, \"apng\", \"Unexpected shape.\")\n out = BytesIO()\n frames[0].save(out, \"png\", save_all=True, append_images=frames[1:])\n out.seek(0)\n ret = out.read()\n out.close()\n return ret\n\n\ndef _decompress_apng(buffer: Union[bytes, memoryview]) -> np.ndarray:\n img = Image.open(BytesIO(buffer))\n frame0 = np.array(img)\n if frame0.ndim == 2:\n ret = np.zeros(frame0.shape + (img.n_frames,), dtype=frame0.dtype)\n ret[:, :, 0] = frame0\n for i in range(1, img.n_frames):\n img.seek(i)\n ret[:, :, i] = np.array(img)\n else:\n ret = np.zeros((img.n_frames,) + frame0.shape, dtype=frame0.dtype)\n ret[0] = frame0\n for i in range(1, img.n_frames):\n img.seek(i)\n ret[i] = np.array(img)\n return ret\n\n\ndef compress_bytes(\n buffer: Union[bytes, memoryview], compression: Optional[str]\n) -> bytes:\n if compression == \"lz4\":\n return numcodecs.lz4.compress(buffer)\n else:\n raise SampleCompressionError(\n (len(buffer),), compression, f\"Not a byte compression: {compression}\"\n )\n\n\ndef decompress_bytes(\n buffer: Union[bytes, memoryview], compression: Optional[str]\n) -> bytes:\n if not buffer:\n return b\"\"\n if compression == \"lz4\":\n if (\n buffer[:4] == b'\\x04\"M\\x18'\n ): # python-lz4 magic number (backward compatiblity)\n return lz4.frame.decompress(buffer)\n return numcodecs.lz4.decompress(buffer)\n else:\n raise SampleDecompressionError()\n\n\ndef compress_array(array: np.ndarray, compression: Optional[str]) -> bytes:\n \"\"\"Compress some numpy array using `compression`. All meta information will be contained in the returned buffer.\n\n Note:\n `decompress_array` may be used to decompress from the returned bytes back into the `array`.\n\n Args:\n array (np.ndarray): Array to be compressed.\n compression (str, optional): `array` will be compressed with this compression into bytes. Right now only arrays compatible with `PIL` will be compressed.\n\n Raises:\n UnsupportedCompressionError: If `compression` is unsupported. See `hub.compressions`.\n SampleCompressionError: If there was a problem compressing `array`.\n NotImplementedError: If compression is not supported.\n\n Returns:\n bytes: Compressed `array` represented as bytes.\n \"\"\"\n\n # empty sample shouldn't be compressed\n\n if 0 in array.shape:\n return bytes()\n\n if compression not in hub.compressions:\n raise UnsupportedCompressionError(compression)\n\n if compression is None:\n return array.tobytes()\n\n compr_type = get_compression_type(compression)\n\n if compr_type == BYTE_COMPRESSION:\n return compress_bytes(array.tobytes(), compression)\n elif compr_type == AUDIO_COMPRESSION:\n raise NotImplementedError(\n \"In order to store audio data, you should use `hub.read(path_to_file)`. 
Compressing raw data is not yet supported.\"\n )\n elif compr_type == VIDEO_COMPRESSION:\n raise NotImplementedError(\n \"In order to store video data, you should use `hub.read(path_to_file)`. Compressing raw data is not yet supported.\"\n )\n\n if compression == \"apng\":\n return _compress_apng(array)\n try:\n img = to_image(array)\n out = BytesIO()\n out._close = out.close # type: ignore\n out.close = ( # type: ignore\n lambda: None\n ) # sgi save handler will try to close the stream (see https://github.com/python-pillow/Pillow/pull/5645)\n kwargs = {\"sizes\": [img.size]} if compression == \"ico\" else {}\n img.save(out, compression, **kwargs)\n out.seek(0)\n compressed_bytes = out.read()\n out._close() # type: ignore\n return compressed_bytes\n except (TypeError, OSError) as e:\n raise SampleCompressionError(array.shape, compression, str(e))\n\n\ndef decompress_array(\n buffer: Union[bytes, memoryview, str],\n shape: Optional[Tuple[int, ...]] = None,\n dtype: Optional[str] = None,\n compression: Optional[str] = None,\n) -> np.ndarray:\n \"\"\"Decompress some buffer into a numpy array. It is expected that all meta information is\n stored inside `buffer`.\n\n Note:\n `compress_array` may be used to get the `buffer` input.\n\n Args:\n buffer (bytes, memoryview, str): Buffer or file to be decompressed. It is assumed all meta information required to\n decompress is contained within `buffer`, except for byte compressions\n shape (Tuple[int], Optional): Desired shape of decompressed object. Reshape will attempt to match this shape before returning.\n dtype (str, Optional): Applicable only for byte compressions. Expected dtype of decompressed array.\n compression (str, Optional): Applicable only for byte compressions. Compression used to compression the given buffer.\n\n Raises:\n SampleDecompressionError: If decompression fails.\n ValueError: If dtype and shape are not specified for byte compression.\n\n Returns:\n np.ndarray: Array from the decompressed buffer.\n \"\"\"\n compr_type = get_compression_type(compression)\n if compr_type == BYTE_COMPRESSION:\n if dtype is None or shape is None:\n raise ValueError(\"dtype and shape must be specified for byte compressions.\")\n try:\n decompressed_bytes = decompress_bytes(buffer, compression) # type: ignore\n return np.frombuffer(decompressed_bytes, dtype=dtype).reshape(shape)\n except Exception:\n raise SampleDecompressionError()\n elif compr_type == AUDIO_COMPRESSION:\n return _decompress_audio(buffer, compression)\n elif compr_type == VIDEO_COMPRESSION:\n return _decompress_video(buffer, nframes=shape[0] if shape else None)\n\n if compression == \"apng\":\n return _decompress_apng(buffer) # type: ignore\n try:\n if shape is not None and 0 in shape:\n return np.zeros(shape, dtype=dtype)\n if not isinstance(buffer, str):\n buffer = BytesIO(buffer) # type: ignore\n img = Image.open(buffer) # type: ignore\n arr = np.array(img)\n if shape is not None:\n arr = arr.reshape(shape)\n return arr\n except Exception:\n raise SampleDecompressionError()\n\n\ndef _get_bounding_shape(shapes: Sequence[Tuple[int, ...]]) -> Tuple[int, int, int]:\n \"\"\"Gets the shape of a bounding box that can contain the given the shapes tiled horizontally.\"\"\"\n if len(shapes) == 0:\n return (0, 0, 0)\n channels_shape = shapes[0][2:]\n for shape in shapes:\n if shape[2:] != channels_shape:\n raise ValueError(\n \"The data can't be compressed as the number of channels doesn't match.\"\n )\n return (max(s[0] for s in shapes), sum(s[1] for s in shapes)) + channels_shape # 
type: ignore\n\n\ndef compress_multiple(\n arrays: Sequence[np.ndarray], compression: Optional[str]\n) -> bytes:\n \"\"\"Compress multiple arrays of different shapes into a single buffer. Used for chunk wise compression.\n The arrays are tiled horizontally and padded with zeros to fit in a bounding box, which is then compressed.\"\"\"\n dtype = arrays[0].dtype\n for arr in arrays:\n if arr.dtype != dtype:\n raise SampleCompressionError(\n arr.shape,\n compression,\n message=\"All arrays expected to have same dtype.\",\n )\n compr_type = get_compression_type(compression)\n if compr_type == BYTE_COMPRESSION:\n return compress_bytes(\n b\"\".join(arr.tobytes() for arr in arrays), compression\n ) # Note: shape and dtype info not included\n elif compr_type == AUDIO_COMPRESSION:\n raise NotImplementedError(\"compress_multiple does not support audio samples.\")\n elif compr_type == VIDEO_COMPRESSION:\n raise NotImplementedError(\"compress_multiple does not support video samples.\")\n elif compression == \"apng\":\n raise NotImplementedError(\"compress_multiple does not support apng samples.\")\n canvas = np.zeros(_get_bounding_shape([arr.shape for arr in arrays]), dtype=dtype)\n next_x = 0\n for arr in arrays:\n canvas[: arr.shape[0], next_x : next_x + arr.shape[1]] = arr\n next_x += arr.shape[1]\n return compress_array(canvas, compression=compression)\n\n\ndef decompress_multiple(\n buffer: Union[bytes, memoryview],\n shapes: Sequence[Tuple[int, ...]],\n dtype: Optional[Union[np.dtype, str]] = None,\n compression: Optional[str] = None,\n) -> List[np.ndarray]:\n \"\"\"Unpack a compressed buffer into multiple arrays.\"\"\"\n if not buffer:\n return []\n if compression and get_compression_type(compression) == \"byte\":\n decompressed_buffer = memoryview(decompress_bytes(buffer, compression))\n arrays = []\n itemsize = np.dtype(dtype).itemsize\n for shape in shapes:\n nbytes = int(np.prod(shape) * itemsize)\n arrays.append(\n np.frombuffer(decompressed_buffer[:nbytes], dtype=dtype).reshape(shape)\n )\n decompressed_buffer = decompressed_buffer[nbytes:]\n return arrays\n canvas = decompress_array(buffer)\n arrays = []\n next_x = 0\n for shape in shapes:\n arrays.append(canvas[: shape[0], next_x : next_x + shape[1]])\n next_x += shape[1]\n return arrays\n\n\ndef verify_compressed_file(\n file: Union[str, BinaryIO, bytes, memoryview], compression: str\n) -> Union[Tuple[Tuple[int, ...], str], None]:\n \"\"\"Verify the contents of an image file\n Args:\n file (Union[str, BinaryIO, bytes, memoryview]): Path to the file or file like object or contents of the file\n compression (str): Expected compression of the image file\n \"\"\"\n if isinstance(file, str):\n file = open(file, \"rb\")\n close = True\n elif hasattr(file, \"read\"):\n close = False\n file.seek(0) # type: ignore\n else:\n close = False\n try:\n if compression == \"png\":\n return _verify_png(file)\n elif compression == \"jpeg\":\n return _verify_jpeg(file), \"|u1\"\n elif get_compression_type(compression) == AUDIO_COMPRESSION:\n return _read_audio_shape(file, compression), \"<f4\" # type: ignore\n elif compression in (\"mp4\", \"mkv\", \"avi\"):\n if isinstance(file, (bytes, memoryview, str)):\n return _read_video_shape(file), \"|u1\"\n else:\n return _fast_decompress(file)\n except Exception as e:\n raise CorruptedSampleError(compression)\n finally:\n if close:\n file.close() # type: ignore\n\n return None\n\n\ndef get_compression(header=None, path=None):\n if path:\n # These formats are recognized by file extension for now\n file_formats = 
[\"mp3\", \"flac\", \"wav\", \"mp4\", \"mkv\", \"avi\"]\n for fmt in file_formats:\n if str(path).lower().endswith(\".\" + fmt):\n return fmt\n if header:\n if not Image.OPEN:\n Image.init()\n for fmt in Image.OPEN:\n accept = Image.OPEN[fmt][1]\n if accept and accept(header):\n return fmt.lower()\n raise SampleDecompressionError()\n\n\ndef _verify_png(buf):\n if not hasattr(buf, \"read\"):\n buf = BytesIO(buf)\n img = Image.open(buf)\n img.verify()\n return Image._conv_type_shape(img)\n\n\ndef _verify_jpeg(f):\n if hasattr(f, \"read\"):\n return _verify_jpeg_file(f)\n return _verify_jpeg_buffer(f)\n\n\ndef _verify_jpeg_buffer(buf: bytes):\n # Start of Image\n mview = memoryview(buf)\n assert buf.startswith(b\"\\xff\\xd8\")\n # Look for Start of Frame\n sof_idx = -1\n offset = 0\n while True:\n match = _re_find_first(_JPEG_SOFS_RE, mview[offset:])\n if match is None:\n break\n idx = match.start(0) + offset\n marker = buf[idx : idx + 2]\n if marker == _JPEG_SOFS[-1]:\n break\n offset = idx + int.from_bytes(buf[idx + 2 : idx + 4], \"big\") + 2\n if marker not in _JPEG_SKIP_MARKERS:\n sof_idx = idx\n if sof_idx == -1:\n raise Exception()\n\n length = int.from_bytes(mview[sof_idx + 2 : sof_idx + 4], \"big\")\n assert mview[sof_idx + length + 2 : sof_idx + length + 4] in [\n b\"\\xff\\xc4\",\n b\"\\xff\\xdb\",\n b\"\\xff\\xdd\",\n b\"\\xff\\xda\",\n ] # DHT, DQT, DRI, SOS\n shape = _STRUCT_HHB.unpack(mview[sof_idx + 5 : sof_idx + 10])\n assert buf.find(b\"\\xff\\xd9\") != -1\n if shape[-1] in (1, None):\n shape = shape[:-1]\n return shape\n\n\ndef _verify_jpeg_file(f):\n # See: https://dev.exiv2.org/projects/exiv2/wiki/The_Metadata_in_JPEG_files#2-The-metadata-structure-in-JPEG\n mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)\n mv = memoryview(mm)\n try:\n soi = f.read(2)\n # Start of Image\n assert soi == b\"\\xff\\xd8\"\n\n # Look for Start of Frame\n sof_idx = -1\n offset = 0\n while True:\n view = mv[offset:]\n match = _re_find_first(_JPEG_SOFS_RE, view)\n view.release()\n if match is None:\n break\n idx = match.start(0) + offset\n marker = mm[idx : idx + 2]\n if marker == _JPEG_SOFS[-1]:\n break\n f.seek(idx + 2)\n offset = idx + int.from_bytes(f.read(2), \"big\") + 2\n if marker not in _JPEG_SKIP_MARKERS:\n sof_idx = idx\n if sof_idx == -1:\n raise Exception() # Caught by verify_compressed_file()\n\n f.seek(sof_idx + 2)\n length = int.from_bytes(f.read(2), \"big\")\n f.seek(length - 2, 1)\n definition_start = f.read(2)\n assert definition_start in [\n b\"\\xff\\xc4\",\n b\"\\xff\\xdb\",\n b\"\\xff\\xdd\",\n b\"\\xff\\xda\",\n ] # DHT, DQT, DRI, SOS\n f.seek(sof_idx + 5)\n shape = _STRUCT_HHB.unpack(f.read(5))\n # TODO this check is too slow\n assert mm.find(b\"\\xff\\xd9\") != -1 # End of Image\n if shape[-1] in (1, None):\n shape = shape[:-1]\n return shape\n finally:\n mv.release()\n mm.close()\n\n\ndef _fast_decompress(buf):\n if not hasattr(buf, \"read\"):\n buf = BytesIO(buf)\n img = Image.open(buf)\n img.load()\n if img.mode == 1:\n args = (\"L\",)\n else:\n args = (img.mode,)\n enc = Image._getencoder(img.mode, \"raw\", args)\n enc.setimage(img.im)\n bufsize = max(65536, img.size[0] * 4)\n while True:\n status, err_code, buf = enc.encode(\n bufsize\n ) # See https://github.com/python-pillow/Pillow/blob/master/src/encode.c#L144\n if err_code:\n break\n if err_code < 0:\n raise Exception() # caught by verify_compressed_file()\n return Image._conv_type_shape(img)\n\n\ndef read_meta_from_compressed_file(\n file, compression: Optional[str] = None\n) -> Tuple[str, Tuple[int], 
str]:\n \"\"\"Reads shape, dtype and format without decompressing or verifying the sample.\"\"\"\n if isinstance(file, (str, Path)):\n f = open(file, \"rb\")\n isfile = True\n close = True\n elif hasattr(file, \"read\"):\n f = file\n close = False\n isfile = True\n f.seek(0)\n else:\n isfile = False\n f = file\n close = False\n try:\n if compression is None:\n path = file if isinstance(file, str) else None\n if hasattr(f, \"read\"):\n compression = get_compression(f.read(32), path)\n f.seek(0)\n else:\n compression = get_compression(f[:32], path) # type: ignore\n if compression == \"jpeg\":\n try:\n shape, typestr = _read_jpeg_shape(f), \"|u1\"\n except Exception:\n raise CorruptedSampleError(\"jpeg\")\n elif compression == \"png\":\n try:\n shape, typestr = _read_png_shape_and_dtype(f)\n except Exception:\n raise CorruptedSampleError(\"png\")\n elif get_compression_type(compression) == AUDIO_COMPRESSION:\n try:\n shape, typestr = _read_audio_shape(file, compression), \"<f4\"\n except Exception as e:\n raise CorruptedSampleError(compression)\n elif compression in (\"mp4\", \"mkv\", \"avi\"):\n try:\n shape, typestr = _read_video_shape(file), \"|u1\"\n except Exception as e:\n raise CorruptedSampleError(compression)\n else:\n img = Image.open(f) if isfile else Image.open(BytesIO(f)) # type: ignore\n shape, typestr = Image._conv_type_shape(img)\n compression = img.format.lower()\n return compression, shape, typestr # type: ignore\n finally:\n if close:\n f.close()\n\n\ndef _read_jpeg_shape(f: Union[bytes, BinaryIO]) -> Tuple[int, ...]:\n if hasattr(f, \"read\"):\n return _read_jpeg_shape_from_file(f)\n return _read_jpeg_shape_from_buffer(f) # type: ignore\n\n\ndef _re_find_first(pattern, string):\n for match in re.finditer(pattern, string):\n return match\n\n\ndef _read_jpeg_shape_from_file(f) -> Tuple[int, ...]:\n \"\"\"Reads shape of a jpeg image from file without loading the whole image in memory\"\"\"\n mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)\n mv = memoryview(mm)\n try:\n # Look for Start of Frame\n sof_idx = -1\n offset = 0\n while True:\n view = mv[offset:]\n match = _re_find_first(_JPEG_SOFS_RE, view)\n view.release()\n if match is None:\n break\n idx = match.start(0) + offset\n marker = mm[idx : idx + 2]\n if marker == _JPEG_SOFS[-1]:\n break\n f.seek(idx + 2)\n offset = idx + int.from_bytes(f.read(2), \"big\") + 2\n if marker not in _JPEG_SKIP_MARKERS:\n sof_idx = idx\n if sof_idx == -1:\n raise Exception()\n f.seek(sof_idx + 5)\n shape = _STRUCT_HHB.unpack(f.read(5)) # type: ignore\n if shape[-1] in (1, None):\n shape = shape[:-1]\n return shape\n finally:\n mv.release()\n mm.close()\n\n\ndef _read_jpeg_shape_from_buffer(buf: bytes) -> Tuple[int, ...]:\n \"\"\"Gets shape of a jpeg file from its contents\"\"\"\n # Look for Start of Frame\n mv = memoryview(buf)\n sof_idx = -1\n offset = 0\n while True:\n match = _re_find_first(_JPEG_SOFS_RE, mv[offset:])\n if match is None:\n break\n idx = match.start(0) + offset\n marker = buf[idx : idx + 2]\n if marker == _JPEG_SOFS[-1]:\n break\n offset = idx + int.from_bytes(buf[idx + 2 : idx + 4], \"big\") + 2\n if marker not in _JPEG_SKIP_MARKERS:\n sof_idx = idx\n if sof_idx == -1:\n raise Exception()\n shape = _STRUCT_HHB.unpack(memoryview(buf)[sof_idx + 5 : sof_idx + 10]) # type: ignore\n if shape[-1] in (1, None):\n shape = shape[:-1]\n return shape\n\n\ndef _read_png_shape_and_dtype(f: Union[bytes, BinaryIO]) -> Tuple[Tuple[int, ...], str]:\n \"\"\"Reads shape and dtype of a png file from a file like object or file 
contents.\n If a file like object is provided, all of its contents are NOT loaded into memory.\"\"\"\n if not hasattr(f, \"read\"):\n f = BytesIO(f) # type: ignore\n f.seek(16) # type: ignore\n size = _STRUCT_II.unpack(f.read(8))[::-1] # type: ignore\n bits, colors = f.read(2) # type: ignore\n\n # Get the number of channels and dtype based on bits and colors:\n if colors == 0:\n if bits == 1:\n typstr = \"|b1\"\n elif bits == 16:\n typstr = _NATIVE_INT32\n else:\n typstr = \"|u1\"\n nlayers = None\n else:\n typstr = \"|u1\"\n if colors == 2:\n nlayers = 3\n elif colors == 3:\n nlayers = None\n elif colors == 4:\n if bits == 8:\n nlayers = 2\n else:\n nlayers = 4\n else:\n nlayers = 4\n shape = size if nlayers is None else size + (nlayers,)\n return shape, typstr # type: ignore\n\n\ndef _decompress_audio(\n file: Union[bytes, memoryview, str], compression: Optional[str]\n) -> np.ndarray:\n decompressor = globals()[\n f\"{compression}_read{'_file' if isinstance(file, str) else ''}_f32\"\n ]\n if isinstance(file, memoryview):\n if (\n isinstance(file.obj, bytes)\n and file.strides == (1,)\n and file.shape == (len(file.obj),)\n ):\n file = file.obj\n else:\n file = bytes(file)\n raw_audio = decompressor(file)\n return np.frombuffer(raw_audio.samples, dtype=\"<f4\").reshape(\n raw_audio.num_frames, raw_audio.nchannels\n )\n\n\ndef _read_audio_shape(\n file: Union[bytes, memoryview, str], compression: str\n) -> Tuple[int, ...]:\n f_info = globals()[\n f\"{compression}_get{'_file' if isinstance(file, str) else ''}_info\"\n ]\n info = f_info(file)\n return (info.num_frames, info.nchannels)\n\n\ndef _strip_hub_mp4_header(buffer: bytes):\n if buffer[: len(_HUB_MKV_HEADER)] == _HUB_MKV_HEADER:\n return memoryview(buffer)[len(_HUB_MKV_HEADER) + 6 :]\n return buffer\n\n\ndef _decompress_video(\n file: Union[bytes, memoryview, str], nframes: Optional[int] = None\n) -> np.ndarray:\n\n shape = _read_video_shape(file)\n command = [\n ffmpeg_binary(),\n \"-i\",\n \"pipe:\",\n \"-f\",\n \"image2pipe\",\n \"-pix_fmt\",\n \"rgb24\",\n \"-vcodec\",\n \"rawvideo\",\n \"-\",\n ]\n if isinstance(file, str):\n command[2] = file\n pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10 ** 8)\n raw_video = pipe.communicate()[0]\n else:\n file = _strip_hub_mp4_header(file)\n pipe = sp.Popen(\n command, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10 ** 8\n )\n raw_video = pipe.communicate(input=file)[0] # type: ignore\n nbytes = len(raw_video)\n if nframes is not None:\n shape = (nframes,) + shape[1:]\n size = np.prod(shape)\n if nbytes >= size: # size is computed from fps and duration, might not be accurate.\n return np.frombuffer(memoryview(raw_video)[:size], dtype=np.uint8).reshape(\n shape\n )\n else: # If size was overestimated, append blank frames to the end.\n arr = np.zeros(shape, dtype=np.uint8)\n arr.reshape(-1)[: len(raw_video)] = np.frombuffer(raw_video, dtype=np.uint8)\n return arr\n\n\ndef _read_video_shape(file: Union[bytes, memoryview, str]) -> Tuple[int, ...]:\n info = _get_video_info(file)\n if info[\"duration\"] is None:\n nframes = -1\n else:\n nframes = math.floor(info[\"duration\"] * info[\"rate\"])\n return (nframes, info[\"height\"], info[\"width\"], 3)\n\n\ndef _get_video_info(file: Union[bytes, memoryview, str]) -> dict:\n duration = None\n command = [\n ffprobe_binary(),\n \"-select_streams\",\n \"v:0\",\n \"-show_entries\",\n \"stream=width,height,duration,r_frame_rate\",\n \"-of\",\n \"default=noprint_wrappers=1\",\n \"pipe:\",\n ]\n\n if isinstance(file, str):\n 
command[-1] = file\n pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10 ** 5)\n raw_info = pipe.stdout.read() # type: ignore\n raw_err = pipe.stderr.read() # type: ignore\n pipe.communicate()\n duration = bytes.decode(re.search(DURATION_RE, raw_err).groups()[0]) # type: ignore\n duration = to_seconds(duration)\n else:\n if file[: len(_HUB_MKV_HEADER)] == _HUB_MKV_HEADER:\n mv = memoryview(file)\n n = len(_HUB_MKV_HEADER) + 2\n duration = struct.unpack(\"f\", mv[n : n + 4])[0]\n file = mv[n + 4 :]\n pipe = sp.Popen(\n command, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10 ** 5\n )\n raw_info = pipe.communicate(input=file)[0]\n ret = dict(\n map(lambda kv: (bytes.decode(kv[0]), kv[1]), re.findall(INFO_RE, raw_info))\n )\n ret[\"width\"] = int(ret[\"width\"])\n ret[\"height\"] = int(ret[\"height\"])\n if \"duration\" in ret:\n ret[\"duration\"] = float(ret[\"duration\"])\n else:\n ret[\"duration\"] = duration\n rate_fraction = map(float, ret[\"rate\"].split(b\"/\"))\n ret[\"rate\"] = next(rate_fraction) / next(rate_fraction)\n return ret\n\n\nDURATION_RE = re.compile(rb\"Duration: ([0-9:.]+),\")\n\n\ndef to_seconds(time):\n return sum([60 ** i * float(j) for (i, j) in enumerate(time.split(\":\")[::-1])])\n\n\ndef _to_hub_mkv(file: str):\n command = [\n ffmpeg_binary(),\n \"-i\",\n file,\n \"-codec\",\n \"copy\",\n \"-f\",\n \"matroska\",\n \"pipe:\",\n ]\n pipe = sp.Popen(\n command, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10 ** 5\n )\n mkv, raw_info = pipe.communicate()\n duration = bytes.decode(re.search(DURATION_RE, raw_info).groups()[0]) # type: ignore\n duration = to_seconds(duration)\n mkv = _HUB_MKV_HEADER + struct.pack(\"<Hf\", 4, duration) + mkv\n return mkv\n", "path": "hub/core/compression.py" } ]
[ { "content": "import hub\nfrom hub.util.exceptions import (\n SampleCompressionError,\n SampleDecompressionError,\n UnsupportedCompressionError,\n CorruptedSampleError,\n)\nfrom hub.compression import (\n get_compression_type,\n BYTE_COMPRESSION,\n IMAGE_COMPRESSION,\n VIDEO_COMPRESSION,\n AUDIO_COMPRESSION,\n)\nfrom typing import Union, Tuple, Sequence, List, Optional, BinaryIO\nimport numpy as np\nfrom pathlib import Path\nfrom PIL import Image, UnidentifiedImageError # type: ignore\nfrom io import BytesIO\nimport mmap\nimport struct\nimport sys\nimport re\nimport numcodecs.lz4 # type: ignore\nimport lz4.frame # type: ignore\nimport os\nimport subprocess as sp\nimport tempfile\nfrom miniaudio import ( # type: ignore\n mp3_read_file_f32,\n mp3_read_f32,\n mp3_get_file_info,\n mp3_get_info,\n flac_read_file_f32,\n flac_read_f32,\n flac_get_file_info,\n flac_get_info,\n wav_read_file_f32,\n wav_read_f32,\n wav_get_file_info,\n wav_get_info,\n)\nfrom numpy.core.fromnumeric import compress # type: ignore\nimport math\n\n\nif sys.byteorder == \"little\":\n _NATIVE_INT32 = \"<i4\"\n _NATIVE_FLOAT32 = \"<f4\"\nelse:\n _NATIVE_INT32 = \">i4\"\n _NATIVE_FLOAT32 = \">f4\"\n\nif os.name == \"nt\":\n _FFMPEG_BINARY = \"ffmpeg.exe\"\n _FFPROBE_BINARY = \"ffprobe.exe\"\nelse:\n _FFMPEG_BINARY = \"ffmpeg\"\n _FFPROBE_BINARY = \"ffprobe\"\n\nDIMS_RE = re.compile(rb\" ([0-9]+)x([0-9]+)\")\nFPS_RE = re.compile(rb\" ([0-9]+) fps,\")\nDURATION_RE = re.compile(rb\"Duration: ([0-9]{2}:[0-9]{2}:[0-9]{2}\\.[0-9]{2}),\")\nINFO_RE = re.compile(rb\"([a-z]+)=([0-9./]+)\")\n\n_JPEG_SOFS = [\n b\"\\xff\\xc0\",\n b\"\\xff\\xc2\",\n b\"\\xff\\xc1\",\n b\"\\xff\\xc3\",\n b\"\\xff\\xc5\",\n b\"\\xff\\xc6\",\n b\"\\xff\\xc7\",\n b\"\\xff\\xc9\",\n b\"\\xff\\xca\",\n b\"\\xff\\xcb\",\n b\"\\xff\\xcd\",\n b\"\\xff\\xce\",\n b\"\\xff\\xcf\",\n b\"\\xff\\xde\",\n # Skip:\n b\"\\xff\\xcc\",\n b\"\\xff\\xdc\",\n b\"\\xff\\xdd\",\n b\"\\xff\\xdf\",\n # App: (0xFFE0 - 0xFFEF):\n *map(lambda x: x.to_bytes(2, \"big\"), range(0xFFE0, 0xFFF0)),\n # DQT:\n b\"\\xff\\xdb\",\n # COM:\n b\"\\xff\\xfe\",\n # Start of scan\n b\"\\xff\\xda\",\n]\n\n_JPEG_SKIP_MARKERS = set(_JPEG_SOFS[14:])\n_JPEG_SOFS_RE = re.compile(b\"|\".join(_JPEG_SOFS))\n_STRUCT_HHB = struct.Struct(\">HHB\")\n_STRUCT_II = struct.Struct(\">ii\")\n\n_HUB_MKV_HEADER = b\"HUB_MKV_META\"\n\n_FFMPEG_EXISTS = None\n\n\ndef ffmpeg_exists():\n global _FFMPEG_EXISTS\n if _FFMPEG_EXISTS is None:\n _FFMPEG_EXISTS = True\n try:\n retval = sp.run(\n [_FFMPEG_BINARY, \"-h\"], stdout=sp.PIPE, stderr=sp.PIPE\n ).returncode\n except FileNotFoundError as e:\n _FFMPEG_EXISTS = False\n return _FFMPEG_EXISTS\n\n\ndef ffmpeg_binary():\n if ffmpeg_exists():\n return _FFMPEG_BINARY\n raise FileNotFoundError(\n \"FFMPEG not found. Install FFMPEG to use hub's video features\"\n )\n\n\ndef ffprobe_binary():\n if ffmpeg_exists():\n return _FFPROBE_BINARY\n raise FileNotFoundError(\n \"FFMPEG not found. 
Install FFMPEG to use hub's video features\"\n )\n\n\ndef to_image(array: np.ndarray) -> Image:\n shape = array.shape\n if len(shape) == 3 and shape[0] != 1 and shape[2] == 1:\n # convert (X,Y,1) grayscale to (X,Y) for pillow compatibility\n return Image.fromarray(array.squeeze(axis=2))\n\n return Image.fromarray(array)\n\n\ndef _compress_apng(array: np.ndarray) -> bytes:\n if array.ndim == 3:\n # Binary APNG\n frames = list(\n map(Image.fromarray, (array[:, :, i] for i in range(array.shape[2])))\n )\n elif array.ndim == 4 and array.shape[3] <= 4:\n # RGB(A) APNG\n frames = list(map(Image.fromarray, array))\n else:\n raise SampleCompressionError(array.shape, \"apng\", \"Unexpected shape.\")\n out = BytesIO()\n frames[0].save(out, \"png\", save_all=True, append_images=frames[1:])\n out.seek(0)\n ret = out.read()\n out.close()\n return ret\n\n\ndef _decompress_apng(buffer: Union[bytes, memoryview]) -> np.ndarray:\n img = Image.open(BytesIO(buffer))\n frame0 = np.array(img)\n if frame0.ndim == 2:\n ret = np.zeros(frame0.shape + (img.n_frames,), dtype=frame0.dtype)\n ret[:, :, 0] = frame0\n for i in range(1, img.n_frames):\n img.seek(i)\n ret[:, :, i] = np.array(img)\n else:\n ret = np.zeros((img.n_frames,) + frame0.shape, dtype=frame0.dtype)\n ret[0] = frame0\n for i in range(1, img.n_frames):\n img.seek(i)\n ret[i] = np.array(img)\n return ret\n\n\ndef compress_bytes(\n buffer: Union[bytes, memoryview], compression: Optional[str]\n) -> bytes:\n if compression == \"lz4\":\n return numcodecs.lz4.compress(buffer)\n else:\n raise SampleCompressionError(\n (len(buffer),), compression, f\"Not a byte compression: {compression}\"\n )\n\n\ndef decompress_bytes(\n buffer: Union[bytes, memoryview], compression: Optional[str]\n) -> bytes:\n if not buffer:\n return b\"\"\n if compression == \"lz4\":\n if (\n buffer[:4] == b'\\x04\"M\\x18'\n ): # python-lz4 magic number (backward compatiblity)\n return lz4.frame.decompress(buffer)\n return numcodecs.lz4.decompress(buffer)\n else:\n raise SampleDecompressionError()\n\n\ndef compress_array(array: np.ndarray, compression: Optional[str]) -> bytes:\n \"\"\"Compress some numpy array using `compression`. All meta information will be contained in the returned buffer.\n\n Note:\n `decompress_array` may be used to decompress from the returned bytes back into the `array`.\n\n Args:\n array (np.ndarray): Array to be compressed.\n compression (str, optional): `array` will be compressed with this compression into bytes. Right now only arrays compatible with `PIL` will be compressed.\n\n Raises:\n UnsupportedCompressionError: If `compression` is unsupported. See `hub.compressions`.\n SampleCompressionError: If there was a problem compressing `array`.\n NotImplementedError: If compression is not supported.\n\n Returns:\n bytes: Compressed `array` represented as bytes.\n \"\"\"\n\n # empty sample shouldn't be compressed\n\n if 0 in array.shape:\n return bytes()\n\n if compression not in hub.compressions:\n raise UnsupportedCompressionError(compression)\n\n if compression is None:\n return array.tobytes()\n\n compr_type = get_compression_type(compression)\n\n if compr_type == BYTE_COMPRESSION:\n return compress_bytes(array.tobytes(), compression)\n elif compr_type == AUDIO_COMPRESSION:\n raise NotImplementedError(\n \"In order to store audio data, you should use `hub.read(path_to_file)`. 
Compressing raw data is not yet supported.\"\n )\n elif compr_type == VIDEO_COMPRESSION:\n raise NotImplementedError(\n \"In order to store video data, you should use `hub.read(path_to_file)`. Compressing raw data is not yet supported.\"\n )\n\n if compression == \"apng\":\n return _compress_apng(array)\n try:\n img = to_image(array)\n out = BytesIO()\n out._close = out.close # type: ignore\n out.close = ( # type: ignore\n lambda: None\n ) # sgi save handler will try to close the stream (see https://github.com/python-pillow/Pillow/pull/5645)\n kwargs = {\"sizes\": [img.size]} if compression == \"ico\" else {}\n img.save(out, compression, **kwargs)\n out.seek(0)\n compressed_bytes = out.read()\n out._close() # type: ignore\n return compressed_bytes\n except (TypeError, OSError) as e:\n raise SampleCompressionError(array.shape, compression, str(e))\n\n\ndef decompress_array(\n buffer: Union[bytes, memoryview, str],\n shape: Optional[Tuple[int, ...]] = None,\n dtype: Optional[str] = None,\n compression: Optional[str] = None,\n) -> np.ndarray:\n \"\"\"Decompress some buffer into a numpy array. It is expected that all meta information is\n stored inside `buffer`.\n\n Note:\n `compress_array` may be used to get the `buffer` input.\n\n Args:\n buffer (bytes, memoryview, str): Buffer or file to be decompressed. It is assumed all meta information required to\n decompress is contained within `buffer`, except for byte compressions\n shape (Tuple[int], Optional): Desired shape of decompressed object. Reshape will attempt to match this shape before returning.\n dtype (str, Optional): Applicable only for byte compressions. Expected dtype of decompressed array.\n compression (str, Optional): Applicable only for byte compressions. Compression used to compression the given buffer.\n\n Raises:\n SampleDecompressionError: If decompression fails.\n ValueError: If dtype and shape are not specified for byte compression.\n\n Returns:\n np.ndarray: Array from the decompressed buffer.\n \"\"\"\n compr_type = get_compression_type(compression)\n if compr_type == BYTE_COMPRESSION:\n if dtype is None or shape is None:\n raise ValueError(\"dtype and shape must be specified for byte compressions.\")\n try:\n decompressed_bytes = decompress_bytes(buffer, compression) # type: ignore\n return np.frombuffer(decompressed_bytes, dtype=dtype).reshape(shape)\n except Exception:\n raise SampleDecompressionError()\n elif compr_type == AUDIO_COMPRESSION:\n return _decompress_audio(buffer, compression)\n elif compr_type == VIDEO_COMPRESSION:\n return _decompress_video(buffer, nframes=shape[0] if shape else None)\n\n if compression == \"apng\":\n return _decompress_apng(buffer) # type: ignore\n try:\n if shape is not None and 0 in shape:\n return np.zeros(shape, dtype=dtype)\n if not isinstance(buffer, str):\n buffer = BytesIO(buffer) # type: ignore\n img = Image.open(buffer) # type: ignore\n arr = np.array(img)\n if shape is not None:\n arr = arr.reshape(shape)\n return arr\n except Exception:\n raise SampleDecompressionError()\n\n\ndef _get_bounding_shape(shapes: Sequence[Tuple[int, ...]]) -> Tuple[int, int, int]:\n \"\"\"Gets the shape of a bounding box that can contain the given the shapes tiled horizontally.\"\"\"\n if len(shapes) == 0:\n return (0, 0, 0)\n channels_shape = shapes[0][2:]\n for shape in shapes:\n if shape[2:] != channels_shape:\n raise ValueError(\n \"The data can't be compressed as the number of channels doesn't match.\"\n )\n return (max(s[0] for s in shapes), sum(s[1] for s in shapes)) + channels_shape # 
type: ignore\n\n\ndef compress_multiple(\n arrays: Sequence[np.ndarray], compression: Optional[str]\n) -> bytes:\n \"\"\"Compress multiple arrays of different shapes into a single buffer. Used for chunk wise compression.\n The arrays are tiled horizontally and padded with zeros to fit in a bounding box, which is then compressed.\"\"\"\n dtype = arrays[0].dtype\n for arr in arrays:\n if arr.dtype != dtype:\n raise SampleCompressionError(\n arr.shape,\n compression,\n message=\"All arrays expected to have same dtype.\",\n )\n compr_type = get_compression_type(compression)\n if compr_type == BYTE_COMPRESSION:\n return compress_bytes(\n b\"\".join(arr.tobytes() for arr in arrays), compression\n ) # Note: shape and dtype info not included\n elif compr_type == AUDIO_COMPRESSION:\n raise NotImplementedError(\"compress_multiple does not support audio samples.\")\n elif compr_type == VIDEO_COMPRESSION:\n raise NotImplementedError(\"compress_multiple does not support video samples.\")\n elif compression == \"apng\":\n raise NotImplementedError(\"compress_multiple does not support apng samples.\")\n canvas = np.zeros(_get_bounding_shape([arr.shape for arr in arrays]), dtype=dtype)\n next_x = 0\n for arr in arrays:\n canvas[: arr.shape[0], next_x : next_x + arr.shape[1]] = arr\n next_x += arr.shape[1]\n return compress_array(canvas, compression=compression)\n\n\ndef decompress_multiple(\n buffer: Union[bytes, memoryview],\n shapes: Sequence[Tuple[int, ...]],\n dtype: Optional[Union[np.dtype, str]] = None,\n compression: Optional[str] = None,\n) -> List[np.ndarray]:\n \"\"\"Unpack a compressed buffer into multiple arrays.\"\"\"\n if not buffer:\n return []\n if compression and get_compression_type(compression) == \"byte\":\n decompressed_buffer = memoryview(decompress_bytes(buffer, compression))\n arrays = []\n itemsize = np.dtype(dtype).itemsize\n for shape in shapes:\n nbytes = int(np.prod(shape) * itemsize)\n arrays.append(\n np.frombuffer(decompressed_buffer[:nbytes], dtype=dtype).reshape(shape)\n )\n decompressed_buffer = decompressed_buffer[nbytes:]\n return arrays\n canvas = decompress_array(buffer)\n arrays = []\n next_x = 0\n for shape in shapes:\n arrays.append(canvas[: shape[0], next_x : next_x + shape[1]])\n next_x += shape[1]\n return arrays\n\n\ndef verify_compressed_file(\n file: Union[str, BinaryIO, bytes, memoryview], compression: str\n) -> Union[Tuple[Tuple[int, ...], str], None]:\n \"\"\"Verify the contents of an image file\n Args:\n file (Union[str, BinaryIO, bytes, memoryview]): Path to the file or file like object or contents of the file\n compression (str): Expected compression of the image file\n \"\"\"\n if isinstance(file, str):\n file = open(file, \"rb\")\n close = True\n elif hasattr(file, \"read\"):\n close = False\n file.seek(0) # type: ignore\n else:\n close = False\n try:\n if compression == \"png\":\n return _verify_png(file)\n elif compression == \"jpeg\":\n return _verify_jpeg(file), \"|u1\"\n elif get_compression_type(compression) == AUDIO_COMPRESSION:\n return _read_audio_shape(file, compression), \"<f4\" # type: ignore\n elif compression in (\"mp4\", \"mkv\", \"avi\"):\n if isinstance(file, (bytes, memoryview, str)):\n return _read_video_shape(file), \"|u1\"\n else:\n return _fast_decompress(file)\n except Exception as e:\n raise CorruptedSampleError(compression)\n finally:\n if close:\n file.close() # type: ignore\n\n return None\n\n\ndef get_compression(header=None, path=None):\n if path:\n # These formats are recognized by file extension for now\n file_formats = 
[\"mp3\", \"flac\", \"wav\", \"mp4\", \"mkv\", \"avi\"]\n for fmt in file_formats:\n if str(path).lower().endswith(\".\" + fmt):\n return fmt\n if header:\n if not Image.OPEN:\n Image.init()\n for fmt in Image.OPEN:\n accept = Image.OPEN[fmt][1]\n if accept and accept(header):\n return fmt.lower()\n raise SampleDecompressionError()\n\n\ndef _verify_png(buf):\n if not hasattr(buf, \"read\"):\n buf = BytesIO(buf)\n img = Image.open(buf)\n img.verify()\n return Image._conv_type_shape(img)\n\n\ndef _verify_jpeg(f):\n if hasattr(f, \"read\"):\n return _verify_jpeg_file(f)\n return _verify_jpeg_buffer(f)\n\n\ndef _verify_jpeg_buffer(buf: bytes):\n # Start of Image\n mview = memoryview(buf)\n assert buf.startswith(b\"\\xff\\xd8\")\n # Look for Start of Frame\n sof_idx = -1\n offset = 0\n while True:\n match = _re_find_first(_JPEG_SOFS_RE, mview[offset:])\n if match is None:\n break\n idx = match.start(0) + offset\n marker = buf[idx : idx + 2]\n if marker == _JPEG_SOFS[-1]:\n break\n offset = idx + int.from_bytes(buf[idx + 2 : idx + 4], \"big\") + 2\n if marker not in _JPEG_SKIP_MARKERS:\n sof_idx = idx\n if sof_idx == -1:\n raise Exception()\n\n length = int.from_bytes(mview[sof_idx + 2 : sof_idx + 4], \"big\")\n assert mview[sof_idx + length + 2 : sof_idx + length + 4] in [\n b\"\\xff\\xc4\",\n b\"\\xff\\xdb\",\n b\"\\xff\\xdd\",\n b\"\\xff\\xda\",\n ] # DHT, DQT, DRI, SOS\n shape = _STRUCT_HHB.unpack(mview[sof_idx + 5 : sof_idx + 10])\n assert buf.find(b\"\\xff\\xd9\") != -1\n if shape[-1] in (1, None):\n shape = shape[:-1]\n return shape\n\n\ndef _verify_jpeg_file(f):\n # See: https://dev.exiv2.org/projects/exiv2/wiki/The_Metadata_in_JPEG_files#2-The-metadata-structure-in-JPEG\n mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)\n mv = memoryview(mm)\n try:\n soi = f.read(2)\n # Start of Image\n assert soi == b\"\\xff\\xd8\"\n\n # Look for Start of Frame\n sof_idx = -1\n offset = 0\n while True:\n view = mv[offset:]\n match = _re_find_first(_JPEG_SOFS_RE, view)\n view.release()\n if match is None:\n break\n idx = match.start(0) + offset\n marker = mm[idx : idx + 2]\n if marker == _JPEG_SOFS[-1]:\n break\n f.seek(idx + 2)\n offset = idx + int.from_bytes(f.read(2), \"big\") + 2\n if marker not in _JPEG_SKIP_MARKERS:\n sof_idx = idx\n if sof_idx == -1:\n raise Exception() # Caught by verify_compressed_file()\n\n f.seek(sof_idx + 2)\n length = int.from_bytes(f.read(2), \"big\")\n f.seek(length - 2, 1)\n definition_start = f.read(2)\n assert definition_start in [\n b\"\\xff\\xc4\",\n b\"\\xff\\xdb\",\n b\"\\xff\\xdd\",\n b\"\\xff\\xda\",\n ] # DHT, DQT, DRI, SOS\n f.seek(sof_idx + 5)\n shape = _STRUCT_HHB.unpack(f.read(5))\n # TODO this check is too slow\n assert mm.find(b\"\\xff\\xd9\") != -1 # End of Image\n if shape[-1] in (1, None):\n shape = shape[:-1]\n return shape\n finally:\n mv.release()\n mm.close()\n\n\ndef _fast_decompress(buf):\n if not hasattr(buf, \"read\"):\n buf = BytesIO(buf)\n img = Image.open(buf)\n img.load()\n if img.mode == 1:\n args = (\"L\",)\n else:\n args = (img.mode,)\n enc = Image._getencoder(img.mode, \"raw\", args)\n enc.setimage(img.im)\n bufsize = max(65536, img.size[0] * 4)\n while True:\n status, err_code, buf = enc.encode(\n bufsize\n ) # See https://github.com/python-pillow/Pillow/blob/master/src/encode.c#L144\n if err_code:\n break\n if err_code < 0:\n raise Exception() # caught by verify_compressed_file()\n return Image._conv_type_shape(img)\n\n\ndef read_meta_from_compressed_file(\n file, compression: Optional[str] = None\n) -> Tuple[str, Tuple[int], 
str]:\n \"\"\"Reads shape, dtype and format without decompressing or verifying the sample.\"\"\"\n if isinstance(file, (str, Path)):\n f = open(file, \"rb\")\n isfile = True\n close = True\n elif hasattr(file, \"read\"):\n f = file\n close = False\n isfile = True\n f.seek(0)\n else:\n isfile = False\n f = file\n close = False\n try:\n if compression is None:\n path = file if isinstance(file, str) else None\n if hasattr(f, \"read\"):\n compression = get_compression(f.read(32), path)\n f.seek(0)\n else:\n compression = get_compression(f[:32], path) # type: ignore\n if compression == \"jpeg\":\n try:\n shape, typestr = _read_jpeg_shape(f), \"|u1\"\n except Exception:\n raise CorruptedSampleError(\"jpeg\")\n elif compression == \"png\":\n try:\n shape, typestr = _read_png_shape_and_dtype(f)\n except Exception:\n raise CorruptedSampleError(\"png\")\n elif get_compression_type(compression) == AUDIO_COMPRESSION:\n try:\n shape, typestr = _read_audio_shape(file, compression), \"<f4\"\n except Exception as e:\n raise CorruptedSampleError(compression)\n elif compression in (\"mp4\", \"mkv\", \"avi\"):\n try:\n shape, typestr = _read_video_shape(file), \"|u1\"\n except Exception as e:\n raise CorruptedSampleError(compression)\n else:\n img = Image.open(f) if isfile else Image.open(BytesIO(f)) # type: ignore\n shape, typestr = Image._conv_type_shape(img)\n compression = img.format.lower()\n return compression, shape, typestr # type: ignore\n finally:\n if close:\n f.close()\n\n\ndef _read_jpeg_shape(f: Union[bytes, BinaryIO]) -> Tuple[int, ...]:\n if hasattr(f, \"read\"):\n return _read_jpeg_shape_from_file(f)\n return _read_jpeg_shape_from_buffer(f) # type: ignore\n\n\ndef _re_find_first(pattern, string):\n for match in re.finditer(pattern, string):\n return match\n\n\ndef _read_jpeg_shape_from_file(f) -> Tuple[int, ...]:\n \"\"\"Reads shape of a jpeg image from file without loading the whole image in memory\"\"\"\n mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY)\n mv = memoryview(mm)\n try:\n # Look for Start of Frame\n sof_idx = -1\n offset = 0\n while True:\n view = mv[offset:]\n match = _re_find_first(_JPEG_SOFS_RE, view)\n view.release()\n if match is None:\n break\n idx = match.start(0) + offset\n marker = mm[idx : idx + 2]\n if marker == _JPEG_SOFS[-1]:\n break\n f.seek(idx + 2)\n offset = idx + int.from_bytes(f.read(2), \"big\") + 2\n if marker not in _JPEG_SKIP_MARKERS:\n sof_idx = idx\n if sof_idx == -1:\n raise Exception()\n f.seek(sof_idx + 5)\n shape = _STRUCT_HHB.unpack(f.read(5)) # type: ignore\n if shape[-1] in (1, None):\n shape = shape[:-1]\n return shape\n finally:\n mv.release()\n mm.close()\n\n\ndef _read_jpeg_shape_from_buffer(buf: bytes) -> Tuple[int, ...]:\n \"\"\"Gets shape of a jpeg file from its contents\"\"\"\n # Look for Start of Frame\n mv = memoryview(buf)\n sof_idx = -1\n offset = 0\n while True:\n match = _re_find_first(_JPEG_SOFS_RE, mv[offset:])\n if match is None:\n break\n idx = match.start(0) + offset\n marker = buf[idx : idx + 2]\n if marker == _JPEG_SOFS[-1]:\n break\n offset = idx + int.from_bytes(buf[idx + 2 : idx + 4], \"big\") + 2\n if marker not in _JPEG_SKIP_MARKERS:\n sof_idx = idx\n if sof_idx == -1:\n raise Exception()\n shape = _STRUCT_HHB.unpack(memoryview(buf)[sof_idx + 5 : sof_idx + 10]) # type: ignore\n if shape[-1] in (1, None):\n shape = shape[:-1]\n return shape\n\n\ndef _read_png_shape_and_dtype(f: Union[bytes, BinaryIO]) -> Tuple[Tuple[int, ...], str]:\n \"\"\"Reads shape and dtype of a png file from a file like object or file 
contents.\n If a file like object is provided, all of its contents are NOT loaded into memory.\"\"\"\n if not hasattr(f, \"read\"):\n f = BytesIO(f) # type: ignore\n f.seek(16) # type: ignore\n size = _STRUCT_II.unpack(f.read(8))[::-1] # type: ignore\n bits, colors = f.read(2) # type: ignore\n\n # Get the number of channels and dtype based on bits and colors:\n if colors == 0:\n if bits == 1:\n typstr = \"|b1\"\n elif bits == 16:\n typstr = _NATIVE_INT32\n else:\n typstr = \"|u1\"\n nlayers = None\n else:\n typstr = \"|u1\"\n if colors == 2:\n nlayers = 3\n elif colors == 3:\n nlayers = None\n elif colors == 4:\n if bits == 8:\n nlayers = 2\n else:\n nlayers = 4\n else:\n nlayers = 4\n shape = size if nlayers is None else size + (nlayers,)\n return shape, typstr # type: ignore\n\n\ndef _decompress_audio(\n file: Union[bytes, memoryview, str], compression: Optional[str]\n) -> np.ndarray:\n decompressor = globals()[\n f\"{compression}_read{'_file' if isinstance(file, str) else ''}_f32\"\n ]\n if isinstance(file, memoryview):\n if (\n isinstance(file.obj, bytes)\n and file.strides == (1,)\n and file.shape == (len(file.obj),)\n ):\n file = file.obj\n else:\n file = bytes(file)\n raw_audio = decompressor(file)\n return np.frombuffer(raw_audio.samples, dtype=\"<f4\").reshape(\n raw_audio.num_frames, raw_audio.nchannels\n )\n\n\ndef _read_audio_shape(\n file: Union[bytes, memoryview, str], compression: str\n) -> Tuple[int, ...]:\n f_info = globals()[\n f\"{compression}_get{'_file' if isinstance(file, str) else ''}_info\"\n ]\n info = f_info(file)\n return (info.num_frames, info.nchannels)\n\n\ndef _strip_hub_mp4_header(buffer: bytes):\n if buffer[: len(_HUB_MKV_HEADER)] == _HUB_MKV_HEADER:\n return memoryview(buffer)[len(_HUB_MKV_HEADER) + 6 :]\n return buffer\n\n\ndef _decompress_video(\n file: Union[bytes, memoryview, str], nframes: Optional[int] = None\n) -> np.ndarray:\n\n shape = _read_video_shape(file)\n command = [\n ffmpeg_binary(),\n \"-i\",\n \"pipe:\",\n \"-f\",\n \"image2pipe\",\n \"-pix_fmt\",\n \"rgb24\",\n \"-vcodec\",\n \"rawvideo\",\n \"-\",\n ]\n if isinstance(file, str):\n command[2] = file\n pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10 ** 8)\n raw_video = pipe.communicate()[0]\n else:\n file = _strip_hub_mp4_header(file)\n pipe = sp.Popen(\n command, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10 ** 8\n )\n raw_video = pipe.communicate(input=file)[0] # type: ignore\n nbytes = len(raw_video)\n if nframes is not None:\n shape = (nframes,) + shape[1:]\n size = np.prod(shape)\n if nbytes >= size: # size is computed from fps and duration, might not be accurate.\n return np.frombuffer(memoryview(raw_video)[:size], dtype=np.uint8).reshape(\n shape\n )\n else: # If size was overestimated, append blank frames to the end.\n arr = np.zeros(shape, dtype=np.uint8)\n arr.reshape(-1)[: len(raw_video)] = np.frombuffer(raw_video, dtype=np.uint8)\n return arr\n\n\ndef _read_video_shape(file: Union[bytes, memoryview, str]) -> Tuple[int, ...]:\n info = _get_video_info(file)\n if info[\"duration\"] is None:\n nframes = -1\n else:\n nframes = math.floor(info[\"duration\"] * info[\"rate\"])\n return (nframes, info[\"height\"], info[\"width\"], 3)\n\n\ndef _get_video_info(file: Union[bytes, memoryview, str]) -> dict:\n duration = None\n command = [\n ffprobe_binary(),\n \"-select_streams\",\n \"v:0\",\n \"-show_entries\",\n \"stream=width,height,duration,avg_frame_rate\",\n \"-of\",\n \"default=noprint_wrappers=1\",\n \"pipe:\",\n ]\n\n if isinstance(file, str):\n 
command[-1] = file\n pipe = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10 ** 5)\n raw_info = pipe.stdout.read() # type: ignore\n raw_err = pipe.stderr.read() # type: ignore\n pipe.communicate()\n duration = bytes.decode(re.search(DURATION_RE, raw_err).groups()[0]) # type: ignore\n duration = to_seconds(duration)\n else:\n if file[: len(_HUB_MKV_HEADER)] == _HUB_MKV_HEADER:\n mv = memoryview(file)\n n = len(_HUB_MKV_HEADER) + 2\n duration = struct.unpack(\"f\", mv[n : n + 4])[0]\n file = mv[n + 4 :]\n pipe = sp.Popen(\n command, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10 ** 5\n )\n raw_info = pipe.communicate(input=file)[0]\n ret = dict(\n map(lambda kv: (bytes.decode(kv[0]), kv[1]), re.findall(INFO_RE, raw_info))\n )\n ret[\"width\"] = int(ret[\"width\"])\n ret[\"height\"] = int(ret[\"height\"])\n if \"duration\" in ret:\n ret[\"duration\"] = float(ret[\"duration\"])\n else:\n ret[\"duration\"] = duration\n rate_fraction = map(float, ret[\"rate\"].split(b\"/\"))\n ret[\"rate\"] = next(rate_fraction) / next(rate_fraction)\n return ret\n\n\nDURATION_RE = re.compile(rb\"Duration: ([0-9:.]+),\")\n\n\ndef to_seconds(time):\n return sum([60 ** i * float(j) for (i, j) in enumerate(time.split(\":\")[::-1])])\n\n\ndef _to_hub_mkv(file: str):\n command = [\n ffmpeg_binary(),\n \"-i\",\n file,\n \"-codec\",\n \"copy\",\n \"-f\",\n \"matroska\",\n \"pipe:\",\n ]\n pipe = sp.Popen(\n command, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE, bufsize=10 ** 5\n )\n mkv, raw_info = pipe.communicate()\n duration = bytes.decode(re.search(DURATION_RE, raw_info).groups()[0]) # type: ignore\n duration = to_seconds(duration)\n mkv = _HUB_MKV_HEADER + struct.pack(\"<Hf\", 4, duration) + mkv\n return mkv\n", "path": "hub/core/compression.py" } ]
diff --git a/hub/core/compression.py b/hub/core/compression.py index b2bff7a6d1..52528e9d70 100644 --- a/hub/core/compression.py +++ b/hub/core/compression.py @@ -827,7 +827,7 @@ def _get_video_info(file: Union[bytes, memoryview, str]) -> dict: "-select_streams", "v:0", "-show_entries", - "stream=width,height,duration,r_frame_rate", + "stream=width,height,duration,avg_frame_rate", "-of", "default=noprint_wrappers=1", "pipe:",
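The one-line pr_diff above swaps ffprobe's `r_frame_rate` field for `avg_frame_rate`, presumably because `r_frame_rate` reports the stream's base rate and can overestimate the frame count that `_read_video_shape` derives from duration times rate, whereas `avg_frame_rate` is the actual average over the stream. Below is a minimal standalone sketch of probing just that field; it assumes an `ffprobe` binary on the PATH and a local `sample.mp4`, neither of which comes from the record above.

```python
# Hypothetical sketch: query only avg_frame_rate for the first video stream,
# the field the patched _get_video_info() relies on.
import subprocess as sp


def probe_avg_frame_rate(path: str) -> float:
    out = sp.run(
        [
            "ffprobe",
            "-select_streams", "v:0",
            "-show_entries", "stream=avg_frame_rate",
            "-of", "default=noprint_wrappers=1:nokey=1",
            path,
        ],
        stdout=sp.PIPE,
        stderr=sp.PIPE,
        check=True,
    ).stdout.decode().strip()
    num, den = out.split("/")
    if float(den) == 0:  # streams without frames report "0/0"
        return 0.0
    return float(num) / float(den)


if __name__ == "__main__":
    print(probe_avg_frame_rate("sample.mp4"))  # e.g. 29.97 for "30000/1001"
```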
plone__Products.CMFPlone-690
adding site with default language 'de-at' breaks
With current Plone 5 beta 1 and the merged language control panel, adding a site with a language like 'de-at' breaks and gives the following traceback:

```
2015-04-01 20:25:30 ERROR Zope.SiteErrorLog 1427912730.490.962488456232 http://localhost:8888/@@plone-addsite
Traceback (innermost last):
  Module ZPublisher.Publish, line 138, in publish
  Module ZPublisher.mapply, line 77, in mapply
  Module ZPublisher.Publish, line 48, in call_object
  Module Products.CMFPlone.browser.admin, line 232, in __call__
  Module Products.CMFPlone.factory, line 95, in addPloneSite
  Module plone.registry.registry, line 47, in __setitem__
  Module plone.registry.record, line 83, in _set_value
  Module zope.schema._bootstrapfields, line 182, in validate
  Module zope.schema._field, line 389, in _validate
ConstraintNotSatisfied: de-at
```

Please note, 'de-at' is selected by default, as reported by my browser.

- `Products.CMFPlone.browser.admin.AddPloneSite.languages` uses `plone.i18n.locales.interfaces.IContentLanguageAvailability` and constructs a list of languages with combined language codes if the default language reported by the browser contains a "-".
- `Products.CMFPlone.interfaces.controlpanel.ILanguageSchema.use_combined_language_codes` has a default value of False.
- `plone.app.vocabularies.language.AvailableContentLanguageVocabulary` constructs its language list via `plone.i18n.utility.LanguageUtility.getAvailableLanguages` according to the setting `use_combined_language_codes`, which is False by default.

I guess we have to set `use_combined_language_codes` to True in CMFPlone/factory if CMFPlone/admin/AddPloneSite decides to use combined language codes (see the sketch below).
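The closing suggestion can be illustrated with a minimal, standalone sketch. This is not the actual CMFPlone fix: the registry access is modelled with a plain dict instead of `plone.registry`, and the key names (`plone.use_combined_language_codes`, `plone.default_language`) are assumptions that merely mirror `ILanguageSchema` under a `plone` prefix.

```python
# Hypothetical sketch, not the real addPloneSite(): enable combined language
# codes *before* recording a combined default language such as 'de-at'.
# `registry` is a plain dict standing in for plone.registry; the key names
# are illustrative assumptions only.

def apply_default_language(registry, default_language):
    """Store the site's default language, enabling combined codes if needed."""
    if "-" in default_language:
        # Must come first: per the traceback, registry.__setitem__ validates
        # 'de-at' against the AvailableContentLanguages vocabulary, which only
        # offers combined codes once this flag is on.
        registry["plone.use_combined_language_codes"] = True
    registry["plone.default_language"] = default_language
    return registry


if __name__ == "__main__":
    print(apply_default_language({}, "de-at"))
    # {'plone.use_combined_language_codes': True, 'plone.default_language': 'de-at'}
```

The ordering is the whole point: writing the combined default language while the flag is still False is what produces the `ConstraintNotSatisfied: de-at` error shown above.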
[ { "content": "# -*- coding: utf-8 -*-\nfrom plone.supermodel import model\nfrom Products.CMFPlone import PloneMessageFactory as _ # NOQA\nfrom Products.CMFPlone.utils import validate_json\nfrom basetool import IPloneBaseTool\nfrom plone.locking.interfaces import ILockSettings\nfrom zope import schema\nfrom zope.interface import Interface, implements\nfrom zope.schema.vocabulary import SimpleTerm\nfrom zope.schema.vocabulary import SimpleVocabulary\nimport json\n\n\nclass IControlPanel(IPloneBaseTool):\n \"\"\" Interface for the ControlPanel \"\"\"\n\n def registerConfiglet(id, name, action, condition='', permission='', # NOQA\n category='Plone', visible=1, appId=None,\n imageUrl=None, description='', REQUEST=None):\n \"\"\" Registration of a Configlet \"\"\"\n\n def unregisterConfiglet(id): # NOQA\n \"\"\" unregister Configlet \"\"\"\n\n def unregisterApplication(appId): # NOQA\n \"\"\" unregister Application with all configlets \"\"\"\n\n def getGroupIds(): # NOQA\n \"\"\" list of the group ids \"\"\"\n\n def getGroups(): # NOQA\n \"\"\" list of groups as dicts with id and title \"\"\"\n\n def enumConfiglets(group=None): # NOQA\n \"\"\" lists the Configlets of a group, returns them as dicts by\n calling .getAction() on each of them \"\"\"\n\n\nclass IEditingSchema(Interface):\n\n visible_ids = schema.Bool(\n title=_(u\"Show 'Short Name' on content?\"),\n description=_(\n u\"Display and allow users to edit the \"\n u\"'Short name' content identifiers, which form the \"\n u\"URL part of a content item's address. Once \"\n u\"enabled, users will then be able to enable this \"\n u\"option in their preferences.\"),\n default=False,\n required=False)\n\n available_editors = schema.List(\n title=_(u'Available editors'),\n description=_(u\"Available editors in the portal.\"),\n default=['TinyMCE'],\n value_type=schema.TextLine(),\n required=True\n )\n\n default_editor = schema.Choice(\n title=_(u'Default editor'),\n description=_(\n u\"Select the default wysiwyg \"\n u\"editor. Users will be able to choose their \"\n u\"own or select to use the site default.\"),\n default=u'TinyMCE',\n missing_value=set(),\n vocabulary=\"plone.app.vocabularies.AvailableEditors\",\n required=True)\n\n ext_editor = schema.Bool(\n title=_(u'Enable External Editor feature'),\n description=_(\n u\"Determines if the external editor \"\n u\"feature is enabled. This feature requires a \"\n u\"special client-side application installed. The \"\n u\"users also have to enable this in their \"\n u\"preferences.\"),\n default=False,\n required=False)\n\n enable_link_integrity_checks = schema.Bool(\n title=_(u\"Enable link integrity checks\"),\n description=_(\n u\"Determines if the users should get \"\n u\"warnings when they delete or move content that \"\n u\"is linked from inside the site.\"),\n default=True,\n required=False)\n\n lock_on_ttw_edit = schema.Bool(\n title=_(u\"Enable locking for through-the-web edits\"),\n description=_(\n u\"Disabling locking here will only \"\n u\"affect users editing content through the \"\n u\"Plone web UI. 
Content edited via WebDAV \"\n u\"clients will still be subject to locking.\"),\n default=True,\n required=False)\n\n\nclass ILanguageSchema(Interface):\n model.fieldset(\n 'general',\n label=_(u'General', default=u'General'),\n fields=[\n 'default_language',\n 'available_languages',\n 'use_combined_language_codes',\n 'display_flags',\n 'always_show_selector'\n ],\n )\n\n default_language = schema.Choice(\n title=_(u\"heading_site_language\",\n default=u\"Site language\"),\n description=_(\n u\"description_site_language\",\n default=u\"The language used for the content and the UI \"\n u\"of this site.\"),\n default='en',\n required=True,\n vocabulary=\"plone.app.vocabularies.AvailableContentLanguages\"\n )\n\n available_languages = schema.List(\n title=_(u\"heading_available_languages\",\n default=u\"Available languages\"),\n description=_(u\"description_available_languages\",\n default=u\"The languages in which the site should be \"\n u\"translatable.\"),\n required=True,\n default=['en'],\n value_type=schema.Choice(\n vocabulary=\"plone.app.vocabularies.AvailableContentLanguages\"\n )\n )\n\n use_combined_language_codes = schema.Bool(\n title=_(\n u'label_allow_combined_language_codes',\n default=u\"Show country-specific language variants\"\n ),\n description=_(\n u\"help_allow_combined_language_codes\",\n default=u\"Examples: pt-br (Brazilian Portuguese), \"\n u\"en-us (American English) etc.\"\n ),\n default=False,\n required=False\n )\n\n display_flags = schema.Bool(\n title=_(\n u'label_display_flags',\n default=u\"Show language flags\"\n ),\n description=u\"\",\n default=False,\n required=False\n )\n\n always_show_selector = schema.Bool(\n title=_(\n u'label_always_show_selector',\n default=u\"Always show language selector\"\n ),\n description=_(\n u\"help_always_show_selector\",\n default=u\"\"\n ),\n default=False,\n required=False\n )\n\n model.fieldset(\n 'negotiation_scheme',\n label=_(u'Negotiation scheme', default=u'Negotiation scheme'),\n fields=[\n 'use_content_negotiation',\n 'use_path_negotiation',\n 'use_cookie_negotiation',\n 'authenticated_users_only',\n 'set_cookie_always',\n 'use_subdomain_negotiation',\n 'use_cctld_negotiation',\n 'use_request_negotiation',\n ],\n )\n use_content_negotiation = schema.Bool(\n title=_(u\"heading_language_of_the_content\",\n default=u\"Use the language of the content item\"),\n description=_(u\"description_language_of_the_content\",\n default=u\"Use the language of the content item.\"),\n default=False,\n required=False,\n )\n\n use_path_negotiation = schema.Bool(\n title=_(\n u\"heading_language_codes_in_URL\",\n default=u\"Use language codes in URL path for manual override\"),\n description=_(\n u\"description_language_codes_in_URL\",\n default=u\"Use language codes in URL path for manual override.\"),\n default=False,\n required=False,\n )\n\n use_cookie_negotiation = schema.Bool(\n title=_(u\"heading_cookie_manual_override\",\n default=(u\"Use cookie for manual override\")),\n description=_(\n u\"description_cookie_manual_override\",\n default=(u\"Required for the language selector viewlet to be rendered.\")\n ),\n default=False,\n required=False,\n )\n\n authenticated_users_only = schema.Bool(\n title=_(u\"heading_auth_cookie_manual_override\",\n default=u\"Authenticated users only\"),\n description=_(\n u\"description_auth_ookie_manual_override\",\n default=(u\"Related to Use cookie for manual override\")\n ),\n default=False,\n required=False,\n )\n\n set_cookie_always = schema.Bool(\n title=_(\n 
u\"heading_set_language_cookie_always\",\n default=(u\"Set the language cookie always\")),\n description=_(\n u\"description_set_language_cookie_always\",\n default=(u\"i.e. also when the 'set_language' request parameter is absent\")),\n default=False,\n required=False,\n )\n\n use_subdomain_negotiation = schema.Bool(\n title=_(u\"heading_use_subdomain\",\n default=u\"Use subdomain\"),\n description=_(u\"description_use_subdomain\",\n default=u\"e.g.: de.plone.org\"),\n default=False,\n required=False,\n )\n\n use_cctld_negotiation = schema.Bool(\n title=_(u\"heading_top_level_domain\",\n default=u\"Use top-level domain\"),\n description=_(u\"description_top_level_domain\",\n default=u\"e.g.: www.plone.de\"),\n default=False,\n required=False,\n )\n\n use_request_negotiation = schema.Bool(\n title=_(u\"heading_browser_language_request_negotiation\",\n default=u\"Use browser language request negotiation\"),\n description=_(u\"description_browser_language_request_negotiation\",\n default=u\"Use browser language request negotiation.\"),\n default=False,\n required=False,\n )\n\n\nclass ITagAttrPair(Interface):\n tags = schema.TextLine(title=u\"tags\")\n attributes = schema.TextLine(title=u\"attributes\")\n\n\nclass TagAttrPair(object):\n\n implements(ITagAttrPair)\n\n def __init__(self, tags='', attributes=''):\n self.tags = tags\n self.attributes = attributes\n\n\nclass IFilterSchema(Interface):\n \"\"\"Combined schema for the adapter lookup.\n \"\"\"\n\n # class IFilterTagsSchema(Interface):\n\n disable_filtering = schema.Bool(\n title=_(u'Disable html filtering'),\n description=_(u'Warning, disabling can be potentially dangereous. '\n u'Only disable if you know what you are doing.'),\n default=False,\n required=False)\n\n nasty_tags = schema.List(\n title=_(u'Nasty tags'),\n description=_(u\"These tags, and their content are completely blocked \"\n \"when a page is saved or rendered.\"),\n default=[u'applet', u'embed', u'object', u'script'],\n value_type=schema.TextLine(),\n required=False)\n\n stripped_tags = schema.List(\n title=_(u'Stripped tags'),\n description=_(u\"These tags are stripped when saving or rendering, \"\n \"but any content is preserved.\"),\n default=[u'font', ],\n value_type=schema.TextLine(),\n required=False)\n\n custom_tags = schema.List(\n title=_(u'Custom tags'),\n description=_(u\"Add tag names here for tags which are not part of \"\n \"XHTML but which should be permitted.\"),\n default=[],\n value_type=schema.TextLine(),\n required=False)\n\n # class IFilterAttributesSchema(Interface):\n\n stripped_attributes = schema.List(\n title=_(u'Stripped attributes'),\n description=_(u\"These attributes are stripped from any tag when \"\n \"saving.\"),\n default=(u'dir lang valign halign border frame rules cellspacing '\n 'cellpadding bgcolor').split(),\n value_type=schema.TextLine(),\n required=False)\n\n stripped_combinations = schema.Dict(\n title=_(u'Stripped combinations'),\n description=_(u\"These attributes are stripped from those tags when \"\n \"saving.\"),\n key_type=schema.TextLine(title=u\"tags\"),\n value_type=schema.TextLine(title=u\"attributes\"),\n default={},\n # XXX replace with value adapter\n # default={'table th td': 'width height', 'other tags': 'other attrs'}\n required=False)\n\n # class IFilterEditorSchema(Interface):\n\n style_whitelist = schema.List(\n title=_(u'Permitted properties'),\n description=_(\n u'These CSS properties are allowed in style attributes.'),\n default=u'text-align list-style-type float text-decoration'.split(),\n 
value_type=schema.TextLine(),\n required=False)\n\n class_blacklist = schema.List(\n title=_(u'Filtered classes'),\n description=_(u'These class names are not allowed in class '\n 'attributes.'),\n default=[],\n value_type=schema.TextLine(),\n required=False)\n\n\nclass ITinyMCELayoutSchema(Interface):\n \"\"\"This interface defines the layout properties.\"\"\"\n\n resizing = schema.Bool(\n title=_(u\"Enable resizing the editor window.\"),\n description=_(u\"This option gives you the ability to enable/disable \"\n \"resizing the editor window. \"),\n default=True,\n required=False)\n\n autoresize = schema.Bool(\n title=_(u\"Enable auto resizing of the editor window.\"),\n description=_(u\"This option gives you the ability to enable/disable \"\n \"auto resizing the editor window depending \"\n \"on the content.\"),\n default=False,\n required=False)\n\n # TODO: add validation to assert % and px in the value\n editor_width = schema.TextLine(\n title=_(u\"Editor width\"),\n description=_(u\"This option gives you the ability to specify the \"\n \"width of the editor (like 100% or 400px).\"),\n default=None,\n required=False)\n\n # TODO: add validation to assert % and px in the value\n editor_height = schema.TextLine(\n title=_(u\"Editor height\"),\n description=_(u\"This option gives you the ability to specify the \"\n \"height of the editor in pixels. \"\n \"If auto resize is enabled this value is used \"\n \"as minimum height.\"),\n default=None,\n required=False)\n\n content_css = schema.TextLine(\n title=_(u\"Choose the CSS used in WYSIWYG Editor Area\"),\n description=_(u\"This option enables you to specify a custom CSS file \"\n \"that provides content CSS. \"\n \"This CSS file is the one used within the editor \"\n \"(the editable area). In addition to what is listed here, \"\n \"the plone bundle CSS and diazo themes using the \"\n \"tinymce-content-css setting are also added.\"),\n default=u'++plone++static/components/tinymce/skins/lightgray/content.min.css',\n required=False)\n\n header_styles = schema.List(\n title=_(u\"Header styles\"),\n description=_('Name|tag'),\n value_type=schema.TextLine(),\n default=[\n u'Header 1|h1',\n u\"Header 2|h2\",\n u\"Header 3|h3\",\n u\"Header 4|h4\",\n u\"Header 5|h5\",\n u\"Header 6|h6\"\n ])\n\n inline_styles = schema.List(\n title=_(u\"Inline styles\"),\n description=_('Name|format|icon'),\n value_type=schema.TextLine(),\n default=[\n u\"Bold|bold|bold\",\n u\"Italic|italic|italic\",\n u\"Underline|underline|underline\",\n u\"Strikethrough|strikethrough|strikethrough\",\n u\"Superscript|superscript|superscript\",\n u\"Subscript|subscript|subscript\",\n u\"Code|code|code\"])\n\n block_styles = schema.List(\n title=_(u\"Block styles\"),\n description=_('Name|format'),\n value_type=schema.TextLine(),\n default=[\n u\"Paragraph|p\",\n u\"Blockquote|blockquote\",\n u\"Div|div\",\n u\"Pre|pre\"])\n\n alignment_styles = schema.List(\n title=_(u\"Alignment styles\"),\n description=_('Name|format|icon'),\n value_type=schema.TextLine(),\n default=[\n u\"Left|alignleft|alignleft\",\n u\"Center|aligncenter|aligncenter\",\n u\"Right|alignright|alignright\",\n u\"Justify|alignjustify|alignjustify\"])\n\n formats = schema.Text(\n title=_(u\"Formats\"),\n description=_(\n u\"Enter a JSON-formatted style format configuration. \"\n u\"A format is for example the style that get applied when \"\n u\"you press the bold button inside the editor. 
\"\n u\"See http://www.tinymce.com/wiki.php/Configuration:formats\"),\n constraint=validate_json,\n default=json.dumps({\n 'discreet': {'inline': 'span', 'classes': 'discreet'},\n 'clearfix': {'block': 'div', 'classes': 'clearfix'}\n }, indent=4).decode('utf8'),\n required=True,\n )\n\n\nclass ITinyMCEPluginSchema(Interface):\n \"\"\"This interface defines the toolbar properties.\"\"\"\n\n plugins = schema.List(\n title=_(\"label_tinymce_plugins\", default=u\"Editor Plugins\"),\n description=_(\"help_tinymce_plugins\", default=(\n u\"Select plugins to include with tinymce\")),\n value_type=schema.Choice(vocabulary=SimpleVocabulary([\n SimpleTerm('advlist', 'advlist', u\"advlist\"),\n SimpleTerm('anchor', 'anchor', u\"anchor\"),\n SimpleTerm('autosave', 'autosave', u\"autosave\"),\n SimpleTerm('charmap', 'charmap', u\"charmap\"),\n SimpleTerm('code', 'code', u\"code\"),\n SimpleTerm('colorpicker', 'colorpicker', u\"colorpicker\"),\n SimpleTerm('contextmenu', 'contextmenu', u\"contextmenu\"),\n SimpleTerm('directionality', 'directionality', u\"directionality\"),\n SimpleTerm('emoticons', 'emoticons', u\"emoticons\"),\n SimpleTerm('fullpage', 'fullpage', u\"fullpage\"),\n SimpleTerm('fullscreen', 'fullscreen', u\"fullscreen\"),\n SimpleTerm('hr', 'hr', u\"hr\"),\n SimpleTerm('insertdatetime', 'insertdatetime', u\"insertdatetime\"),\n SimpleTerm('layer', 'layer', u\"layer\"),\n SimpleTerm('lists', 'lists', u\"lists\"),\n SimpleTerm('media', 'media', u\"media\"),\n SimpleTerm('nonbreaking', 'nonbreaking', u\"nonbreaking\"),\n SimpleTerm('noneditable', 'noneditable', u\"noneditable\"),\n SimpleTerm('pagebreak', 'pagebreak', u\"pagebreak\"),\n SimpleTerm('paste', 'paste', u\"paste\"),\n SimpleTerm('preview', 'preview', u\"preview\"),\n SimpleTerm('print', 'print', u\"print\"),\n SimpleTerm('save', 'save', u\"save\"),\n SimpleTerm('searchreplace', 'searchreplace', u\"searchreplace\"),\n SimpleTerm('tabfocus', 'tabfocus', u\"tabfocus\"),\n SimpleTerm('table', 'table', u\"table\"),\n SimpleTerm('textcolor', 'textcolor', u\"textcolor\"),\n SimpleTerm('textpattern', 'textpattern', u\"textpattern\"),\n SimpleTerm('visualblocks', 'visualblocks', u\"visualblocks\"),\n SimpleTerm('visualchars', 'visualchars', u\"visualchars\"),\n SimpleTerm('wordcount', 'wordcount', u\"wordcount\")\n ])),\n default=['advlist', 'directionality', 'emoticons',\n 'fullscreen', 'hr', 'insertdatetime', 'lists', 'media',\n 'nonbreaking', 'noneditable', 'pagebreak', 'paste', 'preview',\n 'print', 'save', 'searchreplace', 'tabfocus', 'table',\n 'visualchars', 'wordcount', 'code'],\n required=False)\n\n menubar = schema.List(\n title=_(\"label_tinymce_menubar\", default=u\"Menubar\"),\n description=_(\"help_tinymce_menubar\", default=(\n u\"Enter what items you would like in the menu bar.\")),\n required=True,\n value_type=schema.TextLine(),\n default=[\n u'edit', u'table', u'format',\n u'tools' u'view', u'insert'])\n\n menu = schema.Text(\n title=_('label_tinymce_menu', 'Menu'),\n description=_('hint_tinymce_menu',\n default='JSON formatted Menu configuration.'),\n default=json.dumps({\n 'file': {'title': 'File', 'items': 'newdocument'},\n 'edit': {'title': 'Edit', 'items': 'undo redo | cut '\n 'copy paste pastetext | selectall'},\n 'insert': {'title': 'Insert', 'items': 'link media | template hr'},\n 'view': {'title': 'View', 'items': 'visualaid'},\n 'format': {'title': 'Format',\n 'items': 'bold italic underline strikethrough '\n 'superscript subscript | formats | removeformat'},\n 'table': {'title': 'Table', 'items': 
'inserttable tableprops deletetable '\n '| cell row column'},\n 'tools': {'title': 'Tools', 'items': 'spellchecker code'}\n }, indent=4).decode('utf8')\n )\n\n templates = schema.Text(\n title=_(\"label_tinymce_templates\", default=u\"Templates\"),\n description=_(\"help_tinymce_templates\", default=(\n u\"Enter the list of templates in json format \\\n http://www.tinymce.com/wiki.php/Plugin:template\")),\n required=False,\n default=u\"\")\n\n toolbar = schema.Text(\n title=_(\"label_tinymce_toolbar\", default=u\"Toolbar\"),\n description=_(\"help_tinymce_toolbar\", default=(\n u\"Enter how you would like the toolbar items to list.\")),\n required=True,\n default=u'undo redo | styleselect | bold italic | '\n u'alignleft aligncenter alignright alignjustify | '\n u'bullist numlist outdent indent | '\n u'unlink plonelink ploneimage')\n\n custom_plugins = schema.List(\n title=_(u\"Custom Plugins\"),\n description=_(u\"Enter a list of custom plugins which will be loaded \"\n \"in the editor. Format is \"\n \"pluginname|location, one per line.\"),\n required=False,\n value_type=schema.TextLine(),\n default=[])\n\n custom_buttons = schema.List(\n title=_(u\"Custom Buttons\"),\n description=_(u\"Enter a list of custom buttons which will be added to toolbar\"),\n required=False,\n value_type=schema.TextLine(),\n default=[])\nITinyMCELibrariesSchema = ITinyMCEPluginSchema # bw compat\n\n\nclass ITinyMCESpellCheckerSchema(Interface):\n \"\"\"This interface defines the libraries properties.\"\"\"\n\n libraries_spellchecker_choice = schema.Choice(\n title=_(u\"Spellchecker plugin to use\"),\n description=_(u\"This option allows you to choose the spellchecker for \"\n u\"TinyMCE.\"),\n missing_value=set(),\n vocabulary=SimpleVocabulary([\n SimpleTerm('browser', 'browser',\n _(u\"Default browser spellchecker\")),\n SimpleTerm('AtD', 'AtD',\n _(u\"After the deadline (FLOSS)\")),\n ]),\n default=u'browser',\n required=False)\n\n libraries_atd_ignore_strings = schema.List(\n title=_(u\"AtD Ignore strings\"),\n description=_(\n 'label_atd_ignore_strings',\n default=u\"A list of strings which the \\\"After the Deadline\\\" \"\n u\"spellchecker should ignore. \"\n u\"Note: This option is only applicable when the \"\n u\"appropriate spellchecker has been chosen above.\"),\n default=[\n u\"Zope\",\n u\"Plone\",\n u\"TinyMCE\"],\n value_type=schema.TextLine(),\n required=False)\n\n libraries_atd_show_types = schema.List(\n title=_(u\"AtD Error types to show\"),\n description=_(\n 'help_atderrortypes_to_show',\n default=u\"A list of error types which the \"\n u\"\\\"After the Deadline\\\" spellchecker should check for. \"\n u\"By default, all the available error type will be \"\n u\"listed here.\"),\n value_type=schema.TextLine(),\n default=[\n u\"Bias Language\",\n u\"Cliches\",\n u\"Complex Expression\",\n u\"Diacritical Marks\",\n u\"Double Negatives\",\n u\"Hidden Verbs\",\n u\"Jargon Language\",\n u\"Passive voice\",\n u\"Phrases to Avoid\",\n u\"Redundant Expression\"],\n required=False)\n\n libraries_atd_service_url = schema.TextLine(\n title=_(u\"AtD Service URL\"),\n description=_(\n 'help_atd_service_url',\n default=u\"The URL of the \\\"After the Deadline\\\" grammar and spell \"\n u\"checking server. 
\"\n u\"The default value is the public server, \"\n u\"but ideally you should download and install your own \"\n u\"and specify its address here.\"),\n required=True,\n default=u\"service.afterthedeadline.com\",)\n\n\nclass ITinyMCEResourceTypesSchema(Interface):\n \"\"\"This interface defines the resource types properties.\"\"\"\n\n # XXX Not implemented in new tinymce version. Need to decide about this\n # rooted = schema.Bool(\n # title=_(u\"Rooted to current object\"),\n # description=_(u\"When enabled the user will be rooted to the current \"\n # \"object and can't add links and images from other parts \"\n # \"of the site.\"),\n # default=False,\n # required=False)\n\n contains_objects = schema.List(\n title=_(u\"Contains Objects\"),\n description=_(u\"Enter a list of content types which can contain other \"\n \"objects. Format is one contenttype per line.\"),\n value_type=schema.TextLine(),\n default=[\n u\"Folder\",\n u\"Large Plone Folder\",\n u\"Plone Site\"],\n required=False)\n\n # XXX not implements\n # containsanchors = schema.Text(\n # title=_(u\"Contains Anchors\"),\n # description=_(u\"Enter a list of content types which can contain \"\n # \"anchors. Format is one contenttype per line.\"),\n # default=u\"Event\\n\"\n # u\"News Item\\n\"\n # u\"Document\\n\"\n # u\"ATRelativePathCriterion\",\n # required=False)\n\n # XXX do we still want this?\n # seems like it could be really annoying for users\n # creating new types.\n # linkable = schema.Text(\n # title=_(u\"Linkable Objects\"),\n # description=_(u\"Enter a list of content types which can be linked. \"\n # \"Format is one contenttype per line.\"),\n # required=False)\n\n image_objects = schema.List(\n title=_(u\"Image Objects\"),\n description=_(u\"Enter a list of content types which can be used as \"\n \"images. Format is one contenttype per line.\"),\n default=[u\"Image\"],\n value_type=schema.TextLine(),\n required=False)\n\n entity_encoding = schema.Choice(\n title=_(u\"Entity encoding\"),\n description=_(\n u\"This option controls how entities/characters get processed. \"\n \"Named: Characters will be converted into named entities \"\n \"based on the entities option. \"\n \"Numeric: Characters will be converted into numeric entities. \"\n \"Raw: All characters will be stored in non-entity form \"\n \"except these XML default entities: amp lt gt quot\"),\n missing_value=set(),\n vocabulary=SimpleVocabulary(\n [SimpleTerm('named', 'named', _(u\"Named\")),\n SimpleTerm('numeric', 'numeric', _(u\"Numeric\")),\n SimpleTerm('raw', 'raw', _(u\"Raw\"))]),\n default=u\"raw\",\n required=False)\n\n\nclass ITinyMCESchema(\n ITinyMCELayoutSchema,\n ITinyMCEPluginSchema,\n ITinyMCESpellCheckerSchema,\n ITinyMCEResourceTypesSchema\n):\n \"\"\"TinyMCE Schema\"\"\"\n\n\nclass IMaintenanceSchema(Interface):\n\n days = schema.Int(\n title=_(u\"Days of object history to keep after packing\"),\n description=_(\n u\"You should pack your database regularly. This number \"\n u\"indicates how many days of undo history you want to \"\n u\"keep. It is unrelated to versioning, so even if you \"\n u\"pack the database, the history of the content changes \"\n u\"will be kept. Recommended value is 7 days.\"\n ),\n default=7,\n required=True\n )\n\n\nclass INavigationSchema(Interface):\n\n generate_tabs = schema.Bool(\n title=_(u\"Automatically generate tabs\"),\n description=_(\n u\"By default, all items created at the root level will \"\n u\"add to the global section navigation. 
You can turn this off \"\n u\"if you prefer manually constructing this part of the \"\n u\"navigation.\"),\n default=True,\n required=False)\n\n nonfolderish_tabs = schema.Bool(\n title=_(u\"Generate tabs for items other than folders.\"),\n description=_(\n u\"By default, any content item in the root of the portal will \"\n u\"be shown as a global section. If you turn this option off, \"\n u\"only folders will be shown. This only has an effect if \"\n u\"'Automatically generate tabs' is enabled.\"),\n default=True,\n required=False)\n\n displayed_types = schema.Tuple(\n title=_(u\"Displayed content types\"),\n description=_(\n u\"The content types that should be shown in the navigation and \"\n u\"site map.\"),\n required=False,\n default=(\n 'Image',\n 'File',\n 'Link',\n 'News Item',\n 'Folder',\n 'Document',\n 'Event'\n ),\n value_type=schema.Choice(\n source=\"plone.app.vocabularies.ReallyUserFriendlyTypes\"\n ))\n\n filter_on_workflow = schema.Bool(\n title=_(u\"Filter on workflow state\"),\n description=_(\n u\"The workflow states that should be shown in the navigation \"\n u\"tree and the site map.\"),\n default=False,\n required=False)\n\n workflow_states_to_show = schema.Tuple(\n required=False,\n default=(),\n value_type=schema.Choice(\n source=\"plone.app.vocabularies.WorkflowStates\"))\n\n show_excluded_items = schema.Bool(\n title=_(\n u\"Show items normally excluded from navigation if viewing their \"\n u\"children.\"),\n description=_(\n u\"If an item has been excluded from navigation should it be \"\n u\"shown in navigation when viewing content contained within it \"\n u\"or within a subfolder.\"),\n default=True,\n required=False)\n\n\nclass ISearchSchema(Interface):\n\n enable_livesearch = schema.Bool(\n title=_(u'Enable LiveSearch'),\n description=_(\n u\"Enables the LiveSearch feature, which shows live \"\n u\"results if the browser supports JavaScript.\"),\n default=True,\n required=False\n )\n\n types_not_searched = schema.Tuple(\n title=_(u\"Define the types to be shown in the site and searched\"),\n description=_(\n u\"Define the types that should be searched and be \"\n u\"available in the user facing part of the site. \"\n u\"Note that if new content types are installed, they \"\n u\"will be enabled by default unless explicitly turned \"\n u\"off here or by the relevant installer.\"\n ),\n required=False,\n default=(\n 'ATBooleanCriterion',\n 'ATDateCriteria',\n 'ATDateRangeCriterion',\n 'ATListCriterion',\n 'ATPortalTypeCriterion',\n 'ATReferenceCriterion',\n 'ATSelectionCriterion',\n 'ATSimpleIntCriterion',\n 'ATSimpleStringCriterion',\n 'ATSortCriterion',\n 'ChangeSet',\n 'Discussion Item',\n 'Plone Site',\n 'TempFolder',\n 'ATCurrentAuthorCriterion',\n 'ATPathCriterion',\n 'ATRelativePathCriterion',\n ),\n value_type=schema.Choice(\n source=\"plone.app.vocabularies.PortalTypes\"\n ),\n )\n\n\nclass ISecuritySchema(Interface):\n\n enable_self_reg = schema.Bool(\n title=_(u'Enable self-registration'),\n description=_(\n u\"Allows users to register themselves on the site. If \"\n u\"not selected, only site managers can add new users.\"),\n default=False,\n required=False)\n\n enable_user_pwd_choice = schema.Bool(\n title=_(u'Let users select their own passwords'),\n description=_(\n u\"If not selected, a URL will be generated and \"\n u\"e-mailed. 
Users are instructed to follow the link to \"\n u\"reach a page where they can change their password and \"\n u\"complete the registration process; this also verifies \"\n u\"that they have entered a valid email address.\"),\n default=False,\n required=False)\n\n enable_user_folders = schema.Bool(\n title=_(u'Enable User Folders'),\n description=_(\n u\"If selected, home folders where users can create \"\n u\"content will be created when they log in.\"),\n default=False,\n required=False)\n\n allow_anon_views_about = schema.Bool(\n title=_(u\"Allow anyone to view 'about' information\"),\n description=_(\n u\"If not selected only logged-in users will be able to \"\n u\"view information about who created an item and when it \"\n u\"was modified.\"),\n default=False,\n required=False)\n\n use_email_as_login = schema.Bool(\n title=_(u'Use email address as login name'),\n description=_(\n u\"Allows users to login with their email address instead \"\n u\"of specifying a separate login name. This also updates \"\n u\"the login name of existing users, which may take a \"\n u\"while on large sites. The login name is saved as \"\n u\"lower case, but to be userfriendly it does not matter \"\n u\"which case you use to login. When duplicates are found, \"\n u\"saving this form will fail. You can use the \"\n u\"@@migrate-to-emaillogin page to show the duplicates.\"),\n default=False,\n required=False)\n\n use_uuid_as_userid = schema.Bool(\n title=_(u'Use UUID user ids'),\n description=_(\n u\"Use automatically generated UUIDs as user id for new users. \"\n u\"When not turned on, the default is to use the same as the \"\n u\"login name, or when using the email address as login name we \"\n u\"generate a user id based on the fullname.\"),\n default=False,\n required=False)\n\n\n# XXX: Why does ISiteSchema inherit from ILockSettings here ???\nclass ISiteSchema(ILockSettings):\n\n site_title = schema.TextLine(\n title=_(u'Site title'),\n description=_(\n u\"This shows up in the title bar of \"\n u\"browsers and in syndication feeds.\"),\n default=u'Plone site')\n\n site_logo = schema.ASCII(\n title=_(u\"Site Logo\"),\n description=_(u\"This shows a custom Logo on your Site.\"),\n required=False,\n )\n\n exposeDCMetaTags = schema.Bool(\n title=_(u\"Expose Dublin Core metadata\"),\n description=_(u\"Exposes the Dublin Core properties as metatags.\"),\n default=False,\n required=False)\n\n enable_sitemap = schema.Bool(\n title=_(u\"Expose sitemap.xml.gz\"),\n description=_(\n u\"Exposes your content as a file \"\n u\"according to the sitemaps.org standard. You \"\n u\"can submit this to compliant search engines \"\n u\"like Google, Yahoo and Microsoft. It allows \"\n u\"these search engines to more intelligently \"\n u\"crawl your site.\"),\n default=False,\n required=False)\n\n webstats_js = schema.SourceText(\n title=_(u'JavaScript for web statistics support'),\n description=_(\n u\"For enabling web statistics support \"\n u\"from external providers (for e.g. Google \"\n u\"Analytics). Paste the code snippets provided. \"\n u\"It will be included in the rendered HTML as \"\n u\"entered near the end of the page.\"),\n default=u'',\n required=False)\n\n\nclass IDateAndTimeSchema(Interface):\n \"\"\"Controlpanel settings for date and time related settings.\n \"\"\"\n\n portal_timezone = schema.Choice(\n title=_(u\"Portal default timezone\"),\n description=_(\n u\"help_portal_timezone\",\n default=u\"The timezone setting of the portal. 
Users can set \"\n u\"their own timezone, if available timezones are \"\n u\"defined.\"),\n required=True,\n default=None,\n vocabulary=\"plone.app.vocabularies.CommonTimezones\")\n\n available_timezones = schema.List(\n title=_(u\"Available timezones\"),\n description=_(\n u\"help_available_timezones\",\n default=u\"The timezones, which should be available for the \"\n u\"portal. Can be set for users and events\"),\n required=False,\n default=[],\n value_type=schema.Choice(\n vocabulary=\"plone.app.vocabularies.Timezones\"))\n\n first_weekday = schema.Choice(\n title=_(u'label_first_weekday', default=u'First weekday'),\n description=_(\n u'help_first_weekday',\n default=u'First day in the week.'),\n required=True,\n default=None,\n vocabulary=\"plone.app.vocabularies.Weekdays\")\n\n\nclass ITypesSchema(Interface):\n \"\"\"\n \"\"\"\n\n\nclass IMailSchema(Interface):\n\n smtp_host = schema.TextLine(\n title=_(\n u'label_smtp_server',\n default=u'SMTP server'),\n description=_(\n u\"help_smtp_server\",\n default=u\"The address of your local \"\n u\"SMTP (outgoing e-mail) server. Usually \"\n u\"'localhost', unless you use an \"\n u\"external server to send e-mail.\"),\n default=u'localhost',\n required=True)\n\n smtp_port = schema.Int(\n title=_(u'label_smtp_port',\n default=u'SMTP port'),\n description=_(u\"help_smtp_port\",\n default=u\"The port of your local SMTP \"\n u\"(outgoing e-mail) server. Usually '25'.\"),\n default=25,\n required=True)\n\n smtp_userid = schema.TextLine(\n title=_(\n u'label_smtp_userid',\n default=u'ESMTP username'),\n description=_(\n u\"help_smtp_userid\",\n default=u\"Username for authentication \"\n u\"to your e-mail server. Not required \"\n u\"unless you are using ESMTP.\"),\n default=None,\n required=False)\n\n smtp_pass = schema.Password(\n title=_(\n u'label_smtp_pass',\n default=u'ESMTP password'),\n description=_(\n u\"help_smtp_pass\",\n default=u\"The password for the ESMTP \"\n u\"user account.\"),\n default=None,\n required=False)\n\n email_from_name = schema.TextLine(\n title=_(u\"Site 'From' name\"),\n description=_(\n u\"Plone generates e-mail using \"\n u\"this name as the e-mail \"\n u\"sender.\"),\n default=None,\n required=True)\n\n email_from_address = schema.ASCIILine(\n title=_(u\"Site 'From' address\"),\n description=_(\n u\"Plone generates e-mail using \"\n u\"this address as the e-mail \"\n u\"return address. It is also \"\n u\"used as the destination \"\n u\"address for the site-wide \"\n u\"contact form and the 'Send test \"\n u\"e-mail' feature.\"),\n default=None,\n required=True)\n\n\nclass IMarkupSchema(Interface):\n\n default_type = schema.Choice(\n title=_(u'Default format'),\n description=_(\n u\"Select the default format of textfields for newly \"\n u\"created content objects.\"\n ),\n default=u'text/html',\n vocabulary=\"plone.app.vocabularies.AllowableContentTypes\",\n required=True\n )\n\n allowed_types = schema.Tuple(\n title=_(u'Alternative formats'),\n description=_(\n u\"Select which formats are available for users as \"\n u\"alternative to the default format. 
Note that if new \"\n u\"formats are installed, they will be enabled for text \"\n u\"fields by default unless explicitly turned off here \"\n u\"or by the relevant installer.\"\n ),\n required=True,\n default=('text/html', 'text/x-web-textile'),\n value_type=schema.Choice(\n vocabulary=\"plone.app.vocabularies.AllowableContentTypes\"\n )\n )\n\n\nclass IUserGroupsSettingsSchema(Interface):\n\n many_groups = schema.Bool(\n title=_(u'Many groups?'),\n description=_(\n u\"Determines if your Plone is optimized \"\n u\"for small or large sites. In environments with a \"\n u\"lot of groups it can be very slow or impossible \"\n u\"to build a list all groups. This option tunes the \"\n u\"user interface and behaviour of Plone for this \"\n u\"case by allowing you to search for groups instead \"\n u\"of listing all of them.\"),\n default=False\n )\n\n many_users = schema.Bool(\n title=_(u'Many users?'),\n description=_(\n u\"Determines if your Plone is optimized \"\n u\"for small or large sites. In environments with a \"\n u\"lot of users it can be very slow or impossible to \"\n u\"build a list all users. This option tunes the user \"\n u\"interface and behaviour of Plone for this case by \"\n u\"allowing you to search for users instead of \"\n u\"listing all of them.\"),\n default=False\n )\n\n\nclass ISocialMediaSchema(Interface):\n\n share_social_data = schema.Bool(\n title=_(u'Share social data'),\n description=_(u'Include meta tags on pages to give hints to '\n u'social media on how to render your pages better '\n u'when shared'),\n default=True)\n\n twitter_username = schema.TextLine(\n title=_(u'Twitter Username'),\n description=_(u'To idenitify things like Twitter Cards'),\n required=False,\n default=u'')\n\n facebook_app_id = schema.TextLine(\n title=_(u'Facebook app id'),\n description=_(u'To be used with some integrations like open graph data'),\n required=False,\n default=u'')\n\n facebook_username = schema.TextLine(\n title=_(u'Facebook username'),\n description=_(u'For linking open graph data to a facebook account'),\n required=False,\n default=u'')\n\n\nclass IImagingSchema(Interface):\n allowed_sizes = schema.List(\n title=_(u'Allowed image sizes'),\n description=_(u'Specify all allowed maximum image dimensions, '\n 'one per line. '\n 'The required format is &lt;name&gt; &lt;width&gt;:&lt;height&gt;.'),\n value_type=schema.TextLine(),\n default=[\n \"large 768:768\",\n \"preview 400:400\",\n \"mini 200:200\",\n \"thumb 128:128\",\n \"tile 64:64\",\n \"icon 32:32\",\n \"listing 16:16\"],\n required=False,\n )\n\n quality = schema.Int(\n title=_(u'Scaled image quality'),\n description=_(u'A value for the quality of scaled images, from 1 '\n '(lowest) to 95 (highest). A value of 0 will mean '\n 'plone.scaling\\'s default will be used, which is '\n 'currently 88.'),\n min=0,\n max=95,\n default=88\n )\n", "path": "Products/CMFPlone/interfaces/controlpanel.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\nfrom plone.supermodel import model\nfrom Products.CMFPlone import PloneMessageFactory as _ # NOQA\nfrom Products.CMFPlone.utils import validate_json\nfrom basetool import IPloneBaseTool\nfrom plone.locking.interfaces import ILockSettings\nfrom zope import schema\nfrom zope.interface import Interface, implements\nfrom zope.schema.vocabulary import SimpleTerm\nfrom zope.schema.vocabulary import SimpleVocabulary\nimport json\n\n\nclass IControlPanel(IPloneBaseTool):\n \"\"\" Interface for the ControlPanel \"\"\"\n\n def registerConfiglet(id, name, action, condition='', permission='', # NOQA\n category='Plone', visible=1, appId=None,\n imageUrl=None, description='', REQUEST=None):\n \"\"\" Registration of a Configlet \"\"\"\n\n def unregisterConfiglet(id): # NOQA\n \"\"\" unregister Configlet \"\"\"\n\n def unregisterApplication(appId): # NOQA\n \"\"\" unregister Application with all configlets \"\"\"\n\n def getGroupIds(): # NOQA\n \"\"\" list of the group ids \"\"\"\n\n def getGroups(): # NOQA\n \"\"\" list of groups as dicts with id and title \"\"\"\n\n def enumConfiglets(group=None): # NOQA\n \"\"\" lists the Configlets of a group, returns them as dicts by\n calling .getAction() on each of them \"\"\"\n\n\nclass IEditingSchema(Interface):\n\n visible_ids = schema.Bool(\n title=_(u\"Show 'Short Name' on content?\"),\n description=_(\n u\"Display and allow users to edit the \"\n u\"'Short name' content identifiers, which form the \"\n u\"URL part of a content item's address. Once \"\n u\"enabled, users will then be able to enable this \"\n u\"option in their preferences.\"),\n default=False,\n required=False)\n\n available_editors = schema.List(\n title=_(u'Available editors'),\n description=_(u\"Available editors in the portal.\"),\n default=['TinyMCE'],\n value_type=schema.TextLine(),\n required=True\n )\n\n default_editor = schema.Choice(\n title=_(u'Default editor'),\n description=_(\n u\"Select the default wysiwyg \"\n u\"editor. Users will be able to choose their \"\n u\"own or select to use the site default.\"),\n default=u'TinyMCE',\n missing_value=set(),\n vocabulary=\"plone.app.vocabularies.AvailableEditors\",\n required=True)\n\n ext_editor = schema.Bool(\n title=_(u'Enable External Editor feature'),\n description=_(\n u\"Determines if the external editor \"\n u\"feature is enabled. This feature requires a \"\n u\"special client-side application installed. The \"\n u\"users also have to enable this in their \"\n u\"preferences.\"),\n default=False,\n required=False)\n\n enable_link_integrity_checks = schema.Bool(\n title=_(u\"Enable link integrity checks\"),\n description=_(\n u\"Determines if the users should get \"\n u\"warnings when they delete or move content that \"\n u\"is linked from inside the site.\"),\n default=True,\n required=False)\n\n lock_on_ttw_edit = schema.Bool(\n title=_(u\"Enable locking for through-the-web edits\"),\n description=_(\n u\"Disabling locking here will only \"\n u\"affect users editing content through the \"\n u\"Plone web UI. 
Content edited via WebDAV \"\n u\"clients will still be subject to locking.\"),\n default=True,\n required=False)\n\n\nclass ILanguageSchema(Interface):\n model.fieldset(\n 'general',\n label=_(u'General', default=u'General'),\n fields=[\n 'default_language',\n 'available_languages',\n 'use_combined_language_codes',\n 'display_flags',\n 'always_show_selector'\n ],\n )\n\n default_language = schema.Choice(\n title=_(u\"heading_site_language\",\n default=u\"Site language\"),\n description=_(\n u\"description_site_language\",\n default=u\"The language used for the content and the UI \"\n u\"of this site.\"),\n default='en',\n required=True,\n vocabulary=\"plone.app.vocabularies.AvailableContentLanguages\"\n )\n\n available_languages = schema.List(\n title=_(u\"heading_available_languages\",\n default=u\"Available languages\"),\n description=_(u\"description_available_languages\",\n default=u\"The languages in which the site should be \"\n u\"translatable.\"),\n required=True,\n default=['en'],\n value_type=schema.Choice(\n vocabulary=\"plone.app.vocabularies.AvailableContentLanguages\"\n )\n )\n\n use_combined_language_codes = schema.Bool(\n title=_(\n u'label_allow_combined_language_codes',\n default=u\"Show country-specific language variants\"\n ),\n description=_(\n u\"help_allow_combined_language_codes\",\n default=u\"Examples: pt-br (Brazilian Portuguese), \"\n u\"en-us (American English) etc.\"\n ),\n default=True,\n required=False\n )\n\n display_flags = schema.Bool(\n title=_(\n u'label_display_flags',\n default=u\"Show language flags\"\n ),\n description=u\"\",\n default=False,\n required=False\n )\n\n always_show_selector = schema.Bool(\n title=_(\n u'label_always_show_selector',\n default=u\"Always show language selector\"\n ),\n description=_(\n u\"help_always_show_selector\",\n default=u\"\"\n ),\n default=False,\n required=False\n )\n\n model.fieldset(\n 'negotiation_scheme',\n label=_(u'Negotiation scheme', default=u'Negotiation scheme'),\n fields=[\n 'use_content_negotiation',\n 'use_path_negotiation',\n 'use_cookie_negotiation',\n 'authenticated_users_only',\n 'set_cookie_always',\n 'use_subdomain_negotiation',\n 'use_cctld_negotiation',\n 'use_request_negotiation',\n ],\n )\n use_content_negotiation = schema.Bool(\n title=_(u\"heading_language_of_the_content\",\n default=u\"Use the language of the content item\"),\n description=_(u\"description_language_of_the_content\",\n default=u\"Use the language of the content item.\"),\n default=False,\n required=False,\n )\n\n use_path_negotiation = schema.Bool(\n title=_(\n u\"heading_language_codes_in_URL\",\n default=u\"Use language codes in URL path for manual override\"),\n description=_(\n u\"description_language_codes_in_URL\",\n default=u\"Use language codes in URL path for manual override.\"),\n default=False,\n required=False,\n )\n\n use_cookie_negotiation = schema.Bool(\n title=_(u\"heading_cookie_manual_override\",\n default=(u\"Use cookie for manual override\")),\n description=_(\n u\"description_cookie_manual_override\",\n default=(u\"Required for the language selector viewlet to be rendered.\")\n ),\n default=False,\n required=False,\n )\n\n authenticated_users_only = schema.Bool(\n title=_(u\"heading_auth_cookie_manual_override\",\n default=u\"Authenticated users only\"),\n description=_(\n u\"description_auth_ookie_manual_override\",\n default=(u\"Related to Use cookie for manual override\")\n ),\n default=False,\n required=False,\n )\n\n set_cookie_always = schema.Bool(\n title=_(\n 
u\"heading_set_language_cookie_always\",\n default=(u\"Set the language cookie always\")),\n description=_(\n u\"description_set_language_cookie_always\",\n default=(u\"i.e. also when the 'set_language' request parameter is absent\")),\n default=False,\n required=False,\n )\n\n use_subdomain_negotiation = schema.Bool(\n title=_(u\"heading_use_subdomain\",\n default=u\"Use subdomain\"),\n description=_(u\"description_use_subdomain\",\n default=u\"e.g.: de.plone.org\"),\n default=False,\n required=False,\n )\n\n use_cctld_negotiation = schema.Bool(\n title=_(u\"heading_top_level_domain\",\n default=u\"Use top-level domain\"),\n description=_(u\"description_top_level_domain\",\n default=u\"e.g.: www.plone.de\"),\n default=False,\n required=False,\n )\n\n use_request_negotiation = schema.Bool(\n title=_(u\"heading_browser_language_request_negotiation\",\n default=u\"Use browser language request negotiation\"),\n description=_(u\"description_browser_language_request_negotiation\",\n default=u\"Use browser language request negotiation.\"),\n default=False,\n required=False,\n )\n\n\nclass ITagAttrPair(Interface):\n tags = schema.TextLine(title=u\"tags\")\n attributes = schema.TextLine(title=u\"attributes\")\n\n\nclass TagAttrPair(object):\n\n implements(ITagAttrPair)\n\n def __init__(self, tags='', attributes=''):\n self.tags = tags\n self.attributes = attributes\n\n\nclass IFilterSchema(Interface):\n \"\"\"Combined schema for the adapter lookup.\n \"\"\"\n\n # class IFilterTagsSchema(Interface):\n\n disable_filtering = schema.Bool(\n title=_(u'Disable html filtering'),\n description=_(u'Warning, disabling can be potentially dangereous. '\n u'Only disable if you know what you are doing.'),\n default=False,\n required=False)\n\n nasty_tags = schema.List(\n title=_(u'Nasty tags'),\n description=_(u\"These tags, and their content are completely blocked \"\n \"when a page is saved or rendered.\"),\n default=[u'applet', u'embed', u'object', u'script'],\n value_type=schema.TextLine(),\n required=False)\n\n stripped_tags = schema.List(\n title=_(u'Stripped tags'),\n description=_(u\"These tags are stripped when saving or rendering, \"\n \"but any content is preserved.\"),\n default=[u'font', ],\n value_type=schema.TextLine(),\n required=False)\n\n custom_tags = schema.List(\n title=_(u'Custom tags'),\n description=_(u\"Add tag names here for tags which are not part of \"\n \"XHTML but which should be permitted.\"),\n default=[],\n value_type=schema.TextLine(),\n required=False)\n\n # class IFilterAttributesSchema(Interface):\n\n stripped_attributes = schema.List(\n title=_(u'Stripped attributes'),\n description=_(u\"These attributes are stripped from any tag when \"\n \"saving.\"),\n default=(u'dir lang valign halign border frame rules cellspacing '\n 'cellpadding bgcolor').split(),\n value_type=schema.TextLine(),\n required=False)\n\n stripped_combinations = schema.Dict(\n title=_(u'Stripped combinations'),\n description=_(u\"These attributes are stripped from those tags when \"\n \"saving.\"),\n key_type=schema.TextLine(title=u\"tags\"),\n value_type=schema.TextLine(title=u\"attributes\"),\n default={},\n # XXX replace with value adapter\n # default={'table th td': 'width height', 'other tags': 'other attrs'}\n required=False)\n\n # class IFilterEditorSchema(Interface):\n\n style_whitelist = schema.List(\n title=_(u'Permitted properties'),\n description=_(\n u'These CSS properties are allowed in style attributes.'),\n default=u'text-align list-style-type float text-decoration'.split(),\n 
value_type=schema.TextLine(),\n required=False)\n\n class_blacklist = schema.List(\n title=_(u'Filtered classes'),\n description=_(u'These class names are not allowed in class '\n 'attributes.'),\n default=[],\n value_type=schema.TextLine(),\n required=False)\n\n\nclass ITinyMCELayoutSchema(Interface):\n \"\"\"This interface defines the layout properties.\"\"\"\n\n resizing = schema.Bool(\n title=_(u\"Enable resizing the editor window.\"),\n description=_(u\"This option gives you the ability to enable/disable \"\n \"resizing the editor window. \"),\n default=True,\n required=False)\n\n autoresize = schema.Bool(\n title=_(u\"Enable auto resizing of the editor window.\"),\n description=_(u\"This option gives you the ability to enable/disable \"\n \"auto resizing the editor window depending \"\n \"on the content.\"),\n default=False,\n required=False)\n\n # TODO: add validation to assert % and px in the value\n editor_width = schema.TextLine(\n title=_(u\"Editor width\"),\n description=_(u\"This option gives you the ability to specify the \"\n \"width of the editor (like 100% or 400px).\"),\n default=None,\n required=False)\n\n # TODO: add validation to assert % and px in the value\n editor_height = schema.TextLine(\n title=_(u\"Editor height\"),\n description=_(u\"This option gives you the ability to specify the \"\n \"height of the editor in pixels. \"\n \"If auto resize is enabled this value is used \"\n \"as minimum height.\"),\n default=None,\n required=False)\n\n content_css = schema.TextLine(\n title=_(u\"Choose the CSS used in WYSIWYG Editor Area\"),\n description=_(u\"This option enables you to specify a custom CSS file \"\n \"that provides content CSS. \"\n \"This CSS file is the one used within the editor \"\n \"(the editable area). In addition to what is listed here, \"\n \"the plone bundle CSS and diazo themes using the \"\n \"tinymce-content-css setting are also added.\"),\n default=u'++plone++static/components/tinymce/skins/lightgray/content.min.css',\n required=False)\n\n header_styles = schema.List(\n title=_(u\"Header styles\"),\n description=_('Name|tag'),\n value_type=schema.TextLine(),\n default=[\n u'Header 1|h1',\n u\"Header 2|h2\",\n u\"Header 3|h3\",\n u\"Header 4|h4\",\n u\"Header 5|h5\",\n u\"Header 6|h6\"\n ])\n\n inline_styles = schema.List(\n title=_(u\"Inline styles\"),\n description=_('Name|format|icon'),\n value_type=schema.TextLine(),\n default=[\n u\"Bold|bold|bold\",\n u\"Italic|italic|italic\",\n u\"Underline|underline|underline\",\n u\"Strikethrough|strikethrough|strikethrough\",\n u\"Superscript|superscript|superscript\",\n u\"Subscript|subscript|subscript\",\n u\"Code|code|code\"])\n\n block_styles = schema.List(\n title=_(u\"Block styles\"),\n description=_('Name|format'),\n value_type=schema.TextLine(),\n default=[\n u\"Paragraph|p\",\n u\"Blockquote|blockquote\",\n u\"Div|div\",\n u\"Pre|pre\"])\n\n alignment_styles = schema.List(\n title=_(u\"Alignment styles\"),\n description=_('Name|format|icon'),\n value_type=schema.TextLine(),\n default=[\n u\"Left|alignleft|alignleft\",\n u\"Center|aligncenter|aligncenter\",\n u\"Right|alignright|alignright\",\n u\"Justify|alignjustify|alignjustify\"])\n\n formats = schema.Text(\n title=_(u\"Formats\"),\n description=_(\n u\"Enter a JSON-formatted style format configuration. \"\n u\"A format is for example the style that get applied when \"\n u\"you press the bold button inside the editor. 
\"\n u\"See http://www.tinymce.com/wiki.php/Configuration:formats\"),\n constraint=validate_json,\n default=json.dumps({\n 'discreet': {'inline': 'span', 'classes': 'discreet'},\n 'clearfix': {'block': 'div', 'classes': 'clearfix'}\n }, indent=4).decode('utf8'),\n required=True,\n )\n\n\nclass ITinyMCEPluginSchema(Interface):\n \"\"\"This interface defines the toolbar properties.\"\"\"\n\n plugins = schema.List(\n title=_(\"label_tinymce_plugins\", default=u\"Editor Plugins\"),\n description=_(\"help_tinymce_plugins\", default=(\n u\"Select plugins to include with tinymce\")),\n value_type=schema.Choice(vocabulary=SimpleVocabulary([\n SimpleTerm('advlist', 'advlist', u\"advlist\"),\n SimpleTerm('anchor', 'anchor', u\"anchor\"),\n SimpleTerm('autosave', 'autosave', u\"autosave\"),\n SimpleTerm('charmap', 'charmap', u\"charmap\"),\n SimpleTerm('code', 'code', u\"code\"),\n SimpleTerm('colorpicker', 'colorpicker', u\"colorpicker\"),\n SimpleTerm('contextmenu', 'contextmenu', u\"contextmenu\"),\n SimpleTerm('directionality', 'directionality', u\"directionality\"),\n SimpleTerm('emoticons', 'emoticons', u\"emoticons\"),\n SimpleTerm('fullpage', 'fullpage', u\"fullpage\"),\n SimpleTerm('fullscreen', 'fullscreen', u\"fullscreen\"),\n SimpleTerm('hr', 'hr', u\"hr\"),\n SimpleTerm('insertdatetime', 'insertdatetime', u\"insertdatetime\"),\n SimpleTerm('layer', 'layer', u\"layer\"),\n SimpleTerm('lists', 'lists', u\"lists\"),\n SimpleTerm('media', 'media', u\"media\"),\n SimpleTerm('nonbreaking', 'nonbreaking', u\"nonbreaking\"),\n SimpleTerm('noneditable', 'noneditable', u\"noneditable\"),\n SimpleTerm('pagebreak', 'pagebreak', u\"pagebreak\"),\n SimpleTerm('paste', 'paste', u\"paste\"),\n SimpleTerm('preview', 'preview', u\"preview\"),\n SimpleTerm('print', 'print', u\"print\"),\n SimpleTerm('save', 'save', u\"save\"),\n SimpleTerm('searchreplace', 'searchreplace', u\"searchreplace\"),\n SimpleTerm('tabfocus', 'tabfocus', u\"tabfocus\"),\n SimpleTerm('table', 'table', u\"table\"),\n SimpleTerm('textcolor', 'textcolor', u\"textcolor\"),\n SimpleTerm('textpattern', 'textpattern', u\"textpattern\"),\n SimpleTerm('visualblocks', 'visualblocks', u\"visualblocks\"),\n SimpleTerm('visualchars', 'visualchars', u\"visualchars\"),\n SimpleTerm('wordcount', 'wordcount', u\"wordcount\")\n ])),\n default=['advlist', 'directionality', 'emoticons',\n 'fullscreen', 'hr', 'insertdatetime', 'lists', 'media',\n 'nonbreaking', 'noneditable', 'pagebreak', 'paste', 'preview',\n 'print', 'save', 'searchreplace', 'tabfocus', 'table',\n 'visualchars', 'wordcount', 'code'],\n required=False)\n\n menubar = schema.List(\n title=_(\"label_tinymce_menubar\", default=u\"Menubar\"),\n description=_(\"help_tinymce_menubar\", default=(\n u\"Enter what items you would like in the menu bar.\")),\n required=True,\n value_type=schema.TextLine(),\n default=[\n u'edit', u'table', u'format',\n u'tools' u'view', u'insert'])\n\n menu = schema.Text(\n title=_('label_tinymce_menu', 'Menu'),\n description=_('hint_tinymce_menu',\n default='JSON formatted Menu configuration.'),\n default=json.dumps({\n 'file': {'title': 'File', 'items': 'newdocument'},\n 'edit': {'title': 'Edit', 'items': 'undo redo | cut '\n 'copy paste pastetext | selectall'},\n 'insert': {'title': 'Insert', 'items': 'link media | template hr'},\n 'view': {'title': 'View', 'items': 'visualaid'},\n 'format': {'title': 'Format',\n 'items': 'bold italic underline strikethrough '\n 'superscript subscript | formats | removeformat'},\n 'table': {'title': 'Table', 'items': 
'inserttable tableprops deletetable '\n '| cell row column'},\n 'tools': {'title': 'Tools', 'items': 'spellchecker code'}\n }, indent=4).decode('utf8')\n )\n\n templates = schema.Text(\n title=_(\"label_tinymce_templates\", default=u\"Templates\"),\n description=_(\"help_tinymce_templates\", default=(\n u\"Enter the list of templates in json format \\\n http://www.tinymce.com/wiki.php/Plugin:template\")),\n required=False,\n default=u\"\")\n\n toolbar = schema.Text(\n title=_(\"label_tinymce_toolbar\", default=u\"Toolbar\"),\n description=_(\"help_tinymce_toolbar\", default=(\n u\"Enter how you would like the toolbar items to list.\")),\n required=True,\n default=u'undo redo | styleselect | bold italic | '\n u'alignleft aligncenter alignright alignjustify | '\n u'bullist numlist outdent indent | '\n u'unlink plonelink ploneimage')\n\n custom_plugins = schema.List(\n title=_(u\"Custom Plugins\"),\n description=_(u\"Enter a list of custom plugins which will be loaded \"\n \"in the editor. Format is \"\n \"pluginname|location, one per line.\"),\n required=False,\n value_type=schema.TextLine(),\n default=[])\n\n custom_buttons = schema.List(\n title=_(u\"Custom Buttons\"),\n description=_(u\"Enter a list of custom buttons which will be added to toolbar\"),\n required=False,\n value_type=schema.TextLine(),\n default=[])\nITinyMCELibrariesSchema = ITinyMCEPluginSchema # bw compat\n\n\nclass ITinyMCESpellCheckerSchema(Interface):\n \"\"\"This interface defines the libraries properties.\"\"\"\n\n libraries_spellchecker_choice = schema.Choice(\n title=_(u\"Spellchecker plugin to use\"),\n description=_(u\"This option allows you to choose the spellchecker for \"\n u\"TinyMCE.\"),\n missing_value=set(),\n vocabulary=SimpleVocabulary([\n SimpleTerm('browser', 'browser',\n _(u\"Default browser spellchecker\")),\n SimpleTerm('AtD', 'AtD',\n _(u\"After the deadline (FLOSS)\")),\n ]),\n default=u'browser',\n required=False)\n\n libraries_atd_ignore_strings = schema.List(\n title=_(u\"AtD Ignore strings\"),\n description=_(\n 'label_atd_ignore_strings',\n default=u\"A list of strings which the \\\"After the Deadline\\\" \"\n u\"spellchecker should ignore. \"\n u\"Note: This option is only applicable when the \"\n u\"appropriate spellchecker has been chosen above.\"),\n default=[\n u\"Zope\",\n u\"Plone\",\n u\"TinyMCE\"],\n value_type=schema.TextLine(),\n required=False)\n\n libraries_atd_show_types = schema.List(\n title=_(u\"AtD Error types to show\"),\n description=_(\n 'help_atderrortypes_to_show',\n default=u\"A list of error types which the \"\n u\"\\\"After the Deadline\\\" spellchecker should check for. \"\n u\"By default, all the available error type will be \"\n u\"listed here.\"),\n value_type=schema.TextLine(),\n default=[\n u\"Bias Language\",\n u\"Cliches\",\n u\"Complex Expression\",\n u\"Diacritical Marks\",\n u\"Double Negatives\",\n u\"Hidden Verbs\",\n u\"Jargon Language\",\n u\"Passive voice\",\n u\"Phrases to Avoid\",\n u\"Redundant Expression\"],\n required=False)\n\n libraries_atd_service_url = schema.TextLine(\n title=_(u\"AtD Service URL\"),\n description=_(\n 'help_atd_service_url',\n default=u\"The URL of the \\\"After the Deadline\\\" grammar and spell \"\n u\"checking server. 
\"\n u\"The default value is the public server, \"\n u\"but ideally you should download and install your own \"\n u\"and specify its address here.\"),\n required=True,\n default=u\"service.afterthedeadline.com\",)\n\n\nclass ITinyMCEResourceTypesSchema(Interface):\n \"\"\"This interface defines the resource types properties.\"\"\"\n\n # XXX Not implemented in new tinymce version. Need to decide about this\n # rooted = schema.Bool(\n # title=_(u\"Rooted to current object\"),\n # description=_(u\"When enabled the user will be rooted to the current \"\n # \"object and can't add links and images from other parts \"\n # \"of the site.\"),\n # default=False,\n # required=False)\n\n contains_objects = schema.List(\n title=_(u\"Contains Objects\"),\n description=_(u\"Enter a list of content types which can contain other \"\n \"objects. Format is one contenttype per line.\"),\n value_type=schema.TextLine(),\n default=[\n u\"Folder\",\n u\"Large Plone Folder\",\n u\"Plone Site\"],\n required=False)\n\n # XXX not implements\n # containsanchors = schema.Text(\n # title=_(u\"Contains Anchors\"),\n # description=_(u\"Enter a list of content types which can contain \"\n # \"anchors. Format is one contenttype per line.\"),\n # default=u\"Event\\n\"\n # u\"News Item\\n\"\n # u\"Document\\n\"\n # u\"ATRelativePathCriterion\",\n # required=False)\n\n # XXX do we still want this?\n # seems like it could be really annoying for users\n # creating new types.\n # linkable = schema.Text(\n # title=_(u\"Linkable Objects\"),\n # description=_(u\"Enter a list of content types which can be linked. \"\n # \"Format is one contenttype per line.\"),\n # required=False)\n\n image_objects = schema.List(\n title=_(u\"Image Objects\"),\n description=_(u\"Enter a list of content types which can be used as \"\n \"images. Format is one contenttype per line.\"),\n default=[u\"Image\"],\n value_type=schema.TextLine(),\n required=False)\n\n entity_encoding = schema.Choice(\n title=_(u\"Entity encoding\"),\n description=_(\n u\"This option controls how entities/characters get processed. \"\n \"Named: Characters will be converted into named entities \"\n \"based on the entities option. \"\n \"Numeric: Characters will be converted into numeric entities. \"\n \"Raw: All characters will be stored in non-entity form \"\n \"except these XML default entities: amp lt gt quot\"),\n missing_value=set(),\n vocabulary=SimpleVocabulary(\n [SimpleTerm('named', 'named', _(u\"Named\")),\n SimpleTerm('numeric', 'numeric', _(u\"Numeric\")),\n SimpleTerm('raw', 'raw', _(u\"Raw\"))]),\n default=u\"raw\",\n required=False)\n\n\nclass ITinyMCESchema(\n ITinyMCELayoutSchema,\n ITinyMCEPluginSchema,\n ITinyMCESpellCheckerSchema,\n ITinyMCEResourceTypesSchema\n):\n \"\"\"TinyMCE Schema\"\"\"\n\n\nclass IMaintenanceSchema(Interface):\n\n days = schema.Int(\n title=_(u\"Days of object history to keep after packing\"),\n description=_(\n u\"You should pack your database regularly. This number \"\n u\"indicates how many days of undo history you want to \"\n u\"keep. It is unrelated to versioning, so even if you \"\n u\"pack the database, the history of the content changes \"\n u\"will be kept. Recommended value is 7 days.\"\n ),\n default=7,\n required=True\n )\n\n\nclass INavigationSchema(Interface):\n\n generate_tabs = schema.Bool(\n title=_(u\"Automatically generate tabs\"),\n description=_(\n u\"By default, all items created at the root level will \"\n u\"add to the global section navigation. 
You can turn this off \"\n u\"if you prefer manually constructing this part of the \"\n u\"navigation.\"),\n default=True,\n required=False)\n\n nonfolderish_tabs = schema.Bool(\n title=_(u\"Generate tabs for items other than folders.\"),\n description=_(\n u\"By default, any content item in the root of the portal will \"\n u\"be shown as a global section. If you turn this option off, \"\n u\"only folders will be shown. This only has an effect if \"\n u\"'Automatically generate tabs' is enabled.\"),\n default=True,\n required=False)\n\n displayed_types = schema.Tuple(\n title=_(u\"Displayed content types\"),\n description=_(\n u\"The content types that should be shown in the navigation and \"\n u\"site map.\"),\n required=False,\n default=(\n 'Image',\n 'File',\n 'Link',\n 'News Item',\n 'Folder',\n 'Document',\n 'Event'\n ),\n value_type=schema.Choice(\n source=\"plone.app.vocabularies.ReallyUserFriendlyTypes\"\n ))\n\n filter_on_workflow = schema.Bool(\n title=_(u\"Filter on workflow state\"),\n description=_(\n u\"The workflow states that should be shown in the navigation \"\n u\"tree and the site map.\"),\n default=False,\n required=False)\n\n workflow_states_to_show = schema.Tuple(\n required=False,\n default=(),\n value_type=schema.Choice(\n source=\"plone.app.vocabularies.WorkflowStates\"))\n\n show_excluded_items = schema.Bool(\n title=_(\n u\"Show items normally excluded from navigation if viewing their \"\n u\"children.\"),\n description=_(\n u\"If an item has been excluded from navigation should it be \"\n u\"shown in navigation when viewing content contained within it \"\n u\"or within a subfolder.\"),\n default=True,\n required=False)\n\n\nclass ISearchSchema(Interface):\n\n enable_livesearch = schema.Bool(\n title=_(u'Enable LiveSearch'),\n description=_(\n u\"Enables the LiveSearch feature, which shows live \"\n u\"results if the browser supports JavaScript.\"),\n default=True,\n required=False\n )\n\n types_not_searched = schema.Tuple(\n title=_(u\"Define the types to be shown in the site and searched\"),\n description=_(\n u\"Define the types that should be searched and be \"\n u\"available in the user facing part of the site. \"\n u\"Note that if new content types are installed, they \"\n u\"will be enabled by default unless explicitly turned \"\n u\"off here or by the relevant installer.\"\n ),\n required=False,\n default=(\n 'ATBooleanCriterion',\n 'ATDateCriteria',\n 'ATDateRangeCriterion',\n 'ATListCriterion',\n 'ATPortalTypeCriterion',\n 'ATReferenceCriterion',\n 'ATSelectionCriterion',\n 'ATSimpleIntCriterion',\n 'ATSimpleStringCriterion',\n 'ATSortCriterion',\n 'ChangeSet',\n 'Discussion Item',\n 'Plone Site',\n 'TempFolder',\n 'ATCurrentAuthorCriterion',\n 'ATPathCriterion',\n 'ATRelativePathCriterion',\n ),\n value_type=schema.Choice(\n source=\"plone.app.vocabularies.PortalTypes\"\n ),\n )\n\n\nclass ISecuritySchema(Interface):\n\n enable_self_reg = schema.Bool(\n title=_(u'Enable self-registration'),\n description=_(\n u\"Allows users to register themselves on the site. If \"\n u\"not selected, only site managers can add new users.\"),\n default=False,\n required=False)\n\n enable_user_pwd_choice = schema.Bool(\n title=_(u'Let users select their own passwords'),\n description=_(\n u\"If not selected, a URL will be generated and \"\n u\"e-mailed. 
Users are instructed to follow the link to \"\n u\"reach a page where they can change their password and \"\n u\"complete the registration process; this also verifies \"\n u\"that they have entered a valid email address.\"),\n default=False,\n required=False)\n\n enable_user_folders = schema.Bool(\n title=_(u'Enable User Folders'),\n description=_(\n u\"If selected, home folders where users can create \"\n u\"content will be created when they log in.\"),\n default=False,\n required=False)\n\n allow_anon_views_about = schema.Bool(\n title=_(u\"Allow anyone to view 'about' information\"),\n description=_(\n u\"If not selected only logged-in users will be able to \"\n u\"view information about who created an item and when it \"\n u\"was modified.\"),\n default=False,\n required=False)\n\n use_email_as_login = schema.Bool(\n title=_(u'Use email address as login name'),\n description=_(\n u\"Allows users to login with their email address instead \"\n u\"of specifying a separate login name. This also updates \"\n u\"the login name of existing users, which may take a \"\n u\"while on large sites. The login name is saved as \"\n u\"lower case, but to be userfriendly it does not matter \"\n u\"which case you use to login. When duplicates are found, \"\n u\"saving this form will fail. You can use the \"\n u\"@@migrate-to-emaillogin page to show the duplicates.\"),\n default=False,\n required=False)\n\n use_uuid_as_userid = schema.Bool(\n title=_(u'Use UUID user ids'),\n description=_(\n u\"Use automatically generated UUIDs as user id for new users. \"\n u\"When not turned on, the default is to use the same as the \"\n u\"login name, or when using the email address as login name we \"\n u\"generate a user id based on the fullname.\"),\n default=False,\n required=False)\n\n\n# XXX: Why does ISiteSchema inherit from ILockSettings here ???\nclass ISiteSchema(ILockSettings):\n\n site_title = schema.TextLine(\n title=_(u'Site title'),\n description=_(\n u\"This shows up in the title bar of \"\n u\"browsers and in syndication feeds.\"),\n default=u'Plone site')\n\n site_logo = schema.ASCII(\n title=_(u\"Site Logo\"),\n description=_(u\"This shows a custom Logo on your Site.\"),\n required=False,\n )\n\n exposeDCMetaTags = schema.Bool(\n title=_(u\"Expose Dublin Core metadata\"),\n description=_(u\"Exposes the Dublin Core properties as metatags.\"),\n default=False,\n required=False)\n\n enable_sitemap = schema.Bool(\n title=_(u\"Expose sitemap.xml.gz\"),\n description=_(\n u\"Exposes your content as a file \"\n u\"according to the sitemaps.org standard. You \"\n u\"can submit this to compliant search engines \"\n u\"like Google, Yahoo and Microsoft. It allows \"\n u\"these search engines to more intelligently \"\n u\"crawl your site.\"),\n default=False,\n required=False)\n\n webstats_js = schema.SourceText(\n title=_(u'JavaScript for web statistics support'),\n description=_(\n u\"For enabling web statistics support \"\n u\"from external providers (for e.g. Google \"\n u\"Analytics). Paste the code snippets provided. \"\n u\"It will be included in the rendered HTML as \"\n u\"entered near the end of the page.\"),\n default=u'',\n required=False)\n\n\nclass IDateAndTimeSchema(Interface):\n \"\"\"Controlpanel settings for date and time related settings.\n \"\"\"\n\n portal_timezone = schema.Choice(\n title=_(u\"Portal default timezone\"),\n description=_(\n u\"help_portal_timezone\",\n default=u\"The timezone setting of the portal. 
Users can set \"\n u\"their own timezone, if available timezones are \"\n u\"defined.\"),\n required=True,\n default=None,\n vocabulary=\"plone.app.vocabularies.CommonTimezones\")\n\n available_timezones = schema.List(\n title=_(u\"Available timezones\"),\n description=_(\n u\"help_available_timezones\",\n default=u\"The timezones, which should be available for the \"\n u\"portal. Can be set for users and events\"),\n required=False,\n default=[],\n value_type=schema.Choice(\n vocabulary=\"plone.app.vocabularies.Timezones\"))\n\n first_weekday = schema.Choice(\n title=_(u'label_first_weekday', default=u'First weekday'),\n description=_(\n u'help_first_weekday',\n default=u'First day in the week.'),\n required=True,\n default=None,\n vocabulary=\"plone.app.vocabularies.Weekdays\")\n\n\nclass ITypesSchema(Interface):\n \"\"\"\n \"\"\"\n\n\nclass IMailSchema(Interface):\n\n smtp_host = schema.TextLine(\n title=_(\n u'label_smtp_server',\n default=u'SMTP server'),\n description=_(\n u\"help_smtp_server\",\n default=u\"The address of your local \"\n u\"SMTP (outgoing e-mail) server. Usually \"\n u\"'localhost', unless you use an \"\n u\"external server to send e-mail.\"),\n default=u'localhost',\n required=True)\n\n smtp_port = schema.Int(\n title=_(u'label_smtp_port',\n default=u'SMTP port'),\n description=_(u\"help_smtp_port\",\n default=u\"The port of your local SMTP \"\n u\"(outgoing e-mail) server. Usually '25'.\"),\n default=25,\n required=True)\n\n smtp_userid = schema.TextLine(\n title=_(\n u'label_smtp_userid',\n default=u'ESMTP username'),\n description=_(\n u\"help_smtp_userid\",\n default=u\"Username for authentication \"\n u\"to your e-mail server. Not required \"\n u\"unless you are using ESMTP.\"),\n default=None,\n required=False)\n\n smtp_pass = schema.Password(\n title=_(\n u'label_smtp_pass',\n default=u'ESMTP password'),\n description=_(\n u\"help_smtp_pass\",\n default=u\"The password for the ESMTP \"\n u\"user account.\"),\n default=None,\n required=False)\n\n email_from_name = schema.TextLine(\n title=_(u\"Site 'From' name\"),\n description=_(\n u\"Plone generates e-mail using \"\n u\"this name as the e-mail \"\n u\"sender.\"),\n default=None,\n required=True)\n\n email_from_address = schema.ASCIILine(\n title=_(u\"Site 'From' address\"),\n description=_(\n u\"Plone generates e-mail using \"\n u\"this address as the e-mail \"\n u\"return address. It is also \"\n u\"used as the destination \"\n u\"address for the site-wide \"\n u\"contact form and the 'Send test \"\n u\"e-mail' feature.\"),\n default=None,\n required=True)\n\n\nclass IMarkupSchema(Interface):\n\n default_type = schema.Choice(\n title=_(u'Default format'),\n description=_(\n u\"Select the default format of textfields for newly \"\n u\"created content objects.\"\n ),\n default=u'text/html',\n vocabulary=\"plone.app.vocabularies.AllowableContentTypes\",\n required=True\n )\n\n allowed_types = schema.Tuple(\n title=_(u'Alternative formats'),\n description=_(\n u\"Select which formats are available for users as \"\n u\"alternative to the default format. 
Note that if new \"\n u\"formats are installed, they will be enabled for text \"\n u\"fields by default unless explicitly turned off here \"\n u\"or by the relevant installer.\"\n ),\n required=True,\n default=('text/html', 'text/x-web-textile'),\n value_type=schema.Choice(\n vocabulary=\"plone.app.vocabularies.AllowableContentTypes\"\n )\n )\n\n\nclass IUserGroupsSettingsSchema(Interface):\n\n many_groups = schema.Bool(\n title=_(u'Many groups?'),\n description=_(\n u\"Determines if your Plone is optimized \"\n u\"for small or large sites. In environments with a \"\n u\"lot of groups it can be very slow or impossible \"\n u\"to build a list all groups. This option tunes the \"\n u\"user interface and behaviour of Plone for this \"\n u\"case by allowing you to search for groups instead \"\n u\"of listing all of them.\"),\n default=False\n )\n\n many_users = schema.Bool(\n title=_(u'Many users?'),\n description=_(\n u\"Determines if your Plone is optimized \"\n u\"for small or large sites. In environments with a \"\n u\"lot of users it can be very slow or impossible to \"\n u\"build a list all users. This option tunes the user \"\n u\"interface and behaviour of Plone for this case by \"\n u\"allowing you to search for users instead of \"\n u\"listing all of them.\"),\n default=False\n )\n\n\nclass ISocialMediaSchema(Interface):\n\n share_social_data = schema.Bool(\n title=_(u'Share social data'),\n description=_(u'Include meta tags on pages to give hints to '\n u'social media on how to render your pages better '\n u'when shared'),\n default=True)\n\n twitter_username = schema.TextLine(\n title=_(u'Twitter Username'),\n description=_(u'To idenitify things like Twitter Cards'),\n required=False,\n default=u'')\n\n facebook_app_id = schema.TextLine(\n title=_(u'Facebook app id'),\n description=_(u'To be used with some integrations like open graph data'),\n required=False,\n default=u'')\n\n facebook_username = schema.TextLine(\n title=_(u'Facebook username'),\n description=_(u'For linking open graph data to a facebook account'),\n required=False,\n default=u'')\n\n\nclass IImagingSchema(Interface):\n allowed_sizes = schema.List(\n title=_(u'Allowed image sizes'),\n description=_(u'Specify all allowed maximum image dimensions, '\n 'one per line. '\n 'The required format is &lt;name&gt; &lt;width&gt;:&lt;height&gt;.'),\n value_type=schema.TextLine(),\n default=[\n \"large 768:768\",\n \"preview 400:400\",\n \"mini 200:200\",\n \"thumb 128:128\",\n \"tile 64:64\",\n \"icon 32:32\",\n \"listing 16:16\"],\n required=False,\n )\n\n quality = schema.Int(\n title=_(u'Scaled image quality'),\n description=_(u'A value for the quality of scaled images, from 1 '\n '(lowest) to 95 (highest). A value of 0 will mean '\n 'plone.scaling\\'s default will be used, which is '\n 'currently 88.'),\n min=0,\n max=95,\n default=88\n )\n", "path": "Products/CMFPlone/interfaces/controlpanel.py" } ]
diff --git a/CHANGES.rst b/CHANGES.rst index 5cf42666d3..9e7842d370 100644 --- a/CHANGES.rst +++ b/CHANGES.rst @@ -8,6 +8,9 @@ Changelog 5.0b3 (unreleased) ------------------ +- Fix adding a new Plone site with country specific language. Refs #411. + [jaroel] + - fix plone-logged-in bundle not using global jquery for requirejs dependency and in weird cases causing select2 load errors in patterns(especially resource registry) [vangheem] diff --git a/Products/CMFPlone/controlpanel/README.rst b/Products/CMFPlone/controlpanel/README.rst index 7d9a7a4b4d..f3ba7c66c6 100644 --- a/Products/CMFPlone/controlpanel/README.rst +++ b/Products/CMFPlone/controlpanel/README.rst @@ -68,7 +68,7 @@ Language Control Panel ['en'] >>> language_settings.use_combined_language_codes - False + True >>> language_settings.display_flags False diff --git a/Products/CMFPlone/controlpanel/tests/test_controlpanel_bbb_language_adapter.py b/Products/CMFPlone/controlpanel/tests/test_controlpanel_bbb_language_adapter.py index 75fd6fe837..a96d998680 100644 --- a/Products/CMFPlone/controlpanel/tests/test_controlpanel_bbb_language_adapter.py +++ b/Products/CMFPlone/controlpanel/tests/test_controlpanel_bbb_language_adapter.py @@ -74,24 +74,24 @@ def test_get_use_combined_language_codes(self): self.assertEqual( getAdapter( self.portal, ILanguageSchema).use_combined_language_codes, - False + True ) - self.settings.use_combined_language_codes = True + self.settings.use_combined_language_codes = False self.assertEquals( getAdapter(self.portal, ILanguageSchema).use_combined_language_codes, - True + False ) def test_set_use_combined_language_codes(self): self.assertEquals( self.settings.use_combined_language_codes, - False + True ) getAdapter( - self.portal, ILanguageSchema).use_combined_language_codes = True + self.portal, ILanguageSchema).use_combined_language_codes = False self.assertEquals( self.settings.use_combined_language_codes, - True + False ) def test_get_display_flags(self): diff --git a/Products/CMFPlone/controlpanel/tests/test_controlpanel_browser_language.py b/Products/CMFPlone/controlpanel/tests/test_controlpanel_browser_language.py index 1e0bda00eb..eb32c75fa8 100644 --- a/Products/CMFPlone/controlpanel/tests/test_controlpanel_browser_language.py +++ b/Products/CMFPlone/controlpanel/tests/test_controlpanel_browser_language.py @@ -108,25 +108,26 @@ def test_default_language(self): # self.assertEqual(settings.available_languages, ['en', 'de']) def test_use_combined_language_codes(self): + """This checks swithing combined languages codes support off/on.""" registry = getUtility(IRegistry) settings = registry.forInterface(ILanguageSchema, prefix='plone') self.browser.open( "%s/@@language-controlpanel" % self.portal_url) - self.assertEqual(settings.use_combined_language_codes, False) + self.assertEqual(settings.use_combined_language_codes, True) self.assertEqual( self.browser.getControl( 'Show country-specific language variants' ).selected, - False + True ) self.browser.getControl( 'Show country-specific language variants' - ).selected = True + ).selected = False self._inject_available_languages_field('en') self.browser.getControl('Save').click() - self.assertEqual(settings.use_combined_language_codes, True) + self.assertEqual(settings.use_combined_language_codes, False) def test_display_flags(self): registry = getUtility(IRegistry) diff --git a/Products/CMFPlone/interfaces/controlpanel.py b/Products/CMFPlone/interfaces/controlpanel.py index cb97f5319f..cd3e13c412 100644 --- a/Products/CMFPlone/interfaces/controlpanel.py 
+++ b/Products/CMFPlone/interfaces/controlpanel.py @@ -147,7 +147,7 @@ class ILanguageSchema(Interface): default=u"Examples: pt-br (Brazilian Portuguese), " u"en-us (American English) etc." ), - default=False, + default=True, required=False )
netbox-community__netbox-1403
Device interface shows twice on IP Addresses page

Python version: 2.7.5
NetBox version: 2.1.2

On the "IP Addresses" page the device interface is showing up in both the "Device" column and the "Interface" column. I am able to replicate this issue with any link to the "IP Addresses" page. I don't see any purpose in listing the interface twice; it just clutters the page.

![2017-08-07 17_06_32-ip addresses - netbox](https://user-images.githubusercontent.com/29483942/29045848-0982b5bc-7b93-11e7-8a3e-15c657e51bb7.png)
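The duplication comes from the `IPADDRESS_DEVICE` template in `netbox/ipam/tables.py` (included in the file content below), which renders `record.interface.name` in parentheses even though `IPAddressTable` already defines a separate `interface` column. A minimal sketch of one possible fix, assuming the separate Interface column is kept, is to limit the Device column template to the device link; this is an illustration, not necessarily the change adopted upstream:

```python
# Sketch only: Device column template without the parenthesized interface name.
# The separate `interface` column on IPAddressTable keeps showing the interface,
# so it is no longer repeated in the Device column.
IPADDRESS_DEVICE = """
{% if record.interface %}
    <a href="{{ record.interface.device.get_absolute_url }}">{{ record.interface.device }}</a>
{% else %}
    &mdash;
{% endif %}
"""
```

An alternative would be to drop the `interface` column from `IPAddressTable` and keep the parenthesized name in the Device column; either way, only one of the two columns should render the interface.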
[ { "content": "from __future__ import unicode_literals\n\nimport django_tables2 as tables\nfrom django_tables2.utils import Accessor\n\nfrom utilities.tables import BaseTable, ToggleColumn\nfrom .models import Aggregate, IPAddress, Prefix, RIR, Role, VLAN, VLANGroup, VRF\n\n\nRIR_UTILIZATION = \"\"\"\n<div class=\"progress\">\n {% if record.stats.total %}\n <div class=\"progress-bar\" role=\"progressbar\" style=\"width: {{ record.stats.percentages.active }}%;\">\n <span class=\"sr-only\">{{ record.stats.percentages.active }}%</span>\n </div>\n <div class=\"progress-bar progress-bar-info\" role=\"progressbar\" style=\"width: {{ record.stats.percentages.reserved }}%;\">\n <span class=\"sr-only\">{{ record.stats.percentages.reserved }}%</span>\n </div>\n <div class=\"progress-bar progress-bar-danger\" role=\"progressbar\" style=\"width: {{ record.stats.percentages.deprecated }}%;\">\n <span class=\"sr-only\">{{ record.stats.percentages.deprecated }}%</span>\n </div>\n <div class=\"progress-bar progress-bar-success\" role=\"progressbar\" style=\"width: {{ record.stats.percentages.available }}%;\">\n <span class=\"sr-only\">{{ record.stats.percentages.available }}%</span>\n </div>\n {% endif %}\n</div>\n\"\"\"\n\nRIR_ACTIONS = \"\"\"\n{% if perms.ipam.change_rir %}\n <a href=\"{% url 'ipam:rir_edit' slug=record.slug %}\" class=\"btn btn-xs btn-warning\"><i class=\"glyphicon glyphicon-pencil\" aria-hidden=\"true\"></i></a>\n{% endif %}\n\"\"\"\n\nUTILIZATION_GRAPH = \"\"\"\n{% load helpers %}\n{% if record.pk %}{% utilization_graph value %}{% else %}&mdash;{% endif %}\n\"\"\"\n\nROLE_ACTIONS = \"\"\"\n{% if perms.ipam.change_role %}\n <a href=\"{% url 'ipam:role_edit' slug=record.slug %}\" class=\"btn btn-xs btn-warning\"><i class=\"glyphicon glyphicon-pencil\" aria-hidden=\"true\"></i></a>\n{% endif %}\n\"\"\"\n\nPREFIX_LINK = \"\"\"\n{% if record.has_children %}\n <span style=\"padding-left: {{ record.depth }}0px \"><i class=\"fa fa-caret-right\"></i></a>\n{% else %}\n <span style=\"padding-left: {{ record.depth }}9px\">\n{% endif %}\n <a href=\"{% if record.pk %}{% url 'ipam:prefix' pk=record.pk %}{% else %}{% url 'ipam:prefix_add' %}?prefix={{ record }}{% if parent.vrf %}&vrf={{ parent.vrf.pk }}{% endif %}{% if parent.site %}&site={{ parent.site.pk }}{% endif %}{% endif %}\">{{ record.prefix }}</a>\n</span>\n\"\"\"\n\nPREFIX_LINK_BRIEF = \"\"\"\n<span style=\"padding-left: {{ record.depth }}0px\">\n <a href=\"{% if record.pk %}{% url 'ipam:prefix' pk=record.pk %}{% else %}{% url 'ipam:prefix_add' %}?prefix={{ record }}{% if parent.vrf %}&vrf={{ parent.vrf.pk }}{% endif %}{% if parent.site %}&site={{ parent.site.pk }}{% endif %}{% endif %}\">{{ record.prefix }}</a>\n</span>\n\"\"\"\n\nPREFIX_ROLE_LINK = \"\"\"\n{% if record.role %}\n <a href=\"{% url 'ipam:prefix_list' %}?role={{ record.role.slug }}\">{{ record.role }}</a>\n{% else %}\n &mdash;\n{% endif %}\n\"\"\"\n\nIPADDRESS_LINK = \"\"\"\n{% if record.pk %}\n <a href=\"{{ record.get_absolute_url }}\">{{ record.address }}</a>\n{% elif perms.ipam.add_ipaddress %}\n <a href=\"{% url 'ipam:ipaddress_add' %}?address={{ record.1 }}{% if prefix.vrf %}&vrf={{ prefix.vrf.pk }}{% endif %}\" class=\"btn btn-xs btn-success\">{% if record.0 <= 65536 %}{{ record.0 }}{% else %}Many{% endif %} IP{{ record.0|pluralize }} available</a>\n{% else %}\n {% if record.0 <= 65536 %}{{ record.0 }}{% else %}Many{% endif %} IP{{ record.0|pluralize }} available\n{% endif %}\n\"\"\"\n\nIPADDRESS_DEVICE = \"\"\"\n{% if record.interface %}\n <a href=\"{{ 
record.interface.device.get_absolute_url }}\">{{ record.interface.device }}</a>\n ({{ record.interface.name }})\n{% else %}\n &mdash;\n{% endif %}\n\"\"\"\n\nVRF_LINK = \"\"\"\n{% if record.vrf %}\n <a href=\"{{ record.vrf.get_absolute_url }}\">{{ record.vrf }}</a>\n{% elif prefix.vrf %}\n {{ prefix.vrf }}\n{% else %}\n Global\n{% endif %}\n\"\"\"\n\nSTATUS_LABEL = \"\"\"\n{% if record.pk %}\n <span class=\"label label-{{ record.get_status_class }}\">{{ record.get_status_display }}</span>\n{% else %}\n <span class=\"label label-success\">Available</span>\n{% endif %}\n\"\"\"\n\nVLAN_PREFIXES = \"\"\"\n{% for prefix in record.prefixes.all %}\n <a href=\"{% url 'ipam:prefix' pk=prefix.pk %}\">{{ prefix }}</a>{% if not forloop.last %}<br />{% endif %}\n{% empty %}\n &mdash;\n{% endfor %}\n\"\"\"\n\nVLAN_ROLE_LINK = \"\"\"\n{% if record.role %}\n <a href=\"{% url 'ipam:vlan_list' %}?role={{ record.role.slug }}\">{{ record.role }}</a>\n{% else %}\n &mdash;\n{% endif %}\n\"\"\"\n\nVLANGROUP_ACTIONS = \"\"\"\n{% if perms.ipam.change_vlangroup %}\n <a href=\"{% url 'ipam:vlangroup_edit' pk=record.pk %}\" class=\"btn btn-xs btn-warning\"><i class=\"glyphicon glyphicon-pencil\" aria-hidden=\"true\"></i></a>\n{% endif %}\n\"\"\"\n\nTENANT_LINK = \"\"\"\n{% if record.tenant %}\n <a href=\"{% url 'tenancy:tenant' slug=record.tenant.slug %}\">{{ record.tenant }}</a>\n{% elif record.vrf.tenant %}\n <a href=\"{% url 'tenancy:tenant' slug=record.vrf.tenant.slug %}\">{{ record.vrf.tenant }}</a>*\n{% else %}\n &mdash;\n{% endif %}\n\"\"\"\n\n\n#\n# VRFs\n#\n\nclass VRFTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn()\n rd = tables.Column(verbose_name='RD')\n tenant = tables.LinkColumn('tenancy:tenant', args=[Accessor('tenant.slug')])\n\n class Meta(BaseTable.Meta):\n model = VRF\n fields = ('pk', 'name', 'rd', 'tenant', 'description')\n\n\n#\n# RIRs\n#\n\nclass RIRTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn(verbose_name='Name')\n is_private = tables.BooleanColumn(verbose_name='Private')\n aggregate_count = tables.Column(verbose_name='Aggregates')\n actions = tables.TemplateColumn(template_code=RIR_ACTIONS, attrs={'td': {'class': 'text-right'}}, verbose_name='')\n\n class Meta(BaseTable.Meta):\n model = RIR\n fields = ('pk', 'name', 'is_private', 'aggregate_count', 'actions')\n\n\nclass RIRDetailTable(RIRTable):\n stats_total = tables.Column(accessor='stats.total', verbose_name='Total',\n footer=lambda table: sum(r.stats['total'] for r in table.data))\n stats_active = tables.Column(accessor='stats.active', verbose_name='Active',\n footer=lambda table: sum(r.stats['active'] for r in table.data))\n stats_reserved = tables.Column(accessor='stats.reserved', verbose_name='Reserved',\n footer=lambda table: sum(r.stats['reserved'] for r in table.data))\n stats_deprecated = tables.Column(accessor='stats.deprecated', verbose_name='Deprecated',\n footer=lambda table: sum(r.stats['deprecated'] for r in table.data))\n stats_available = tables.Column(accessor='stats.available', verbose_name='Available',\n footer=lambda table: sum(r.stats['available'] for r in table.data))\n utilization = tables.TemplateColumn(template_code=RIR_UTILIZATION, verbose_name='Utilization')\n\n class Meta(RIRTable.Meta):\n fields = (\n 'pk', 'name', 'is_private', 'aggregate_count', 'stats_total', 'stats_active', 'stats_reserved',\n 'stats_deprecated', 'stats_available', 'utilization', 'actions',\n )\n\n\n#\n# Aggregates\n#\n\nclass AggregateTable(BaseTable):\n pk = ToggleColumn()\n prefix = 
tables.LinkColumn(verbose_name='Aggregate')\n date_added = tables.DateColumn(format=\"Y-m-d\", verbose_name='Added')\n\n class Meta(BaseTable.Meta):\n model = Aggregate\n fields = ('pk', 'prefix', 'rir', 'date_added', 'description')\n\n\nclass AggregateDetailTable(AggregateTable):\n child_count = tables.Column(verbose_name='Prefixes')\n get_utilization = tables.TemplateColumn(UTILIZATION_GRAPH, orderable=False, verbose_name='Utilization')\n\n class Meta(AggregateTable.Meta):\n fields = ('pk', 'prefix', 'rir', 'child_count', 'get_utilization', 'date_added', 'description')\n\n\n#\n# Roles\n#\n\nclass RoleTable(BaseTable):\n pk = ToggleColumn()\n name = tables.Column(verbose_name='Name')\n prefix_count = tables.Column(accessor=Accessor('count_prefixes'), orderable=False, verbose_name='Prefixes')\n vlan_count = tables.Column(accessor=Accessor('count_vlans'), orderable=False, verbose_name='VLANs')\n slug = tables.Column(verbose_name='Slug')\n actions = tables.TemplateColumn(template_code=ROLE_ACTIONS, attrs={'td': {'class': 'text-right'}}, verbose_name='')\n\n class Meta(BaseTable.Meta):\n model = Role\n fields = ('pk', 'name', 'prefix_count', 'vlan_count', 'slug', 'actions')\n\n\n#\n# Prefixes\n#\n\nclass PrefixTable(BaseTable):\n pk = ToggleColumn()\n prefix = tables.TemplateColumn(PREFIX_LINK, attrs={'th': {'style': 'padding-left: 17px'}})\n status = tables.TemplateColumn(STATUS_LABEL)\n vrf = tables.TemplateColumn(VRF_LINK, verbose_name='VRF')\n tenant = tables.TemplateColumn(TENANT_LINK)\n site = tables.LinkColumn('dcim:site', args=[Accessor('site.slug')])\n vlan = tables.LinkColumn('ipam:vlan', args=[Accessor('vlan.pk')], verbose_name='VLAN')\n role = tables.TemplateColumn(PREFIX_ROLE_LINK)\n\n class Meta(BaseTable.Meta):\n model = Prefix\n fields = ('pk', 'prefix', 'status', 'vrf', 'tenant', 'site', 'vlan', 'role', 'description')\n row_attrs = {\n 'class': lambda record: 'success' if not record.pk else '',\n }\n\n\nclass PrefixDetailTable(PrefixTable):\n get_utilization = tables.TemplateColumn(UTILIZATION_GRAPH, orderable=False, verbose_name='Utilization')\n\n class Meta(PrefixTable.Meta):\n fields = ('pk', 'prefix', 'status', 'vrf', 'get_utilization', 'tenant', 'site', 'vlan', 'role', 'description')\n\n\n#\n# IPAddresses\n#\n\nclass IPAddressTable(BaseTable):\n pk = ToggleColumn()\n address = tables.TemplateColumn(IPADDRESS_LINK, verbose_name='IP Address')\n status = tables.TemplateColumn(STATUS_LABEL)\n vrf = tables.TemplateColumn(VRF_LINK, verbose_name='VRF')\n tenant = tables.TemplateColumn(TENANT_LINK)\n device = tables.TemplateColumn(IPADDRESS_DEVICE, orderable=False)\n interface = tables.Column(orderable=False)\n\n class Meta(BaseTable.Meta):\n model = IPAddress\n fields = ('pk', 'address', 'vrf', 'status', 'role', 'tenant', 'device', 'interface', 'description')\n row_attrs = {\n 'class': lambda record: 'success' if not isinstance(record, IPAddress) else '',\n }\n\n\nclass IPAddressDetailTable(IPAddressTable):\n nat_inside = tables.LinkColumn(\n 'ipam:ipaddress', args=[Accessor('nat_inside.pk')], orderable=False, verbose_name='NAT (Inside)'\n )\n\n class Meta(IPAddressTable.Meta):\n fields = (\n 'pk', 'address', 'vrf', 'status', 'role', 'tenant', 'nat_inside', 'device', 'interface', 'description',\n )\n\n\n#\n# VLAN groups\n#\n\nclass VLANGroupTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn(verbose_name='Name')\n site = tables.LinkColumn('dcim:site', args=[Accessor('site.slug')], verbose_name='Site')\n vlan_count = tables.Column(verbose_name='VLANs')\n slug = 
tables.Column(verbose_name='Slug')\n actions = tables.TemplateColumn(template_code=VLANGROUP_ACTIONS, attrs={'td': {'class': 'text-right'}},\n verbose_name='')\n\n class Meta(BaseTable.Meta):\n model = VLANGroup\n fields = ('pk', 'name', 'site', 'vlan_count', 'slug', 'actions')\n\n\n#\n# VLANs\n#\n\nclass VLANTable(BaseTable):\n pk = ToggleColumn()\n vid = tables.LinkColumn('ipam:vlan', args=[Accessor('pk')], verbose_name='ID')\n site = tables.LinkColumn('dcim:site', args=[Accessor('site.slug')])\n group = tables.Column(accessor=Accessor('group.name'), verbose_name='Group')\n tenant = tables.LinkColumn('tenancy:tenant', args=[Accessor('tenant.slug')])\n status = tables.TemplateColumn(STATUS_LABEL)\n role = tables.TemplateColumn(VLAN_ROLE_LINK)\n\n class Meta(BaseTable.Meta):\n model = VLAN\n fields = ('pk', 'vid', 'site', 'group', 'name', 'tenant', 'status', 'role', 'description')\n\n\nclass VLANDetailTable(VLANTable):\n prefixes = tables.TemplateColumn(VLAN_PREFIXES, orderable=False, verbose_name='Prefixes')\n\n class Meta(VLANTable.Meta):\n fields = ('pk', 'vid', 'site', 'group', 'name', 'prefixes', 'tenant', 'status', 'role', 'description')\n", "path": "netbox/ipam/tables.py" } ]
[ { "content": "from __future__ import unicode_literals\n\nimport django_tables2 as tables\nfrom django_tables2.utils import Accessor\n\nfrom utilities.tables import BaseTable, ToggleColumn\nfrom .models import Aggregate, IPAddress, Prefix, RIR, Role, VLAN, VLANGroup, VRF\n\n\nRIR_UTILIZATION = \"\"\"\n<div class=\"progress\">\n {% if record.stats.total %}\n <div class=\"progress-bar\" role=\"progressbar\" style=\"width: {{ record.stats.percentages.active }}%;\">\n <span class=\"sr-only\">{{ record.stats.percentages.active }}%</span>\n </div>\n <div class=\"progress-bar progress-bar-info\" role=\"progressbar\" style=\"width: {{ record.stats.percentages.reserved }}%;\">\n <span class=\"sr-only\">{{ record.stats.percentages.reserved }}%</span>\n </div>\n <div class=\"progress-bar progress-bar-danger\" role=\"progressbar\" style=\"width: {{ record.stats.percentages.deprecated }}%;\">\n <span class=\"sr-only\">{{ record.stats.percentages.deprecated }}%</span>\n </div>\n <div class=\"progress-bar progress-bar-success\" role=\"progressbar\" style=\"width: {{ record.stats.percentages.available }}%;\">\n <span class=\"sr-only\">{{ record.stats.percentages.available }}%</span>\n </div>\n {% endif %}\n</div>\n\"\"\"\n\nRIR_ACTIONS = \"\"\"\n{% if perms.ipam.change_rir %}\n <a href=\"{% url 'ipam:rir_edit' slug=record.slug %}\" class=\"btn btn-xs btn-warning\"><i class=\"glyphicon glyphicon-pencil\" aria-hidden=\"true\"></i></a>\n{% endif %}\n\"\"\"\n\nUTILIZATION_GRAPH = \"\"\"\n{% load helpers %}\n{% if record.pk %}{% utilization_graph value %}{% else %}&mdash;{% endif %}\n\"\"\"\n\nROLE_ACTIONS = \"\"\"\n{% if perms.ipam.change_role %}\n <a href=\"{% url 'ipam:role_edit' slug=record.slug %}\" class=\"btn btn-xs btn-warning\"><i class=\"glyphicon glyphicon-pencil\" aria-hidden=\"true\"></i></a>\n{% endif %}\n\"\"\"\n\nPREFIX_LINK = \"\"\"\n{% if record.has_children %}\n <span style=\"padding-left: {{ record.depth }}0px \"><i class=\"fa fa-caret-right\"></i></a>\n{% else %}\n <span style=\"padding-left: {{ record.depth }}9px\">\n{% endif %}\n <a href=\"{% if record.pk %}{% url 'ipam:prefix' pk=record.pk %}{% else %}{% url 'ipam:prefix_add' %}?prefix={{ record }}{% if parent.vrf %}&vrf={{ parent.vrf.pk }}{% endif %}{% if parent.site %}&site={{ parent.site.pk }}{% endif %}{% endif %}\">{{ record.prefix }}</a>\n</span>\n\"\"\"\n\nPREFIX_LINK_BRIEF = \"\"\"\n<span style=\"padding-left: {{ record.depth }}0px\">\n <a href=\"{% if record.pk %}{% url 'ipam:prefix' pk=record.pk %}{% else %}{% url 'ipam:prefix_add' %}?prefix={{ record }}{% if parent.vrf %}&vrf={{ parent.vrf.pk }}{% endif %}{% if parent.site %}&site={{ parent.site.pk }}{% endif %}{% endif %}\">{{ record.prefix }}</a>\n</span>\n\"\"\"\n\nPREFIX_ROLE_LINK = \"\"\"\n{% if record.role %}\n <a href=\"{% url 'ipam:prefix_list' %}?role={{ record.role.slug }}\">{{ record.role }}</a>\n{% else %}\n &mdash;\n{% endif %}\n\"\"\"\n\nIPADDRESS_LINK = \"\"\"\n{% if record.pk %}\n <a href=\"{{ record.get_absolute_url }}\">{{ record.address }}</a>\n{% elif perms.ipam.add_ipaddress %}\n <a href=\"{% url 'ipam:ipaddress_add' %}?address={{ record.1 }}{% if prefix.vrf %}&vrf={{ prefix.vrf.pk }}{% endif %}\" class=\"btn btn-xs btn-success\">{% if record.0 <= 65536 %}{{ record.0 }}{% else %}Many{% endif %} IP{{ record.0|pluralize }} available</a>\n{% else %}\n {% if record.0 <= 65536 %}{{ record.0 }}{% else %}Many{% endif %} IP{{ record.0|pluralize }} available\n{% endif %}\n\"\"\"\n\nIPADDRESS_DEVICE = \"\"\"\n{% if record.interface %}\n <a href=\"{{ 
record.interface.device.get_absolute_url }}\">{{ record.interface.device }}</a>\n{% else %}\n &mdash;\n{% endif %}\n\"\"\"\n\nVRF_LINK = \"\"\"\n{% if record.vrf %}\n <a href=\"{{ record.vrf.get_absolute_url }}\">{{ record.vrf }}</a>\n{% elif prefix.vrf %}\n {{ prefix.vrf }}\n{% else %}\n Global\n{% endif %}\n\"\"\"\n\nSTATUS_LABEL = \"\"\"\n{% if record.pk %}\n <span class=\"label label-{{ record.get_status_class }}\">{{ record.get_status_display }}</span>\n{% else %}\n <span class=\"label label-success\">Available</span>\n{% endif %}\n\"\"\"\n\nVLAN_PREFIXES = \"\"\"\n{% for prefix in record.prefixes.all %}\n <a href=\"{% url 'ipam:prefix' pk=prefix.pk %}\">{{ prefix }}</a>{% if not forloop.last %}<br />{% endif %}\n{% empty %}\n &mdash;\n{% endfor %}\n\"\"\"\n\nVLAN_ROLE_LINK = \"\"\"\n{% if record.role %}\n <a href=\"{% url 'ipam:vlan_list' %}?role={{ record.role.slug }}\">{{ record.role }}</a>\n{% else %}\n &mdash;\n{% endif %}\n\"\"\"\n\nVLANGROUP_ACTIONS = \"\"\"\n{% if perms.ipam.change_vlangroup %}\n <a href=\"{% url 'ipam:vlangroup_edit' pk=record.pk %}\" class=\"btn btn-xs btn-warning\"><i class=\"glyphicon glyphicon-pencil\" aria-hidden=\"true\"></i></a>\n{% endif %}\n\"\"\"\n\nTENANT_LINK = \"\"\"\n{% if record.tenant %}\n <a href=\"{% url 'tenancy:tenant' slug=record.tenant.slug %}\">{{ record.tenant }}</a>\n{% elif record.vrf.tenant %}\n <a href=\"{% url 'tenancy:tenant' slug=record.vrf.tenant.slug %}\">{{ record.vrf.tenant }}</a>*\n{% else %}\n &mdash;\n{% endif %}\n\"\"\"\n\n\n#\n# VRFs\n#\n\nclass VRFTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn()\n rd = tables.Column(verbose_name='RD')\n tenant = tables.LinkColumn('tenancy:tenant', args=[Accessor('tenant.slug')])\n\n class Meta(BaseTable.Meta):\n model = VRF\n fields = ('pk', 'name', 'rd', 'tenant', 'description')\n\n\n#\n# RIRs\n#\n\nclass RIRTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn(verbose_name='Name')\n is_private = tables.BooleanColumn(verbose_name='Private')\n aggregate_count = tables.Column(verbose_name='Aggregates')\n actions = tables.TemplateColumn(template_code=RIR_ACTIONS, attrs={'td': {'class': 'text-right'}}, verbose_name='')\n\n class Meta(BaseTable.Meta):\n model = RIR\n fields = ('pk', 'name', 'is_private', 'aggregate_count', 'actions')\n\n\nclass RIRDetailTable(RIRTable):\n stats_total = tables.Column(accessor='stats.total', verbose_name='Total',\n footer=lambda table: sum(r.stats['total'] for r in table.data))\n stats_active = tables.Column(accessor='stats.active', verbose_name='Active',\n footer=lambda table: sum(r.stats['active'] for r in table.data))\n stats_reserved = tables.Column(accessor='stats.reserved', verbose_name='Reserved',\n footer=lambda table: sum(r.stats['reserved'] for r in table.data))\n stats_deprecated = tables.Column(accessor='stats.deprecated', verbose_name='Deprecated',\n footer=lambda table: sum(r.stats['deprecated'] for r in table.data))\n stats_available = tables.Column(accessor='stats.available', verbose_name='Available',\n footer=lambda table: sum(r.stats['available'] for r in table.data))\n utilization = tables.TemplateColumn(template_code=RIR_UTILIZATION, verbose_name='Utilization')\n\n class Meta(RIRTable.Meta):\n fields = (\n 'pk', 'name', 'is_private', 'aggregate_count', 'stats_total', 'stats_active', 'stats_reserved',\n 'stats_deprecated', 'stats_available', 'utilization', 'actions',\n )\n\n\n#\n# Aggregates\n#\n\nclass AggregateTable(BaseTable):\n pk = ToggleColumn()\n prefix = tables.LinkColumn(verbose_name='Aggregate')\n 
date_added = tables.DateColumn(format=\"Y-m-d\", verbose_name='Added')\n\n class Meta(BaseTable.Meta):\n model = Aggregate\n fields = ('pk', 'prefix', 'rir', 'date_added', 'description')\n\n\nclass AggregateDetailTable(AggregateTable):\n child_count = tables.Column(verbose_name='Prefixes')\n get_utilization = tables.TemplateColumn(UTILIZATION_GRAPH, orderable=False, verbose_name='Utilization')\n\n class Meta(AggregateTable.Meta):\n fields = ('pk', 'prefix', 'rir', 'child_count', 'get_utilization', 'date_added', 'description')\n\n\n#\n# Roles\n#\n\nclass RoleTable(BaseTable):\n pk = ToggleColumn()\n name = tables.Column(verbose_name='Name')\n prefix_count = tables.Column(accessor=Accessor('count_prefixes'), orderable=False, verbose_name='Prefixes')\n vlan_count = tables.Column(accessor=Accessor('count_vlans'), orderable=False, verbose_name='VLANs')\n slug = tables.Column(verbose_name='Slug')\n actions = tables.TemplateColumn(template_code=ROLE_ACTIONS, attrs={'td': {'class': 'text-right'}}, verbose_name='')\n\n class Meta(BaseTable.Meta):\n model = Role\n fields = ('pk', 'name', 'prefix_count', 'vlan_count', 'slug', 'actions')\n\n\n#\n# Prefixes\n#\n\nclass PrefixTable(BaseTable):\n pk = ToggleColumn()\n prefix = tables.TemplateColumn(PREFIX_LINK, attrs={'th': {'style': 'padding-left: 17px'}})\n status = tables.TemplateColumn(STATUS_LABEL)\n vrf = tables.TemplateColumn(VRF_LINK, verbose_name='VRF')\n tenant = tables.TemplateColumn(TENANT_LINK)\n site = tables.LinkColumn('dcim:site', args=[Accessor('site.slug')])\n vlan = tables.LinkColumn('ipam:vlan', args=[Accessor('vlan.pk')], verbose_name='VLAN')\n role = tables.TemplateColumn(PREFIX_ROLE_LINK)\n\n class Meta(BaseTable.Meta):\n model = Prefix\n fields = ('pk', 'prefix', 'status', 'vrf', 'tenant', 'site', 'vlan', 'role', 'description')\n row_attrs = {\n 'class': lambda record: 'success' if not record.pk else '',\n }\n\n\nclass PrefixDetailTable(PrefixTable):\n get_utilization = tables.TemplateColumn(UTILIZATION_GRAPH, orderable=False, verbose_name='Utilization')\n\n class Meta(PrefixTable.Meta):\n fields = ('pk', 'prefix', 'status', 'vrf', 'get_utilization', 'tenant', 'site', 'vlan', 'role', 'description')\n\n\n#\n# IPAddresses\n#\n\nclass IPAddressTable(BaseTable):\n pk = ToggleColumn()\n address = tables.TemplateColumn(IPADDRESS_LINK, verbose_name='IP Address')\n status = tables.TemplateColumn(STATUS_LABEL)\n vrf = tables.TemplateColumn(VRF_LINK, verbose_name='VRF')\n tenant = tables.TemplateColumn(TENANT_LINK)\n device = tables.TemplateColumn(IPADDRESS_DEVICE, orderable=False)\n interface = tables.Column(orderable=False)\n\n class Meta(BaseTable.Meta):\n model = IPAddress\n fields = ('pk', 'address', 'vrf', 'status', 'role', 'tenant', 'device', 'interface', 'description')\n row_attrs = {\n 'class': lambda record: 'success' if not isinstance(record, IPAddress) else '',\n }\n\n\nclass IPAddressDetailTable(IPAddressTable):\n nat_inside = tables.LinkColumn(\n 'ipam:ipaddress', args=[Accessor('nat_inside.pk')], orderable=False, verbose_name='NAT (Inside)'\n )\n\n class Meta(IPAddressTable.Meta):\n fields = (\n 'pk', 'address', 'vrf', 'status', 'role', 'tenant', 'nat_inside', 'device', 'interface', 'description',\n )\n\n\n#\n# VLAN groups\n#\n\nclass VLANGroupTable(BaseTable):\n pk = ToggleColumn()\n name = tables.LinkColumn(verbose_name='Name')\n site = tables.LinkColumn('dcim:site', args=[Accessor('site.slug')], verbose_name='Site')\n vlan_count = tables.Column(verbose_name='VLANs')\n slug = tables.Column(verbose_name='Slug')\n actions = 
tables.TemplateColumn(template_code=VLANGROUP_ACTIONS, attrs={'td': {'class': 'text-right'}},\n verbose_name='')\n\n class Meta(BaseTable.Meta):\n model = VLANGroup\n fields = ('pk', 'name', 'site', 'vlan_count', 'slug', 'actions')\n\n\n#\n# VLANs\n#\n\nclass VLANTable(BaseTable):\n pk = ToggleColumn()\n vid = tables.LinkColumn('ipam:vlan', args=[Accessor('pk')], verbose_name='ID')\n site = tables.LinkColumn('dcim:site', args=[Accessor('site.slug')])\n group = tables.Column(accessor=Accessor('group.name'), verbose_name='Group')\n tenant = tables.LinkColumn('tenancy:tenant', args=[Accessor('tenant.slug')])\n status = tables.TemplateColumn(STATUS_LABEL)\n role = tables.TemplateColumn(VLAN_ROLE_LINK)\n\n class Meta(BaseTable.Meta):\n model = VLAN\n fields = ('pk', 'vid', 'site', 'group', 'name', 'tenant', 'status', 'role', 'description')\n\n\nclass VLANDetailTable(VLANTable):\n prefixes = tables.TemplateColumn(VLAN_PREFIXES, orderable=False, verbose_name='Prefixes')\n\n class Meta(VLANTable.Meta):\n fields = ('pk', 'vid', 'site', 'group', 'name', 'prefixes', 'tenant', 'status', 'role', 'description')\n", "path": "netbox/ipam/tables.py" } ]
diff --git a/netbox/ipam/tables.py b/netbox/ipam/tables.py index 65ab5b2e407..af82042dd00 100644 --- a/netbox/ipam/tables.py +++ b/netbox/ipam/tables.py @@ -80,7 +80,6 @@ IPADDRESS_DEVICE = """ {% if record.interface %} <a href="{{ record.interface.device.get_absolute_url }}">{{ record.interface.device }}</a> - ({{ record.interface.name }}) {% else %} &mdash; {% endif %}
google__flax-628
After update from 0.2.0: AttributeError: module 'jax.core' has no attribute 'eval_context' After updating from flax 0.2.0 to flax 0.2.2 I get the above error message. Downgrading to 0.2.0 resolves it, so the update is the source of the error. I'm working with the now deprecated flax.nn package, in case backward compatibility might be the reason for this issue. The issue is encountered in a custom RNN when using the init_by_shape function in conjunction with jax.lax.scan.
[ { "content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding='utf-8').read()\nexcept IOError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.1.59\",\n \"matplotlib\", # only needed for tensorboard export\n \"dataclasses;python_version<'3.7'\", # will only install on py3.6\n \"msgpack\",\n]\n\ntests_require = [\n \"atari-py\",\n \"clu\", # All examples.\n \"gym\",\n \"jaxlib\",\n \"ml-collections\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"sentencepiece\", # WMT example.\n \"svn\",\n \"tensorflow\",\n \"tensorflow_text\", # WMT example.\n \"tensorflow_datasets\",\n]\n\n__version__ = None\n\nwith open('flax/version.py') as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type='text/markdown',\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n include_package_data=False,\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py" } ]
[ { "content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding='utf-8').read()\nexcept IOError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.1.77\",\n \"matplotlib\", # only needed for tensorboard export\n \"dataclasses;python_version<'3.7'\", # will only install on py3.6\n \"msgpack\",\n]\n\ntests_require = [\n \"atari-py\",\n \"clu\", # All examples.\n \"gym\",\n \"jaxlib\",\n \"ml-collections\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"sentencepiece\", # WMT example.\n \"svn\",\n \"tensorflow\",\n \"tensorflow_text\", # WMT example.\n \"tensorflow_datasets\",\n]\n\n__version__ = None\n\nwith open('flax/version.py') as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type='text/markdown',\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n include_package_data=False,\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py" } ]
diff --git a/setup.py b/setup.py index 58bfe451a..7a6cf9292 100644 --- a/setup.py +++ b/setup.py @@ -26,7 +26,7 @@ install_requires = [ "numpy>=1.12", - "jax>=0.1.59", + "jax>=0.1.77", "matplotlib", # only needed for tensorboard export "dataclasses;python_version<'3.7'", # will only install on py3.6 "msgpack",
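The fix above only raises the minimum supported jax version; the missing attribute itself is named in the report. Below is a minimal sketch, assuming jax is importable, for confirming an environment is new enough before upgrading flax — the 0.1.77 floor is taken from the diff, and the attribute check is the authoritative test.

```python
# Hypothetical pre-flight check: confirm the installed jax exposes the
# attribute that flax 0.2.2's deprecated flax.nn code path relies on.
import jax
import jax.core

if not hasattr(jax.core, "eval_context"):
    raise RuntimeError(
        f"jax {jax.__version__} is too old for flax>=0.2.2; "
        "install jax>=0.1.77 or stay on flax==0.2.0"
    )
print("jax.core.eval_context is available")
```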
django-haystack__django-haystack-1831
Update extras_require in setup.py to support Elasticsearch 7
Pipenv fails to generate a lock file with `elasticsearch>=7.0.0` because of the upper version bound in `setup.py`. The bound in `extras_require` should be relaxed so that `Pipfile.lock` can be generated.
![](https://user-images.githubusercontent.com/16464044/149522995-848940d1-bc6e-4e90-b06d-947b6cdd115b.png)
[ { "content": "#!/usr/bin/env python\nfrom setuptools import setup\n\ninstall_requires = [\"Django>=2.2\"]\n\ntests_require = [\n \"pysolr>=3.7.0\",\n \"whoosh>=2.5.4,<3.0\",\n \"python-dateutil\",\n \"geopy==2.0.0\",\n \"nose\",\n \"coverage\",\n \"requests\",\n]\n\nsetup(\n name=\"django-haystack\",\n use_scm_version=True,\n description=\"Pluggable search for Django.\",\n author=\"Daniel Lindsley\",\n author_email=\"[email protected]\",\n long_description=open(\"README.rst\", \"r\").read(),\n url=\"http://haystacksearch.org/\",\n packages=[\n \"haystack\",\n \"haystack.backends\",\n \"haystack.management\",\n \"haystack.management.commands\",\n \"haystack.templatetags\",\n \"haystack.utils\",\n ],\n package_data={\n \"haystack\": [\"templates/panels/*\", \"templates/search_configuration/*\"]\n },\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Django\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Django :: 3.2\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Utilities\",\n ],\n zip_safe=False,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require={\n \"elasticsearch\": [\"elasticsearch>=5,<6\"],\n },\n test_suite=\"test_haystack.run_tests.run_all\",\n)\n", "path": "setup.py" } ]
[ { "content": "#!/usr/bin/env python\nfrom setuptools import setup\n\ninstall_requires = [\"Django>=2.2\"]\n\ntests_require = [\n \"pysolr>=3.7.0\",\n \"whoosh>=2.5.4,<3.0\",\n \"python-dateutil\",\n \"geopy==2.0.0\",\n \"nose\",\n \"coverage\",\n \"requests\",\n]\n\nsetup(\n name=\"django-haystack\",\n use_scm_version=True,\n description=\"Pluggable search for Django.\",\n author=\"Daniel Lindsley\",\n author_email=\"[email protected]\",\n long_description=open(\"README.rst\", \"r\").read(),\n url=\"http://haystacksearch.org/\",\n packages=[\n \"haystack\",\n \"haystack.backends\",\n \"haystack.management\",\n \"haystack.management.commands\",\n \"haystack.templatetags\",\n \"haystack.utils\",\n ],\n package_data={\n \"haystack\": [\"templates/panels/*\", \"templates/search_configuration/*\"]\n },\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Django\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.1\",\n \"Framework :: Django :: 3.2\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Utilities\",\n ],\n zip_safe=False,\n install_requires=install_requires,\n tests_require=tests_require,\n extras_require={\n \"elasticsearch\": [\"elasticsearch>=5,<8\"],\n },\n test_suite=\"test_haystack.run_tests.run_all\",\n)\n", "path": "setup.py" } ]
diff --git a/setup.py b/setup.py index 6033e8dfd..3224ed2a1 100644 --- a/setup.py +++ b/setup.py @@ -55,7 +55,7 @@ install_requires=install_requires, tests_require=tests_require, extras_require={ - "elasticsearch": ["elasticsearch>=5,<6"], + "elasticsearch": ["elasticsearch>=5,<8"], }, test_suite="test_haystack.run_tests.run_all", )
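Because the change is just a wider specifier, it can be sanity-checked outside of pip. Here is a minimal sketch using the third-party `packaging` library (an assumption, not part of the project) showing why `elasticsearch>=7.0.0` was unsatisfiable under the old bound and resolves under the new one.

```python
# Compare the old and new extras_require bounds against an Elasticsearch 7
# release; 7.0.0 is just an illustrative version.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

old_bound = SpecifierSet(">=5,<6")   # before the patch
new_bound = SpecifierSet(">=5,<8")   # after the patch
es7 = Version("7.0.0")

print(es7 in old_bound)  # False -> Pipenv cannot produce Pipfile.lock
print(es7 in new_bound)  # True  -> the lock file resolves
```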
Gallopsled__pwntools-1660
SSL Timeout Error immediately when switching to interactive
#### PoC
```
from pwn import *

r = remote('google.com', 443, ssl=True)
r.interactive()
r.close()
```

It immediately results in:

```
[+] Opening connection to google.com on port 443: Done
[*] Switching to interactive mode
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/hopkins/.local/lib/python2.7/site-packages/pwnlib/tubes/tube.py", line 784, in recv_thread
    cur = self.recv(timeout = 0.05)
  File "/home/hopkins/.local/lib/python2.7/site-packages/pwnlib/tubes/tube.py", line 78, in recv
    return self._recv(numb, timeout) or ''
  File "/home/hopkins/.local/lib/python2.7/site-packages/pwnlib/tubes/tube.py", line 156, in _recv
    if not self.buffer and not self._fillbuffer(timeout):
  File "/home/hopkins/.local/lib/python2.7/site-packages/pwnlib/tubes/tube.py", line 126, in _fillbuffer
    data = self.recv_raw(self.buffer.get_fill_size())
  File "/home/hopkins/.local/lib/python2.7/site-packages/pwnlib/tubes/sock.py", line 37, in recv_raw
    data = self.sock.recv(numb, *a)
  File "/usr/lib/python2.7/ssl.py", line 772, in recv
    return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 659, in read
    v = self._sslobj.read(len)
SSLError: ('The read operation timed out',)
```

Note that doing so on a non-SSL server doesn't have this issue:

```
from pwn import *

r = remote('google.com', 80, ssl=False)
r.interactive()
r.close()
```

It allows you to type an HTTP request in interactive mode and returns the server response without any issues.

```
GET /

```

```
<HTTP Responses>
```

Is the SSL feature broken in pwntools?
[ { "content": "from __future__ import absolute_import\nfrom __future__ import division\n\nimport errno\nimport select\nimport six\nimport socket\n\nfrom pwnlib.log import getLogger\nfrom pwnlib.tubes.tube import tube\n\nlog = getLogger(__name__)\n\nclass sock(tube):\n \"\"\"Base type used for :class:`.tubes.remote` and :class:`.tubes.listen` classes\"\"\"\n\n def __init__(self, *args, **kwargs):\n super(sock, self).__init__(*args, **kwargs)\n self.closed = {\"recv\": False, \"send\": False}\n\n # Overwritten for better usability\n def recvall(self, timeout = tube.forever):\n \"\"\"recvall() -> str\n\n Receives data until the socket is closed.\n \"\"\"\n\n if getattr(self, 'type', None) == socket.SOCK_DGRAM:\n self.error(\"UDP sockets does not supports recvall\")\n else:\n return super(sock, self).recvall(timeout)\n\n def recv_raw(self, numb, *a):\n if self.closed[\"recv\"]:\n raise EOFError\n\n while True:\n try:\n data = self.sock.recv(numb, *a)\n break\n except socket.timeout:\n return None\n except IOError as e:\n if e.errno == errno.EAGAIN:\n return None\n elif e.errno in (errno.ECONNREFUSED, errno.ECONNRESET):\n self.shutdown(\"recv\")\n raise EOFError\n elif e.errno == errno.EINTR:\n continue\n else:\n raise\n\n if not data:\n self.shutdown(\"recv\")\n raise EOFError\n\n return data\n\n def send_raw(self, data):\n if self.closed[\"send\"]:\n raise EOFError\n\n try:\n self.sock.sendall(data)\n except IOError as e:\n eof_numbers = (errno.EPIPE, errno.ECONNRESET, errno.ECONNREFUSED)\n if e.errno in eof_numbers or 'Socket is closed' in e.args:\n self.shutdown(\"send\")\n raise EOFError\n else:\n raise\n\n def settimeout_raw(self, timeout):\n if getattr(self, 'sock', None):\n self.sock.settimeout(timeout)\n\n def can_recv_raw(self, timeout):\n \"\"\"\n Tests:\n\n >>> l = listen()\n >>> r = remote('localhost', l.lport)\n >>> r.can_recv_raw(timeout=0)\n False\n >>> l.send(b'a')\n >>> r.can_recv_raw(timeout=1)\n True\n >>> r.recv()\n b'a'\n >>> r.can_recv_raw(timeout=0)\n False\n >>> l.close()\n >>> r.can_recv_raw(timeout=1)\n False\n >>> r.closed['recv']\n True\n \"\"\"\n if not self.sock or self.closed[\"recv\"]:\n return False\n\n # select() will tell us data is available at EOF\n can_recv = select.select([self.sock], [], [], timeout) == ([self.sock], [], [])\n\n if not can_recv:\n return False\n\n # Ensure there's actually data, not just EOF\n try:\n self.recv_raw(1, socket.MSG_PEEK)\n except EOFError:\n return False\n\n return True\n\n def connected_raw(self, direction):\n \"\"\"\n Tests:\n\n >>> l = listen()\n >>> r = remote('localhost', l.lport)\n >>> r.connected()\n True\n >>> l.close()\n >>> time.sleep(0.1) # Avoid race condition\n >>> r.connected()\n False\n \"\"\"\n # If there's no socket, it's definitely closed\n if not self.sock:\n return False\n\n # If we have noticed a connection close in a given direction before,\n # return fast.\n if self.closed.get(direction, False):\n return False\n\n # If a connection is closed in all manners, return fast\n if all(self.closed.values()):\n return False\n\n # Use poll() to determine the connection state\n want = {\n 'recv': select.POLLIN,\n 'send': select.POLLOUT,\n 'any': select.POLLIN | select.POLLOUT,\n }[direction]\n\n poll = select.poll()\n poll.register(self, want | select.POLLHUP | select.POLLERR)\n\n for fd, event in poll.poll(0):\n if event & select.POLLHUP:\n self.close()\n return False\n if event & select.POLLIN:\n return True\n if event & select.POLLOUT:\n return True\n\n return True\n\n def close(self):\n if not getattr(self, 
'sock', None):\n return\n\n # Mark as closed in both directions\n self.closed['send'] = True\n self.closed['recv'] = True\n\n self.sock.close()\n self.sock = None\n self._close_msg()\n\n def _close_msg(self):\n self.info('Closed connection to %s port %d' % (self.rhost, self.rport))\n\n def fileno(self):\n if not self.sock:\n self.error(\"A closed socket does not have a file number\")\n\n return self.sock.fileno()\n\n def shutdown_raw(self, direction):\n if self.closed[direction]:\n return\n\n self.closed[direction] = True\n\n if direction == \"send\":\n try:\n self.sock.shutdown(socket.SHUT_WR)\n except IOError as e:\n if e.errno == errno.ENOTCONN:\n pass\n else:\n raise\n\n if direction == \"recv\":\n try:\n self.sock.shutdown(socket.SHUT_RD)\n except IOError as e:\n if e.errno == errno.ENOTCONN:\n pass\n else:\n raise\n\n if False not in self.closed.values():\n self.close()\n\n @classmethod\n def _get_family(cls, fam):\n if isinstance(fam, six.integer_types):\n pass\n elif fam == 'any':\n fam = socket.AF_UNSPEC\n elif fam.lower() in ['ipv4', 'ip4', 'v4', '4']:\n fam = socket.AF_INET\n elif fam.lower() in ['ipv6', 'ip6', 'v6', '6']:\n fam = socket.AF_INET6\n else:\n self.error(\"%s(): socket family %r is not supported\",\n cls.__name__,\n fam)\n\n return fam\n\n @classmethod\n def _get_type(cls, typ):\n if isinstance(typ, six.integer_types):\n pass\n elif typ == \"tcp\":\n typ = socket.SOCK_STREAM\n elif typ == \"udp\":\n typ = socket.SOCK_DGRAM\n else:\n self.error(\"%s(): socket type %r is not supported\",\n cls.__name__,\n typ)\n\n return typ\n", "path": "pwnlib/tubes/sock.py" } ]
[ { "content": "from __future__ import absolute_import\nfrom __future__ import division\n\nimport errno\nimport select\nimport six\nimport socket\n\nfrom pwnlib.log import getLogger\nfrom pwnlib.tubes.tube import tube\n\nlog = getLogger(__name__)\n\nclass sock(tube):\n \"\"\"Base type used for :class:`.tubes.remote` and :class:`.tubes.listen` classes\"\"\"\n\n def __init__(self, *args, **kwargs):\n super(sock, self).__init__(*args, **kwargs)\n self.closed = {\"recv\": False, \"send\": False}\n\n # Overwritten for better usability\n def recvall(self, timeout = tube.forever):\n \"\"\"recvall() -> str\n\n Receives data until the socket is closed.\n \"\"\"\n\n if getattr(self, 'type', None) == socket.SOCK_DGRAM:\n self.error(\"UDP sockets does not supports recvall\")\n else:\n return super(sock, self).recvall(timeout)\n\n def recv_raw(self, numb, *a):\n if self.closed[\"recv\"]:\n raise EOFError\n\n while True:\n try:\n data = self.sock.recv(numb, *a)\n break\n except socket.timeout:\n return None\n except IOError as e:\n if e.errno == errno.EAGAIN:\n return None\n elif e.errno in (errno.ECONNREFUSED, errno.ECONNRESET):\n self.shutdown(\"recv\")\n raise EOFError\n elif e.errno == errno.EINTR:\n continue\n elif 'timed out' in e.message:\n return None\n else:\n raise\n\n if not data:\n self.shutdown(\"recv\")\n raise EOFError\n\n return data\n\n def send_raw(self, data):\n if self.closed[\"send\"]:\n raise EOFError\n\n try:\n self.sock.sendall(data)\n except IOError as e:\n eof_numbers = (errno.EPIPE, errno.ECONNRESET, errno.ECONNREFUSED)\n if e.errno in eof_numbers or 'Socket is closed' in e.args:\n self.shutdown(\"send\")\n raise EOFError\n else:\n raise\n\n def settimeout_raw(self, timeout):\n if getattr(self, 'sock', None):\n self.sock.settimeout(timeout)\n\n def can_recv_raw(self, timeout):\n \"\"\"\n Tests:\n\n >>> l = listen()\n >>> r = remote('localhost', l.lport)\n >>> r.can_recv_raw(timeout=0)\n False\n >>> l.send(b'a')\n >>> r.can_recv_raw(timeout=1)\n True\n >>> r.recv()\n b'a'\n >>> r.can_recv_raw(timeout=0)\n False\n >>> l.close()\n >>> r.can_recv_raw(timeout=1)\n False\n >>> r.closed['recv']\n True\n \"\"\"\n if not self.sock or self.closed[\"recv\"]:\n return False\n\n # select() will tell us data is available at EOF\n can_recv = select.select([self.sock], [], [], timeout) == ([self.sock], [], [])\n\n if not can_recv:\n return False\n\n # Ensure there's actually data, not just EOF\n try:\n self.recv_raw(1, socket.MSG_PEEK)\n except EOFError:\n return False\n\n return True\n\n def connected_raw(self, direction):\n \"\"\"\n Tests:\n\n >>> l = listen()\n >>> r = remote('localhost', l.lport)\n >>> r.connected()\n True\n >>> l.close()\n >>> time.sleep(0.1) # Avoid race condition\n >>> r.connected()\n False\n \"\"\"\n # If there's no socket, it's definitely closed\n if not self.sock:\n return False\n\n # If we have noticed a connection close in a given direction before,\n # return fast.\n if self.closed.get(direction, False):\n return False\n\n # If a connection is closed in all manners, return fast\n if all(self.closed.values()):\n return False\n\n # Use poll() to determine the connection state\n want = {\n 'recv': select.POLLIN,\n 'send': select.POLLOUT,\n 'any': select.POLLIN | select.POLLOUT,\n }[direction]\n\n poll = select.poll()\n poll.register(self, want | select.POLLHUP | select.POLLERR)\n\n for fd, event in poll.poll(0):\n if event & select.POLLHUP:\n self.close()\n return False\n if event & select.POLLIN:\n return True\n if event & select.POLLOUT:\n return True\n\n return 
True\n\n def close(self):\n if not getattr(self, 'sock', None):\n return\n\n # Mark as closed in both directions\n self.closed['send'] = True\n self.closed['recv'] = True\n\n self.sock.close()\n self.sock = None\n self._close_msg()\n\n def _close_msg(self):\n self.info('Closed connection to %s port %d' % (self.rhost, self.rport))\n\n def fileno(self):\n if not self.sock:\n self.error(\"A closed socket does not have a file number\")\n\n return self.sock.fileno()\n\n def shutdown_raw(self, direction):\n if self.closed[direction]:\n return\n\n self.closed[direction] = True\n\n if direction == \"send\":\n try:\n self.sock.shutdown(socket.SHUT_WR)\n except IOError as e:\n if e.errno == errno.ENOTCONN:\n pass\n else:\n raise\n\n if direction == \"recv\":\n try:\n self.sock.shutdown(socket.SHUT_RD)\n except IOError as e:\n if e.errno == errno.ENOTCONN:\n pass\n else:\n raise\n\n if False not in self.closed.values():\n self.close()\n\n @classmethod\n def _get_family(cls, fam):\n if isinstance(fam, six.integer_types):\n pass\n elif fam == 'any':\n fam = socket.AF_UNSPEC\n elif fam.lower() in ['ipv4', 'ip4', 'v4', '4']:\n fam = socket.AF_INET\n elif fam.lower() in ['ipv6', 'ip6', 'v6', '6']:\n fam = socket.AF_INET6\n else:\n self.error(\"%s(): socket family %r is not supported\",\n cls.__name__,\n fam)\n\n return fam\n\n @classmethod\n def _get_type(cls, typ):\n if isinstance(typ, six.integer_types):\n pass\n elif typ == \"tcp\":\n typ = socket.SOCK_STREAM\n elif typ == \"udp\":\n typ = socket.SOCK_DGRAM\n else:\n self.error(\"%s(): socket type %r is not supported\",\n cls.__name__,\n typ)\n\n return typ\n", "path": "pwnlib/tubes/sock.py" } ]
diff --git a/pwnlib/tubes/sock.py b/pwnlib/tubes/sock.py index f25419836..f2a4adf54 100644 --- a/pwnlib/tubes/sock.py +++ b/pwnlib/tubes/sock.py @@ -48,6 +48,8 @@ def recv_raw(self, numb, *a): raise EOFError elif e.errno == errno.EINTR: continue + elif 'timed out' in e.message: + return None else: raise
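The patch above treats an SSL read timeout — which can surface as an `ssl.SSLError` whose message contains "timed out" rather than as `socket.timeout`, as the Python 2.7 traceback in the report shows — the same way as a plain timeout. Below is a rough, hedged reproduction against an arbitrary TLS endpoint; google.com:443 is only an example, and depending on the Python version the timeout may arrive as either exception type.

```python
# Show how a read timeout on a wrapped TLS socket is reported. It arrives
# either as socket.timeout or as ssl.SSLError with "timed out" in the
# message -- the latter is the case the patched recv_raw now absorbs.
import socket
import ssl

ctx = ssl.create_default_context()
raw = socket.create_connection(("google.com", 443), timeout=5)
tls = ctx.wrap_socket(raw, server_hostname="google.com")
tls.settimeout(0.05)

try:
    tls.recv(4096)  # the server sends nothing unsolicited, so this times out
except socket.timeout as exc:
    print("socket.timeout:", exc)
except ssl.SSLError as exc:
    print("ssl.SSLError:", exc)
finally:
    tls.close()
```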
pymodbus-dev__pymodbus-1443
StartAsyncSerialServer doesn't work pymodbus version 3.2
### Discussed in https://github.com/pymodbus-dev/pymodbus/discussions/1433

<sup>Originally posted by **dlmoffett** March 13, 2023</sup>

### Versions

- Python: 3.9
- OS: Windows
- Pymodbus: 3.2.0
- Modbus Hardware (if used): USB to RS485

### Pymodbus Specific

- Server: `StartAsyncSerialServer`

### Description

`StartAsyncSerialServer` no longer actually starts the server.

### Code and Logs

```python
import asyncio
import logging

from pymodbus.framer.rtu_framer import ModbusRtuFramer
from pymodbus.server import StartAsyncSerialServer

logging.basicConfig(
    format="[%(asctime)s] [%(levelname)s] [%(name)s] %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S %z",
    level=logging.INFO,
)
log = logging.getLogger(__name__)


async def modbus_slave():
    return await StartAsyncSerialServer(
        framer=ModbusRtuFramer,
        port="COM10",
        baudrate=19200,
    )


def sanity():
    log.info(f"✅ Here I am!")
    asyncio.run(modbus_slave())
    log.info(f"❌ I'm going insane!")
```

Per the [example documentation](https://pymodbus.readthedocs.io/en/dev/source/examples.html#asynchronous-server-example) I would expect to only see my first log statement and for the server to be up and running indefinitely until I send a keyboard interrupt.

However, that's not the case and I see the following output when running the above code, which exits immediately after making the logs:

```
[2023-03-13 16:07:53 -0600] [INFO] [fix_pymodbus.sanity] ✅ Here I am!
[2023-03-13 16:07:53 -0600] [INFO] [fix_pymodbus.sanity] ❌ I'm going insane!
```

### Working Monkey Patch

If I make the following monkey patch everything works as expected:

```python
async def StartAsyncSerialServer(  # pylint: disable=invalid-name,dangerous-default-value
    context=None,
    identity=None,
    custom_functions=[],
    **kwargs,
):  # pragma: no cover
    """Start and run a serial modbus server.

    :param context: The ModbusServerContext datastore
    :param identity: An optional identify structure
    :param custom_functions: An optional list of custom function classes
        supported by server instance.
    :param kwargs: The rest
    """
    server = ModbusSerialServer(
        context, kwargs.pop("framer", ModbusAsciiFramer), identity=identity, **kwargs
    )
    await server.start()  # <----------------------Adding this makes it work 🤔
    await _serverList.run(server, custom_functions)
```

### Expected Behavior

I expect the sample code above to run until a keyboard interrupt.
[ { "content": "\"\"\"Implementation of a Threaded Modbus Server.\"\"\"\n# pylint: disable=missing-type-doc\nimport asyncio\nimport ssl\nimport time\nimport traceback\nfrom typing import Union\n\nfrom pymodbus.client.serial_asyncio import create_serial_connection\nfrom pymodbus.constants import Defaults\nfrom pymodbus.datastore import ModbusServerContext\nfrom pymodbus.device import ModbusControlBlock, ModbusDeviceIdentification\nfrom pymodbus.exceptions import NoSuchSlaveException, NotImplementedException\nfrom pymodbus.factory import ServerDecoder\nfrom pymodbus.logging import Log\nfrom pymodbus.pdu import ModbusExceptions as merror\nfrom pymodbus.transaction import (\n ModbusAsciiFramer,\n ModbusRtuFramer,\n ModbusSocketFramer,\n ModbusTlsFramer,\n)\n\n\ntry:\n import serial\nexcept ImportError:\n pass\n\n\ndef sslctx_provider(\n sslctx=None, certfile=None, keyfile=None, password=None, reqclicert=False\n):\n \"\"\"Provide the SSLContext for ModbusTlsServer.\n\n If the user defined SSLContext is not passed in, sslctx_provider will\n produce a default one.\n\n :param sslctx: The user defined SSLContext to use for TLS (default None and\n auto create)\n :param certfile: The cert file path for TLS (used if sslctx is None)\n :param keyfile: The key file path for TLS (used if sslctx is None)\n :param password: The password for for decrypting the private key file\n :param reqclicert: Force the sever request client's certificate\n \"\"\"\n if sslctx is None:\n # According to MODBUS/TCP Security Protocol Specification, it is\n # TLSv2 at least\n sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)\n sslctx.verify_mode = ssl.CERT_NONE\n sslctx.check_hostname = False\n sslctx.options |= ssl.OP_NO_TLSv1_1\n sslctx.options |= ssl.OP_NO_TLSv1\n sslctx.options |= ssl.OP_NO_SSLv3\n sslctx.options |= ssl.OP_NO_SSLv2\n sslctx.load_cert_chain(certfile=certfile, keyfile=keyfile, password=password)\n\n if reqclicert:\n sslctx.verify_mode = ssl.CERT_REQUIRED\n\n return sslctx\n\n\n# --------------------------------------------------------------------------- #\n# Protocol Handlers\n# --------------------------------------------------------------------------- #\n\n\nclass ModbusBaseRequestHandler(asyncio.BaseProtocol):\n \"\"\"Implements modbus slave wire protocol.\n\n This uses the asyncio.Protocol to implement the client handler.\n\n When a connection is established, the asyncio.Protocol.connection_made\n callback is called. 
This callback will setup the connection and\n create and schedule an asyncio.Task and assign it to running_task.\n\n running_task will be canceled upon connection_lost event.\n \"\"\"\n\n def __init__(self, owner):\n \"\"\"Initialize.\"\"\"\n self.server = owner\n self.running = False\n self.receive_queue = asyncio.Queue()\n self.handler_task = None # coroutine to be run on asyncio loop\n self._sent = b\"\" # for handle_local_echo\n\n def _log_exception(self):\n \"\"\"Show log exception.\"\"\"\n if isinstance(self, ModbusConnectedRequestHandler):\n Log.debug(\n \"Handler for stream [{}] has been canceled\", self.client_address[:2]\n )\n elif isinstance(self, ModbusSingleRequestHandler):\n Log.debug(\"Handler for serial port has been cancelled\")\n else:\n if hasattr(self, \"protocol\"):\n sock_name = (\n self.protocol._sock.getsockname() # pylint: disable=protected-access\n )\n else:\n sock_name = \"No socket\"\n Log.debug(\"Handler for UDP socket [{}] has been canceled\", sock_name[1])\n\n def connection_made(self, transport):\n \"\"\"Call for socket establish\n\n For streamed protocols (TCP) this will also correspond to an\n entire conversation; however for datagram protocols (UDP) this\n corresponds to the socket being opened\n \"\"\"\n try:\n if (\n hasattr(transport, \"get_extra_info\")\n and transport.get_extra_info(\"sockname\") is not None\n ):\n sockname = transport.get_extra_info(\"sockname\")[:2]\n Log.debug(\"Socket [{}] opened\", sockname)\n elif hasattr(transport, \"serial\"):\n Log.debug(\"Serial connection opened on port: {}\", transport.serial.port)\n else:\n Log.warning(\"Unable to get information about transport {}\", transport)\n self.transport = transport # pylint: disable=attribute-defined-outside-init\n self.running = True\n self.framer = ( # pylint: disable=attribute-defined-outside-init\n self.server.framer(\n self.server.decoder,\n client=None,\n )\n )\n\n # schedule the connection handler on the event loop\n self.handler_task = asyncio.create_task(self.handle())\n except Exception as exc: # pragma: no cover pylint: disable=broad-except\n Log.error(\n \"Datastore unable to fulfill request: {}; {}\",\n exc,\n traceback.format_exc(),\n )\n\n def connection_lost(self, call_exc):\n \"\"\"Call for socket tear down.\n\n For streamed protocols any break in the network connection will\n be reported here; for datagram protocols, only a teardown of the\n socket itself will result in this call.\n \"\"\"\n try:\n if self.handler_task:\n self.handler_task.cancel()\n if call_exc is None:\n self._log_exception()\n elif hasattr(self, \"client_address\"): # TCP connection\n Log.debug(\n \"Client Disconnection {} due to {}\", self.client_address, call_exc\n )\n\n self.running = False\n except Exception as exc: # pylint: disable=broad-except\n Log.error(\n \"Datastore unable to fulfill request: {}; {}\",\n exc,\n traceback.format_exc(),\n )\n\n async def handle(self): # pylint: disable=too-complex\n \"\"\"Return Asyncio coroutine which represents a single conversation.\n\n between the modbus slave and master\n\n Once the client connection is established, the data chunks will be\n fed to this coroutine via the asyncio.Queue object which is fed by\n the ModbusBaseRequestHandler class's callback Future.\n\n This callback future gets data from either\n asyncio.DatagramProtocol.datagram_received or\n from asyncio.BaseProtocol.data_received.\n\n This function will execute without blocking in the while-loop and\n yield to the asyncio event loop when the frame is exhausted.\n As a result, 
multiple clients can be interleaved without any\n interference between them.\n\n For ModbusConnectedRequestHandler, each connection will be given an\n instance of the handle() coroutine and this instance will be put in the\n active_connections dict. Calling server_close will individually cancel\n each running handle() task.\n\n For ModbusDisconnectedRequestHandler, a single handle() coroutine will\n be started and maintained. Calling server_close will cancel that task.\n \"\"\"\n reset_frame = False\n while self.running:\n try:\n slaves = self.server.context.slaves()\n # this is an asyncio.Queue await, it will never fail\n data = await self._recv_()\n if isinstance(data, tuple):\n # addr is populated when talking over UDP\n data, *addr = data\n else:\n addr = (None,) # empty tuple\n\n if not isinstance(slaves, (list, tuple)):\n slaves = [slaves]\n # if broadcast is enabled make sure to\n # process requests to address 0\n if self.server.broadcast_enable: # pragma: no cover\n if 0 not in slaves:\n slaves.append(0)\n\n Log.debug(\"Handling data: {}\", data, \":hex\")\n\n single = self.server.context.single\n self.framer.processIncomingPacket(\n data=data,\n callback=lambda x: self.execute(x, *addr),\n slave=slaves,\n single=single,\n )\n\n except asyncio.CancelledError:\n # catch and ignore cancellation errors\n if self.running:\n self._log_exception()\n self.running = False\n except Exception as exc: # pylint: disable=broad-except\n # force TCP socket termination as processIncomingPacket\n # should handle application layer errors\n # for UDP sockets, simply reset the frame\n if isinstance(self, ModbusConnectedRequestHandler):\n client_addr = self.client_address[:2]\n Log.error(\n 'Unknown exception \"{}\" on stream {} forcing disconnect',\n exc,\n client_addr,\n )\n self.transport.close()\n else:\n Log.error(\"Unknown error occurred {}\", exc)\n reset_frame = True # graceful recovery\n finally:\n if reset_frame:\n self.framer.resetFrame()\n reset_frame = False\n\n def execute(self, request, *addr):\n \"\"\"Call with the resulting message.\n\n :param request: The decoded request message\n :param addr: the address\n \"\"\"\n if self.server.request_tracer:\n self.server.request_tracer(request, *addr)\n\n broadcast = False\n try:\n if self.server.broadcast_enable and not request.slave_id:\n broadcast = True\n # if broadcasting then execute on all slave contexts,\n # note response will be ignored\n for slave_id in self.server.context.slaves():\n response = request.execute(self.server.context[slave_id])\n else:\n context = self.server.context[request.slave_id]\n response = request.execute(context)\n except NoSuchSlaveException:\n Log.error(\"requested slave does not exist: {}\", request.slave_id)\n if self.server.ignore_missing_slaves:\n return # the client will simply timeout waiting for a response\n response = request.doException(merror.GatewayNoResponse)\n except Exception as exc: # pylint: disable=broad-except\n Log.error(\n \"Datastore unable to fulfill request: {}; {}\",\n exc,\n traceback.format_exc(),\n )\n response = request.doException(merror.SlaveFailure)\n # no response when broadcasting\n if not broadcast:\n response.transaction_id = request.transaction_id\n response.slave_id = request.slave_id\n skip_encoding = False\n if self.server.response_manipulator:\n response, skip_encoding = self.server.response_manipulator(response)\n self.send(response, *addr, skip_encoding=skip_encoding)\n\n def send(self, message, *addr, **kwargs):\n \"\"\"Send message.\"\"\"\n\n def __send(msg, *addr):\n 
Log.debug(\"send: [{}]- {}\", message, msg, \":b2a\")\n if addr == (None,):\n self._send_(msg)\n else:\n self._send_(msg, *addr)\n\n if kwargs.get(\"skip_encoding\", False):\n __send(message, *addr)\n elif message.should_respond:\n # self.server.control.Counter.BusMessage += 1\n pdu = self.framer.buildPacket(message)\n __send(pdu, *addr)\n else:\n Log.debug(\"Skipping sending response!!\")\n\n # ----------------------------------------------------------------------- #\n # Derived class implementations\n # ----------------------------------------------------------------------- #\n\n def _send_(self, data): # pragma: no cover\n \"\"\"Send a request (string) to the network.\n\n :param data: The unencoded modbus response\n :raises NotImplementedException:\n \"\"\"\n raise NotImplementedException(\"Method not implemented by derived class\")\n\n async def _recv_(self): # pragma: no cover\n \"\"\"Receive data from the network.\n\n :raises NotImplementedException:\n \"\"\"\n raise NotImplementedException(\"Method not implemented by derived class\")\n\n\nclass ModbusConnectedRequestHandler(ModbusBaseRequestHandler, asyncio.Protocol):\n \"\"\"Implements the modbus server protocol\n\n This uses asyncio.Protocol to implement\n the client handler for a connected protocol (TCP).\n \"\"\"\n\n def connection_made(self, transport):\n \"\"\"Call when a connection is made.\"\"\"\n super().connection_made(transport)\n\n self.client_address = ( # pylint: disable=attribute-defined-outside-init\n transport.get_extra_info(\"peername\")\n )\n self.server.active_connections[self.client_address] = self\n txt = f\"TCP client connection established [{self.client_address[:2]}]\"\n Log.debug(txt)\n\n def connection_lost(self, call_exc):\n \"\"\"Call when the connection is lost or closed.\"\"\"\n super().connection_lost(call_exc)\n client_addr = self.client_address[:2]\n Log.debug(\"TCP client disconnected [{}]\", client_addr)\n if self.client_address in self.server.active_connections:\n self.server.active_connections.pop(self.client_address)\n\n def data_received(self, data):\n \"\"\"Call when some data is received.\n\n data is a non-empty bytes object containing the incoming data.\n \"\"\"\n self.receive_queue.put_nowait(data)\n\n async def _recv_(self):\n try:\n result = await self.receive_queue.get()\n except RuntimeError:\n Log.error(\"Event loop is closed\")\n result = None\n return result\n\n def _send_(self, data):\n \"\"\"Send tcp.\"\"\"\n self.transport.write(data)\n\n def close(self):\n \"\"\"Close socket.\"\"\"\n self.transport.abort()\n\n\nclass ModbusDisconnectedRequestHandler(\n ModbusBaseRequestHandler, asyncio.DatagramProtocol\n):\n \"\"\"Implements the modbus server protocol\n\n This uses the socketserver.BaseRequestHandler to implement\n the client handler for a disconnected protocol (UDP). The\n only difference is that we have to specify who to send the\n resulting packet data to.\n \"\"\"\n\n def __init__(self, owner):\n \"\"\"Initialize.\"\"\"\n super().__init__(owner)\n _future = asyncio.get_running_loop().create_future()\n self.server.on_connection_terminated = _future\n\n def connection_lost(self, call_exc):\n \"\"\"Handle connection lost.\"\"\"\n super().connection_lost(call_exc)\n self.server.on_connection_terminated.set_result(True)\n\n def datagram_received(self, data, addr):\n \"\"\"Call when a datagram is received.\n\n data is a bytes object containing the incoming data. 
addr\n is the address of the peer sending the data; the exact\n format depends on the transport.\n \"\"\"\n self.receive_queue.put_nowait((data, addr))\n\n def error_received(self, exc): # pragma: no cover\n \"\"\"Call when a previous send/receive raises an OSError.\n\n exc is the OSError instance.\n\n This method is called in rare conditions,\n when the transport (e.g. UDP) detects that a datagram could\n not be delivered to its recipient. In many conditions\n though, undeliverable datagrams will be silently dropped.\n \"\"\"\n Log.error(\"datagram connection error [{}]\", exc)\n\n async def _recv_(self):\n return await self.receive_queue.get()\n\n def _send_(self, data, addr=None):\n self.transport.sendto(data, addr=addr)\n\n\nclass ModbusSingleRequestHandler(ModbusBaseRequestHandler, asyncio.Protocol):\n \"\"\"Implement the modbus server protocol.\n\n This uses asyncio.Protocol to implement\n the client handler for a serial connection.\n \"\"\"\n\n def connection_made(self, transport):\n \"\"\"Handle connect made.\"\"\"\n self.server.active_connection = self\n super().connection_made(transport)\n Log.debug(\"Serial connection established\")\n\n def connection_lost(self, call_exc):\n \"\"\"Handle connection lost.\"\"\"\n super().connection_lost(call_exc)\n Log.debug(\"Serial connection lost\")\n if hasattr(self.server, \"on_connection_lost\"):\n self.server.on_connection_lost()\n\n def data_received(self, data):\n \"\"\"Receive data.\"\"\"\n if (\n hasattr(self.server, \"handle_local_echo\")\n and self.server.handle_local_echo is True\n and self._sent\n ):\n if self._sent in data:\n data, self._sent = data.replace(self._sent, b\"\", 1), b\"\"\n elif self._sent.startswith(data):\n self._sent, data = self._sent.replace(data, b\"\", 1), b\"\"\n else:\n self._sent = b\"\"\n if not data:\n return\n self.receive_queue.put_nowait(data)\n\n async def _recv_(self):\n return await self.receive_queue.get()\n\n def _send_(self, data):\n if self.transport is not None:\n self.transport.write(data)\n if (\n hasattr(self.server, \"handle_local_echo\")\n and self.server.handle_local_echo is True\n ):\n self._sent = data\n\n\n# --------------------------------------------------------------------------- #\n# Server Implementations\n# --------------------------------------------------------------------------- #\n\n\nclass ModbusUnixServer:\n \"\"\"A modbus threaded Unix socket server.\n\n We inherit and overload the socket server so that we\n can control the client threads as well as have a single\n server context instance.\n \"\"\"\n\n def __init__(\n self,\n context,\n path,\n framer=None,\n identity=None,\n handler=None,\n **kwargs,\n ):\n \"\"\"Initialize the socket server.\n\n If the identify structure is not passed in, the ModbusControlBlock\n uses its own default structure.\n\n :param context: The ModbusServerContext datastore\n :param path: unix socket path\n :param framer: The framer strategy to use\n :param identity: An optional identify structure\n :param handler: A handler for each client session; default is\n ModbusConnectedRequestHandler. 
The handler class\n receives connection create/teardown events\n :param allow_reuse_address: Whether the server will allow the\n reuse of an address.\n :param ignore_missing_slaves: True to not send errors on a request\n to a missing slave\n :param broadcast_enable: True to treat slave_id 0 as broadcast address,\n False to treat 0 as any other slave_id\n :param response_manipulator: Callback method for manipulating the\n response\n \"\"\"\n self.active_connections = {}\n self.loop = kwargs.get(\"loop\") or asyncio.get_event_loop()\n self.decoder = ServerDecoder()\n self.framer = framer or ModbusSocketFramer\n self.context = context or ModbusServerContext()\n self.control = ModbusControlBlock()\n self.path = path\n self.handler = handler or ModbusConnectedRequestHandler\n self.handler.server = self\n self.ignore_missing_slaves = kwargs.get(\n \"ignore_missing_slaves\", Defaults.IgnoreMissingSlaves\n )\n self.broadcast_enable = kwargs.get(\"broadcast_enable\", Defaults.BroadcastEnable)\n self.response_manipulator = kwargs.get(\"response_manipulator\", None)\n if isinstance(identity, ModbusDeviceIdentification):\n self.control.Identity.update(identity)\n\n # asyncio future that will be done once server has started\n self.serving = self.loop.create_future()\n # constructors cannot be declared async, so we have to\n # defer the initialization of the server\n self.server = None\n self.request_tracer = None\n self.factory_parms = {}\n\n async def serve_forever(self):\n \"\"\"Start endless loop.\"\"\"\n if self.server is None:\n try:\n self.server = await self.loop.create_unix_server(\n lambda: self.handler(self),\n self.path,\n )\n self.serving.set_result(True)\n Log.info(\"Server(Unix) listening.\")\n await self.server.serve_forever()\n except asyncio.exceptions.CancelledError:\n raise\n except Exception as exc: # pylint: disable=broad-except\n Log.error(\"Server unexpected exception {}\", exc)\n else:\n raise RuntimeError(\n \"Can't call serve_forever on an already running server object\"\n )\n Log.info(\"Server graceful shutdown.\")\n\n async def shutdown(self):\n \"\"\"Shutdown server.\"\"\"\n await self.server_close()\n\n async def server_close(self):\n \"\"\"Close server.\"\"\"\n for k_item, v_item in self.active_connections.items():\n Log.warning(\"aborting active session {}\", k_item)\n v_item.handler_task.cancel()\n self.active_connections = {}\n if self.server is not None:\n self.server.close()\n await self.server.wait_closed()\n self.server = None\n\n\nclass ModbusTcpServer:\n \"\"\"A modbus threaded tcp socket server.\n\n We inherit and overload the socket server so that we\n can control the client threads as well as have a single\n server context instance.\n \"\"\"\n\n def __init__(\n self,\n context,\n framer=None,\n identity=None,\n address=None,\n handler=None,\n allow_reuse_address=False,\n defer_start=False,\n backlog=20,\n **kwargs,\n ):\n \"\"\"Initialize the socket server.\n\n If the identify structure is not passed in, the ModbusControlBlock\n uses its own empty structure.\n\n :param context: The ModbusServerContext datastore\n :param framer: The framer strategy to use\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param handler: A handler for each client session; default is\n ModbusConnectedRequestHandler. 
The handler class\n receives connection create/teardown events\n :param allow_reuse_address: Whether the server will allow the\n reuse of an address.\n :param backlog: is the maximum number of queued connections\n passed to listen(). Defaults to 20, increase if many\n connections are being made and broken to your Modbus slave\n :param ignore_missing_slaves: True to not send errors on a request\n to a missing slave\n :param broadcast_enable: True to treat slave_id 0 as broadcast address,\n False to treat 0 as any other slave_id\n :param response_manipulator: Callback method for manipulating the\n response\n \"\"\"\n self.active_connections = {}\n self.loop = kwargs.get(\"loop\") or asyncio.get_event_loop()\n self.allow_reuse_address = allow_reuse_address\n self.decoder = ServerDecoder()\n self.framer = framer or ModbusSocketFramer\n self.context = context or ModbusServerContext()\n self.control = ModbusControlBlock()\n self.address = address or (\"\", Defaults.TcpPort)\n self.handler = handler or ModbusConnectedRequestHandler\n self.handler.server = self\n self.ignore_missing_slaves = kwargs.get(\n \"ignore_missing_slaves\", Defaults.IgnoreMissingSlaves\n )\n self.broadcast_enable = kwargs.get(\"broadcast_enable\", Defaults.BroadcastEnable)\n self.response_manipulator = kwargs.get(\"response_manipulator\", None)\n self.request_tracer = kwargs.get(\"request_tracer\", None)\n if isinstance(identity, ModbusDeviceIdentification):\n self.control.Identity.update(identity)\n\n # asyncio future that will be done once server has started\n self.serving = self.loop.create_future()\n # constructors cannot be declared async, so we have to\n # defer the initialization of the server\n self.server = None\n self.factory_parms = {\n \"reuse_address\": allow_reuse_address,\n \"backlog\": backlog,\n \"start_serving\": not defer_start,\n }\n\n async def serve_forever(self):\n \"\"\"Start endless loop.\"\"\"\n if self.server is None:\n self.server = await self.loop.create_server(\n lambda: self.handler(self),\n *self.address,\n **self.factory_parms,\n )\n self.serving.set_result(True)\n Log.info(\"Server(TCP) listening.\")\n try:\n await self.server.serve_forever()\n except asyncio.exceptions.CancelledError:\n raise\n except Exception as exc: # pylint: disable=broad-except\n Log.error(\"Server unexpected exception {}\", exc)\n else:\n raise RuntimeError(\n \"Can't call serve_forever on an already running server object\"\n )\n Log.info(\"Server graceful shutdown.\")\n\n async def shutdown(self):\n \"\"\"Shutdown server.\"\"\"\n await self.server_close()\n\n async def server_close(self):\n \"\"\"Close server.\"\"\"\n active_connecions = self.active_connections.copy()\n for k_item, v_item in active_connecions.items():\n Log.warning(\"aborting active session {}\", k_item)\n v_item.transport.close()\n await asyncio.sleep(0.1)\n v_item.handler_task.cancel()\n await v_item.handler_task\n self.active_connections = {}\n if self.server is not None:\n self.server.close()\n await self.server.wait_closed()\n self.server = None\n\n\nclass ModbusTlsServer(ModbusTcpServer):\n \"\"\"A modbus threaded tls socket server.\n\n We inherit and overload the socket server so that we\n can control the client threads as well as have a single\n server context instance.\n \"\"\"\n\n def __init__( # pylint: disable=too-many-arguments\n self,\n context,\n framer=None,\n identity=None,\n address=None,\n sslctx=None,\n certfile=None,\n keyfile=None,\n password=None,\n reqclicert=False,\n handler=None,\n allow_reuse_address=False,\n 
defer_start=False,\n backlog=20,\n **kwargs,\n ):\n \"\"\"Overloaded initializer for the socket server.\n\n If the identify structure is not passed in, the ModbusControlBlock\n uses its own empty structure.\n\n :param context: The ModbusServerContext datastore\n :param framer: The framer strategy to use\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param sslctx: The SSLContext to use for TLS (default None and auto\n create)\n :param certfile: The cert file path for TLS (used if sslctx is None)\n :param keyfile: The key file path for TLS (used if sslctx is None)\n :param password: The password for for decrypting the private key file\n :param reqclicert: Force the sever request client's certificate\n :param handler: A handler for each client session; default is\n ModbusConnectedRequestHandler. The handler class\n receives connection create/teardown events\n :param allow_reuse_address: Whether the server will allow the\n reuse of an address.\n :param backlog: is the maximum number of queued connections\n passed to listen(). Defaults to 20, increase if many\n connections are being made and broken to your Modbus slave\n :param ignore_missing_slaves: True to not send errors on a request\n to a missing slave\n :param broadcast_enable: True to treat slave_id 0 as broadcast address,\n False to treat 0 as any other slave_id\n :param response_manipulator: Callback method for\n manipulating the response\n \"\"\"\n super().__init__(\n context,\n framer=framer,\n identity=identity,\n address=address,\n handler=handler,\n allow_reuse_address=allow_reuse_address,\n defer_start=defer_start,\n backlog=backlog,\n **kwargs,\n )\n self.sslctx = sslctx_provider(sslctx, certfile, keyfile, password, reqclicert)\n self.factory_parms[\"ssl\"] = self.sslctx\n\n\nclass ModbusUdpServer:\n \"\"\"A modbus threaded udp socket server.\n\n We inherit and overload the socket server so that we\n can control the client threads as well as have a single\n server context instance.\n \"\"\"\n\n def __init__(\n self,\n context,\n framer=None,\n identity=None,\n address=None,\n handler=None,\n defer_start=False,\n backlog=20,\n **kwargs,\n ):\n \"\"\"Overloaded initializer for the socket server.\n\n If the identify structure is not passed in, the ModbusControlBlock\n uses its own empty structure.\n\n :param context: The ModbusServerContext datastore\n :param framer: The framer strategy to use\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param handler: A handler for each client session; default is\n ModbusDisonnectedRequestHandler\n :param ignore_missing_slaves: True to not send errors on a request\n to a missing slave\n :param broadcast_enable: True to treat slave_id 0 as broadcast address,\n False to treat 0 as any other slave_id\n :param response_manipulator: Callback method for\n manipulating the response\n \"\"\"\n # TO BE REMOVED:\n self.defer_start = defer_start\n self.backlog = backlog\n # ----------------\n self.loop = asyncio.get_running_loop()\n self.decoder = ServerDecoder()\n self.framer = framer or ModbusSocketFramer\n self.context = context or ModbusServerContext()\n self.control = ModbusControlBlock()\n self.address = address or (\"\", Defaults.TcpPort)\n self.handler = handler or ModbusDisconnectedRequestHandler\n self.ignore_missing_slaves = kwargs.get(\n \"ignore_missing_slaves\", Defaults.IgnoreMissingSlaves\n )\n self.broadcast_enable = kwargs.get(\"broadcast_enable\", 
Defaults.BroadcastEnable)\n self.response_manipulator = kwargs.get(\"response_manipulator\", None)\n\n if isinstance(identity, ModbusDeviceIdentification):\n self.control.Identity.update(identity)\n\n self.protocol = None\n self.endpoint = None\n self.on_connection_terminated = None\n # asyncio future that will be done once server has started\n self.serving = self.loop.create_future()\n self.factory_parms = {\n \"local_addr\": self.address,\n \"allow_broadcast\": True,\n }\n self.request_tracer = None\n\n async def serve_forever(self):\n \"\"\"Start endless loop.\"\"\"\n if self.protocol is None:\n try:\n self.protocol, self.endpoint = await self.loop.create_datagram_endpoint(\n lambda: self.handler(self),\n **self.factory_parms,\n )\n except asyncio.exceptions.CancelledError:\n raise\n except Exception as exc:\n Log.error(\"Server unexpected exception {}\", exc)\n raise RuntimeError(exc) from exc\n Log.info(\"Server(UDP) listening.\")\n self.serving.set_result(True)\n else:\n raise RuntimeError(\n \"Can't call serve_forever on an already running server object\"\n )\n\n async def shutdown(self):\n \"\"\"Shutdown server.\"\"\"\n await self.server_close()\n\n async def server_close(self):\n \"\"\"Close server.\"\"\"\n if self.endpoint:\n self.endpoint.running = False\n if self.endpoint is not None and self.endpoint.handler_task is not None:\n self.endpoint.handler_task.cancel()\n if self.protocol is not None:\n self.protocol.close()\n self.protocol = None\n\n\nclass ModbusSerialServer: # pylint: disable=too-many-instance-attributes\n \"\"\"A modbus threaded serial socket server.\n\n We inherit and overload the socket server so that we\n can control the client threads as well as have a single\n server context instance.\n \"\"\"\n\n handler: ModbusSingleRequestHandler = None\n\n def __init__(\n self, context, framer=ModbusRtuFramer, identity=None, **kwargs\n ): # pragma: no cover\n \"\"\"Initialize the socket server.\n\n If the identity structure is not passed in, the ModbusControlBlock\n uses its own empty structure.\n :param context: The ModbusServerContext datastore\n :param framer: The framer strategy to use, default ModbusRtuFramer\n :param identity: An optional identify structure\n :param port: The serial port to attach to\n :param stopbits: The number of stop bits to use\n :param bytesize: The bytesize of the serial messages\n :param parity: Which kind of parity to use\n :param baudrate: The baud rate to use for the serial device\n :param timeout: The timeout to use for the serial device\n :param handle_local_echo: (optional) Discard local echo from dongle.\n :param ignore_missing_slaves: True to not send errors on a request\n to a missing slave\n :param broadcast_enable: True to treat slave_id 0 as broadcast address,\n False to treat 0 as any other slave_id\n :param auto_reconnect: True to enable automatic reconnection,\n False otherwise\n :param reconnect_delay: reconnect delay in seconds\n :param response_manipulator: Callback method for\n manipulating the response\n \"\"\"\n self.loop = kwargs.get(\"loop\") or asyncio.get_event_loop()\n self.bytesize = kwargs.get(\"bytesize\", Defaults.Bytesize)\n self.parity = kwargs.get(\"parity\", Defaults.Parity)\n self.baudrate = kwargs.get(\"baudrate\", Defaults.Baudrate)\n self.timeout = kwargs.get(\"timeout\", Defaults.Timeout)\n self.device = kwargs.get(\"port\", 0)\n self.stopbits = kwargs.get(\"stopbits\", Defaults.Stopbits)\n self.handle_local_echo = kwargs.get(\n \"handle_local_echo\", Defaults.HandleLocalEcho\n )\n 
self.ignore_missing_slaves = kwargs.get(\n \"ignore_missing_slaves\", Defaults.IgnoreMissingSlaves\n )\n self.broadcast_enable = kwargs.get(\"broadcast_enable\", Defaults.BroadcastEnable)\n self.auto_reconnect = kwargs.get(\"auto_reconnect\", False)\n self.reconnect_delay = kwargs.get(\"reconnect_delay\", 2)\n self.reconnecting_task = None\n self.handler = kwargs.get(\"handler\") or ModbusSingleRequestHandler\n self.framer = framer or ModbusRtuFramer\n self.decoder = ServerDecoder()\n self.context = context or ModbusServerContext()\n self.response_manipulator = kwargs.get(\"response_manipulator\", None)\n self.control = ModbusControlBlock()\n if isinstance(identity, ModbusDeviceIdentification):\n self.control.Identity.update(identity)\n self.active_connection = None\n self.request_tracer = None\n self.protocol = None\n self.transport = None\n self.server = None\n self.control = ModbusControlBlock()\n identity = kwargs.get(\"identity\")\n if isinstance(identity, ModbusDeviceIdentification):\n self.control.Identity.update(identity)\n\n async def start(self):\n \"\"\"Start connecting.\"\"\"\n await self._connect()\n\n async def _delayed_connect(self):\n \"\"\"Delay connect.\"\"\"\n await asyncio.sleep(self.reconnect_delay)\n await self._connect()\n\n async def _connect(self):\n \"\"\"Connect.\"\"\"\n if self.reconnecting_task is not None:\n self.reconnecting_task.cancel()\n await self.reconnecting_task\n self.reconnecting_task = None\n if self.device.startswith(\"socket:\"):\n return\n try:\n self.transport, self.protocol = await create_serial_connection(\n self.loop,\n lambda: self.handler(self),\n self.device,\n baudrate=self.baudrate,\n bytesize=self.bytesize,\n parity=self.parity,\n stopbits=self.stopbits,\n timeout=self.timeout,\n )\n except serial.serialutil.SerialException as exc:\n Log.debug(\"Failed to open serial port: {}\", self.device)\n if not self.auto_reconnect:\n raise exc\n self._check_reconnect()\n except Exception as exc: # pylint: disable=broad-except\n Log.debug(\"Exception while create - {}\", exc)\n\n def on_connection_lost(self):\n \"\"\"Call on lost connection.\"\"\"\n if self.transport is not None:\n self.transport.close()\n self.transport = None\n self.protocol = None\n if self.server is None:\n self._check_reconnect()\n\n async def shutdown(self):\n \"\"\"Terminate server.\"\"\"\n if self.transport:\n self.transport.abort()\n self.transport = None\n if self.active_connection:\n self.active_connection.transport.close()\n await asyncio.sleep(0.1)\n self.active_connection.handler_task.cancel()\n await self.active_connection.handler_task\n self.active_connection = None\n if self.server:\n self.server.close()\n await asyncio.wait_for(self.server.wait_closed(), 10)\n self.server = None\n if self.reconnecting_task:\n self.reconnecting_task.cancel()\n await self.reconnecting_task\n self.reconnecting_task = None\n self.protocol = None\n\n def _check_reconnect(self):\n \"\"\"Check reconnect.\"\"\"\n Log.debug(\n \"checking autoreconnect {} {}\", self.auto_reconnect, self.reconnecting_task\n )\n if self.auto_reconnect and (self.reconnecting_task is None):\n Log.debug(\"Scheduling serial connection reconnect\")\n self.reconnecting_task = self.loop.create_task(self._delayed_connect())\n\n async def serve_forever(self):\n \"\"\"Start endless loop.\"\"\"\n if self.server:\n raise RuntimeError(\n \"Can't call serve_forever on an already running server object\"\n )\n Log.info(\"Server(Serial) listening.\")\n if self.device.startswith(\"socket:\"):\n # Socket server means listen so 
start a socket server\n parts = self.device[9:].split(\":\")\n host_addr = (parts[0], int(parts[1]))\n self.server = await self.loop.create_server(\n lambda: self.handler(self),\n *host_addr,\n reuse_address=True,\n start_serving=True,\n backlog=20,\n )\n try:\n await self.server.serve_forever()\n except asyncio.exceptions.CancelledError:\n raise\n except Exception as exc: # pylint: disable=broad-except\n Log.error(\"Server unexpected exception {}\", exc)\n return\n\n while self.server or self.transport or self.protocol:\n await asyncio.sleep(10)\n\n\n# --------------------------------------------------------------------------- #\n# Creation Factories\n# --------------------------------------------------------------------------- #\n\n\nclass _serverList:\n \"\"\"Maintains information about the active server.\n\n :meta private:\n \"\"\"\n\n active_server: Union[\n ModbusUnixServer, ModbusTcpServer, ModbusUdpServer, ModbusSerialServer\n ] = None\n\n def __init__(self, server):\n \"\"\"Register new server.\"\"\"\n self.server = server\n self.loop = asyncio.get_event_loop()\n\n @classmethod\n async def run(cls, server, custom_functions):\n \"\"\"Help starting/stopping server.\"\"\"\n for func in custom_functions:\n server.decoder.register(func)\n cls.active_server = _serverList(server)\n try:\n await server.serve_forever()\n except asyncio.CancelledError:\n pass\n\n @classmethod\n async def async_stop(cls):\n \"\"\"Wait for server stop.\"\"\"\n if not cls.active_server:\n raise RuntimeError(\"ServerAsyncStop called without server task active.\")\n await cls.active_server.server.shutdown()\n await asyncio.sleep(1)\n cls.active_server = None\n\n @classmethod\n def stop(cls):\n \"\"\"Wait for server stop.\"\"\"\n if not cls.active_server:\n raise RuntimeError(\"ServerStop called without server task active.\")\n if not cls.active_server:\n raise RuntimeError(\"ServerStop called with loop stopped.\")\n asyncio.run_coroutine_threadsafe(cls.async_stop(), cls.active_server.loop)\n time.sleep(10)\n\n\nasync def StartAsyncUnixServer( # pylint: disable=invalid-name,dangerous-default-value\n context=None,\n identity=None,\n path=None,\n custom_functions=[],\n **kwargs,\n):\n \"\"\"Start and run a tcp modbus server.\n\n :param context: The ModbusServerContext datastore\n :param identity: An optional identify structure\n :param path: An optional path to bind to.\n :param custom_functions: An optional list of custom function classes\n supported by server instance.\n :param kwargs: The rest\n \"\"\"\n server = ModbusUnixServer(\n context, path, kwargs.pop(\"framer\", ModbusSocketFramer), identity, **kwargs\n )\n await _serverList.run(server, custom_functions)\n\n\nasync def StartAsyncTcpServer( # pylint: disable=invalid-name,dangerous-default-value\n context=None,\n identity=None,\n address=None,\n custom_functions=[],\n **kwargs,\n):\n \"\"\"Start and run a tcp modbus server.\n\n :param context: The ModbusServerContext datastore\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param custom_functions: An optional list of custom function classes\n supported by server instance.\n :param kwargs: The rest\n \"\"\"\n server = ModbusTcpServer(\n context, kwargs.pop(\"framer\", ModbusSocketFramer), identity, address, **kwargs\n )\n await _serverList.run(server, custom_functions)\n\n\nasync def StartAsyncTlsServer( # pylint: disable=invalid-name,dangerous-default-value\n context=None,\n identity=None,\n address=None,\n sslctx=None,\n certfile=None,\n 
keyfile=None,\n password=None,\n reqclicert=False,\n allow_reuse_address=False,\n custom_functions=[],\n **kwargs,\n):\n \"\"\"Start and run a tls modbus server.\n\n :param context: The ModbusServerContext datastore\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param sslctx: The SSLContext to use for TLS (default None and auto create)\n :param certfile: The cert file path for TLS (used if sslctx is None)\n :param keyfile: The key file path for TLS (used if sslctx is None)\n :param password: The password for for decrypting the private key file\n :param reqclicert: Force the sever request client's certificate\n :param allow_reuse_address: Whether the server will allow the reuse of an\n address.\n :param custom_functions: An optional list of custom function classes\n supported by server instance.\n :param kwargs: The rest\n \"\"\"\n server = ModbusTlsServer(\n context,\n kwargs.pop(\"framer\", ModbusTlsFramer),\n identity,\n address,\n sslctx,\n certfile,\n keyfile,\n password,\n reqclicert,\n allow_reuse_address=allow_reuse_address,\n **kwargs,\n )\n await _serverList.run(server, custom_functions)\n\n\nasync def StartAsyncUdpServer( # pylint: disable=invalid-name,dangerous-default-value\n context=None,\n identity=None,\n address=None,\n custom_functions=[],\n **kwargs,\n):\n \"\"\"Start and run a udp modbus server.\n\n :param context: The ModbusServerContext datastore\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param custom_functions: An optional list of custom function classes\n supported by server instance.\n :param kwargs:\n \"\"\"\n server = ModbusUdpServer(\n context, kwargs.pop(\"framer\", ModbusSocketFramer), identity, address, **kwargs\n )\n await _serverList.run(server, custom_functions)\n\n\nasync def StartAsyncSerialServer( # pylint: disable=invalid-name,dangerous-default-value\n context=None,\n identity=None,\n custom_functions=[],\n **kwargs,\n): # pragma: no cover\n \"\"\"Start and run a serial modbus server.\n\n :param context: The ModbusServerContext datastore\n :param identity: An optional identify structure\n :param custom_functions: An optional list of custom function classes\n supported by server instance.\n :param kwargs: The rest\n \"\"\"\n server = ModbusSerialServer(\n context, kwargs.pop(\"framer\", ModbusAsciiFramer), identity=identity, **kwargs\n )\n await _serverList.run(server, custom_functions)\n\n\ndef StartSerialServer(**kwargs): # pylint: disable=invalid-name\n \"\"\"Start and run a serial modbus server.\"\"\"\n return asyncio.run(StartAsyncSerialServer(**kwargs))\n\n\ndef StartTcpServer(**kwargs): # pylint: disable=invalid-name\n \"\"\"Start and run a serial modbus server.\"\"\"\n return asyncio.run(StartAsyncTcpServer(**kwargs))\n\n\ndef StartTlsServer(**kwargs): # pylint: disable=invalid-name\n \"\"\"Start and run a serial modbus server.\"\"\"\n return asyncio.run(StartAsyncTlsServer(**kwargs))\n\n\ndef StartUdpServer(**kwargs): # pylint: disable=invalid-name\n \"\"\"Start and run a serial modbus server.\"\"\"\n return asyncio.run(StartAsyncUdpServer(**kwargs))\n\n\nasync def ServerAsyncStop(): # pylint: disable=invalid-name\n \"\"\"Terminate server.\"\"\"\n await _serverList.async_stop()\n\n\ndef ServerStop(): # pylint: disable=invalid-name\n \"\"\"Terminate server.\"\"\"\n _serverList.stop()\n", "path": "pymodbus/server/async_io.py" } ]
[ { "content": "\"\"\"Implementation of a Threaded Modbus Server.\"\"\"\n# pylint: disable=missing-type-doc\nimport asyncio\nimport ssl\nimport time\nimport traceback\nfrom typing import Union\n\nfrom pymodbus.client.serial_asyncio import create_serial_connection\nfrom pymodbus.constants import Defaults\nfrom pymodbus.datastore import ModbusServerContext\nfrom pymodbus.device import ModbusControlBlock, ModbusDeviceIdentification\nfrom pymodbus.exceptions import NoSuchSlaveException, NotImplementedException\nfrom pymodbus.factory import ServerDecoder\nfrom pymodbus.logging import Log\nfrom pymodbus.pdu import ModbusExceptions as merror\nfrom pymodbus.transaction import (\n ModbusAsciiFramer,\n ModbusRtuFramer,\n ModbusSocketFramer,\n ModbusTlsFramer,\n)\n\n\ntry:\n import serial\nexcept ImportError:\n pass\n\n\ndef sslctx_provider(\n sslctx=None, certfile=None, keyfile=None, password=None, reqclicert=False\n):\n \"\"\"Provide the SSLContext for ModbusTlsServer.\n\n If the user defined SSLContext is not passed in, sslctx_provider will\n produce a default one.\n\n :param sslctx: The user defined SSLContext to use for TLS (default None and\n auto create)\n :param certfile: The cert file path for TLS (used if sslctx is None)\n :param keyfile: The key file path for TLS (used if sslctx is None)\n :param password: The password for for decrypting the private key file\n :param reqclicert: Force the sever request client's certificate\n \"\"\"\n if sslctx is None:\n # According to MODBUS/TCP Security Protocol Specification, it is\n # TLSv2 at least\n sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)\n sslctx.verify_mode = ssl.CERT_NONE\n sslctx.check_hostname = False\n sslctx.options |= ssl.OP_NO_TLSv1_1\n sslctx.options |= ssl.OP_NO_TLSv1\n sslctx.options |= ssl.OP_NO_SSLv3\n sslctx.options |= ssl.OP_NO_SSLv2\n sslctx.load_cert_chain(certfile=certfile, keyfile=keyfile, password=password)\n\n if reqclicert:\n sslctx.verify_mode = ssl.CERT_REQUIRED\n\n return sslctx\n\n\n# --------------------------------------------------------------------------- #\n# Protocol Handlers\n# --------------------------------------------------------------------------- #\n\n\nclass ModbusBaseRequestHandler(asyncio.BaseProtocol):\n \"\"\"Implements modbus slave wire protocol.\n\n This uses the asyncio.Protocol to implement the client handler.\n\n When a connection is established, the asyncio.Protocol.connection_made\n callback is called. 
This callback will setup the connection and\n create and schedule an asyncio.Task and assign it to running_task.\n\n running_task will be canceled upon connection_lost event.\n \"\"\"\n\n def __init__(self, owner):\n \"\"\"Initialize.\"\"\"\n self.server = owner\n self.running = False\n self.receive_queue = asyncio.Queue()\n self.handler_task = None # coroutine to be run on asyncio loop\n self._sent = b\"\" # for handle_local_echo\n\n def _log_exception(self):\n \"\"\"Show log exception.\"\"\"\n if isinstance(self, ModbusConnectedRequestHandler):\n Log.debug(\n \"Handler for stream [{}] has been canceled\", self.client_address[:2]\n )\n elif isinstance(self, ModbusSingleRequestHandler):\n Log.debug(\"Handler for serial port has been cancelled\")\n else:\n if hasattr(self, \"protocol\"):\n sock_name = (\n self.protocol._sock.getsockname() # pylint: disable=protected-access\n )\n else:\n sock_name = \"No socket\"\n Log.debug(\"Handler for UDP socket [{}] has been canceled\", sock_name[1])\n\n def connection_made(self, transport):\n \"\"\"Call for socket establish\n\n For streamed protocols (TCP) this will also correspond to an\n entire conversation; however for datagram protocols (UDP) this\n corresponds to the socket being opened\n \"\"\"\n try:\n if (\n hasattr(transport, \"get_extra_info\")\n and transport.get_extra_info(\"sockname\") is not None\n ):\n sockname = transport.get_extra_info(\"sockname\")[:2]\n Log.debug(\"Socket [{}] opened\", sockname)\n elif hasattr(transport, \"serial\"):\n Log.debug(\"Serial connection opened on port: {}\", transport.serial.port)\n else:\n Log.warning(\"Unable to get information about transport {}\", transport)\n self.transport = transport # pylint: disable=attribute-defined-outside-init\n self.running = True\n self.framer = ( # pylint: disable=attribute-defined-outside-init\n self.server.framer(\n self.server.decoder,\n client=None,\n )\n )\n\n # schedule the connection handler on the event loop\n self.handler_task = asyncio.create_task(self.handle())\n except Exception as exc: # pragma: no cover pylint: disable=broad-except\n Log.error(\n \"Datastore unable to fulfill request: {}; {}\",\n exc,\n traceback.format_exc(),\n )\n\n def connection_lost(self, call_exc):\n \"\"\"Call for socket tear down.\n\n For streamed protocols any break in the network connection will\n be reported here; for datagram protocols, only a teardown of the\n socket itself will result in this call.\n \"\"\"\n try:\n if self.handler_task:\n self.handler_task.cancel()\n if call_exc is None:\n self._log_exception()\n elif hasattr(self, \"client_address\"): # TCP connection\n Log.debug(\n \"Client Disconnection {} due to {}\", self.client_address, call_exc\n )\n\n self.running = False\n except Exception as exc: # pylint: disable=broad-except\n Log.error(\n \"Datastore unable to fulfill request: {}; {}\",\n exc,\n traceback.format_exc(),\n )\n\n async def handle(self): # pylint: disable=too-complex\n \"\"\"Return Asyncio coroutine which represents a single conversation.\n\n between the modbus slave and master\n\n Once the client connection is established, the data chunks will be\n fed to this coroutine via the asyncio.Queue object which is fed by\n the ModbusBaseRequestHandler class's callback Future.\n\n This callback future gets data from either\n asyncio.DatagramProtocol.datagram_received or\n from asyncio.BaseProtocol.data_received.\n\n This function will execute without blocking in the while-loop and\n yield to the asyncio event loop when the frame is exhausted.\n As a result, 
multiple clients can be interleaved without any\n interference between them.\n\n For ModbusConnectedRequestHandler, each connection will be given an\n instance of the handle() coroutine and this instance will be put in the\n active_connections dict. Calling server_close will individually cancel\n each running handle() task.\n\n For ModbusDisconnectedRequestHandler, a single handle() coroutine will\n be started and maintained. Calling server_close will cancel that task.\n \"\"\"\n reset_frame = False\n while self.running:\n try:\n slaves = self.server.context.slaves()\n # this is an asyncio.Queue await, it will never fail\n data = await self._recv_()\n if isinstance(data, tuple):\n # addr is populated when talking over UDP\n data, *addr = data\n else:\n addr = (None,) # empty tuple\n\n if not isinstance(slaves, (list, tuple)):\n slaves = [slaves]\n # if broadcast is enabled make sure to\n # process requests to address 0\n if self.server.broadcast_enable: # pragma: no cover\n if 0 not in slaves:\n slaves.append(0)\n\n Log.debug(\"Handling data: {}\", data, \":hex\")\n\n single = self.server.context.single\n self.framer.processIncomingPacket(\n data=data,\n callback=lambda x: self.execute(x, *addr),\n slave=slaves,\n single=single,\n )\n\n except asyncio.CancelledError:\n # catch and ignore cancellation errors\n if self.running:\n self._log_exception()\n self.running = False\n except Exception as exc: # pylint: disable=broad-except\n # force TCP socket termination as processIncomingPacket\n # should handle application layer errors\n # for UDP sockets, simply reset the frame\n if isinstance(self, ModbusConnectedRequestHandler):\n client_addr = self.client_address[:2]\n Log.error(\n 'Unknown exception \"{}\" on stream {} forcing disconnect',\n exc,\n client_addr,\n )\n self.transport.close()\n else:\n Log.error(\"Unknown error occurred {}\", exc)\n reset_frame = True # graceful recovery\n finally:\n if reset_frame:\n self.framer.resetFrame()\n reset_frame = False\n\n def execute(self, request, *addr):\n \"\"\"Call with the resulting message.\n\n :param request: The decoded request message\n :param addr: the address\n \"\"\"\n if self.server.request_tracer:\n self.server.request_tracer(request, *addr)\n\n broadcast = False\n try:\n if self.server.broadcast_enable and not request.slave_id:\n broadcast = True\n # if broadcasting then execute on all slave contexts,\n # note response will be ignored\n for slave_id in self.server.context.slaves():\n response = request.execute(self.server.context[slave_id])\n else:\n context = self.server.context[request.slave_id]\n response = request.execute(context)\n except NoSuchSlaveException:\n Log.error(\"requested slave does not exist: {}\", request.slave_id)\n if self.server.ignore_missing_slaves:\n return # the client will simply timeout waiting for a response\n response = request.doException(merror.GatewayNoResponse)\n except Exception as exc: # pylint: disable=broad-except\n Log.error(\n \"Datastore unable to fulfill request: {}; {}\",\n exc,\n traceback.format_exc(),\n )\n response = request.doException(merror.SlaveFailure)\n # no response when broadcasting\n if not broadcast:\n response.transaction_id = request.transaction_id\n response.slave_id = request.slave_id\n skip_encoding = False\n if self.server.response_manipulator:\n response, skip_encoding = self.server.response_manipulator(response)\n self.send(response, *addr, skip_encoding=skip_encoding)\n\n def send(self, message, *addr, **kwargs):\n \"\"\"Send message.\"\"\"\n\n def __send(msg, *addr):\n 
Log.debug(\"send: [{}]- {}\", message, msg, \":b2a\")\n if addr == (None,):\n self._send_(msg)\n else:\n self._send_(msg, *addr)\n\n if kwargs.get(\"skip_encoding\", False):\n __send(message, *addr)\n elif message.should_respond:\n # self.server.control.Counter.BusMessage += 1\n pdu = self.framer.buildPacket(message)\n __send(pdu, *addr)\n else:\n Log.debug(\"Skipping sending response!!\")\n\n # ----------------------------------------------------------------------- #\n # Derived class implementations\n # ----------------------------------------------------------------------- #\n\n def _send_(self, data): # pragma: no cover\n \"\"\"Send a request (string) to the network.\n\n :param data: The unencoded modbus response\n :raises NotImplementedException:\n \"\"\"\n raise NotImplementedException(\"Method not implemented by derived class\")\n\n async def _recv_(self): # pragma: no cover\n \"\"\"Receive data from the network.\n\n :raises NotImplementedException:\n \"\"\"\n raise NotImplementedException(\"Method not implemented by derived class\")\n\n\nclass ModbusConnectedRequestHandler(ModbusBaseRequestHandler, asyncio.Protocol):\n \"\"\"Implements the modbus server protocol\n\n This uses asyncio.Protocol to implement\n the client handler for a connected protocol (TCP).\n \"\"\"\n\n def connection_made(self, transport):\n \"\"\"Call when a connection is made.\"\"\"\n super().connection_made(transport)\n\n self.client_address = ( # pylint: disable=attribute-defined-outside-init\n transport.get_extra_info(\"peername\")\n )\n self.server.active_connections[self.client_address] = self\n txt = f\"TCP client connection established [{self.client_address[:2]}]\"\n Log.debug(txt)\n\n def connection_lost(self, call_exc):\n \"\"\"Call when the connection is lost or closed.\"\"\"\n super().connection_lost(call_exc)\n client_addr = self.client_address[:2]\n Log.debug(\"TCP client disconnected [{}]\", client_addr)\n if self.client_address in self.server.active_connections:\n self.server.active_connections.pop(self.client_address)\n\n def data_received(self, data):\n \"\"\"Call when some data is received.\n\n data is a non-empty bytes object containing the incoming data.\n \"\"\"\n self.receive_queue.put_nowait(data)\n\n async def _recv_(self):\n try:\n result = await self.receive_queue.get()\n except RuntimeError:\n Log.error(\"Event loop is closed\")\n result = None\n return result\n\n def _send_(self, data):\n \"\"\"Send tcp.\"\"\"\n self.transport.write(data)\n\n def close(self):\n \"\"\"Close socket.\"\"\"\n self.transport.abort()\n\n\nclass ModbusDisconnectedRequestHandler(\n ModbusBaseRequestHandler, asyncio.DatagramProtocol\n):\n \"\"\"Implements the modbus server protocol\n\n This uses the socketserver.BaseRequestHandler to implement\n the client handler for a disconnected protocol (UDP). The\n only difference is that we have to specify who to send the\n resulting packet data to.\n \"\"\"\n\n def __init__(self, owner):\n \"\"\"Initialize.\"\"\"\n super().__init__(owner)\n _future = asyncio.get_running_loop().create_future()\n self.server.on_connection_terminated = _future\n\n def connection_lost(self, call_exc):\n \"\"\"Handle connection lost.\"\"\"\n super().connection_lost(call_exc)\n self.server.on_connection_terminated.set_result(True)\n\n def datagram_received(self, data, addr):\n \"\"\"Call when a datagram is received.\n\n data is a bytes object containing the incoming data. 
addr\n is the address of the peer sending the data; the exact\n format depends on the transport.\n \"\"\"\n self.receive_queue.put_nowait((data, addr))\n\n def error_received(self, exc): # pragma: no cover\n \"\"\"Call when a previous send/receive raises an OSError.\n\n exc is the OSError instance.\n\n This method is called in rare conditions,\n when the transport (e.g. UDP) detects that a datagram could\n not be delivered to its recipient. In many conditions\n though, undeliverable datagrams will be silently dropped.\n \"\"\"\n Log.error(\"datagram connection error [{}]\", exc)\n\n async def _recv_(self):\n return await self.receive_queue.get()\n\n def _send_(self, data, addr=None):\n self.transport.sendto(data, addr=addr)\n\n\nclass ModbusSingleRequestHandler(ModbusBaseRequestHandler, asyncio.Protocol):\n \"\"\"Implement the modbus server protocol.\n\n This uses asyncio.Protocol to implement\n the client handler for a serial connection.\n \"\"\"\n\n def connection_made(self, transport):\n \"\"\"Handle connect made.\"\"\"\n self.server.active_connection = self\n super().connection_made(transport)\n Log.debug(\"Serial connection established\")\n\n def connection_lost(self, call_exc):\n \"\"\"Handle connection lost.\"\"\"\n super().connection_lost(call_exc)\n Log.debug(\"Serial connection lost\")\n if hasattr(self.server, \"on_connection_lost\"):\n self.server.on_connection_lost()\n\n def data_received(self, data):\n \"\"\"Receive data.\"\"\"\n if (\n hasattr(self.server, \"handle_local_echo\")\n and self.server.handle_local_echo is True\n and self._sent\n ):\n if self._sent in data:\n data, self._sent = data.replace(self._sent, b\"\", 1), b\"\"\n elif self._sent.startswith(data):\n self._sent, data = self._sent.replace(data, b\"\", 1), b\"\"\n else:\n self._sent = b\"\"\n if not data:\n return\n self.receive_queue.put_nowait(data)\n\n async def _recv_(self):\n return await self.receive_queue.get()\n\n def _send_(self, data):\n if self.transport is not None:\n self.transport.write(data)\n if (\n hasattr(self.server, \"handle_local_echo\")\n and self.server.handle_local_echo is True\n ):\n self._sent = data\n\n\n# --------------------------------------------------------------------------- #\n# Server Implementations\n# --------------------------------------------------------------------------- #\n\n\nclass ModbusUnixServer:\n \"\"\"A modbus threaded Unix socket server.\n\n We inherit and overload the socket server so that we\n can control the client threads as well as have a single\n server context instance.\n \"\"\"\n\n def __init__(\n self,\n context,\n path,\n framer=None,\n identity=None,\n handler=None,\n **kwargs,\n ):\n \"\"\"Initialize the socket server.\n\n If the identify structure is not passed in, the ModbusControlBlock\n uses its own default structure.\n\n :param context: The ModbusServerContext datastore\n :param path: unix socket path\n :param framer: The framer strategy to use\n :param identity: An optional identify structure\n :param handler: A handler for each client session; default is\n ModbusConnectedRequestHandler. 
The handler class\n receives connection create/teardown events\n :param allow_reuse_address: Whether the server will allow the\n reuse of an address.\n :param ignore_missing_slaves: True to not send errors on a request\n to a missing slave\n :param broadcast_enable: True to treat slave_id 0 as broadcast address,\n False to treat 0 as any other slave_id\n :param response_manipulator: Callback method for manipulating the\n response\n \"\"\"\n self.active_connections = {}\n self.loop = kwargs.get(\"loop\") or asyncio.get_event_loop()\n self.decoder = ServerDecoder()\n self.framer = framer or ModbusSocketFramer\n self.context = context or ModbusServerContext()\n self.control = ModbusControlBlock()\n self.path = path\n self.handler = handler or ModbusConnectedRequestHandler\n self.handler.server = self\n self.ignore_missing_slaves = kwargs.get(\n \"ignore_missing_slaves\", Defaults.IgnoreMissingSlaves\n )\n self.broadcast_enable = kwargs.get(\"broadcast_enable\", Defaults.BroadcastEnable)\n self.response_manipulator = kwargs.get(\"response_manipulator\", None)\n if isinstance(identity, ModbusDeviceIdentification):\n self.control.Identity.update(identity)\n\n # asyncio future that will be done once server has started\n self.serving = self.loop.create_future()\n # constructors cannot be declared async, so we have to\n # defer the initialization of the server\n self.server = None\n self.request_tracer = None\n self.factory_parms = {}\n\n async def serve_forever(self):\n \"\"\"Start endless loop.\"\"\"\n if self.server is None:\n try:\n self.server = await self.loop.create_unix_server(\n lambda: self.handler(self),\n self.path,\n )\n self.serving.set_result(True)\n Log.info(\"Server(Unix) listening.\")\n await self.server.serve_forever()\n except asyncio.exceptions.CancelledError:\n raise\n except Exception as exc: # pylint: disable=broad-except\n Log.error(\"Server unexpected exception {}\", exc)\n else:\n raise RuntimeError(\n \"Can't call serve_forever on an already running server object\"\n )\n Log.info(\"Server graceful shutdown.\")\n\n async def shutdown(self):\n \"\"\"Shutdown server.\"\"\"\n await self.server_close()\n\n async def server_close(self):\n \"\"\"Close server.\"\"\"\n for k_item, v_item in self.active_connections.items():\n Log.warning(\"aborting active session {}\", k_item)\n v_item.handler_task.cancel()\n self.active_connections = {}\n if self.server is not None:\n self.server.close()\n await self.server.wait_closed()\n self.server = None\n\n\nclass ModbusTcpServer:\n \"\"\"A modbus threaded tcp socket server.\n\n We inherit and overload the socket server so that we\n can control the client threads as well as have a single\n server context instance.\n \"\"\"\n\n def __init__(\n self,\n context,\n framer=None,\n identity=None,\n address=None,\n handler=None,\n allow_reuse_address=False,\n defer_start=False,\n backlog=20,\n **kwargs,\n ):\n \"\"\"Initialize the socket server.\n\n If the identify structure is not passed in, the ModbusControlBlock\n uses its own empty structure.\n\n :param context: The ModbusServerContext datastore\n :param framer: The framer strategy to use\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param handler: A handler for each client session; default is\n ModbusConnectedRequestHandler. 
The handler class\n receives connection create/teardown events\n :param allow_reuse_address: Whether the server will allow the\n reuse of an address.\n :param backlog: is the maximum number of queued connections\n passed to listen(). Defaults to 20, increase if many\n connections are being made and broken to your Modbus slave\n :param ignore_missing_slaves: True to not send errors on a request\n to a missing slave\n :param broadcast_enable: True to treat slave_id 0 as broadcast address,\n False to treat 0 as any other slave_id\n :param response_manipulator: Callback method for manipulating the\n response\n \"\"\"\n self.active_connections = {}\n self.loop = kwargs.get(\"loop\") or asyncio.get_event_loop()\n self.allow_reuse_address = allow_reuse_address\n self.decoder = ServerDecoder()\n self.framer = framer or ModbusSocketFramer\n self.context = context or ModbusServerContext()\n self.control = ModbusControlBlock()\n self.address = address or (\"\", Defaults.TcpPort)\n self.handler = handler or ModbusConnectedRequestHandler\n self.handler.server = self\n self.ignore_missing_slaves = kwargs.get(\n \"ignore_missing_slaves\", Defaults.IgnoreMissingSlaves\n )\n self.broadcast_enable = kwargs.get(\"broadcast_enable\", Defaults.BroadcastEnable)\n self.response_manipulator = kwargs.get(\"response_manipulator\", None)\n self.request_tracer = kwargs.get(\"request_tracer\", None)\n if isinstance(identity, ModbusDeviceIdentification):\n self.control.Identity.update(identity)\n\n # asyncio future that will be done once server has started\n self.serving = self.loop.create_future()\n # constructors cannot be declared async, so we have to\n # defer the initialization of the server\n self.server = None\n self.factory_parms = {\n \"reuse_address\": allow_reuse_address,\n \"backlog\": backlog,\n \"start_serving\": not defer_start,\n }\n\n async def serve_forever(self):\n \"\"\"Start endless loop.\"\"\"\n if self.server is None:\n self.server = await self.loop.create_server(\n lambda: self.handler(self),\n *self.address,\n **self.factory_parms,\n )\n self.serving.set_result(True)\n Log.info(\"Server(TCP) listening.\")\n try:\n await self.server.serve_forever()\n except asyncio.exceptions.CancelledError:\n raise\n except Exception as exc: # pylint: disable=broad-except\n Log.error(\"Server unexpected exception {}\", exc)\n else:\n raise RuntimeError(\n \"Can't call serve_forever on an already running server object\"\n )\n Log.info(\"Server graceful shutdown.\")\n\n async def shutdown(self):\n \"\"\"Shutdown server.\"\"\"\n await self.server_close()\n\n async def server_close(self):\n \"\"\"Close server.\"\"\"\n active_connecions = self.active_connections.copy()\n for k_item, v_item in active_connecions.items():\n Log.warning(\"aborting active session {}\", k_item)\n v_item.transport.close()\n await asyncio.sleep(0.1)\n v_item.handler_task.cancel()\n await v_item.handler_task\n self.active_connections = {}\n if self.server is not None:\n self.server.close()\n await self.server.wait_closed()\n self.server = None\n\n\nclass ModbusTlsServer(ModbusTcpServer):\n \"\"\"A modbus threaded tls socket server.\n\n We inherit and overload the socket server so that we\n can control the client threads as well as have a single\n server context instance.\n \"\"\"\n\n def __init__( # pylint: disable=too-many-arguments\n self,\n context,\n framer=None,\n identity=None,\n address=None,\n sslctx=None,\n certfile=None,\n keyfile=None,\n password=None,\n reqclicert=False,\n handler=None,\n allow_reuse_address=False,\n 
defer_start=False,\n backlog=20,\n **kwargs,\n ):\n \"\"\"Overloaded initializer for the socket server.\n\n If the identify structure is not passed in, the ModbusControlBlock\n uses its own empty structure.\n\n :param context: The ModbusServerContext datastore\n :param framer: The framer strategy to use\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param sslctx: The SSLContext to use for TLS (default None and auto\n create)\n :param certfile: The cert file path for TLS (used if sslctx is None)\n :param keyfile: The key file path for TLS (used if sslctx is None)\n :param password: The password for for decrypting the private key file\n :param reqclicert: Force the sever request client's certificate\n :param handler: A handler for each client session; default is\n ModbusConnectedRequestHandler. The handler class\n receives connection create/teardown events\n :param allow_reuse_address: Whether the server will allow the\n reuse of an address.\n :param backlog: is the maximum number of queued connections\n passed to listen(). Defaults to 20, increase if many\n connections are being made and broken to your Modbus slave\n :param ignore_missing_slaves: True to not send errors on a request\n to a missing slave\n :param broadcast_enable: True to treat slave_id 0 as broadcast address,\n False to treat 0 as any other slave_id\n :param response_manipulator: Callback method for\n manipulating the response\n \"\"\"\n super().__init__(\n context,\n framer=framer,\n identity=identity,\n address=address,\n handler=handler,\n allow_reuse_address=allow_reuse_address,\n defer_start=defer_start,\n backlog=backlog,\n **kwargs,\n )\n self.sslctx = sslctx_provider(sslctx, certfile, keyfile, password, reqclicert)\n self.factory_parms[\"ssl\"] = self.sslctx\n\n\nclass ModbusUdpServer:\n \"\"\"A modbus threaded udp socket server.\n\n We inherit and overload the socket server so that we\n can control the client threads as well as have a single\n server context instance.\n \"\"\"\n\n def __init__(\n self,\n context,\n framer=None,\n identity=None,\n address=None,\n handler=None,\n defer_start=False,\n backlog=20,\n **kwargs,\n ):\n \"\"\"Overloaded initializer for the socket server.\n\n If the identify structure is not passed in, the ModbusControlBlock\n uses its own empty structure.\n\n :param context: The ModbusServerContext datastore\n :param framer: The framer strategy to use\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param handler: A handler for each client session; default is\n ModbusDisonnectedRequestHandler\n :param ignore_missing_slaves: True to not send errors on a request\n to a missing slave\n :param broadcast_enable: True to treat slave_id 0 as broadcast address,\n False to treat 0 as any other slave_id\n :param response_manipulator: Callback method for\n manipulating the response\n \"\"\"\n # TO BE REMOVED:\n self.defer_start = defer_start\n self.backlog = backlog\n # ----------------\n self.loop = asyncio.get_running_loop()\n self.decoder = ServerDecoder()\n self.framer = framer or ModbusSocketFramer\n self.context = context or ModbusServerContext()\n self.control = ModbusControlBlock()\n self.address = address or (\"\", Defaults.TcpPort)\n self.handler = handler or ModbusDisconnectedRequestHandler\n self.ignore_missing_slaves = kwargs.get(\n \"ignore_missing_slaves\", Defaults.IgnoreMissingSlaves\n )\n self.broadcast_enable = kwargs.get(\"broadcast_enable\", 
Defaults.BroadcastEnable)\n self.response_manipulator = kwargs.get(\"response_manipulator\", None)\n\n if isinstance(identity, ModbusDeviceIdentification):\n self.control.Identity.update(identity)\n\n self.protocol = None\n self.endpoint = None\n self.on_connection_terminated = None\n # asyncio future that will be done once server has started\n self.serving = self.loop.create_future()\n self.factory_parms = {\n \"local_addr\": self.address,\n \"allow_broadcast\": True,\n }\n self.request_tracer = None\n\n async def serve_forever(self):\n \"\"\"Start endless loop.\"\"\"\n if self.protocol is None:\n try:\n self.protocol, self.endpoint = await self.loop.create_datagram_endpoint(\n lambda: self.handler(self),\n **self.factory_parms,\n )\n except asyncio.exceptions.CancelledError:\n raise\n except Exception as exc:\n Log.error(\"Server unexpected exception {}\", exc)\n raise RuntimeError(exc) from exc\n Log.info(\"Server(UDP) listening.\")\n self.serving.set_result(True)\n else:\n raise RuntimeError(\n \"Can't call serve_forever on an already running server object\"\n )\n\n async def shutdown(self):\n \"\"\"Shutdown server.\"\"\"\n await self.server_close()\n\n async def server_close(self):\n \"\"\"Close server.\"\"\"\n if self.endpoint:\n self.endpoint.running = False\n if self.endpoint is not None and self.endpoint.handler_task is not None:\n self.endpoint.handler_task.cancel()\n if self.protocol is not None:\n self.protocol.close()\n self.protocol = None\n\n\nclass ModbusSerialServer: # pylint: disable=too-many-instance-attributes\n \"\"\"A modbus threaded serial socket server.\n\n We inherit and overload the socket server so that we\n can control the client threads as well as have a single\n server context instance.\n \"\"\"\n\n handler: ModbusSingleRequestHandler = None\n\n def __init__(\n self, context, framer=ModbusRtuFramer, identity=None, **kwargs\n ): # pragma: no cover\n \"\"\"Initialize the socket server.\n\n If the identity structure is not passed in, the ModbusControlBlock\n uses its own empty structure.\n :param context: The ModbusServerContext datastore\n :param framer: The framer strategy to use, default ModbusRtuFramer\n :param identity: An optional identify structure\n :param port: The serial port to attach to\n :param stopbits: The number of stop bits to use\n :param bytesize: The bytesize of the serial messages\n :param parity: Which kind of parity to use\n :param baudrate: The baud rate to use for the serial device\n :param timeout: The timeout to use for the serial device\n :param handle_local_echo: (optional) Discard local echo from dongle.\n :param ignore_missing_slaves: True to not send errors on a request\n to a missing slave\n :param broadcast_enable: True to treat slave_id 0 as broadcast address,\n False to treat 0 as any other slave_id\n :param auto_reconnect: True to enable automatic reconnection,\n False otherwise\n :param reconnect_delay: reconnect delay in seconds\n :param response_manipulator: Callback method for\n manipulating the response\n \"\"\"\n self.loop = kwargs.get(\"loop\") or asyncio.get_event_loop()\n self.bytesize = kwargs.get(\"bytesize\", Defaults.Bytesize)\n self.parity = kwargs.get(\"parity\", Defaults.Parity)\n self.baudrate = kwargs.get(\"baudrate\", Defaults.Baudrate)\n self.timeout = kwargs.get(\"timeout\", Defaults.Timeout)\n self.device = kwargs.get(\"port\", 0)\n self.stopbits = kwargs.get(\"stopbits\", Defaults.Stopbits)\n self.handle_local_echo = kwargs.get(\n \"handle_local_echo\", Defaults.HandleLocalEcho\n )\n 
self.ignore_missing_slaves = kwargs.get(\n \"ignore_missing_slaves\", Defaults.IgnoreMissingSlaves\n )\n self.broadcast_enable = kwargs.get(\"broadcast_enable\", Defaults.BroadcastEnable)\n self.auto_reconnect = kwargs.get(\"auto_reconnect\", False)\n self.reconnect_delay = kwargs.get(\"reconnect_delay\", 2)\n self.reconnecting_task = None\n self.handler = kwargs.get(\"handler\") or ModbusSingleRequestHandler\n self.framer = framer or ModbusRtuFramer\n self.decoder = ServerDecoder()\n self.context = context or ModbusServerContext()\n self.response_manipulator = kwargs.get(\"response_manipulator\", None)\n self.control = ModbusControlBlock()\n if isinstance(identity, ModbusDeviceIdentification):\n self.control.Identity.update(identity)\n self.active_connection = None\n self.request_tracer = None\n self.protocol = None\n self.transport = None\n self.server = None\n self.control = ModbusControlBlock()\n identity = kwargs.get(\"identity\")\n if isinstance(identity, ModbusDeviceIdentification):\n self.control.Identity.update(identity)\n\n async def start(self):\n \"\"\"Start connecting.\"\"\"\n await self._connect()\n\n async def _delayed_connect(self):\n \"\"\"Delay connect.\"\"\"\n await asyncio.sleep(self.reconnect_delay)\n await self._connect()\n\n async def _connect(self):\n \"\"\"Connect.\"\"\"\n if self.reconnecting_task is not None:\n self.reconnecting_task.cancel()\n await self.reconnecting_task\n self.reconnecting_task = None\n if self.device.startswith(\"socket:\"):\n return\n try:\n self.transport, self.protocol = await create_serial_connection(\n self.loop,\n lambda: self.handler(self),\n self.device,\n baudrate=self.baudrate,\n bytesize=self.bytesize,\n parity=self.parity,\n stopbits=self.stopbits,\n timeout=self.timeout,\n )\n except serial.serialutil.SerialException as exc:\n Log.debug(\"Failed to open serial port: {}\", self.device)\n if not self.auto_reconnect:\n raise exc\n self._check_reconnect()\n except Exception as exc: # pylint: disable=broad-except\n Log.debug(\"Exception while create - {}\", exc)\n\n def on_connection_lost(self):\n \"\"\"Call on lost connection.\"\"\"\n if self.transport is not None:\n self.transport.close()\n self.transport = None\n self.protocol = None\n if self.server is None:\n self._check_reconnect()\n\n async def shutdown(self):\n \"\"\"Terminate server.\"\"\"\n if self.transport:\n self.transport.abort()\n self.transport = None\n if self.active_connection:\n self.active_connection.transport.close()\n await asyncio.sleep(0.1)\n self.active_connection.handler_task.cancel()\n await self.active_connection.handler_task\n self.active_connection = None\n if self.server:\n self.server.close()\n await asyncio.wait_for(self.server.wait_closed(), 10)\n self.server = None\n if self.reconnecting_task:\n self.reconnecting_task.cancel()\n await self.reconnecting_task\n self.reconnecting_task = None\n self.protocol = None\n\n def _check_reconnect(self):\n \"\"\"Check reconnect.\"\"\"\n Log.debug(\n \"checking autoreconnect {} {}\", self.auto_reconnect, self.reconnecting_task\n )\n if self.auto_reconnect and (self.reconnecting_task is None):\n Log.debug(\"Scheduling serial connection reconnect\")\n self.reconnecting_task = self.loop.create_task(self._delayed_connect())\n\n async def serve_forever(self):\n \"\"\"Start endless loop.\"\"\"\n if self.server:\n raise RuntimeError(\n \"Can't call serve_forever on an already running server object\"\n )\n Log.info(\"Server(Serial) listening.\")\n if self.device.startswith(\"socket:\"):\n # Socket server means listen so 
start a socket server\n parts = self.device[9:].split(\":\")\n host_addr = (parts[0], int(parts[1]))\n self.server = await self.loop.create_server(\n lambda: self.handler(self),\n *host_addr,\n reuse_address=True,\n start_serving=True,\n backlog=20,\n )\n try:\n await self.server.serve_forever()\n except asyncio.exceptions.CancelledError:\n raise\n except Exception as exc: # pylint: disable=broad-except\n Log.error(\"Server unexpected exception {}\", exc)\n return\n\n while self.server or self.transport or self.protocol:\n await asyncio.sleep(10)\n\n\n# --------------------------------------------------------------------------- #\n# Creation Factories\n# --------------------------------------------------------------------------- #\n\n\nclass _serverList:\n \"\"\"Maintains information about the active server.\n\n :meta private:\n \"\"\"\n\n active_server: Union[\n ModbusUnixServer, ModbusTcpServer, ModbusUdpServer, ModbusSerialServer\n ] = None\n\n def __init__(self, server):\n \"\"\"Register new server.\"\"\"\n self.server = server\n self.loop = asyncio.get_event_loop()\n\n @classmethod\n async def run(cls, server, custom_functions):\n \"\"\"Help starting/stopping server.\"\"\"\n for func in custom_functions:\n server.decoder.register(func)\n cls.active_server = _serverList(server)\n try:\n await server.serve_forever()\n except asyncio.CancelledError:\n pass\n\n @classmethod\n async def async_stop(cls):\n \"\"\"Wait for server stop.\"\"\"\n if not cls.active_server:\n raise RuntimeError(\"ServerAsyncStop called without server task active.\")\n await cls.active_server.server.shutdown()\n await asyncio.sleep(1)\n cls.active_server = None\n\n @classmethod\n def stop(cls):\n \"\"\"Wait for server stop.\"\"\"\n if not cls.active_server:\n raise RuntimeError(\"ServerStop called without server task active.\")\n if not cls.active_server:\n raise RuntimeError(\"ServerStop called with loop stopped.\")\n asyncio.run_coroutine_threadsafe(cls.async_stop(), cls.active_server.loop)\n time.sleep(10)\n\n\nasync def StartAsyncUnixServer( # pylint: disable=invalid-name,dangerous-default-value\n context=None,\n identity=None,\n path=None,\n custom_functions=[],\n **kwargs,\n):\n \"\"\"Start and run a tcp modbus server.\n\n :param context: The ModbusServerContext datastore\n :param identity: An optional identify structure\n :param path: An optional path to bind to.\n :param custom_functions: An optional list of custom function classes\n supported by server instance.\n :param kwargs: The rest\n \"\"\"\n server = ModbusUnixServer(\n context, path, kwargs.pop(\"framer\", ModbusSocketFramer), identity, **kwargs\n )\n await _serverList.run(server, custom_functions)\n\n\nasync def StartAsyncTcpServer( # pylint: disable=invalid-name,dangerous-default-value\n context=None,\n identity=None,\n address=None,\n custom_functions=[],\n **kwargs,\n):\n \"\"\"Start and run a tcp modbus server.\n\n :param context: The ModbusServerContext datastore\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param custom_functions: An optional list of custom function classes\n supported by server instance.\n :param kwargs: The rest\n \"\"\"\n server = ModbusTcpServer(\n context, kwargs.pop(\"framer\", ModbusSocketFramer), identity, address, **kwargs\n )\n await _serverList.run(server, custom_functions)\n\n\nasync def StartAsyncTlsServer( # pylint: disable=invalid-name,dangerous-default-value\n context=None,\n identity=None,\n address=None,\n sslctx=None,\n certfile=None,\n 
keyfile=None,\n password=None,\n reqclicert=False,\n allow_reuse_address=False,\n custom_functions=[],\n **kwargs,\n):\n \"\"\"Start and run a tls modbus server.\n\n :param context: The ModbusServerContext datastore\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param sslctx: The SSLContext to use for TLS (default None and auto create)\n :param certfile: The cert file path for TLS (used if sslctx is None)\n :param keyfile: The key file path for TLS (used if sslctx is None)\n :param password: The password for for decrypting the private key file\n :param reqclicert: Force the sever request client's certificate\n :param allow_reuse_address: Whether the server will allow the reuse of an\n address.\n :param custom_functions: An optional list of custom function classes\n supported by server instance.\n :param kwargs: The rest\n \"\"\"\n server = ModbusTlsServer(\n context,\n kwargs.pop(\"framer\", ModbusTlsFramer),\n identity,\n address,\n sslctx,\n certfile,\n keyfile,\n password,\n reqclicert,\n allow_reuse_address=allow_reuse_address,\n **kwargs,\n )\n await _serverList.run(server, custom_functions)\n\n\nasync def StartAsyncUdpServer( # pylint: disable=invalid-name,dangerous-default-value\n context=None,\n identity=None,\n address=None,\n custom_functions=[],\n **kwargs,\n):\n \"\"\"Start and run a udp modbus server.\n\n :param context: The ModbusServerContext datastore\n :param identity: An optional identify structure\n :param address: An optional (interface, port) to bind to.\n :param custom_functions: An optional list of custom function classes\n supported by server instance.\n :param kwargs:\n \"\"\"\n server = ModbusUdpServer(\n context, kwargs.pop(\"framer\", ModbusSocketFramer), identity, address, **kwargs\n )\n await _serverList.run(server, custom_functions)\n\n\nasync def StartAsyncSerialServer( # pylint: disable=invalid-name,dangerous-default-value\n context=None,\n identity=None,\n custom_functions=[],\n **kwargs,\n): # pragma: no cover\n \"\"\"Start and run a serial modbus server.\n\n :param context: The ModbusServerContext datastore\n :param identity: An optional identify structure\n :param custom_functions: An optional list of custom function classes\n supported by server instance.\n :param kwargs: The rest\n \"\"\"\n server = ModbusSerialServer(\n context, kwargs.pop(\"framer\", ModbusAsciiFramer), identity=identity, **kwargs\n )\n server.start()\n await _serverList.run(server, custom_functions)\n\n\ndef StartSerialServer(**kwargs): # pylint: disable=invalid-name\n \"\"\"Start and run a serial modbus server.\"\"\"\n return asyncio.run(StartAsyncSerialServer(**kwargs))\n\n\ndef StartTcpServer(**kwargs): # pylint: disable=invalid-name\n \"\"\"Start and run a serial modbus server.\"\"\"\n return asyncio.run(StartAsyncTcpServer(**kwargs))\n\n\ndef StartTlsServer(**kwargs): # pylint: disable=invalid-name\n \"\"\"Start and run a serial modbus server.\"\"\"\n return asyncio.run(StartAsyncTlsServer(**kwargs))\n\n\ndef StartUdpServer(**kwargs): # pylint: disable=invalid-name\n \"\"\"Start and run a serial modbus server.\"\"\"\n return asyncio.run(StartAsyncUdpServer(**kwargs))\n\n\nasync def ServerAsyncStop(): # pylint: disable=invalid-name\n \"\"\"Terminate server.\"\"\"\n await _serverList.async_stop()\n\n\ndef ServerStop(): # pylint: disable=invalid-name\n \"\"\"Terminate server.\"\"\"\n _serverList.stop()\n", "path": "pymodbus/server/async_io.py" } ]
diff --git a/pymodbus/server/async_io.py b/pymodbus/server/async_io.py index 11024fc9f..ce78ec446 100644 --- a/pymodbus/server/async_io.py +++ b/pymodbus/server/async_io.py @@ -1225,6 +1225,7 @@ async def StartAsyncSerialServer( # pylint: disable=invalid-name,dangerous-defa server = ModbusSerialServer( context, kwargs.pop("framer", ModbusAsciiFramer), identity=identity, **kwargs ) + server.start() await _serverList.run(server, custom_functions)
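For readers skimming this record, a minimal lifecycle sketch of the factories defined above may help. Only `StartAsyncSerialServer` and `ServerAsyncStop` are taken from the code shown; the datastore setup and the serial keyword arguments (`port`, `baudrate`) are assumptions about the surrounding pymodbus API rather than something this record confirms.

```python
# Lifecycle sketch; the datastore calls and serial kwargs below are assumed, not confirmed here.
import asyncio

from pymodbus.datastore import (
    ModbusSequentialDataBlock,
    ModbusServerContext,
    ModbusSlaveContext,
)
from pymodbus.server.async_io import ServerAsyncStop, StartAsyncSerialServer


async def main():
    # One slave with 100 holding registers initialised to zero (illustrative layout).
    store = ModbusSlaveContext(hr=ModbusSequentialDataBlock(0, [0] * 100))
    context = ModbusServerContext(slaves=store, single=True)

    # The factory serves until stopped, so run it as a background task.
    serve = asyncio.create_task(
        StartAsyncSerialServer(context=context, port="/dev/ttyUSB0", baudrate=9600)
    )
    await asyncio.sleep(5)   # stand-in for real work
    await ServerAsyncStop()  # stops the active server registered in _serverList
    await serve


asyncio.run(main())
```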
gratipay__gratipay.com-2699
[email protected] still linked in several places
Should be [email protected], right? ;-)
[ { "content": "\"\"\"\nThis module contains exceptions shared across application code.\n\"\"\"\n\nfrom __future__ import print_function, unicode_literals\n\n\nclass ProblemChangingUsername(Exception):\n def __str__(self):\n return self.msg.format(self.args[0])\n\nclass UsernameIsEmpty(ProblemChangingUsername):\n msg = \"You need to provide a username!\"\n\nclass UsernameTooLong(ProblemChangingUsername):\n msg = \"The username '{}' is too long.\"\n\nclass UsernameContainsInvalidCharacters(ProblemChangingUsername):\n msg = \"The username '{}' contains invalid characters.\"\n\nclass UsernameIsRestricted(ProblemChangingUsername):\n msg = \"The username '{}' is restricted.\"\n\nclass UsernameAlreadyTaken(ProblemChangingUsername):\n msg = \"The username '{}' is already taken.\"\n\n\nclass ProblemChangingNumber(Exception):\n def __str__(self):\n return self.msg\n\nclass HasBigTips(ProblemChangingNumber):\n msg = \"You receive tips too large for an individual. Please contact [email protected].\"\n\n\nclass TooGreedy(Exception): pass\nclass NoSelfTipping(Exception): pass\nclass NoTippee(Exception): pass\nclass BadAmount(Exception): pass\nclass UserDoesntAcceptTips(Exception): pass\n\nclass FailedToReserveUsername(Exception): pass\n\nclass NegativeBalance(Exception):\n def __str__(self):\n return \"Negative balance not allowed in this context.\"\n\nclass NotWhitelisted(Exception): pass\nclass NoBalancedCustomerHref(Exception): pass\n", "path": "gratipay/exceptions.py" } ]
[ { "content": "\"\"\"\nThis module contains exceptions shared across application code.\n\"\"\"\n\nfrom __future__ import print_function, unicode_literals\n\n\nclass ProblemChangingUsername(Exception):\n def __str__(self):\n return self.msg.format(self.args[0])\n\nclass UsernameIsEmpty(ProblemChangingUsername):\n msg = \"You need to provide a username!\"\n\nclass UsernameTooLong(ProblemChangingUsername):\n msg = \"The username '{}' is too long.\"\n\nclass UsernameContainsInvalidCharacters(ProblemChangingUsername):\n msg = \"The username '{}' contains invalid characters.\"\n\nclass UsernameIsRestricted(ProblemChangingUsername):\n msg = \"The username '{}' is restricted.\"\n\nclass UsernameAlreadyTaken(ProblemChangingUsername):\n msg = \"The username '{}' is already taken.\"\n\n\nclass ProblemChangingNumber(Exception):\n def __str__(self):\n return self.msg\n\nclass HasBigTips(ProblemChangingNumber):\n msg = \"You receive tips too large for an individual. Please contact [email protected].\"\n\n\nclass TooGreedy(Exception): pass\nclass NoSelfTipping(Exception): pass\nclass NoTippee(Exception): pass\nclass BadAmount(Exception): pass\nclass UserDoesntAcceptTips(Exception): pass\n\nclass FailedToReserveUsername(Exception): pass\n\nclass NegativeBalance(Exception):\n def __str__(self):\n return \"Negative balance not allowed in this context.\"\n\nclass NotWhitelisted(Exception): pass\nclass NoBalancedCustomerHref(Exception): pass\n", "path": "gratipay/exceptions.py" } ]
diff --git a/gratipay/exceptions.py b/gratipay/exceptions.py index 9e1bd1d340..9695ad9ef4 100644 --- a/gratipay/exceptions.py +++ b/gratipay/exceptions.py @@ -30,7 +30,7 @@ def __str__(self): return self.msg class HasBigTips(ProblemChangingNumber): - msg = "You receive tips too large for an individual. Please contact [email protected]." + msg = "You receive tips too large for an individual. Please contact [email protected]." class TooGreedy(Exception): pass diff --git a/www/%username/account/close.spt b/www/%username/account/close.spt index 73a41382ad..ff3aece7d0 100644 --- a/www/%username/account/close.spt +++ b/www/%username/account/close.spt @@ -66,7 +66,7 @@ if POST: by Thursday, and then you'll be able to have your funds deposited to your bank account on file. To expedite the review, please <a - href="mailto:[email protected]?subject=review%20for%20closing%20account">contact + href="mailto:[email protected]?subject=review%20for%20closing%20account">contact support</a>.</label></li> {% endif %} @@ -91,7 +91,7 @@ if POST: </ul> <p>If neither option works for you, please <a - href="mailto:[email protected]?subject=close%20account">contact + href="mailto:[email protected]?subject=close%20account">contact support</a> to otherwise deal with your balance before closing your account.</p> diff --git a/www/about/faq.html.spt b/www/about/faq.html.spt index 7cebbd7884..f4290ade6c 100644 --- a/www/about/faq.html.spt +++ b/www/about/faq.html.spt @@ -60,7 +60,7 @@ title = "Frequently Asked Questions" expect additional fees from non-U.S. banks).</li> <li><a - href="mailto:[email protected]?subject=bitcoin%20payin">Email + href="mailto:[email protected]?subject=bitcoin%20payin">Email us</a> to request a one-time bitcoin payin (1% + 15&cent; fee).</li> @@ -80,12 +80,12 @@ title = "Frequently Asked Questions" U.S.-only).</li> <li><a - href="mailto:[email protected]?subject=configuring%20PayPal">Email + href="mailto:[email protected]?subject=configuring%20PayPal">Email us</a> to set up a weekly PayPal payout (unlimited payout; 2% fee capped at $20).</li> <li><a - href="mailto:[email protected]?subject=bitcoin%20payout">Email + href="mailto:[email protected]?subject=bitcoin%20payout">Email us</a> to request a one-time bitcoin payout (1% + 15&cent; fee).</li> diff --git a/www/about/index.html.spt b/www/about/index.html.spt index fbac93f0bc..243c68e2aa 100644 --- a/www/about/index.html.spt +++ b/www/about/index.html.spt @@ -22,7 +22,7 @@ title = "About" the equation are rewarded publicly for their participation. (You can opt out of publicly displaying your total giving.)</p> - + </div> @@ -85,7 +85,7 @@ title = "About" Ambridge, PA 15003<br /> USA<br /> <br /> - Email: <a href="mailto:[email protected]">[email protected]</a><br /> + Email: <a href="mailto:[email protected]">[email protected]</a><br /> Twitter: <a href="https://twitter.com/Gratipay">@Gratipay</a><br /> Facebook: <a href="https://www.facebook.com/Gratipay">Gratipay</a><br /> Freenode: <a href="http://inside.gratipay.com/appendices/chat">#gratipay</a><br />
getpelican__pelican-2065
Quickstart locale error: 'NoneType' object has no attribute 'split' Pelican version: 594b9c96 (installed with `pip install -e "git+https://github.com/getpelican/pelican.git#egg=pelican"`) Python: 2.7.12 Windows x64 (within a virtual environment) Using: > blinker (1.4) docutils (0.12) feedgenerator (1.9) Jinja2 (2.8) Markdown (2.6.7) MarkupSafe (0.23) pelican (3.6.4.dev0, d:\documents\pelican\src\pelican) pip (8.1.2) Pygments (2.1.3) python-dateutil (2.5.3) pytz (2016.7) setuptools (28.7.1) six (1.10.0) smartypants (1.8.6) typogrify (2.0.7) Unidecode (0.4.19) wheel (0.30.0a0) When invoking `pelican-quickstart`, I am getting the following stacktrace: >(Pelican) D:\Documents\Pelican>pelican-quickstart Traceback (most recent call last): File "D:\Documents\Pelican\Scripts\pelican-quickstart-script.py", line 11, in <module> load_entry_point('pelican', 'console_scripts', 'pelican-quickstart')() File "d:\documents\pelican\lib\site-packages\pkg_resources\__init__.py", line 564, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "d:\documents\pelican\lib\site-packages\pkg_resources\__init__.py", line 2608, in load_entry_point return ep.load() File "d:\documents\pelican\lib\site-packages\pkg_resources\__init__.py", line 2268, in load return self.resolve() File "d:\documents\pelican\lib\site-packages\pkg_resources\__init__.py", line 2274, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "d:\documents\pelican\src\pelican\pelican\tools\pelican_quickstart.py", line 52, in <module> 'lang': locale.getlocale()[0].split('_')[0], AttributeError: 'NoneType' object has no attribute 'split'
[ { "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import print_function, unicode_literals\n\nimport argparse\nimport codecs\nimport locale\nimport os\nimport string\nimport sys\n\nimport pytz\n\ntry:\n import tzlocal\n _DEFAULT_TIMEZONE = tzlocal.get_localzone().zone\nexcept:\n _DEFAULT_TIMEZONE = 'Europe/Paris'\n\nimport six\n\nfrom pelican import __version__\n\n\n_TEMPLATES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n \"templates\")\n\n_GITHUB_PAGES_BRANCHES = {\n 'personal': 'master',\n 'project': 'gh-pages'\n}\n\nCONF = {\n 'pelican': 'pelican',\n 'pelicanopts': '',\n 'basedir': os.curdir,\n 'ftp_host': 'localhost',\n 'ftp_user': 'anonymous',\n 'ftp_target_dir': '/',\n 'ssh_host': 'localhost',\n 'ssh_port': 22,\n 'ssh_user': 'root',\n 'ssh_target_dir': '/var/www',\n 's3_bucket': 'my_s3_bucket',\n 'cloudfiles_username': 'my_rackspace_username',\n 'cloudfiles_api_key': 'my_rackspace_api_key',\n 'cloudfiles_container': 'my_cloudfiles_container',\n 'dropbox_dir': '~/Dropbox/Public/',\n 'github_pages_branch': _GITHUB_PAGES_BRANCHES['project'],\n 'default_pagination': 10,\n 'siteurl': '',\n 'lang': locale.getlocale()[0].split('_')[0],\n 'timezone': _DEFAULT_TIMEZONE\n}\n\n# url for list of valid timezones\n_TZ_URL = 'http://en.wikipedia.org/wiki/List_of_tz_database_time_zones'\n\n\ndef _input_compat(prompt):\n if six.PY3:\n r = input(prompt)\n else:\n r = raw_input(prompt)\n return r\n\nif six.PY3:\n str_compat = str\nelse:\n str_compat = unicode\n\n\n# Create a 'marked' default path, to determine if someone has supplied\n# a path on the command-line.\nclass _DEFAULT_PATH_TYPE(str_compat):\n is_default_path = True\n\n_DEFAULT_PATH = _DEFAULT_PATH_TYPE(os.curdir)\n\n\ndef decoding_strings(f):\n def wrapper(*args, **kwargs):\n out = f(*args, **kwargs)\n if isinstance(out, six.string_types) and not six.PY3:\n # todo: make encoding configurable?\n if six.PY3:\n return out\n else:\n return out.decode(sys.stdin.encoding)\n return out\n return wrapper\n\n\ndef get_template(name, as_encoding='utf-8'):\n template = os.path.join(_TEMPLATES_DIR, \"{0}.in\".format(name))\n\n if not os.path.isfile(template):\n raise RuntimeError(\"Cannot open {0}\".format(template))\n\n with codecs.open(template, 'r', as_encoding) as fd:\n line = fd.readline()\n while line:\n yield line\n line = fd.readline()\n fd.close()\n\n\n@decoding_strings\ndef ask(question, answer=str_compat, default=None, l=None):\n if answer == str_compat:\n r = ''\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question, default))\n\n r = r.strip()\n\n if len(r) <= 0:\n if default:\n r = default\n break\n else:\n print('You must enter something')\n else:\n if l and len(r) != l:\n print('You must enter a {0} letters long string'.format(l))\n else:\n break\n\n return r\n\n elif answer == bool:\n r = None\n while True:\n if default is True:\n r = _input_compat('> {0} (Y/n) '.format(question))\n elif default is False:\n r = _input_compat('> {0} (y/N) '.format(question))\n else:\n r = _input_compat('> {0} (y/n) '.format(question))\n\n r = r.strip().lower()\n\n if r in ('y', 'yes'):\n r = True\n break\n elif r in ('n', 'no'):\n r = False\n break\n elif not r:\n r = default\n break\n else:\n print(\"You must answer 'yes' or 'no'\")\n return r\n elif answer == int:\n r = None\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} 
'.format(question))\n\n r = r.strip()\n\n if not r:\n r = default\n break\n\n try:\n r = int(r)\n break\n except:\n print('You must enter an integer')\n return r\n else:\n raise NotImplemented(\n 'Argument `answer` must be str_compat, bool, or integer')\n\n\ndef ask_timezone(question, default, tzurl):\n \"\"\"Prompt for time zone and validate input\"\"\"\n lower_tz = [tz.lower() for tz in pytz.all_timezones]\n while True:\n r = ask(question, str_compat, default)\n r = r.strip().replace(' ', '_').lower()\n if r in lower_tz:\n r = pytz.all_timezones[lower_tz.index(r)]\n break\n else:\n print('Please enter a valid time zone:\\n'\n ' (check [{0}])'.format(tzurl))\n return r\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description=\"A kickstarter for Pelican\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument('-p', '--path', default=_DEFAULT_PATH,\n help=\"The path to generate the blog into\")\n parser.add_argument('-t', '--title', metavar=\"title\",\n help='Set the title of the website')\n parser.add_argument('-a', '--author', metavar=\"author\",\n help='Set the author name of the website')\n parser.add_argument('-l', '--lang', metavar=\"lang\",\n help='Set the default web site language')\n\n args = parser.parse_args()\n\n print('''Welcome to pelican-quickstart v{v}.\n\nThis script will help you create a new Pelican-based website.\n\nPlease answer the following questions so this script can generate the files\nneeded by Pelican.\n\n '''.format(v=__version__))\n\n project = os.path.join(\n os.environ.get('VIRTUAL_ENV', os.curdir), '.project')\n no_path_was_specified = hasattr(args.path, 'is_default_path')\n if os.path.isfile(project) and no_path_was_specified:\n CONF['basedir'] = open(project, 'r').read().rstrip(\"\\n\")\n print('Using project associated with current virtual environment.'\n 'Will save to:\\n%s\\n' % CONF['basedir'])\n else:\n CONF['basedir'] = os.path.abspath(os.path.expanduser(\n ask('Where do you want to create your new web site?',\n answer=str_compat, default=args.path)))\n\n CONF['sitename'] = ask('What will be the title of this web site?',\n answer=str_compat, default=args.title)\n CONF['author'] = ask('Who will be the author of this web site?',\n answer=str_compat, default=args.author)\n CONF['lang'] = ask('What will be the default language of this web site?',\n str_compat, args.lang or CONF['lang'], 2)\n\n if ask('Do you want to specify a URL prefix? e.g., http://example.com ',\n answer=bool, default=True):\n CONF['siteurl'] = ask('What is your URL prefix? 
(see '\n 'above example; no trailing slash)',\n str_compat, CONF['siteurl'])\n\n CONF['with_pagination'] = ask('Do you want to enable article pagination?',\n bool, bool(CONF['default_pagination']))\n\n if CONF['with_pagination']:\n CONF['default_pagination'] = ask('How many articles per page '\n 'do you want?',\n int, CONF['default_pagination'])\n else:\n CONF['default_pagination'] = False\n\n CONF['timezone'] = ask_timezone('What is your time zone?',\n CONF['timezone'], _TZ_URL)\n\n automation = ask('Do you want to generate a Fabfile/Makefile '\n 'to automate generation and publishing?', bool, True)\n develop = ask('Do you want an auto-reload & simpleHTTP script '\n 'to assist with theme and site development?', bool, True)\n\n if automation:\n if ask('Do you want to upload your website using FTP?',\n answer=bool, default=False):\n CONF['ftp_host'] = ask('What is the hostname of your FTP server?',\n str_compat, CONF['ftp_host'])\n CONF['ftp_user'] = ask('What is your username on that server?',\n str_compat, CONF['ftp_user'])\n CONF['ftp_target_dir'] = ask('Where do you want to put your '\n 'web site on that server?',\n str_compat, CONF['ftp_target_dir'])\n if ask('Do you want to upload your website using SSH?',\n answer=bool, default=False):\n CONF['ssh_host'] = ask('What is the hostname of your SSH server?',\n str_compat, CONF['ssh_host'])\n CONF['ssh_port'] = ask('What is the port of your SSH server?',\n int, CONF['ssh_port'])\n CONF['ssh_user'] = ask('What is your username on that server?',\n str_compat, CONF['ssh_user'])\n CONF['ssh_target_dir'] = ask('Where do you want to put your '\n 'web site on that server?',\n str_compat, CONF['ssh_target_dir'])\n\n if ask('Do you want to upload your website using Dropbox?',\n answer=bool, default=False):\n CONF['dropbox_dir'] = ask('Where is your Dropbox directory?',\n str_compat, CONF['dropbox_dir'])\n\n if ask('Do you want to upload your website using S3?',\n answer=bool, default=False):\n CONF['s3_bucket'] = ask('What is the name of your S3 bucket?',\n str_compat, CONF['s3_bucket'])\n\n if ask('Do you want to upload your website using '\n 'Rackspace Cloud Files?', answer=bool, default=False):\n CONF['cloudfiles_username'] = ask('What is your Rackspace '\n 'Cloud username?', str_compat,\n CONF['cloudfiles_username'])\n CONF['cloudfiles_api_key'] = ask('What is your Rackspace '\n 'Cloud API key?', str_compat,\n CONF['cloudfiles_api_key'])\n CONF['cloudfiles_container'] = ask('What is the name of your '\n 'Cloud Files container?',\n str_compat,\n CONF['cloudfiles_container'])\n\n if ask('Do you want to upload your website using GitHub Pages?',\n answer=bool, default=False):\n if ask('Is this your personal page (username.github.io)?',\n answer=bool, default=False):\n CONF['github_pages_branch'] = \\\n _GITHUB_PAGES_BRANCHES['personal']\n else:\n CONF['github_pages_branch'] = \\\n _GITHUB_PAGES_BRANCHES['project']\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'content'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'output'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'pelicanconf.py'),\n 'w', 'utf-8') as fd:\n conf_python = dict()\n for key, value in CONF.items():\n conf_python[key] = repr(value)\n\n for line in get_template('pelicanconf.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(conf_python))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with 
codecs.open(os.path.join(CONF['basedir'], 'publishconf.py'),\n 'w', 'utf-8') as fd:\n for line in get_template('publishconf.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if automation:\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'fabfile.py'),\n 'w', 'utf-8') as fd:\n for line in get_template('fabfile.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'Makefile'),\n 'w', 'utf-8') as fd:\n mkfile_template_name = 'Makefile'\n py_v = 'PY?=python'\n if six.PY3:\n py_v = 'PY?=python3'\n template = string.Template(py_v)\n fd.write(template.safe_substitute(CONF))\n fd.write('\\n')\n for line in get_template(mkfile_template_name):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if develop:\n conf_shell = dict()\n for key, value in CONF.items():\n if isinstance(value, six.string_types) and ' ' in value:\n value = '\"' + value.replace('\"', '\\\\\"') + '\"'\n conf_shell[key] = value\n try:\n with codecs.open(os.path.join(CONF['basedir'],\n 'develop_server.sh'),\n 'w', 'utf-8') as fd:\n lines = list(get_template('develop_server.sh'))\n py_v = 'PY=${PY:-python}\\n'\n if six.PY3:\n py_v = 'PY=${PY:-python3}\\n'\n lines = lines[:4] + [py_v] + lines[4:]\n for line in lines:\n template = string.Template(line)\n fd.write(template.safe_substitute(conf_shell))\n fd.close()\n\n # mode 0o755\n os.chmod((os.path.join(CONF['basedir'],\n 'develop_server.sh')), 493)\n except OSError as e:\n print('Error: {0}'.format(e))\n\n print('Done. Your new project is available at %s' % CONF['basedir'])\n\nif __name__ == \"__main__\":\n main()\n", "path": "pelican/tools/pelican_quickstart.py" } ]
[ { "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import print_function, unicode_literals\n\nimport argparse\nimport codecs\nimport locale\nimport os\nimport string\nimport sys\n\nimport pytz\n\ntry:\n import tzlocal\n _DEFAULT_TIMEZONE = tzlocal.get_localzone().zone\nexcept:\n _DEFAULT_TIMEZONE = 'Europe/Paris'\n\nimport six\n\nfrom pelican import __version__\n\nif (sys.version_info.major == 2):\n locale.setlocale(locale.LC_ALL, '')\n\n_TEMPLATES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)),\n \"templates\")\n\n_GITHUB_PAGES_BRANCHES = {\n 'personal': 'master',\n 'project': 'gh-pages'\n}\n\nCONF = {\n 'pelican': 'pelican',\n 'pelicanopts': '',\n 'basedir': os.curdir,\n 'ftp_host': 'localhost',\n 'ftp_user': 'anonymous',\n 'ftp_target_dir': '/',\n 'ssh_host': 'localhost',\n 'ssh_port': 22,\n 'ssh_user': 'root',\n 'ssh_target_dir': '/var/www',\n 's3_bucket': 'my_s3_bucket',\n 'cloudfiles_username': 'my_rackspace_username',\n 'cloudfiles_api_key': 'my_rackspace_api_key',\n 'cloudfiles_container': 'my_cloudfiles_container',\n 'dropbox_dir': '~/Dropbox/Public/',\n 'github_pages_branch': _GITHUB_PAGES_BRANCHES['project'],\n 'default_pagination': 10,\n 'siteurl': '',\n 'lang': locale.getlocale()[0].split('_')[0],\n 'timezone': _DEFAULT_TIMEZONE\n}\n\n# url for list of valid timezones\n_TZ_URL = 'http://en.wikipedia.org/wiki/List_of_tz_database_time_zones'\n\n\ndef _input_compat(prompt):\n if six.PY3:\n r = input(prompt)\n else:\n r = raw_input(prompt)\n return r\n\nif six.PY3:\n str_compat = str\nelse:\n str_compat = unicode\n\n\n# Create a 'marked' default path, to determine if someone has supplied\n# a path on the command-line.\nclass _DEFAULT_PATH_TYPE(str_compat):\n is_default_path = True\n\n_DEFAULT_PATH = _DEFAULT_PATH_TYPE(os.curdir)\n\n\ndef decoding_strings(f):\n def wrapper(*args, **kwargs):\n out = f(*args, **kwargs)\n if isinstance(out, six.string_types) and not six.PY3:\n # todo: make encoding configurable?\n if six.PY3:\n return out\n else:\n return out.decode(sys.stdin.encoding)\n return out\n return wrapper\n\n\ndef get_template(name, as_encoding='utf-8'):\n template = os.path.join(_TEMPLATES_DIR, \"{0}.in\".format(name))\n\n if not os.path.isfile(template):\n raise RuntimeError(\"Cannot open {0}\".format(template))\n\n with codecs.open(template, 'r', as_encoding) as fd:\n line = fd.readline()\n while line:\n yield line\n line = fd.readline()\n fd.close()\n\n\n@decoding_strings\ndef ask(question, answer=str_compat, default=None, l=None):\n if answer == str_compat:\n r = ''\n while True:\n if default:\n r = _input_compat('> {0} [{1}] '.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question, default))\n\n r = r.strip()\n\n if len(r) <= 0:\n if default:\n r = default\n break\n else:\n print('You must enter something')\n else:\n if l and len(r) != l:\n print('You must enter a {0} letters long string'.format(l))\n else:\n break\n\n return r\n\n elif answer == bool:\n r = None\n while True:\n if default is True:\n r = _input_compat('> {0} (Y/n) '.format(question))\n elif default is False:\n r = _input_compat('> {0} (y/N) '.format(question))\n else:\n r = _input_compat('> {0} (y/n) '.format(question))\n\n r = r.strip().lower()\n\n if r in ('y', 'yes'):\n r = True\n break\n elif r in ('n', 'no'):\n r = False\n break\n elif not r:\n r = default\n break\n else:\n print(\"You must answer 'yes' or 'no'\")\n return r\n elif answer == int:\n r = None\n while True:\n if default:\n r = _input_compat('> {0} [{1}] 
'.format(question, default))\n else:\n r = _input_compat('> {0} '.format(question))\n\n r = r.strip()\n\n if not r:\n r = default\n break\n\n try:\n r = int(r)\n break\n except:\n print('You must enter an integer')\n return r\n else:\n raise NotImplemented(\n 'Argument `answer` must be str_compat, bool, or integer')\n\n\ndef ask_timezone(question, default, tzurl):\n \"\"\"Prompt for time zone and validate input\"\"\"\n lower_tz = [tz.lower() for tz in pytz.all_timezones]\n while True:\n r = ask(question, str_compat, default)\n r = r.strip().replace(' ', '_').lower()\n if r in lower_tz:\n r = pytz.all_timezones[lower_tz.index(r)]\n break\n else:\n print('Please enter a valid time zone:\\n'\n ' (check [{0}])'.format(tzurl))\n return r\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description=\"A kickstarter for Pelican\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter)\n parser.add_argument('-p', '--path', default=_DEFAULT_PATH,\n help=\"The path to generate the blog into\")\n parser.add_argument('-t', '--title', metavar=\"title\",\n help='Set the title of the website')\n parser.add_argument('-a', '--author', metavar=\"author\",\n help='Set the author name of the website')\n parser.add_argument('-l', '--lang', metavar=\"lang\",\n help='Set the default web site language')\n\n args = parser.parse_args()\n\n print('''Welcome to pelican-quickstart v{v}.\n\nThis script will help you create a new Pelican-based website.\n\nPlease answer the following questions so this script can generate the files\nneeded by Pelican.\n\n '''.format(v=__version__))\n\n project = os.path.join(\n os.environ.get('VIRTUAL_ENV', os.curdir), '.project')\n no_path_was_specified = hasattr(args.path, 'is_default_path')\n if os.path.isfile(project) and no_path_was_specified:\n CONF['basedir'] = open(project, 'r').read().rstrip(\"\\n\")\n print('Using project associated with current virtual environment.'\n 'Will save to:\\n%s\\n' % CONF['basedir'])\n else:\n CONF['basedir'] = os.path.abspath(os.path.expanduser(\n ask('Where do you want to create your new web site?',\n answer=str_compat, default=args.path)))\n\n CONF['sitename'] = ask('What will be the title of this web site?',\n answer=str_compat, default=args.title)\n CONF['author'] = ask('Who will be the author of this web site?',\n answer=str_compat, default=args.author)\n CONF['lang'] = ask('What will be the default language of this web site?',\n str_compat, args.lang or CONF['lang'], 2)\n\n if ask('Do you want to specify a URL prefix? e.g., http://example.com ',\n answer=bool, default=True):\n CONF['siteurl'] = ask('What is your URL prefix? 
(see '\n 'above example; no trailing slash)',\n str_compat, CONF['siteurl'])\n\n CONF['with_pagination'] = ask('Do you want to enable article pagination?',\n bool, bool(CONF['default_pagination']))\n\n if CONF['with_pagination']:\n CONF['default_pagination'] = ask('How many articles per page '\n 'do you want?',\n int, CONF['default_pagination'])\n else:\n CONF['default_pagination'] = False\n\n CONF['timezone'] = ask_timezone('What is your time zone?',\n CONF['timezone'], _TZ_URL)\n\n automation = ask('Do you want to generate a Fabfile/Makefile '\n 'to automate generation and publishing?', bool, True)\n develop = ask('Do you want an auto-reload & simpleHTTP script '\n 'to assist with theme and site development?', bool, True)\n\n if automation:\n if ask('Do you want to upload your website using FTP?',\n answer=bool, default=False):\n CONF['ftp_host'] = ask('What is the hostname of your FTP server?',\n str_compat, CONF['ftp_host'])\n CONF['ftp_user'] = ask('What is your username on that server?',\n str_compat, CONF['ftp_user'])\n CONF['ftp_target_dir'] = ask('Where do you want to put your '\n 'web site on that server?',\n str_compat, CONF['ftp_target_dir'])\n if ask('Do you want to upload your website using SSH?',\n answer=bool, default=False):\n CONF['ssh_host'] = ask('What is the hostname of your SSH server?',\n str_compat, CONF['ssh_host'])\n CONF['ssh_port'] = ask('What is the port of your SSH server?',\n int, CONF['ssh_port'])\n CONF['ssh_user'] = ask('What is your username on that server?',\n str_compat, CONF['ssh_user'])\n CONF['ssh_target_dir'] = ask('Where do you want to put your '\n 'web site on that server?',\n str_compat, CONF['ssh_target_dir'])\n\n if ask('Do you want to upload your website using Dropbox?',\n answer=bool, default=False):\n CONF['dropbox_dir'] = ask('Where is your Dropbox directory?',\n str_compat, CONF['dropbox_dir'])\n\n if ask('Do you want to upload your website using S3?',\n answer=bool, default=False):\n CONF['s3_bucket'] = ask('What is the name of your S3 bucket?',\n str_compat, CONF['s3_bucket'])\n\n if ask('Do you want to upload your website using '\n 'Rackspace Cloud Files?', answer=bool, default=False):\n CONF['cloudfiles_username'] = ask('What is your Rackspace '\n 'Cloud username?', str_compat,\n CONF['cloudfiles_username'])\n CONF['cloudfiles_api_key'] = ask('What is your Rackspace '\n 'Cloud API key?', str_compat,\n CONF['cloudfiles_api_key'])\n CONF['cloudfiles_container'] = ask('What is the name of your '\n 'Cloud Files container?',\n str_compat,\n CONF['cloudfiles_container'])\n\n if ask('Do you want to upload your website using GitHub Pages?',\n answer=bool, default=False):\n if ask('Is this your personal page (username.github.io)?',\n answer=bool, default=False):\n CONF['github_pages_branch'] = \\\n _GITHUB_PAGES_BRANCHES['personal']\n else:\n CONF['github_pages_branch'] = \\\n _GITHUB_PAGES_BRANCHES['project']\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'content'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n os.makedirs(os.path.join(CONF['basedir'], 'output'))\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'pelicanconf.py'),\n 'w', 'utf-8') as fd:\n conf_python = dict()\n for key, value in CONF.items():\n conf_python[key] = repr(value)\n\n for line in get_template('pelicanconf.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(conf_python))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n try:\n with 
codecs.open(os.path.join(CONF['basedir'], 'publishconf.py'),\n 'w', 'utf-8') as fd:\n for line in get_template('publishconf.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if automation:\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'fabfile.py'),\n 'w', 'utf-8') as fd:\n for line in get_template('fabfile.py'):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n try:\n with codecs.open(os.path.join(CONF['basedir'], 'Makefile'),\n 'w', 'utf-8') as fd:\n mkfile_template_name = 'Makefile'\n py_v = 'PY?=python'\n if six.PY3:\n py_v = 'PY?=python3'\n template = string.Template(py_v)\n fd.write(template.safe_substitute(CONF))\n fd.write('\\n')\n for line in get_template(mkfile_template_name):\n template = string.Template(line)\n fd.write(template.safe_substitute(CONF))\n fd.close()\n except OSError as e:\n print('Error: {0}'.format(e))\n\n if develop:\n conf_shell = dict()\n for key, value in CONF.items():\n if isinstance(value, six.string_types) and ' ' in value:\n value = '\"' + value.replace('\"', '\\\\\"') + '\"'\n conf_shell[key] = value\n try:\n with codecs.open(os.path.join(CONF['basedir'],\n 'develop_server.sh'),\n 'w', 'utf-8') as fd:\n lines = list(get_template('develop_server.sh'))\n py_v = 'PY=${PY:-python}\\n'\n if six.PY3:\n py_v = 'PY=${PY:-python3}\\n'\n lines = lines[:4] + [py_v] + lines[4:]\n for line in lines:\n template = string.Template(line)\n fd.write(template.safe_substitute(conf_shell))\n fd.close()\n\n # mode 0o755\n os.chmod((os.path.join(CONF['basedir'],\n 'develop_server.sh')), 493)\n except OSError as e:\n print('Error: {0}'.format(e))\n\n print('Done. Your new project is available at %s' % CONF['basedir'])\n\nif __name__ == \"__main__\":\n main()\n", "path": "pelican/tools/pelican_quickstart.py" } ]
diff --git a/pelican/tools/pelican_quickstart.py b/pelican/tools/pelican_quickstart.py index 6b4eb5a56..ecbc35103 100755 --- a/pelican/tools/pelican_quickstart.py +++ b/pelican/tools/pelican_quickstart.py @@ -21,6 +21,8 @@ from pelican import __version__ +if (sys.version_info.major == 2): + locale.setlocale(locale.LC_ALL, '') _TEMPLATES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "templates")
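The pelican-quickstart failure in this record comes down to `locale.getlocale()` returning `(None, None)` on Python 2 until `setlocale` has been called, so the module-level `.split('_')` dereferences `None`. The diff fixes that by calling `setlocale` on Python 2 before the `CONF` dict is built; the snippet below is only a standalone sketch of that behaviour, and the `'en'` fallback is an illustrative assumption rather than part of the merged change.

```python
import locale

lang_code = locale.getlocale()[0]        # None on Python 2 before setlocale() runs
print(lang_code)                         # this None is the value that made .split('_') fail

locale.setlocale(locale.LC_ALL, '')      # initialise the locale from the environment
lang_code = locale.getlocale()[0]
lang = lang_code.split('_')[0] if lang_code else 'en'   # assumed defensive fallback
print(lang)
```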
aio-libs__aiohttp-4057
TypeError: 'ABCMeta' aiohttp==3.6.0, Python 3.6.9 ## Long story short Can't import aiohttp pip freeze gives: aiohttp==3.6.0 python3 version: Python 3.6.9 import aiohttp Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.6/site-packages/aiohttp/__init__.py", line 6, in <module> from .client import BaseConnector as BaseConnector File "/usr/local/lib/python3.6/site-packages/aiohttp/client.py", line 63, in <module> from .client_reqrep import ClientRequest as ClientRequest File "/usr/local/lib/python3.6/site-packages/aiohttp/client_reqrep.py", line 29, in <module> from . import hdrs, helpers, http, multipart, payload File "/usr/local/lib/python3.6/site-packages/aiohttp/multipart.py", line 703, in <module> class MultipartWriter(Payload): File "/usr/local/lib/python3.6/site-packages/aiohttp/multipart.py", line 786, in MultipartWriter headers: Optional[MultiMapping[str]]=None TypeError: 'ABCMeta' object is not subscriptable Any known restriction, or what am I missing?
[ { "content": "import codecs\nimport os\nimport pathlib\nimport re\nimport sys\nfrom distutils.command.build_ext import build_ext\nfrom distutils.errors import (CCompilerError, DistutilsExecError,\n DistutilsPlatformError)\n\nfrom setuptools import Extension, setup\n\n\nif sys.version_info < (3, 5, 3):\n raise RuntimeError(\"aiohttp 3.x requires Python 3.5.3+\")\n\n\nNO_EXTENSIONS = bool(os.environ.get('AIOHTTP_NO_EXTENSIONS')) # type: bool\n\nif sys.implementation.name != \"cpython\":\n NO_EXTENSIONS = True\n\n\nhere = pathlib.Path(__file__).parent\n\nif (here / '.git').exists() and not (here / 'vendor/http-parser/README.md').exists():\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n sys.exit(2)\n\n\n# NOTE: makefile cythonizes all Cython modules\n\nextensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket.c']),\n Extension('aiohttp._http_parser',\n ['aiohttp/_http_parser.c',\n 'vendor/http-parser/http_parser.c',\n 'aiohttp/_find_header.c'],\n define_macros=[('HTTP_PARSER_STRICT', 0)],\n ),\n Extension('aiohttp._frozenlist',\n ['aiohttp/_frozenlist.c']),\n Extension('aiohttp._helpers',\n ['aiohttp/_helpers.c']),\n Extension('aiohttp._http_writer',\n ['aiohttp/_http_writer.c'])]\n\n\nclass BuildFailed(Exception):\n pass\n\n\nclass ve_build_ext(build_ext):\n # This class allows C extension building to fail.\n\n def run(self):\n try:\n build_ext.run(self)\n except (DistutilsPlatformError, FileNotFoundError):\n raise BuildFailed()\n\n def build_extension(self, ext):\n try:\n build_ext.build_extension(self, ext)\n except (CCompilerError, DistutilsExecError,\n DistutilsPlatformError, ValueError):\n raise BuildFailed()\n\n\n\ntxt = (here / 'aiohttp' / '__init__.py').read_text('utf-8')\ntry:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n txt, re.M)[0]\nexcept IndexError:\n raise RuntimeError('Unable to determine version.')\n\ninstall_requires = [\n 'attrs>=17.3.0',\n 'chardet>=2.0,<4.0',\n 'multidict>=4.0,<5.0',\n 'async_timeout>=3.0,<4.0',\n 'yarl>=1.0,<2.0',\n 'idna-ssl>=1.0; python_version<\"3.7\"',\n 'typing_extensions>=3.6.5',\n]\n\n\ndef read(f):\n return (here / f).read_text('utf-8').strip()\n\n\nargs = dict(\n name='aiohttp',\n version=version,\n description='Async http client/server framework (asyncio)',\n long_description='\\n\\n'.join((read('README.rst'), read('CHANGES.rst'))),\n long_description_content_type=\"text/x-rst\",\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Developers',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Operating System :: POSIX',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Topic :: Internet :: WWW/HTTP',\n 'Framework :: AsyncIO',\n ],\n author='Nikolay Kim',\n author_email='[email protected]',\n maintainer=', '.join(('Nikolay Kim <[email protected]>',\n 'Andrew Svetlov <[email protected]>')),\n maintainer_email='[email protected]',\n url='https://github.com/aio-libs/aiohttp',\n project_urls={\n 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby',\n 'CI: AppVeyor': 'https://ci.appveyor.com/project/aio-libs/aiohttp',\n 'CI: Circle': 'https://circleci.com/gh/aio-libs/aiohttp',\n 'CI: Shippable': 
'https://app.shippable.com/github/aio-libs/aiohttp',\n 'CI: Travis': 'https://travis-ci.com/aio-libs/aiohttp',\n 'Coverage: codecov': 'https://codecov.io/github/aio-libs/aiohttp',\n 'Docs: RTD': 'https://docs.aiohttp.org',\n 'GitHub: issues': 'https://github.com/aio-libs/aiohttp/issues',\n 'GitHub: repo': 'https://github.com/aio-libs/aiohttp',\n },\n license='Apache 2',\n packages=['aiohttp'],\n python_requires='>=3.5.3',\n install_requires=install_requires,\n extras_require={\n 'speedups': [\n 'aiodns',\n 'Brotli',\n 'cchardet',\n ],\n },\n include_package_data=True,\n)\n\nif not NO_EXTENSIONS:\n print(\"**********************\")\n print(\"* Accellerated build *\")\n print(\"**********************\")\n setup(ext_modules=extensions,\n cmdclass=dict(build_ext=ve_build_ext),\n **args)\nelse:\n print(\"*********************\")\n print(\"* Pure Python build *\")\n print(\"*********************\")\n setup(**args)\n", "path": "setup.py" } ]
[ { "content": "import codecs\nimport os\nimport pathlib\nimport re\nimport sys\nfrom distutils.command.build_ext import build_ext\nfrom distutils.errors import (CCompilerError, DistutilsExecError,\n DistutilsPlatformError)\n\nfrom setuptools import Extension, setup\n\n\nif sys.version_info < (3, 5, 3):\n raise RuntimeError(\"aiohttp 3.x requires Python 3.5.3+\")\n\n\nNO_EXTENSIONS = bool(os.environ.get('AIOHTTP_NO_EXTENSIONS')) # type: bool\n\nif sys.implementation.name != \"cpython\":\n NO_EXTENSIONS = True\n\n\nhere = pathlib.Path(__file__).parent\n\nif (here / '.git').exists() and not (here / 'vendor/http-parser/README.md').exists():\n print(\"Install submodules when building from git clone\", file=sys.stderr)\n print(\"Hint:\", file=sys.stderr)\n print(\" git submodule update --init\", file=sys.stderr)\n sys.exit(2)\n\n\n# NOTE: makefile cythonizes all Cython modules\n\nextensions = [Extension('aiohttp._websocket', ['aiohttp/_websocket.c']),\n Extension('aiohttp._http_parser',\n ['aiohttp/_http_parser.c',\n 'vendor/http-parser/http_parser.c',\n 'aiohttp/_find_header.c'],\n define_macros=[('HTTP_PARSER_STRICT', 0)],\n ),\n Extension('aiohttp._frozenlist',\n ['aiohttp/_frozenlist.c']),\n Extension('aiohttp._helpers',\n ['aiohttp/_helpers.c']),\n Extension('aiohttp._http_writer',\n ['aiohttp/_http_writer.c'])]\n\n\nclass BuildFailed(Exception):\n pass\n\n\nclass ve_build_ext(build_ext):\n # This class allows C extension building to fail.\n\n def run(self):\n try:\n build_ext.run(self)\n except (DistutilsPlatformError, FileNotFoundError):\n raise BuildFailed()\n\n def build_extension(self, ext):\n try:\n build_ext.build_extension(self, ext)\n except (CCompilerError, DistutilsExecError,\n DistutilsPlatformError, ValueError):\n raise BuildFailed()\n\n\n\ntxt = (here / 'aiohttp' / '__init__.py').read_text('utf-8')\ntry:\n version = re.findall(r\"^__version__ = '([^']+)'\\r?$\",\n txt, re.M)[0]\nexcept IndexError:\n raise RuntimeError('Unable to determine version.')\n\ninstall_requires = [\n 'attrs>=17.3.0',\n 'chardet>=2.0,<4.0',\n 'multidict>=4.5,<5.0',\n 'async_timeout>=3.0,<4.0',\n 'yarl>=1.0,<2.0',\n 'idna-ssl>=1.0; python_version<\"3.7\"',\n 'typing_extensions>=3.6.5',\n]\n\n\ndef read(f):\n return (here / f).read_text('utf-8').strip()\n\n\nargs = dict(\n name='aiohttp',\n version=version,\n description='Async http client/server framework (asyncio)',\n long_description='\\n\\n'.join((read('README.rst'), read('CHANGES.rst'))),\n long_description_content_type=\"text/x-rst\",\n classifiers=[\n 'License :: OSI Approved :: Apache Software License',\n 'Intended Audience :: Developers',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Operating System :: POSIX',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Topic :: Internet :: WWW/HTTP',\n 'Framework :: AsyncIO',\n ],\n author='Nikolay Kim',\n author_email='[email protected]',\n maintainer=', '.join(('Nikolay Kim <[email protected]>',\n 'Andrew Svetlov <[email protected]>')),\n maintainer_email='[email protected]',\n url='https://github.com/aio-libs/aiohttp',\n project_urls={\n 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby',\n 'CI: AppVeyor': 'https://ci.appveyor.com/project/aio-libs/aiohttp',\n 'CI: Circle': 'https://circleci.com/gh/aio-libs/aiohttp',\n 'CI: Shippable': 
'https://app.shippable.com/github/aio-libs/aiohttp',\n 'CI: Travis': 'https://travis-ci.com/aio-libs/aiohttp',\n 'Coverage: codecov': 'https://codecov.io/github/aio-libs/aiohttp',\n 'Docs: RTD': 'https://docs.aiohttp.org',\n 'GitHub: issues': 'https://github.com/aio-libs/aiohttp/issues',\n 'GitHub: repo': 'https://github.com/aio-libs/aiohttp',\n },\n license='Apache 2',\n packages=['aiohttp'],\n python_requires='>=3.5.3',\n install_requires=install_requires,\n extras_require={\n 'speedups': [\n 'aiodns',\n 'Brotli',\n 'cchardet',\n ],\n },\n include_package_data=True,\n)\n\nif not NO_EXTENSIONS:\n print(\"**********************\")\n print(\"* Accellerated build *\")\n print(\"**********************\")\n setup(ext_modules=extensions,\n cmdclass=dict(build_ext=ve_build_ext),\n **args)\nelse:\n print(\"*********************\")\n print(\"* Pure Python build *\")\n print(\"*********************\")\n setup(**args)\n", "path": "setup.py" } ]
diff --git a/CHANGES/4057.bugfix b/CHANGES/4057.bugfix new file mode 100644 index 00000000000..990694930eb --- /dev/null +++ b/CHANGES/4057.bugfix @@ -0,0 +1 @@ +Update multidict requirement to >= 4.5 diff --git a/setup.py b/setup.py index 6d87f90b991..2cff742fd67 100644 --- a/setup.py +++ b/setup.py @@ -78,7 +78,7 @@ def build_extension(self, ext): install_requires = [ 'attrs>=17.3.0', 'chardet>=2.0,<4.0', - 'multidict>=4.0,<5.0', + 'multidict>=4.5,<5.0', 'async_timeout>=3.0,<4.0', 'yarl>=1.0,<2.0', 'idna-ssl>=1.0; python_version<"3.7"',
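The traceback in this record dies on `MultiMapping[str]` at import time, a generic subscription that older multidict releases do not provide, which is why the diff raises the floor to `multidict>=4.5`. A quick diagnostic along those lines might look like the sketch below; it is an illustration, not part of the fix.

```python
import multidict
from multidict import MultiMapping

print("multidict", multidict.__version__)
try:
    MultiMapping[str]   # the expression the aiohttp/multipart.py annotation uses
except TypeError:
    print("multidict is too old for aiohttp 3.6 type hints; upgrade to >= 4.5")
else:
    print("MultiMapping is subscriptable; the aiohttp import should get past this point")
```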
falconry__falcon-1946
Deprecate falcon.api_helpers See https://github.com/falconry/falcon/issues/1902. Starting with 3.1, mark `falcon.api_helpers` as deprecated. We could employ module-level `__getattr__` or redecorate re-imported functions.
[ { "content": "from .app_helpers import * # NOQA\n\n# TODO deprecate\n# import warnings\n# from .util.deprecation import DeprecatedWarning\n\n# warnings.warn('The api_helpers module was renamed to app_helpers.', DeprecatedWarning)\n", "path": "falcon/api_helpers.py" } ]
[ { "content": "import warnings\n\nfrom .app_helpers import * # NOQA\nfrom .util.deprecation import DeprecatedWarning\n\nwarnings.warn('The api_helpers module was renamed to app_helpers.', DeprecatedWarning)\n", "path": "falcon/api_helpers.py" } ]
diff --git a/falcon/api_helpers.py b/falcon/api_helpers.py index 3093856e1..23328b347 100644 --- a/falcon/api_helpers.py +++ b/falcon/api_helpers.py @@ -1,7 +1,6 @@ -from .app_helpers import * # NOQA +import warnings -# TODO deprecate -# import warnings -# from .util.deprecation import DeprecatedWarning +from .app_helpers import * # NOQA +from .util.deprecation import DeprecatedWarning -# warnings.warn('The api_helpers module was renamed to app_helpers.', DeprecatedWarning) +warnings.warn('The api_helpers module was renamed to app_helpers.', DeprecatedWarning) diff --git a/tests/test_deprecations.py b/tests/test_deprecations.py index 95dd5bb81..d33917f11 100644 --- a/tests/test_deprecations.py +++ b/tests/test_deprecations.py @@ -2,7 +2,7 @@ from falcon import app_helpers, request_helpers, stream -# from _util import has_cython +from _util import has_cython # NOQA def test_bounded_stream(): @@ -18,17 +18,16 @@ def test_imports(self): for name in app_helpers.__all__: assert getattr(api_helpers, name) is getattr(app_helpers, name) - # TODO enable test of deprecation - # @pytest.mark.skipif( - # has_cython, - # reason='Reloading modules on Cython does not work', - # ) - # def test_warning(self): - # import importlib + @pytest.mark.skipif( + has_cython, + reason='Reloading modules on Cython does not work', + ) + def test_warning(self): + import importlib - # from falcon.util.deprecation import DeprecatedWarning + from falcon.util.deprecation import DeprecatedWarning - # with pytest.warns(DeprecatedWarning, match='The api_helpers'): - # from falcon import api_helpers + with pytest.warns(DeprecatedWarning, match='The api_helpers'): + from falcon import api_helpers - # importlib.reload(api_helpers) + importlib.reload(api_helpers)
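As the diff shows, the merged change warns once at import time. The issue also mentions a module-level `__getattr__` (PEP 562, Python 3.7+) as an alternative that would warn on attribute access instead; a hypothetical sketch of that variant, reusing only the names visible above (`app_helpers`, `DeprecatedWarning`), could look like this:

```python
# Hypothetical falcon/api_helpers.py shim -- not the implementation that was merged.
import warnings

from . import app_helpers
from .util.deprecation import DeprecatedWarning


def __getattr__(name):
    if hasattr(app_helpers, name):
        warnings.warn(
            'The api_helpers module was renamed to app_helpers.',
            DeprecatedWarning,
            stacklevel=2,
        )
        return getattr(app_helpers, name)
    raise AttributeError("module 'falcon.api_helpers' has no attribute %r" % name)
```

The trade-off is that the warning then points at each attribute access site rather than firing a single time when the module is first imported.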
ansible-collections__community.aws-389
UnboundLocalError in sqs_queue

##### SUMMARY
Copied the "Create FIFO queue" example from the documentation, and it fails to run with an UnboundLocalError.

##### ISSUE TYPE
- Bug Report

##### COMPONENT NAME
sqs_queue

##### ANSIBLE VERSION
```
ansible 2.9.10
  config file = /home/mstudd/.ansible.cfg
  configured module search path = ['/home/mstudd/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.8/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.3 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)]
```

##### CONFIGURATION
```
DEFAULT_VAULT_IDENTITY_LIST(env: ANSIBLE_VAULT_IDENTITY_LIST) = ['ops@~/.vault/ops', 'dev@~/.vault/dev']
```

##### OS / ENVIRONMENT
Fedora 32 x86_64

##### STEPS TO REPRODUCE
```yaml
---
- hosts: localhost
  tasks:
    - community.aws.sqs_queue:
        name: fifo-queue
        region: us-east-1
        queue_type: fifo
        content_based_deduplication: yes
```

##### EXPECTED RESULTS
ansible-playbook creates the SQS queue as described (or reports a relevant auth error if AWS creds aren't correct).

##### ACTUAL RESULTS
```
  config file = /home/mstudd/dev/git/ansible/ansible.cfg
  configured module search path = ['/home/mstudd/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.8/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 3.8.3 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)]
Using /home/mstudd/dev/git/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/mstudd/dev/git/ansible/inventories/dev/hosts as it did not pass its verify_file() method
script declined parsing /home/mstudd/dev/git/ansible/inventories/dev/hosts as it did not pass its verify_file() method
auto declined parsing /home/mstudd/dev/git/ansible/inventories/dev/hosts as it did not pass its verify_file() method
Set default localhost to localhost
Parsed /home/mstudd/dev/git/ansible/inventories/dev/hosts inventory source with ini plugin
[WARNING]: Found both group and host with same name: api-charts
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.8/site-packages/ansible/plugins/callback/default.py

PLAYBOOK: test.yml **********************************************************************************************************
Positional arguments: test.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/home/mstudd/dev/git/ansible/inventories/dev/hosts',)
forks: 5
1 plays in test.yml

PLAY [localhost] ************************************************************************************************************
Trying secret FileVaultSecret(filename='/home/mstudd/.vault/ops') for vault_id=ops
Tried to use the vault secret (ops) to decrypt (/home/mstudd/dev/git/ansible/inventories/dev/group_vars/all/secrets.yml) but it failed. Error: HMAC verification failed: Signature did not match digest.
Trying secret FileVaultSecret(filename='/home/mstudd/.vault/dev') for vault_id=dev

TASK [Gathering Facts] ******************************************************************************************************
task path: /home/mstudd/dev/git/ansible/test.yml:3
<localhost.dev> ESTABLISH LOCAL CONNECTION FOR USER: mstudd
<localhost.dev> EXEC /bin/sh -c 'echo ~mstudd && sleep 0'
<localhost.dev> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/mstudd/.ansible/tmp `"&& mkdir /home/mstudd/.ansible/tmp/ansible-tmp-1596227662.5343304-234250-22361424934479 && echo ansible-tmp-1596227662.5343304-234250-22361424934479="` echo /home/mstudd/.ansible/tmp/ansible-tmp-1596227662.5343304-234250-22361424934479 `" ) && sleep 0'
<localhost> Attempting python interpreter discovery
<localhost.dev> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<localhost.dev> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
Using module file /usr/lib/python3.8/site-packages/ansible/modules/system/setup.py
<localhost.dev> PUT /home/mstudd/.ansible/tmp/ansible-local-234246mzlywzby/tmphl8okmrr TO /home/mstudd/.ansible/tmp/ansible-tmp-1596227662.5343304-234250-22361424934479/AnsiballZ_setup.py
<localhost.dev> EXEC /bin/sh -c 'chmod u+x /home/mstudd/.ansible/tmp/ansible-tmp-1596227662.5343304-234250-22361424934479/ /home/mstudd/.ansible/tmp/ansible-tmp-1596227662.5343304-234250-22361424934479/AnsiballZ_setup.py && sleep 0'
<localhost.dev> EXEC /bin/sh -c '/usr/bin/python3 /home/mstudd/.ansible/tmp/ansible-tmp-1596227662.5343304-234250-22361424934479/AnsiballZ_setup.py && sleep 0'
<localhost.dev> EXEC /bin/sh -c 'rm -f -r /home/mstudd/.ansible/tmp/ansible-tmp-1596227662.5343304-234250-22361424934479/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers

TASK [community.aws.sqs_queue] **********************************************************************************************
task path: /home/mstudd/dev/git/ansible/test.yml:5
<localhost.dev> ESTABLISH LOCAL CONNECTION FOR USER: mstudd
<localhost.dev> EXEC /bin/sh -c 'echo ~mstudd && sleep 0'
<localhost.dev> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/mstudd/.ansible/tmp `"&& mkdir /home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761 && echo ansible-tmp-1596227663.4545841-234299-215745550363761="` echo /home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761 `" ) && sleep 0'
Using module file /home/mstudd/dev/git/ansible/galaxy-roles/ansible_collections/community/aws/plugins/modules/sqs_queue.py
<localhost.dev> PUT /home/mstudd/.ansible/tmp/ansible-local-234246mzlywzby/tmpifp2ngt5 TO /home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/AnsiballZ_sqs_queue.py
<localhost.dev> EXEC /bin/sh -c 'chmod u+x /home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/ /home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/AnsiballZ_sqs_queue.py && sleep 0'
<localhost.dev> EXEC /bin/sh -c '/usr/bin/python3 /home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/AnsiballZ_sqs_queue.py && sleep 0'
<localhost.dev> EXEC /bin/sh -c 'rm -f -r /home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
Traceback (most recent call last):
  File "/home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/AnsiballZ_sqs_queue.py", line 102, in <module>
    _ansiballz_main()
  File "/home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/AnsiballZ_sqs_queue.py", line 94, in _ansiballz_main
    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
  File "/home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/AnsiballZ_sqs_queue.py", line 40, in invoke_module
    runpy.run_module(mod_name='ansible_collections.community.aws.plugins.modules.sqs_queue', init_globals=None, run_name='__main__', alter_sys=True)
  File "/usr/lib64/python3.8/runpy.py", line 207, in run_module
    return _run_module_code(code, init_globals, run_name, mod_spec)
  File "/usr/lib64/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/lib64/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/tmp/ansible_community.aws.sqs_queue_payload_vm5yiveu/ansible_community.aws.sqs_queue_payload.zip/ansible_collections/community/aws/plugins/modules/sqs_queue.py", line 474, in <module>
  File "/tmp/ansible_community.aws.sqs_queue_payload_vm5yiveu/ansible_community.aws.sqs_queue_payload.zip/ansible_collections/community/aws/plugins/modules/sqs_queue.py", line 464, in main
  File "/tmp/ansible_community.aws.sqs_queue_payload_vm5yiveu/ansible_community.aws.sqs_queue_payload.zip/ansible_collections/community/aws/plugins/modules/sqs_queue.py", line 311, in create_or_update_sqs_queue
  File "/tmp/ansible_community.aws.sqs_queue_payload_vm5yiveu/ansible_community.aws.sqs_queue_payload.zip/ansible_collections/community/aws/plugins/modules/sqs_queue.py", line 377, in update_sqs_queue
UnboundLocalError: local variable 'existing_value' referenced before assignment
fatal: [localhost]: FAILED! => {
    "changed": false,
    "module_stderr": "Traceback (most recent call last):\n File \"/home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/AnsiballZ_sqs_queue.py\", line 102, in <module>\n _ansiballz_main()\n File \"/home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/AnsiballZ_sqs_queue.py\", line 94, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/home/mstudd/.ansible/tmp/ansible-tmp-1596227663.4545841-234299-215745550363761/AnsiballZ_sqs_queue.py\", line 40, in invoke_module\n runpy.run_module(mod_name='ansible_collections.community.aws.plugins.modules.sqs_queue', init_globals=None, run_name='__main__', alter_sys=True)\n File \"/usr/lib64/python3.8/runpy.py\", line 207, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.8/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib64/python3.8/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_community.aws.sqs_queue_payload_vm5yiveu/ansible_community.aws.sqs_queue_payload.zip/ansible_collections/community/aws/plugins/modules/sqs_queue.py\", line 474, in <module>\n File \"/tmp/ansible_community.aws.sqs_queue_payload_vm5yiveu/ansible_community.aws.sqs_queue_payload.zip/ansible_collections/community/aws/plugins/modules/sqs_queue.py\", line 464, in main\n File \"/tmp/ansible_community.aws.sqs_queue_payload_vm5yiveu/ansible_community.aws.sqs_queue_payload.zip/ansible_collections/community/aws/plugins/modules/sqs_queue.py\", line 311, in create_or_update_sqs_queue\n File \"/tmp/ansible_community.aws.sqs_queue_payload_vm5yiveu/ansible_community.aws.sqs_queue_payload.zip/ansible_collections/community/aws/plugins/modules/sqs_queue.py\", line 377, in update_sqs_queue\nUnboundLocalError: local variable 'existing_value' referenced before assignment\n",
    "module_stdout": "",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}

PLAY RECAP ******************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
[ { "content": "#!/usr/bin/python\n# Copyright: Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: sqs_queue\nversion_added: 1.0.0\nshort_description: Creates or deletes AWS SQS queues\ndescription:\n - Create or delete AWS SQS queues.\n - Update attributes on existing queues.\nauthor:\n - Alan Loi (@loia)\n - Fernando Jose Pando (@nand0p)\n - Nadir Lloret (@nadirollo)\n - Dennis Podkovyrin (@sbj-ss)\nrequirements:\n - boto3\noptions:\n state:\n description:\n - Create or delete the queue.\n choices: ['present', 'absent']\n default: 'present'\n type: str\n name:\n description:\n - Name of the queue.\n required: true\n type: str\n queue_type:\n description:\n - Standard or FIFO queue.\n - I(queue_type) can only be set at queue creation and will otherwise be\n ignored.\n choices: ['standard', 'fifo']\n default: 'standard'\n type: str\n visibility_timeout:\n description:\n - The default visibility timeout in seconds.\n aliases: [default_visibility_timeout]\n type: int\n message_retention_period:\n description:\n - The message retention period in seconds.\n type: int\n maximum_message_size:\n description:\n - The maximum message size in bytes.\n type: int\n delay_seconds:\n description:\n - The delivery delay in seconds.\n aliases: [delivery_delay]\n type: int\n receive_message_wait_time_seconds:\n description:\n - The receive message wait time in seconds.\n aliases: [receive_message_wait_time]\n type: int\n policy:\n description:\n - The JSON dict policy to attach to queue.\n type: dict\n redrive_policy:\n description:\n - JSON dict with the redrive_policy (see example).\n type: dict\n kms_master_key_id:\n description:\n - The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK.\n type: str\n kms_data_key_reuse_period_seconds:\n description:\n - The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again.\n aliases: [kms_data_key_reuse_period]\n type: int\n content_based_deduplication:\n type: bool\n description:\n - Enables content-based deduplication. Used for FIFOs only.\n - Defaults to C(false).\n tags:\n description:\n - Tag dict to apply to the queue (requires botocore 1.5.40 or above).\n - To remove all tags set I(tags={}) and I(purge_tags=true).\n type: dict\n purge_tags:\n description:\n - Remove tags not listed in I(tags).\n type: bool\n default: false\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n\n'''\n\nRETURN = '''\ncontent_based_deduplication:\n description: Enables content-based deduplication. 
Used for FIFOs only.\n type: bool\n returned: always\n sample: True\nvisibility_timeout:\n description: The default visibility timeout in seconds.\n type: int\n returned: always\n sample: 30\ndelay_seconds:\n description: The delivery delay in seconds.\n type: int\n returned: always\n sample: 0\nkms_master_key_id:\n description: The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK.\n type: str\n returned: always\n sample: alias/MyAlias\nkms_data_key_reuse_period_seconds:\n description: The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again.\n type: int\n returned: always\n sample: 300\nmaximum_message_size:\n description: The maximum message size in bytes.\n type: int\n returned: always\n sample: 262144\nmessage_retention_period:\n description: The message retention period in seconds.\n type: int\n returned: always\n sample: 345600\nname:\n description: Name of the SQS Queue\n type: str\n returned: always\n sample: \"queuename-987d2de0\"\nqueue_arn:\n description: The queue's Amazon resource name (ARN).\n type: str\n returned: on success\n sample: 'arn:aws:sqs:us-east-1:199999999999:queuename-987d2de0'\nqueue_url:\n description: URL to access the queue\n type: str\n returned: on success\n sample: 'https://queue.amazonaws.com/123456789012/MyQueue'\nreceive_message_wait_time_seconds:\n description: The receive message wait time in seconds.\n type: int\n returned: always\n sample: 0\nregion:\n description: Region that the queue was created within\n type: str\n returned: always\n sample: 'us-east-1'\ntags:\n description: List of queue tags\n type: dict\n returned: always\n sample: '{\"Env\": \"prod\"}'\n'''\n\nEXAMPLES = '''\n- name: Create SQS queue with redrive policy\n community.aws.sqs_queue:\n name: my-queue\n region: ap-southeast-2\n default_visibility_timeout: 120\n message_retention_period: 86400\n maximum_message_size: 1024\n delivery_delay: 30\n receive_message_wait_time: 20\n policy: \"{{ json_dict }}\"\n redrive_policy:\n maxReceiveCount: 5\n deadLetterTargetArn: arn:aws:sqs:eu-west-1:123456789012:my-dead-queue\n\n- name: Drop redrive policy\n community.aws.sqs_queue:\n name: my-queue\n region: ap-southeast-2\n redrive_policy: {}\n\n- name: Create FIFO queue\n community.aws.sqs_queue:\n name: fifo-queue\n region: ap-southeast-2\n queue_type: fifo\n content_based_deduplication: yes\n\n- name: Tag queue\n community.aws.sqs_queue:\n name: fifo-queue\n region: ap-southeast-2\n tags:\n example: SomeValue\n\n- name: Configure Encryption, automatically uses a new data key every hour\n community.aws.sqs_queue:\n name: fifo-queue\n region: ap-southeast-2\n kms_master_key_id: alias/MyQueueKey\n kms_data_key_reuse_period_seconds: 3600\n\n- name: Delete SQS queue\n community.aws.sqs_queue:\n name: my-queue\n region: ap-southeast-2\n state: absent\n'''\n\nimport json\n\ntry:\n import botocore\nexcept ImportError:\n pass # handled by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\nfrom ansible.module_utils.common.dict_transformations import snake_dict_to_camel_dict\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import compare_aws_tags\nfrom 
ansible_collections.amazon.aws.plugins.module_utils.ec2 import compare_policies\n\n\ndef get_queue_name(module, is_fifo=False):\n name = module.params.get('name')\n if not is_fifo or name.endswith('.fifo'):\n return name\n return name + '.fifo'\n\n\n# NonExistentQueue is explicitly expected when a queue doesn't exist\[email protected]_backoff()\ndef get_queue_url(client, name):\n try:\n return client.get_queue_url(QueueName=name)['QueueUrl']\n except is_boto3_error_code('AWS.SimpleQueueService.NonExistentQueue'):\n return None\n\n\ndef describe_queue(client, queue_url):\n \"\"\"\n Description a queue in snake format\n \"\"\"\n attributes = client.get_queue_attributes(QueueUrl=queue_url, AttributeNames=['All'], aws_retry=True)['Attributes']\n description = dict(attributes)\n description.pop('Policy', None)\n description.pop('RedrivePolicy', None)\n description = camel_dict_to_snake_dict(description)\n description['policy'] = attributes.get('Policy', None)\n description['redrive_policy'] = attributes.get('RedrivePolicy', None)\n\n # Boto3 returns everything as a string, convert them back to integers/dicts if\n # that's what we expected.\n for key, value in description.items():\n if value is None:\n continue\n\n if key in ['policy', 'redrive_policy']:\n policy = json.loads(value)\n description[key] = policy\n continue\n\n if key == 'content_based_deduplication':\n try:\n description[key] = bool(value)\n except (TypeError, ValueError):\n pass\n\n try:\n if value == str(int(value)):\n description[key] = int(value)\n except (TypeError, ValueError):\n pass\n\n return description\n\n\ndef create_or_update_sqs_queue(client, module):\n is_fifo = (module.params.get('queue_type') == 'fifo')\n queue_name = get_queue_name(module, is_fifo)\n result = dict(\n name=queue_name,\n region=module.params.get('region'),\n changed=False,\n )\n\n queue_url = get_queue_url(client, queue_name)\n result['queue_url'] = queue_url\n\n if not queue_url:\n create_attributes = {'FifoQueue': 'true'} if is_fifo else {}\n result['changed'] = True\n if module.check_mode:\n return result\n queue_url = client.create_queue(QueueName=queue_name, Attributes=create_attributes, aws_retry=True)['QueueUrl']\n\n changed, arn = update_sqs_queue(module, client, queue_url)\n result['changed'] |= changed\n result['queue_arn'] = arn\n\n changed, tags = update_tags(client, queue_url, module)\n result['changed'] |= changed\n result['tags'] = tags\n\n result.update(describe_queue(client, queue_url))\n\n COMPATABILITY_KEYS = dict(\n delay_seconds='delivery_delay',\n receive_message_wait_time_seconds='receive_message_wait_time',\n visibility_timeout='default_visibility_timeout',\n kms_data_key_reuse_period_seconds='kms_data_key_reuse_period',\n )\n for key in list(result.keys()):\n\n # The return values changed between boto and boto3, add the old keys too\n # for backwards compatibility\n return_name = COMPATABILITY_KEYS.get(key)\n if return_name:\n result[return_name] = result.get(key)\n\n return result\n\n\ndef update_sqs_queue(module, client, queue_url):\n check_mode = module.check_mode\n changed = False\n existing_attributes = client.get_queue_attributes(QueueUrl=queue_url, AttributeNames=['All'], aws_retry=True)['Attributes']\n new_attributes = snake_dict_to_camel_dict(module.params, capitalize_first=True)\n attributes_to_set = dict()\n\n # Boto3 SQS deals with policies as strings, we want to deal with them as\n # dicts\n if module.params.get('policy') is not None:\n policy = module.params.get('policy')\n current_value = 
existing_attributes.get('Policy', '{}')\n current_policy = json.loads(current_value)\n if compare_policies(current_policy, policy):\n attributes_to_set['Policy'] = json.dumps(policy)\n changed = True\n if module.params.get('redrive_policy') is not None:\n policy = module.params.get('redrive_policy')\n current_value = existing_attributes.get('RedrivePolicy', '{}')\n current_policy = json.loads(current_value)\n if compare_policies(current_policy, policy):\n attributes_to_set['RedrivePolicy'] = json.dumps(policy)\n changed = True\n\n for attribute, value in existing_attributes.items():\n # We handle these as a special case because they're IAM policies\n if attribute in ['Policy', 'RedrivePolicy']:\n continue\n\n if attribute not in new_attributes.keys():\n continue\n\n if new_attributes.get(attribute) is None:\n continue\n\n new_value = new_attributes[attribute]\n\n if isinstance(new_value, bool):\n new_value = str(new_value).lower()\n existing_value = str(existing_value).lower()\n\n if new_value == value:\n continue\n\n # Boto3 expects strings\n attributes_to_set[attribute] = str(new_value)\n changed = True\n\n if changed and not check_mode:\n client.set_queue_attributes(QueueUrl=queue_url, Attributes=attributes_to_set, aws_retry=True)\n\n return changed, existing_attributes.get('queue_arn'),\n\n\ndef delete_sqs_queue(client, module):\n is_fifo = (module.params.get('queue_type') == 'fifo')\n queue_name = get_queue_name(module, is_fifo)\n result = dict(\n name=queue_name,\n region=module.params.get('region'),\n changed=False\n )\n\n queue_url = get_queue_url(client, queue_name)\n if not queue_url:\n return result\n\n result['changed'] = bool(queue_url)\n if not module.check_mode:\n AWSRetry.jittered_backoff()(client.delete_queue)(QueueUrl=queue_url)\n\n return result\n\n\ndef update_tags(client, queue_url, module):\n new_tags = module.params.get('tags')\n purge_tags = module.params.get('purge_tags')\n if new_tags is None:\n return False, {}\n\n try:\n existing_tags = client.list_queue_tags(QueueUrl=queue_url, aws_retry=True)['Tags']\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError, KeyError) as e:\n existing_tags = {}\n\n tags_to_add, tags_to_remove = compare_aws_tags(existing_tags, new_tags, purge_tags=purge_tags)\n\n if not module.check_mode:\n if tags_to_remove:\n client.untag_queue(QueueUrl=queue_url, TagKeys=tags_to_remove, aws_retry=True)\n if tags_to_add:\n client.tag_queue(QueueUrl=queue_url, Tags=tags_to_add)\n existing_tags = client.list_queue_tags(QueueUrl=queue_url, aws_retry=True).get('Tags', {})\n else:\n existing_tags = new_tags\n\n changed = bool(tags_to_remove) or bool(tags_to_add)\n return changed, existing_tags\n\n\ndef main():\n\n argument_spec = dict(\n state=dict(type='str', default='present', choices=['present', 'absent']),\n name=dict(type='str', required=True),\n queue_type=dict(type='str', default='standard', choices=['standard', 'fifo']),\n delay_seconds=dict(type='int', aliases=['delivery_delay']),\n maximum_message_size=dict(type='int'),\n message_retention_period=dict(type='int'),\n policy=dict(type='dict'),\n receive_message_wait_time_seconds=dict(type='int', aliases=['receive_message_wait_time']),\n redrive_policy=dict(type='dict'),\n visibility_timeout=dict(type='int', aliases=['default_visibility_timeout']),\n kms_master_key_id=dict(type='str'),\n kms_data_key_reuse_period_seconds=dict(type='int', aliases=['kms_data_key_reuse_period'], no_log=False),\n content_based_deduplication=dict(type='bool'),\n tags=dict(type='dict'),\n 
purge_tags=dict(type='bool', default=False),\n )\n module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True)\n\n state = module.params.get('state')\n retry_decorator = AWSRetry.jittered_backoff(catch_extra_error_codes=['AWS.SimpleQueueService.NonExistentQueue'])\n try:\n client = module.client('sqs', retry_decorator=retry_decorator)\n if state == 'present':\n result = create_or_update_sqs_queue(client, module)\n elif state == 'absent':\n result = delete_sqs_queue(client, module)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to control sqs queue')\n else:\n module.exit_json(**result)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/sqs_queue.py" } ]
[ { "content": "#!/usr/bin/python\n# Copyright: Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: sqs_queue\nversion_added: 1.0.0\nshort_description: Creates or deletes AWS SQS queues\ndescription:\n - Create or delete AWS SQS queues.\n - Update attributes on existing queues.\nauthor:\n - Alan Loi (@loia)\n - Fernando Jose Pando (@nand0p)\n - Nadir Lloret (@nadirollo)\n - Dennis Podkovyrin (@sbj-ss)\nrequirements:\n - boto3\noptions:\n state:\n description:\n - Create or delete the queue.\n choices: ['present', 'absent']\n default: 'present'\n type: str\n name:\n description:\n - Name of the queue.\n required: true\n type: str\n queue_type:\n description:\n - Standard or FIFO queue.\n - I(queue_type) can only be set at queue creation and will otherwise be\n ignored.\n choices: ['standard', 'fifo']\n default: 'standard'\n type: str\n visibility_timeout:\n description:\n - The default visibility timeout in seconds.\n aliases: [default_visibility_timeout]\n type: int\n message_retention_period:\n description:\n - The message retention period in seconds.\n type: int\n maximum_message_size:\n description:\n - The maximum message size in bytes.\n type: int\n delay_seconds:\n description:\n - The delivery delay in seconds.\n aliases: [delivery_delay]\n type: int\n receive_message_wait_time_seconds:\n description:\n - The receive message wait time in seconds.\n aliases: [receive_message_wait_time]\n type: int\n policy:\n description:\n - The JSON dict policy to attach to queue.\n type: dict\n redrive_policy:\n description:\n - JSON dict with the redrive_policy (see example).\n type: dict\n kms_master_key_id:\n description:\n - The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK.\n type: str\n kms_data_key_reuse_period_seconds:\n description:\n - The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again.\n aliases: [kms_data_key_reuse_period]\n type: int\n content_based_deduplication:\n type: bool\n description:\n - Enables content-based deduplication. Used for FIFOs only.\n - Defaults to C(false).\n tags:\n description:\n - Tag dict to apply to the queue (requires botocore 1.5.40 or above).\n - To remove all tags set I(tags={}) and I(purge_tags=true).\n type: dict\n purge_tags:\n description:\n - Remove tags not listed in I(tags).\n type: bool\n default: false\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n\n'''\n\nRETURN = '''\ncontent_based_deduplication:\n description: Enables content-based deduplication. 
Used for FIFOs only.\n type: bool\n returned: always\n sample: True\nvisibility_timeout:\n description: The default visibility timeout in seconds.\n type: int\n returned: always\n sample: 30\ndelay_seconds:\n description: The delivery delay in seconds.\n type: int\n returned: always\n sample: 0\nkms_master_key_id:\n description: The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK.\n type: str\n returned: always\n sample: alias/MyAlias\nkms_data_key_reuse_period_seconds:\n description: The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again.\n type: int\n returned: always\n sample: 300\nmaximum_message_size:\n description: The maximum message size in bytes.\n type: int\n returned: always\n sample: 262144\nmessage_retention_period:\n description: The message retention period in seconds.\n type: int\n returned: always\n sample: 345600\nname:\n description: Name of the SQS Queue\n type: str\n returned: always\n sample: \"queuename-987d2de0\"\nqueue_arn:\n description: The queue's Amazon resource name (ARN).\n type: str\n returned: on success\n sample: 'arn:aws:sqs:us-east-1:199999999999:queuename-987d2de0'\nqueue_url:\n description: URL to access the queue\n type: str\n returned: on success\n sample: 'https://queue.amazonaws.com/123456789012/MyQueue'\nreceive_message_wait_time_seconds:\n description: The receive message wait time in seconds.\n type: int\n returned: always\n sample: 0\nregion:\n description: Region that the queue was created within\n type: str\n returned: always\n sample: 'us-east-1'\ntags:\n description: List of queue tags\n type: dict\n returned: always\n sample: '{\"Env\": \"prod\"}'\n'''\n\nEXAMPLES = '''\n- name: Create SQS queue with redrive policy\n community.aws.sqs_queue:\n name: my-queue\n region: ap-southeast-2\n default_visibility_timeout: 120\n message_retention_period: 86400\n maximum_message_size: 1024\n delivery_delay: 30\n receive_message_wait_time: 20\n policy: \"{{ json_dict }}\"\n redrive_policy:\n maxReceiveCount: 5\n deadLetterTargetArn: arn:aws:sqs:eu-west-1:123456789012:my-dead-queue\n\n- name: Drop redrive policy\n community.aws.sqs_queue:\n name: my-queue\n region: ap-southeast-2\n redrive_policy: {}\n\n- name: Create FIFO queue\n community.aws.sqs_queue:\n name: fifo-queue\n region: ap-southeast-2\n queue_type: fifo\n content_based_deduplication: yes\n\n- name: Tag queue\n community.aws.sqs_queue:\n name: fifo-queue\n region: ap-southeast-2\n tags:\n example: SomeValue\n\n- name: Configure Encryption, automatically uses a new data key every hour\n community.aws.sqs_queue:\n name: fifo-queue\n region: ap-southeast-2\n kms_master_key_id: alias/MyQueueKey\n kms_data_key_reuse_period_seconds: 3600\n\n- name: Delete SQS queue\n community.aws.sqs_queue:\n name: my-queue\n region: ap-southeast-2\n state: absent\n'''\n\nimport json\n\ntry:\n import botocore\nexcept ImportError:\n pass # handled by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\nfrom ansible.module_utils.common.dict_transformations import snake_dict_to_camel_dict\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import compare_aws_tags\nfrom 
ansible_collections.amazon.aws.plugins.module_utils.ec2 import compare_policies\n\n\ndef get_queue_name(module, is_fifo=False):\n name = module.params.get('name')\n if not is_fifo or name.endswith('.fifo'):\n return name\n return name + '.fifo'\n\n\n# NonExistentQueue is explicitly expected when a queue doesn't exist\[email protected]_backoff()\ndef get_queue_url(client, name):\n try:\n return client.get_queue_url(QueueName=name)['QueueUrl']\n except is_boto3_error_code('AWS.SimpleQueueService.NonExistentQueue'):\n return None\n\n\ndef describe_queue(client, queue_url):\n \"\"\"\n Description a queue in snake format\n \"\"\"\n attributes = client.get_queue_attributes(QueueUrl=queue_url, AttributeNames=['All'], aws_retry=True)['Attributes']\n description = dict(attributes)\n description.pop('Policy', None)\n description.pop('RedrivePolicy', None)\n description = camel_dict_to_snake_dict(description)\n description['policy'] = attributes.get('Policy', None)\n description['redrive_policy'] = attributes.get('RedrivePolicy', None)\n\n # Boto3 returns everything as a string, convert them back to integers/dicts if\n # that's what we expected.\n for key, value in description.items():\n if value is None:\n continue\n\n if key in ['policy', 'redrive_policy']:\n policy = json.loads(value)\n description[key] = policy\n continue\n\n if key == 'content_based_deduplication':\n try:\n description[key] = bool(value)\n except (TypeError, ValueError):\n pass\n\n try:\n if value == str(int(value)):\n description[key] = int(value)\n except (TypeError, ValueError):\n pass\n\n return description\n\n\ndef create_or_update_sqs_queue(client, module):\n is_fifo = (module.params.get('queue_type') == 'fifo')\n queue_name = get_queue_name(module, is_fifo)\n result = dict(\n name=queue_name,\n region=module.params.get('region'),\n changed=False,\n )\n\n queue_url = get_queue_url(client, queue_name)\n result['queue_url'] = queue_url\n\n if not queue_url:\n create_attributes = {'FifoQueue': 'true'} if is_fifo else {}\n result['changed'] = True\n if module.check_mode:\n return result\n queue_url = client.create_queue(QueueName=queue_name, Attributes=create_attributes, aws_retry=True)['QueueUrl']\n\n changed, arn = update_sqs_queue(module, client, queue_url)\n result['changed'] |= changed\n result['queue_arn'] = arn\n\n changed, tags = update_tags(client, queue_url, module)\n result['changed'] |= changed\n result['tags'] = tags\n\n result.update(describe_queue(client, queue_url))\n\n COMPATABILITY_KEYS = dict(\n delay_seconds='delivery_delay',\n receive_message_wait_time_seconds='receive_message_wait_time',\n visibility_timeout='default_visibility_timeout',\n kms_data_key_reuse_period_seconds='kms_data_key_reuse_period',\n )\n for key in list(result.keys()):\n\n # The return values changed between boto and boto3, add the old keys too\n # for backwards compatibility\n return_name = COMPATABILITY_KEYS.get(key)\n if return_name:\n result[return_name] = result.get(key)\n\n return result\n\n\ndef update_sqs_queue(module, client, queue_url):\n check_mode = module.check_mode\n changed = False\n existing_attributes = client.get_queue_attributes(QueueUrl=queue_url, AttributeNames=['All'], aws_retry=True)['Attributes']\n new_attributes = snake_dict_to_camel_dict(module.params, capitalize_first=True)\n attributes_to_set = dict()\n\n # Boto3 SQS deals with policies as strings, we want to deal with them as\n # dicts\n if module.params.get('policy') is not None:\n policy = module.params.get('policy')\n current_value = 
existing_attributes.get('Policy', '{}')\n current_policy = json.loads(current_value)\n if compare_policies(current_policy, policy):\n attributes_to_set['Policy'] = json.dumps(policy)\n changed = True\n if module.params.get('redrive_policy') is not None:\n policy = module.params.get('redrive_policy')\n current_value = existing_attributes.get('RedrivePolicy', '{}')\n current_policy = json.loads(current_value)\n if compare_policies(current_policy, policy):\n attributes_to_set['RedrivePolicy'] = json.dumps(policy)\n changed = True\n\n for attribute, value in existing_attributes.items():\n # We handle these as a special case because they're IAM policies\n if attribute in ['Policy', 'RedrivePolicy']:\n continue\n\n if attribute not in new_attributes.keys():\n continue\n\n if new_attributes.get(attribute) is None:\n continue\n\n new_value = new_attributes[attribute]\n\n if isinstance(new_value, bool):\n new_value = str(new_value).lower()\n value = str(value).lower()\n\n if new_value == value:\n continue\n\n # Boto3 expects strings\n attributes_to_set[attribute] = str(new_value)\n changed = True\n\n if changed and not check_mode:\n client.set_queue_attributes(QueueUrl=queue_url, Attributes=attributes_to_set, aws_retry=True)\n\n return changed, existing_attributes.get('queue_arn'),\n\n\ndef delete_sqs_queue(client, module):\n is_fifo = (module.params.get('queue_type') == 'fifo')\n queue_name = get_queue_name(module, is_fifo)\n result = dict(\n name=queue_name,\n region=module.params.get('region'),\n changed=False\n )\n\n queue_url = get_queue_url(client, queue_name)\n if not queue_url:\n return result\n\n result['changed'] = bool(queue_url)\n if not module.check_mode:\n AWSRetry.jittered_backoff()(client.delete_queue)(QueueUrl=queue_url)\n\n return result\n\n\ndef update_tags(client, queue_url, module):\n new_tags = module.params.get('tags')\n purge_tags = module.params.get('purge_tags')\n if new_tags is None:\n return False, {}\n\n try:\n existing_tags = client.list_queue_tags(QueueUrl=queue_url, aws_retry=True)['Tags']\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError, KeyError) as e:\n existing_tags = {}\n\n tags_to_add, tags_to_remove = compare_aws_tags(existing_tags, new_tags, purge_tags=purge_tags)\n\n if not module.check_mode:\n if tags_to_remove:\n client.untag_queue(QueueUrl=queue_url, TagKeys=tags_to_remove, aws_retry=True)\n if tags_to_add:\n client.tag_queue(QueueUrl=queue_url, Tags=tags_to_add)\n existing_tags = client.list_queue_tags(QueueUrl=queue_url, aws_retry=True).get('Tags', {})\n else:\n existing_tags = new_tags\n\n changed = bool(tags_to_remove) or bool(tags_to_add)\n return changed, existing_tags\n\n\ndef main():\n\n argument_spec = dict(\n state=dict(type='str', default='present', choices=['present', 'absent']),\n name=dict(type='str', required=True),\n queue_type=dict(type='str', default='standard', choices=['standard', 'fifo']),\n delay_seconds=dict(type='int', aliases=['delivery_delay']),\n maximum_message_size=dict(type='int'),\n message_retention_period=dict(type='int'),\n policy=dict(type='dict'),\n receive_message_wait_time_seconds=dict(type='int', aliases=['receive_message_wait_time']),\n redrive_policy=dict(type='dict'),\n visibility_timeout=dict(type='int', aliases=['default_visibility_timeout']),\n kms_master_key_id=dict(type='str'),\n kms_data_key_reuse_period_seconds=dict(type='int', aliases=['kms_data_key_reuse_period'], no_log=False),\n content_based_deduplication=dict(type='bool'),\n tags=dict(type='dict'),\n 
purge_tags=dict(type='bool', default=False),\n )\n module = AnsibleAWSModule(argument_spec=argument_spec, supports_check_mode=True)\n\n state = module.params.get('state')\n retry_decorator = AWSRetry.jittered_backoff(catch_extra_error_codes=['AWS.SimpleQueueService.NonExistentQueue'])\n try:\n client = module.client('sqs', retry_decorator=retry_decorator)\n if state == 'present':\n result = create_or_update_sqs_queue(client, module)\n elif state == 'absent':\n result = delete_sqs_queue(client, module)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to control sqs queue')\n else:\n module.exit_json(**result)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/sqs_queue.py" } ]
diff --git a/changelogs/fragments/389-sqs-queue-UnboundLocalError.yml b/changelogs/fragments/389-sqs-queue-UnboundLocalError.yml
new file mode 100644
index 00000000000..8b1b371428f
--- /dev/null
+++ b/changelogs/fragments/389-sqs-queue-UnboundLocalError.yml
@@ -0,0 +1,2 @@
+bugfixes:
+- sqs_queue - fix UnboundLocalError when passing a boolean parameter (https://github.com/ansible-collections/community.aws/issues/172).
diff --git a/plugins/modules/sqs_queue.py b/plugins/modules/sqs_queue.py
index b0565c6c8d0..b76cdb31410 100644
--- a/plugins/modules/sqs_queue.py
+++ b/plugins/modules/sqs_queue.py
@@ -375,7 +375,7 @@ def update_sqs_queue(module, client, queue_url):
 
         if isinstance(new_value, bool):
             new_value = str(new_value).lower()
-            existing_value = str(existing_value).lower()
+            value = str(value).lower()
 
         if new_value == value:
             continue
diff --git a/tests/integration/targets/sqs_queue/tasks/main.yml b/tests/integration/targets/sqs_queue/tasks/main.yml
index b689c9eb2b9..483f17bb298 100644
--- a/tests/integration/targets/sqs_queue/tasks/main.yml
+++ b/tests/integration/targets/sqs_queue/tasks/main.yml
@@ -104,3 +104,25 @@
       with_items:
         - { name: "{{ create_result.name }}" }
         - { name: "{{ dead_letter_queue.name }}" }
+  - name: Test FIFO queue
+    block:
+      - name: Creating FIFO queue
+        sqs_queue:
+          name: "{{ resource_prefix }}{{ 1000 | random }}"
+          queue_type: fifo
+          content_based_deduplication: yes
+        register: create_result
+      - name: Assert queue created with configuration
+        assert:
+          that:
+            - create_result.changed
+    always:
+      - name: Cleaning up queue
+        sqs_queue:
+          name: "{{ item.name }}"
+          state: absent
+        register: delete_result
+        retries: 3
+        delay: 3
+        with_items:
+          - { name: "{{ create_result.name }}" }
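As a side note on the one-line fix (illustrative, not taken from the PR): `get_queue_attributes` returns every attribute as a string such as "true", while the module parameter arrives as a Python boolean, so both sides have to be normalised before comparing or the module would report a spurious change on every run. The variable names below are only for demonstration.

```python
# Illustrative only: why the boolean branch normalises both sides before comparing.
new_value = True        # from the task: content_based_deduplication: yes
existing = "true"       # as SQS reports the ContentBasedDeduplication attribute

print(str(new_value) == existing)           # False -> would look like a change
print(str(new_value).lower() == existing)   # True  -> correctly unchanged
```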
roboflow__supervision-430
Fix `sv.ByteTrack.tracker_id` return value when `sv.Detections` is empty

### Bug

When `sv.Detections` is empty (`len(detections) == 0`), `sv.ByteTrack` returns `tracker_id` equal to `None`. Correct that behavior, make it more consistent, and return `np.array([], dtype=int)`.

### Fix

Add an `else` statement here:
https://github.com/roboflow/supervision/blob/89c1b63979d43f8ed651a9701ca8333034d0bc07/supervision/tracker/byte_tracker/core.py#L239
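A short sketch of what the requested `else` branch amounts to (simplified; `tracker_ids` is an illustrative helper, not part of the library, and the real change belongs inside `update_with_detections`, shown in the file dumps below): when the tracker produces no tracks, `tracker_id` should still be an empty integer array rather than `None`, so downstream code can always call `len()` on it or index it.

```python
import numpy as np


# Illustrative helper only: always hand back an integer array for tracker_id,
# even when there are zero tracks.
def tracker_ids(tracks):
    if len(tracks) > 0:
        return np.array([int(t.track_id) for t in tracks], dtype=int)
    else:
        return np.array([], dtype=int)


empty = tracker_ids([])
print(empty, empty.dtype, len(empty))  # an empty integer array, never None
```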
[ { "content": "from typing import List, Tuple\n\nimport numpy as np\n\nfrom supervision.detection.core import Detections\nfrom supervision.tracker.byte_tracker import matching\nfrom supervision.tracker.byte_tracker.basetrack import BaseTrack, TrackState\nfrom supervision.tracker.byte_tracker.kalman_filter import KalmanFilter\n\n\nclass STrack(BaseTrack):\n shared_kalman = KalmanFilter()\n\n def __init__(self, tlwh, score, class_ids):\n # wait activate\n self._tlwh = np.asarray(tlwh, dtype=np.float32)\n self.kalman_filter = None\n self.mean, self.covariance = None, None\n self.is_activated = False\n\n self.score = score\n self.class_ids = class_ids\n self.tracklet_len = 0\n\n def predict(self):\n mean_state = self.mean.copy()\n if self.state != TrackState.Tracked:\n mean_state[7] = 0\n self.mean, self.covariance = self.kalman_filter.predict(\n mean_state, self.covariance\n )\n\n @staticmethod\n def multi_predict(stracks):\n if len(stracks) > 0:\n multi_mean = []\n multi_covariance = []\n for i, st in enumerate(stracks):\n multi_mean.append(st.mean.copy())\n multi_covariance.append(st.covariance)\n if st.state != TrackState.Tracked:\n multi_mean[i][7] = 0\n\n multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(\n np.asarray(multi_mean), np.asarray(multi_covariance)\n )\n for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)):\n stracks[i].mean = mean\n stracks[i].covariance = cov\n\n def activate(self, kalman_filter, frame_id):\n \"\"\"Start a new tracklet\"\"\"\n self.kalman_filter = kalman_filter\n self.track_id = self.next_id()\n self.mean, self.covariance = self.kalman_filter.initiate(\n self.tlwh_to_xyah(self._tlwh)\n )\n\n self.tracklet_len = 0\n self.state = TrackState.Tracked\n if frame_id == 1:\n self.is_activated = True\n self.frame_id = frame_id\n self.start_frame = frame_id\n\n def re_activate(self, new_track, frame_id, new_id=False):\n self.mean, self.covariance = self.kalman_filter.update(\n self.mean, self.covariance, self.tlwh_to_xyah(new_track.tlwh)\n )\n self.tracklet_len = 0\n self.state = TrackState.Tracked\n self.is_activated = True\n self.frame_id = frame_id\n if new_id:\n self.track_id = self.next_id()\n self.score = new_track.score\n\n def update(self, new_track, frame_id):\n \"\"\"\n Update a matched track\n :type new_track: STrack\n :type frame_id: int\n :type update_feature: bool\n :return:\n \"\"\"\n self.frame_id = frame_id\n self.tracklet_len += 1\n\n new_tlwh = new_track.tlwh\n self.mean, self.covariance = self.kalman_filter.update(\n self.mean, self.covariance, self.tlwh_to_xyah(new_tlwh)\n )\n self.state = TrackState.Tracked\n self.is_activated = True\n\n self.score = new_track.score\n\n @property\n def tlwh(self):\n \"\"\"Get current position in bounding box format `(top left x, top left y,\n width, height)`.\n \"\"\"\n if self.mean is None:\n return self._tlwh.copy()\n ret = self.mean[:4].copy()\n ret[2] *= ret[3]\n ret[:2] -= ret[2:] / 2\n return ret\n\n @property\n def tlbr(self):\n \"\"\"Convert bounding box to format `(min x, min y, max x, max y)`, i.e.,\n `(top left, bottom right)`.\n \"\"\"\n ret = self.tlwh.copy()\n ret[2:] += ret[:2]\n return ret\n\n @staticmethod\n def tlwh_to_xyah(tlwh):\n \"\"\"Convert bounding box to format `(center x, center y, aspect ratio,\n height)`, where the aspect ratio is `width / height`.\n \"\"\"\n ret = np.asarray(tlwh).copy()\n ret[:2] += ret[2:] / 2\n ret[2] /= ret[3]\n return ret\n\n def to_xyah(self):\n return self.tlwh_to_xyah(self.tlwh)\n\n @staticmethod\n def tlbr_to_tlwh(tlbr):\n ret 
= np.asarray(tlbr).copy()\n ret[2:] -= ret[:2]\n return ret\n\n @staticmethod\n def tlwh_to_tlbr(tlwh):\n ret = np.asarray(tlwh).copy()\n ret[2:] += ret[:2]\n return ret\n\n def __repr__(self):\n return \"OT_{}_({}-{})\".format(self.track_id, self.start_frame, self.end_frame)\n\n\ndef detections2boxes(detections: Detections) -> np.ndarray:\n \"\"\"\n Convert Supervision Detections to numpy tensors for further computation.\n Args:\n detections (Detections): Detections/Targets in the format of sv.Detections.\n Returns:\n (np.ndarray): Detections as numpy tensors as in\n `(x_min, y_min, x_max, y_max, confidence, class_id)` order.\n \"\"\"\n return np.hstack(\n (\n detections.xyxy,\n detections.confidence[:, np.newaxis],\n detections.class_id[:, np.newaxis],\n )\n )\n\n\nclass ByteTrack:\n \"\"\"\n Initialize the ByteTrack object.\n\n Parameters:\n track_thresh (float, optional): Detection confidence threshold\n for track activation.\n track_buffer (int, optional): Number of frames to buffer when a track is lost.\n match_thresh (float, optional): Threshold for matching tracks with detections.\n frame_rate (int, optional): The frame rate of the video.\n \"\"\"\n\n def __init__(\n self,\n track_thresh: float = 0.25,\n track_buffer: int = 30,\n match_thresh: float = 0.8,\n frame_rate: int = 30,\n ):\n self.track_thresh = track_thresh\n self.match_thresh = match_thresh\n\n self.frame_id = 0\n self.det_thresh = self.track_thresh + 0.1\n self.max_time_lost = int(frame_rate / 30.0 * track_buffer)\n self.kalman_filter = KalmanFilter()\n\n self.tracked_tracks: List[STrack] = []\n self.lost_tracks: List[STrack] = []\n self.removed_tracks: List[STrack] = []\n\n def update_with_detections(self, detections: Detections) -> Detections:\n \"\"\"\n Updates the tracker with the provided detections and\n returns the updated detection results.\n\n Parameters:\n detections: The new detections to update with.\n Returns:\n Detection: The updated detection results that now include tracking IDs.\n Example:\n ```python\n >>> import supervision as sv\n >>> from ultralytics import YOLO\n\n >>> model = YOLO(...)\n >>> byte_tracker = sv.ByteTrack()\n >>> annotator = sv.BoxAnnotator()\n\n >>> def callback(frame: np.ndarray, index: int) -> np.ndarray:\n ... results = model(frame)[0]\n ... detections = sv.Detections.from_ultralytics(results)\n ... detections = byte_tracker.update_with_detections(detections)\n ... labels = [\n ... f\"#{tracker_id} {model.model.names[class_id]} {confidence:0.2f}\"\n ... for _, _, confidence, class_id, tracker_id\n ... in detections\n ... ]\n ... return annotator.annotate(scene=frame.copy(),\n ... detections=detections, labels=labels)\n\n >>> sv.process_video(\n ... source_path='...',\n ... target_path='...',\n ... callback=callback\n ... 
)\n ```\n \"\"\"\n\n tracks = self.update_with_tensors(\n tensors=detections2boxes(detections=detections)\n )\n detections = Detections.empty()\n if len(tracks) > 0:\n detections.xyxy = np.array(\n [track.tlbr for track in tracks], dtype=np.float32\n )\n detections.class_id = np.array(\n [int(t.class_ids) for t in tracks], dtype=int\n )\n detections.tracker_id = np.array(\n [int(t.track_id) for t in tracks], dtype=int\n )\n detections.confidence = np.array(\n [t.score for t in tracks], dtype=np.float32\n )\n\n return detections\n\n def update_with_tensors(self, tensors: np.ndarray) -> List[STrack]:\n \"\"\"\n Updates the tracker with the provided tensors and returns the updated tracks.\n\n Parameters:\n tensors: The new tensors to update with.\n\n Returns:\n List[STrack]: Updated tracks.\n \"\"\"\n self.frame_id += 1\n activated_starcks = []\n refind_stracks = []\n lost_stracks = []\n removed_stracks = []\n\n class_ids = tensors[:, 5]\n scores = tensors[:, 4]\n bboxes = tensors[:, :4]\n\n remain_inds = scores > self.track_thresh\n inds_low = scores > 0.1\n inds_high = scores < self.track_thresh\n\n inds_second = np.logical_and(inds_low, inds_high)\n dets_second = bboxes[inds_second]\n dets = bboxes[remain_inds]\n scores_keep = scores[remain_inds]\n scores_second = scores[inds_second]\n\n class_ids_keep = class_ids[remain_inds]\n class_ids_second = class_ids[inds_second]\n\n if len(dets) > 0:\n \"\"\"Detections\"\"\"\n detections = [\n STrack(STrack.tlbr_to_tlwh(tlbr), s, c)\n for (tlbr, s, c) in zip(dets, scores_keep, class_ids_keep)\n ]\n else:\n detections = []\n\n \"\"\" Add newly detected tracklets to tracked_stracks\"\"\"\n unconfirmed = []\n tracked_stracks = [] # type: list[STrack]\n for track in self.tracked_tracks:\n if not track.is_activated:\n unconfirmed.append(track)\n else:\n tracked_stracks.append(track)\n\n \"\"\" Step 2: First association, with high score detection boxes\"\"\"\n strack_pool = joint_tracks(tracked_stracks, self.lost_tracks)\n # Predict the current location with KF\n STrack.multi_predict(strack_pool)\n dists = matching.iou_distance(strack_pool, detections)\n\n dists = matching.fuse_score(dists, detections)\n matches, u_track, u_detection = matching.linear_assignment(\n dists, thresh=self.match_thresh\n )\n\n for itracked, idet in matches:\n track = strack_pool[itracked]\n det = detections[idet]\n if track.state == TrackState.Tracked:\n track.update(detections[idet], self.frame_id)\n activated_starcks.append(track)\n else:\n track.re_activate(det, self.frame_id, new_id=False)\n refind_stracks.append(track)\n\n \"\"\" Step 3: Second association, with low score detection boxes\"\"\"\n # association the untrack to the low score detections\n if len(dets_second) > 0:\n \"\"\"Detections\"\"\"\n detections_second = [\n STrack(STrack.tlbr_to_tlwh(tlbr), s, c)\n for (tlbr, s, c) in zip(dets_second, scores_second, class_ids_second)\n ]\n else:\n detections_second = []\n r_tracked_stracks = [\n strack_pool[i]\n for i in u_track\n if strack_pool[i].state == TrackState.Tracked\n ]\n dists = matching.iou_distance(r_tracked_stracks, detections_second)\n matches, u_track, u_detection_second = matching.linear_assignment(\n dists, thresh=0.5\n )\n for itracked, idet in matches:\n track = r_tracked_stracks[itracked]\n det = detections_second[idet]\n if track.state == TrackState.Tracked:\n track.update(det, self.frame_id)\n activated_starcks.append(track)\n else:\n track.re_activate(det, self.frame_id, new_id=False)\n refind_stracks.append(track)\n\n for it in u_track:\n track = 
r_tracked_stracks[it]\n if not track.state == TrackState.Lost:\n track.mark_lost()\n lost_stracks.append(track)\n\n \"\"\"Deal with unconfirmed tracks, usually tracks with only one beginning frame\"\"\"\n detections = [detections[i] for i in u_detection]\n dists = matching.iou_distance(unconfirmed, detections)\n\n dists = matching.fuse_score(dists, detections)\n matches, u_unconfirmed, u_detection = matching.linear_assignment(\n dists, thresh=0.7\n )\n for itracked, idet in matches:\n unconfirmed[itracked].update(detections[idet], self.frame_id)\n activated_starcks.append(unconfirmed[itracked])\n for it in u_unconfirmed:\n track = unconfirmed[it]\n track.mark_removed()\n removed_stracks.append(track)\n\n \"\"\" Step 4: Init new stracks\"\"\"\n for inew in u_detection:\n track = detections[inew]\n if track.score < self.det_thresh:\n continue\n track.activate(self.kalman_filter, self.frame_id)\n activated_starcks.append(track)\n \"\"\" Step 5: Update state\"\"\"\n for track in self.lost_tracks:\n if self.frame_id - track.end_frame > self.max_time_lost:\n track.mark_removed()\n removed_stracks.append(track)\n\n self.tracked_tracks = [\n t for t in self.tracked_tracks if t.state == TrackState.Tracked\n ]\n self.tracked_tracks = joint_tracks(self.tracked_tracks, activated_starcks)\n self.tracked_tracks = joint_tracks(self.tracked_tracks, refind_stracks)\n self.lost_tracks = sub_tracks(self.lost_tracks, self.tracked_tracks)\n self.lost_tracks.extend(lost_stracks)\n self.lost_tracks = sub_tracks(self.lost_tracks, self.removed_tracks)\n self.removed_tracks.extend(removed_stracks)\n self.tracked_tracks, self.lost_tracks = remove_duplicate_tracks(\n self.tracked_tracks, self.lost_tracks\n )\n output_stracks = [track for track in self.tracked_tracks if track.is_activated]\n\n return output_stracks\n\n\ndef joint_tracks(\n track_list_a: List[STrack], track_list_b: List[STrack]\n) -> List[STrack]:\n \"\"\"\n Joins two lists of tracks, ensuring that the resulting list does not\n contain tracks with duplicate track_id values.\n\n Parameters:\n track_list_a: First list of tracks (with track_id attribute).\n track_list_b: Second list of tracks (with track_id attribute).\n\n Returns:\n Combined list of tracks from track_list_a and track_list_b\n without duplicate track_id values.\n \"\"\"\n seen_track_ids = set()\n result = []\n\n for track in track_list_a + track_list_b:\n if track.track_id not in seen_track_ids:\n seen_track_ids.add(track.track_id)\n result.append(track)\n\n return result\n\n\ndef sub_tracks(track_list_a: List, track_list_b: List) -> List[int]:\n \"\"\"\n Returns a list of tracks from track_list_a after removing any tracks\n that share the same track_id with tracks in track_list_b.\n\n Parameters:\n track_list_a: List of tracks (with track_id attribute).\n track_list_b: List of tracks (with track_id attribute) to\n be subtracted from track_list_a.\n Returns:\n List of remaining tracks from track_list_a after subtraction.\n \"\"\"\n tracks = {track.track_id: track for track in track_list_a}\n track_ids_b = {track.track_id for track in track_list_b}\n\n for track_id in track_ids_b:\n tracks.pop(track_id, None)\n\n return list(tracks.values())\n\n\ndef remove_duplicate_tracks(tracks_a: List, tracks_b: List) -> Tuple[List, List]:\n pairwise_distance = matching.iou_distance(tracks_a, tracks_b)\n matching_pairs = np.where(pairwise_distance < 0.15)\n\n duplicates_a, duplicates_b = set(), set()\n for track_index_a, track_index_b in zip(*matching_pairs):\n time_a = tracks_a[track_index_a].frame_id 
- tracks_a[track_index_a].start_frame\n time_b = tracks_b[track_index_b].frame_id - tracks_b[track_index_b].start_frame\n if time_a > time_b:\n duplicates_b.add(track_index_b)\n else:\n duplicates_a.add(track_index_a)\n\n result_a = [\n track for index, track in enumerate(tracks_a) if index not in duplicates_a\n ]\n result_b = [\n track for index, track in enumerate(tracks_b) if index not in duplicates_b\n ]\n\n return result_a, result_b\n", "path": "supervision/tracker/byte_tracker/core.py" } ]
[ { "content": "from typing import List, Tuple\n\nimport numpy as np\n\nfrom supervision.detection.core import Detections\nfrom supervision.tracker.byte_tracker import matching\nfrom supervision.tracker.byte_tracker.basetrack import BaseTrack, TrackState\nfrom supervision.tracker.byte_tracker.kalman_filter import KalmanFilter\n\n\nclass STrack(BaseTrack):\n shared_kalman = KalmanFilter()\n\n def __init__(self, tlwh, score, class_ids):\n # wait activate\n self._tlwh = np.asarray(tlwh, dtype=np.float32)\n self.kalman_filter = None\n self.mean, self.covariance = None, None\n self.is_activated = False\n\n self.score = score\n self.class_ids = class_ids\n self.tracklet_len = 0\n\n def predict(self):\n mean_state = self.mean.copy()\n if self.state != TrackState.Tracked:\n mean_state[7] = 0\n self.mean, self.covariance = self.kalman_filter.predict(\n mean_state, self.covariance\n )\n\n @staticmethod\n def multi_predict(stracks):\n if len(stracks) > 0:\n multi_mean = []\n multi_covariance = []\n for i, st in enumerate(stracks):\n multi_mean.append(st.mean.copy())\n multi_covariance.append(st.covariance)\n if st.state != TrackState.Tracked:\n multi_mean[i][7] = 0\n\n multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(\n np.asarray(multi_mean), np.asarray(multi_covariance)\n )\n for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)):\n stracks[i].mean = mean\n stracks[i].covariance = cov\n\n def activate(self, kalman_filter, frame_id):\n \"\"\"Start a new tracklet\"\"\"\n self.kalman_filter = kalman_filter\n self.track_id = self.next_id()\n self.mean, self.covariance = self.kalman_filter.initiate(\n self.tlwh_to_xyah(self._tlwh)\n )\n\n self.tracklet_len = 0\n self.state = TrackState.Tracked\n if frame_id == 1:\n self.is_activated = True\n self.frame_id = frame_id\n self.start_frame = frame_id\n\n def re_activate(self, new_track, frame_id, new_id=False):\n self.mean, self.covariance = self.kalman_filter.update(\n self.mean, self.covariance, self.tlwh_to_xyah(new_track.tlwh)\n )\n self.tracklet_len = 0\n self.state = TrackState.Tracked\n self.is_activated = True\n self.frame_id = frame_id\n if new_id:\n self.track_id = self.next_id()\n self.score = new_track.score\n\n def update(self, new_track, frame_id):\n \"\"\"\n Update a matched track\n :type new_track: STrack\n :type frame_id: int\n :type update_feature: bool\n :return:\n \"\"\"\n self.frame_id = frame_id\n self.tracklet_len += 1\n\n new_tlwh = new_track.tlwh\n self.mean, self.covariance = self.kalman_filter.update(\n self.mean, self.covariance, self.tlwh_to_xyah(new_tlwh)\n )\n self.state = TrackState.Tracked\n self.is_activated = True\n\n self.score = new_track.score\n\n @property\n def tlwh(self):\n \"\"\"Get current position in bounding box format `(top left x, top left y,\n width, height)`.\n \"\"\"\n if self.mean is None:\n return self._tlwh.copy()\n ret = self.mean[:4].copy()\n ret[2] *= ret[3]\n ret[:2] -= ret[2:] / 2\n return ret\n\n @property\n def tlbr(self):\n \"\"\"Convert bounding box to format `(min x, min y, max x, max y)`, i.e.,\n `(top left, bottom right)`.\n \"\"\"\n ret = self.tlwh.copy()\n ret[2:] += ret[:2]\n return ret\n\n @staticmethod\n def tlwh_to_xyah(tlwh):\n \"\"\"Convert bounding box to format `(center x, center y, aspect ratio,\n height)`, where the aspect ratio is `width / height`.\n \"\"\"\n ret = np.asarray(tlwh).copy()\n ret[:2] += ret[2:] / 2\n ret[2] /= ret[3]\n return ret\n\n def to_xyah(self):\n return self.tlwh_to_xyah(self.tlwh)\n\n @staticmethod\n def tlbr_to_tlwh(tlbr):\n ret 
= np.asarray(tlbr).copy()\n ret[2:] -= ret[:2]\n return ret\n\n @staticmethod\n def tlwh_to_tlbr(tlwh):\n ret = np.asarray(tlwh).copy()\n ret[2:] += ret[:2]\n return ret\n\n def __repr__(self):\n return \"OT_{}_({}-{})\".format(self.track_id, self.start_frame, self.end_frame)\n\n\ndef detections2boxes(detections: Detections) -> np.ndarray:\n \"\"\"\n Convert Supervision Detections to numpy tensors for further computation.\n Args:\n detections (Detections): Detections/Targets in the format of sv.Detections.\n Returns:\n (np.ndarray): Detections as numpy tensors as in\n `(x_min, y_min, x_max, y_max, confidence, class_id)` order.\n \"\"\"\n return np.hstack(\n (\n detections.xyxy,\n detections.confidence[:, np.newaxis],\n detections.class_id[:, np.newaxis],\n )\n )\n\n\nclass ByteTrack:\n \"\"\"\n Initialize the ByteTrack object.\n\n Parameters:\n track_thresh (float, optional): Detection confidence threshold\n for track activation.\n track_buffer (int, optional): Number of frames to buffer when a track is lost.\n match_thresh (float, optional): Threshold for matching tracks with detections.\n frame_rate (int, optional): The frame rate of the video.\n \"\"\"\n\n def __init__(\n self,\n track_thresh: float = 0.25,\n track_buffer: int = 30,\n match_thresh: float = 0.8,\n frame_rate: int = 30,\n ):\n self.track_thresh = track_thresh\n self.match_thresh = match_thresh\n\n self.frame_id = 0\n self.det_thresh = self.track_thresh + 0.1\n self.max_time_lost = int(frame_rate / 30.0 * track_buffer)\n self.kalman_filter = KalmanFilter()\n\n self.tracked_tracks: List[STrack] = []\n self.lost_tracks: List[STrack] = []\n self.removed_tracks: List[STrack] = []\n\n def update_with_detections(self, detections: Detections) -> Detections:\n \"\"\"\n Updates the tracker with the provided detections and\n returns the updated detection results.\n\n Parameters:\n detections: The new detections to update with.\n Returns:\n Detection: The updated detection results that now include tracking IDs.\n Example:\n ```python\n >>> import supervision as sv\n >>> from ultralytics import YOLO\n\n >>> model = YOLO(...)\n >>> byte_tracker = sv.ByteTrack()\n >>> annotator = sv.BoxAnnotator()\n\n >>> def callback(frame: np.ndarray, index: int) -> np.ndarray:\n ... results = model(frame)[0]\n ... detections = sv.Detections.from_ultralytics(results)\n ... detections = byte_tracker.update_with_detections(detections)\n ... labels = [\n ... f\"#{tracker_id} {model.model.names[class_id]} {confidence:0.2f}\"\n ... for _, _, confidence, class_id, tracker_id\n ... in detections\n ... ]\n ... return annotator.annotate(scene=frame.copy(),\n ... detections=detections, labels=labels)\n\n >>> sv.process_video(\n ... source_path='...',\n ... target_path='...',\n ... callback=callback\n ... 
)\n ```\n \"\"\"\n\n tracks = self.update_with_tensors(\n tensors=detections2boxes(detections=detections)\n )\n detections = Detections.empty()\n if len(tracks) > 0:\n detections.xyxy = np.array(\n [track.tlbr for track in tracks], dtype=np.float32\n )\n detections.class_id = np.array(\n [int(t.class_ids) for t in tracks], dtype=int\n )\n detections.tracker_id = np.array(\n [int(t.track_id) for t in tracks], dtype=int\n )\n detections.confidence = np.array(\n [t.score for t in tracks], dtype=np.float32\n )\n else:\n detections.tracker_id = np.array([], dtype=int)\n\n return detections\n\n def update_with_tensors(self, tensors: np.ndarray) -> List[STrack]:\n \"\"\"\n Updates the tracker with the provided tensors and returns the updated tracks.\n\n Parameters:\n tensors: The new tensors to update with.\n\n Returns:\n List[STrack]: Updated tracks.\n \"\"\"\n self.frame_id += 1\n activated_starcks = []\n refind_stracks = []\n lost_stracks = []\n removed_stracks = []\n\n class_ids = tensors[:, 5]\n scores = tensors[:, 4]\n bboxes = tensors[:, :4]\n\n remain_inds = scores > self.track_thresh\n inds_low = scores > 0.1\n inds_high = scores < self.track_thresh\n\n inds_second = np.logical_and(inds_low, inds_high)\n dets_second = bboxes[inds_second]\n dets = bboxes[remain_inds]\n scores_keep = scores[remain_inds]\n scores_second = scores[inds_second]\n\n class_ids_keep = class_ids[remain_inds]\n class_ids_second = class_ids[inds_second]\n\n if len(dets) > 0:\n \"\"\"Detections\"\"\"\n detections = [\n STrack(STrack.tlbr_to_tlwh(tlbr), s, c)\n for (tlbr, s, c) in zip(dets, scores_keep, class_ids_keep)\n ]\n else:\n detections = []\n\n \"\"\" Add newly detected tracklets to tracked_stracks\"\"\"\n unconfirmed = []\n tracked_stracks = [] # type: list[STrack]\n for track in self.tracked_tracks:\n if not track.is_activated:\n unconfirmed.append(track)\n else:\n tracked_stracks.append(track)\n\n \"\"\" Step 2: First association, with high score detection boxes\"\"\"\n strack_pool = joint_tracks(tracked_stracks, self.lost_tracks)\n # Predict the current location with KF\n STrack.multi_predict(strack_pool)\n dists = matching.iou_distance(strack_pool, detections)\n\n dists = matching.fuse_score(dists, detections)\n matches, u_track, u_detection = matching.linear_assignment(\n dists, thresh=self.match_thresh\n )\n\n for itracked, idet in matches:\n track = strack_pool[itracked]\n det = detections[idet]\n if track.state == TrackState.Tracked:\n track.update(detections[idet], self.frame_id)\n activated_starcks.append(track)\n else:\n track.re_activate(det, self.frame_id, new_id=False)\n refind_stracks.append(track)\n\n \"\"\" Step 3: Second association, with low score detection boxes\"\"\"\n # association the untrack to the low score detections\n if len(dets_second) > 0:\n \"\"\"Detections\"\"\"\n detections_second = [\n STrack(STrack.tlbr_to_tlwh(tlbr), s, c)\n for (tlbr, s, c) in zip(dets_second, scores_second, class_ids_second)\n ]\n else:\n detections_second = []\n r_tracked_stracks = [\n strack_pool[i]\n for i in u_track\n if strack_pool[i].state == TrackState.Tracked\n ]\n dists = matching.iou_distance(r_tracked_stracks, detections_second)\n matches, u_track, u_detection_second = matching.linear_assignment(\n dists, thresh=0.5\n )\n for itracked, idet in matches:\n track = r_tracked_stracks[itracked]\n det = detections_second[idet]\n if track.state == TrackState.Tracked:\n track.update(det, self.frame_id)\n activated_starcks.append(track)\n else:\n track.re_activate(det, self.frame_id, new_id=False)\n 
refind_stracks.append(track)\n\n for it in u_track:\n track = r_tracked_stracks[it]\n if not track.state == TrackState.Lost:\n track.mark_lost()\n lost_stracks.append(track)\n\n \"\"\"Deal with unconfirmed tracks, usually tracks with only one beginning frame\"\"\"\n detections = [detections[i] for i in u_detection]\n dists = matching.iou_distance(unconfirmed, detections)\n\n dists = matching.fuse_score(dists, detections)\n matches, u_unconfirmed, u_detection = matching.linear_assignment(\n dists, thresh=0.7\n )\n for itracked, idet in matches:\n unconfirmed[itracked].update(detections[idet], self.frame_id)\n activated_starcks.append(unconfirmed[itracked])\n for it in u_unconfirmed:\n track = unconfirmed[it]\n track.mark_removed()\n removed_stracks.append(track)\n\n \"\"\" Step 4: Init new stracks\"\"\"\n for inew in u_detection:\n track = detections[inew]\n if track.score < self.det_thresh:\n continue\n track.activate(self.kalman_filter, self.frame_id)\n activated_starcks.append(track)\n \"\"\" Step 5: Update state\"\"\"\n for track in self.lost_tracks:\n if self.frame_id - track.end_frame > self.max_time_lost:\n track.mark_removed()\n removed_stracks.append(track)\n\n self.tracked_tracks = [\n t for t in self.tracked_tracks if t.state == TrackState.Tracked\n ]\n self.tracked_tracks = joint_tracks(self.tracked_tracks, activated_starcks)\n self.tracked_tracks = joint_tracks(self.tracked_tracks, refind_stracks)\n self.lost_tracks = sub_tracks(self.lost_tracks, self.tracked_tracks)\n self.lost_tracks.extend(lost_stracks)\n self.lost_tracks = sub_tracks(self.lost_tracks, self.removed_tracks)\n self.removed_tracks.extend(removed_stracks)\n self.tracked_tracks, self.lost_tracks = remove_duplicate_tracks(\n self.tracked_tracks, self.lost_tracks\n )\n output_stracks = [track for track in self.tracked_tracks if track.is_activated]\n\n return output_stracks\n\n\ndef joint_tracks(\n track_list_a: List[STrack], track_list_b: List[STrack]\n) -> List[STrack]:\n \"\"\"\n Joins two lists of tracks, ensuring that the resulting list does not\n contain tracks with duplicate track_id values.\n\n Parameters:\n track_list_a: First list of tracks (with track_id attribute).\n track_list_b: Second list of tracks (with track_id attribute).\n\n Returns:\n Combined list of tracks from track_list_a and track_list_b\n without duplicate track_id values.\n \"\"\"\n seen_track_ids = set()\n result = []\n\n for track in track_list_a + track_list_b:\n if track.track_id not in seen_track_ids:\n seen_track_ids.add(track.track_id)\n result.append(track)\n\n return result\n\n\ndef sub_tracks(track_list_a: List, track_list_b: List) -> List[int]:\n \"\"\"\n Returns a list of tracks from track_list_a after removing any tracks\n that share the same track_id with tracks in track_list_b.\n\n Parameters:\n track_list_a: List of tracks (with track_id attribute).\n track_list_b: List of tracks (with track_id attribute) to\n be subtracted from track_list_a.\n Returns:\n List of remaining tracks from track_list_a after subtraction.\n \"\"\"\n tracks = {track.track_id: track for track in track_list_a}\n track_ids_b = {track.track_id for track in track_list_b}\n\n for track_id in track_ids_b:\n tracks.pop(track_id, None)\n\n return list(tracks.values())\n\n\ndef remove_duplicate_tracks(tracks_a: List, tracks_b: List) -> Tuple[List, List]:\n pairwise_distance = matching.iou_distance(tracks_a, tracks_b)\n matching_pairs = np.where(pairwise_distance < 0.15)\n\n duplicates_a, duplicates_b = set(), set()\n for track_index_a, track_index_b in 
zip(*matching_pairs):\n time_a = tracks_a[track_index_a].frame_id - tracks_a[track_index_a].start_frame\n time_b = tracks_b[track_index_b].frame_id - tracks_b[track_index_b].start_frame\n if time_a > time_b:\n duplicates_b.add(track_index_b)\n else:\n duplicates_a.add(track_index_a)\n\n result_a = [\n track for index, track in enumerate(tracks_a) if index not in duplicates_a\n ]\n result_b = [\n track for index, track in enumerate(tracks_b) if index not in duplicates_b\n ]\n\n return result_a, result_b\n", "path": "supervision/tracker/byte_tracker/core.py" } ]
diff --git a/supervision/tracker/byte_tracker/core.py b/supervision/tracker/byte_tracker/core.py index 466b587ea..34a901385 100644 --- a/supervision/tracker/byte_tracker/core.py +++ b/supervision/tracker/byte_tracker/core.py @@ -249,6 +249,8 @@ def update_with_detections(self, detections: Detections) -> Detections: detections.confidence = np.array( [t.score for t in tracks], dtype=np.float32 ) + else: + detections.tracker_id = np.array([], dtype=int) return detections
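The one-line fix in this diff is easy to gloss over: when ByteTrack returns no tracks, `update_with_detections` now still attaches an explicit empty `tracker_id` array to the otherwise empty `Detections`. A minimal sketch of the same guard, using plain numpy arrays and a hypothetical `tracks` list (objects exposing `tlbr`, `class_ids`, `track_id` and `score`, mirroring the STrack fields used above) instead of the real `Detections` class:

```python
import numpy as np

def tracks_to_arrays(tracks):
    """Convert tracked objects to plain arrays.

    `tracks` is a hypothetical list of objects exposing .tlbr, .class_ids,
    .track_id and .score, mirroring the STrack fields used by
    update_with_detections above.
    """
    if len(tracks) > 0:
        return {
            "xyxy": np.array([t.tlbr for t in tracks], dtype=np.float32),
            "class_id": np.array([int(t.class_ids) for t in tracks], dtype=int),
            "tracker_id": np.array([int(t.track_id) for t in tracks], dtype=int),
            "confidence": np.array([t.score for t in tracks], dtype=np.float32),
        }
    # The fix: even with no tracks, return an explicit empty tracker_id array
    # so downstream code that indexes detections.tracker_id does not fail.
    return {"tracker_id": np.array([], dtype=int)}
```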
safe-global__safe-config-service-23
Set port numbers in docker compose via environment variables
To provide more flexibility when setting up the ports for a given environment, we should not use static ports in `docker-compose`. Instead, those ports should be extracted to the `.env` file.
[ { "content": "import multiprocessing\nimport os\nfrom distutils.util import strtobool\n\nbind = f\"0.0.0.0:{os.getenv('PORT', '8000')}\"\naccesslog = \"-\"\n\nworkers = int(os.getenv(\"WEB_CONCURRENCY\", multiprocessing.cpu_count() * 2))\nthreads = int(os.getenv(\"PYTHON_MAX_THREADS\", 1))\n\nreload = bool(strtobool(os.getenv(\"WEB_RELOAD\", \"false\")))\n", "path": "src/config/gunicorn.py" } ]
[ { "content": "import multiprocessing\nimport os\nfrom distutils.util import strtobool\n\nbind = f\"0.0.0.0:{os.getenv('GUNICORN_BIND_PORT', '8000')}\"\naccesslog = \"-\"\n\nworkers = int(os.getenv(\"WEB_CONCURRENCY\", multiprocessing.cpu_count() * 2))\nthreads = int(os.getenv(\"PYTHON_MAX_THREADS\", 1))\n\nreload = bool(strtobool(os.getenv(\"WEB_RELOAD\", \"false\")))\n", "path": "src/config/gunicorn.py" } ]
diff --git a/.env.example b/.env.example index 00e52f0e..77bb7992 100644 --- a/.env.example +++ b/.env.example @@ -28,7 +28,14 @@ ALLOWED_HOSTS=".localhost,127.0.0.1,[::1]" # Be warned that if you change this value you'll need to change 8000 in both # your Dockerfile and in a few spots in docker-compose.yml due to the nature of # how this value can be set (Docker Compose doesn't support nested ENV vars). -#PORT=8000 +GUNICORN_BIND_PORT=8000 + +# The port exposed to the host by the nginx image. +NGINX_HOST_PORT=8080 + +# A directory where the result of executing envsubst is output (default: /etc/nginx/conf.d) +# Used by the nginx docker image in the templating system in order to use the environment variables set +NGINX_ENVSUBST_OUTPUT_DIR=/etc/nginx/ # Should the Webpack watcher use polling? Not all Docker hosts support inotify. # If you find your assets aren't updating in development then set this to true. diff --git a/docker-compose.yml b/docker-compose.yml index 12d7f920..cbe6898d 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -3,13 +3,14 @@ version: "3.9" services: nginx: image: nginx:1.20-alpine - hostname: nginx links: - web:web + env_file: + - .env ports: - - "8080:80" + - "${NGINX_HOST_PORT}:80" volumes: - - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro + - ./nginx/templates:/etc/nginx/templates depends_on: - web db: @@ -19,13 +20,13 @@ services: volumes: - ./data/db:/var/lib/postgresql/data ports: - - "5432:5432" + - "${POSTGRES_PORT}:${POSTGRES_PORT}" web: build: . tty: true env_file: - .env - command: gunicorn -c python:config.gunicorn config.wsgi -b 0.0.0.0:8000 + command: gunicorn -c python:config.gunicorn config.wsgi -b 0.0.0.0:${GUNICORN_BIND_PORT} working_dir: /app/src depends_on: - db diff --git a/nginx/nginx.conf b/nginx/templates/nginx.conf.template similarity index 97% rename from nginx/nginx.conf rename to nginx/templates/nginx.conf.template index 65b2fdc8..fe7bb236 100644 --- a/nginx/nginx.conf +++ b/nginx/templates/nginx.conf.template @@ -25,7 +25,7 @@ http { # server unix:/run/gunicorn.sock fail_timeout=0; # for a TCP configuration - server web:8000 fail_timeout=0; + server web:${GUNICORN_BIND_PORT} fail_timeout=0; keepalive 32; } diff --git a/src/config/gunicorn.py b/src/config/gunicorn.py index 2bcf450b..e5197757 100644 --- a/src/config/gunicorn.py +++ b/src/config/gunicorn.py @@ -2,7 +2,7 @@ import os from distutils.util import strtobool -bind = f"0.0.0.0:{os.getenv('PORT', '8000')}" +bind = f"0.0.0.0:{os.getenv('GUNICORN_BIND_PORT', '8000')}" accesslog = "-" workers = int(os.getenv("WEB_CONCURRENCY", multiprocessing.cpu_count() * 2))
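The substantive change for the service itself is in `src/config/gunicorn.py`: the bind address is now derived from `GUNICORN_BIND_PORT` (set in `.env` and passed through docker-compose's `env_file`) instead of the old `PORT` variable. A minimal sketch of that resolution, assuming only the standard library; `resolve_bind` is an illustrative helper, not part of the repository:

```python
import os

def resolve_bind(default_port: str = "8000") -> str:
    # Mirrors src/config/gunicorn.py after the change: read GUNICORN_BIND_PORT
    # from the environment (populated from .env via docker-compose's env_file)
    # and fall back to 8000 when it is unset.
    port = os.getenv("GUNICORN_BIND_PORT", default_port)
    return f"0.0.0.0:{port}"

# With GUNICORN_BIND_PORT=9000 exported, this returns "0.0.0.0:9000";
# without it, the default "0.0.0.0:8000" is kept.
print(resolve_bind())
```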
freqtrade__freqtrade-2082
plot_dataframe.py
## Step 1: Have you searched for this issue before posting it?
Couldn't find a similar issue, so starting a new one.
## Step 2: Describe your environment
* Python Version: Python 3.6.8
* CCXT version: ccxt==1.18.992
* Branch: Master
* Last Commit ID: b8713a515e960f1ffadcf1c7ee62c4bee80b506c
## Step 3: Describe the problem:
Unable to plot my backtest results. Executing the following command results in an error.
### Steps to reproduce:
` Command: python3 scripts/plot_dataframe.py -s EMACrossHTF1h --export EMACrossHTF1h_results.json -p BTC/USDT --datadir user_data/data/binance/ `
### Observed Results:
An error is thrown.
### Relevant code exceptions or logs:
` File "scripts/plot_dataframe.py", line 113, in <module> main(sys.argv[1:]) File "scripts/plot_dataframe.py", line 107, in main plot_parse_args(sysargv) File "scripts/plot_dataframe.py", line 58, in analyse_and_plot_pairs plot_elements = init_plotscript(config) File "/home/ubuntu/freqtrade/freqtrade/plot/plotting.py", line 57, in init_plotscript trades = load_trades(config) File "/home/ubuntu/freqtrade/freqtrade/data/btanalysis.py", line 113, in load_trades return load_backtest_data(Path(config["exportfilename"])) File "/home/ubuntu/freqtrade/freqtrade/data/btanalysis.py", line 33, in load_backtest_data raise ValueError("File {filename} does not exist.") ValueError: File {filename} does not exist. `
[ { "content": "\"\"\"\nHelpers when analyzing backtest data\n\"\"\"\nimport logging\nfrom pathlib import Path\nfrom typing import Dict\n\nimport numpy as np\nimport pandas as pd\nimport pytz\n\nfrom freqtrade import persistence\nfrom freqtrade.misc import json_load\nfrom freqtrade.persistence import Trade\n\nlogger = logging.getLogger(__name__)\n\n# must align with columns in backtest.py\nBT_DATA_COLUMNS = [\"pair\", \"profitperc\", \"open_time\", \"close_time\", \"index\", \"duration\",\n \"open_rate\", \"close_rate\", \"open_at_end\", \"sell_reason\"]\n\n\ndef load_backtest_data(filename) -> pd.DataFrame:\n \"\"\"\n Load backtest data file.\n :param filename: pathlib.Path object, or string pointing to the file.\n :return: a dataframe with the analysis results\n \"\"\"\n if isinstance(filename, str):\n filename = Path(filename)\n\n if not filename.is_file():\n raise ValueError(\"File {filename} does not exist.\")\n\n with filename.open() as file:\n data = json_load(file)\n\n df = pd.DataFrame(data, columns=BT_DATA_COLUMNS)\n\n df['open_time'] = pd.to_datetime(df['open_time'],\n unit='s',\n utc=True,\n infer_datetime_format=True\n )\n df['close_time'] = pd.to_datetime(df['close_time'],\n unit='s',\n utc=True,\n infer_datetime_format=True\n )\n df['profitabs'] = df['close_rate'] - df['open_rate']\n df = df.sort_values(\"open_time\").reset_index(drop=True)\n return df\n\n\ndef evaluate_result_multi(results: pd.DataFrame, freq: str, max_open_trades: int) -> pd.DataFrame:\n \"\"\"\n Find overlapping trades by expanding each trade once per period it was open\n and then counting overlaps\n :param results: Results Dataframe - can be loaded\n :param freq: Frequency used for the backtest\n :param max_open_trades: parameter max_open_trades used during backtest run\n :return: dataframe with open-counts per time-period in freq\n \"\"\"\n dates = [pd.Series(pd.date_range(row[1].open_time, row[1].close_time, freq=freq))\n for row in results[['open_time', 'close_time']].iterrows()]\n deltas = [len(x) for x in dates]\n dates = pd.Series(pd.concat(dates).values, name='date')\n df2 = pd.DataFrame(np.repeat(results.values, deltas, axis=0), columns=results.columns)\n\n df2 = pd.concat([dates, df2], axis=1)\n df2 = df2.set_index('date')\n df_final = df2.resample(freq)[['pair']].count()\n return df_final[df_final['pair'] > max_open_trades]\n\n\ndef load_trades_from_db(db_url: str) -> pd.DataFrame:\n \"\"\"\n Load trades from a DB (using dburl)\n :param db_url: Sqlite url (default format sqlite:///tradesv3.dry-run.sqlite)\n :return: Dataframe containing Trades\n \"\"\"\n trades: pd.DataFrame = pd.DataFrame([], columns=BT_DATA_COLUMNS)\n persistence.init(db_url, clean_open_orders=False)\n columns = [\"pair\", \"profit\", \"open_time\", \"close_time\",\n \"open_rate\", \"close_rate\", \"duration\", \"sell_reason\",\n \"max_rate\", \"min_rate\"]\n\n trades = pd.DataFrame([(t.pair, t.calc_profit(),\n t.open_date.replace(tzinfo=pytz.UTC),\n t.close_date.replace(tzinfo=pytz.UTC) if t.close_date else None,\n t.open_rate, t.close_rate,\n t.close_date.timestamp() - t.open_date.timestamp()\n if t.close_date else None,\n t.sell_reason,\n t.max_rate,\n t.min_rate,\n )\n for t in Trade.query.all()],\n columns=columns)\n\n return trades\n\n\ndef load_trades(config) -> pd.DataFrame:\n \"\"\"\n Based on configuration option \"trade_source\":\n * loads data from DB (using `db_url`)\n * loads data from backtestfile (using `exportfilename`)\n \"\"\"\n if config[\"trade_source\"] == \"DB\":\n return 
load_trades_from_db(config[\"db_url\"])\n elif config[\"trade_source\"] == \"file\":\n return load_backtest_data(Path(config[\"exportfilename\"]))\n\n\ndef extract_trades_of_period(dataframe: pd.DataFrame, trades: pd.DataFrame) -> pd.DataFrame:\n \"\"\"\n Compare trades and backtested pair DataFrames to get trades performed on backtested period\n :return: the DataFrame of a trades of period\n \"\"\"\n trades = trades.loc[(trades['open_time'] >= dataframe.iloc[0]['date']) &\n (trades['close_time'] <= dataframe.iloc[-1]['date'])]\n return trades\n\n\ndef combine_tickers_with_mean(tickers: Dict[str, pd.DataFrame], column: str = \"close\"):\n \"\"\"\n Combine multiple dataframes \"column\"\n :param tickers: Dict of Dataframes, dict key should be pair.\n :param column: Column in the original dataframes to use\n :return: DataFrame with the column renamed to the dict key, and a column\n named mean, containing the mean of all pairs.\n \"\"\"\n df_comb = pd.concat([tickers[pair].set_index('date').rename(\n {column: pair}, axis=1)[pair] for pair in tickers], axis=1)\n\n df_comb['mean'] = df_comb.mean(axis=1)\n\n return df_comb\n\n\ndef create_cum_profit(df: pd.DataFrame, trades: pd.DataFrame, col_name: str) -> pd.DataFrame:\n \"\"\"\n Adds a column `col_name` with the cumulative profit for the given trades array.\n :param df: DataFrame with date index\n :param trades: DataFrame containing trades (requires columns close_time and profitperc)\n :return: Returns df with one additional column, col_name, containing the cumulative profit.\n \"\"\"\n df[col_name] = trades.set_index('close_time')['profitperc'].cumsum()\n # Set first value to 0\n df.loc[df.iloc[0].name, col_name] = 0\n # FFill to get continuous\n df[col_name] = df[col_name].ffill()\n return df\n", "path": "freqtrade/data/btanalysis.py" } ]
[ { "content": "\"\"\"\nHelpers when analyzing backtest data\n\"\"\"\nimport logging\nfrom pathlib import Path\nfrom typing import Dict\n\nimport numpy as np\nimport pandas as pd\nimport pytz\n\nfrom freqtrade import persistence\nfrom freqtrade.misc import json_load\nfrom freqtrade.persistence import Trade\n\nlogger = logging.getLogger(__name__)\n\n# must align with columns in backtest.py\nBT_DATA_COLUMNS = [\"pair\", \"profitperc\", \"open_time\", \"close_time\", \"index\", \"duration\",\n \"open_rate\", \"close_rate\", \"open_at_end\", \"sell_reason\"]\n\n\ndef load_backtest_data(filename) -> pd.DataFrame:\n \"\"\"\n Load backtest data file.\n :param filename: pathlib.Path object, or string pointing to the file.\n :return: a dataframe with the analysis results\n \"\"\"\n if isinstance(filename, str):\n filename = Path(filename)\n\n if not filename.is_file():\n raise ValueError(f\"File {filename} does not exist.\")\n\n with filename.open() as file:\n data = json_load(file)\n\n df = pd.DataFrame(data, columns=BT_DATA_COLUMNS)\n\n df['open_time'] = pd.to_datetime(df['open_time'],\n unit='s',\n utc=True,\n infer_datetime_format=True\n )\n df['close_time'] = pd.to_datetime(df['close_time'],\n unit='s',\n utc=True,\n infer_datetime_format=True\n )\n df['profitabs'] = df['close_rate'] - df['open_rate']\n df = df.sort_values(\"open_time\").reset_index(drop=True)\n return df\n\n\ndef evaluate_result_multi(results: pd.DataFrame, freq: str, max_open_trades: int) -> pd.DataFrame:\n \"\"\"\n Find overlapping trades by expanding each trade once per period it was open\n and then counting overlaps\n :param results: Results Dataframe - can be loaded\n :param freq: Frequency used for the backtest\n :param max_open_trades: parameter max_open_trades used during backtest run\n :return: dataframe with open-counts per time-period in freq\n \"\"\"\n dates = [pd.Series(pd.date_range(row[1].open_time, row[1].close_time, freq=freq))\n for row in results[['open_time', 'close_time']].iterrows()]\n deltas = [len(x) for x in dates]\n dates = pd.Series(pd.concat(dates).values, name='date')\n df2 = pd.DataFrame(np.repeat(results.values, deltas, axis=0), columns=results.columns)\n\n df2 = pd.concat([dates, df2], axis=1)\n df2 = df2.set_index('date')\n df_final = df2.resample(freq)[['pair']].count()\n return df_final[df_final['pair'] > max_open_trades]\n\n\ndef load_trades_from_db(db_url: str) -> pd.DataFrame:\n \"\"\"\n Load trades from a DB (using dburl)\n :param db_url: Sqlite url (default format sqlite:///tradesv3.dry-run.sqlite)\n :return: Dataframe containing Trades\n \"\"\"\n trades: pd.DataFrame = pd.DataFrame([], columns=BT_DATA_COLUMNS)\n persistence.init(db_url, clean_open_orders=False)\n columns = [\"pair\", \"profit\", \"open_time\", \"close_time\",\n \"open_rate\", \"close_rate\", \"duration\", \"sell_reason\",\n \"max_rate\", \"min_rate\"]\n\n trades = pd.DataFrame([(t.pair, t.calc_profit(),\n t.open_date.replace(tzinfo=pytz.UTC),\n t.close_date.replace(tzinfo=pytz.UTC) if t.close_date else None,\n t.open_rate, t.close_rate,\n t.close_date.timestamp() - t.open_date.timestamp()\n if t.close_date else None,\n t.sell_reason,\n t.max_rate,\n t.min_rate,\n )\n for t in Trade.query.all()],\n columns=columns)\n\n return trades\n\n\ndef load_trades(config) -> pd.DataFrame:\n \"\"\"\n Based on configuration option \"trade_source\":\n * loads data from DB (using `db_url`)\n * loads data from backtestfile (using `exportfilename`)\n \"\"\"\n if config[\"trade_source\"] == \"DB\":\n return 
load_trades_from_db(config[\"db_url\"])\n elif config[\"trade_source\"] == \"file\":\n return load_backtest_data(Path(config[\"exportfilename\"]))\n\n\ndef extract_trades_of_period(dataframe: pd.DataFrame, trades: pd.DataFrame) -> pd.DataFrame:\n \"\"\"\n Compare trades and backtested pair DataFrames to get trades performed on backtested period\n :return: the DataFrame of a trades of period\n \"\"\"\n trades = trades.loc[(trades['open_time'] >= dataframe.iloc[0]['date']) &\n (trades['close_time'] <= dataframe.iloc[-1]['date'])]\n return trades\n\n\ndef combine_tickers_with_mean(tickers: Dict[str, pd.DataFrame], column: str = \"close\"):\n \"\"\"\n Combine multiple dataframes \"column\"\n :param tickers: Dict of Dataframes, dict key should be pair.\n :param column: Column in the original dataframes to use\n :return: DataFrame with the column renamed to the dict key, and a column\n named mean, containing the mean of all pairs.\n \"\"\"\n df_comb = pd.concat([tickers[pair].set_index('date').rename(\n {column: pair}, axis=1)[pair] for pair in tickers], axis=1)\n\n df_comb['mean'] = df_comb.mean(axis=1)\n\n return df_comb\n\n\ndef create_cum_profit(df: pd.DataFrame, trades: pd.DataFrame, col_name: str) -> pd.DataFrame:\n \"\"\"\n Adds a column `col_name` with the cumulative profit for the given trades array.\n :param df: DataFrame with date index\n :param trades: DataFrame containing trades (requires columns close_time and profitperc)\n :return: Returns df with one additional column, col_name, containing the cumulative profit.\n \"\"\"\n df[col_name] = trades.set_index('close_time')['profitperc'].cumsum()\n # Set first value to 0\n df.loc[df.iloc[0].name, col_name] = 0\n # FFill to get continuous\n df[col_name] = df[col_name].ffill()\n return df\n", "path": "freqtrade/data/btanalysis.py" } ]
diff --git a/freqtrade/data/btanalysis.py b/freqtrade/data/btanalysis.py index f2356c34b0f..5865d56a7fb 100644 --- a/freqtrade/data/btanalysis.py +++ b/freqtrade/data/btanalysis.py @@ -30,7 +30,7 @@ def load_backtest_data(filename) -> pd.DataFrame: filename = Path(filename) if not filename.is_file(): - raise ValueError("File {filename} does not exist.") + raise ValueError(f"File {filename} does not exist.") with filename.open() as file: data = json_load(file)
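The freqtrade fix is a single character, but it explains the confusing error text in the report: without the `f` prefix, `{filename}` is never interpolated into the message. A small, self-contained illustration (the path below is an invented stand-in, not a file from the report):

```python
from pathlib import Path

filename = Path("user_data/backtest_results/EMACrossHTF1h_results.json")

# Before the fix: a plain string, so the braces are kept literally.
print("File {filename} does not exist.")
# -> File {filename} does not exist.

# After the fix: an f-string, so the actual path appears in the error.
print(f"File {filename} does not exist.")
# -> File user_data/backtest_results/EMACrossHTF1h_results.json does not exist.
```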
projectmesa__mesa-1844
jupyterviz checkbox input change is not propagated
[ { "content": "import threading\n\nimport matplotlib.pyplot as plt\nimport networkx as nx\nimport reacton.ipywidgets as widgets\nimport solara\nfrom matplotlib.figure import Figure\nfrom matplotlib.ticker import MaxNLocator\n\nimport mesa\n\n# Avoid interactive backend\nplt.switch_backend(\"agg\")\n\n\[email protected]\ndef JupyterViz(\n model_class,\n model_params,\n measures=None,\n name=\"Mesa Model\",\n agent_portrayal=None,\n space_drawer=\"default\",\n play_interval=150,\n):\n \"\"\"Initialize a component to visualize a model.\n Args:\n model_class: class of the model to instantiate\n model_params: parameters for initializing the model\n measures: list of callables or data attributes to plot\n name: name for display\n agent_portrayal: options for rendering agents (dictionary)\n space_drawer: method to render the agent space for\n the model; default implementation is :meth:`make_space`;\n simulations with no space to visualize should\n specify `space_drawer=False`\n play_interval: play interval (default: 150)\n \"\"\"\n\n current_step, set_current_step = solara.use_state(0)\n\n # 1. Set up model parameters\n user_params, fixed_params = split_model_params(model_params)\n model_parameters, set_model_parameters = solara.use_state(\n {**fixed_params, **{k: v[\"value\"] for k, v in user_params.items()}}\n )\n\n # 2. Set up Model\n def make_model():\n model = model_class(**model_parameters)\n set_current_step(0)\n return model\n\n reset_counter = solara.use_reactive(0)\n model = solara.use_memo(\n make_model, dependencies=[*list(model_parameters.values()), reset_counter.value]\n )\n\n def handle_change_model_params(name: str, value: any):\n set_model_parameters({**model_parameters, name: value})\n\n # 3. Set up UI\n solara.Markdown(name)\n UserInputs(user_params, on_change=handle_change_model_params)\n ModelController(model, play_interval, current_step, set_current_step, reset_counter)\n\n with solara.GridFixed(columns=2):\n # 4. Space\n if space_drawer == \"default\":\n # draw with the default implementation\n make_space(model, agent_portrayal)\n elif space_drawer:\n # if specified, draw agent space with an alternate renderer\n space_drawer(model, agent_portrayal)\n # otherwise, do nothing (do not draw space)\n\n # 5. Plots\n for measure in measures:\n if callable(measure):\n # Is a custom object\n measure(model)\n else:\n make_plot(model, measure)\n\n\[email protected]\ndef ModelController(\n model, play_interval, current_step, set_current_step, reset_counter\n):\n playing = solara.use_reactive(False)\n thread = solara.use_reactive(None)\n # We track the previous step to detect if user resets the model via\n # clicking the reset button or changing the parameters. 
If previous_step >\n # current_step, it means a model reset happens while the simulation is\n # still playing.\n previous_step = solara.use_reactive(0)\n\n def on_value_play(change):\n if previous_step.value > current_step and current_step == 0:\n # We add extra checks for current_step == 0, just to be sure.\n # We automatically stop the playing if a model is reset.\n playing.value = False\n elif model.running:\n do_step()\n else:\n playing.value = False\n\n def do_step():\n model.step()\n previous_step.value = current_step\n set_current_step(model.schedule.steps)\n\n def do_play():\n model.running = True\n while model.running:\n do_step()\n\n def threaded_do_play():\n if thread is not None and thread.is_alive():\n return\n thread.value = threading.Thread(target=do_play)\n thread.start()\n\n def do_pause():\n if (thread is None) or (not thread.is_alive()):\n return\n model.running = False\n thread.join()\n\n def do_reset():\n reset_counter.value += 1\n\n with solara.Row():\n solara.Button(label=\"Step\", color=\"primary\", on_click=do_step)\n # This style is necessary so that the play widget has almost the same\n # height as typical Solara buttons.\n solara.Style(\n \"\"\"\n .widget-play {\n height: 30px;\n }\n \"\"\"\n )\n widgets.Play(\n value=0,\n interval=play_interval,\n repeat=True,\n show_repeat=False,\n on_value=on_value_play,\n playing=playing.value,\n on_playing=playing.set,\n )\n solara.Button(label=\"Reset\", color=\"primary\", on_click=do_reset)\n solara.Markdown(md_text=f\"**Step:** {current_step}\")\n # threaded_do_play is not used for now because it\n # doesn't work in Google colab. We use\n # ipywidgets.Play until it is fixed. The threading\n # version is definite a much better implementation,\n # if it works.\n # solara.Button(label=\"▶\", color=\"primary\", on_click=viz.threaded_do_play)\n # solara.Button(label=\"⏸︎\", color=\"primary\", on_click=viz.do_pause)\n # solara.Button(label=\"Reset\", color=\"primary\", on_click=do_reset)\n\n\ndef split_model_params(model_params):\n model_params_input = {}\n model_params_fixed = {}\n for k, v in model_params.items():\n if check_param_is_fixed(v):\n model_params_fixed[k] = v\n else:\n model_params_input[k] = v\n return model_params_input, model_params_fixed\n\n\ndef check_param_is_fixed(param):\n if not isinstance(param, dict):\n return True\n if \"type\" not in param:\n return True\n\n\[email protected]\ndef UserInputs(user_params, on_change=None):\n \"\"\"Initialize user inputs for configurable model parameters.\n Currently supports :class:`solara.SliderInt`, :class:`solara.SliderFloat`,\n :class:`solara.Select`, and :class:`solara.Checkbox`.\n\n Props:\n user_params: dictionary with options for the input, including label,\n min and max values, and other fields specific to the input type.\n on_change: function to be called with (name, value) when the value of an input changes.\n \"\"\"\n\n for name, options in user_params.items():\n # label for the input is \"label\" from options or name\n label = options.get(\"label\", name)\n input_type = options.get(\"type\")\n\n def change_handler(value, name=name):\n on_change(name, value)\n\n if input_type == \"SliderInt\":\n solara.SliderInt(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n min=options.get(\"min\"),\n max=options.get(\"max\"),\n step=options.get(\"step\"),\n )\n elif input_type == \"SliderFloat\":\n solara.SliderFloat(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n min=options.get(\"min\"),\n max=options.get(\"max\"),\n 
step=options.get(\"step\"),\n )\n elif input_type == \"Select\":\n solara.Select(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n values=options.get(\"values\"),\n )\n elif input_type == \"Checkbox\":\n solara.Checkbox(\n label=label,\n value=options.get(\"value\"),\n )\n else:\n raise ValueError(f\"{input_type} is not a supported input type\")\n\n\ndef make_space(model, agent_portrayal):\n space_fig = Figure()\n space_ax = space_fig.subplots()\n space = getattr(model, \"grid\", None)\n if space is None:\n # Sometimes the space is defined as model.space instead of model.grid\n space = model.space\n if isinstance(space, mesa.space.NetworkGrid):\n _draw_network_grid(space, space_ax, agent_portrayal)\n elif isinstance(space, mesa.space.ContinuousSpace):\n _draw_continuous_space(space, space_ax, agent_portrayal)\n else:\n _draw_grid(space, space_ax, agent_portrayal)\n space_ax.set_axis_off()\n solara.FigureMatplotlib(space_fig, format=\"png\")\n\n\ndef _draw_grid(space, space_ax, agent_portrayal):\n def portray(g):\n x = []\n y = []\n s = [] # size\n c = [] # color\n for i in range(g.width):\n for j in range(g.height):\n content = g._grid[i][j]\n if not content:\n continue\n if not hasattr(content, \"__iter__\"):\n # Is a single grid\n content = [content]\n for agent in content:\n data = agent_portrayal(agent)\n x.append(i)\n y.append(j)\n if \"size\" in data:\n s.append(data[\"size\"])\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n out[\"c\"] = c\n return out\n\n space_ax.scatter(**portray(space))\n\n\ndef _draw_network_grid(space, space_ax, agent_portrayal):\n graph = space.G\n pos = nx.spring_layout(graph, seed=0)\n nx.draw(\n graph,\n ax=space_ax,\n pos=pos,\n **agent_portrayal(graph),\n )\n\n\ndef _draw_continuous_space(space, space_ax, agent_portrayal):\n def portray(space):\n x = []\n y = []\n s = [] # size\n c = [] # color\n for agent in space._agent_to_index:\n data = agent_portrayal(agent)\n _x, _y = agent.pos\n x.append(_x)\n y.append(_y)\n if \"size\" in data:\n s.append(data[\"size\"])\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n out[\"c\"] = c\n return out\n\n space_ax.scatter(**portray(space))\n\n\ndef make_plot(model, measure):\n fig = Figure()\n ax = fig.subplots()\n df = model.datacollector.get_model_vars_dataframe()\n ax.plot(df.loc[:, measure])\n ax.set_ylabel(measure)\n # Set integer x axis\n ax.xaxis.set_major_locator(MaxNLocator(integer=True))\n solara.FigureMatplotlib(fig)\n\n\ndef make_text(renderer):\n def function(model):\n solara.Markdown(renderer(model))\n\n return function\n", "path": "mesa/experimental/jupyter_viz.py" } ]
[ { "content": "import threading\n\nimport matplotlib.pyplot as plt\nimport networkx as nx\nimport reacton.ipywidgets as widgets\nimport solara\nfrom matplotlib.figure import Figure\nfrom matplotlib.ticker import MaxNLocator\n\nimport mesa\n\n# Avoid interactive backend\nplt.switch_backend(\"agg\")\n\n\[email protected]\ndef JupyterViz(\n model_class,\n model_params,\n measures=None,\n name=\"Mesa Model\",\n agent_portrayal=None,\n space_drawer=\"default\",\n play_interval=150,\n):\n \"\"\"Initialize a component to visualize a model.\n Args:\n model_class: class of the model to instantiate\n model_params: parameters for initializing the model\n measures: list of callables or data attributes to plot\n name: name for display\n agent_portrayal: options for rendering agents (dictionary)\n space_drawer: method to render the agent space for\n the model; default implementation is :meth:`make_space`;\n simulations with no space to visualize should\n specify `space_drawer=False`\n play_interval: play interval (default: 150)\n \"\"\"\n\n current_step, set_current_step = solara.use_state(0)\n\n # 1. Set up model parameters\n user_params, fixed_params = split_model_params(model_params)\n model_parameters, set_model_parameters = solara.use_state(\n {**fixed_params, **{k: v[\"value\"] for k, v in user_params.items()}}\n )\n\n # 2. Set up Model\n def make_model():\n model = model_class(**model_parameters)\n set_current_step(0)\n return model\n\n reset_counter = solara.use_reactive(0)\n model = solara.use_memo(\n make_model, dependencies=[*list(model_parameters.values()), reset_counter.value]\n )\n\n def handle_change_model_params(name: str, value: any):\n set_model_parameters({**model_parameters, name: value})\n\n # 3. Set up UI\n solara.Markdown(name)\n UserInputs(user_params, on_change=handle_change_model_params)\n ModelController(model, play_interval, current_step, set_current_step, reset_counter)\n\n with solara.GridFixed(columns=2):\n # 4. Space\n if space_drawer == \"default\":\n # draw with the default implementation\n make_space(model, agent_portrayal)\n elif space_drawer:\n # if specified, draw agent space with an alternate renderer\n space_drawer(model, agent_portrayal)\n # otherwise, do nothing (do not draw space)\n\n # 5. Plots\n for measure in measures:\n if callable(measure):\n # Is a custom object\n measure(model)\n else:\n make_plot(model, measure)\n\n\[email protected]\ndef ModelController(\n model, play_interval, current_step, set_current_step, reset_counter\n):\n playing = solara.use_reactive(False)\n thread = solara.use_reactive(None)\n # We track the previous step to detect if user resets the model via\n # clicking the reset button or changing the parameters. 
If previous_step >\n # current_step, it means a model reset happens while the simulation is\n # still playing.\n previous_step = solara.use_reactive(0)\n\n def on_value_play(change):\n if previous_step.value > current_step and current_step == 0:\n # We add extra checks for current_step == 0, just to be sure.\n # We automatically stop the playing if a model is reset.\n playing.value = False\n elif model.running:\n do_step()\n else:\n playing.value = False\n\n def do_step():\n model.step()\n previous_step.value = current_step\n set_current_step(model.schedule.steps)\n\n def do_play():\n model.running = True\n while model.running:\n do_step()\n\n def threaded_do_play():\n if thread is not None and thread.is_alive():\n return\n thread.value = threading.Thread(target=do_play)\n thread.start()\n\n def do_pause():\n if (thread is None) or (not thread.is_alive()):\n return\n model.running = False\n thread.join()\n\n def do_reset():\n reset_counter.value += 1\n\n with solara.Row():\n solara.Button(label=\"Step\", color=\"primary\", on_click=do_step)\n # This style is necessary so that the play widget has almost the same\n # height as typical Solara buttons.\n solara.Style(\n \"\"\"\n .widget-play {\n height: 30px;\n }\n \"\"\"\n )\n widgets.Play(\n value=0,\n interval=play_interval,\n repeat=True,\n show_repeat=False,\n on_value=on_value_play,\n playing=playing.value,\n on_playing=playing.set,\n )\n solara.Button(label=\"Reset\", color=\"primary\", on_click=do_reset)\n solara.Markdown(md_text=f\"**Step:** {current_step}\")\n # threaded_do_play is not used for now because it\n # doesn't work in Google colab. We use\n # ipywidgets.Play until it is fixed. The threading\n # version is definite a much better implementation,\n # if it works.\n # solara.Button(label=\"▶\", color=\"primary\", on_click=viz.threaded_do_play)\n # solara.Button(label=\"⏸︎\", color=\"primary\", on_click=viz.do_pause)\n # solara.Button(label=\"Reset\", color=\"primary\", on_click=do_reset)\n\n\ndef split_model_params(model_params):\n model_params_input = {}\n model_params_fixed = {}\n for k, v in model_params.items():\n if check_param_is_fixed(v):\n model_params_fixed[k] = v\n else:\n model_params_input[k] = v\n return model_params_input, model_params_fixed\n\n\ndef check_param_is_fixed(param):\n if not isinstance(param, dict):\n return True\n if \"type\" not in param:\n return True\n\n\[email protected]\ndef UserInputs(user_params, on_change=None):\n \"\"\"Initialize user inputs for configurable model parameters.\n Currently supports :class:`solara.SliderInt`, :class:`solara.SliderFloat`,\n :class:`solara.Select`, and :class:`solara.Checkbox`.\n\n Props:\n user_params: dictionary with options for the input, including label,\n min and max values, and other fields specific to the input type.\n on_change: function to be called with (name, value) when the value of an input changes.\n \"\"\"\n\n for name, options in user_params.items():\n # label for the input is \"label\" from options or name\n label = options.get(\"label\", name)\n input_type = options.get(\"type\")\n\n def change_handler(value, name=name):\n on_change(name, value)\n\n if input_type == \"SliderInt\":\n solara.SliderInt(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n min=options.get(\"min\"),\n max=options.get(\"max\"),\n step=options.get(\"step\"),\n )\n elif input_type == \"SliderFloat\":\n solara.SliderFloat(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n min=options.get(\"min\"),\n max=options.get(\"max\"),\n 
step=options.get(\"step\"),\n )\n elif input_type == \"Select\":\n solara.Select(\n label,\n value=options.get(\"value\"),\n on_value=change_handler,\n values=options.get(\"values\"),\n )\n elif input_type == \"Checkbox\":\n solara.Checkbox(\n label=label,\n on_value=change_handler,\n value=options.get(\"value\"),\n )\n else:\n raise ValueError(f\"{input_type} is not a supported input type\")\n\n\ndef make_space(model, agent_portrayal):\n space_fig = Figure()\n space_ax = space_fig.subplots()\n space = getattr(model, \"grid\", None)\n if space is None:\n # Sometimes the space is defined as model.space instead of model.grid\n space = model.space\n if isinstance(space, mesa.space.NetworkGrid):\n _draw_network_grid(space, space_ax, agent_portrayal)\n elif isinstance(space, mesa.space.ContinuousSpace):\n _draw_continuous_space(space, space_ax, agent_portrayal)\n else:\n _draw_grid(space, space_ax, agent_portrayal)\n space_ax.set_axis_off()\n solara.FigureMatplotlib(space_fig, format=\"png\")\n\n\ndef _draw_grid(space, space_ax, agent_portrayal):\n def portray(g):\n x = []\n y = []\n s = [] # size\n c = [] # color\n for i in range(g.width):\n for j in range(g.height):\n content = g._grid[i][j]\n if not content:\n continue\n if not hasattr(content, \"__iter__\"):\n # Is a single grid\n content = [content]\n for agent in content:\n data = agent_portrayal(agent)\n x.append(i)\n y.append(j)\n if \"size\" in data:\n s.append(data[\"size\"])\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n out[\"c\"] = c\n return out\n\n space_ax.scatter(**portray(space))\n\n\ndef _draw_network_grid(space, space_ax, agent_portrayal):\n graph = space.G\n pos = nx.spring_layout(graph, seed=0)\n nx.draw(\n graph,\n ax=space_ax,\n pos=pos,\n **agent_portrayal(graph),\n )\n\n\ndef _draw_continuous_space(space, space_ax, agent_portrayal):\n def portray(space):\n x = []\n y = []\n s = [] # size\n c = [] # color\n for agent in space._agent_to_index:\n data = agent_portrayal(agent)\n _x, _y = agent.pos\n x.append(_x)\n y.append(_y)\n if \"size\" in data:\n s.append(data[\"size\"])\n if \"color\" in data:\n c.append(data[\"color\"])\n out = {\"x\": x, \"y\": y}\n if len(s) > 0:\n out[\"s\"] = s\n if len(c) > 0:\n out[\"c\"] = c\n return out\n\n space_ax.scatter(**portray(space))\n\n\ndef make_plot(model, measure):\n fig = Figure()\n ax = fig.subplots()\n df = model.datacollector.get_model_vars_dataframe()\n ax.plot(df.loc[:, measure])\n ax.set_ylabel(measure)\n # Set integer x axis\n ax.xaxis.set_major_locator(MaxNLocator(integer=True))\n solara.FigureMatplotlib(fig)\n\n\ndef make_text(renderer):\n def function(model):\n solara.Markdown(renderer(model))\n\n return function\n", "path": "mesa/experimental/jupyter_viz.py" } ]
diff --git a/mesa/experimental/jupyter_viz.py b/mesa/experimental/jupyter_viz.py index 3408ba11c6a..de207bf2926 100644 --- a/mesa/experimental/jupyter_viz.py +++ b/mesa/experimental/jupyter_viz.py @@ -228,6 +228,7 @@ def change_handler(value, name=name): elif input_type == "Checkbox": solara.Checkbox( label=label, + on_value=change_handler, value=options.get("value"), ) else:
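The mesa fix follows the pattern already used by the slider and select inputs: every widget has to forward its new value to the shared change handler via `on_value`, otherwise the parameter change never reaches the model. A reduced, self-contained sketch of that wiring (the component, state variable and parameter name are illustrative, not code from the project):

```python
import solara

@solara.component
def Page():
    # Illustrative state standing in for one entry of model_parameters.
    enabled, set_enabled = solara.use_state(True)

    def change_handler(value, name="enable_feature"):
        # Stand-in for handle_change_model_params(name, value) in JupyterViz.
        set_enabled(value)

    # The one-line fix: without on_value the checkbox renders, but toggling it
    # never reaches the handler, so the model parameters never change.
    solara.Checkbox(
        label="enable_feature",
        value=enabled,
        on_value=change_handler,
    )
```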
hylang__hy-885
Exclamation mark ! is not mangled
I noticed that https://github.com/hylang/hyway/blob/master/conway.hy uses "!" in `set!` and `get!`, but Hy doesn't mangle "!" into something else. The variable is added to the module as-is, which makes it hard to reach from normal Python code. Also, hy2py on Hy code with `set!` returns invalid syntax: `def set!(`.
[ { "content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport sys\nfrom functools import wraps\n\nfrom rply import ParserGenerator\n\nfrom hy.models.complex import HyComplex\nfrom hy.models.cons import HyCons\nfrom hy.models.dict import HyDict\nfrom hy.models.expression import HyExpression\nfrom hy.models.float import HyFloat\nfrom hy.models.integer import HyInteger\nfrom hy.models.keyword import HyKeyword\nfrom hy.models.list import HyList\nfrom hy.models.set import HySet\nfrom hy.models.string import HyString\nfrom hy.models.symbol import HySymbol\n\nfrom .lexer import lexer\nfrom .exceptions import LexException, PrematureEndOfInput\n\n\npg = ParserGenerator(\n [rule.name for rule in lexer.rules] + ['$end'],\n cache_id=\"hy_parser\"\n)\n\n\ndef set_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n end = p[-1].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n if start is not end:\n ret.end_line = end.lineno\n ret.end_column = end.colno\n else:\n ret.end_line = start.lineno\n ret.end_column = start.colno + len(p[0].value)\n return ret\n return wrapped\n\n\ndef set_quote_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n ret.end_line = p[-1].end_line\n ret.end_column = p[-1].end_column\n return ret\n return wrapped\n\n\[email protected](\"main : HASHBANG real_main\")\ndef main_hashbang(p):\n return p[1]\n\n\[email protected](\"main : real_main\")\ndef main(p):\n return p[0]\n\n\[email protected](\"real_main : list_contents\")\ndef real_main(p):\n return p[0]\n\n\[email protected](\"real_main : $end\")\ndef real_main_empty(p):\n return []\n\n\ndef reject_spurious_dots(*items):\n \"Reject the spurious dots from items\"\n for list in items:\n for tok in list:\n if tok == \".\" and type(tok) == HySymbol:\n raise LexException(\"Malformed dotted list\",\n tok.start_line, tok.start_column)\n\n\[email protected](\"paren : LPAREN list_contents RPAREN\")\n@set_boundaries\ndef paren(p):\n cont = p[1]\n\n # Dotted lists are expressions of the form\n # (a b c . d)\n # that evaluate to nested cons cells of the form\n # (a . (b . (c . 
d)))\n if len(cont) >= 3 and isinstance(cont[-2], HySymbol) and cont[-2] == \".\":\n\n reject_spurious_dots(cont[:-2], cont[-1:])\n\n if len(cont) == 3:\n # Two-item dotted list: return the cons cell directly\n return HyCons(cont[0], cont[2])\n else:\n # Return a nested cons cell\n return HyCons(cont[0], paren([p[0], cont[1:], p[2]]))\n\n # Warn preemptively on a malformed dotted list.\n # Only check for dots after the first item to allow for a potential\n # attribute accessor shorthand\n reject_spurious_dots(cont[1:])\n\n return HyExpression(p[1])\n\n\[email protected](\"paren : LPAREN RPAREN\")\n@set_boundaries\ndef empty_paren(p):\n return HyExpression([])\n\n\[email protected](\"list_contents : term list_contents\")\ndef list_contents(p):\n return [p[0]] + p[1]\n\n\[email protected](\"list_contents : term\")\ndef list_contents_single(p):\n return [p[0]]\n\n\[email protected](\"term : identifier\")\[email protected](\"term : paren\")\[email protected](\"term : dict\")\[email protected](\"term : list\")\[email protected](\"term : set\")\[email protected](\"term : string\")\ndef term(p):\n return p[0]\n\n\[email protected](\"term : QUOTE term\")\n@set_quote_boundaries\ndef term_quote(p):\n return HyExpression([HySymbol(\"quote\"), p[1]])\n\n\[email protected](\"term : QUASIQUOTE term\")\n@set_quote_boundaries\ndef term_quasiquote(p):\n return HyExpression([HySymbol(\"quasiquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTE term\")\n@set_quote_boundaries\ndef term_unquote(p):\n return HyExpression([HySymbol(\"unquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTESPLICE term\")\n@set_quote_boundaries\ndef term_unquote_splice(p):\n return HyExpression([HySymbol(\"unquote_splice\"), p[1]])\n\n\[email protected](\"term : HASHREADER term\")\n@set_quote_boundaries\ndef hash_reader(p):\n st = p[0].getstr()[1]\n str_object = HyString(st)\n expr = p[1]\n return HyExpression([HySymbol(\"dispatch_reader_macro\"), str_object, expr])\n\n\[email protected](\"set : HLCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_set(p):\n return HySet(p[1])\n\n\[email protected](\"set : HLCURLY RCURLY\")\n@set_boundaries\ndef empty_set(p):\n return HySet([])\n\n\[email protected](\"dict : LCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_dict(p):\n return HyDict(p[1])\n\n\[email protected](\"dict : LCURLY RCURLY\")\n@set_boundaries\ndef empty_dict(p):\n return HyDict([])\n\n\[email protected](\"list : LBRACKET list_contents RBRACKET\")\n@set_boundaries\ndef t_list(p):\n return HyList(p[1])\n\n\[email protected](\"list : LBRACKET RBRACKET\")\n@set_boundaries\ndef t_empty_list(p):\n return HyList([])\n\n\nif sys.version_info[0] >= 3:\n def uni_hystring(s):\n return HyString(eval(s))\nelse:\n def uni_hystring(s):\n return HyString(eval('u'+s))\n\n\[email protected](\"string : STRING\")\n@set_boundaries\ndef t_string(p):\n # remove trailing quote\n s = p[0].value[:-1]\n # get the header\n header, s = s.split('\"', 1)\n # remove unicode marker\n header = header.replace(\"u\", \"\")\n # build python string\n s = header + '\"\"\"' + s + '\"\"\"'\n return uni_hystring(s)\n\n\[email protected](\"string : PARTIAL_STRING\")\ndef t_partial_string(p):\n # Any unterminated string requires more input\n raise PrematureEndOfInput(\"Premature end of input\")\n\n\[email protected](\"identifier : IDENTIFIER\")\n@set_boundaries\ndef t_identifier(p):\n obj = p[0].value\n\n try:\n return HyInteger(obj)\n except ValueError:\n pass\n\n if '/' in obj:\n try:\n lhs, rhs = obj.split('/')\n return 
HyExpression([HySymbol('fraction'), HyInteger(lhs),\n HyInteger(rhs)])\n except ValueError:\n pass\n\n try:\n return HyFloat(obj)\n except ValueError:\n pass\n\n if obj != 'j':\n try:\n return HyComplex(obj)\n except ValueError:\n pass\n\n table = {\n \"true\": \"True\",\n \"false\": \"False\",\n \"nil\": \"None\",\n \"null\": \"None\",\n }\n\n if obj in table:\n return HySymbol(table[obj])\n\n if obj.startswith(\":\"):\n return HyKeyword(obj)\n\n def mangle(p):\n if p.startswith(\"*\") and p.endswith(\"*\") and p not in (\"*\", \"**\"):\n p = p[1:-1].upper()\n\n if \"-\" in p and p != \"-\":\n p = p.replace(\"-\", \"_\")\n\n if p.endswith(\"?\") and p != \"?\":\n p = \"is_%s\" % (p[:-1])\n\n return p\n\n obj = \".\".join([mangle(part) for part in obj.split(\".\")])\n\n return HySymbol(obj)\n\n\[email protected]\ndef error_handler(token):\n tokentype = token.gettokentype()\n if tokentype == '$end':\n raise PrematureEndOfInput(\"Premature end of input\")\n else:\n raise LexException(\n \"Ran into a %s where it wasn't expected.\" % tokentype,\n token.source_pos.lineno, token.source_pos.colno)\n\n\nparser = pg.build()\n", "path": "hy/lex/parser.py" } ]
[ { "content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport sys\nfrom functools import wraps\n\nfrom rply import ParserGenerator\n\nfrom hy.models.complex import HyComplex\nfrom hy.models.cons import HyCons\nfrom hy.models.dict import HyDict\nfrom hy.models.expression import HyExpression\nfrom hy.models.float import HyFloat\nfrom hy.models.integer import HyInteger\nfrom hy.models.keyword import HyKeyword\nfrom hy.models.list import HyList\nfrom hy.models.set import HySet\nfrom hy.models.string import HyString\nfrom hy.models.symbol import HySymbol\n\nfrom .lexer import lexer\nfrom .exceptions import LexException, PrematureEndOfInput\n\n\npg = ParserGenerator(\n [rule.name for rule in lexer.rules] + ['$end'],\n cache_id=\"hy_parser\"\n)\n\n\ndef set_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n end = p[-1].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n if start is not end:\n ret.end_line = end.lineno\n ret.end_column = end.colno\n else:\n ret.end_line = start.lineno\n ret.end_column = start.colno + len(p[0].value)\n return ret\n return wrapped\n\n\ndef set_quote_boundaries(fun):\n @wraps(fun)\n def wrapped(p):\n start = p[0].source_pos\n ret = fun(p)\n ret.start_line = start.lineno\n ret.start_column = start.colno\n ret.end_line = p[-1].end_line\n ret.end_column = p[-1].end_column\n return ret\n return wrapped\n\n\[email protected](\"main : HASHBANG real_main\")\ndef main_hashbang(p):\n return p[1]\n\n\[email protected](\"main : real_main\")\ndef main(p):\n return p[0]\n\n\[email protected](\"real_main : list_contents\")\ndef real_main(p):\n return p[0]\n\n\[email protected](\"real_main : $end\")\ndef real_main_empty(p):\n return []\n\n\ndef reject_spurious_dots(*items):\n \"Reject the spurious dots from items\"\n for list in items:\n for tok in list:\n if tok == \".\" and type(tok) == HySymbol:\n raise LexException(\"Malformed dotted list\",\n tok.start_line, tok.start_column)\n\n\[email protected](\"paren : LPAREN list_contents RPAREN\")\n@set_boundaries\ndef paren(p):\n cont = p[1]\n\n # Dotted lists are expressions of the form\n # (a b c . d)\n # that evaluate to nested cons cells of the form\n # (a . (b . (c . 
d)))\n if len(cont) >= 3 and isinstance(cont[-2], HySymbol) and cont[-2] == \".\":\n\n reject_spurious_dots(cont[:-2], cont[-1:])\n\n if len(cont) == 3:\n # Two-item dotted list: return the cons cell directly\n return HyCons(cont[0], cont[2])\n else:\n # Return a nested cons cell\n return HyCons(cont[0], paren([p[0], cont[1:], p[2]]))\n\n # Warn preemptively on a malformed dotted list.\n # Only check for dots after the first item to allow for a potential\n # attribute accessor shorthand\n reject_spurious_dots(cont[1:])\n\n return HyExpression(p[1])\n\n\[email protected](\"paren : LPAREN RPAREN\")\n@set_boundaries\ndef empty_paren(p):\n return HyExpression([])\n\n\[email protected](\"list_contents : term list_contents\")\ndef list_contents(p):\n return [p[0]] + p[1]\n\n\[email protected](\"list_contents : term\")\ndef list_contents_single(p):\n return [p[0]]\n\n\[email protected](\"term : identifier\")\[email protected](\"term : paren\")\[email protected](\"term : dict\")\[email protected](\"term : list\")\[email protected](\"term : set\")\[email protected](\"term : string\")\ndef term(p):\n return p[0]\n\n\[email protected](\"term : QUOTE term\")\n@set_quote_boundaries\ndef term_quote(p):\n return HyExpression([HySymbol(\"quote\"), p[1]])\n\n\[email protected](\"term : QUASIQUOTE term\")\n@set_quote_boundaries\ndef term_quasiquote(p):\n return HyExpression([HySymbol(\"quasiquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTE term\")\n@set_quote_boundaries\ndef term_unquote(p):\n return HyExpression([HySymbol(\"unquote\"), p[1]])\n\n\[email protected](\"term : UNQUOTESPLICE term\")\n@set_quote_boundaries\ndef term_unquote_splice(p):\n return HyExpression([HySymbol(\"unquote_splice\"), p[1]])\n\n\[email protected](\"term : HASHREADER term\")\n@set_quote_boundaries\ndef hash_reader(p):\n st = p[0].getstr()[1]\n str_object = HyString(st)\n expr = p[1]\n return HyExpression([HySymbol(\"dispatch_reader_macro\"), str_object, expr])\n\n\[email protected](\"set : HLCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_set(p):\n return HySet(p[1])\n\n\[email protected](\"set : HLCURLY RCURLY\")\n@set_boundaries\ndef empty_set(p):\n return HySet([])\n\n\[email protected](\"dict : LCURLY list_contents RCURLY\")\n@set_boundaries\ndef t_dict(p):\n return HyDict(p[1])\n\n\[email protected](\"dict : LCURLY RCURLY\")\n@set_boundaries\ndef empty_dict(p):\n return HyDict([])\n\n\[email protected](\"list : LBRACKET list_contents RBRACKET\")\n@set_boundaries\ndef t_list(p):\n return HyList(p[1])\n\n\[email protected](\"list : LBRACKET RBRACKET\")\n@set_boundaries\ndef t_empty_list(p):\n return HyList([])\n\n\nif sys.version_info[0] >= 3:\n def uni_hystring(s):\n return HyString(eval(s))\nelse:\n def uni_hystring(s):\n return HyString(eval('u'+s))\n\n\[email protected](\"string : STRING\")\n@set_boundaries\ndef t_string(p):\n # remove trailing quote\n s = p[0].value[:-1]\n # get the header\n header, s = s.split('\"', 1)\n # remove unicode marker\n header = header.replace(\"u\", \"\")\n # build python string\n s = header + '\"\"\"' + s + '\"\"\"'\n return uni_hystring(s)\n\n\[email protected](\"string : PARTIAL_STRING\")\ndef t_partial_string(p):\n # Any unterminated string requires more input\n raise PrematureEndOfInput(\"Premature end of input\")\n\n\[email protected](\"identifier : IDENTIFIER\")\n@set_boundaries\ndef t_identifier(p):\n obj = p[0].value\n\n try:\n return HyInteger(obj)\n except ValueError:\n pass\n\n if '/' in obj:\n try:\n lhs, rhs = obj.split('/')\n return 
HyExpression([HySymbol('fraction'), HyInteger(lhs),\n HyInteger(rhs)])\n except ValueError:\n pass\n\n try:\n return HyFloat(obj)\n except ValueError:\n pass\n\n if obj != 'j':\n try:\n return HyComplex(obj)\n except ValueError:\n pass\n\n table = {\n \"true\": \"True\",\n \"false\": \"False\",\n \"nil\": \"None\",\n \"null\": \"None\",\n }\n\n if obj in table:\n return HySymbol(table[obj])\n\n if obj.startswith(\":\"):\n return HyKeyword(obj)\n\n def mangle(p):\n if p.startswith(\"*\") and p.endswith(\"*\") and p not in (\"*\", \"**\"):\n p = p[1:-1].upper()\n\n if \"-\" in p and p != \"-\":\n p = p.replace(\"-\", \"_\")\n\n if p.endswith(\"?\") and p != \"?\":\n p = \"is_%s\" % (p[:-1])\n\n if p.endswith(\"!\") and p != \"!\":\n p = \"%s_bang\" % (p[:-1])\n\n return p\n\n obj = \".\".join([mangle(part) for part in obj.split(\".\")])\n\n return HySymbol(obj)\n\n\[email protected]\ndef error_handler(token):\n tokentype = token.gettokentype()\n if tokentype == '$end':\n raise PrematureEndOfInput(\"Premature end of input\")\n else:\n raise LexException(\n \"Ran into a %s where it wasn't expected.\" % tokentype,\n token.source_pos.lineno, token.source_pos.colno)\n\n\nparser = pg.build()\n", "path": "hy/lex/parser.py" } ]
diff --git a/hy/lex/parser.py b/hy/lex/parser.py index a63be3e33..22aaf26b3 100644 --- a/hy/lex/parser.py +++ b/hy/lex/parser.py @@ -308,6 +308,9 @@ def mangle(p): if p.endswith("?") and p != "?": p = "is_%s" % (p[:-1]) + if p.endswith("!") and p != "!": + p = "%s_bang" % (p[:-1]) + return p obj = ".".join([mangle(part) for part in obj.split(".")]) diff --git a/tests/lex/test_lex.py b/tests/lex/test_lex.py index cc956752c..56ed52ad2 100644 --- a/tests/lex/test_lex.py +++ b/tests/lex/test_lex.py @@ -326,6 +326,24 @@ def test_lex_mangling_qmark(): assert entry == [HySymbol(".is_foo.bar.is_baz")] +def test_lex_mangling_bang(): + """Ensure that identifiers ending with a bang get mangled ok""" + entry = tokenize("foo!") + assert entry == [HySymbol("foo_bang")] + entry = tokenize("!") + assert entry == [HySymbol("!")] + entry = tokenize("im!foo") + assert entry == [HySymbol("im!foo")] + entry = tokenize(".foo!") + assert entry == [HySymbol(".foo_bang")] + entry = tokenize("foo.bar!") + assert entry == [HySymbol("foo.bar_bang")] + entry = tokenize("foo!.bar") + assert entry == [HySymbol("foo_bang.bar")] + entry = tokenize(".foo!.bar.baz!") + assert entry == [HySymbol(".foo_bang.bar.baz_bang")] + + def test_simple_cons(): """Check that cons gets tokenized correctly""" entry = tokenize("(a . b)")[0]
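The new mangling rule mirrors the existing trailing-`?` rule: a trailing `!` is rewritten to a `_bang` suffix so the symbol becomes a valid Python identifier. A standalone sketch of just those two rules, extracted from the nested `mangle` helper in `hy/lex/parser.py` (not a drop-in replacement, since the real helper also handles `-` and `*earmuffs*`):

```python
def mangle(part: str) -> str:
    # Trailing "?" becomes an "is_" prefix; trailing "!" becomes a "_bang"
    # suffix, matching the rules added in hy/lex/parser.py above.
    if part.endswith("?") and part != "?":
        part = "is_%s" % part[:-1]
    if part.endswith("!") and part != "!":
        part = "%s_bang" % part[:-1]
    return part

assert mangle("set!") == "set_bang"
assert mangle("foo?") == "is_foo"
assert mangle("!") == "!"  # a lone "!" is left untouched
```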
mozilla__bugbug-598
Use new 'everchanged' operator instead of changedafter 1970 Depends on https://bugzilla.mozilla.org/show_bug.cgi?id=1546624.
[ { "content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport argparse\nimport csv\nimport sys\n\nimport requests\n\n\ndef parse_args(args):\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--types\",\n help=\"Types to retrieve\",\n default=[\"defect\", \"enhancement\", \"task\"],\n nargs=\"*\",\n )\n return parser.parse_args(args)\n\n\ndef main(args):\n params = {\n \"columnlist\": \"bug_type\",\n \"order\": \"bug_id\",\n \"j_top\": \"OR\",\n \"f1\": \"bug_type\",\n \"o1\": \"changedafter\",\n \"v1\": \"1970-01-01\",\n \"f2\": \"OP\",\n \"f3\": \"bug_type\",\n \"o3\": \"anyexact\",\n \"v3\": \"task,enhancement\",\n \"f4\": \"bug_id\",\n \"o4\": \"greaterthan\",\n \"v4\": 1540807,\n \"f5\": \"CP\",\n \"ctype\": \"csv\",\n }\n\n r = requests.get(\"https://bugzilla.mozilla.org/buglist.cgi\", params=params)\n r.raise_for_status()\n\n with open(\"bugbug/labels/defect_enhancement_task_h.csv\", \"r\") as f:\n reader = csv.reader(f)\n headers = next(reader)\n bug_type_map = {int(row[0]): row[1] for row in reader}\n\n # We add to our csv both labels that were changed, and labels that are in\n # the list of requested types.\n reader = csv.reader(r.text.splitlines())\n next(reader)\n for row in reader:\n if int(row[0]) in bug_type_map or row[1] in args.types:\n bug_type_map[int(row[0])] = row[1]\n\n with open(\"bugbug/labels/defect_enhancement_task_h.csv\", \"w\") as f:\n writer = csv.writer(f)\n writer.writerow(headers)\n writer.writerows(sorted(bug_type_map.items()))\n\n\nif __name__ == \"__main__\":\n main(parse_args(sys.argv[1:]))\n", "path": "scripts/get_type_labels.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport argparse\nimport csv\nimport sys\n\nimport requests\n\n\ndef parse_args(args):\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--types\",\n help=\"Types to retrieve\",\n default=[\"defect\", \"enhancement\", \"task\"],\n nargs=\"*\",\n )\n return parser.parse_args(args)\n\n\ndef main(args):\n params = {\n \"columnlist\": \"bug_type\",\n \"order\": \"bug_id\",\n \"j_top\": \"OR\",\n \"f1\": \"bug_type\",\n \"o1\": \"everchanged\",\n \"f2\": \"OP\",\n \"f3\": \"bug_type\",\n \"o3\": \"anyexact\",\n \"v3\": \"task,enhancement\",\n \"f4\": \"bug_id\",\n \"o4\": \"greaterthan\",\n \"v4\": 1540807,\n \"f5\": \"CP\",\n \"ctype\": \"csv\",\n }\n\n r = requests.get(\"https://bugzilla.mozilla.org/buglist.cgi\", params=params)\n r.raise_for_status()\n\n with open(\"bugbug/labels/defect_enhancement_task_h.csv\", \"r\") as f:\n reader = csv.reader(f)\n headers = next(reader)\n bug_type_map = {int(row[0]): row[1] for row in reader}\n\n # We add to our csv both labels that were changed, and labels that are in\n # the list of requested types.\n reader = csv.reader(r.text.splitlines())\n next(reader)\n for row in reader:\n if int(row[0]) in bug_type_map or row[1] in args.types:\n bug_type_map[int(row[0])] = row[1]\n\n with open(\"bugbug/labels/defect_enhancement_task_h.csv\", \"w\") as f:\n writer = csv.writer(f)\n writer.writerow(headers)\n writer.writerows(sorted(bug_type_map.items()))\n\n\nif __name__ == \"__main__\":\n main(parse_args(sys.argv[1:]))\n", "path": "scripts/get_type_labels.py" } ]
diff --git a/scripts/get_type_labels.py b/scripts/get_type_labels.py index 83d482c571..a7a179d95a 100644 --- a/scripts/get_type_labels.py +++ b/scripts/get_type_labels.py @@ -27,8 +27,7 @@ def main(args): "order": "bug_id", "j_top": "OR", "f1": "bug_type", - "o1": "changedafter", - "v1": "1970-01-01", + "o1": "everchanged", "f2": "OP", "f3": "bug_type", "o3": "anyexact",
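The change replaces the old `changedafter 1970-01-01` workaround with Bugzilla's `everchanged` operator. A minimal sketch of the resulting query, with the field and operator names taken from the patched script (running it assumes network access to bugzilla.mozilla.org):

```python
import requests

# Same advanced-search fields as scripts/get_type_labels.py, with the
# "everchanged" operator in place of "changedafter"/"1970-01-01".
params = {
    "columnlist": "bug_type",
    "order": "bug_id",
    "j_top": "OR",
    "f1": "bug_type",
    "o1": "everchanged",   # previously: "o1": "changedafter", "v1": "1970-01-01"
    "f2": "OP",
    "f3": "bug_type",
    "o3": "anyexact",
    "v3": "task,enhancement",
    "f4": "bug_id",
    "o4": "greaterthan",
    "v4": 1540807,
    "f5": "CP",
    "ctype": "csv",
}

r = requests.get("https://bugzilla.mozilla.org/buglist.cgi", params=params)
r.raise_for_status()
print(r.text.splitlines()[0])  # CSV header row
```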
nipy__nipype-2827
Python 3.4 tests failing on Travis ### Summary Looks like either a pytest or a pytest-xdist problem. Perhaps one of them stopped supporting 3.4. From [#6217.17](https://travis-ci.org/nipy/nipype/jobs/467617939): ``` $ py.test -v --cov nipype --cov-config .coveragerc --cov-report xml:cov.xml -c nipype/pytest.ini --doctest-modules nipype Traceback (most recent call last): File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 510, in load_setuptools_entrypoints plugin = ep.load() File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/pkg_resources/__init__.py", line 2301, in load self.require(*args, **kwargs) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/pkg_resources/__init__.py", line 2324, in require items = working_set.resolve(reqs, env, installer, extras=self.extras) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/pkg_resources/__init__.py", line 859, in resolve raise VersionConflict(dist, req).with_context(dependent_req) pkg_resources.VersionConflict: (pytest 3.0.7 (/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages), Requirement.parse('pytest>=3.6.0')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/travis/virtualenv/python3.4.6/bin/py.test", line 11, in <module> sys.exit(main()) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/config.py", line 47, in main config = _prepareconfig(args, plugins) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/config.py", line 156, in _prepareconfig pluginmanager=pluginmanager, args=args) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__ return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda> _MultiCall(methods, kwargs, hook.spec_opts).execute() File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 613, in execute return _wrapped_call(hook_impl.function(*args), self.execute) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 250, in _wrapped_call wrap_controller.send(call_outcome) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/helpconfig.py", line 32, in pytest_cmdline_parse config = outcome.get_result() File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 279, in get_result raise ex[1].with_traceback(ex[2]) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 265, in __init__ self.result = func() File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute res = hook_impl.function(*args) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/config.py", line 924, in pytest_cmdline_parse self.parse(args) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/config.py", line 1082, in parse self._preparse(args, 
addopts=addopts) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/config.py", line 1044, in _preparse self.pluginmanager.load_setuptools_entrypoints(entrypoint_name) File "/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages/_pytest/vendored_packages/pluggy.py", line 515, in load_setuptools_entrypoints "Plugin %r could not be loaded: %s!" % (ep.name, e)) _pytest.vendored_packages.pluggy.PluginValidationError: Plugin 'xdist' could not be loaded: (pytest 3.0.7 (/home/travis/virtualenv/python3.4.6/lib/python3.4/site-packages), Requirement.parse('pytest>=3.6.0'))! ``` ### Platform details: Travis. Python 3.4. ### Execution environment Choose one - Travis environment (Python 3.4)
[ { "content": "\"\"\" This file contains defines parameters for nipy that we use to fill\nsettings in setup.py, the nipy top-level docstring, and for building the\ndocs. In setup.py in particular, we exec this file, so it cannot import nipy\n\"\"\"\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\n\nimport sys\n\n# nipype version information. An empty version_extra corresponds to a\n# full release. '.dev' as a version_extra string means this is a development\n# version\n# Remove -dev for release\n__version__ = '1.1.7-dev'\n\n\ndef get_nipype_gitversion():\n \"\"\"Nipype version as reported by the last commit in git\n\n Returns\n -------\n None or str\n Version of Nipype according to git.\n \"\"\"\n import os\n import subprocess\n try:\n import nipype\n gitpath = os.path.realpath(\n os.path.join(os.path.dirname(nipype.__file__), os.path.pardir))\n except:\n gitpath = os.getcwd()\n gitpathgit = os.path.join(gitpath, '.git')\n if not os.path.exists(gitpathgit):\n return None\n ver = None\n try:\n o, _ = subprocess.Popen(\n 'git describe', shell=True, cwd=gitpath,\n stdout=subprocess.PIPE).communicate()\n except Exception:\n pass\n else:\n ver = o.decode().strip().split('-')[-1]\n return ver\n\n\nif __version__.endswith('-dev'):\n gitversion = get_nipype_gitversion()\n if gitversion:\n __version__ = '{}+{}'.format(__version__, gitversion)\n\nCLASSIFIERS = [\n 'Development Status :: 5 - Production/Stable', 'Environment :: Console',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6', 'Topic :: Scientific/Engineering'\n]\n\ndescription = 'Neuroimaging in Python: Pipelines and Interfaces'\n\n# Note: this long_description is actually a copy/paste from the top-level\n# README.txt, so that it shows up nicely on PyPI. So please remember to edit\n# it only in one place and sync it correctly.\nlong_description = \"\"\"========================================================\nNIPYPE: Neuroimaging in Python: Pipelines and Interfaces\n========================================================\n\nCurrent neuroimaging software offer users an incredible opportunity to\nanalyze data using a variety of different algorithms. However, this has\nresulted in a heterogeneous collection of specialized applications\nwithout transparent interoperability or a uniform operating interface.\n\n*Nipype*, an open-source, community-developed initiative under the\numbrella of `NiPy <http://nipy.org>`_, is a Python project that provides a\nuniform interface to existing neuroimaging software and facilitates interaction\nbetween these packages within a single workflow. Nipype provides an environment\nthat encourages interactive exploration of algorithms from different\npackages (e.g., AFNI, ANTS, BRAINS, BrainSuite, Camino, FreeSurfer, FSL, MNE,\nMRtrix, MNE, Nipy, Slicer, SPM), eases the design of workflows within and\nbetween packages, and reduces the learning curve necessary to use different \\\npackages. 
Nipype is creating a collaborative platform for neuroimaging \\\nsoftware development in a high-level language and addressing limitations of \\\nexisting pipeline systems.\n\n*Nipype* allows you to:\n\n* easily interact with tools from different software packages\n* combine processing steps from different software packages\n* develop new workflows faster by reusing common steps from old ones\n* process data faster by running it in parallel on many cores/machines\n* make your research easily reproducible\n* share your processing workflows with the community\n\"\"\"\n\n# versions\nNIBABEL_MIN_VERSION = '2.1.0'\nNETWORKX_MIN_VERSION = '1.9'\nNUMPY_MIN_VERSION = '1.9.0'\n# Numpy bug in python 3.7:\n# https://www.opensourceanswers.com/blog/you-shouldnt-use-python-37-for-data-science-right-now.html\nNUMPY_MIN_VERSION_37 = '1.15.3'\nSCIPY_MIN_VERSION = '0.14'\nTRAITS_MIN_VERSION = '4.6'\nDATEUTIL_MIN_VERSION = '2.2'\nPYTEST_MIN_VERSION = '3.0'\nFUTURE_MIN_VERSION = '0.16.0'\nSIMPLEJSON_MIN_VERSION = '3.8.0'\nPROV_VERSION = '1.5.2'\nCLICK_MIN_VERSION = '6.6.0'\nPYDOT_MIN_VERSION = '1.2.3'\n\nNAME = 'nipype'\nMAINTAINER = 'nipype developers'\nMAINTAINER_EMAIL = '[email protected]'\nDESCRIPTION = description\nLONG_DESCRIPTION = long_description\nURL = 'http://nipy.org/nipype'\nDOWNLOAD_URL = 'http://github.com/nipy/nipype/archives/master'\nLICENSE = 'Apache License, 2.0'\nAUTHOR = 'nipype developers'\nAUTHOR_EMAIL = '[email protected]'\nPLATFORMS = 'OS Independent'\nMAJOR = __version__.split('.')[0]\nMINOR = __version__.split('.')[1]\nMICRO = __version__.replace('-', '.').split('.')[2]\nISRELEASE = (len(__version__.replace('-', '.').split('.')) == 3\n or 'post' in __version__.replace('-', '.').split('.')[-1])\nVERSION = __version__\nPROVIDES = ['nipype']\nREQUIRES = [\n 'nibabel>=%s' % NIBABEL_MIN_VERSION,\n 'networkx>=%s' % NETWORKX_MIN_VERSION,\n 'numpy>=%s ; python_version < \"3.7\"' % NUMPY_MIN_VERSION,\n 'numpy>=%s ; python_version >= \"3.7\"' % NUMPY_MIN_VERSION_37,\n 'python-dateutil>=%s' % DATEUTIL_MIN_VERSION,\n 'scipy>=%s' % SCIPY_MIN_VERSION,\n 'traits>=%s' % TRAITS_MIN_VERSION,\n 'future>=%s' % FUTURE_MIN_VERSION,\n 'simplejson>=%s' % SIMPLEJSON_MIN_VERSION,\n 'prov>=%s' % PROV_VERSION,\n 'neurdflib',\n 'click>=%s' % CLICK_MIN_VERSION,\n 'funcsigs',\n 'pytest>=%s' % PYTEST_MIN_VERSION,\n 'pytest-xdist',\n 'mock',\n 'pydotplus',\n 'pydot>=%s' % PYDOT_MIN_VERSION,\n 'packaging',\n 'futures; python_version == \"2.7\"',\n]\n\nif sys.version_info <= (3, 4):\n REQUIRES.append('configparser')\n\nTESTS_REQUIRES = ['pytest-cov', 'codecov', 'pytest-env', 'coverage<5']\n\nEXTRA_REQUIRES = {\n 'doc': ['Sphinx>=1.4', 'numpydoc', 'matplotlib', 'pydotplus', 'pydot>=1.2.3'],\n 'tests': TESTS_REQUIRES,\n 'specs': ['yapf'],\n 'nipy': ['nitime', 'nilearn<0.5.0', 'dipy', 'nipy', 'matplotlib'],\n 'profiler': ['psutil>=5.0'],\n 'duecredit': ['duecredit'],\n 'xvfbwrapper': ['xvfbwrapper'],\n 'pybids': ['pybids==0.6.5'],\n 'ssh': ['paramiko'],\n # 'mesh': ['mayavi'] # Enable when it works\n}\n\n\ndef _list_union(iterable):\n return list(set(sum(iterable, [])))\n\n\n# Enable a handle to install all extra dependencies at once\nEXTRA_REQUIRES['all'] = _list_union(EXTRA_REQUIRES.values())\n# dev = doc + tests + specs\nEXTRA_REQUIRES['dev'] = _list_union(val for key, val in EXTRA_REQUIRES.items()\n if key in ('doc', 'tests', 'specs'))\n\nSTATUS = 'stable'\n", "path": "nipype/info.py" } ]
[ { "content": "\"\"\" This file contains defines parameters for nipy that we use to fill\nsettings in setup.py, the nipy top-level docstring, and for building the\ndocs. In setup.py in particular, we exec this file, so it cannot import nipy\n\"\"\"\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\n\nimport sys\n\n# nipype version information. An empty version_extra corresponds to a\n# full release. '.dev' as a version_extra string means this is a development\n# version\n# Remove -dev for release\n__version__ = '1.1.7-dev'\n\n\ndef get_nipype_gitversion():\n \"\"\"Nipype version as reported by the last commit in git\n\n Returns\n -------\n None or str\n Version of Nipype according to git.\n \"\"\"\n import os\n import subprocess\n try:\n import nipype\n gitpath = os.path.realpath(\n os.path.join(os.path.dirname(nipype.__file__), os.path.pardir))\n except:\n gitpath = os.getcwd()\n gitpathgit = os.path.join(gitpath, '.git')\n if not os.path.exists(gitpathgit):\n return None\n ver = None\n try:\n o, _ = subprocess.Popen(\n 'git describe', shell=True, cwd=gitpath,\n stdout=subprocess.PIPE).communicate()\n except Exception:\n pass\n else:\n ver = o.decode().strip().split('-')[-1]\n return ver\n\n\nif __version__.endswith('-dev'):\n gitversion = get_nipype_gitversion()\n if gitversion:\n __version__ = '{}+{}'.format(__version__, gitversion)\n\nCLASSIFIERS = [\n 'Development Status :: 5 - Production/Stable', 'Environment :: Console',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6', 'Topic :: Scientific/Engineering'\n]\n\ndescription = 'Neuroimaging in Python: Pipelines and Interfaces'\n\n# Note: this long_description is actually a copy/paste from the top-level\n# README.txt, so that it shows up nicely on PyPI. So please remember to edit\n# it only in one place and sync it correctly.\nlong_description = \"\"\"========================================================\nNIPYPE: Neuroimaging in Python: Pipelines and Interfaces\n========================================================\n\nCurrent neuroimaging software offer users an incredible opportunity to\nanalyze data using a variety of different algorithms. However, this has\nresulted in a heterogeneous collection of specialized applications\nwithout transparent interoperability or a uniform operating interface.\n\n*Nipype*, an open-source, community-developed initiative under the\numbrella of `NiPy <http://nipy.org>`_, is a Python project that provides a\nuniform interface to existing neuroimaging software and facilitates interaction\nbetween these packages within a single workflow. Nipype provides an environment\nthat encourages interactive exploration of algorithms from different\npackages (e.g., AFNI, ANTS, BRAINS, BrainSuite, Camino, FreeSurfer, FSL, MNE,\nMRtrix, MNE, Nipy, Slicer, SPM), eases the design of workflows within and\nbetween packages, and reduces the learning curve necessary to use different \\\npackages. 
Nipype is creating a collaborative platform for neuroimaging \\\nsoftware development in a high-level language and addressing limitations of \\\nexisting pipeline systems.\n\n*Nipype* allows you to:\n\n* easily interact with tools from different software packages\n* combine processing steps from different software packages\n* develop new workflows faster by reusing common steps from old ones\n* process data faster by running it in parallel on many cores/machines\n* make your research easily reproducible\n* share your processing workflows with the community\n\"\"\"\n\n# versions\nNIBABEL_MIN_VERSION = '2.1.0'\nNETWORKX_MIN_VERSION = '1.9'\nNUMPY_MIN_VERSION = '1.9.0'\n# Numpy bug in python 3.7:\n# https://www.opensourceanswers.com/blog/you-shouldnt-use-python-37-for-data-science-right-now.html\nNUMPY_MIN_VERSION_37 = '1.15.3'\nSCIPY_MIN_VERSION = '0.14'\nTRAITS_MIN_VERSION = '4.6'\nDATEUTIL_MIN_VERSION = '2.2'\nPYTEST_MIN_VERSION = '3.6'\nFUTURE_MIN_VERSION = '0.16.0'\nSIMPLEJSON_MIN_VERSION = '3.8.0'\nPROV_VERSION = '1.5.2'\nCLICK_MIN_VERSION = '6.6.0'\nPYDOT_MIN_VERSION = '1.2.3'\n\nNAME = 'nipype'\nMAINTAINER = 'nipype developers'\nMAINTAINER_EMAIL = '[email protected]'\nDESCRIPTION = description\nLONG_DESCRIPTION = long_description\nURL = 'http://nipy.org/nipype'\nDOWNLOAD_URL = 'http://github.com/nipy/nipype/archives/master'\nLICENSE = 'Apache License, 2.0'\nAUTHOR = 'nipype developers'\nAUTHOR_EMAIL = '[email protected]'\nPLATFORMS = 'OS Independent'\nMAJOR = __version__.split('.')[0]\nMINOR = __version__.split('.')[1]\nMICRO = __version__.replace('-', '.').split('.')[2]\nISRELEASE = (len(__version__.replace('-', '.').split('.')) == 3\n or 'post' in __version__.replace('-', '.').split('.')[-1])\nVERSION = __version__\nPROVIDES = ['nipype']\nREQUIRES = [\n 'nibabel>=%s' % NIBABEL_MIN_VERSION,\n 'networkx>=%s' % NETWORKX_MIN_VERSION,\n 'numpy>=%s ; python_version < \"3.7\"' % NUMPY_MIN_VERSION,\n 'numpy>=%s ; python_version >= \"3.7\"' % NUMPY_MIN_VERSION_37,\n 'python-dateutil>=%s' % DATEUTIL_MIN_VERSION,\n 'scipy>=%s' % SCIPY_MIN_VERSION,\n 'traits>=%s' % TRAITS_MIN_VERSION,\n 'future>=%s' % FUTURE_MIN_VERSION,\n 'simplejson>=%s' % SIMPLEJSON_MIN_VERSION,\n 'prov>=%s' % PROV_VERSION,\n 'neurdflib',\n 'click>=%s' % CLICK_MIN_VERSION,\n 'funcsigs',\n 'pytest>=%s' % PYTEST_MIN_VERSION,\n 'pytest-xdist',\n 'mock',\n 'pydotplus',\n 'pydot>=%s' % PYDOT_MIN_VERSION,\n 'packaging',\n 'futures; python_version == \"2.7\"',\n]\n\nif sys.version_info <= (3, 4):\n REQUIRES.append('configparser')\n\nTESTS_REQUIRES = ['pytest-cov', 'codecov', 'pytest-env', 'coverage<5']\n\nEXTRA_REQUIRES = {\n 'doc': ['Sphinx>=1.4', 'numpydoc', 'matplotlib', 'pydotplus', 'pydot>=1.2.3'],\n 'tests': TESTS_REQUIRES,\n 'specs': ['yapf'],\n 'nipy': ['nitime', 'nilearn<0.5.0', 'dipy', 'nipy', 'matplotlib'],\n 'profiler': ['psutil>=5.0'],\n 'duecredit': ['duecredit'],\n 'xvfbwrapper': ['xvfbwrapper'],\n 'pybids': ['pybids==0.6.5'],\n 'ssh': ['paramiko'],\n # 'mesh': ['mayavi'] # Enable when it works\n}\n\n\ndef _list_union(iterable):\n return list(set(sum(iterable, [])))\n\n\n# Enable a handle to install all extra dependencies at once\nEXTRA_REQUIRES['all'] = _list_union(EXTRA_REQUIRES.values())\n# dev = doc + tests + specs\nEXTRA_REQUIRES['dev'] = _list_union(val for key, val in EXTRA_REQUIRES.items()\n if key in ('doc', 'tests', 'specs'))\n\nSTATUS = 'stable'\n", "path": "nipype/info.py" } ]
diff --git a/nipype/info.py b/nipype/info.py index 1cf361c40c..be6edb713f 100644 --- a/nipype/info.py +++ b/nipype/info.py @@ -108,7 +108,7 @@ def get_nipype_gitversion(): SCIPY_MIN_VERSION = '0.14' TRAITS_MIN_VERSION = '4.6' DATEUTIL_MIN_VERSION = '2.2' -PYTEST_MIN_VERSION = '3.0' +PYTEST_MIN_VERSION = '3.6' FUTURE_MIN_VERSION = '0.16.0' SIMPLEJSON_MIN_VERSION = '3.8.0' PROV_VERSION = '1.5.2'
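The fix raises `PYTEST_MIN_VERSION` from 3.0 to 3.6 because pytest-xdist declares `pytest>=3.6`, which is exactly the `VersionConflict` shown in the traceback. A hypothetical pre-flight check that reproduces that dependency resolution step (the requirement string is the only assumption):

```python
import pkg_resources

# pytest-xdist requires pytest>=3.6; with an older pinned pytest (3.0.x)
# loading the plugin raises a VersionConflict, as seen on Travis above.
try:
    pkg_resources.require("pytest>=3.6")
except pkg_resources.VersionConflict as exc:
    print("pytest too old for pytest-xdist: %s" % exc)
```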
Chia-Network__chia-blockchain-14904
[Bug] Windows CLI waits for passphrase input but no longer prompts for it in 1.7.1 ### What happened? When using the CLI to start the wallet (or indeed any service) - the prompt `(Unlock Keyring) Passphrase:` is no longer shown until *after* the passphrase is entered. The daemon will display `Starting daemon` and then wait there for the passphrase input. Once the passphrase is entered, the following is shown `(Unlock Keyring) Passphrase: Unlocking daemon keyring` This is a regression in 1.7.1 as it works as expected in 1.7.0 Tested in both cmd and powershell 7.3.2 ### Version 1.7.1 ### What platform are you using? Windows ### What ui mode are you using? CLI ### Relevant log output ```shell N/A ```
[ { "content": "from __future__ import annotations\n\nimport os\nimport sys\nimport time\nfrom getpass import getpass\nfrom io import TextIOWrapper\nfrom pathlib import Path\nfrom typing import Any, Dict, Optional, Tuple\n\nimport colorama\n\nfrom chia.daemon.client import acquire_connection_to_daemon\nfrom chia.util.errors import KeychainMaxUnlockAttempts\nfrom chia.util.keychain import Keychain, supports_os_passphrase_storage\nfrom chia.util.keyring_wrapper import DEFAULT_PASSPHRASE_IF_NO_MASTER_PASSPHRASE, KeyringWrapper\nfrom chia.util.misc import prompt_yes_no\n\nDEFAULT_PASSPHRASE_PROMPT = (\n colorama.Fore.YELLOW + colorama.Style.BRIGHT + \"(Unlock Keyring)\" + colorama.Style.RESET_ALL + \" Passphrase: \"\n) # noqa: E501\nFAILED_ATTEMPT_DELAY = 0.5\nMAX_KEYS = 100\nMAX_RETRIES = 3\nSAVE_MASTER_PASSPHRASE_WARNING = (\n colorama.Fore.YELLOW\n + colorama.Style.BRIGHT\n + \"\\n!!! SECURITY WARNING !!!\\n\"\n + colorama.Style.RESET_ALL\n + \"Other processes may be able to access your saved passphrase, possibly exposing your private keys.\\n\"\n + \"You should not save your passphrase unless you fully trust your environment.\\n\"\n)\n\n\ndef obtain_current_passphrase(prompt: str = DEFAULT_PASSPHRASE_PROMPT, use_passphrase_cache: bool = False) -> str:\n \"\"\"\n Obtains the master passphrase for the keyring, optionally using the cached\n value (if previously set). If the passphrase isn't already cached, the user is\n prompted interactively to enter their passphrase a max of MAX_RETRIES times\n before failing.\n \"\"\"\n\n if use_passphrase_cache:\n passphrase, validated = KeyringWrapper.get_shared_instance().get_cached_master_passphrase()\n if passphrase:\n # If the cached passphrase was previously validated, we assume it's... valid\n if validated:\n return passphrase\n\n # Cached passphrase needs to be validated\n if KeyringWrapper.get_shared_instance().master_passphrase_is_valid(passphrase):\n KeyringWrapper.get_shared_instance().set_cached_master_passphrase(passphrase, validated=True)\n return passphrase\n else:\n # Cached passphrase is bad, clear the cache\n KeyringWrapper.get_shared_instance().set_cached_master_passphrase(None)\n\n # Prompt interactively with up to MAX_RETRIES attempts\n for i in range(MAX_RETRIES):\n colorama.init()\n\n passphrase = prompt_for_passphrase(prompt)\n\n if KeyringWrapper.get_shared_instance().master_passphrase_is_valid(passphrase):\n # If using the passphrase cache, and the user inputted a passphrase, update the cache\n if use_passphrase_cache:\n KeyringWrapper.get_shared_instance().set_cached_master_passphrase(passphrase, validated=True)\n return passphrase\n\n time.sleep(FAILED_ATTEMPT_DELAY)\n print(\"Incorrect passphrase\\n\")\n raise KeychainMaxUnlockAttempts()\n\n\ndef verify_passphrase_meets_requirements(\n new_passphrase: str, confirmation_passphrase: str\n) -> Tuple[bool, Optional[str]]:\n match = new_passphrase == confirmation_passphrase\n min_length = Keychain.minimum_passphrase_length()\n meets_len_requirement = len(new_passphrase) >= min_length\n\n if match and meets_len_requirement:\n return True, None\n elif not match:\n return False, \"Passphrases do not match\"\n elif not meets_len_requirement:\n return False, f\"Minimum passphrase length is {min_length}\"\n else:\n raise Exception(\"Unexpected passphrase verification case\")\n\n\ndef prompt_for_passphrase(prompt: str) -> str:\n if sys.platform == \"win32\" or sys.platform == \"cygwin\":\n print(prompt, end=\"\")\n prompt = \"\"\n return getpass(prompt)\n\n\ndef 
prompt_to_save_passphrase() -> bool:\n save: bool = False\n\n try:\n if supports_os_passphrase_storage():\n location: Optional[str] = None\n warning: Optional[str] = None\n\n if sys.platform == \"darwin\":\n location = \"macOS Keychain\"\n warning = SAVE_MASTER_PASSPHRASE_WARNING\n elif sys.platform == \"win32\" or sys.platform == \"cygwin\":\n location = \"Windows Credential Manager\"\n warning = SAVE_MASTER_PASSPHRASE_WARNING\n\n if location is None:\n raise ValueError(\"OS-specific credential store not specified\")\n\n print(\n \"\\n\"\n \"Your passphrase can be stored in your system's secure credential store. \"\n \"Other Chia processes will be able to access your keys without prompting for your passphrase.\"\n )\n if warning is not None:\n colorama.init()\n\n print(warning)\n save = prompt_yes_no(f\"Would you like to save your passphrase to the {location}?\")\n\n except Exception as e:\n print(f\"Caught exception: {e}\")\n return False\n\n return save\n\n\ndef prompt_for_new_passphrase() -> Tuple[str, bool]:\n min_length: int = Keychain.minimum_passphrase_length()\n if min_length > 0:\n n = min_length\n print(f\"\\nPassphrases must be {n} or more characters in length\") # lgtm [py/clear-text-logging-sensitive-data]\n while True:\n passphrase: str = getpass(\"New Passphrase: \")\n confirmation: str = getpass(\"Confirm Passphrase: \")\n save_passphrase: bool = False\n\n valid_passphrase, error_msg = verify_passphrase_meets_requirements(passphrase, confirmation)\n\n if valid_passphrase:\n if supports_os_passphrase_storage():\n save_passphrase = prompt_to_save_passphrase()\n\n return passphrase, save_passphrase\n elif error_msg:\n print(f\"{error_msg}\\n\") # lgtm [py/clear-text-logging-sensitive-data]\n\n\ndef read_passphrase_from_file(passphrase_file: TextIOWrapper) -> str:\n passphrase = passphrase_file.read().rstrip(os.environ.get(\"CHIA_PASSPHRASE_STRIP_TRAILING_CHARS\", \"\\r\\n\"))\n passphrase_file.close()\n return passphrase\n\n\ndef initialize_passphrase() -> None:\n if Keychain.has_master_passphrase():\n print(\"Keyring is already protected by a passphrase\")\n print(\"\\nUse 'chia passphrase set' or 'chia passphrase remove' to update or remove your passphrase\")\n sys.exit(1)\n\n # We'll rely on Keyring initialization to leverage the cached passphrase for\n # bootstrapping the keyring encryption process\n print(\"Setting keyring passphrase\")\n passphrase: Optional[str] = None\n # save_passphrase indicates whether the passphrase should be saved in the\n # macOS Keychain or Windows Credential Manager\n save_passphrase: bool = False\n\n if Keychain.has_cached_passphrase():\n passphrase = Keychain.get_cached_master_passphrase()\n\n if not passphrase or passphrase == default_passphrase():\n passphrase, save_passphrase = prompt_for_new_passphrase()\n\n Keychain.set_master_passphrase(current_passphrase=None, new_passphrase=passphrase, save_passphrase=save_passphrase)\n\n\ndef set_or_update_passphrase(passphrase: Optional[str], current_passphrase: Optional[str], hint: Optional[str]) -> bool:\n # Prompt for the current passphrase, if necessary\n if Keychain.has_master_passphrase():\n # Try the default passphrase first\n if using_default_passphrase():\n current_passphrase = default_passphrase()\n\n if not current_passphrase:\n try:\n current_passphrase = obtain_current_passphrase(\"Current Passphrase: \")\n except Exception as e:\n print(f\"Unable to confirm current passphrase: {e}\")\n sys.exit(1)\n\n success: bool = False\n new_passphrase: Optional[str] = passphrase\n 
save_passphrase: bool = False\n\n try:\n # Prompt for the new passphrase, if necessary\n if new_passphrase is None:\n new_passphrase, save_passphrase = prompt_for_new_passphrase()\n\n if new_passphrase == current_passphrase:\n raise ValueError(\"passphrase is unchanged\")\n\n Keychain.set_master_passphrase(\n current_passphrase=current_passphrase,\n new_passphrase=new_passphrase,\n passphrase_hint=hint,\n save_passphrase=save_passphrase,\n )\n success = True\n except Exception as e:\n print(f\"Unable to set or update passphrase: {e}\")\n success = False\n\n return success\n\n\ndef remove_passphrase(current_passphrase: Optional[str]) -> bool:\n \"\"\"\n Removes the user's keyring passphrase. The keyring will be re-encrypted to the default passphrase.\n \"\"\"\n success = False\n\n if not Keychain.has_master_passphrase() or using_default_passphrase():\n print(\"Passphrase is not currently set\")\n success = False\n else:\n # Try the default passphrase first\n if using_default_passphrase():\n current_passphrase = default_passphrase()\n\n # Prompt for the current passphrase, if necessary\n if not current_passphrase:\n try:\n current_passphrase = obtain_current_passphrase(\"Current Passphrase: \")\n except Exception as e:\n print(f\"Unable to confirm current passphrase: {e}\")\n success = False\n\n if current_passphrase:\n try:\n Keychain.remove_master_passphrase(current_passphrase)\n success = True\n except Exception as e:\n print(f\"Unable to remove passphrase: {e}\")\n success = False\n\n return success\n\n\ndef cache_passphrase(passphrase: str) -> None:\n Keychain.set_cached_master_passphrase(passphrase)\n\n\ndef get_current_passphrase() -> Optional[str]:\n if not Keychain.has_master_passphrase():\n return None\n\n current_passphrase = None\n if using_default_passphrase():\n current_passphrase = default_passphrase()\n else:\n try:\n current_passphrase = obtain_current_passphrase()\n except Exception as e:\n print(f\"Unable to confirm current passphrase: {e}\")\n raise\n\n return current_passphrase\n\n\ndef default_passphrase() -> str:\n return DEFAULT_PASSPHRASE_IF_NO_MASTER_PASSPHRASE\n\n\ndef using_default_passphrase() -> bool:\n if not Keychain.has_master_passphrase():\n return False\n\n return Keychain.master_passphrase_is_valid(default_passphrase())\n\n\ndef display_passphrase_hint() -> None:\n passphrase_hint = Keychain.get_master_passphrase_hint()\n if passphrase_hint is not None:\n print(f\"Passphrase hint: {passphrase_hint}\") # lgtm [py/clear-text-logging-sensitive-data]\n else:\n print(\"Passphrase hint is not set\")\n\n\ndef update_passphrase_hint(hint: Optional[str] = None) -> bool:\n updated: bool = False\n if Keychain.has_master_passphrase() is False or using_default_passphrase():\n print(\"Updating the passphrase hint requires that a passphrase has been set\")\n else:\n current_passphrase: Optional[str] = get_current_passphrase()\n if current_passphrase is None:\n print(\"Keyring is not passphrase-protected\")\n else:\n # Set or remove the passphrase hint\n Keychain.set_master_passphrase_hint(current_passphrase, hint)\n updated = True\n\n return updated\n\n\ndef set_passphrase_hint(hint: str) -> None:\n if update_passphrase_hint(hint):\n print(\"Passphrase hint set\")\n else:\n print(\"Passphrase hint was not updated\")\n\n\ndef remove_passphrase_hint() -> None:\n if update_passphrase_hint(None):\n print(\"Passphrase hint removed\")\n else:\n print(\"Passphrase hint was not removed\")\n\n\nasync def async_update_daemon_passphrase_cache_if_running(root_path: Path, config: 
Dict[str, Any]) -> None:\n \"\"\"\n Attempt to connect to the daemon and update the cached passphrase\n \"\"\"\n new_passphrase = Keychain.get_cached_master_passphrase()\n assert new_passphrase is not None\n\n try:\n async with acquire_connection_to_daemon(root_path, config, quiet=True) as daemon:\n if daemon is not None:\n response = await daemon.unlock_keyring(new_passphrase)\n if response is None:\n raise Exception(\"daemon didn't respond\")\n\n success: bool = response.get(\"data\", {}).get(\"success\", False)\n if success is False:\n error = response.get(\"data\", {}).get(\"error\", \"unknown error\")\n raise Exception(error)\n except Exception as e:\n print(f\"Failed to notify daemon of updated keyring passphrase: {e}\")\n", "path": "chia/cmds/passphrase_funcs.py" } ]
[ { "content": "from __future__ import annotations\n\nimport os\nimport sys\nimport time\nfrom getpass import getpass\nfrom io import TextIOWrapper\nfrom pathlib import Path\nfrom typing import Any, Dict, Optional, Tuple\n\nimport colorama\n\nfrom chia.daemon.client import acquire_connection_to_daemon\nfrom chia.util.errors import KeychainMaxUnlockAttempts\nfrom chia.util.keychain import Keychain, supports_os_passphrase_storage\nfrom chia.util.keyring_wrapper import DEFAULT_PASSPHRASE_IF_NO_MASTER_PASSPHRASE, KeyringWrapper\nfrom chia.util.misc import prompt_yes_no\n\nDEFAULT_PASSPHRASE_PROMPT = (\n colorama.Fore.YELLOW + colorama.Style.BRIGHT + \"(Unlock Keyring)\" + colorama.Style.RESET_ALL + \" Passphrase: \"\n) # noqa: E501\nFAILED_ATTEMPT_DELAY = 0.5\nMAX_KEYS = 100\nMAX_RETRIES = 3\nSAVE_MASTER_PASSPHRASE_WARNING = (\n colorama.Fore.YELLOW\n + colorama.Style.BRIGHT\n + \"\\n!!! SECURITY WARNING !!!\\n\"\n + colorama.Style.RESET_ALL\n + \"Other processes may be able to access your saved passphrase, possibly exposing your private keys.\\n\"\n + \"You should not save your passphrase unless you fully trust your environment.\\n\"\n)\n\n\ndef obtain_current_passphrase(prompt: str = DEFAULT_PASSPHRASE_PROMPT, use_passphrase_cache: bool = False) -> str:\n \"\"\"\n Obtains the master passphrase for the keyring, optionally using the cached\n value (if previously set). If the passphrase isn't already cached, the user is\n prompted interactively to enter their passphrase a max of MAX_RETRIES times\n before failing.\n \"\"\"\n\n if use_passphrase_cache:\n passphrase, validated = KeyringWrapper.get_shared_instance().get_cached_master_passphrase()\n if passphrase:\n # If the cached passphrase was previously validated, we assume it's... valid\n if validated:\n return passphrase\n\n # Cached passphrase needs to be validated\n if KeyringWrapper.get_shared_instance().master_passphrase_is_valid(passphrase):\n KeyringWrapper.get_shared_instance().set_cached_master_passphrase(passphrase, validated=True)\n return passphrase\n else:\n # Cached passphrase is bad, clear the cache\n KeyringWrapper.get_shared_instance().set_cached_master_passphrase(None)\n\n # Prompt interactively with up to MAX_RETRIES attempts\n for i in range(MAX_RETRIES):\n colorama.init()\n\n passphrase = prompt_for_passphrase(prompt)\n\n if KeyringWrapper.get_shared_instance().master_passphrase_is_valid(passphrase):\n # If using the passphrase cache, and the user inputted a passphrase, update the cache\n if use_passphrase_cache:\n KeyringWrapper.get_shared_instance().set_cached_master_passphrase(passphrase, validated=True)\n return passphrase\n\n time.sleep(FAILED_ATTEMPT_DELAY)\n print(\"Incorrect passphrase\\n\")\n raise KeychainMaxUnlockAttempts()\n\n\ndef verify_passphrase_meets_requirements(\n new_passphrase: str, confirmation_passphrase: str\n) -> Tuple[bool, Optional[str]]:\n match = new_passphrase == confirmation_passphrase\n min_length = Keychain.minimum_passphrase_length()\n meets_len_requirement = len(new_passphrase) >= min_length\n\n if match and meets_len_requirement:\n return True, None\n elif not match:\n return False, \"Passphrases do not match\"\n elif not meets_len_requirement:\n return False, f\"Minimum passphrase length is {min_length}\"\n else:\n raise Exception(\"Unexpected passphrase verification case\")\n\n\ndef prompt_for_passphrase(prompt: str) -> str:\n if sys.platform == \"win32\" or sys.platform == \"cygwin\":\n print(prompt, end=\"\", flush=True)\n prompt = \"\"\n return getpass(prompt)\n\n\ndef 
prompt_to_save_passphrase() -> bool:\n save: bool = False\n\n try:\n if supports_os_passphrase_storage():\n location: Optional[str] = None\n warning: Optional[str] = None\n\n if sys.platform == \"darwin\":\n location = \"macOS Keychain\"\n warning = SAVE_MASTER_PASSPHRASE_WARNING\n elif sys.platform == \"win32\" or sys.platform == \"cygwin\":\n location = \"Windows Credential Manager\"\n warning = SAVE_MASTER_PASSPHRASE_WARNING\n\n if location is None:\n raise ValueError(\"OS-specific credential store not specified\")\n\n print(\n \"\\n\"\n \"Your passphrase can be stored in your system's secure credential store. \"\n \"Other Chia processes will be able to access your keys without prompting for your passphrase.\"\n )\n if warning is not None:\n colorama.init()\n\n print(warning)\n save = prompt_yes_no(f\"Would you like to save your passphrase to the {location}?\")\n\n except Exception as e:\n print(f\"Caught exception: {e}\")\n return False\n\n return save\n\n\ndef prompt_for_new_passphrase() -> Tuple[str, bool]:\n min_length: int = Keychain.minimum_passphrase_length()\n if min_length > 0:\n n = min_length\n print(f\"\\nPassphrases must be {n} or more characters in length\") # lgtm [py/clear-text-logging-sensitive-data]\n while True:\n passphrase: str = getpass(\"New Passphrase: \")\n confirmation: str = getpass(\"Confirm Passphrase: \")\n save_passphrase: bool = False\n\n valid_passphrase, error_msg = verify_passphrase_meets_requirements(passphrase, confirmation)\n\n if valid_passphrase:\n if supports_os_passphrase_storage():\n save_passphrase = prompt_to_save_passphrase()\n\n return passphrase, save_passphrase\n elif error_msg:\n print(f\"{error_msg}\\n\") # lgtm [py/clear-text-logging-sensitive-data]\n\n\ndef read_passphrase_from_file(passphrase_file: TextIOWrapper) -> str:\n passphrase = passphrase_file.read().rstrip(os.environ.get(\"CHIA_PASSPHRASE_STRIP_TRAILING_CHARS\", \"\\r\\n\"))\n passphrase_file.close()\n return passphrase\n\n\ndef initialize_passphrase() -> None:\n if Keychain.has_master_passphrase():\n print(\"Keyring is already protected by a passphrase\")\n print(\"\\nUse 'chia passphrase set' or 'chia passphrase remove' to update or remove your passphrase\")\n sys.exit(1)\n\n # We'll rely on Keyring initialization to leverage the cached passphrase for\n # bootstrapping the keyring encryption process\n print(\"Setting keyring passphrase\")\n passphrase: Optional[str] = None\n # save_passphrase indicates whether the passphrase should be saved in the\n # macOS Keychain or Windows Credential Manager\n save_passphrase: bool = False\n\n if Keychain.has_cached_passphrase():\n passphrase = Keychain.get_cached_master_passphrase()\n\n if not passphrase or passphrase == default_passphrase():\n passphrase, save_passphrase = prompt_for_new_passphrase()\n\n Keychain.set_master_passphrase(current_passphrase=None, new_passphrase=passphrase, save_passphrase=save_passphrase)\n\n\ndef set_or_update_passphrase(passphrase: Optional[str], current_passphrase: Optional[str], hint: Optional[str]) -> bool:\n # Prompt for the current passphrase, if necessary\n if Keychain.has_master_passphrase():\n # Try the default passphrase first\n if using_default_passphrase():\n current_passphrase = default_passphrase()\n\n if not current_passphrase:\n try:\n current_passphrase = obtain_current_passphrase(\"Current Passphrase: \")\n except Exception as e:\n print(f\"Unable to confirm current passphrase: {e}\")\n sys.exit(1)\n\n success: bool = False\n new_passphrase: Optional[str] = passphrase\n 
save_passphrase: bool = False\n\n try:\n # Prompt for the new passphrase, if necessary\n if new_passphrase is None:\n new_passphrase, save_passphrase = prompt_for_new_passphrase()\n\n if new_passphrase == current_passphrase:\n raise ValueError(\"passphrase is unchanged\")\n\n Keychain.set_master_passphrase(\n current_passphrase=current_passphrase,\n new_passphrase=new_passphrase,\n passphrase_hint=hint,\n save_passphrase=save_passphrase,\n )\n success = True\n except Exception as e:\n print(f\"Unable to set or update passphrase: {e}\")\n success = False\n\n return success\n\n\ndef remove_passphrase(current_passphrase: Optional[str]) -> bool:\n \"\"\"\n Removes the user's keyring passphrase. The keyring will be re-encrypted to the default passphrase.\n \"\"\"\n success = False\n\n if not Keychain.has_master_passphrase() or using_default_passphrase():\n print(\"Passphrase is not currently set\")\n success = False\n else:\n # Try the default passphrase first\n if using_default_passphrase():\n current_passphrase = default_passphrase()\n\n # Prompt for the current passphrase, if necessary\n if not current_passphrase:\n try:\n current_passphrase = obtain_current_passphrase(\"Current Passphrase: \")\n except Exception as e:\n print(f\"Unable to confirm current passphrase: {e}\")\n success = False\n\n if current_passphrase:\n try:\n Keychain.remove_master_passphrase(current_passphrase)\n success = True\n except Exception as e:\n print(f\"Unable to remove passphrase: {e}\")\n success = False\n\n return success\n\n\ndef cache_passphrase(passphrase: str) -> None:\n Keychain.set_cached_master_passphrase(passphrase)\n\n\ndef get_current_passphrase() -> Optional[str]:\n if not Keychain.has_master_passphrase():\n return None\n\n current_passphrase = None\n if using_default_passphrase():\n current_passphrase = default_passphrase()\n else:\n try:\n current_passphrase = obtain_current_passphrase()\n except Exception as e:\n print(f\"Unable to confirm current passphrase: {e}\")\n raise\n\n return current_passphrase\n\n\ndef default_passphrase() -> str:\n return DEFAULT_PASSPHRASE_IF_NO_MASTER_PASSPHRASE\n\n\ndef using_default_passphrase() -> bool:\n if not Keychain.has_master_passphrase():\n return False\n\n return Keychain.master_passphrase_is_valid(default_passphrase())\n\n\ndef display_passphrase_hint() -> None:\n passphrase_hint = Keychain.get_master_passphrase_hint()\n if passphrase_hint is not None:\n print(f\"Passphrase hint: {passphrase_hint}\") # lgtm [py/clear-text-logging-sensitive-data]\n else:\n print(\"Passphrase hint is not set\")\n\n\ndef update_passphrase_hint(hint: Optional[str] = None) -> bool:\n updated: bool = False\n if Keychain.has_master_passphrase() is False or using_default_passphrase():\n print(\"Updating the passphrase hint requires that a passphrase has been set\")\n else:\n current_passphrase: Optional[str] = get_current_passphrase()\n if current_passphrase is None:\n print(\"Keyring is not passphrase-protected\")\n else:\n # Set or remove the passphrase hint\n Keychain.set_master_passphrase_hint(current_passphrase, hint)\n updated = True\n\n return updated\n\n\ndef set_passphrase_hint(hint: str) -> None:\n if update_passphrase_hint(hint):\n print(\"Passphrase hint set\")\n else:\n print(\"Passphrase hint was not updated\")\n\n\ndef remove_passphrase_hint() -> None:\n if update_passphrase_hint(None):\n print(\"Passphrase hint removed\")\n else:\n print(\"Passphrase hint was not removed\")\n\n\nasync def async_update_daemon_passphrase_cache_if_running(root_path: Path, config: 
Dict[str, Any]) -> None:\n \"\"\"\n Attempt to connect to the daemon and update the cached passphrase\n \"\"\"\n new_passphrase = Keychain.get_cached_master_passphrase()\n assert new_passphrase is not None\n\n try:\n async with acquire_connection_to_daemon(root_path, config, quiet=True) as daemon:\n if daemon is not None:\n response = await daemon.unlock_keyring(new_passphrase)\n if response is None:\n raise Exception(\"daemon didn't respond\")\n\n success: bool = response.get(\"data\", {}).get(\"success\", False)\n if success is False:\n error = response.get(\"data\", {}).get(\"error\", \"unknown error\")\n raise Exception(error)\n except Exception as e:\n print(f\"Failed to notify daemon of updated keyring passphrase: {e}\")\n", "path": "chia/cmds/passphrase_funcs.py" } ]
diff --git a/chia/cmds/passphrase_funcs.py b/chia/cmds/passphrase_funcs.py index 98c5fe243c8c..69ca4f14059e 100644 --- a/chia/cmds/passphrase_funcs.py +++ b/chia/cmds/passphrase_funcs.py @@ -91,7 +91,7 @@ def verify_passphrase_meets_requirements( def prompt_for_passphrase(prompt: str) -> str: if sys.platform == "win32" or sys.platform == "cygwin": - print(prompt, end="") + print(prompt, end="", flush=True) prompt = "" return getpass(prompt)
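The one-line fix adds `flush=True` to the Windows-only prompt print. A minimal standard-library sketch of why that matters: the coloured prompt is printed separately and `getpass()` receives an empty prompt, so an unflushed stdout buffer can hide the text until after the passphrase is typed:

```python
import sys
from getpass import getpass

prompt = "(Unlock Keyring) Passphrase: "
if sys.platform == "win32" or sys.platform == "cygwin":
    # flush so the prompt is visible before getpass blocks for input
    print(prompt, end="", flush=True)
    prompt = ""
passphrase = getpass(prompt)
```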
ivy-llc__ivy-18252
broadcast_to
[ { "content": "# global\nimport ivy\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\nfrom ivy.func_wrapper import (\n with_unsupported_dtypes,\n with_supported_dtypes,\n)\n\n\n@to_ivy_arrays_and_back\ndef reshape(x, shape):\n return ivy.reshape(x, shape)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\nabsolute = abs\n\n\n@to_ivy_arrays_and_back\ndef stack(x, axis=0, name=None):\n return ivy.stack(x, axis=axis)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef concat(x, axis, name=None):\n return ivy.concat(x, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int8\", \"uint8\", \"int16\", \"float16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef tile(x, repeat_times, name=None):\n return ivy.tile(x, repeats=repeat_times)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int16\", \"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef split(x, num_or_sections, axis=0, name=None):\n return ivy.split(x, num_or_size_splits=num_or_sections, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"float16\", \"bfloat16\", \"int8\", \"int16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef squeeze(x, axis=None, name=None):\n return ivy.squeeze(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\n \"2.5.0 and below\": (\n \"bool\",\n \"float16\",\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"uint8\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cast(x, dtype):\n return ivy.astype(x, dtype)\n", "path": "ivy/functional/frontends/paddle/tensor/manipulation.py" } ]
[ { "content": "# global\nimport ivy\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\nfrom ivy.func_wrapper import (\n with_unsupported_dtypes,\n with_supported_dtypes,\n)\n\n\n@to_ivy_arrays_and_back\ndef reshape(x, shape):\n return ivy.reshape(x, shape)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\nabsolute = abs\n\n\n@to_ivy_arrays_and_back\ndef stack(x, axis=0, name=None):\n return ivy.stack(x, axis=axis)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef concat(x, axis, name=None):\n return ivy.concat(x, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int8\", \"uint8\", \"int16\", \"float16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef tile(x, repeat_times, name=None):\n return ivy.tile(x, repeats=repeat_times)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int16\", \"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef split(x, num_or_sections, axis=0, name=None):\n return ivy.split(x, num_or_size_splits=num_or_sections, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"float16\", \"bfloat16\", \"int8\", \"int16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef squeeze(x, axis=None, name=None):\n return ivy.squeeze(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\n \"2.5.0 and below\": (\n \"bool\",\n \"float16\",\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"uint8\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cast(x, dtype):\n return ivy.astype(x, dtype)\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef broadcast_to(x, shape, name=None):\n return ivy.broadcast_to(x, shape)\n", "path": "ivy/functional/frontends/paddle/tensor/manipulation.py" } ]
diff --git a/ivy/functional/frontends/paddle/tensor/manipulation.py b/ivy/functional/frontends/paddle/tensor/manipulation.py index adb24100b6e7c..7ef098a2c71d3 100644 --- a/ivy/functional/frontends/paddle/tensor/manipulation.py +++ b/ivy/functional/frontends/paddle/tensor/manipulation.py @@ -78,3 +78,12 @@ def squeeze(x, axis=None, name=None): @to_ivy_arrays_and_back def cast(x, dtype): return ivy.astype(x, dtype) + + +@with_supported_dtypes( + {"2.5.0 and below": ("bool", "float32", "float64", "int32", "int64")}, + "paddle", +) +@to_ivy_arrays_and_back +def broadcast_to(x, shape, name=None): + return ivy.broadcast_to(x, shape) diff --git a/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_manipulation.py b/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_manipulation.py index 38fd74caf3c2f..4547de5df1c9a 100644 --- a/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_manipulation.py +++ b/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_paddle_manipulation.py @@ -362,3 +362,46 @@ def test_paddle_cast( x=x[0], dtype=dtype[0], ) + + [email protected] +def _broadcast_to_helper(draw): + dtype_and_x = draw( + helpers.dtype_and_values( + available_dtypes=helpers.get_dtypes("valid"), + min_num_dims=1, + max_num_dims=6, + ) + ) + + dtype, x = dtype_and_x + input_shape = x[0].shape + + max_num_dims = 6 - len(input_shape) + shape = draw(helpers.get_shape(max_num_dims=max_num_dims)) + input_shape + + return dtype, x, shape + + +@handle_frontend_test( + fn_tree="paddle.broadcast_to", + dtype_x_and_shape=_broadcast_to_helper(), +) +def test_paddle_broadcast_to( + *, + dtype_x_and_shape, + on_device, + fn_tree, + frontend, + test_flags, +): + input_dtype, x, shape = dtype_x_and_shape + helpers.test_frontend_function( + input_dtypes=input_dtype, + frontend=frontend, + test_flags=test_flags, + fn_tree=fn_tree, + on_device=on_device, + x=x[0], + shape=shape, + )
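A hypothetical usage sketch of the newly added Paddle-frontend `broadcast_to`: the import path comes from the module the patch edits, while the example array and target shape are made up for illustration.

```python
import ivy
from ivy.functional.frontends.paddle.tensor.manipulation import broadcast_to

x = ivy.array([[1.0, 2.0, 3.0]])   # shape (1, 3)
y = broadcast_to(x, shape=(2, 3))  # broadcast result has shape (2, 3)
```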
cobbler__cobbler-1265
build_reporting fails if empty string in ignorelist The default configuration in the ubuntu 12.04 cobbler 2.6.5 package has the following in `/etc/settings`: ``` build_reporting_ignorelist = [""] ``` The code that reads this value is in `install_post_report.py`, and the condition that determines whether to send a build report email is: ``` for prefix in settings.build_reporting_ignorelist: if name.lower().startswith(prefix) == True: sendmail = False ``` With the default configuration, this check always succeeds, and **mail is not sent**. Fix the issue by modifying the condition to: ``` if prefix != '' and name.lower().startswith(prefix): ```
[ { "content": "# (c) 2008-2009\n# Jeff Schroeder <[email protected]>\n# Michael DeHaan <michael.dehaan AT gmail>\n#\n# License: GPLv2+\n\n# Post install trigger for cobbler to\n# send out a pretty email report that\n# contains target information.\n\nimport distutils.sysconfig\nimport sys\nimport os\nimport traceback\n\nplib = distutils.sysconfig.get_python_lib()\nmod_path=\"%s/cobbler\" % plib\nsys.path.insert(0, mod_path)\n\nfrom utils import _\nimport smtplib\nimport sys\nimport cobbler.templar as templar\nfrom cobbler.cexceptions import CX\nimport utils\n\ndef register():\n # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.\n # the return of this method indicates the trigger type\n return \"/var/lib/cobbler/triggers/install/post/*\"\n\ndef run(api, args, logger):\n # FIXME: make everything use the logger\n\n settings = api.settings()\n\n # go no further if this feature is turned off\n if not str(settings.build_reporting_enabled).lower() in [ \"1\", \"yes\", \"y\", \"true\"]:\n return 0\n\n objtype = args[0] # \"target\" or \"profile\"\n name = args[1] # name of target or profile\n boot_ip = args[2] # ip or \"?\"\n\n if objtype == \"system\":\n target = api.find_system(name)\n else:\n target = api.find_profile(name)\n\n # collapse the object down to a rendered datastructure\n target = utils.blender(api, False, target)\n\n if target == {}:\n raise CX(\"failure looking up target\")\n\n to_addr = settings.build_reporting_email\n if to_addr == \"\":\n return 0\n\n # add the ability to specify an MTA for servers that don't run their own\n smtp_server = settings.build_reporting_smtp_server\n if smtp_server == \"\":\n smtp_server = \"localhost\"\n\n # use a custom from address or fall back to a reasonable default\n from_addr = settings.build_reporting_sender\n if from_addr == \"\":\n from_addr = \"cobbler@%s\" % settings.server\n\n subject = settings.build_reporting_subject\n if subject == \"\":\n subject = '[Cobbler] install complete '\n\n to_addr = \",\".join(to_addr)\n metadata = {\n \"from_addr\" : from_addr,\n \"to_addr\" : to_addr,\n \"subject\" : subject,\n \"boot_ip\" : boot_ip\n }\n metadata.update(target)\n\n input_template = open(\"/etc/cobbler/reporting/build_report_email.template\")\n input_data = input_template.read()\n input_template.close()\n\n message = templar.Templar(api._config).render(input_data, metadata, None)\n \n # for debug, call\n # print message\n\n sendmail = True\n for prefix in settings.build_reporting_ignorelist:\n if name.lower().startswith(prefix) == True:\n sendmail = False\n\n if sendmail == True:\n # Send the mail\n # FIXME: on error, return non-zero\n server_handle = smtplib.SMTP(smtp_server)\n server_handle.sendmail(from_addr, to_addr.split(','), message)\n server_handle.quit()\n\n return 0\n\n\n\n\n", "path": "cobbler/modules/install_post_report.py" } ]
[ { "content": "# (c) 2008-2009\n# Jeff Schroeder <[email protected]>\n# Michael DeHaan <michael.dehaan AT gmail>\n#\n# License: GPLv2+\n\n# Post install trigger for cobbler to\n# send out a pretty email report that\n# contains target information.\n\nimport distutils.sysconfig\nimport sys\nimport os\nimport traceback\n\nplib = distutils.sysconfig.get_python_lib()\nmod_path=\"%s/cobbler\" % plib\nsys.path.insert(0, mod_path)\n\nfrom utils import _\nimport smtplib\nimport sys\nimport cobbler.templar as templar\nfrom cobbler.cexceptions import CX\nimport utils\n\ndef register():\n # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.\n # the return of this method indicates the trigger type\n return \"/var/lib/cobbler/triggers/install/post/*\"\n\ndef run(api, args, logger):\n # FIXME: make everything use the logger\n\n settings = api.settings()\n\n # go no further if this feature is turned off\n if not str(settings.build_reporting_enabled).lower() in [ \"1\", \"yes\", \"y\", \"true\"]:\n return 0\n\n objtype = args[0] # \"target\" or \"profile\"\n name = args[1] # name of target or profile\n boot_ip = args[2] # ip or \"?\"\n\n if objtype == \"system\":\n target = api.find_system(name)\n else:\n target = api.find_profile(name)\n\n # collapse the object down to a rendered datastructure\n target = utils.blender(api, False, target)\n\n if target == {}:\n raise CX(\"failure looking up target\")\n\n to_addr = settings.build_reporting_email\n if to_addr == \"\":\n return 0\n\n # add the ability to specify an MTA for servers that don't run their own\n smtp_server = settings.build_reporting_smtp_server\n if smtp_server == \"\":\n smtp_server = \"localhost\"\n\n # use a custom from address or fall back to a reasonable default\n from_addr = settings.build_reporting_sender\n if from_addr == \"\":\n from_addr = \"cobbler@%s\" % settings.server\n\n subject = settings.build_reporting_subject\n if subject == \"\":\n subject = '[Cobbler] install complete '\n\n to_addr = \",\".join(to_addr)\n metadata = {\n \"from_addr\" : from_addr,\n \"to_addr\" : to_addr,\n \"subject\" : subject,\n \"boot_ip\" : boot_ip\n }\n metadata.update(target)\n\n input_template = open(\"/etc/cobbler/reporting/build_report_email.template\")\n input_data = input_template.read()\n input_template.close()\n\n message = templar.Templar(api._config).render(input_data, metadata, None)\n \n # for debug, call\n # print message\n\n sendmail = True\n for prefix in settings.build_reporting_ignorelist:\n if prefix != '' and name.lower().startswith(prefix):\n sendmail = False\n\n if sendmail == True:\n # Send the mail\n # FIXME: on error, return non-zero\n server_handle = smtplib.SMTP(smtp_server)\n server_handle.sendmail(from_addr, to_addr.split(','), message)\n server_handle.quit()\n\n return 0\n\n\n\n\n", "path": "cobbler/modules/install_post_report.py" } ]
diff --git a/cobbler/modules/install_post_report.py b/cobbler/modules/install_post_report.py index 052dfb149e..4f9401bbfc 100755 --- a/cobbler/modules/install_post_report.py +++ b/cobbler/modules/install_post_report.py @@ -91,7 +91,7 @@ def run(api, args, logger): sendmail = True for prefix in settings.build_reporting_ignorelist: - if name.lower().startswith(prefix) == True: + if prefix != '' and name.lower().startswith(prefix): sendmail = False if sendmail == True:
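The diff above drops the redundant `== True` comparison and, more importantly, stops treating an empty prefix as a match. A minimal sketch of the pitfall, assuming a hypothetical system name and ignore list (neither value comes from the record above):

```python
# str.startswith('') is always True, so a blank entry in
# build_reporting_ignorelist would have suppressed every build report email.
name = "web01.example.com"     # hypothetical system name
ignorelist = ["", "test-"]     # hypothetical settings value containing a blank entry

# Pre-patch logic: the blank prefix matches every name
old_skip = any(name.lower().startswith(prefix) for prefix in ignorelist)

# Patched logic: blank prefixes are skipped before the startswith test
new_skip = any(prefix != '' and name.lower().startswith(prefix) for prefix in ignorelist)

print(old_skip)  # True  -> report mail would never be sent
print(new_skip)  # False -> mail is sent unless a real prefix matches
```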
mathesar-foundation__mathesar-673
IndexError when deleting a column
## Description
<!-- A clear and concise description of what the bug is. -->
An IndexError occurs when deleting a column through the API. Most of the time the error occurs when deleting the first or second column of a table. Deleting the last columns in a table does not seem to produce this error.

## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
- A column should be deleted

## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
1. Delete the first or second column of a table via API. Example: api/v0/tables/1/columns/1/
2. Delete the first or second column of another table via API. Example: api/v0/tables/2/columns/0/

## Screenshots
![Screenshot (35)](https://user-images.githubusercontent.com/66047479/133317146-3e4aa024-afc2-4370-9a79-1007549edccd.png)
![Screenshot (36)](https://user-images.githubusercontent.com/66047479/133317195-0da1725a-9895-4ee3-8b18-90b83eda90c4.png)

## Environment
 - OS: (_eg._ macOS 10.14.6; Fedora 32)
 - Browser: (_eg._ Safari; Firefox)
 - Browser Version: (_eg._ 13; 73)
 - Other info:

## Additional context
<!-- Add any other context about the problem or screenshots here. -->
[ { "content": "import warnings\n\nfrom sqlalchemy import Table, MetaData, and_, select, text, func\n\nfrom db.tables.operations.select import reflect_table_from_oid\nfrom db.utils import execute_statement\n\n\ndef get_column_index_from_name(table_oid, column_name, engine, connection_to_use=None):\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_attribute = Table(\"pg_attribute\", MetaData(), autoload_with=engine)\n sel = select(pg_attribute.c.attnum).where(\n and_(\n pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attname == column_name\n )\n )\n result = execute_statement(engine, sel, connection_to_use).fetchone()[0]\n\n # Account for dropped columns that don't appear in the SQLAlchemy tables\n sel = (\n select(func.count())\n .where(and_(\n pg_attribute.c.attisdropped.is_(True),\n pg_attribute.c.attnum < result,\n ))\n )\n dropped_count = execute_statement(engine, sel, connection_to_use).fetchone()[0]\n\n return result - 1 - dropped_count\n\n\ndef get_column_default(table_oid, column_index, engine, connection_to_use=None):\n table = reflect_table_from_oid(table_oid, engine, connection_to_use)\n column = table.columns[column_index]\n if column.server_default is None:\n return None\n\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_attribute = Table(\"pg_attribute\", metadata, autoload_with=engine)\n pg_attrdef = Table(\"pg_attrdef\", metadata, autoload_with=engine)\n\n query = (\n select(pg_attrdef.c.adbin)\n .select_from(\n pg_attrdef\n .join(\n pg_attribute,\n and_(\n pg_attribute.c.attnum == pg_attrdef.c.adnum,\n pg_attribute.c.attrelid == pg_attrdef.c.adrelid\n )\n )\n )\n .where(and_(\n pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attname == column.name,\n pg_attribute.c.attnum >= 1,\n ))\n )\n\n result = execute_statement(engine, query, connection_to_use).first()[0]\n\n # Here, we get the 'adbin' value for the current column, stored in the attrdef\n # system table. The prefix of this value tells us whether the default is static\n # ('{CONSTANT') or generated ('{FUNCEXPR'). We do not return generated defaults.\n if result.startswith(\"{FUNCEXPR\"):\n return None\n\n default_textual_sql = column.server_default.arg.text\n # Defaults are stored as text with SQL casts appended\n # Ex: \"'test default string'::character varying\" or \"'2020-01-01'::date\"\n # Here, we execute the cast to get the proper python value\n return execute_statement(engine, select(text(default_textual_sql)), connection_to_use).first()[0]\n", "path": "db/columns/operations/select.py" } ]
[ { "content": "import warnings\n\nfrom sqlalchemy import Table, MetaData, and_, select, text, func\n\nfrom db.tables.operations.select import reflect_table_from_oid\nfrom db.utils import execute_statement\n\n\ndef get_column_index_from_name(table_oid, column_name, engine, connection_to_use=None):\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_attribute = Table(\"pg_attribute\", MetaData(), autoload_with=engine)\n sel = select(pg_attribute.c.attnum).where(\n and_(\n pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attname == column_name\n )\n )\n result = execute_statement(engine, sel, connection_to_use).fetchone()[0]\n\n # Account for dropped columns that don't appear in the SQLAlchemy tables\n sel = (\n select(func.count())\n .where(and_(\n pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attisdropped.is_(True),\n pg_attribute.c.attnum < result,\n ))\n )\n dropped_count = execute_statement(engine, sel, connection_to_use).fetchone()[0]\n\n return result - 1 - dropped_count\n\n\ndef get_column_default(table_oid, column_index, engine, connection_to_use=None):\n table = reflect_table_from_oid(table_oid, engine, connection_to_use)\n column = table.columns[column_index]\n if column.server_default is None:\n return None\n\n metadata = MetaData()\n with warnings.catch_warnings():\n warnings.filterwarnings(\"ignore\", message=\"Did not recognize type\")\n pg_attribute = Table(\"pg_attribute\", metadata, autoload_with=engine)\n pg_attrdef = Table(\"pg_attrdef\", metadata, autoload_with=engine)\n\n query = (\n select(pg_attrdef.c.adbin)\n .select_from(\n pg_attrdef\n .join(\n pg_attribute,\n and_(\n pg_attribute.c.attnum == pg_attrdef.c.adnum,\n pg_attribute.c.attrelid == pg_attrdef.c.adrelid\n )\n )\n )\n .where(and_(\n pg_attribute.c.attrelid == table_oid,\n pg_attribute.c.attname == column.name,\n pg_attribute.c.attnum >= 1,\n ))\n )\n\n result = execute_statement(engine, query, connection_to_use).first()[0]\n\n # Here, we get the 'adbin' value for the current column, stored in the attrdef\n # system table. The prefix of this value tells us whether the default is static\n # ('{CONSTANT') or generated ('{FUNCEXPR'). We do not return generated defaults.\n if result.startswith(\"{FUNCEXPR\"):\n return None\n\n default_textual_sql = column.server_default.arg.text\n # Defaults are stored as text with SQL casts appended\n # Ex: \"'test default string'::character varying\" or \"'2020-01-01'::date\"\n # Here, we execute the cast to get the proper python value\n return execute_statement(engine, select(text(default_textual_sql)), connection_to_use).first()[0]\n", "path": "db/columns/operations/select.py" } ]
diff --git a/db/columns/operations/select.py b/db/columns/operations/select.py index 8b60cc1d2d..800fc2fc88 100644 --- a/db/columns/operations/select.py +++ b/db/columns/operations/select.py @@ -22,6 +22,7 @@ def get_column_index_from_name(table_oid, column_name, engine, connection_to_use sel = ( select(func.count()) .where(and_( + pg_attribute.c.attrelid == table_oid, pg_attribute.c.attisdropped.is_(True), pg_attribute.c.attnum < result, )) diff --git a/db/tests/columns/operations/test_select.py b/db/tests/columns/operations/test_select.py index 2ac469ce1b..2b9aec8790 100644 --- a/db/tests/columns/operations/test_select.py +++ b/db/tests/columns/operations/test_select.py @@ -1,3 +1,5 @@ +from alembic.migration import MigrationContext +from alembic.operations import Operations import pytest from sqlalchemy import String, Integer, Column, Table, MetaData, Sequence, DateTime, func @@ -23,6 +25,63 @@ def test_get_column_index_from_name(engine_with_schema): assert get_column_index_from_name(table_oid, one_name, engine) == 1 +def test_get_column_index_from_name_after_delete(engine_with_schema): + engine, schema = engine_with_schema + table_name = "table_with_columns" + zero_name = "colzero" + one_name = "colone" + two_name = "coltwo" + table = Table( + table_name, + MetaData(bind=engine, schema=schema), + Column(zero_name, Integer), + Column(one_name, String), + Column(two_name, String), + ) + table.create() + with engine.begin() as conn: + op = Operations(MigrationContext.configure(conn)) + op.drop_column(table.name, one_name, schema=schema) + + table_oid = get_oid_from_table(table_name, schema, engine) + assert get_column_index_from_name(table_oid, zero_name, engine) == 0 + assert get_column_index_from_name(table_oid, two_name, engine) == 1 + + +def test_get_column_index_from_name_after_delete_two_tables(engine_with_schema): + engine, schema = engine_with_schema + table_name = "table_with_columns" + zero_name = "colzero" + one_name = "colone" + two_name = "coltwo" + + for suffix in ["1", "2"]: + table = Table( + table_name + suffix, + MetaData(bind=engine, schema=schema), + Column(zero_name + suffix, Integer), + Column(one_name + suffix, String), + Column(two_name + suffix, String), + ) + table.create() + + with engine.begin() as conn: + op = Operations(MigrationContext.configure(conn)) + op.drop_column(table_name + "1", one_name + "1", schema=schema) + + table_oid1 = get_oid_from_table(table_name + "1", schema, engine) + table_oid2 = get_oid_from_table(table_name + "2", schema, engine) + assert all( + [ + get_column_index_from_name(table_oid1, zero_name + "1", engine) == 0, + get_column_index_from_name(table_oid1, two_name + "1", engine) == 1, + get_column_index_from_name(table_oid2, zero_name + "2", engine) == 0, + get_column_index_from_name(table_oid2, one_name + "2", engine) == 1, + get_column_index_from_name(table_oid2, two_name + "2", engine) == 2, + ] + ) + + @pytest.mark.parametrize("filler", [True, False]) @pytest.mark.parametrize("col_type", column_test_dict.keys()) def test_get_column_default(engine_with_schema, filler, col_type):
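The one-line fix above adds an `attrelid` predicate to the query that counts dropped columns, and the new tests cover column deletion in one table and across two tables. Without that predicate, dropped columns from every table in the database were counted, so the computed index could come out too small or even negative, which is consistent with the IndexError reported above. A minimal sketch of the corrected count, with a hypothetical helper name (only the added `attrelid` filter is taken from the diff):

```python
# Sketch of the patched dropped-column count: restrict the count to the table
# identified by table_oid so other tables' dropped columns cannot shift the index.
from sqlalchemy import and_, func, select

def count_dropped_columns_before(pg_attribute, table_oid, attnum):
    # pg_attribute is the reflected pg_catalog.pg_attribute table
    return (
        select(func.count())
        .where(and_(
            pg_attribute.c.attrelid == table_oid,   # the filter added by the patch
            pg_attribute.c.attisdropped.is_(True),
            pg_attribute.c.attnum < attnum,
        ))
    )
```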
falconry__falcon-676
Request.client_accepts_msgpack only supports 'application/x-msgpack' The use of the 'x-' prefix is now discouraged for media types. We should update this Request property to also return True for 'application/msgpack', and verify the change with additional tests.
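A minimal sketch of the requested behavior, using a stand-in class rather than `falcon.Request` (the simplified `client_accepts` check is an assumption, not Falcon's mimeparse-based implementation); it mirrors the property change applied in `falcon/request.py` in the record below:

```python
# Stand-in demonstrating the proposed property: accept either the legacy
# 'application/x-msgpack' media type or the registered 'application/msgpack'.
class RequestSketch(object):
    def __init__(self, accept):
        self.accept = accept

    def client_accepts(self, media_type):
        # Simplified check; the real Request falls back to mimeparse.quality()
        return self.accept in ('*/*', media_type)

    @property
    def client_accepts_msgpack(self):
        return (self.client_accepts('application/x-msgpack')
                or self.client_accepts('application/msgpack'))

print(RequestSketch('application/msgpack').client_accepts_msgpack)    # True
print(RequestSketch('application/x-msgpack').client_accepts_msgpack)  # True
print(RequestSketch('text/html').client_accepts_msgpack)              # False
```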
[ { "content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom datetime import datetime\n\ntry:\n # NOTE(kgrifs): In Python 2.6 and 2.7, socket._fileobject is a\n # standard way of exposing a socket as a file-like object, and\n # is used by wsgiref for wsgi.input.\n import socket\n NativeStream = socket._fileobject # pylint: disable=E1101\nexcept AttributeError: # pragma nocover\n # NOTE(kgriffs): In Python 3.3, wsgiref implements wsgi.input\n # using _io.BufferedReader which is an alias of io.BufferedReader\n import io\n NativeStream = io.BufferedReader\n\nimport mimeparse\nimport six\n\nfrom falcon.errors import * # NOQA\nfrom falcon import util\nfrom falcon.util.uri import parse_query_string, parse_host, unquote_string\nfrom falcon import request_helpers as helpers\n\n# NOTE(tbug): In some cases, http_cookies is not a module\n# but a dict-like structure. This fixes that issue.\n# See issue https://github.com/falconry/falcon/issues/556\nfrom six.moves import http_cookies\nSimpleCookie = http_cookies.SimpleCookie\n\n\nDEFAULT_ERROR_LOG_FORMAT = (u'{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR]'\n u' {1} {2}{3} => ')\n\nTRUE_STRINGS = ('true', 'True', 'yes')\nFALSE_STRINGS = ('false', 'False', 'no')\nWSGI_CONTENT_HEADERS = ('CONTENT_TYPE', 'CONTENT_LENGTH')\n\n\n_maybe_wrap_wsgi_stream = True\n\n\n# PERF(kgriffs): Avoid an extra namespace lookup when using these functions\nstrptime = datetime.strptime\nnow = datetime.now\n\n\nclass Request(object):\n \"\"\"Represents a client's HTTP request.\n\n Note:\n `Request` is not meant to be instantiated directly by responders.\n\n Args:\n env (dict): A WSGI environment dict passed in from the server. See\n also PEP-3333.\n options (dict): Set of global options passed from the API handler.\n\n Attributes:\n protocol (str): Either 'http' or 'https'.\n method (str): HTTP method requested (e.g., 'GET', 'POST', etc.)\n host (str): Hostname requested by the client\n subdomain (str): Leftmost (i.e., most specific) subdomain from the\n hostname. If only a single domain name is given, `subdomain`\n will be ``None``.\n\n Note:\n If the hostname in the request is an IP address, the value\n for `subdomain` is undefined.\n\n env (dict): Reference to the WSGI environ ``dict`` passed in from the\n server. See also PEP-3333.\n app (str): Name of the WSGI app (if using WSGI's notion of virtual\n hosting).\n access_route(list): IP address of the original client, as well\n as any known addresses of proxies fronting the WSGI server.\n\n The following request headers are checked, in order of\n preference, to determine the addresses:\n\n - ``Forwarded``\n - ``X-Forwarded-For``\n - ``X-Real-IP``\n\n If none of these headers are available, the value of\n :py:attr:`~.remote_addr` is used instead.\n\n Note:\n Per `RFC 7239`_, the access route may contain \"unknown\"\n and obfuscated identifiers, in addition to IPv4 and\n IPv6 addresses\n\n .. _RFC 7239: https://tools.ietf.org/html/rfc7239\n\n Warning:\n Headers can be forged by any client or proxy. 
Use this\n property with caution and validate all values before\n using them. Do not rely on the access route to authorize\n requests.\n\n remote_addr(str): IP address of the closest client or proxy to\n the WSGI server.\n\n This property is determined by the value of ``REMOTE_ADDR``\n in the WSGI environment dict. Since this address is not\n derived from an HTTP header, clients and proxies can not\n forge it.\n\n Note:\n If your application is behind one or more reverse\n proxies, you can use :py:attr:`~.access_route`\n to retrieve the real IP address of the client.\n\n context (dict): Dictionary to hold any data about the request which is\n specific to your app (e.g. session object). Falcon itself will\n not interact with this attribute after it has been initialized.\n context_type (class): Class variable that determines the\n factory or type to use for initializing the\n `context` attribute. By default, the framework will\n instantiate standard\n ``dict`` objects. However, You may override this behavior\n by creating a custom child class of ``falcon.Request``, and\n then passing that new class to `falcon.API()` by way of the\n latter's `request_type` parameter.\n\n Note:\n When overriding `context_type` with a factory function (as\n opposed to a class), the function is called like a method of\n the current Request instance. Therefore the first argument is\n the Request instance itself (self).\n\n uri (str): The fully-qualified URI for the request.\n url (str): alias for `uri`.\n relative_uri (str): The path + query string portion of the full URI.\n path (str): Path portion of the request URL (not including query\n string).\n query_string (str): Query string portion of the request URL, without\n the preceding '?' character.\n user_agent (str): Value of the User-Agent header, or ``None`` if the\n header is missing.\n accept (str): Value of the Accept header, or '*/*' if the header is\n missing.\n auth (str): Value of the Authorization header, or ``None`` if the\n header is missing.\n client_accepts_json (bool): ``True`` if the Accept header indicates\n that the client is willing to receive JSON, otherwise ``False``.\n client_accepts_msgpack (bool): ``True`` if the Accept header indicates\n that the client is willing to receive MessagePack, otherwise\n ``False``.\n client_accepts_xml (bool): ``True`` if the Accept header indicates that\n the client is willing to receive XML, otherwise ``False``.\n content_type (str): Value of the Content-Type header, or ``None`` if\n the header is missing.\n content_length (int): Value of the Content-Length header converted\n to an ``int``, or ``None`` if the header is missing.\n stream: File-like object for reading the body of the request, if any.\n\n Note:\n If an HTML form is POSTed to the API using the\n *application/x-www-form-urlencoded* media type, Falcon\n will consume `stream` in order to parse the parameters\n and merge them into the query string parameters. In this\n case, the stream will be left at EOF.\n\n Note also that the character encoding for fields, before\n percent-encoding non-ASCII bytes, is assumed to be\n UTF-8. The special `_charset_` field is ignored if present.\n\n Falcon expects form-encoded request bodies to be\n encoded according to the standard W3C algorithm (see\n also http://goo.gl/6rlcux).\n\n date (datetime): Value of the Date header, converted to a\n ``datetime`` instance. 
The header value is assumed to\n conform to RFC 1123.\n expect (str): Value of the Expect header, or ``None`` if the\n header is missing.\n range (tuple of int): A 2-member ``tuple`` parsed from the value of the\n Range header.\n\n The two members correspond to the first and last byte\n positions of the requested resource, inclusive. Negative\n indices indicate offset from the end of the resource,\n where -1 is the last byte, -2 is the second-to-last byte,\n and so forth.\n\n Only continous ranges are supported (e.g., \"bytes=0-0,-1\" would\n result in an HTTPBadRequest exception when the attribute is\n accessed.)\n range_unit (str): Unit of the range parsed from the value of the\n Range header, or ``None`` if the header is missing\n if_match (str): Value of the If-Match header, or ``None`` if the\n header is missing.\n if_none_match (str): Value of the If-None-Match header, or ``None``\n if the header is missing.\n if_modified_since (datetime): Value of the If-Modified-Since header,\n or ``None`` if the header is missing.\n if_unmodified_since (datetime): Value of the If-Unmodified-Since\n header, or ``None`` if the header is missing.\n if_range (str): Value of the If-Range header, or ``None`` if the\n header is missing.\n\n headers (dict): Raw HTTP headers from the request with\n canonical dash-separated names. Parsing all the headers\n to create this dict is done the first time this attribute\n is accessed. This parsing can be costly, so unless you\n need all the headers in this format, you should use the\n `get_header` method or one of the convenience attributes\n instead, to get a value for a specific header.\n\n params (dict): The mapping of request query parameter names to their\n values. Where the parameter appears multiple times in the query\n string, the value mapped to that parameter key will be a list of\n all the values in the order seen.\n\n options (dict): Set of global options passed from the API handler.\n\n cookies (dict):\n A dict of name/value cookie pairs.\n See also: :ref:`Getting Cookies <getting-cookies>`\n\n \"\"\"\n\n __slots__ = (\n '_cached_headers',\n '_cached_uri',\n '_cached_relative_uri',\n 'content_type',\n 'env',\n 'method',\n '_params',\n 'path',\n 'query_string',\n 'stream',\n 'context',\n '_wsgierrors',\n 'options',\n '_cookies',\n '_cached_access_route',\n )\n\n # Allow child classes to override this\n context_type = None\n\n def __init__(self, env, options=None):\n global _maybe_wrap_wsgi_stream\n\n self.env = env\n self.options = options if options else RequestOptions()\n\n self._wsgierrors = env['wsgi.errors']\n self.stream = env['wsgi.input']\n self.method = env['REQUEST_METHOD']\n\n # Normalize path\n path = env['PATH_INFO']\n if path:\n if six.PY3: # pragma: no cover\n # PEP 3333 specifies that PATH_INFO variable are always\n # \"bytes tunneled as latin-1\" and must be encoded back\n path = path.encode('latin1').decode('utf-8', 'replace')\n\n if len(path) != 1 and path.endswith('/'):\n self.path = path[:-1]\n else:\n self.path = path\n else:\n self.path = '/'\n\n # PERF(kgriffs): if...in is faster than using env.get(...)\n if 'QUERY_STRING' in env:\n self.query_string = env['QUERY_STRING']\n\n if self.query_string:\n self._params = parse_query_string(\n self.query_string,\n keep_blank_qs_values=self.options.keep_blank_qs_values,\n )\n\n else:\n self._params = {}\n\n else:\n self.query_string = ''\n self._params = {}\n\n self._cookies = None\n\n self._cached_headers = None\n self._cached_uri = None\n self._cached_relative_uri = None\n 
self._cached_access_route = None\n\n try:\n self.content_type = self.env['CONTENT_TYPE']\n except KeyError:\n self.content_type = None\n\n # NOTE(kgriffs): Wrap wsgi.input if needed to make read() more robust,\n # normalizing semantics between, e.g., gunicorn and wsgiref.\n if _maybe_wrap_wsgi_stream:\n if isinstance(self.stream, NativeStream):\n # NOTE(kgriffs): This is covered by tests, it's just that\n # coverage can't figure this out for some reason (TBD).\n self._wrap_stream() # pragma nocover\n else:\n # PERF(kgriffs): If self.stream does not need to be wrapped\n # this time, it never needs to be wrapped since the server\n # will continue using the same type for wsgi.input.\n _maybe_wrap_wsgi_stream = False\n\n # PERF(kgriffs): Technically, we should spend a few more\n # cycles and parse the content type for real, but\n # this heuristic will work virtually all the time.\n if (self.content_type is not None and\n 'application/x-www-form-urlencoded' in self.content_type):\n self._parse_form_urlencoded()\n\n if self.context_type is None:\n # Literal syntax is more efficient than using dict()\n self.context = {}\n else:\n # pylint will detect this as not-callable because it only sees the\n # declaration of None, not whatever type a subclass may have set.\n self.context = self.context_type() # pylint: disable=not-callable\n\n # ------------------------------------------------------------------------\n # Properties\n # ------------------------------------------------------------------------\n\n user_agent = helpers.header_property('HTTP_USER_AGENT')\n auth = helpers.header_property('HTTP_AUTHORIZATION')\n\n expect = helpers.header_property('HTTP_EXPECT')\n\n if_match = helpers.header_property('HTTP_IF_MATCH')\n if_none_match = helpers.header_property('HTTP_IF_NONE_MATCH')\n if_range = helpers.header_property('HTTP_IF_RANGE')\n\n @property\n def client_accepts_json(self):\n return self.client_accepts('application/json')\n\n @property\n def client_accepts_msgpack(self):\n return self.client_accepts('application/x-msgpack')\n\n @property\n def client_accepts_xml(self):\n return self.client_accepts('application/xml')\n\n @property\n def accept(self):\n # NOTE(kgriffs): Per RFC, a missing accept header is\n # equivalent to '*/*'\n try:\n return self.env['HTTP_ACCEPT'] or '*/*'\n except KeyError:\n return '*/*'\n\n @property\n def content_length(self):\n try:\n value = self.env['CONTENT_LENGTH']\n except KeyError:\n return None\n\n # NOTE(kgriffs): Normalize an empty value to behave as if\n # the header were not included; wsgiref, at least, inserts\n # an empty CONTENT_LENGTH value if the request does not\n # set the header. 
Gunicorn and uWSGI do not do this, but\n # others might if they are trying to match wsgiref's\n # behavior too closely.\n if not value:\n return None\n\n try:\n value_as_int = int(value)\n except ValueError:\n msg = 'The value of the header must be a number.'\n raise HTTPInvalidHeader(msg, 'Content-Length')\n\n if value_as_int < 0:\n msg = 'The value of the header must be a positive number.'\n raise HTTPInvalidHeader(msg, 'Content-Length')\n\n return value_as_int\n\n @property\n def date(self):\n return self.get_header_as_datetime('Date')\n\n @property\n def if_modified_since(self):\n return self.get_header_as_datetime('If-Modified-Since')\n\n @property\n def if_unmodified_since(self):\n return self.get_header_as_datetime('If-Unmodified-Since')\n\n @property\n def range(self):\n try:\n value = self.env['HTTP_RANGE']\n if '=' in value:\n unit, sep, req_range = value.partition('=')\n else:\n msg = \"The value must be prefixed with a range unit, e.g. 'bytes='\"\n raise HTTPInvalidHeader(msg, 'Range')\n except KeyError:\n return None\n\n if ',' in req_range:\n msg = 'The value must be a continuous range.'\n raise HTTPInvalidHeader(msg, 'Range')\n\n try:\n first, sep, last = req_range.partition('-')\n\n if not sep:\n raise ValueError()\n\n if first:\n return (int(first), int(last or -1))\n elif last:\n return (-int(last), -1)\n else:\n msg = 'The range offsets are missing.'\n raise HTTPInvalidHeader(msg, 'Range')\n\n except ValueError:\n href = 'http://goo.gl/zZ6Ey'\n href_text = 'HTTP/1.1 Range Requests'\n msg = ('It must be a range formatted according to RFC 7233.')\n raise HTTPInvalidHeader(msg, 'Range', href=href,\n href_text=href_text)\n\n @property\n def range_unit(self):\n try:\n value = self.env['HTTP_RANGE']\n\n if '=' in value:\n unit, sep, req_range = value.partition('=')\n return unit\n else:\n msg = \"The value must be prefixed with a range unit, e.g. 'bytes='\"\n raise HTTPInvalidHeader(msg, 'Range')\n except KeyError:\n return None\n\n @property\n def app(self):\n return self.env.get('SCRIPT_NAME', '')\n\n @property\n def protocol(self):\n return self.env['wsgi.url_scheme']\n\n @property\n def uri(self):\n if self._cached_uri is None:\n env = self.env\n protocol = env['wsgi.url_scheme']\n\n # NOTE(kgriffs): According to PEP-3333 we should first\n # try to use the Host header if present.\n #\n # PERF(kgriffs): try..except is faster than .get\n try:\n host = env['HTTP_HOST']\n except KeyError:\n host = env['SERVER_NAME']\n port = env['SERVER_PORT']\n\n if protocol == 'https':\n if port != '443':\n host += ':' + port\n else:\n if port != '80':\n host += ':' + port\n\n # PERF: For small numbers of items, '+' is faster\n # than ''.join(...). Concatenation is also generally\n # faster than formatting.\n value = (protocol + '://' +\n host +\n self.app +\n self.path)\n\n if self.query_string:\n value = value + '?' 
+ self.query_string\n\n self._cached_uri = value\n\n return self._cached_uri\n\n url = uri\n\n @property\n def host(self):\n try:\n # NOTE(kgriffs): Prefer the host header; the web server\n # isn't supposed to mess with it, so it should be what\n # the client actually sent.\n host_header = self.env['HTTP_HOST']\n host, port = parse_host(host_header)\n except KeyError:\n # PERF(kgriffs): According to PEP-3333, this header\n # will always be present.\n host = self.env['SERVER_NAME']\n\n return host\n\n @property\n def subdomain(self):\n # PERF(kgriffs): .partition is slightly faster than .split\n subdomain, sep, remainder = self.host.partition('.')\n return subdomain if sep else None\n\n @property\n def relative_uri(self):\n if self._cached_relative_uri is None:\n if self.query_string:\n self._cached_relative_uri = (self.app + self.path + '?' +\n self.query_string)\n else:\n self._cached_relative_uri = self.app + self.path\n\n return self._cached_relative_uri\n\n @property\n def headers(self):\n # NOTE(kgriffs: First time here will cache the dict so all we\n # have to do is clone it in the future.\n if self._cached_headers is None:\n headers = self._cached_headers = {}\n\n env = self.env\n for name, value in env.items():\n if name.startswith('HTTP_'):\n # NOTE(kgriffs): Don't take the time to fix the case\n # since headers are supposed to be case-insensitive\n # anyway.\n headers[name[5:].replace('_', '-')] = value\n\n elif name in WSGI_CONTENT_HEADERS:\n headers[name.replace('_', '-')] = value\n\n return self._cached_headers.copy()\n\n @property\n def params(self):\n return self._params\n\n @property\n def cookies(self):\n if self._cookies is None:\n # NOTE(tbug): We might want to look into parsing\n # cookies ourselves. The SimpleCookie is doing a\n # lot if stuff only required to SEND cookies.\n parser = SimpleCookie(self.get_header(\"Cookie\"))\n cookies = {}\n for morsel in parser.values():\n cookies[morsel.key] = morsel.value\n\n self._cookies = cookies\n\n return self._cookies.copy()\n\n @property\n def access_route(self):\n if self._cached_access_route is None:\n access_route = []\n if 'HTTP_FORWARDED' in self.env:\n access_route = self._parse_rfc_forwarded()\n if not access_route and 'HTTP_X_FORWARDED_FOR' in self.env:\n access_route = [ip.strip() for ip in\n self.env['HTTP_X_FORWARDED_FOR'].split(',')]\n if not access_route and 'HTTP_X_REAL_IP' in self.env:\n access_route = [self.env['HTTP_X_REAL_IP']]\n if not access_route and 'REMOTE_ADDR' in self.env:\n access_route = [self.env['REMOTE_ADDR']]\n self._cached_access_route = access_route\n\n return self._cached_access_route\n\n @property\n def remote_addr(self):\n return self.env.get('REMOTE_ADDR')\n\n # ------------------------------------------------------------------------\n # Methods\n # ------------------------------------------------------------------------\n\n def client_accepts(self, media_type):\n \"\"\"Determines whether or not the client accepts a given media type.\n\n Args:\n media_type (str): An Internet media type to check.\n\n Returns:\n bool: ``True`` if the client has indicated in the Accept header\n that it accepts the specified media type. 
Otherwise, returns\n ``False``.\n \"\"\"\n\n accept = self.accept\n\n # PERF(kgriffs): Usually the following will be true, so\n # try it first.\n if (accept == media_type) or (accept == '*/*'):\n return True\n\n # Fall back to full-blown parsing\n try:\n return mimeparse.quality(media_type, accept) != 0.0\n except ValueError:\n return False\n\n def client_prefers(self, media_types):\n \"\"\"Returns the client's preferred media type, given several choices.\n\n Args:\n media_types (iterable of str): One or more Internet media types\n from which to choose the client's preferred type. This value\n **must** be an iterable collection of strings.\n\n Returns:\n str: The client's preferred media type, based on the Accept\n header. Returns ``None`` if the client does not accept any\n of the given types.\n \"\"\"\n\n try:\n # NOTE(kgriffs): best_match will return '' if no match is found\n preferred_type = mimeparse.best_match(media_types, self.accept)\n except ValueError:\n # Value for the accept header was not formatted correctly\n preferred_type = ''\n\n return (preferred_type if preferred_type else None)\n\n def get_header(self, name, required=False):\n \"\"\"Retrieve the raw string value for the given header.\n\n Args:\n name (str): Header name, case-insensitive (e.g., 'Content-Type')\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning gracefully when the\n header is not found (default ``False``).\n\n Returns:\n str: The value of the specified header if it exists, or ``None`` if\n the header is not found and is not required.\n\n Raises:\n HTTPBadRequest: The header was not found in the request, but\n it was required.\n\n \"\"\"\n\n wsgi_name = name.upper().replace('-', '_')\n\n # Use try..except to optimize for the header existing in most cases\n try:\n # Don't take the time to cache beforehand, using HTTP naming.\n # This will be faster, assuming that most headers are looked\n # up only once, and not all headers will be requested.\n return self.env['HTTP_' + wsgi_name]\n\n except KeyError:\n # NOTE(kgriffs): There are a couple headers that do not\n # use the HTTP prefix in the env, so try those. 
We expect\n # people to usually just use the relevant helper properties\n # to access these instead of .get_header.\n if wsgi_name in WSGI_CONTENT_HEADERS:\n try:\n return self.env[wsgi_name]\n except KeyError:\n pass\n\n if not required:\n return None\n\n raise HTTPMissingHeader(name)\n\n def get_header_as_datetime(self, header, required=False, obs_date=False):\n \"\"\"Return an HTTP header with HTTP-Date values as a datetime.\n\n Args:\n name (str): Header name, case-insensitive (e.g., 'Date')\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning gracefully when the\n header is not found (default ``False``).\n obs_date (bool, optional): Support obs-date formats according to\n RFC 7231, e.g.: \"Sunday, 06-Nov-94 08:49:37 GMT\"\n (default ``False``).\n\n Returns:\n datetime: The value of the specified header if it exists,\n or ``None`` if the header is not found and is not required.\n\n Raises:\n HTTPBadRequest: The header was not found in the request, but\n it was required.\n HttpInvalidHeader: The header contained a malformed/invalid value.\n \"\"\"\n\n try:\n http_date = self.get_header(header, required=required)\n return util.http_date_to_dt(http_date, obs_date=obs_date)\n except TypeError:\n # When the header does not exist and isn't required\n return None\n except ValueError:\n msg = ('It must be formatted according to RFC 7231, '\n 'Section 7.1.1.1')\n raise HTTPInvalidHeader(msg, header)\n\n def get_param(self, name, required=False, store=None, default=None):\n \"\"\"Return the raw value of a query string parameter as a string.\n\n Note:\n If an HTML form is POSTed to the API using the\n *application/x-www-form-urlencoded* media type, the\n parameters from the request body will be merged into\n the query string parameters.\n\n If a key appears more than once in the form data, one of the\n values will be returned as a string, but it is undefined which\n one. Use `req.get_param_as_list()` to retrieve all the values.\n\n Note:\n Similar to the way multiple keys in form data is handled,\n if a query parameter is assigned a comma-separated list of\n values (e.g., 'foo=a,b,c'), only one of those values will be\n returned, and it is undefined which one. Use\n `req.get_param_as_list()` to retrieve all the values.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'sort').\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is present.\n default (any, optional): If the param is not found returns the\n given value instead of None\n\n Returns:\n str: The value of the param as a string, or ``None`` if param is\n not found and is not required.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n # NOTE(warsaw): If the key appeared multiple times, it will be\n # stored internally as a list. 
We do not define which one\n # actually gets returned, but let's pick the last one for grins.\n param = params[name]\n if isinstance(param, list):\n param = param[-1]\n\n if store is not None:\n store[name] = param\n\n return param\n\n if not required:\n return default\n\n raise HTTPMissingParam(name)\n\n def get_param_as_int(self, name,\n required=False, min=None, max=None, store=None):\n \"\"\"Return the value of a query string parameter as an int.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'limit').\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found or is not an integer (default\n ``False``).\n min (int, optional): Set to the minimum value allowed for this\n param. If the param is found and it is less than min, an\n ``HTTPError`` is raised.\n max (int, optional): Set to the maximum value allowed for this\n param. If the param is found and its value is greater than\n max, an ``HTTPError`` is raised.\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found\n (default ``None``).\n\n Returns:\n int: The value of the param if it is found and can be converted to\n an integer. If the param is not found, returns ``None``, unless\n `required` is ``True``.\n\n Raises\n HTTPBadRequest: The param was not found in the request, even though\n it was required to be there. Also raised if the param's value\n falls outside the given interval, i.e., the value must be in\n the interval: min <= value <= max to avoid triggering an error.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n val = params[name]\n if isinstance(val, list):\n val = val[-1]\n\n try:\n val = int(val)\n except ValueError:\n msg = 'The value must be an integer.'\n raise HTTPInvalidParam(msg, name)\n\n if min is not None and val < min:\n msg = 'The value must be at least ' + str(min)\n raise HTTPInvalidParam(msg, name)\n\n if max is not None and max < val:\n msg = 'The value may not exceed ' + str(max)\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = val\n\n return val\n\n if not required:\n return None\n\n raise HTTPMissingParam(name)\n\n def get_param_as_bool(self, name, required=False, store=None,\n blank_as_true=False):\n \"\"\"Return the value of a query string parameter as a boolean\n\n The following boolean strings are supported::\n\n TRUE_STRINGS = ('true', 'True', 'yes')\n FALSE_STRINGS = ('false', 'False', 'no')\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'detailed').\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found or is not a recognized boolean\n string (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n blank_as_true (bool): If ``True``, an empty string value will be\n treated as ``True``. Normally empty strings are ignored; if\n you would like to recognize such parameters, you must set the\n `keep_blank_qs_values` request option to ``True``. Request\n options are set globally for each instance of ``falcon.API``\n through the `req_options` attribute.\n\n Returns:\n bool: The value of the param if it is found and can be converted\n to a ``bool``. 
If the param is not found, returns ``None``\n unless required is ``True``.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n val = params[name]\n if isinstance(val, list):\n val = val[-1]\n\n if val in TRUE_STRINGS:\n val = True\n elif val in FALSE_STRINGS:\n val = False\n elif blank_as_true and not val:\n val = True\n else:\n msg = 'The value of the parameter must be \"true\" or \"false\".'\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = val\n\n return val\n\n if not required:\n return None\n\n raise HTTPMissingParam(name)\n\n def get_param_as_list(self, name,\n transform=None, required=False, store=None):\n \"\"\"Return the value of a query string parameter as a list.\n\n List items must be comma-separated or must be provided\n as multiple instances of the same param in the query string\n ala *application/x-www-form-urlencoded*.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'ids').\n transform (callable, optional): An optional transform function\n that takes as input each element in the list as a ``str`` and\n outputs a transformed element for inclusion in the list that\n will be returned. For example, passing ``int`` will\n transform list items into numbers.\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n\n Returns:\n list: The value of the param if it is found. Otherwise, returns\n ``None`` unless required is True. Empty list elements will be\n discarded. 
For example, the following query strings would\n both result in `['1', '3']`::\n\n things=1,,3\n things=1&things=&things=3\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n HTTPInvalidParam: A transform function raised an instance of\n ``ValueError``.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n items = params[name]\n\n # NOTE(warsaw): When a key appears multiple times in the request\n # query, it will already be represented internally as a list.\n # NOTE(kgriffs): Likewise for comma-delimited values.\n if not isinstance(items, list):\n items = [items]\n\n # PERF(kgriffs): Use if-else rather than a DRY approach\n # that sets transform to a passthrough function; avoids\n # function calling overhead.\n if transform is not None:\n try:\n items = [transform(i) for i in items]\n\n except ValueError:\n msg = 'The value is not formatted correctly.'\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = items\n\n return items\n\n if not required:\n return None\n\n raise HTTPMissingParam(name)\n\n def get_param_as_date(self, name, format_string='%Y-%m-%d',\n required=False, store=None):\n \"\"\"Return the value of a query string parameter as a date.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'ids').\n format_string (str): String used to parse the param value into a\n date.\n Any format recognized by strptime() is supported.\n (default ``\"%Y-%m-%d\"``)\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n Returns:\n datetime.date: The value of the param if it is found and can be\n converted to a ``date`` according to the supplied format\n string. If the param is not found, returns ``None`` unless\n required is ``True``.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n HTTPInvalidParam: A transform function raised an instance of\n ``ValueError``.\n \"\"\"\n\n param_value = self.get_param(name, required=required)\n\n if param_value is None:\n return None\n\n try:\n date = strptime(param_value, format_string).date()\n except ValueError:\n msg = \"The date value does not match the required format\"\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = date\n\n return date\n\n # TODO(kgriffs): Use the nocover pragma only for the six.PY3 if..else\n def log_error(self, message): # pragma: no cover\n \"\"\"Write an error message to the server's log.\n\n Prepends timestamp and request info to message, and writes the\n result out to the WSGI server's error stream (`wsgi.error`).\n\n Args:\n message (str or unicode): Description of the problem. On Python 2,\n instances of ``unicode`` will be converted to UTF-8.\n\n \"\"\"\n\n if self.query_string:\n query_string_formatted = '?' 
+ self.query_string\n else:\n query_string_formatted = ''\n\n log_line = (\n DEFAULT_ERROR_LOG_FORMAT.\n format(now(), self.method, self.path, query_string_formatted)\n )\n\n if six.PY3:\n self._wsgierrors.write(log_line + message + '\\n')\n else:\n if isinstance(message, unicode): # pylint: disable=E0602\n message = message.encode('utf-8')\n\n self._wsgierrors.write(log_line.encode('utf-8'))\n self._wsgierrors.write(message + '\\n')\n\n # ------------------------------------------------------------------------\n # Helpers\n # ------------------------------------------------------------------------\n\n def _wrap_stream(self): # pragma nocover\n try:\n content_length = self.content_length or 0\n\n except HTTPInvalidHeader:\n # NOTE(kgriffs): The content-length header was specified,\n # but it had an invalid value. Assume no content.\n content_length = 0\n\n self.stream = helpers.Body(self.stream, content_length)\n\n def _parse_form_urlencoded(self):\n # NOTE(kgriffs): This assumes self.stream has been patched\n # above in the case of wsgiref, so that self.content_length\n # is not needed. Normally we just avoid accessing\n # self.content_length, because it is a little expensive\n # to call. We could cache self.content_length, but the\n # overhead to do that won't usually be helpful, since\n # content length will only ever be read once per\n # request in most cases.\n body = self.stream.read()\n\n # NOTE(kgriffs): According to http://goo.gl/6rlcux the\n # body should be US-ASCII. Enforcing this also helps\n # catch malicious input.\n try:\n body = body.decode('ascii')\n except UnicodeDecodeError:\n body = None\n self.log_error('Non-ASCII characters found in form body '\n 'with Content-Type of '\n 'application/x-www-form-urlencoded. Body '\n 'will be ignored.')\n\n if body:\n extra_params = parse_query_string(\n body,\n keep_blank_qs_values=self.options.keep_blank_qs_values,\n )\n\n self._params.update(extra_params)\n\n def _parse_rfc_forwarded(self):\n \"\"\"Parse RFC 7239 \"Forwarded\" header.\n\n Returns:\n list: addresses derived from \"for\" parameters.\n \"\"\"\n addr = []\n for forwarded in self.env['HTTP_FORWARDED'].split(','):\n for param in forwarded.split(';'):\n param = param.strip().split('=', 1)\n if len(param) == 1:\n continue\n key, val = param\n if key.lower() != 'for':\n # we only want for params\n continue\n host, _ = parse_host(unquote_string(val))\n addr.append(host)\n return addr\n\n\n# PERF: To avoid typos and improve storage space and speed over a dict.\nclass RequestOptions(object):\n \"\"\"This class is a container for ``Request`` options.\n\n Attributes:\n keep_blank_qs_values (bool): Set to ``True`` in order to retain\n blank values in query string parameters (default ``False``).\n\n \"\"\"\n __slots__ = (\n 'keep_blank_qs_values',\n )\n\n def __init__(self):\n self.keep_blank_qs_values = False\n", "path": "falcon/request.py" } ]
[ { "content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom datetime import datetime\n\ntry:\n # NOTE(kgrifs): In Python 2.6 and 2.7, socket._fileobject is a\n # standard way of exposing a socket as a file-like object, and\n # is used by wsgiref for wsgi.input.\n import socket\n NativeStream = socket._fileobject # pylint: disable=E1101\nexcept AttributeError: # pragma nocover\n # NOTE(kgriffs): In Python 3.3, wsgiref implements wsgi.input\n # using _io.BufferedReader which is an alias of io.BufferedReader\n import io\n NativeStream = io.BufferedReader\n\nimport mimeparse\nimport six\n\nfrom falcon.errors import * # NOQA\nfrom falcon import util\nfrom falcon.util.uri import parse_query_string, parse_host, unquote_string\nfrom falcon import request_helpers as helpers\n\n# NOTE(tbug): In some cases, http_cookies is not a module\n# but a dict-like structure. This fixes that issue.\n# See issue https://github.com/falconry/falcon/issues/556\nfrom six.moves import http_cookies\nSimpleCookie = http_cookies.SimpleCookie\n\n\nDEFAULT_ERROR_LOG_FORMAT = (u'{0:%Y-%m-%d %H:%M:%S} [FALCON] [ERROR]'\n u' {1} {2}{3} => ')\n\nTRUE_STRINGS = ('true', 'True', 'yes')\nFALSE_STRINGS = ('false', 'False', 'no')\nWSGI_CONTENT_HEADERS = ('CONTENT_TYPE', 'CONTENT_LENGTH')\n\n\n_maybe_wrap_wsgi_stream = True\n\n\n# PERF(kgriffs): Avoid an extra namespace lookup when using these functions\nstrptime = datetime.strptime\nnow = datetime.now\n\n\nclass Request(object):\n \"\"\"Represents a client's HTTP request.\n\n Note:\n `Request` is not meant to be instantiated directly by responders.\n\n Args:\n env (dict): A WSGI environment dict passed in from the server. See\n also PEP-3333.\n options (dict): Set of global options passed from the API handler.\n\n Attributes:\n protocol (str): Either 'http' or 'https'.\n method (str): HTTP method requested (e.g., 'GET', 'POST', etc.)\n host (str): Hostname requested by the client\n subdomain (str): Leftmost (i.e., most specific) subdomain from the\n hostname. If only a single domain name is given, `subdomain`\n will be ``None``.\n\n Note:\n If the hostname in the request is an IP address, the value\n for `subdomain` is undefined.\n\n env (dict): Reference to the WSGI environ ``dict`` passed in from the\n server. See also PEP-3333.\n app (str): Name of the WSGI app (if using WSGI's notion of virtual\n hosting).\n access_route(list): IP address of the original client, as well\n as any known addresses of proxies fronting the WSGI server.\n\n The following request headers are checked, in order of\n preference, to determine the addresses:\n\n - ``Forwarded``\n - ``X-Forwarded-For``\n - ``X-Real-IP``\n\n If none of these headers are available, the value of\n :py:attr:`~.remote_addr` is used instead.\n\n Note:\n Per `RFC 7239`_, the access route may contain \"unknown\"\n and obfuscated identifiers, in addition to IPv4 and\n IPv6 addresses\n\n .. _RFC 7239: https://tools.ietf.org/html/rfc7239\n\n Warning:\n Headers can be forged by any client or proxy. 
Use this\n property with caution and validate all values before\n using them. Do not rely on the access route to authorize\n requests.\n\n remote_addr(str): IP address of the closest client or proxy to\n the WSGI server.\n\n This property is determined by the value of ``REMOTE_ADDR``\n in the WSGI environment dict. Since this address is not\n derived from an HTTP header, clients and proxies can not\n forge it.\n\n Note:\n If your application is behind one or more reverse\n proxies, you can use :py:attr:`~.access_route`\n to retrieve the real IP address of the client.\n\n context (dict): Dictionary to hold any data about the request which is\n specific to your app (e.g. session object). Falcon itself will\n not interact with this attribute after it has been initialized.\n context_type (class): Class variable that determines the\n factory or type to use for initializing the\n `context` attribute. By default, the framework will\n instantiate standard\n ``dict`` objects. However, You may override this behavior\n by creating a custom child class of ``falcon.Request``, and\n then passing that new class to `falcon.API()` by way of the\n latter's `request_type` parameter.\n\n Note:\n When overriding `context_type` with a factory function (as\n opposed to a class), the function is called like a method of\n the current Request instance. Therefore the first argument is\n the Request instance itself (self).\n\n uri (str): The fully-qualified URI for the request.\n url (str): alias for `uri`.\n relative_uri (str): The path + query string portion of the full URI.\n path (str): Path portion of the request URL (not including query\n string).\n query_string (str): Query string portion of the request URL, without\n the preceding '?' character.\n user_agent (str): Value of the User-Agent header, or ``None`` if the\n header is missing.\n accept (str): Value of the Accept header, or '*/*' if the header is\n missing.\n auth (str): Value of the Authorization header, or ``None`` if the\n header is missing.\n client_accepts_json (bool): ``True`` if the Accept header indicates\n that the client is willing to receive JSON, otherwise ``False``.\n client_accepts_msgpack (bool): ``True`` if the Accept header indicates\n that the client is willing to receive MessagePack, otherwise\n ``False``.\n client_accepts_xml (bool): ``True`` if the Accept header indicates that\n the client is willing to receive XML, otherwise ``False``.\n content_type (str): Value of the Content-Type header, or ``None`` if\n the header is missing.\n content_length (int): Value of the Content-Length header converted\n to an ``int``, or ``None`` if the header is missing.\n stream: File-like object for reading the body of the request, if any.\n\n Note:\n If an HTML form is POSTed to the API using the\n *application/x-www-form-urlencoded* media type, Falcon\n will consume `stream` in order to parse the parameters\n and merge them into the query string parameters. In this\n case, the stream will be left at EOF.\n\n Note also that the character encoding for fields, before\n percent-encoding non-ASCII bytes, is assumed to be\n UTF-8. The special `_charset_` field is ignored if present.\n\n Falcon expects form-encoded request bodies to be\n encoded according to the standard W3C algorithm (see\n also http://goo.gl/6rlcux).\n\n date (datetime): Value of the Date header, converted to a\n ``datetime`` instance. 
The header value is assumed to\n conform to RFC 1123.\n expect (str): Value of the Expect header, or ``None`` if the\n header is missing.\n range (tuple of int): A 2-member ``tuple`` parsed from the value of the\n Range header.\n\n The two members correspond to the first and last byte\n positions of the requested resource, inclusive. Negative\n indices indicate offset from the end of the resource,\n where -1 is the last byte, -2 is the second-to-last byte,\n and so forth.\n\n Only continous ranges are supported (e.g., \"bytes=0-0,-1\" would\n result in an HTTPBadRequest exception when the attribute is\n accessed.)\n range_unit (str): Unit of the range parsed from the value of the\n Range header, or ``None`` if the header is missing\n if_match (str): Value of the If-Match header, or ``None`` if the\n header is missing.\n if_none_match (str): Value of the If-None-Match header, or ``None``\n if the header is missing.\n if_modified_since (datetime): Value of the If-Modified-Since header,\n or ``None`` if the header is missing.\n if_unmodified_since (datetime): Value of the If-Unmodified-Since\n header, or ``None`` if the header is missing.\n if_range (str): Value of the If-Range header, or ``None`` if the\n header is missing.\n\n headers (dict): Raw HTTP headers from the request with\n canonical dash-separated names. Parsing all the headers\n to create this dict is done the first time this attribute\n is accessed. This parsing can be costly, so unless you\n need all the headers in this format, you should use the\n `get_header` method or one of the convenience attributes\n instead, to get a value for a specific header.\n\n params (dict): The mapping of request query parameter names to their\n values. Where the parameter appears multiple times in the query\n string, the value mapped to that parameter key will be a list of\n all the values in the order seen.\n\n options (dict): Set of global options passed from the API handler.\n\n cookies (dict):\n A dict of name/value cookie pairs.\n See also: :ref:`Getting Cookies <getting-cookies>`\n\n \"\"\"\n\n __slots__ = (\n '_cached_headers',\n '_cached_uri',\n '_cached_relative_uri',\n 'content_type',\n 'env',\n 'method',\n '_params',\n 'path',\n 'query_string',\n 'stream',\n 'context',\n '_wsgierrors',\n 'options',\n '_cookies',\n '_cached_access_route',\n )\n\n # Allow child classes to override this\n context_type = None\n\n def __init__(self, env, options=None):\n global _maybe_wrap_wsgi_stream\n\n self.env = env\n self.options = options if options else RequestOptions()\n\n self._wsgierrors = env['wsgi.errors']\n self.stream = env['wsgi.input']\n self.method = env['REQUEST_METHOD']\n\n # Normalize path\n path = env['PATH_INFO']\n if path:\n if six.PY3: # pragma: no cover\n # PEP 3333 specifies that PATH_INFO variable are always\n # \"bytes tunneled as latin-1\" and must be encoded back\n path = path.encode('latin1').decode('utf-8', 'replace')\n\n if len(path) != 1 and path.endswith('/'):\n self.path = path[:-1]\n else:\n self.path = path\n else:\n self.path = '/'\n\n # PERF(kgriffs): if...in is faster than using env.get(...)\n if 'QUERY_STRING' in env:\n self.query_string = env['QUERY_STRING']\n\n if self.query_string:\n self._params = parse_query_string(\n self.query_string,\n keep_blank_qs_values=self.options.keep_blank_qs_values,\n )\n\n else:\n self._params = {}\n\n else:\n self.query_string = ''\n self._params = {}\n\n self._cookies = None\n\n self._cached_headers = None\n self._cached_uri = None\n self._cached_relative_uri = None\n 
self._cached_access_route = None\n\n try:\n self.content_type = self.env['CONTENT_TYPE']\n except KeyError:\n self.content_type = None\n\n # NOTE(kgriffs): Wrap wsgi.input if needed to make read() more robust,\n # normalizing semantics between, e.g., gunicorn and wsgiref.\n if _maybe_wrap_wsgi_stream:\n if isinstance(self.stream, NativeStream):\n # NOTE(kgriffs): This is covered by tests, it's just that\n # coverage can't figure this out for some reason (TBD).\n self._wrap_stream() # pragma nocover\n else:\n # PERF(kgriffs): If self.stream does not need to be wrapped\n # this time, it never needs to be wrapped since the server\n # will continue using the same type for wsgi.input.\n _maybe_wrap_wsgi_stream = False\n\n # PERF(kgriffs): Technically, we should spend a few more\n # cycles and parse the content type for real, but\n # this heuristic will work virtually all the time.\n if (self.content_type is not None and\n 'application/x-www-form-urlencoded' in self.content_type):\n self._parse_form_urlencoded()\n\n if self.context_type is None:\n # Literal syntax is more efficient than using dict()\n self.context = {}\n else:\n # pylint will detect this as not-callable because it only sees the\n # declaration of None, not whatever type a subclass may have set.\n self.context = self.context_type() # pylint: disable=not-callable\n\n # ------------------------------------------------------------------------\n # Properties\n # ------------------------------------------------------------------------\n\n user_agent = helpers.header_property('HTTP_USER_AGENT')\n auth = helpers.header_property('HTTP_AUTHORIZATION')\n\n expect = helpers.header_property('HTTP_EXPECT')\n\n if_match = helpers.header_property('HTTP_IF_MATCH')\n if_none_match = helpers.header_property('HTTP_IF_NONE_MATCH')\n if_range = helpers.header_property('HTTP_IF_RANGE')\n\n @property\n def client_accepts_json(self):\n return self.client_accepts('application/json')\n\n @property\n def client_accepts_msgpack(self):\n return (self.client_accepts('application/x-msgpack')\n or self.client_accepts('application/msgpack'))\n\n @property\n def client_accepts_xml(self):\n return self.client_accepts('application/xml')\n\n @property\n def accept(self):\n # NOTE(kgriffs): Per RFC, a missing accept header is\n # equivalent to '*/*'\n try:\n return self.env['HTTP_ACCEPT'] or '*/*'\n except KeyError:\n return '*/*'\n\n @property\n def content_length(self):\n try:\n value = self.env['CONTENT_LENGTH']\n except KeyError:\n return None\n\n # NOTE(kgriffs): Normalize an empty value to behave as if\n # the header were not included; wsgiref, at least, inserts\n # an empty CONTENT_LENGTH value if the request does not\n # set the header. 
Gunicorn and uWSGI do not do this, but\n # others might if they are trying to match wsgiref's\n # behavior too closely.\n if not value:\n return None\n\n try:\n value_as_int = int(value)\n except ValueError:\n msg = 'The value of the header must be a number.'\n raise HTTPInvalidHeader(msg, 'Content-Length')\n\n if value_as_int < 0:\n msg = 'The value of the header must be a positive number.'\n raise HTTPInvalidHeader(msg, 'Content-Length')\n\n return value_as_int\n\n @property\n def date(self):\n return self.get_header_as_datetime('Date')\n\n @property\n def if_modified_since(self):\n return self.get_header_as_datetime('If-Modified-Since')\n\n @property\n def if_unmodified_since(self):\n return self.get_header_as_datetime('If-Unmodified-Since')\n\n @property\n def range(self):\n try:\n value = self.env['HTTP_RANGE']\n if '=' in value:\n unit, sep, req_range = value.partition('=')\n else:\n msg = \"The value must be prefixed with a range unit, e.g. 'bytes='\"\n raise HTTPInvalidHeader(msg, 'Range')\n except KeyError:\n return None\n\n if ',' in req_range:\n msg = 'The value must be a continuous range.'\n raise HTTPInvalidHeader(msg, 'Range')\n\n try:\n first, sep, last = req_range.partition('-')\n\n if not sep:\n raise ValueError()\n\n if first:\n return (int(first), int(last or -1))\n elif last:\n return (-int(last), -1)\n else:\n msg = 'The range offsets are missing.'\n raise HTTPInvalidHeader(msg, 'Range')\n\n except ValueError:\n href = 'http://goo.gl/zZ6Ey'\n href_text = 'HTTP/1.1 Range Requests'\n msg = ('It must be a range formatted according to RFC 7233.')\n raise HTTPInvalidHeader(msg, 'Range', href=href,\n href_text=href_text)\n\n @property\n def range_unit(self):\n try:\n value = self.env['HTTP_RANGE']\n\n if '=' in value:\n unit, sep, req_range = value.partition('=')\n return unit\n else:\n msg = \"The value must be prefixed with a range unit, e.g. 'bytes='\"\n raise HTTPInvalidHeader(msg, 'Range')\n except KeyError:\n return None\n\n @property\n def app(self):\n return self.env.get('SCRIPT_NAME', '')\n\n @property\n def protocol(self):\n return self.env['wsgi.url_scheme']\n\n @property\n def uri(self):\n if self._cached_uri is None:\n env = self.env\n protocol = env['wsgi.url_scheme']\n\n # NOTE(kgriffs): According to PEP-3333 we should first\n # try to use the Host header if present.\n #\n # PERF(kgriffs): try..except is faster than .get\n try:\n host = env['HTTP_HOST']\n except KeyError:\n host = env['SERVER_NAME']\n port = env['SERVER_PORT']\n\n if protocol == 'https':\n if port != '443':\n host += ':' + port\n else:\n if port != '80':\n host += ':' + port\n\n # PERF: For small numbers of items, '+' is faster\n # than ''.join(...). Concatenation is also generally\n # faster than formatting.\n value = (protocol + '://' +\n host +\n self.app +\n self.path)\n\n if self.query_string:\n value = value + '?' 
+ self.query_string\n\n self._cached_uri = value\n\n return self._cached_uri\n\n url = uri\n\n @property\n def host(self):\n try:\n # NOTE(kgriffs): Prefer the host header; the web server\n # isn't supposed to mess with it, so it should be what\n # the client actually sent.\n host_header = self.env['HTTP_HOST']\n host, port = parse_host(host_header)\n except KeyError:\n # PERF(kgriffs): According to PEP-3333, this header\n # will always be present.\n host = self.env['SERVER_NAME']\n\n return host\n\n @property\n def subdomain(self):\n # PERF(kgriffs): .partition is slightly faster than .split\n subdomain, sep, remainder = self.host.partition('.')\n return subdomain if sep else None\n\n @property\n def relative_uri(self):\n if self._cached_relative_uri is None:\n if self.query_string:\n self._cached_relative_uri = (self.app + self.path + '?' +\n self.query_string)\n else:\n self._cached_relative_uri = self.app + self.path\n\n return self._cached_relative_uri\n\n @property\n def headers(self):\n # NOTE(kgriffs: First time here will cache the dict so all we\n # have to do is clone it in the future.\n if self._cached_headers is None:\n headers = self._cached_headers = {}\n\n env = self.env\n for name, value in env.items():\n if name.startswith('HTTP_'):\n # NOTE(kgriffs): Don't take the time to fix the case\n # since headers are supposed to be case-insensitive\n # anyway.\n headers[name[5:].replace('_', '-')] = value\n\n elif name in WSGI_CONTENT_HEADERS:\n headers[name.replace('_', '-')] = value\n\n return self._cached_headers.copy()\n\n @property\n def params(self):\n return self._params\n\n @property\n def cookies(self):\n if self._cookies is None:\n # NOTE(tbug): We might want to look into parsing\n # cookies ourselves. The SimpleCookie is doing a\n # lot if stuff only required to SEND cookies.\n parser = SimpleCookie(self.get_header(\"Cookie\"))\n cookies = {}\n for morsel in parser.values():\n cookies[morsel.key] = morsel.value\n\n self._cookies = cookies\n\n return self._cookies.copy()\n\n @property\n def access_route(self):\n if self._cached_access_route is None:\n access_route = []\n if 'HTTP_FORWARDED' in self.env:\n access_route = self._parse_rfc_forwarded()\n if not access_route and 'HTTP_X_FORWARDED_FOR' in self.env:\n access_route = [ip.strip() for ip in\n self.env['HTTP_X_FORWARDED_FOR'].split(',')]\n if not access_route and 'HTTP_X_REAL_IP' in self.env:\n access_route = [self.env['HTTP_X_REAL_IP']]\n if not access_route and 'REMOTE_ADDR' in self.env:\n access_route = [self.env['REMOTE_ADDR']]\n self._cached_access_route = access_route\n\n return self._cached_access_route\n\n @property\n def remote_addr(self):\n return self.env.get('REMOTE_ADDR')\n\n # ------------------------------------------------------------------------\n # Methods\n # ------------------------------------------------------------------------\n\n def client_accepts(self, media_type):\n \"\"\"Determines whether or not the client accepts a given media type.\n\n Args:\n media_type (str): An Internet media type to check.\n\n Returns:\n bool: ``True`` if the client has indicated in the Accept header\n that it accepts the specified media type. 
Otherwise, returns\n ``False``.\n \"\"\"\n\n accept = self.accept\n\n # PERF(kgriffs): Usually the following will be true, so\n # try it first.\n if (accept == media_type) or (accept == '*/*'):\n return True\n\n # Fall back to full-blown parsing\n try:\n return mimeparse.quality(media_type, accept) != 0.0\n except ValueError:\n return False\n\n def client_prefers(self, media_types):\n \"\"\"Returns the client's preferred media type, given several choices.\n\n Args:\n media_types (iterable of str): One or more Internet media types\n from which to choose the client's preferred type. This value\n **must** be an iterable collection of strings.\n\n Returns:\n str: The client's preferred media type, based on the Accept\n header. Returns ``None`` if the client does not accept any\n of the given types.\n \"\"\"\n\n try:\n # NOTE(kgriffs): best_match will return '' if no match is found\n preferred_type = mimeparse.best_match(media_types, self.accept)\n except ValueError:\n # Value for the accept header was not formatted correctly\n preferred_type = ''\n\n return (preferred_type if preferred_type else None)\n\n def get_header(self, name, required=False):\n \"\"\"Retrieve the raw string value for the given header.\n\n Args:\n name (str): Header name, case-insensitive (e.g., 'Content-Type')\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning gracefully when the\n header is not found (default ``False``).\n\n Returns:\n str: The value of the specified header if it exists, or ``None`` if\n the header is not found and is not required.\n\n Raises:\n HTTPBadRequest: The header was not found in the request, but\n it was required.\n\n \"\"\"\n\n wsgi_name = name.upper().replace('-', '_')\n\n # Use try..except to optimize for the header existing in most cases\n try:\n # Don't take the time to cache beforehand, using HTTP naming.\n # This will be faster, assuming that most headers are looked\n # up only once, and not all headers will be requested.\n return self.env['HTTP_' + wsgi_name]\n\n except KeyError:\n # NOTE(kgriffs): There are a couple headers that do not\n # use the HTTP prefix in the env, so try those. 
We expect\n # people to usually just use the relevant helper properties\n # to access these instead of .get_header.\n if wsgi_name in WSGI_CONTENT_HEADERS:\n try:\n return self.env[wsgi_name]\n except KeyError:\n pass\n\n if not required:\n return None\n\n raise HTTPMissingHeader(name)\n\n def get_header_as_datetime(self, header, required=False, obs_date=False):\n \"\"\"Return an HTTP header with HTTP-Date values as a datetime.\n\n Args:\n name (str): Header name, case-insensitive (e.g., 'Date')\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning gracefully when the\n header is not found (default ``False``).\n obs_date (bool, optional): Support obs-date formats according to\n RFC 7231, e.g.: \"Sunday, 06-Nov-94 08:49:37 GMT\"\n (default ``False``).\n\n Returns:\n datetime: The value of the specified header if it exists,\n or ``None`` if the header is not found and is not required.\n\n Raises:\n HTTPBadRequest: The header was not found in the request, but\n it was required.\n HttpInvalidHeader: The header contained a malformed/invalid value.\n \"\"\"\n\n try:\n http_date = self.get_header(header, required=required)\n return util.http_date_to_dt(http_date, obs_date=obs_date)\n except TypeError:\n # When the header does not exist and isn't required\n return None\n except ValueError:\n msg = ('It must be formatted according to RFC 7231, '\n 'Section 7.1.1.1')\n raise HTTPInvalidHeader(msg, header)\n\n def get_param(self, name, required=False, store=None, default=None):\n \"\"\"Return the raw value of a query string parameter as a string.\n\n Note:\n If an HTML form is POSTed to the API using the\n *application/x-www-form-urlencoded* media type, the\n parameters from the request body will be merged into\n the query string parameters.\n\n If a key appears more than once in the form data, one of the\n values will be returned as a string, but it is undefined which\n one. Use `req.get_param_as_list()` to retrieve all the values.\n\n Note:\n Similar to the way multiple keys in form data is handled,\n if a query parameter is assigned a comma-separated list of\n values (e.g., 'foo=a,b,c'), only one of those values will be\n returned, and it is undefined which one. Use\n `req.get_param_as_list()` to retrieve all the values.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'sort').\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is present.\n default (any, optional): If the param is not found returns the\n given value instead of None\n\n Returns:\n str: The value of the param as a string, or ``None`` if param is\n not found and is not required.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n # NOTE(warsaw): If the key appeared multiple times, it will be\n # stored internally as a list. 
We do not define which one\n # actually gets returned, but let's pick the last one for grins.\n param = params[name]\n if isinstance(param, list):\n param = param[-1]\n\n if store is not None:\n store[name] = param\n\n return param\n\n if not required:\n return default\n\n raise HTTPMissingParam(name)\n\n def get_param_as_int(self, name,\n required=False, min=None, max=None, store=None):\n \"\"\"Return the value of a query string parameter as an int.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'limit').\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found or is not an integer (default\n ``False``).\n min (int, optional): Set to the minimum value allowed for this\n param. If the param is found and it is less than min, an\n ``HTTPError`` is raised.\n max (int, optional): Set to the maximum value allowed for this\n param. If the param is found and its value is greater than\n max, an ``HTTPError`` is raised.\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found\n (default ``None``).\n\n Returns:\n int: The value of the param if it is found and can be converted to\n an integer. If the param is not found, returns ``None``, unless\n `required` is ``True``.\n\n Raises\n HTTPBadRequest: The param was not found in the request, even though\n it was required to be there. Also raised if the param's value\n falls outside the given interval, i.e., the value must be in\n the interval: min <= value <= max to avoid triggering an error.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n val = params[name]\n if isinstance(val, list):\n val = val[-1]\n\n try:\n val = int(val)\n except ValueError:\n msg = 'The value must be an integer.'\n raise HTTPInvalidParam(msg, name)\n\n if min is not None and val < min:\n msg = 'The value must be at least ' + str(min)\n raise HTTPInvalidParam(msg, name)\n\n if max is not None and max < val:\n msg = 'The value may not exceed ' + str(max)\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = val\n\n return val\n\n if not required:\n return None\n\n raise HTTPMissingParam(name)\n\n def get_param_as_bool(self, name, required=False, store=None,\n blank_as_true=False):\n \"\"\"Return the value of a query string parameter as a boolean\n\n The following boolean strings are supported::\n\n TRUE_STRINGS = ('true', 'True', 'yes')\n FALSE_STRINGS = ('false', 'False', 'no')\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'detailed').\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found or is not a recognized boolean\n string (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n blank_as_true (bool): If ``True``, an empty string value will be\n treated as ``True``. Normally empty strings are ignored; if\n you would like to recognize such parameters, you must set the\n `keep_blank_qs_values` request option to ``True``. Request\n options are set globally for each instance of ``falcon.API``\n through the `req_options` attribute.\n\n Returns:\n bool: The value of the param if it is found and can be converted\n to a ``bool``. 
If the param is not found, returns ``None``\n unless required is ``True``.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n val = params[name]\n if isinstance(val, list):\n val = val[-1]\n\n if val in TRUE_STRINGS:\n val = True\n elif val in FALSE_STRINGS:\n val = False\n elif blank_as_true and not val:\n val = True\n else:\n msg = 'The value of the parameter must be \"true\" or \"false\".'\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = val\n\n return val\n\n if not required:\n return None\n\n raise HTTPMissingParam(name)\n\n def get_param_as_list(self, name,\n transform=None, required=False, store=None):\n \"\"\"Return the value of a query string parameter as a list.\n\n List items must be comma-separated or must be provided\n as multiple instances of the same param in the query string\n ala *application/x-www-form-urlencoded*.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'ids').\n transform (callable, optional): An optional transform function\n that takes as input each element in the list as a ``str`` and\n outputs a transformed element for inclusion in the list that\n will be returned. For example, passing ``int`` will\n transform list items into numbers.\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n\n Returns:\n list: The value of the param if it is found. Otherwise, returns\n ``None`` unless required is True. Empty list elements will be\n discarded. 
For example, the following query strings would\n both result in `['1', '3']`::\n\n things=1,,3\n things=1&things=&things=3\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n HTTPInvalidParam: A transform function raised an instance of\n ``ValueError``.\n\n \"\"\"\n\n params = self._params\n\n # PERF: Use if..in since it is a good all-around performer; we don't\n # know how likely params are to be specified by clients.\n if name in params:\n items = params[name]\n\n # NOTE(warsaw): When a key appears multiple times in the request\n # query, it will already be represented internally as a list.\n # NOTE(kgriffs): Likewise for comma-delimited values.\n if not isinstance(items, list):\n items = [items]\n\n # PERF(kgriffs): Use if-else rather than a DRY approach\n # that sets transform to a passthrough function; avoids\n # function calling overhead.\n if transform is not None:\n try:\n items = [transform(i) for i in items]\n\n except ValueError:\n msg = 'The value is not formatted correctly.'\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = items\n\n return items\n\n if not required:\n return None\n\n raise HTTPMissingParam(name)\n\n def get_param_as_date(self, name, format_string='%Y-%m-%d',\n required=False, store=None):\n \"\"\"Return the value of a query string parameter as a date.\n\n Args:\n name (str): Parameter name, case-sensitive (e.g., 'ids').\n format_string (str): String used to parse the param value into a\n date.\n Any format recognized by strptime() is supported.\n (default ``\"%Y-%m-%d\"``)\n required (bool, optional): Set to ``True`` to raise\n ``HTTPBadRequest`` instead of returning ``None`` when the\n parameter is not found (default ``False``).\n store (dict, optional): A ``dict``-like object in which to place\n the value of the param, but only if the param is found (default\n ``None``).\n Returns:\n datetime.date: The value of the param if it is found and can be\n converted to a ``date`` according to the supplied format\n string. If the param is not found, returns ``None`` unless\n required is ``True``.\n\n Raises:\n HTTPBadRequest: A required param is missing from the request.\n HTTPInvalidParam: A transform function raised an instance of\n ``ValueError``.\n \"\"\"\n\n param_value = self.get_param(name, required=required)\n\n if param_value is None:\n return None\n\n try:\n date = strptime(param_value, format_string).date()\n except ValueError:\n msg = \"The date value does not match the required format\"\n raise HTTPInvalidParam(msg, name)\n\n if store is not None:\n store[name] = date\n\n return date\n\n # TODO(kgriffs): Use the nocover pragma only for the six.PY3 if..else\n def log_error(self, message): # pragma: no cover\n \"\"\"Write an error message to the server's log.\n\n Prepends timestamp and request info to message, and writes the\n result out to the WSGI server's error stream (`wsgi.error`).\n\n Args:\n message (str or unicode): Description of the problem. On Python 2,\n instances of ``unicode`` will be converted to UTF-8.\n\n \"\"\"\n\n if self.query_string:\n query_string_formatted = '?' 
+ self.query_string\n else:\n query_string_formatted = ''\n\n log_line = (\n DEFAULT_ERROR_LOG_FORMAT.\n format(now(), self.method, self.path, query_string_formatted)\n )\n\n if six.PY3:\n self._wsgierrors.write(log_line + message + '\\n')\n else:\n if isinstance(message, unicode): # pylint: disable=E0602\n message = message.encode('utf-8')\n\n self._wsgierrors.write(log_line.encode('utf-8'))\n self._wsgierrors.write(message + '\\n')\n\n # ------------------------------------------------------------------------\n # Helpers\n # ------------------------------------------------------------------------\n\n def _wrap_stream(self): # pragma nocover\n try:\n content_length = self.content_length or 0\n\n except HTTPInvalidHeader:\n # NOTE(kgriffs): The content-length header was specified,\n # but it had an invalid value. Assume no content.\n content_length = 0\n\n self.stream = helpers.Body(self.stream, content_length)\n\n def _parse_form_urlencoded(self):\n # NOTE(kgriffs): This assumes self.stream has been patched\n # above in the case of wsgiref, so that self.content_length\n # is not needed. Normally we just avoid accessing\n # self.content_length, because it is a little expensive\n # to call. We could cache self.content_length, but the\n # overhead to do that won't usually be helpful, since\n # content length will only ever be read once per\n # request in most cases.\n body = self.stream.read()\n\n # NOTE(kgriffs): According to http://goo.gl/6rlcux the\n # body should be US-ASCII. Enforcing this also helps\n # catch malicious input.\n try:\n body = body.decode('ascii')\n except UnicodeDecodeError:\n body = None\n self.log_error('Non-ASCII characters found in form body '\n 'with Content-Type of '\n 'application/x-www-form-urlencoded. Body '\n 'will be ignored.')\n\n if body:\n extra_params = parse_query_string(\n body,\n keep_blank_qs_values=self.options.keep_blank_qs_values,\n )\n\n self._params.update(extra_params)\n\n def _parse_rfc_forwarded(self):\n \"\"\"Parse RFC 7239 \"Forwarded\" header.\n\n Returns:\n list: addresses derived from \"for\" parameters.\n \"\"\"\n addr = []\n for forwarded in self.env['HTTP_FORWARDED'].split(','):\n for param in forwarded.split(';'):\n param = param.strip().split('=', 1)\n if len(param) == 1:\n continue\n key, val = param\n if key.lower() != 'for':\n # we only want for params\n continue\n host, _ = parse_host(unquote_string(val))\n addr.append(host)\n return addr\n\n\n# PERF: To avoid typos and improve storage space and speed over a dict.\nclass RequestOptions(object):\n \"\"\"This class is a container for ``Request`` options.\n\n Attributes:\n keep_blank_qs_values (bool): Set to ``True`` in order to retain\n blank values in query string parameters (default ``False``).\n\n \"\"\"\n __slots__ = (\n 'keep_blank_qs_values',\n )\n\n def __init__(self):\n self.keep_blank_qs_values = False\n", "path": "falcon/request.py" } ]
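The docstrings in the request module above describe the query-parameter helpers (`get_param`, `get_param_as_list`, `get_param_as_bool`, ...). As a quick illustration of how a resource would use them, here is a minimal sketch; the resource class, route, and parameter names are invented for the example and are not part of the file above:

```python
# Illustrative only: a hypothetical resource using the documented param helpers.
import falcon


class ThingsResource(object):
    def on_get(self, req, resp):
        # 'ids=1,,3' and 'ids=1&ids=3' both yield ['1', '3']; transform=int converts items.
        ids = req.get_param_as_list('ids', transform=int) or []

        # 'detailed=true' / 'detailed=false' is parsed as a boolean; None when absent.
        detailed = req.get_param_as_bool('detailed') or False

        resp.body = 'ids=%r detailed=%r' % (ids, detailed)


app = falcon.API()
app.add_route('/things', ThingsResource())
```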
diff --git a/falcon/request.py b/falcon/request.py index 54be8c069..bd832279b 100644 --- a/falcon/request.py +++ b/falcon/request.py @@ -353,7 +353,8 @@ def client_accepts_json(self): @property def client_accepts_msgpack(self): - return self.client_accepts('application/x-msgpack') + return (self.client_accepts('application/x-msgpack') + or self.client_accepts('application/msgpack')) @property def client_accepts_xml(self): diff --git a/tests/test_req_vars.py b/tests/test_req_vars.py index 9e8875448..c71f02e07 100644 --- a/tests/test_req_vars.py +++ b/tests/test_req_vars.py @@ -348,6 +348,12 @@ def test_client_accepts_props(self): self.assertFalse(req.client_accepts_json) self.assertTrue(req.client_accepts_msgpack) + headers = {'Accept': 'application/msgpack'} + req = Request(testing.create_environ(headers=headers)) + self.assertFalse(req.client_accepts_xml) + self.assertFalse(req.client_accepts_json) + self.assertTrue(req.client_accepts_msgpack) + headers = { 'Accept': 'application/json,application/xml,application/x-msgpack' }
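The diff above widens `client_accepts_msgpack` to recognize both registered MessagePack media types. A minimal sketch of exercising the patched property, mirroring the added test case (assumes a checkout where `falcon.testing` is importable):

```python
# Sketch: check that both MessagePack Accept values are detected after the patch.
import falcon.testing as testing
from falcon.request import Request

for accept in ('application/x-msgpack', 'application/msgpack'):
    env = testing.create_environ(headers={'Accept': accept})
    req = Request(env)
    print(accept, '->', req.client_accepts_msgpack)  # expected: True for both
```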
frappe__frappe-14120
Problems while using translations via Globe Symbol
## Description / Context information (for bug reports)

There are 2 issues -

**Issue - 1**
While updating translations using the Globe Symbol, the system generates duplicate translation rows for values added in the Description field.

https://user-images.githubusercontent.com/16986940/132095963-9b747145-cdb8-4972-8cd9-e2dc907b45a3.mp4

<hr/>

**Issue - 2**
When clicking on the globe button of Item-2, values from the previously edited item (Item-1) are displayed. A page reload is required to show the correct values for Item-2.

https://user-images.githubusercontent.com/16986940/132095967-caefab6a-ff84-45dc-b1ad-50a0ba60e84f.mp4

**Output of `bench version`**
We checked on version-12 and version-13; the problem is consistent across both versions.

```
Bench 5.5.0
ERPNext: v13.10.0 (version-13)
Frappe Framework: v13.10.0 (version-13)
```

```
Bench 5.5.0
ERPNext: v12.23.0 (version-12)
Frappe Framework: v12.20.0 (version-12)
```

## Steps to reproduce the issue

Check "Translate" on the Description field of the Item master.

<hr/>

- [email protected]
- [email protected]
[ { "content": "# Copyright (c) 2021, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. See license.txt\n\nfrom __future__ import unicode_literals, print_function\n\nfrom six import iteritems, text_type, string_types, PY2\n\nfrom frappe.utils import cstr\n\n\"\"\"\n\tfrappe.translate\n\t~~~~~~~~~~~~~~~~\n\n\tTranslation tools for frappe\n\"\"\"\n\nimport io\nimport itertools\nimport json\nimport operator\nimport functools\nimport os\nimport re\nfrom typing import List, Union, Tuple\n\nimport frappe\nfrom frappe.model.utils import InvalidIncludePath, render_include\nfrom frappe.utils import get_bench_path, is_html, strip, strip_html_tags\n\n\ndef guess_language(lang_list=None):\n\t\"\"\"[DEPRECATED] This method is deprecated, use `frappe.translate.get_language` method instead.\n\tIt will be removed in v14.\n\t\"\"\"\n\timport click\n\n\tclick.secho(f\"{guess_language.__doc__}\\n{get_language.__doc__}\", fg=\"yellow\")\n\treturn get_language(lang_list)\n\n\ndef get_language(lang_list: List = None) -> str:\n\t\"\"\"Set `frappe.local.lang` from HTTP headers at beginning of request\n\n\tOrder of priority for setting language:\n\t1. Form Dict => _lang\n\t2. Cookie => preferred_language (Non authorized user)\n\t3. Request Header => Accept-Language (Non authorized user)\n\t4. User document => language\n\t5. System Settings => language\n\t\"\"\"\n\tis_logged_in = frappe.session.user != \"Guest\"\n\n\t# fetch language from form_dict\n\tif frappe.form_dict._lang:\n\t\tlanguage = get_lang_code(\n\t\t\tfrappe.form_dict._lang or get_parent_language(frappe.form_dict._lang)\n\t\t)\n\t\tif language:\n\t\t\treturn language\n\n\t# use language set in User or System Settings if user is logged in\n\tif is_logged_in:\n\t\treturn frappe.local.lang\n\n\tlang_set = set(lang_list or get_all_languages() or [])\n\n\t# fetch language from cookie\n\tpreferred_language_cookie = get_preferred_language_cookie()\n\n\tif preferred_language_cookie:\n\t\tif preferred_language_cookie in lang_set:\n\t\t\treturn preferred_language_cookie\n\n\t\tparent_language = get_parent_language(language)\n\t\tif parent_language in lang_set:\n\t\t\treturn parent_language\n\n\t# fetch language from request headers\n\taccept_language = list(frappe.request.accept_languages.values())\n\n\tfor language in accept_language:\n\t\tif language in lang_set:\n\t\t\treturn language\n\n\t\tparent_language = get_parent_language(language)\n\t\tif parent_language in lang_set:\n\t\t\treturn parent_language\n\n\t# fallback to language set in User or System Settings\n\treturn frappe.local.lang\n\n\[email protected]_cache()\ndef get_parent_language(lang: str) -> str:\n\t\"\"\"If the passed language is a variant, return its parent\n\n\tEg:\n\t\t1. zh-TW -> zh\n\t\t2. 
sr-BA -> sr\n\t\"\"\"\n\tis_language_variant = \"-\" in lang\n\tif is_language_variant:\n\t\treturn lang[:lang.index(\"-\")]\n\n\ndef get_user_lang(user: str = None) -> str:\n\t\"\"\"Set frappe.local.lang from user preferences on session beginning or resumption\"\"\"\n\tuser = user or frappe.session.user\n\tlang = frappe.cache().hget(\"lang\", user)\n\n\tif not lang:\n\t\t# User.language => Session Defaults => frappe.local.lang => 'en'\n\t\tlang = (\n\t\t\tfrappe.db.get_value(\"User\", user, \"language\")\n\t\t\tor frappe.db.get_default(\"lang\")\n\t\t\tor frappe.local.lang\n\t\t\tor \"en\"\n\t\t)\n\n\t\tfrappe.cache().hset(\"lang\", user, lang)\n\n\treturn lang\n\ndef get_lang_code(lang: str) -> Union[str, None]:\n\treturn (\n\t\tfrappe.db.get_value(\"Language\", {\"name\": lang})\n\t\tor frappe.db.get_value(\"Language\", {\"language_name\": lang})\n\t)\n\ndef set_default_language(lang):\n\t\"\"\"Set Global default language\"\"\"\n\tif frappe.db.get_default(\"lang\") != lang:\n\t\tfrappe.db.set_default(\"lang\", lang)\n\tfrappe.local.lang = lang\n\ndef get_lang_dict():\n\t\"\"\"Returns all languages in dict format, full name is the key e.g. `{\"english\":\"en\"}`\"\"\"\n\treturn dict(frappe.db.sql('select language_name, name from tabLanguage'))\n\ndef get_dict(fortype, name=None):\n\t\"\"\"Returns translation dict for a type of object.\n\n\t :param fortype: must be one of `doctype`, `page`, `report`, `include`, `jsfile`, `boot`\n\t :param name: name of the document for which assets are to be returned.\n\t \"\"\"\n\tfortype = fortype.lower()\n\tcache = frappe.cache()\n\tasset_key = fortype + \":\" + (name or \"-\")\n\ttranslation_assets = cache.hget(\"translation_assets\", frappe.local.lang, shared=True) or {}\n\n\tif not asset_key in translation_assets:\n\t\tmessages = []\n\t\tif fortype==\"doctype\":\n\t\t\tmessages = get_messages_from_doctype(name)\n\t\telif fortype==\"page\":\n\t\t\tmessages = get_messages_from_page(name)\n\t\telif fortype==\"report\":\n\t\t\tmessages = get_messages_from_report(name)\n\t\telif fortype==\"include\":\n\t\t\tmessages = get_messages_from_include_files()\n\t\telif fortype==\"jsfile\":\n\t\t\tmessages = get_messages_from_file(name)\n\t\telif fortype==\"boot\":\n\t\t\tapps = frappe.get_all_apps(True)\n\t\t\tfor app in apps:\n\t\t\t\tmessages.extend(get_server_messages(app))\n\n\t\t\tmessages += get_messages_from_navbar()\n\t\t\tmessages += get_messages_from_include_files()\n\t\t\tmessages += frappe.db.sql(\"select 'Print Format:', name from `tabPrint Format`\")\n\t\t\tmessages += frappe.db.sql(\"select 'DocType:', name from tabDocType\")\n\t\t\tmessages += frappe.db.sql(\"select 'Role:', name from tabRole\")\n\t\t\tmessages += frappe.db.sql(\"select 'Module:', name from `tabModule Def`\")\n\t\t\tmessages += frappe.db.sql(\"select '', format from `tabWorkspace Shortcut` where format is not null\")\n\t\t\tmessages += frappe.db.sql(\"select '', title from `tabOnboarding Step`\")\n\n\t\tmessages = deduplicate_messages(messages)\n\t\tmessage_dict = make_dict_from_messages(messages, load_user_translation=False)\n\t\tmessage_dict.update(get_dict_from_hooks(fortype, name))\n\t\t# remove untranslated\n\t\tmessage_dict = {k:v for k, v in iteritems(message_dict) if k!=v}\n\t\ttranslation_assets[asset_key] = message_dict\n\t\tcache.hset(\"translation_assets\", frappe.local.lang, translation_assets, shared=True)\n\n\ttranslation_map = translation_assets[asset_key]\n\n\ttranslation_map.update(get_user_translations(frappe.local.lang))\n\n\treturn translation_map\n\n\ndef 
get_dict_from_hooks(fortype, name):\n\ttranslated_dict = {}\n\n\thooks = frappe.get_hooks(\"get_translated_dict\")\n\tfor (hook_fortype, fortype_name) in hooks:\n\t\tif hook_fortype == fortype and fortype_name == name:\n\t\t\tfor method in hooks[(hook_fortype, fortype_name)]:\n\t\t\t\ttranslated_dict.update(frappe.get_attr(method)())\n\n\treturn translated_dict\n\ndef make_dict_from_messages(messages, full_dict=None, load_user_translation=True):\n\t\"\"\"Returns translated messages as a dict in Language specified in `frappe.local.lang`\n\n\t:param messages: List of untranslated messages\n\t\"\"\"\n\tout = {}\n\tif full_dict==None:\n\t\tif load_user_translation:\n\t\t\tfull_dict = get_full_dict(frappe.local.lang)\n\t\telse:\n\t\t\tfull_dict = load_lang(frappe.local.lang)\n\n\tfor m in messages:\n\t\tif m[1] in full_dict:\n\t\t\tout[m[1]] = full_dict[m[1]]\n\t\t# check if msg with context as key exist eg. msg:context\n\t\tif len(m) > 2 and m[2]:\n\t\t\tkey = m[1] + ':' + m[2]\n\t\t\tif full_dict.get(key):\n\t\t\t\tout[key] = full_dict[key]\n\n\treturn out\n\ndef get_lang_js(fortype, name):\n\t\"\"\"Returns code snippet to be appended at the end of a JS script.\n\n\t:param fortype: Type of object, e.g. `DocType`\n\t:param name: Document name\n\t\"\"\"\n\treturn \"\\n\\n$.extend(frappe._messages, %s)\" % json.dumps(get_dict(fortype, name))\n\ndef get_full_dict(lang):\n\t\"\"\"Load and return the entire translations dictionary for a language from :meth:`frape.cache`\n\n\t:param lang: Language Code, e.g. `hi`\n\t\"\"\"\n\tif not lang:\n\t\treturn {}\n\n\t# found in local, return!\n\tif getattr(frappe.local, 'lang_full_dict', None) and frappe.local.lang_full_dict.get(lang, None):\n\t\treturn frappe.local.lang_full_dict\n\n\tfrappe.local.lang_full_dict = load_lang(lang)\n\n\ttry:\n\t\t# get user specific translation data\n\t\tuser_translations = get_user_translations(lang)\n\t\tfrappe.local.lang_full_dict.update(user_translations)\n\texcept Exception:\n\t\tpass\n\n\treturn frappe.local.lang_full_dict\n\ndef load_lang(lang, apps=None):\n\t\"\"\"Combine all translations from `.csv` files in all `apps`.\n\tFor derivative languages (es-GT), take translations from the\n\tbase language (es) and then update translations from the child (es-GT)\"\"\"\n\n\tif lang=='en':\n\t\treturn {}\n\n\tout = frappe.cache().hget(\"lang_full_dict\", lang, shared=True)\n\tif not out:\n\t\tout = {}\n\t\tfor app in (apps or frappe.get_all_apps(True)):\n\t\t\tpath = os.path.join(frappe.get_pymodule_path(app), \"translations\", lang + \".csv\")\n\t\t\tout.update(get_translation_dict_from_file(path, lang, app) or {})\n\n\t\tif '-' in lang:\n\t\t\tparent = lang.split('-')[0]\n\t\t\tparent_out = load_lang(parent)\n\t\t\tparent_out.update(out)\n\t\t\tout = parent_out\n\n\t\tfrappe.cache().hset(\"lang_full_dict\", lang, out, shared=True)\n\n\treturn out or {}\n\ndef get_translation_dict_from_file(path, lang, app):\n\t\"\"\"load translation dict from given path\"\"\"\n\ttranslation_map = {}\n\tif os.path.exists(path):\n\t\tcsv_content = read_csv_file(path)\n\n\t\tfor item in csv_content:\n\t\t\tif len(item)==3 and item[2]:\n\t\t\t\tkey = item[0] + ':' + item[2]\n\t\t\t\ttranslation_map[key] = strip(item[1])\n\t\t\telif len(item) in [2, 3]:\n\t\t\t\ttranslation_map[item[0]] = strip(item[1])\n\t\t\telif item:\n\t\t\t\traise Exception(\"Bad translation in '{app}' for language '{lang}': {values}\".format(\n\t\t\t\t\tapp=app, lang=lang, values=repr(item).encode(\"utf-8\")\n\t\t\t\t))\n\n\treturn translation_map\n\ndef 
get_user_translations(lang):\n\tif not frappe.db:\n\t\tfrappe.connect()\n\tout = frappe.cache().hget('lang_user_translations', lang)\n\tif out is None:\n\t\tout = {}\n\t\tuser_translations = frappe.get_all('Translation',\n\t\t\tfields=[\"source_text\", \"translated_text\", \"context\"],\n\t\t\tfilters={'language': lang})\n\n\t\tfor translation in user_translations:\n\t\t\tkey = translation.source_text\n\t\t\tvalue = translation.translated_text\n\t\t\tif translation.context:\n\t\t\t\tkey += ':' + translation.context\n\t\t\tout[key] = value\n\n\t\tfrappe.cache().hset('lang_user_translations', lang, out)\n\n\treturn out\n\n\ndef clear_cache():\n\t\"\"\"Clear all translation assets from :meth:`frappe.cache`\"\"\"\n\tcache = frappe.cache()\n\tcache.delete_key(\"langinfo\")\n\n\t# clear translations saved in boot cache\n\tcache.delete_key(\"bootinfo\")\n\tcache.delete_key(\"lang_full_dict\", shared=True)\n\tcache.delete_key(\"translation_assets\", shared=True)\n\tcache.delete_key(\"lang_user_translations\")\n\ndef get_messages_for_app(app, deduplicate=True):\n\t\"\"\"Returns all messages (list) for a specified `app`\"\"\"\n\tmessages = []\n\tmodules = \", \".join(['\"{}\"'.format(m.title().replace(\"_\", \" \")) \\\n\t\tfor m in frappe.local.app_modules[app]])\n\n\t# doctypes\n\tif modules:\n\t\tfor name in frappe.db.sql_list(\"\"\"select name from tabDocType\n\t\t\twhere module in ({})\"\"\".format(modules)):\n\t\t\tmessages.extend(get_messages_from_doctype(name))\n\n\t\t# pages\n\t\tfor name, title in frappe.db.sql(\"\"\"select name, title from tabPage\n\t\t\twhere module in ({})\"\"\".format(modules)):\n\t\t\tmessages.append((None, title or name))\n\t\t\tmessages.extend(get_messages_from_page(name))\n\n\n\t\t# reports\n\t\tfor name in frappe.db.sql_list(\"\"\"select tabReport.name from tabDocType, tabReport\n\t\t\twhere tabReport.ref_doctype = tabDocType.name\n\t\t\t\tand tabDocType.module in ({})\"\"\".format(modules)):\n\t\t\tmessages.append((None, name))\n\t\t\tmessages.extend(get_messages_from_report(name))\n\t\t\tfor i in messages:\n\t\t\t\tif not isinstance(i, tuple):\n\t\t\t\t\traise Exception\n\n\t# workflow based on app.hooks.fixtures\n\tmessages.extend(get_messages_from_workflow(app_name=app))\n\n\t# custom fields based on app.hooks.fixtures\n\tmessages.extend(get_messages_from_custom_fields(app_name=app))\n\n\t# app_include_files\n\tmessages.extend(get_all_messages_from_js_files(app))\n\n\t# server_messages\n\tmessages.extend(get_server_messages(app))\n\n\t# messages from navbar settings\n\tmessages.extend(get_messages_from_navbar())\n\n\tif deduplicate:\n\t\tmessages = deduplicate_messages(messages)\n\n\treturn messages\n\n\ndef get_messages_from_navbar():\n\t\"\"\"Return all labels from Navbar Items, as specified in Navbar Settings.\"\"\"\n\tlabels = frappe.get_all('Navbar Item', filters={'item_label': ('is', 'set')}, pluck='item_label')\n\treturn [('Navbar:', label, 'Label of a Navbar Item') for label in labels]\n\n\ndef get_messages_from_doctype(name):\n\t\"\"\"Extract all translatable messages for a doctype. 
Includes labels, Python code,\n\tJavascript code, html templates\"\"\"\n\tmessages = []\n\tmeta = frappe.get_meta(name)\n\n\tmessages = [meta.name, meta.module]\n\n\tif meta.description:\n\t\tmessages.append(meta.description)\n\n\t# translations of field labels, description and options\n\tfor d in meta.get(\"fields\"):\n\t\tmessages.extend([d.label, d.description])\n\n\t\tif d.fieldtype=='Select' and d.options:\n\t\t\toptions = d.options.split('\\n')\n\t\t\tif not \"icon\" in options[0]:\n\t\t\t\tmessages.extend(options)\n\t\tif d.fieldtype=='HTML' and d.options:\n\t\t\tmessages.append(d.options)\n\n\t# translations of roles\n\tfor d in meta.get(\"permissions\"):\n\t\tif d.role:\n\t\t\tmessages.append(d.role)\n\n\tmessages = [message for message in messages if message]\n\tmessages = [('DocType: ' + name, message) for message in messages if is_translatable(message)]\n\n\t# extract from js, py files\n\tif not meta.custom:\n\t\tdoctype_file_path = frappe.get_module_path(meta.module, \"doctype\", meta.name, meta.name)\n\t\tmessages.extend(get_messages_from_file(doctype_file_path + \".js\"))\n\t\tmessages.extend(get_messages_from_file(doctype_file_path + \"_list.js\"))\n\t\tmessages.extend(get_messages_from_file(doctype_file_path + \"_list.html\"))\n\t\tmessages.extend(get_messages_from_file(doctype_file_path + \"_calendar.js\"))\n\t\tmessages.extend(get_messages_from_file(doctype_file_path + \"_dashboard.html\"))\n\n\t# workflow based on doctype\n\tmessages.extend(get_messages_from_workflow(doctype=name))\n\treturn messages\n\ndef get_messages_from_workflow(doctype=None, app_name=None):\n\tassert doctype or app_name, 'doctype or app_name should be provided'\n\n\t# translations for Workflows\n\tworkflows = []\n\tif doctype:\n\t\tworkflows = frappe.get_all('Workflow', filters={'document_type': doctype})\n\telse:\n\t\tfixtures = frappe.get_hooks('fixtures', app_name=app_name) or []\n\t\tfor fixture in fixtures:\n\t\t\tif isinstance(fixture, string_types) and fixture == 'Worflow':\n\t\t\t\tworkflows = frappe.get_all('Workflow')\n\t\t\t\tbreak\n\t\t\telif isinstance(fixture, dict) and fixture.get('dt', fixture.get('doctype')) == 'Workflow':\n\t\t\t\tworkflows.extend(frappe.get_all('Workflow', filters=fixture.get('filters')))\n\n\tmessages = []\n\tfor w in workflows:\n\t\tstates = frappe.db.sql(\n\t\t\t'select distinct state from `tabWorkflow Document State` where parent=%s',\n\t\t\t(w['name'],), as_dict=True)\n\n\t\tmessages.extend([('Workflow: ' + w['name'], state['state']) for state in states if is_translatable(state['state'])])\n\n\t\tstates = frappe.db.sql(\n\t\t\t'select distinct message from `tabWorkflow Document State` where parent=%s and message is not null',\n\t\t\t(w['name'],), as_dict=True)\n\n\t\tmessages.extend([(\"Workflow: \" + w['name'], state['message'])\n\t\t\tfor state in states if is_translatable(state['message'])])\n\n\t\tactions = frappe.db.sql(\n\t\t\t'select distinct action from `tabWorkflow Transition` where parent=%s',\n\t\t\t(w['name'],), as_dict=True)\n\n\t\tmessages.extend([(\"Workflow: \" + w['name'], action['action']) \\\n\t\t\tfor action in actions if is_translatable(action['action'])])\n\n\treturn messages\n\n\ndef get_messages_from_custom_fields(app_name):\n\tfixtures = frappe.get_hooks('fixtures', app_name=app_name) or []\n\tcustom_fields = []\n\n\tfor fixture in fixtures:\n\t\tif isinstance(fixture, string_types) and fixture == 'Custom Field':\n\t\t\tcustom_fields = frappe.get_all('Custom Field', fields=['name','label', 'description', 'fieldtype', 
'options'])\n\t\t\tbreak\n\t\telif isinstance(fixture, dict) and fixture.get('dt', fixture.get('doctype')) == 'Custom Field':\n\t\t\tcustom_fields.extend(frappe.get_all('Custom Field', filters=fixture.get('filters'),\n\t\t\t\tfields=['name','label', 'description', 'fieldtype', 'options']))\n\n\tmessages = []\n\tfor cf in custom_fields:\n\t\tfor prop in ('label', 'description'):\n\t\t\tif not cf.get(prop) or not is_translatable(cf[prop]):\n\t\t\t\tcontinue\n\t\t\tmessages.append(('Custom Field - {}: {}'.format(prop, cf['name']), cf[prop]))\n\t\tif cf['fieldtype'] == 'Selection' and cf.get('options'):\n\t\t\tfor option in cf['options'].split('\\n'):\n\t\t\t\tif option and 'icon' not in option and is_translatable(option):\n\t\t\t\t\tmessages.append(('Custom Field - Description: ' + cf['name'], option))\n\n\treturn messages\n\ndef get_messages_from_page(name):\n\t\"\"\"Returns all translatable strings from a :class:`frappe.core.doctype.Page`\"\"\"\n\treturn _get_messages_from_page_or_report(\"Page\", name)\n\ndef get_messages_from_report(name):\n\t\"\"\"Returns all translatable strings from a :class:`frappe.core.doctype.Report`\"\"\"\n\treport = frappe.get_doc(\"Report\", name)\n\tmessages = _get_messages_from_page_or_report(\"Report\", name,\n\t\tfrappe.db.get_value(\"DocType\", report.ref_doctype, \"module\"))\n\n\tif report.columns:\n\t\tcontext = \"Column of report '%s'\" % report.name # context has to match context in `prepare_columns` in query_report.js\n\t\tmessages.extend([(None, report_column.label, context) for report_column in report.columns])\n\n\tif report.filters:\n\t\tmessages.extend([(None, report_filter.label) for report_filter in report.filters])\n\n\tif report.query:\n\t\tmessages.extend([(None, message) for message in re.findall('\"([^:,^\"]*):', report.query) if is_translatable(message)])\n\n\tmessages.append((None,report.report_name))\n\treturn messages\n\ndef _get_messages_from_page_or_report(doctype, name, module=None):\n\tif not module:\n\t\tmodule = frappe.db.get_value(doctype, name, \"module\")\n\n\tdoc_path = frappe.get_module_path(module, doctype, name)\n\n\tmessages = get_messages_from_file(os.path.join(doc_path, frappe.scrub(name) +\".py\"))\n\n\tif os.path.exists(doc_path):\n\t\tfor filename in os.listdir(doc_path):\n\t\t\tif filename.endswith(\".js\") or filename.endswith(\".html\"):\n\t\t\t\tmessages += get_messages_from_file(os.path.join(doc_path, filename))\n\n\treturn messages\n\ndef get_server_messages(app):\n\t\"\"\"Extracts all translatable strings (tagged with :func:`frappe._`) from Python modules\n\t\tinside an app\"\"\"\n\tmessages = []\n\tfile_extensions = ('.py', '.html', '.js', '.vue')\n\tfor basepath, folders, files in os.walk(frappe.get_pymodule_path(app)):\n\t\tfor dontwalk in (\".git\", \"public\", \"locale\"):\n\t\t\tif dontwalk in folders: folders.remove(dontwalk)\n\n\t\tfor f in files:\n\t\t\tf = frappe.as_unicode(f)\n\t\t\tif f.endswith(file_extensions):\n\t\t\t\tmessages.extend(get_messages_from_file(os.path.join(basepath, f)))\n\n\treturn messages\n\ndef get_messages_from_include_files(app_name=None):\n\t\"\"\"Returns messages from js files included at time of boot like desk.min.js for desk and web\"\"\"\n\tmessages = []\n\tapp_include_js = frappe.get_hooks(\"app_include_js\", app_name=app_name) or []\n\tweb_include_js = frappe.get_hooks(\"web_include_js\", app_name=app_name) or []\n\tinclude_js = app_include_js + web_include_js\n\n\tfor js_path in include_js:\n\t\trelative_path = os.path.join(frappe.local.sites_path, 
js_path.lstrip('/'))\n\t\tmessages_from_file = get_messages_from_file(relative_path)\n\t\tmessages.extend(messages_from_file)\n\n\treturn messages\n\ndef get_all_messages_from_js_files(app_name=None):\n\t\"\"\"Extracts all translatable strings from app `.js` files\"\"\"\n\tmessages = []\n\tfor app in ([app_name] if app_name else frappe.get_installed_apps()):\n\t\tif os.path.exists(frappe.get_app_path(app, \"public\")):\n\t\t\tfor basepath, folders, files in os.walk(frappe.get_app_path(app, \"public\")):\n\t\t\t\tif \"frappe/public/js/lib\" in basepath:\n\t\t\t\t\tcontinue\n\n\t\t\t\tfor fname in files:\n\t\t\t\t\tif fname.endswith(\".js\") or fname.endswith(\".html\") or fname.endswith('.vue'):\n\t\t\t\t\t\tmessages.extend(get_messages_from_file(os.path.join(basepath, fname)))\n\n\treturn messages\n\ndef get_messages_from_file(path: str) -> List[Tuple[str, str, str, str]]:\n\t\"\"\"Returns a list of transatable strings from a code file\n\n\t:param path: path of the code file\n\t\"\"\"\n\tfrappe.flags.setdefault('scanned_files', [])\n\t# TODO: Find better alternative\n\t# To avoid duplicate scan\n\tif path in set(frappe.flags.scanned_files):\n\t\treturn []\n\n\tfrappe.flags.scanned_files.append(path)\n\n\tbench_path = get_bench_path()\n\tif os.path.exists(path):\n\t\twith open(path, 'r') as sourcefile:\n\t\t\ttry:\n\t\t\t\tfile_contents = sourcefile.read()\n\t\t\texcept Exception:\n\t\t\t\tprint(\"Could not scan file for translation: {0}\".format(path))\n\t\t\t\treturn []\n\n\t\t\treturn [\n\t\t\t\t(os.path.relpath(path, bench_path), message, context, line)\n\t\t\t\tfor (line, message, context) in extract_messages_from_code(file_contents)\n\t\t\t]\n\telse:\n\t\treturn []\n\ndef extract_messages_from_code(code):\n\t\"\"\"\n\t\tExtracts translatable strings from a code file\n\t\t:param code: code from which translatable files are to be extracted\n\t\t:param is_py: include messages in triple quotes e.g. 
`_('''message''')`\n\t\"\"\"\n\tfrom jinja2 import TemplateError\n\n\ttry:\n\t\tcode = frappe.as_unicode(render_include(code))\n\n\t# Exception will occur when it encounters John Resig's microtemplating code\n\texcept (TemplateError, ImportError, InvalidIncludePath, IOError) as e:\n\t\tif isinstance(e, InvalidIncludePath):\n\t\t\tfrappe.clear_last_message()\n\n\t\tpass\n\n\tmessages = []\n\tpattern = r\"_\\(([\\\"']{,3})(?P<message>((?!\\1).)*)\\1(\\s*,\\s*context\\s*=\\s*([\\\"'])(?P<py_context>((?!\\5).)*)\\5)*(\\s*,\\s*(.)*?\\s*(,\\s*([\\\"'])(?P<js_context>((?!\\11).)*)\\11)*)*\\)\"\n\n\tfor m in re.compile(pattern).finditer(code):\n\t\tmessage = m.group('message')\n\t\tcontext = m.group('py_context') or m.group('js_context')\n\t\tpos = m.start()\n\n\t\tif is_translatable(message):\n\t\t\tmessages.append([pos, message, context])\n\n\treturn add_line_number(messages, code)\n\ndef is_translatable(m):\n\tif re.search(\"[a-zA-Z]\", m) and not m.startswith(\"fa fa-\") and not m.endswith(\"px\") and not m.startswith(\"eval:\"):\n\t\treturn True\n\treturn False\n\ndef add_line_number(messages, code):\n\tret = []\n\tmessages = sorted(messages, key=lambda x: x[0])\n\tnewlines = [m.start() for m in re.compile('\\\\n').finditer(code)]\n\tline = 1\n\tnewline_i = 0\n\tfor pos, message, context in messages:\n\t\twhile newline_i < len(newlines) and pos > newlines[newline_i]:\n\t\t\tline+=1\n\t\t\tnewline_i+= 1\n\t\tret.append([line, message, context])\n\treturn ret\n\ndef read_csv_file(path):\n\t\"\"\"Read CSV file and return as list of list\n\n\t:param path: File path\"\"\"\n\tfrom csv import reader\n\n\tif PY2:\n\t\twith codecs.open(path, 'r', 'utf-8') as msgfile:\n\t\t\tdata = msgfile.read()\n\n\t\t\t# for japanese! #wtf\n\t\t\tdata = data.replace(chr(28), \"\").replace(chr(29), \"\")\n\t\t\tdata = reader([r.encode('utf-8') for r in data.splitlines()])\n\t\t\tnewdata = [[text_type(val, 'utf-8') for val in row] for row in data]\n\telse:\n\t\twith io.open(path, mode='r', encoding='utf-8', newline='') as msgfile:\n\t\t\tdata = reader(msgfile)\n\t\t\tnewdata = [[ val for val in row ] for row in data]\n\treturn newdata\n\ndef write_csv_file(path, app_messages, lang_dict):\n\t\"\"\"Write translation CSV file.\n\n\t:param path: File path, usually `[app]/translations`.\n\t:param app_messages: Translatable strings for this app.\n\t:param lang_dict: Full translated dict.\n\t\"\"\"\n\tapp_messages.sort(key = lambda x: x[1])\n\tfrom csv import writer\n\twith open(path, 'w', newline='') as msgfile:\n\t\tw = writer(msgfile, lineterminator='\\n')\n\n\t\tfor app_message in app_messages:\n\t\t\tcontext = None\n\t\t\tif len(app_message) == 2:\n\t\t\t\tpath, message = app_message\n\t\t\telif len(app_message) == 3:\n\t\t\t\tpath, message, lineno = app_message\n\t\t\telif len(app_message) == 4:\n\t\t\t\tpath, message, context, lineno = app_message\n\t\t\telse:\n\t\t\t\tcontinue\n\n\t\t\tt = lang_dict.get(message, '')\n\t\t\t# strip whitespaces\n\t\t\ttranslated_string = re.sub(r'{\\s?([0-9]+)\\s?}', r\"{\\g<1>}\", t)\n\t\t\tif translated_string:\n\t\t\t\tw.writerow([message, translated_string, context])\n\ndef get_untranslated(lang, untranslated_file, get_all=False):\n\t\"\"\"Returns all untranslated strings for a language and writes in a file\n\n\t:param lang: Language code.\n\t:param untranslated_file: Output file path.\n\t:param get_all: Return all strings, translated or not.\"\"\"\n\tclear_cache()\n\tapps = frappe.get_all_apps(True)\n\n\tmessages = []\n\tuntranslated = []\n\tfor app in 
apps:\n\t\tmessages.extend(get_messages_for_app(app))\n\n\tmessages = deduplicate_messages(messages)\n\n\tdef escape_newlines(s):\n\t\treturn (s.replace(\"\\\\\\n\", \"|||||\")\n\t\t\t\t.replace(\"\\\\n\", \"||||\")\n\t\t\t\t.replace(\"\\n\", \"|||\"))\n\n\tif get_all:\n\t\tprint(str(len(messages)) + \" messages\")\n\t\twith open(untranslated_file, \"wb\") as f:\n\t\t\tfor m in messages:\n\t\t\t\t# replace \\n with ||| so that internal linebreaks don't get split\n\t\t\t\tf.write((escape_newlines(m[1]) + os.linesep).encode(\"utf-8\"))\n\telse:\n\t\tfull_dict = get_full_dict(lang)\n\n\t\tfor m in messages:\n\t\t\tif not full_dict.get(m[1]):\n\t\t\t\tuntranslated.append(m[1])\n\n\t\tif untranslated:\n\t\t\tprint(str(len(untranslated)) + \" missing translations of \" + str(len(messages)))\n\t\t\twith open(untranslated_file, \"wb\") as f:\n\t\t\t\tfor m in untranslated:\n\t\t\t\t\t# replace \\n with ||| so that internal linebreaks don't get split\n\t\t\t\t\tf.write((escape_newlines(m) + os.linesep).encode(\"utf-8\"))\n\t\telse:\n\t\t\tprint(\"all translated!\")\n\ndef update_translations(lang, untranslated_file, translated_file):\n\t\"\"\"Update translations from a source and target file for a given language.\n\n\t:param lang: Language code (e.g. `en`).\n\t:param untranslated_file: File path with the messages in English.\n\t:param translated_file: File path with messages in language to be updated.\"\"\"\n\tclear_cache()\n\tfull_dict = get_full_dict(lang)\n\n\tdef restore_newlines(s):\n\t\treturn (s.replace(\"|||||\", \"\\\\\\n\")\n\t\t\t\t.replace(\"| | | | |\", \"\\\\\\n\")\n\t\t\t\t.replace(\"||||\", \"\\\\n\")\n\t\t\t\t.replace(\"| | | |\", \"\\\\n\")\n\t\t\t\t.replace(\"|||\", \"\\n\")\n\t\t\t\t.replace(\"| | |\", \"\\n\"))\n\n\ttranslation_dict = {}\n\tfor key, value in zip(frappe.get_file_items(untranslated_file, ignore_empty_lines=False),\n\t\tfrappe.get_file_items(translated_file, ignore_empty_lines=False)):\n\n\t\t# undo hack in get_untranslated\n\t\ttranslation_dict[restore_newlines(key)] = restore_newlines(value)\n\n\tfull_dict.update(translation_dict)\n\n\tfor app in frappe.get_all_apps(True):\n\t\twrite_translations_file(app, lang, full_dict)\n\ndef import_translations(lang, path):\n\t\"\"\"Import translations from file in standard format\"\"\"\n\tclear_cache()\n\tfull_dict = get_full_dict(lang)\n\tfull_dict.update(get_translation_dict_from_file(path, lang, 'import'))\n\n\tfor app in frappe.get_all_apps(True):\n\t\twrite_translations_file(app, lang, full_dict)\n\n\ndef rebuild_all_translation_files():\n\t\"\"\"Rebuild all translation files: `[app]/translations/[lang].csv`.\"\"\"\n\tfor lang in get_all_languages():\n\t\tfor app in frappe.get_all_apps():\n\t\t\twrite_translations_file(app, lang)\n\ndef write_translations_file(app, lang, full_dict=None, app_messages=None):\n\t\"\"\"Write a translation file for a given language.\n\n\t:param app: `app` for which translations are to be written.\n\t:param lang: Language code.\n\t:param full_dict: Full translated language dict (optional).\n\t:param app_messages: Source strings (optional).\n\t\"\"\"\n\tif not app_messages:\n\t\tapp_messages = get_messages_for_app(app)\n\n\tif not app_messages:\n\t\treturn\n\n\ttpath = frappe.get_pymodule_path(app, \"translations\")\n\tfrappe.create_folder(tpath)\n\twrite_csv_file(os.path.join(tpath, lang + \".csv\"),\n\t\tapp_messages, full_dict or get_full_dict(lang))\n\ndef send_translations(translation_dict):\n\t\"\"\"Append translated dict in `frappe.local.response`\"\"\"\n\tif \"__messages\" not in 
frappe.local.response:\n\t\tfrappe.local.response[\"__messages\"] = {}\n\n\tfrappe.local.response[\"__messages\"].update(translation_dict)\n\ndef deduplicate_messages(messages):\n\tret = []\n\top = operator.itemgetter(1)\n\tmessages = sorted(messages, key=op)\n\tfor k, g in itertools.groupby(messages, op):\n\t\tret.append(next(g))\n\treturn ret\n\ndef rename_language(old_name, new_name):\n\tif not frappe.db.exists('Language', new_name):\n\t\treturn\n\n\tlanguage_in_system_settings = frappe.db.get_single_value(\"System Settings\", \"language\")\n\tif language_in_system_settings == old_name:\n\t\tfrappe.db.set_value(\"System Settings\", \"System Settings\", \"language\", new_name)\n\n\tfrappe.db.sql(\"\"\"update `tabUser` set language=%(new_name)s where language=%(old_name)s\"\"\",\n\t\t{ \"old_name\": old_name, \"new_name\": new_name })\n\[email protected]()\ndef update_translations_for_source(source=None, translation_dict=None):\n\tif not (source and translation_dict):\n\t\treturn\n\n\ttranslation_dict = json.loads(translation_dict)\n\n\t# for existing records\n\ttranslation_records = frappe.db.get_values('Translation', {\n\t\t'source_text': source\n\t}, ['name', 'language'], as_dict=1)\n\tfor d in translation_records:\n\t\tif translation_dict.get(d.language, None):\n\t\t\tdoc = frappe.get_doc('Translation', d.name)\n\t\t\tdoc.translated_text = translation_dict.get(d.language)\n\t\t\tdoc.save()\n\t\t\t# done with this lang value\n\t\t\ttranslation_dict.pop(d.language)\n\t\telse:\n\t\t\tfrappe.delete_doc('Translation', d.name)\n\n\t# remaining values are to be inserted\n\tfor lang, translated_text in iteritems(translation_dict):\n\t\tdoc = frappe.new_doc('Translation')\n\t\tdoc.language = lang\n\t\tdoc.source_text = source\n\t\tdoc.translated_text = translated_text\n\t\tdoc.save()\n\n\treturn translation_records\n\[email protected]()\ndef get_translations(source_text):\n\tif is_html(source_text):\n\t\tsource_text = strip_html_tags(source_text)\n\n\treturn frappe.db.get_list('Translation',\n\t\tfields = ['name', 'language', 'translated_text as translation'],\n\t\tfilters = {\n\t\t\t'source_text': source_text\n\t\t}\n\t)\n\[email protected]()\ndef get_messages(language, start=0, page_length=100, search_text=''):\n\tfrom frappe.frappeclient import FrappeClient\n\ttranslator = FrappeClient(get_translator_url())\n\ttranslated_dict = translator.post_api('translator.api.get_strings_for_translation', params=locals())\n\n\treturn translated_dict\n\n\[email protected]()\ndef get_source_additional_info(source, language=''):\n\tfrom frappe.frappeclient import FrappeClient\n\ttranslator = FrappeClient(get_translator_url())\n\treturn translator.post_api('translator.api.get_source_additional_info', params=locals())\n\[email protected]()\ndef get_contributions(language):\n\treturn frappe.get_all('Translation', fields=['*'], filters={\n\t\t'contributed': 1,\n\t})\n\[email protected]()\ndef get_contribution_status(message_id):\n\tfrom frappe.frappeclient import FrappeClient\n\tdoc = frappe.get_doc('Translation', message_id)\n\ttranslator = FrappeClient(get_translator_url())\n\tcontributed_translation = translator.get_api('translator.api.get_contribution_status', params={\n\t\t'translation_id': doc.contribution_docname\n\t})\n\treturn contributed_translation\n\ndef get_translator_url():\n\treturn frappe.get_hooks()['translator_url'][0]\n\[email protected](allow_guest=True)\ndef get_all_languages(with_language_name=False):\n\t\"\"\"Returns all language codes ar, ch etc\"\"\"\n\tdef 
get_language_codes():\n\t\treturn frappe.db.sql_list('select name from tabLanguage')\n\n\tdef get_all_language_with_name():\n\t\treturn frappe.db.get_all('Language', ['language_code', 'language_name'])\n\n\tif not frappe.db:\n\t\tfrappe.connect()\n\n\tif with_language_name:\n\t\treturn frappe.cache().get_value('languages_with_name', get_all_language_with_name)\n\telse:\n\t\treturn frappe.cache().get_value('languages', get_language_codes)\n\[email protected](allow_guest=True)\ndef set_preferred_language_cookie(preferred_language):\n\tfrappe.local.cookie_manager.set_cookie(\"preferred_language\", preferred_language)\n\ndef get_preferred_language_cookie():\n\treturn frappe.request.cookies.get(\"preferred_language\")\n", "path": "frappe/translate.py" } ]
[ { "content": "# Copyright (c) 2021, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. See license.txt\n\nfrom __future__ import unicode_literals, print_function\n\nfrom six import iteritems, text_type, string_types, PY2\n\nfrom frappe.utils import cstr\n\n\"\"\"\n\tfrappe.translate\n\t~~~~~~~~~~~~~~~~\n\n\tTranslation tools for frappe\n\"\"\"\n\nimport io\nimport itertools\nimport json\nimport operator\nimport functools\nimport os\nimport re\nfrom typing import List, Union, Tuple\n\nimport frappe\nfrom frappe.model.utils import InvalidIncludePath, render_include\nfrom frappe.utils import get_bench_path, is_html, strip, strip_html_tags\n\n\ndef guess_language(lang_list=None):\n\t\"\"\"[DEPRECATED] This method is deprecated, use `frappe.translate.get_language` method instead.\n\tIt will be removed in v14.\n\t\"\"\"\n\timport click\n\n\tclick.secho(f\"{guess_language.__doc__}\\n{get_language.__doc__}\", fg=\"yellow\")\n\treturn get_language(lang_list)\n\n\ndef get_language(lang_list: List = None) -> str:\n\t\"\"\"Set `frappe.local.lang` from HTTP headers at beginning of request\n\n\tOrder of priority for setting language:\n\t1. Form Dict => _lang\n\t2. Cookie => preferred_language (Non authorized user)\n\t3. Request Header => Accept-Language (Non authorized user)\n\t4. User document => language\n\t5. System Settings => language\n\t\"\"\"\n\tis_logged_in = frappe.session.user != \"Guest\"\n\n\t# fetch language from form_dict\n\tif frappe.form_dict._lang:\n\t\tlanguage = get_lang_code(\n\t\t\tfrappe.form_dict._lang or get_parent_language(frappe.form_dict._lang)\n\t\t)\n\t\tif language:\n\t\t\treturn language\n\n\t# use language set in User or System Settings if user is logged in\n\tif is_logged_in:\n\t\treturn frappe.local.lang\n\n\tlang_set = set(lang_list or get_all_languages() or [])\n\n\t# fetch language from cookie\n\tpreferred_language_cookie = get_preferred_language_cookie()\n\n\tif preferred_language_cookie:\n\t\tif preferred_language_cookie in lang_set:\n\t\t\treturn preferred_language_cookie\n\n\t\tparent_language = get_parent_language(language)\n\t\tif parent_language in lang_set:\n\t\t\treturn parent_language\n\n\t# fetch language from request headers\n\taccept_language = list(frappe.request.accept_languages.values())\n\n\tfor language in accept_language:\n\t\tif language in lang_set:\n\t\t\treturn language\n\n\t\tparent_language = get_parent_language(language)\n\t\tif parent_language in lang_set:\n\t\t\treturn parent_language\n\n\t# fallback to language set in User or System Settings\n\treturn frappe.local.lang\n\n\[email protected]_cache()\ndef get_parent_language(lang: str) -> str:\n\t\"\"\"If the passed language is a variant, return its parent\n\n\tEg:\n\t\t1. zh-TW -> zh\n\t\t2. 
sr-BA -> sr\n\t\"\"\"\n\tis_language_variant = \"-\" in lang\n\tif is_language_variant:\n\t\treturn lang[:lang.index(\"-\")]\n\n\ndef get_user_lang(user: str = None) -> str:\n\t\"\"\"Set frappe.local.lang from user preferences on session beginning or resumption\"\"\"\n\tuser = user or frappe.session.user\n\tlang = frappe.cache().hget(\"lang\", user)\n\n\tif not lang:\n\t\t# User.language => Session Defaults => frappe.local.lang => 'en'\n\t\tlang = (\n\t\t\tfrappe.db.get_value(\"User\", user, \"language\")\n\t\t\tor frappe.db.get_default(\"lang\")\n\t\t\tor frappe.local.lang\n\t\t\tor \"en\"\n\t\t)\n\n\t\tfrappe.cache().hset(\"lang\", user, lang)\n\n\treturn lang\n\ndef get_lang_code(lang: str) -> Union[str, None]:\n\treturn (\n\t\tfrappe.db.get_value(\"Language\", {\"name\": lang})\n\t\tor frappe.db.get_value(\"Language\", {\"language_name\": lang})\n\t)\n\ndef set_default_language(lang):\n\t\"\"\"Set Global default language\"\"\"\n\tif frappe.db.get_default(\"lang\") != lang:\n\t\tfrappe.db.set_default(\"lang\", lang)\n\tfrappe.local.lang = lang\n\ndef get_lang_dict():\n\t\"\"\"Returns all languages in dict format, full name is the key e.g. `{\"english\":\"en\"}`\"\"\"\n\treturn dict(frappe.db.sql('select language_name, name from tabLanguage'))\n\ndef get_dict(fortype, name=None):\n\t\"\"\"Returns translation dict for a type of object.\n\n\t :param fortype: must be one of `doctype`, `page`, `report`, `include`, `jsfile`, `boot`\n\t :param name: name of the document for which assets are to be returned.\n\t \"\"\"\n\tfortype = fortype.lower()\n\tcache = frappe.cache()\n\tasset_key = fortype + \":\" + (name or \"-\")\n\ttranslation_assets = cache.hget(\"translation_assets\", frappe.local.lang, shared=True) or {}\n\n\tif not asset_key in translation_assets:\n\t\tmessages = []\n\t\tif fortype==\"doctype\":\n\t\t\tmessages = get_messages_from_doctype(name)\n\t\telif fortype==\"page\":\n\t\t\tmessages = get_messages_from_page(name)\n\t\telif fortype==\"report\":\n\t\t\tmessages = get_messages_from_report(name)\n\t\telif fortype==\"include\":\n\t\t\tmessages = get_messages_from_include_files()\n\t\telif fortype==\"jsfile\":\n\t\t\tmessages = get_messages_from_file(name)\n\t\telif fortype==\"boot\":\n\t\t\tapps = frappe.get_all_apps(True)\n\t\t\tfor app in apps:\n\t\t\t\tmessages.extend(get_server_messages(app))\n\n\t\t\tmessages += get_messages_from_navbar()\n\t\t\tmessages += get_messages_from_include_files()\n\t\t\tmessages += frappe.db.sql(\"select 'Print Format:', name from `tabPrint Format`\")\n\t\t\tmessages += frappe.db.sql(\"select 'DocType:', name from tabDocType\")\n\t\t\tmessages += frappe.db.sql(\"select 'Role:', name from tabRole\")\n\t\t\tmessages += frappe.db.sql(\"select 'Module:', name from `tabModule Def`\")\n\t\t\tmessages += frappe.db.sql(\"select '', format from `tabWorkspace Shortcut` where format is not null\")\n\t\t\tmessages += frappe.db.sql(\"select '', title from `tabOnboarding Step`\")\n\n\t\tmessages = deduplicate_messages(messages)\n\t\tmessage_dict = make_dict_from_messages(messages, load_user_translation=False)\n\t\tmessage_dict.update(get_dict_from_hooks(fortype, name))\n\t\t# remove untranslated\n\t\tmessage_dict = {k:v for k, v in iteritems(message_dict) if k!=v}\n\t\ttranslation_assets[asset_key] = message_dict\n\t\tcache.hset(\"translation_assets\", frappe.local.lang, translation_assets, shared=True)\n\n\ttranslation_map = translation_assets[asset_key]\n\n\ttranslation_map.update(get_user_translations(frappe.local.lang))\n\n\treturn translation_map\n\n\ndef 
get_dict_from_hooks(fortype, name):\n\ttranslated_dict = {}\n\n\thooks = frappe.get_hooks(\"get_translated_dict\")\n\tfor (hook_fortype, fortype_name) in hooks:\n\t\tif hook_fortype == fortype and fortype_name == name:\n\t\t\tfor method in hooks[(hook_fortype, fortype_name)]:\n\t\t\t\ttranslated_dict.update(frappe.get_attr(method)())\n\n\treturn translated_dict\n\ndef make_dict_from_messages(messages, full_dict=None, load_user_translation=True):\n\t\"\"\"Returns translated messages as a dict in Language specified in `frappe.local.lang`\n\n\t:param messages: List of untranslated messages\n\t\"\"\"\n\tout = {}\n\tif full_dict==None:\n\t\tif load_user_translation:\n\t\t\tfull_dict = get_full_dict(frappe.local.lang)\n\t\telse:\n\t\t\tfull_dict = load_lang(frappe.local.lang)\n\n\tfor m in messages:\n\t\tif m[1] in full_dict:\n\t\t\tout[m[1]] = full_dict[m[1]]\n\t\t# check if msg with context as key exist eg. msg:context\n\t\tif len(m) > 2 and m[2]:\n\t\t\tkey = m[1] + ':' + m[2]\n\t\t\tif full_dict.get(key):\n\t\t\t\tout[key] = full_dict[key]\n\n\treturn out\n\ndef get_lang_js(fortype, name):\n\t\"\"\"Returns code snippet to be appended at the end of a JS script.\n\n\t:param fortype: Type of object, e.g. `DocType`\n\t:param name: Document name\n\t\"\"\"\n\treturn \"\\n\\n$.extend(frappe._messages, %s)\" % json.dumps(get_dict(fortype, name))\n\ndef get_full_dict(lang):\n\t\"\"\"Load and return the entire translations dictionary for a language from :meth:`frape.cache`\n\n\t:param lang: Language Code, e.g. `hi`\n\t\"\"\"\n\tif not lang:\n\t\treturn {}\n\n\t# found in local, return!\n\tif getattr(frappe.local, 'lang_full_dict', None) and frappe.local.lang_full_dict.get(lang, None):\n\t\treturn frappe.local.lang_full_dict\n\n\tfrappe.local.lang_full_dict = load_lang(lang)\n\n\ttry:\n\t\t# get user specific translation data\n\t\tuser_translations = get_user_translations(lang)\n\t\tfrappe.local.lang_full_dict.update(user_translations)\n\texcept Exception:\n\t\tpass\n\n\treturn frappe.local.lang_full_dict\n\ndef load_lang(lang, apps=None):\n\t\"\"\"Combine all translations from `.csv` files in all `apps`.\n\tFor derivative languages (es-GT), take translations from the\n\tbase language (es) and then update translations from the child (es-GT)\"\"\"\n\n\tif lang=='en':\n\t\treturn {}\n\n\tout = frappe.cache().hget(\"lang_full_dict\", lang, shared=True)\n\tif not out:\n\t\tout = {}\n\t\tfor app in (apps or frappe.get_all_apps(True)):\n\t\t\tpath = os.path.join(frappe.get_pymodule_path(app), \"translations\", lang + \".csv\")\n\t\t\tout.update(get_translation_dict_from_file(path, lang, app) or {})\n\n\t\tif '-' in lang:\n\t\t\tparent = lang.split('-')[0]\n\t\t\tparent_out = load_lang(parent)\n\t\t\tparent_out.update(out)\n\t\t\tout = parent_out\n\n\t\tfrappe.cache().hset(\"lang_full_dict\", lang, out, shared=True)\n\n\treturn out or {}\n\ndef get_translation_dict_from_file(path, lang, app):\n\t\"\"\"load translation dict from given path\"\"\"\n\ttranslation_map = {}\n\tif os.path.exists(path):\n\t\tcsv_content = read_csv_file(path)\n\n\t\tfor item in csv_content:\n\t\t\tif len(item)==3 and item[2]:\n\t\t\t\tkey = item[0] + ':' + item[2]\n\t\t\t\ttranslation_map[key] = strip(item[1])\n\t\t\telif len(item) in [2, 3]:\n\t\t\t\ttranslation_map[item[0]] = strip(item[1])\n\t\t\telif item:\n\t\t\t\traise Exception(\"Bad translation in '{app}' for language '{lang}': {values}\".format(\n\t\t\t\t\tapp=app, lang=lang, values=repr(item).encode(\"utf-8\")\n\t\t\t\t))\n\n\treturn translation_map\n\ndef 
get_user_translations(lang):\n\tif not frappe.db:\n\t\tfrappe.connect()\n\tout = frappe.cache().hget('lang_user_translations', lang)\n\tif out is None:\n\t\tout = {}\n\t\tuser_translations = frappe.get_all('Translation',\n\t\t\tfields=[\"source_text\", \"translated_text\", \"context\"],\n\t\t\tfilters={'language': lang})\n\n\t\tfor translation in user_translations:\n\t\t\tkey = translation.source_text\n\t\t\tvalue = translation.translated_text\n\t\t\tif translation.context:\n\t\t\t\tkey += ':' + translation.context\n\t\t\tout[key] = value\n\n\t\tfrappe.cache().hset('lang_user_translations', lang, out)\n\n\treturn out\n\n\ndef clear_cache():\n\t\"\"\"Clear all translation assets from :meth:`frappe.cache`\"\"\"\n\tcache = frappe.cache()\n\tcache.delete_key(\"langinfo\")\n\n\t# clear translations saved in boot cache\n\tcache.delete_key(\"bootinfo\")\n\tcache.delete_key(\"lang_full_dict\", shared=True)\n\tcache.delete_key(\"translation_assets\", shared=True)\n\tcache.delete_key(\"lang_user_translations\")\n\ndef get_messages_for_app(app, deduplicate=True):\n\t\"\"\"Returns all messages (list) for a specified `app`\"\"\"\n\tmessages = []\n\tmodules = \", \".join(['\"{}\"'.format(m.title().replace(\"_\", \" \")) \\\n\t\tfor m in frappe.local.app_modules[app]])\n\n\t# doctypes\n\tif modules:\n\t\tfor name in frappe.db.sql_list(\"\"\"select name from tabDocType\n\t\t\twhere module in ({})\"\"\".format(modules)):\n\t\t\tmessages.extend(get_messages_from_doctype(name))\n\n\t\t# pages\n\t\tfor name, title in frappe.db.sql(\"\"\"select name, title from tabPage\n\t\t\twhere module in ({})\"\"\".format(modules)):\n\t\t\tmessages.append((None, title or name))\n\t\t\tmessages.extend(get_messages_from_page(name))\n\n\n\t\t# reports\n\t\tfor name in frappe.db.sql_list(\"\"\"select tabReport.name from tabDocType, tabReport\n\t\t\twhere tabReport.ref_doctype = tabDocType.name\n\t\t\t\tand tabDocType.module in ({})\"\"\".format(modules)):\n\t\t\tmessages.append((None, name))\n\t\t\tmessages.extend(get_messages_from_report(name))\n\t\t\tfor i in messages:\n\t\t\t\tif not isinstance(i, tuple):\n\t\t\t\t\traise Exception\n\n\t# workflow based on app.hooks.fixtures\n\tmessages.extend(get_messages_from_workflow(app_name=app))\n\n\t# custom fields based on app.hooks.fixtures\n\tmessages.extend(get_messages_from_custom_fields(app_name=app))\n\n\t# app_include_files\n\tmessages.extend(get_all_messages_from_js_files(app))\n\n\t# server_messages\n\tmessages.extend(get_server_messages(app))\n\n\t# messages from navbar settings\n\tmessages.extend(get_messages_from_navbar())\n\n\tif deduplicate:\n\t\tmessages = deduplicate_messages(messages)\n\n\treturn messages\n\n\ndef get_messages_from_navbar():\n\t\"\"\"Return all labels from Navbar Items, as specified in Navbar Settings.\"\"\"\n\tlabels = frappe.get_all('Navbar Item', filters={'item_label': ('is', 'set')}, pluck='item_label')\n\treturn [('Navbar:', label, 'Label of a Navbar Item') for label in labels]\n\n\ndef get_messages_from_doctype(name):\n\t\"\"\"Extract all translatable messages for a doctype. 
Includes labels, Python code,\n\tJavascript code, html templates\"\"\"\n\tmessages = []\n\tmeta = frappe.get_meta(name)\n\n\tmessages = [meta.name, meta.module]\n\n\tif meta.description:\n\t\tmessages.append(meta.description)\n\n\t# translations of field labels, description and options\n\tfor d in meta.get(\"fields\"):\n\t\tmessages.extend([d.label, d.description])\n\n\t\tif d.fieldtype=='Select' and d.options:\n\t\t\toptions = d.options.split('\\n')\n\t\t\tif not \"icon\" in options[0]:\n\t\t\t\tmessages.extend(options)\n\t\tif d.fieldtype=='HTML' and d.options:\n\t\t\tmessages.append(d.options)\n\n\t# translations of roles\n\tfor d in meta.get(\"permissions\"):\n\t\tif d.role:\n\t\t\tmessages.append(d.role)\n\n\tmessages = [message for message in messages if message]\n\tmessages = [('DocType: ' + name, message) for message in messages if is_translatable(message)]\n\n\t# extract from js, py files\n\tif not meta.custom:\n\t\tdoctype_file_path = frappe.get_module_path(meta.module, \"doctype\", meta.name, meta.name)\n\t\tmessages.extend(get_messages_from_file(doctype_file_path + \".js\"))\n\t\tmessages.extend(get_messages_from_file(doctype_file_path + \"_list.js\"))\n\t\tmessages.extend(get_messages_from_file(doctype_file_path + \"_list.html\"))\n\t\tmessages.extend(get_messages_from_file(doctype_file_path + \"_calendar.js\"))\n\t\tmessages.extend(get_messages_from_file(doctype_file_path + \"_dashboard.html\"))\n\n\t# workflow based on doctype\n\tmessages.extend(get_messages_from_workflow(doctype=name))\n\treturn messages\n\ndef get_messages_from_workflow(doctype=None, app_name=None):\n\tassert doctype or app_name, 'doctype or app_name should be provided'\n\n\t# translations for Workflows\n\tworkflows = []\n\tif doctype:\n\t\tworkflows = frappe.get_all('Workflow', filters={'document_type': doctype})\n\telse:\n\t\tfixtures = frappe.get_hooks('fixtures', app_name=app_name) or []\n\t\tfor fixture in fixtures:\n\t\t\tif isinstance(fixture, string_types) and fixture == 'Worflow':\n\t\t\t\tworkflows = frappe.get_all('Workflow')\n\t\t\t\tbreak\n\t\t\telif isinstance(fixture, dict) and fixture.get('dt', fixture.get('doctype')) == 'Workflow':\n\t\t\t\tworkflows.extend(frappe.get_all('Workflow', filters=fixture.get('filters')))\n\n\tmessages = []\n\tfor w in workflows:\n\t\tstates = frappe.db.sql(\n\t\t\t'select distinct state from `tabWorkflow Document State` where parent=%s',\n\t\t\t(w['name'],), as_dict=True)\n\n\t\tmessages.extend([('Workflow: ' + w['name'], state['state']) for state in states if is_translatable(state['state'])])\n\n\t\tstates = frappe.db.sql(\n\t\t\t'select distinct message from `tabWorkflow Document State` where parent=%s and message is not null',\n\t\t\t(w['name'],), as_dict=True)\n\n\t\tmessages.extend([(\"Workflow: \" + w['name'], state['message'])\n\t\t\tfor state in states if is_translatable(state['message'])])\n\n\t\tactions = frappe.db.sql(\n\t\t\t'select distinct action from `tabWorkflow Transition` where parent=%s',\n\t\t\t(w['name'],), as_dict=True)\n\n\t\tmessages.extend([(\"Workflow: \" + w['name'], action['action']) \\\n\t\t\tfor action in actions if is_translatable(action['action'])])\n\n\treturn messages\n\n\ndef get_messages_from_custom_fields(app_name):\n\tfixtures = frappe.get_hooks('fixtures', app_name=app_name) or []\n\tcustom_fields = []\n\n\tfor fixture in fixtures:\n\t\tif isinstance(fixture, string_types) and fixture == 'Custom Field':\n\t\t\tcustom_fields = frappe.get_all('Custom Field', fields=['name','label', 'description', 'fieldtype', 
'options'])\n\t\t\tbreak\n\t\telif isinstance(fixture, dict) and fixture.get('dt', fixture.get('doctype')) == 'Custom Field':\n\t\t\tcustom_fields.extend(frappe.get_all('Custom Field', filters=fixture.get('filters'),\n\t\t\t\tfields=['name','label', 'description', 'fieldtype', 'options']))\n\n\tmessages = []\n\tfor cf in custom_fields:\n\t\tfor prop in ('label', 'description'):\n\t\t\tif not cf.get(prop) or not is_translatable(cf[prop]):\n\t\t\t\tcontinue\n\t\t\tmessages.append(('Custom Field - {}: {}'.format(prop, cf['name']), cf[prop]))\n\t\tif cf['fieldtype'] == 'Selection' and cf.get('options'):\n\t\t\tfor option in cf['options'].split('\\n'):\n\t\t\t\tif option and 'icon' not in option and is_translatable(option):\n\t\t\t\t\tmessages.append(('Custom Field - Description: ' + cf['name'], option))\n\n\treturn messages\n\ndef get_messages_from_page(name):\n\t\"\"\"Returns all translatable strings from a :class:`frappe.core.doctype.Page`\"\"\"\n\treturn _get_messages_from_page_or_report(\"Page\", name)\n\ndef get_messages_from_report(name):\n\t\"\"\"Returns all translatable strings from a :class:`frappe.core.doctype.Report`\"\"\"\n\treport = frappe.get_doc(\"Report\", name)\n\tmessages = _get_messages_from_page_or_report(\"Report\", name,\n\t\tfrappe.db.get_value(\"DocType\", report.ref_doctype, \"module\"))\n\n\tif report.columns:\n\t\tcontext = \"Column of report '%s'\" % report.name # context has to match context in `prepare_columns` in query_report.js\n\t\tmessages.extend([(None, report_column.label, context) for report_column in report.columns])\n\n\tif report.filters:\n\t\tmessages.extend([(None, report_filter.label) for report_filter in report.filters])\n\n\tif report.query:\n\t\tmessages.extend([(None, message) for message in re.findall('\"([^:,^\"]*):', report.query) if is_translatable(message)])\n\n\tmessages.append((None,report.report_name))\n\treturn messages\n\ndef _get_messages_from_page_or_report(doctype, name, module=None):\n\tif not module:\n\t\tmodule = frappe.db.get_value(doctype, name, \"module\")\n\n\tdoc_path = frappe.get_module_path(module, doctype, name)\n\n\tmessages = get_messages_from_file(os.path.join(doc_path, frappe.scrub(name) +\".py\"))\n\n\tif os.path.exists(doc_path):\n\t\tfor filename in os.listdir(doc_path):\n\t\t\tif filename.endswith(\".js\") or filename.endswith(\".html\"):\n\t\t\t\tmessages += get_messages_from_file(os.path.join(doc_path, filename))\n\n\treturn messages\n\ndef get_server_messages(app):\n\t\"\"\"Extracts all translatable strings (tagged with :func:`frappe._`) from Python modules\n\t\tinside an app\"\"\"\n\tmessages = []\n\tfile_extensions = ('.py', '.html', '.js', '.vue')\n\tfor basepath, folders, files in os.walk(frappe.get_pymodule_path(app)):\n\t\tfor dontwalk in (\".git\", \"public\", \"locale\"):\n\t\t\tif dontwalk in folders: folders.remove(dontwalk)\n\n\t\tfor f in files:\n\t\t\tf = frappe.as_unicode(f)\n\t\t\tif f.endswith(file_extensions):\n\t\t\t\tmessages.extend(get_messages_from_file(os.path.join(basepath, f)))\n\n\treturn messages\n\ndef get_messages_from_include_files(app_name=None):\n\t\"\"\"Returns messages from js files included at time of boot like desk.min.js for desk and web\"\"\"\n\tmessages = []\n\tapp_include_js = frappe.get_hooks(\"app_include_js\", app_name=app_name) or []\n\tweb_include_js = frappe.get_hooks(\"web_include_js\", app_name=app_name) or []\n\tinclude_js = app_include_js + web_include_js\n\n\tfor js_path in include_js:\n\t\trelative_path = os.path.join(frappe.local.sites_path, 
js_path.lstrip('/'))\n\t\tmessages_from_file = get_messages_from_file(relative_path)\n\t\tmessages.extend(messages_from_file)\n\n\treturn messages\n\ndef get_all_messages_from_js_files(app_name=None):\n\t\"\"\"Extracts all translatable strings from app `.js` files\"\"\"\n\tmessages = []\n\tfor app in ([app_name] if app_name else frappe.get_installed_apps()):\n\t\tif os.path.exists(frappe.get_app_path(app, \"public\")):\n\t\t\tfor basepath, folders, files in os.walk(frappe.get_app_path(app, \"public\")):\n\t\t\t\tif \"frappe/public/js/lib\" in basepath:\n\t\t\t\t\tcontinue\n\n\t\t\t\tfor fname in files:\n\t\t\t\t\tif fname.endswith(\".js\") or fname.endswith(\".html\") or fname.endswith('.vue'):\n\t\t\t\t\t\tmessages.extend(get_messages_from_file(os.path.join(basepath, fname)))\n\n\treturn messages\n\ndef get_messages_from_file(path: str) -> List[Tuple[str, str, str, str]]:\n\t\"\"\"Returns a list of transatable strings from a code file\n\n\t:param path: path of the code file\n\t\"\"\"\n\tfrappe.flags.setdefault('scanned_files', [])\n\t# TODO: Find better alternative\n\t# To avoid duplicate scan\n\tif path in set(frappe.flags.scanned_files):\n\t\treturn []\n\n\tfrappe.flags.scanned_files.append(path)\n\n\tbench_path = get_bench_path()\n\tif os.path.exists(path):\n\t\twith open(path, 'r') as sourcefile:\n\t\t\ttry:\n\t\t\t\tfile_contents = sourcefile.read()\n\t\t\texcept Exception:\n\t\t\t\tprint(\"Could not scan file for translation: {0}\".format(path))\n\t\t\t\treturn []\n\n\t\t\treturn [\n\t\t\t\t(os.path.relpath(path, bench_path), message, context, line)\n\t\t\t\tfor (line, message, context) in extract_messages_from_code(file_contents)\n\t\t\t]\n\telse:\n\t\treturn []\n\ndef extract_messages_from_code(code):\n\t\"\"\"\n\t\tExtracts translatable strings from a code file\n\t\t:param code: code from which translatable files are to be extracted\n\t\t:param is_py: include messages in triple quotes e.g. 
`_('''message''')`\n\t\"\"\"\n\tfrom jinja2 import TemplateError\n\n\ttry:\n\t\tcode = frappe.as_unicode(render_include(code))\n\n\t# Exception will occur when it encounters John Resig's microtemplating code\n\texcept (TemplateError, ImportError, InvalidIncludePath, IOError) as e:\n\t\tif isinstance(e, InvalidIncludePath):\n\t\t\tfrappe.clear_last_message()\n\n\t\tpass\n\n\tmessages = []\n\tpattern = r\"_\\(([\\\"']{,3})(?P<message>((?!\\1).)*)\\1(\\s*,\\s*context\\s*=\\s*([\\\"'])(?P<py_context>((?!\\5).)*)\\5)*(\\s*,\\s*(.)*?\\s*(,\\s*([\\\"'])(?P<js_context>((?!\\11).)*)\\11)*)*\\)\"\n\n\tfor m in re.compile(pattern).finditer(code):\n\t\tmessage = m.group('message')\n\t\tcontext = m.group('py_context') or m.group('js_context')\n\t\tpos = m.start()\n\n\t\tif is_translatable(message):\n\t\t\tmessages.append([pos, message, context])\n\n\treturn add_line_number(messages, code)\n\ndef is_translatable(m):\n\tif re.search(\"[a-zA-Z]\", m) and not m.startswith(\"fa fa-\") and not m.endswith(\"px\") and not m.startswith(\"eval:\"):\n\t\treturn True\n\treturn False\n\ndef add_line_number(messages, code):\n\tret = []\n\tmessages = sorted(messages, key=lambda x: x[0])\n\tnewlines = [m.start() for m in re.compile('\\\\n').finditer(code)]\n\tline = 1\n\tnewline_i = 0\n\tfor pos, message, context in messages:\n\t\twhile newline_i < len(newlines) and pos > newlines[newline_i]:\n\t\t\tline+=1\n\t\t\tnewline_i+= 1\n\t\tret.append([line, message, context])\n\treturn ret\n\ndef read_csv_file(path):\n\t\"\"\"Read CSV file and return as list of list\n\n\t:param path: File path\"\"\"\n\tfrom csv import reader\n\n\tif PY2:\n\t\twith codecs.open(path, 'r', 'utf-8') as msgfile:\n\t\t\tdata = msgfile.read()\n\n\t\t\t# for japanese! #wtf\n\t\t\tdata = data.replace(chr(28), \"\").replace(chr(29), \"\")\n\t\t\tdata = reader([r.encode('utf-8') for r in data.splitlines()])\n\t\t\tnewdata = [[text_type(val, 'utf-8') for val in row] for row in data]\n\telse:\n\t\twith io.open(path, mode='r', encoding='utf-8', newline='') as msgfile:\n\t\t\tdata = reader(msgfile)\n\t\t\tnewdata = [[ val for val in row ] for row in data]\n\treturn newdata\n\ndef write_csv_file(path, app_messages, lang_dict):\n\t\"\"\"Write translation CSV file.\n\n\t:param path: File path, usually `[app]/translations`.\n\t:param app_messages: Translatable strings for this app.\n\t:param lang_dict: Full translated dict.\n\t\"\"\"\n\tapp_messages.sort(key = lambda x: x[1])\n\tfrom csv import writer\n\twith open(path, 'w', newline='') as msgfile:\n\t\tw = writer(msgfile, lineterminator='\\n')\n\n\t\tfor app_message in app_messages:\n\t\t\tcontext = None\n\t\t\tif len(app_message) == 2:\n\t\t\t\tpath, message = app_message\n\t\t\telif len(app_message) == 3:\n\t\t\t\tpath, message, lineno = app_message\n\t\t\telif len(app_message) == 4:\n\t\t\t\tpath, message, context, lineno = app_message\n\t\t\telse:\n\t\t\t\tcontinue\n\n\t\t\tt = lang_dict.get(message, '')\n\t\t\t# strip whitespaces\n\t\t\ttranslated_string = re.sub(r'{\\s?([0-9]+)\\s?}', r\"{\\g<1>}\", t)\n\t\t\tif translated_string:\n\t\t\t\tw.writerow([message, translated_string, context])\n\ndef get_untranslated(lang, untranslated_file, get_all=False):\n\t\"\"\"Returns all untranslated strings for a language and writes in a file\n\n\t:param lang: Language code.\n\t:param untranslated_file: Output file path.\n\t:param get_all: Return all strings, translated or not.\"\"\"\n\tclear_cache()\n\tapps = frappe.get_all_apps(True)\n\n\tmessages = []\n\tuntranslated = []\n\tfor app in 
apps:\n\t\tmessages.extend(get_messages_for_app(app))\n\n\tmessages = deduplicate_messages(messages)\n\n\tdef escape_newlines(s):\n\t\treturn (s.replace(\"\\\\\\n\", \"|||||\")\n\t\t\t\t.replace(\"\\\\n\", \"||||\")\n\t\t\t\t.replace(\"\\n\", \"|||\"))\n\n\tif get_all:\n\t\tprint(str(len(messages)) + \" messages\")\n\t\twith open(untranslated_file, \"wb\") as f:\n\t\t\tfor m in messages:\n\t\t\t\t# replace \\n with ||| so that internal linebreaks don't get split\n\t\t\t\tf.write((escape_newlines(m[1]) + os.linesep).encode(\"utf-8\"))\n\telse:\n\t\tfull_dict = get_full_dict(lang)\n\n\t\tfor m in messages:\n\t\t\tif not full_dict.get(m[1]):\n\t\t\t\tuntranslated.append(m[1])\n\n\t\tif untranslated:\n\t\t\tprint(str(len(untranslated)) + \" missing translations of \" + str(len(messages)))\n\t\t\twith open(untranslated_file, \"wb\") as f:\n\t\t\t\tfor m in untranslated:\n\t\t\t\t\t# replace \\n with ||| so that internal linebreaks don't get split\n\t\t\t\t\tf.write((escape_newlines(m) + os.linesep).encode(\"utf-8\"))\n\t\telse:\n\t\t\tprint(\"all translated!\")\n\ndef update_translations(lang, untranslated_file, translated_file):\n\t\"\"\"Update translations from a source and target file for a given language.\n\n\t:param lang: Language code (e.g. `en`).\n\t:param untranslated_file: File path with the messages in English.\n\t:param translated_file: File path with messages in language to be updated.\"\"\"\n\tclear_cache()\n\tfull_dict = get_full_dict(lang)\n\n\tdef restore_newlines(s):\n\t\treturn (s.replace(\"|||||\", \"\\\\\\n\")\n\t\t\t\t.replace(\"| | | | |\", \"\\\\\\n\")\n\t\t\t\t.replace(\"||||\", \"\\\\n\")\n\t\t\t\t.replace(\"| | | |\", \"\\\\n\")\n\t\t\t\t.replace(\"|||\", \"\\n\")\n\t\t\t\t.replace(\"| | |\", \"\\n\"))\n\n\ttranslation_dict = {}\n\tfor key, value in zip(frappe.get_file_items(untranslated_file, ignore_empty_lines=False),\n\t\tfrappe.get_file_items(translated_file, ignore_empty_lines=False)):\n\n\t\t# undo hack in get_untranslated\n\t\ttranslation_dict[restore_newlines(key)] = restore_newlines(value)\n\n\tfull_dict.update(translation_dict)\n\n\tfor app in frappe.get_all_apps(True):\n\t\twrite_translations_file(app, lang, full_dict)\n\ndef import_translations(lang, path):\n\t\"\"\"Import translations from file in standard format\"\"\"\n\tclear_cache()\n\tfull_dict = get_full_dict(lang)\n\tfull_dict.update(get_translation_dict_from_file(path, lang, 'import'))\n\n\tfor app in frappe.get_all_apps(True):\n\t\twrite_translations_file(app, lang, full_dict)\n\n\ndef rebuild_all_translation_files():\n\t\"\"\"Rebuild all translation files: `[app]/translations/[lang].csv`.\"\"\"\n\tfor lang in get_all_languages():\n\t\tfor app in frappe.get_all_apps():\n\t\t\twrite_translations_file(app, lang)\n\ndef write_translations_file(app, lang, full_dict=None, app_messages=None):\n\t\"\"\"Write a translation file for a given language.\n\n\t:param app: `app` for which translations are to be written.\n\t:param lang: Language code.\n\t:param full_dict: Full translated language dict (optional).\n\t:param app_messages: Source strings (optional).\n\t\"\"\"\n\tif not app_messages:\n\t\tapp_messages = get_messages_for_app(app)\n\n\tif not app_messages:\n\t\treturn\n\n\ttpath = frappe.get_pymodule_path(app, \"translations\")\n\tfrappe.create_folder(tpath)\n\twrite_csv_file(os.path.join(tpath, lang + \".csv\"),\n\t\tapp_messages, full_dict or get_full_dict(lang))\n\ndef send_translations(translation_dict):\n\t\"\"\"Append translated dict in `frappe.local.response`\"\"\"\n\tif \"__messages\" not in 
frappe.local.response:\n\t\tfrappe.local.response[\"__messages\"] = {}\n\n\tfrappe.local.response[\"__messages\"].update(translation_dict)\n\ndef deduplicate_messages(messages):\n\tret = []\n\top = operator.itemgetter(1)\n\tmessages = sorted(messages, key=op)\n\tfor k, g in itertools.groupby(messages, op):\n\t\tret.append(next(g))\n\treturn ret\n\ndef rename_language(old_name, new_name):\n\tif not frappe.db.exists('Language', new_name):\n\t\treturn\n\n\tlanguage_in_system_settings = frappe.db.get_single_value(\"System Settings\", \"language\")\n\tif language_in_system_settings == old_name:\n\t\tfrappe.db.set_value(\"System Settings\", \"System Settings\", \"language\", new_name)\n\n\tfrappe.db.sql(\"\"\"update `tabUser` set language=%(new_name)s where language=%(old_name)s\"\"\",\n\t\t{ \"old_name\": old_name, \"new_name\": new_name })\n\[email protected]()\ndef update_translations_for_source(source=None, translation_dict=None):\n\tif not (source and translation_dict):\n\t\treturn\n\n\ttranslation_dict = json.loads(translation_dict)\n\n\tif is_html(source):\n\t\tsource = strip_html_tags(source)\n\n\t# for existing records\n\ttranslation_records = frappe.db.get_values('Translation', {\n\t\t'source_text': source\n\t}, ['name', 'language'], as_dict=1)\n\tfor d in translation_records:\n\t\tif translation_dict.get(d.language, None):\n\t\t\tdoc = frappe.get_doc('Translation', d.name)\n\t\t\tdoc.translated_text = translation_dict.get(d.language)\n\t\t\tdoc.save()\n\t\t\t# done with this lang value\n\t\t\ttranslation_dict.pop(d.language)\n\t\telse:\n\t\t\tfrappe.delete_doc('Translation', d.name)\n\n\t# remaining values are to be inserted\n\tfor lang, translated_text in iteritems(translation_dict):\n\t\tdoc = frappe.new_doc('Translation')\n\t\tdoc.language = lang\n\t\tdoc.source_text = source\n\t\tdoc.translated_text = translated_text\n\t\tdoc.save()\n\n\treturn translation_records\n\[email protected]()\ndef get_translations(source_text):\n\tif is_html(source_text):\n\t\tsource_text = strip_html_tags(source_text)\n\n\treturn frappe.db.get_list('Translation',\n\t\tfields = ['name', 'language', 'translated_text as translation'],\n\t\tfilters = {\n\t\t\t'source_text': source_text\n\t\t}\n\t)\n\[email protected]()\ndef get_messages(language, start=0, page_length=100, search_text=''):\n\tfrom frappe.frappeclient import FrappeClient\n\ttranslator = FrappeClient(get_translator_url())\n\ttranslated_dict = translator.post_api('translator.api.get_strings_for_translation', params=locals())\n\n\treturn translated_dict\n\n\[email protected]()\ndef get_source_additional_info(source, language=''):\n\tfrom frappe.frappeclient import FrappeClient\n\ttranslator = FrappeClient(get_translator_url())\n\treturn translator.post_api('translator.api.get_source_additional_info', params=locals())\n\[email protected]()\ndef get_contributions(language):\n\treturn frappe.get_all('Translation', fields=['*'], filters={\n\t\t'contributed': 1,\n\t})\n\[email protected]()\ndef get_contribution_status(message_id):\n\tfrom frappe.frappeclient import FrappeClient\n\tdoc = frappe.get_doc('Translation', message_id)\n\ttranslator = FrappeClient(get_translator_url())\n\tcontributed_translation = translator.get_api('translator.api.get_contribution_status', params={\n\t\t'translation_id': doc.contribution_docname\n\t})\n\treturn contributed_translation\n\ndef get_translator_url():\n\treturn frappe.get_hooks()['translator_url'][0]\n\[email protected](allow_guest=True)\ndef get_all_languages(with_language_name=False):\n\t\"\"\"Returns all 
language codes ar, ch etc\"\"\"\n\tdef get_language_codes():\n\t\treturn frappe.db.sql_list('select name from tabLanguage')\n\n\tdef get_all_language_with_name():\n\t\treturn frappe.db.get_all('Language', ['language_code', 'language_name'])\n\n\tif not frappe.db:\n\t\tfrappe.connect()\n\n\tif with_language_name:\n\t\treturn frappe.cache().get_value('languages_with_name', get_all_language_with_name)\n\telse:\n\t\treturn frappe.cache().get_value('languages', get_language_codes)\n\[email protected](allow_guest=True)\ndef set_preferred_language_cookie(preferred_language):\n\tfrappe.local.cookie_manager.set_cookie(\"preferred_language\", preferred_language)\n\ndef get_preferred_language_cookie():\n\treturn frappe.request.cookies.get(\"preferred_language\")\n", "path": "frappe/translate.py" } ]
diff --git a/frappe/public/js/frappe/form/controls/base_control.js b/frappe/public/js/frappe/form/controls/base_control.js
index 8c2c5c433866..2ee6246393ab 100644
--- a/frappe/public/js/frappe/form/controls/base_control.js
+++ b/frappe/public/js/frappe/form/controls/base_control.js
@@ -131,7 +131,7 @@ frappe.ui.form.Control = Class.extend({
 		if (!this.doc.__islocal) {
 			new frappe.views.TranslationManager({
 				'df': this.df,
-				'source_text': value,
+				'source_text': this.value,
 				'target_language': this.doc.language,
 				'doc': this.doc
 			});
diff --git a/frappe/translate.py b/frappe/translate.py
index 2a27c0361f3d..a600951db907 100644
--- a/frappe/translate.py
+++ b/frappe/translate.py
@@ -846,6 +846,9 @@ def update_translations_for_source(source=None, translation_dict=None):
 
 	translation_dict = json.loads(translation_dict)
 
+	if is_html(source):
+		source = strip_html_tags(source)
+
 	# for existing records
 	translation_records = frappe.db.get_values('Translation', {
 		'source_text': source
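Read together, the two hunks above (a) pass `this.value` rather than a bare `value` into the TranslationManager and (b) strip HTML tags from `source` before matching existing `Translation` records. Below is a minimal, self-contained sketch of that second normalization step; it is illustrative only, avoids frappe entirely, and its `strip_html_tags`/HTML check are simple stand-ins for frappe's own `strip_html_tags` and `is_html` helpers.

```python
# Minimal, self-contained sketch of the normalization step added in the diff
# above. It does not use frappe: strip_html_tags() and the crude HTML check
# below are stand-ins for frappe's strip_html_tags()/is_html() helpers.
import re


def strip_html_tags(text):
    """Remove HTML tags so '<b>Book</b>' and 'Book' map to the same key."""
    return re.sub(r"<[^>]+>", "", text)


def lookup_translation(source, translations):
    """Normalize a possibly-HTML source string before using it as a dict key."""
    if "<" in source and ">" in source:  # rough stand-in for is_html()
        source = strip_html_tags(source)
    return translations.get(source, source)


if __name__ == "__main__":
    stored = {"Book": "Buch"}
    # Without the normalization, the HTML variant would miss the stored record.
    print(lookup_translation("<b>Book</b>", stored))  # -> Buch
    print(lookup_translation("Book", stored))         # -> Buch
```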
goauthentik__authentik-3299
Get username from mailcow source

**Is your feature request related to a problem? Please describe.**
I would like to get a username from mailcow. With a username, enrollment for new users is simpler.

**Describe the solution you'd like**
Set the username to the full_name provided by the mailcow OAuth source.

**Additional context**
For other sources the username is also set redundantly to another attribute if there is no dedicated source attribute:

azure_ad.py:
```
"username": info.get("displayName"),
"name": info.get("displayName"),
```
discord.py:
```
"username": info.get("username"),
"name": info.get("username"),
```
facebook.py:
```
"username": info.get("name"),
"name": info.get("name"),
```
reddit.py:
```
"username": info.get("name"),
"name": info.get("name"),
```
[ { "content": "\"\"\"Mailcow OAuth Views\"\"\"\nfrom typing import Any, Optional\n\nfrom requests.exceptions import RequestException\nfrom structlog.stdlib import get_logger\n\nfrom authentik.sources.oauth.clients.oauth2 import OAuth2Client\nfrom authentik.sources.oauth.types.manager import MANAGER, SourceType\nfrom authentik.sources.oauth.views.callback import OAuthCallback\nfrom authentik.sources.oauth.views.redirect import OAuthRedirect\n\nLOGGER = get_logger()\n\n\nclass MailcowOAuthRedirect(OAuthRedirect):\n \"\"\"Mailcow OAuth2 Redirect\"\"\"\n\n def get_additional_parameters(self, source): # pragma: no cover\n return {\n \"scope\": [\"profile\"],\n }\n\n\nclass MailcowOAuth2Client(OAuth2Client):\n \"\"\"MailcowOAuth2Client, for some reason, mailcow does not like the default headers\"\"\"\n\n def get_profile_info(self, token: dict[str, str]) -> Optional[dict[str, Any]]:\n \"Fetch user profile information.\"\n profile_url = self.source.type.profile_url or \"\"\n if self.source.type.urls_customizable and self.source.profile_url:\n profile_url = self.source.profile_url\n try:\n response = self.session.request(\n \"get\",\n f\"{profile_url}?access_token={token['access_token']}\",\n )\n response.raise_for_status()\n except RequestException as exc:\n LOGGER.warning(\"Unable to fetch user profile\", exc=exc, body=response.text)\n return None\n else:\n return response.json()\n\n\nclass MailcowOAuth2Callback(OAuthCallback):\n \"\"\"Mailcow OAuth2 Callback\"\"\"\n\n client_class = MailcowOAuth2Client\n\n def get_user_enroll_context(\n self,\n info: dict[str, Any],\n ) -> dict[str, Any]:\n return {\n \"email\": info.get(\"email\"),\n \"name\": info.get(\"full_name\"),\n }\n\n\[email protected]()\nclass MailcowType(SourceType):\n \"\"\"Mailcow Type definition\"\"\"\n\n callback_view = MailcowOAuth2Callback\n redirect_view = MailcowOAuthRedirect\n name = \"Mailcow\"\n slug = \"mailcow\"\n\n urls_customizable = True\n", "path": "authentik/sources/oauth/types/mailcow.py" } ]
[ { "content": "\"\"\"Mailcow OAuth Views\"\"\"\nfrom typing import Any, Optional\n\nfrom requests.exceptions import RequestException\nfrom structlog.stdlib import get_logger\n\nfrom authentik.sources.oauth.clients.oauth2 import OAuth2Client\nfrom authentik.sources.oauth.types.manager import MANAGER, SourceType\nfrom authentik.sources.oauth.views.callback import OAuthCallback\nfrom authentik.sources.oauth.views.redirect import OAuthRedirect\n\nLOGGER = get_logger()\n\n\nclass MailcowOAuthRedirect(OAuthRedirect):\n \"\"\"Mailcow OAuth2 Redirect\"\"\"\n\n def get_additional_parameters(self, source): # pragma: no cover\n return {\n \"scope\": [\"profile\"],\n }\n\n\nclass MailcowOAuth2Client(OAuth2Client):\n \"\"\"MailcowOAuth2Client, for some reason, mailcow does not like the default headers\"\"\"\n\n def get_profile_info(self, token: dict[str, str]) -> Optional[dict[str, Any]]:\n \"Fetch user profile information.\"\n profile_url = self.source.type.profile_url or \"\"\n if self.source.type.urls_customizable and self.source.profile_url:\n profile_url = self.source.profile_url\n try:\n response = self.session.request(\n \"get\",\n f\"{profile_url}?access_token={token['access_token']}\",\n )\n response.raise_for_status()\n except RequestException as exc:\n LOGGER.warning(\"Unable to fetch user profile\", exc=exc, body=response.text)\n return None\n else:\n return response.json()\n\n\nclass MailcowOAuth2Callback(OAuthCallback):\n \"\"\"Mailcow OAuth2 Callback\"\"\"\n\n client_class = MailcowOAuth2Client\n\n def get_user_enroll_context(\n self,\n info: dict[str, Any],\n ) -> dict[str, Any]:\n return {\n \"username\": info.get(\"full_name\"),\n \"email\": info.get(\"email\"),\n \"name\": info.get(\"full_name\"),\n }\n\n\[email protected]()\nclass MailcowType(SourceType):\n \"\"\"Mailcow Type definition\"\"\"\n\n callback_view = MailcowOAuth2Callback\n redirect_view = MailcowOAuthRedirect\n name = \"Mailcow\"\n slug = \"mailcow\"\n\n urls_customizable = True\n", "path": "authentik/sources/oauth/types/mailcow.py" } ]
diff --git a/authentik/sources/oauth/types/mailcow.py b/authentik/sources/oauth/types/mailcow.py
index a296b9efaf7b..93999f5d254e 100644
--- a/authentik/sources/oauth/types/mailcow.py
+++ b/authentik/sources/oauth/types/mailcow.py
@@ -52,6 +52,7 @@ def get_user_enroll_context(
         info: dict[str, Any],
     ) -> dict[str, Any]:
         return {
+            "username": info.get("full_name"),
             "email": info.get("email"),
             "name": info.get("full_name"),
         }
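The hunk above is small enough to restate as a stand-alone sketch. The function below mirrors the enroll-context mapping the issue asks for (reusing mailcow's `full_name` as the username); it is an illustration, not authentik's `OAuthCallback` code, and the sample profile dict is made up.

```python
# Illustrative sketch only -- not authentik's implementation. It shows the
# mapping requested in the issue and applied in the diff above: reuse
# mailcow's full_name for both "username" and "name", since the provider
# exposes no dedicated username attribute.


def mailcow_enroll_context(info: dict) -> dict:
    """Build a user-enrollment context from a mailcow OAuth profile dict."""
    return {
        "username": info.get("full_name"),
        "email": info.get("email"),
        "name": info.get("full_name"),
    }


if __name__ == "__main__":
    # Hypothetical profile payload, for demonstration only.
    profile = {"full_name": "Jane Doe", "email": "[email protected]"}
    print(mailcow_enroll_context(profile))
    # {'username': 'Jane Doe', 'email': '[email protected]', 'name': 'Jane Doe'}
```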
bookwyrm-social__bookwyrm-1855
Let admins set default interface language

Right now, BookWyrm's interface language is determined by, in order of preference:

1. The language selected in the user's settings
2. The language requested by the browser, if it's available in the app
3. English

Admins should be able to set the default display language of the interface for logged-out viewers and users that don't have a preference saved.
[ { "content": "\"\"\" bookwyrm settings and configuration \"\"\"\nimport os\nfrom environs import Env\n\nimport requests\nfrom django.utils.translation import gettext_lazy as _\n\n\nenv = Env()\nenv.read_env()\nDOMAIN = env(\"DOMAIN\")\nVERSION = \"0.2.0\"\n\nPAGE_LENGTH = env(\"PAGE_LENGTH\", 15)\nDEFAULT_LANGUAGE = env(\"DEFAULT_LANGUAGE\", \"English\")\n\nJS_CACHE = \"76c5ff1f\"\n\n# email\nEMAIL_BACKEND = env(\"EMAIL_BACKEND\", \"django.core.mail.backends.smtp.EmailBackend\")\nEMAIL_HOST = env(\"EMAIL_HOST\")\nEMAIL_PORT = env(\"EMAIL_PORT\", 587)\nEMAIL_HOST_USER = env(\"EMAIL_HOST_USER\")\nEMAIL_HOST_PASSWORD = env(\"EMAIL_HOST_PASSWORD\")\nEMAIL_USE_TLS = env.bool(\"EMAIL_USE_TLS\", True)\nEMAIL_USE_SSL = env.bool(\"EMAIL_USE_SSL\", False)\nEMAIL_SENDER_NAME = env(\"EMAIL_SENDER_NAME\", \"admin\")\nEMAIL_SENDER_DOMAIN = env(\"EMAIL_SENDER_DOMAIN\", DOMAIN)\nEMAIL_SENDER = f\"{EMAIL_SENDER_NAME}@{EMAIL_SENDER_DOMAIN}\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nLOCALE_PATHS = [\n os.path.join(BASE_DIR, \"locale\"),\n]\nLANGUAGE_COOKIE_NAME = env.str(\"LANGUAGE_COOKIE_NAME\", \"django_language\")\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\n# Preview image\nENABLE_PREVIEW_IMAGES = env.bool(\"ENABLE_PREVIEW_IMAGES\", False)\nPREVIEW_BG_COLOR = env.str(\"PREVIEW_BG_COLOR\", \"use_dominant_color_light\")\nPREVIEW_TEXT_COLOR = env.str(\"PREVIEW_TEXT_COLOR\", \"#363636\")\nPREVIEW_IMG_WIDTH = env.int(\"PREVIEW_IMG_WIDTH\", 1200)\nPREVIEW_IMG_HEIGHT = env.int(\"PREVIEW_IMG_HEIGHT\", 630)\nPREVIEW_DEFAULT_COVER_COLOR = env.str(\"PREVIEW_DEFAULT_COVER_COLOR\", \"#002549\")\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = env(\"SECRET_KEY\")\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = env.bool(\"DEBUG\", True)\nUSE_HTTPS = env.bool(\"USE_HTTPS\", False)\n\nALLOWED_HOSTS = env.list(\"ALLOWED_HOSTS\", [\"*\"])\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.humanize\",\n \"django_rename_app\",\n \"bookwyrm\",\n \"celery\",\n \"imagekit\",\n \"storages\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"bookwyrm.middleware.TimezoneMiddleware\",\n \"bookwyrm.middleware.IPBlocklistMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"bookwyrm.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\"templates\"],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n 
\"bookwyrm.context_processors.site_settings\",\n ],\n },\n },\n]\n\nLOG_LEVEL = env(\"LOG_LEVEL\", \"INFO\").upper()\n# Override aspects of the default handler to our taste\n# See https://docs.djangoproject.com/en/3.2/topics/logging/#default-logging-configuration\n# for a reference to the defaults we're overriding\n#\n# It seems that in order to override anything you have to include its\n# entire dependency tree (handlers and filters) which makes this a\n# bit verbose\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"filters\": {\n # These are copied from the default configuration, required for\n # implementing mail_admins below\n \"require_debug_false\": {\n \"()\": \"django.utils.log.RequireDebugFalse\",\n },\n \"require_debug_true\": {\n \"()\": \"django.utils.log.RequireDebugTrue\",\n },\n },\n \"handlers\": {\n # Overrides the default handler to make it log to console\n # regardless of the DEBUG setting (default is to not log to\n # console if DEBUG=False)\n \"console\": {\n \"level\": LOG_LEVEL,\n \"class\": \"logging.StreamHandler\",\n },\n # This is copied as-is from the default logger, and is\n # required for the django section below\n \"mail_admins\": {\n \"level\": \"ERROR\",\n \"filters\": [\"require_debug_false\"],\n \"class\": \"django.utils.log.AdminEmailHandler\",\n },\n },\n \"loggers\": {\n # Install our new console handler for Django's logger, and\n # override the log level while we're at it\n \"django\": {\n \"handlers\": [\"console\", \"mail_admins\"],\n \"level\": LOG_LEVEL,\n },\n # Add a bookwyrm-specific logger\n \"bookwyrm\": {\n \"handlers\": [\"console\"],\n \"level\": LOG_LEVEL,\n },\n },\n}\n\n\nWSGI_APPLICATION = \"bookwyrm.wsgi.application\"\n\n# redis/activity streams settings\nREDIS_ACTIVITY_HOST = env(\"REDIS_ACTIVITY_HOST\", \"localhost\")\nREDIS_ACTIVITY_PORT = env(\"REDIS_ACTIVITY_PORT\", 6379)\nREDIS_ACTIVITY_PASSWORD = env(\"REDIS_ACTIVITY_PASSWORD\", None)\nREDIS_ACTIVITY_DB_INDEX = env(\"REDIS_ACTIVITY_DB_INDEX\", 0)\n\nMAX_STREAM_LENGTH = int(env(\"MAX_STREAM_LENGTH\", 200))\n\nSTREAMS = [\n {\"key\": \"home\", \"name\": _(\"Home Timeline\"), \"shortname\": _(\"Home\")},\n {\"key\": \"books\", \"name\": _(\"Books Timeline\"), \"shortname\": _(\"Books\")},\n]\n\n# Search configuration\n# total time in seconds that the instance will spend searching connectors\nSEARCH_TIMEOUT = int(env(\"SEARCH_TIMEOUT\", 15))\n# timeout for a query to an individual connector\nQUERY_TIMEOUT = int(env(\"QUERY_TIMEOUT\", 5))\n\n# Redis cache backend\nif env(\"USE_DUMMY_CACHE\", False):\n CACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.dummy.DummyCache\",\n }\n }\nelse:\n # pylint: disable=line-too-long\n CACHES = {\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://:{REDIS_ACTIVITY_PASSWORD}@{REDIS_ACTIVITY_HOST}:{REDIS_ACTIVITY_PORT}/{REDIS_ACTIVITY_DB_INDEX}\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n }\n }\n\n SESSION_ENGINE = \"django.contrib.sessions.backends.cache\"\n SESSION_CACHE_ALIAS = \"default\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql_psycopg2\",\n \"NAME\": env(\"POSTGRES_DB\", \"bookwyrm\"),\n \"USER\": env(\"POSTGRES_USER\", \"bookwyrm\"),\n \"PASSWORD\": env(\"POSTGRES_PASSWORD\", \"bookwyrm\"),\n \"HOST\": env(\"POSTGRES_HOST\", \"\"),\n \"PORT\": env(\"PGPORT\", 5432),\n },\n}\n\n\nLOGIN_URL = 
\"/login/\"\nAUTH_USER_MODEL = \"bookwyrm.User\"\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\n# pylint: disable=line-too-long\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\nLANGUAGES = [\n (\"en-us\", _(\"English\")),\n (\"de-de\", _(\"Deutsch (German)\")),\n (\"es-es\", _(\"Español (Spanish)\")),\n (\"gl-es\", _(\"Galego (Galician)\")),\n (\"it-it\", _(\"Italiano (Italian)\")),\n (\"fr-fr\", _(\"Français (French)\")),\n (\"lt-lt\", _(\"Lietuvių (Lithuanian)\")),\n (\"no-no\", _(\"Norsk (Norwegian)\")),\n (\"pt-br\", _(\"Português do Brasil (Brazilian Portuguese)\")),\n (\"pt-pt\", _(\"Português Europeu (European Portuguese)\")),\n (\"zh-hans\", _(\"简体中文 (Simplified Chinese)\")),\n (\"zh-hant\", _(\"繁體中文 (Traditional Chinese)\")),\n]\n\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\nagent = requests.utils.default_user_agent()\nUSER_AGENT = f\"{agent} (BookWyrm/{VERSION}; +https://{DOMAIN}/)\"\n\n# Imagekit generated thumbnails\nENABLE_THUMBNAIL_GENERATION = env.bool(\"ENABLE_THUMBNAIL_GENERATION\", False)\nIMAGEKIT_CACHEFILE_DIR = \"thumbnails\"\nIMAGEKIT_DEFAULT_CACHEFILE_STRATEGY = \"bookwyrm.thumbnail_generation.Strategy\"\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nPROJECT_DIR = os.path.dirname(os.path.abspath(__file__))\n\n# Storage\n\nPROTOCOL = \"http\"\nif USE_HTTPS:\n PROTOCOL = \"https\"\n\nUSE_S3 = env.bool(\"USE_S3\", False)\n\nif USE_S3:\n # AWS settings\n AWS_ACCESS_KEY_ID = env(\"AWS_ACCESS_KEY_ID\")\n AWS_SECRET_ACCESS_KEY = env(\"AWS_SECRET_ACCESS_KEY\")\n AWS_STORAGE_BUCKET_NAME = env(\"AWS_STORAGE_BUCKET_NAME\")\n AWS_S3_CUSTOM_DOMAIN = env(\"AWS_S3_CUSTOM_DOMAIN\")\n AWS_S3_REGION_NAME = env(\"AWS_S3_REGION_NAME\", \"\")\n AWS_S3_ENDPOINT_URL = env(\"AWS_S3_ENDPOINT_URL\")\n AWS_DEFAULT_ACL = \"public-read\"\n AWS_S3_OBJECT_PARAMETERS = {\"CacheControl\": \"max-age=86400\"}\n # S3 Static settings\n STATIC_LOCATION = \"static\"\n STATIC_URL = f\"{PROTOCOL}://{AWS_S3_CUSTOM_DOMAIN}/{STATIC_LOCATION}/\"\n STATICFILES_STORAGE = \"bookwyrm.storage_backends.StaticStorage\"\n # S3 Media settings\n MEDIA_LOCATION = \"images\"\n MEDIA_URL = f\"{PROTOCOL}://{AWS_S3_CUSTOM_DOMAIN}/{MEDIA_LOCATION}/\"\n MEDIA_FULL_URL = MEDIA_URL\n STATIC_FULL_URL = STATIC_URL\n DEFAULT_FILE_STORAGE = \"bookwyrm.storage_backends.ImagesStorage\"\n # I don't know if it's used, but the site crashes without it\n STATIC_ROOT = os.path.join(BASE_DIR, env(\"STATIC_ROOT\", \"static\"))\n MEDIA_ROOT = os.path.join(BASE_DIR, env(\"MEDIA_ROOT\", \"images\"))\nelse:\n STATIC_URL = \"/static/\"\n STATIC_ROOT = os.path.join(BASE_DIR, env(\"STATIC_ROOT\", \"static\"))\n MEDIA_URL = \"/images/\"\n MEDIA_FULL_URL = f\"{PROTOCOL}://{DOMAIN}{MEDIA_URL}\"\n STATIC_FULL_URL = f\"{PROTOCOL}://{DOMAIN}{STATIC_URL}\"\n MEDIA_ROOT = os.path.join(BASE_DIR, env(\"MEDIA_ROOT\", \"images\"))\n", "path": "bookwyrm/settings.py" } ]
[ { "content": "\"\"\" bookwyrm settings and configuration \"\"\"\nimport os\nfrom environs import Env\n\nimport requests\nfrom django.utils.translation import gettext_lazy as _\n\n\nenv = Env()\nenv.read_env()\nDOMAIN = env(\"DOMAIN\")\nVERSION = \"0.2.0\"\n\nPAGE_LENGTH = env(\"PAGE_LENGTH\", 15)\nDEFAULT_LANGUAGE = env(\"DEFAULT_LANGUAGE\", \"English\")\n\nJS_CACHE = \"76c5ff1f\"\n\n# email\nEMAIL_BACKEND = env(\"EMAIL_BACKEND\", \"django.core.mail.backends.smtp.EmailBackend\")\nEMAIL_HOST = env(\"EMAIL_HOST\")\nEMAIL_PORT = env(\"EMAIL_PORT\", 587)\nEMAIL_HOST_USER = env(\"EMAIL_HOST_USER\")\nEMAIL_HOST_PASSWORD = env(\"EMAIL_HOST_PASSWORD\")\nEMAIL_USE_TLS = env.bool(\"EMAIL_USE_TLS\", True)\nEMAIL_USE_SSL = env.bool(\"EMAIL_USE_SSL\", False)\nEMAIL_SENDER_NAME = env(\"EMAIL_SENDER_NAME\", \"admin\")\nEMAIL_SENDER_DOMAIN = env(\"EMAIL_SENDER_DOMAIN\", DOMAIN)\nEMAIL_SENDER = f\"{EMAIL_SENDER_NAME}@{EMAIL_SENDER_DOMAIN}\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nLOCALE_PATHS = [\n os.path.join(BASE_DIR, \"locale\"),\n]\nLANGUAGE_COOKIE_NAME = env.str(\"LANGUAGE_COOKIE_NAME\", \"django_language\")\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\n# Preview image\nENABLE_PREVIEW_IMAGES = env.bool(\"ENABLE_PREVIEW_IMAGES\", False)\nPREVIEW_BG_COLOR = env.str(\"PREVIEW_BG_COLOR\", \"use_dominant_color_light\")\nPREVIEW_TEXT_COLOR = env.str(\"PREVIEW_TEXT_COLOR\", \"#363636\")\nPREVIEW_IMG_WIDTH = env.int(\"PREVIEW_IMG_WIDTH\", 1200)\nPREVIEW_IMG_HEIGHT = env.int(\"PREVIEW_IMG_HEIGHT\", 630)\nPREVIEW_DEFAULT_COVER_COLOR = env.str(\"PREVIEW_DEFAULT_COVER_COLOR\", \"#002549\")\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = env(\"SECRET_KEY\")\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = env.bool(\"DEBUG\", True)\nUSE_HTTPS = env.bool(\"USE_HTTPS\", False)\n\nALLOWED_HOSTS = env.list(\"ALLOWED_HOSTS\", [\"*\"])\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.humanize\",\n \"django_rename_app\",\n \"bookwyrm\",\n \"celery\",\n \"imagekit\",\n \"storages\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.locale.LocaleMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"bookwyrm.middleware.TimezoneMiddleware\",\n \"bookwyrm.middleware.IPBlocklistMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"bookwyrm.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\"templates\"],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n 
\"bookwyrm.context_processors.site_settings\",\n ],\n },\n },\n]\n\nLOG_LEVEL = env(\"LOG_LEVEL\", \"INFO\").upper()\n# Override aspects of the default handler to our taste\n# See https://docs.djangoproject.com/en/3.2/topics/logging/#default-logging-configuration\n# for a reference to the defaults we're overriding\n#\n# It seems that in order to override anything you have to include its\n# entire dependency tree (handlers and filters) which makes this a\n# bit verbose\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"filters\": {\n # These are copied from the default configuration, required for\n # implementing mail_admins below\n \"require_debug_false\": {\n \"()\": \"django.utils.log.RequireDebugFalse\",\n },\n \"require_debug_true\": {\n \"()\": \"django.utils.log.RequireDebugTrue\",\n },\n },\n \"handlers\": {\n # Overrides the default handler to make it log to console\n # regardless of the DEBUG setting (default is to not log to\n # console if DEBUG=False)\n \"console\": {\n \"level\": LOG_LEVEL,\n \"class\": \"logging.StreamHandler\",\n },\n # This is copied as-is from the default logger, and is\n # required for the django section below\n \"mail_admins\": {\n \"level\": \"ERROR\",\n \"filters\": [\"require_debug_false\"],\n \"class\": \"django.utils.log.AdminEmailHandler\",\n },\n },\n \"loggers\": {\n # Install our new console handler for Django's logger, and\n # override the log level while we're at it\n \"django\": {\n \"handlers\": [\"console\", \"mail_admins\"],\n \"level\": LOG_LEVEL,\n },\n # Add a bookwyrm-specific logger\n \"bookwyrm\": {\n \"handlers\": [\"console\"],\n \"level\": LOG_LEVEL,\n },\n },\n}\n\n\nWSGI_APPLICATION = \"bookwyrm.wsgi.application\"\n\n# redis/activity streams settings\nREDIS_ACTIVITY_HOST = env(\"REDIS_ACTIVITY_HOST\", \"localhost\")\nREDIS_ACTIVITY_PORT = env(\"REDIS_ACTIVITY_PORT\", 6379)\nREDIS_ACTIVITY_PASSWORD = env(\"REDIS_ACTIVITY_PASSWORD\", None)\nREDIS_ACTIVITY_DB_INDEX = env(\"REDIS_ACTIVITY_DB_INDEX\", 0)\n\nMAX_STREAM_LENGTH = int(env(\"MAX_STREAM_LENGTH\", 200))\n\nSTREAMS = [\n {\"key\": \"home\", \"name\": _(\"Home Timeline\"), \"shortname\": _(\"Home\")},\n {\"key\": \"books\", \"name\": _(\"Books Timeline\"), \"shortname\": _(\"Books\")},\n]\n\n# Search configuration\n# total time in seconds that the instance will spend searching connectors\nSEARCH_TIMEOUT = int(env(\"SEARCH_TIMEOUT\", 15))\n# timeout for a query to an individual connector\nQUERY_TIMEOUT = int(env(\"QUERY_TIMEOUT\", 5))\n\n# Redis cache backend\nif env(\"USE_DUMMY_CACHE\", False):\n CACHES = {\n \"default\": {\n \"BACKEND\": \"django.core.cache.backends.dummy.DummyCache\",\n }\n }\nelse:\n # pylint: disable=line-too-long\n CACHES = {\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://:{REDIS_ACTIVITY_PASSWORD}@{REDIS_ACTIVITY_HOST}:{REDIS_ACTIVITY_PORT}/{REDIS_ACTIVITY_DB_INDEX}\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n }\n }\n\n SESSION_ENGINE = \"django.contrib.sessions.backends.cache\"\n SESSION_CACHE_ALIAS = \"default\"\n\n# Database\n# https://docs.djangoproject.com/en/3.2/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql_psycopg2\",\n \"NAME\": env(\"POSTGRES_DB\", \"bookwyrm\"),\n \"USER\": env(\"POSTGRES_USER\", \"bookwyrm\"),\n \"PASSWORD\": env(\"POSTGRES_PASSWORD\", \"bookwyrm\"),\n \"HOST\": env(\"POSTGRES_HOST\", \"\"),\n \"PORT\": env(\"PGPORT\", 5432),\n },\n}\n\n\nLOGIN_URL = 
\"/login/\"\nAUTH_USER_MODEL = \"bookwyrm.User\"\n\n# Password validation\n# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators\n\n# pylint: disable=line-too-long\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.2/topics/i18n/\n\nLANGUAGE_CODE = env(\"LANGUAGE_CODE\", \"en-us\")\nLANGUAGES = [\n (\"en-us\", _(\"English\")),\n (\"de-de\", _(\"Deutsch (German)\")),\n (\"es-es\", _(\"Español (Spanish)\")),\n (\"gl-es\", _(\"Galego (Galician)\")),\n (\"it-it\", _(\"Italiano (Italian)\")),\n (\"fr-fr\", _(\"Français (French)\")),\n (\"lt-lt\", _(\"Lietuvių (Lithuanian)\")),\n (\"no-no\", _(\"Norsk (Norwegian)\")),\n (\"pt-br\", _(\"Português do Brasil (Brazilian Portuguese)\")),\n (\"pt-pt\", _(\"Português Europeu (European Portuguese)\")),\n (\"zh-hans\", _(\"简体中文 (Simplified Chinese)\")),\n (\"zh-hant\", _(\"繁體中文 (Traditional Chinese)\")),\n]\n\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\nagent = requests.utils.default_user_agent()\nUSER_AGENT = f\"{agent} (BookWyrm/{VERSION}; +https://{DOMAIN}/)\"\n\n# Imagekit generated thumbnails\nENABLE_THUMBNAIL_GENERATION = env.bool(\"ENABLE_THUMBNAIL_GENERATION\", False)\nIMAGEKIT_CACHEFILE_DIR = \"thumbnails\"\nIMAGEKIT_DEFAULT_CACHEFILE_STRATEGY = \"bookwyrm.thumbnail_generation.Strategy\"\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.2/howto/static-files/\n\nPROJECT_DIR = os.path.dirname(os.path.abspath(__file__))\n\n# Storage\n\nPROTOCOL = \"http\"\nif USE_HTTPS:\n PROTOCOL = \"https\"\n\nUSE_S3 = env.bool(\"USE_S3\", False)\n\nif USE_S3:\n # AWS settings\n AWS_ACCESS_KEY_ID = env(\"AWS_ACCESS_KEY_ID\")\n AWS_SECRET_ACCESS_KEY = env(\"AWS_SECRET_ACCESS_KEY\")\n AWS_STORAGE_BUCKET_NAME = env(\"AWS_STORAGE_BUCKET_NAME\")\n AWS_S3_CUSTOM_DOMAIN = env(\"AWS_S3_CUSTOM_DOMAIN\")\n AWS_S3_REGION_NAME = env(\"AWS_S3_REGION_NAME\", \"\")\n AWS_S3_ENDPOINT_URL = env(\"AWS_S3_ENDPOINT_URL\")\n AWS_DEFAULT_ACL = \"public-read\"\n AWS_S3_OBJECT_PARAMETERS = {\"CacheControl\": \"max-age=86400\"}\n # S3 Static settings\n STATIC_LOCATION = \"static\"\n STATIC_URL = f\"{PROTOCOL}://{AWS_S3_CUSTOM_DOMAIN}/{STATIC_LOCATION}/\"\n STATICFILES_STORAGE = \"bookwyrm.storage_backends.StaticStorage\"\n # S3 Media settings\n MEDIA_LOCATION = \"images\"\n MEDIA_URL = f\"{PROTOCOL}://{AWS_S3_CUSTOM_DOMAIN}/{MEDIA_LOCATION}/\"\n MEDIA_FULL_URL = MEDIA_URL\n STATIC_FULL_URL = STATIC_URL\n DEFAULT_FILE_STORAGE = \"bookwyrm.storage_backends.ImagesStorage\"\n # I don't know if it's used, but the site crashes without it\n STATIC_ROOT = os.path.join(BASE_DIR, env(\"STATIC_ROOT\", \"static\"))\n MEDIA_ROOT = os.path.join(BASE_DIR, env(\"MEDIA_ROOT\", \"images\"))\nelse:\n STATIC_URL = \"/static/\"\n STATIC_ROOT = os.path.join(BASE_DIR, env(\"STATIC_ROOT\", \"static\"))\n MEDIA_URL = \"/images/\"\n MEDIA_FULL_URL = f\"{PROTOCOL}://{DOMAIN}{MEDIA_URL}\"\n STATIC_FULL_URL = f\"{PROTOCOL}://{DOMAIN}{STATIC_URL}\"\n MEDIA_ROOT = os.path.join(BASE_DIR, env(\"MEDIA_ROOT\", \"images\"))\n", "path": "bookwyrm/settings.py" } ]
diff --git a/.env.example b/.env.example index 2000a7165c..ca6f65bb78 100644 --- a/.env.example +++ b/.env.example @@ -8,6 +8,8 @@ USE_HTTPS=true DOMAIN=your.domain.here [email protected] +# Instance defualt language (see options at bookwyrm/settings.py "LANGUAGES" +LANGUAGE_CODE="en-us" # Used for deciding which editions to prefer DEFAULT_LANGUAGE="English" diff --git a/bookwyrm/settings.py b/bookwyrm/settings.py index 197e672c10..e86d279289 100644 --- a/bookwyrm/settings.py +++ b/bookwyrm/settings.py @@ -243,7 +243,7 @@ # Internationalization # https://docs.djangoproject.com/en/3.2/topics/i18n/ -LANGUAGE_CODE = "en-us" +LANGUAGE_CODE = env("LANGUAGE_CODE", "en-us") LANGUAGES = [ ("en-us", _("English")), ("de-de", _("Deutsch (German)")),
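As a hedged aside on the diff above: the change relies on environs' default-argument behavior, where `env("NAME", default)` returns the environment value when it is set and falls back to the default otherwise, so `LANGUAGE_CODE` becomes instance-configurable while existing deployments keep `en-us`. A minimal sketch (only the `LANGUAGE_CODE` name mirrors the real setting; the rest is illustrative):

```python
# Minimal sketch of the environs fallback pattern used in the diff above.
# Only LANGUAGE_CODE mirrors the real setting; the values here are illustrative.
import os
from environs import Env

env = Env()

# No LANGUAGE_CODE in the environment -> the default is returned.
print(env("LANGUAGE_CODE", "en-us"))   # -> "en-us"

# With the variable exported (e.g. via a .env file), the override wins.
os.environ["LANGUAGE_CODE"] = "pt-br"
print(env("LANGUAGE_CODE", "en-us"))   # -> "pt-br"
```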
pytest-dev__pytest-django-820
Test re-ordering breaks Pytest's --failed-first and --stepwise options The test re-ordering introduced in response to #214 seems to execute after pytest's own `--failed-first`, `--new-first`, `--stepwise`, etc. ordering options, breaking them. We ran across this in mdn/kuma#6531, where even with `--failed-first` pytest was running dozens of known good tests before executing the failed tests. Removing the `pytest_collection_modifyitems` function or decorating it with `@pytest.hookimpl(tryfirst=True)` seems to resolve this, though I'm not familiar enough with pytest to know if that's an appropriate solution, or if there's something we should be doing on our end instead.
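A hedged sketch of the workaround described above, assuming it lives in a project-level `conftest.py`: `pytest.hookimpl(tryfirst=True)` is real pytest API and asks this implementation of `pytest_collection_modifyitems` to run before the others, so the reordering done for `--failed-first`, `--new-first`, and `--stepwise` is applied afterwards instead of being overwritten by the grouping sort. The ordering key below is a simplified stand-in for pytest-django's real logic, not its actual implementation.

```python
# conftest.py -- illustrative sketch, not pytest-django's actual implementation.
import pytest


@pytest.hookimpl(tryfirst=True)
def pytest_collection_modifyitems(items):
    """Group DB-using tests, but run early so that --failed-first/--stepwise
    (which also reorder items) get the final say."""

    def get_order_number(item):
        # Simplified stand-in for the plugin's transactional/db grouping.
        fixtures = getattr(item, "fixturenames", [])
        if "transactional_db" in fixtures:
            return 1
        if "db" in fixtures:
            return 0
        return 2

    # sorted() is stable, so relative order within each group is preserved.
    items[:] = sorted(items, key=get_order_number)
```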
[ { "content": "\"\"\"A pytest plugin which helps testing Django applications\n\nThis plugin handles creating and destroying the test environment and\ntest database and provides some useful text fixtures.\n\"\"\"\n\nimport contextlib\nimport inspect\nfrom functools import reduce\nimport os\nimport sys\nimport types\n\nimport pytest\nfrom pkg_resources import parse_version\n\nfrom .django_compat import is_django_unittest # noqa\nfrom .fixtures import django_assert_num_queries # noqa\nfrom .fixtures import django_assert_max_num_queries # noqa\nfrom .fixtures import django_db_setup # noqa\nfrom .fixtures import django_db_use_migrations # noqa\nfrom .fixtures import django_db_keepdb # noqa\nfrom .fixtures import django_db_createdb # noqa\nfrom .fixtures import django_db_modify_db_settings # noqa\nfrom .fixtures import django_db_modify_db_settings_parallel_suffix # noqa\nfrom .fixtures import django_db_modify_db_settings_tox_suffix # noqa\nfrom .fixtures import django_db_modify_db_settings_xdist_suffix # noqa\nfrom .fixtures import _live_server_helper # noqa\nfrom .fixtures import admin_client # noqa\nfrom .fixtures import admin_user # noqa\nfrom .fixtures import client # noqa\nfrom .fixtures import db # noqa\nfrom .fixtures import django_user_model # noqa\nfrom .fixtures import django_username_field # noqa\nfrom .fixtures import live_server # noqa\nfrom .fixtures import django_db_reset_sequences # noqa\nfrom .fixtures import rf # noqa\nfrom .fixtures import settings # noqa\nfrom .fixtures import transactional_db # noqa\n\nfrom .lazy_django import django_settings_is_configured, skip_if_no_django\n\ntry:\n import pathlib\nexcept ImportError:\n import pathlib2 as pathlib\n\n\nSETTINGS_MODULE_ENV = \"DJANGO_SETTINGS_MODULE\"\nCONFIGURATION_ENV = \"DJANGO_CONFIGURATION\"\nINVALID_TEMPLATE_VARS_ENV = \"FAIL_INVALID_TEMPLATE_VARS\"\n\nPY2 = sys.version_info[0] == 2\n\n# pytest 4.2 handles unittest setup/teardown itself via wrapping fixtures.\n_handle_unittest_methods = parse_version(pytest.__version__) < parse_version(\"4.2\")\n\n_report_header = []\n\n\n# ############### pytest hooks ################\n\n\ndef pytest_addoption(parser):\n group = parser.getgroup(\"django\")\n group._addoption(\n \"--reuse-db\",\n action=\"store_true\",\n dest=\"reuse_db\",\n default=False,\n help=\"Re-use the testing database if it already exists, \"\n \"and do not remove it when the test finishes.\",\n )\n group._addoption(\n \"--create-db\",\n action=\"store_true\",\n dest=\"create_db\",\n default=False,\n help=\"Re-create the database, even if it exists. 
This \"\n \"option can be used to override --reuse-db.\",\n )\n group._addoption(\n \"--ds\",\n action=\"store\",\n type=str,\n dest=\"ds\",\n default=None,\n help=\"Set DJANGO_SETTINGS_MODULE.\",\n )\n group._addoption(\n \"--dc\",\n action=\"store\",\n type=str,\n dest=\"dc\",\n default=None,\n help=\"Set DJANGO_CONFIGURATION.\",\n )\n group._addoption(\n \"--nomigrations\",\n \"--no-migrations\",\n action=\"store_true\",\n dest=\"nomigrations\",\n default=False,\n help=\"Disable Django migrations on test setup\",\n )\n group._addoption(\n \"--migrations\",\n action=\"store_false\",\n dest=\"nomigrations\",\n default=False,\n help=\"Enable Django migrations on test setup\",\n )\n parser.addini(\n CONFIGURATION_ENV, \"django-configurations class to use by pytest-django.\"\n )\n group._addoption(\n \"--liveserver\",\n default=None,\n help=\"Address and port for the live_server fixture.\",\n )\n parser.addini(\n SETTINGS_MODULE_ENV, \"Django settings module to use by pytest-django.\"\n )\n\n parser.addini(\n \"django_find_project\",\n \"Automatically find and add a Django project to the \" \"Python path.\",\n type=\"bool\",\n default=True,\n )\n group._addoption(\n \"--fail-on-template-vars\",\n action=\"store_true\",\n dest=\"itv\",\n default=False,\n help=\"Fail for invalid variables in templates.\",\n )\n parser.addini(\n INVALID_TEMPLATE_VARS_ENV,\n \"Fail for invalid variables in templates.\",\n type=\"bool\",\n default=False,\n )\n\n\nPROJECT_FOUND = (\n \"pytest-django found a Django project in %s \"\n \"(it contains manage.py) and added it to the Python path.\\n\"\n 'If this is wrong, add \"django_find_project = false\" to '\n \"pytest.ini and explicitly manage your Python path.\"\n)\n\nPROJECT_NOT_FOUND = (\n \"pytest-django could not find a Django project \"\n \"(no manage.py file could be found). 
You must \"\n \"explicitly add your Django project to the Python path \"\n \"to have it picked up.\"\n)\n\nPROJECT_SCAN_DISABLED = (\n \"pytest-django did not search for Django \"\n \"projects since it is disabled in the configuration \"\n '(\"django_find_project = false\")'\n)\n\n\[email protected]\ndef _handle_import_error(extra_message):\n try:\n yield\n except ImportError as e:\n django_msg = (e.args[0] + \"\\n\\n\") if e.args else \"\"\n msg = django_msg + extra_message\n raise ImportError(msg)\n\n\ndef _add_django_project_to_path(args):\n def is_django_project(path):\n try:\n return path.is_dir() and (path / \"manage.py\").exists()\n except OSError:\n return False\n\n def arg_to_path(arg):\n # Test classes or functions can be appended to paths separated by ::\n arg = arg.split(\"::\", 1)[0]\n return pathlib.Path(arg)\n\n def find_django_path(args):\n args = map(str, args)\n args = [arg_to_path(x) for x in args if not x.startswith(\"-\")]\n\n cwd = pathlib.Path.cwd()\n if not args:\n args.append(cwd)\n elif cwd not in args:\n args.append(cwd)\n\n for arg in args:\n if is_django_project(arg):\n return arg\n for parent in arg.parents:\n if is_django_project(parent):\n return parent\n return None\n\n project_dir = find_django_path(args)\n if project_dir:\n sys.path.insert(0, str(project_dir.absolute()))\n return PROJECT_FOUND % project_dir\n return PROJECT_NOT_FOUND\n\n\ndef _setup_django():\n if \"django\" not in sys.modules:\n return\n\n import django.conf\n\n # Avoid force-loading Django when settings are not properly configured.\n if not django.conf.settings.configured:\n return\n\n import django.apps\n\n if not django.apps.apps.ready:\n django.setup()\n\n _blocking_manager.block()\n\n\ndef _get_boolean_value(x, name, default=None):\n if x is None:\n return default\n if x in (True, False):\n return x\n possible_values = {\"true\": True, \"false\": False, \"1\": True, \"0\": False}\n try:\n return possible_values[x.lower()]\n except KeyError:\n raise ValueError(\n \"{} is not a valid value for {}. \"\n \"It must be one of {}.\" % (x, name, \", \".join(possible_values.keys()))\n )\n\n\ndef pytest_load_initial_conftests(early_config, parser, args):\n # Register the marks\n early_config.addinivalue_line(\n \"markers\",\n \"django_db(transaction=False): Mark the test as using \"\n \"the Django test database. The *transaction* argument marks will \"\n \"allow you to use real transactions in the test like Django's \"\n \"TransactionTestCase.\",\n )\n early_config.addinivalue_line(\n \"markers\",\n \"urls(modstr): Use a different URLconf for this test, similar to \"\n \"the `urls` attribute of Django's `TestCase` objects. *modstr* is \"\n \"a string specifying the module of a URL config, e.g. 
\"\n '\"my_app.test_urls\".',\n )\n early_config.addinivalue_line(\n \"markers\",\n \"ignore_template_errors(): ignore errors from invalid template \"\n \"variables (if --fail-on-template-vars is used).\",\n )\n\n options = parser.parse_known_args(args)\n\n if options.version or options.help:\n return\n\n django_find_project = _get_boolean_value(\n early_config.getini(\"django_find_project\"), \"django_find_project\"\n )\n\n if django_find_project:\n _django_project_scan_outcome = _add_django_project_to_path(args)\n else:\n _django_project_scan_outcome = PROJECT_SCAN_DISABLED\n\n if (\n options.itv\n or _get_boolean_value(\n os.environ.get(INVALID_TEMPLATE_VARS_ENV), INVALID_TEMPLATE_VARS_ENV\n )\n or early_config.getini(INVALID_TEMPLATE_VARS_ENV)\n ):\n os.environ[INVALID_TEMPLATE_VARS_ENV] = \"true\"\n\n def _get_option_with_source(option, envname):\n if option:\n return option, \"option\"\n if envname in os.environ:\n return os.environ[envname], \"env\"\n cfgval = early_config.getini(envname)\n if cfgval:\n return cfgval, \"ini\"\n return None, None\n\n ds, ds_source = _get_option_with_source(options.ds, SETTINGS_MODULE_ENV)\n dc, dc_source = _get_option_with_source(options.dc, CONFIGURATION_ENV)\n\n if ds:\n _report_header.append(\"settings: %s (from %s)\" % (ds, ds_source))\n os.environ[SETTINGS_MODULE_ENV] = ds\n\n if dc:\n _report_header.append(\"configuration: %s (from %s)\" % (dc, dc_source))\n os.environ[CONFIGURATION_ENV] = dc\n\n # Install the django-configurations importer\n import configurations.importer\n\n configurations.importer.install()\n\n # Forcefully load Django settings, throws ImportError or\n # ImproperlyConfigured if settings cannot be loaded.\n from django.conf import settings as dj_settings\n\n with _handle_import_error(_django_project_scan_outcome):\n dj_settings.DATABASES\n\n _setup_django()\n\n\ndef pytest_report_header():\n if _report_header:\n return [\"django: \" + \", \".join(_report_header)]\n\n\[email protected]\ndef pytest_configure():\n # Allow Django settings to be configured in a user pytest_configure call,\n # but make sure we call django.setup()\n _setup_django()\n\n\ndef _classmethod_is_defined_at_leaf(cls, method_name):\n super_method = None\n\n for base_cls in cls.__mro__[1:]: # pragma: no branch\n super_method = base_cls.__dict__.get(method_name)\n if super_method is not None:\n break\n\n assert super_method is not None, (\n \"%s could not be found in base classes\" % method_name\n )\n\n method = getattr(cls, method_name)\n\n try:\n f = method.__func__\n except AttributeError:\n pytest.fail(\"%s.%s should be a classmethod\" % (cls, method_name))\n if PY2 and not (\n inspect.ismethod(method)\n and inspect.isclass(method.__self__)\n and issubclass(cls, method.__self__)\n ):\n pytest.fail(\"%s.%s should be a classmethod\" % (cls, method_name))\n return f is not super_method.__func__\n\n\n_disabled_classmethods = {}\n\n\ndef _disable_class_methods(cls):\n if cls in _disabled_classmethods:\n return\n\n _disabled_classmethods[cls] = (\n # Get the classmethod object (not the resulting bound method),\n # otherwise inheritance will be broken when restoring.\n cls.__dict__.get(\"setUpClass\"),\n _classmethod_is_defined_at_leaf(cls, \"setUpClass\"),\n cls.__dict__.get(\"tearDownClass\"),\n _classmethod_is_defined_at_leaf(cls, \"tearDownClass\"),\n )\n\n cls.setUpClass = types.MethodType(lambda cls: None, cls)\n cls.tearDownClass = types.MethodType(lambda cls: None, cls)\n\n\ndef _restore_class_methods(cls):\n (\n setUpClass,\n restore_setUpClass,\n 
tearDownClass,\n restore_tearDownClass,\n ) = _disabled_classmethods.pop(cls)\n\n try:\n del cls.setUpClass\n except AttributeError:\n raise\n\n try:\n del cls.tearDownClass\n except AttributeError:\n pass\n\n if restore_setUpClass:\n cls.setUpClass = setUpClass\n\n if restore_tearDownClass:\n cls.tearDownClass = tearDownClass\n\n\ndef pytest_runtest_setup(item):\n if _handle_unittest_methods:\n if django_settings_is_configured() and is_django_unittest(item):\n _disable_class_methods(item.cls)\n\n\ndef pytest_collection_modifyitems(items):\n def get_order_number(test):\n marker_db = test.get_closest_marker('django_db')\n if marker_db:\n transaction = validate_django_db(marker_db)[0]\n if transaction is True:\n return 1\n else:\n transaction = None\n\n fixtures = getattr(test, 'fixturenames', [])\n if \"transactional_db\" in fixtures:\n return 1\n\n if transaction is False:\n return 0\n if \"db\" in fixtures:\n return 0\n\n return 2\n\n items[:] = sorted(items, key=get_order_number)\n\n\[email protected](autouse=True, scope=\"session\")\ndef django_test_environment(request):\n \"\"\"\n Ensure that Django is loaded and has its testing environment setup.\n\n XXX It is a little dodgy that this is an autouse fixture. Perhaps\n an email fixture should be requested in order to be able to\n use the Django email machinery just like you need to request a\n db fixture for access to the Django database, etc. But\n without duplicating a lot more of Django's test support code\n we need to follow this model.\n \"\"\"\n if django_settings_is_configured():\n _setup_django()\n from django.conf import settings as dj_settings\n from django.test.utils import setup_test_environment, teardown_test_environment\n\n dj_settings.DEBUG = False\n setup_test_environment()\n request.addfinalizer(teardown_test_environment)\n\n\[email protected](scope=\"session\")\ndef django_db_blocker():\n \"\"\"Wrapper around Django's database access.\n\n This object can be used to re-enable database access. 
This fixture is used\n internally in pytest-django to build the other fixtures and can be used for\n special database handling.\n\n The object is a context manager and provides the methods\n .unblock()/.block() and .restore() to temporarily enable database access.\n\n This is an advanced feature that is meant to be used to implement database\n fixtures.\n \"\"\"\n if not django_settings_is_configured():\n return None\n\n return _blocking_manager\n\n\[email protected](autouse=True)\ndef _django_db_marker(request):\n \"\"\"Implement the django_db marker, internal to pytest-django.\n\n This will dynamically request the ``db``, ``transactional_db`` or\n ``django_db_reset_sequences`` fixtures as required by the django_db marker.\n \"\"\"\n marker = request.node.get_closest_marker(\"django_db\")\n if marker:\n transaction, reset_sequences = validate_django_db(marker)\n if reset_sequences:\n request.getfixturevalue(\"django_db_reset_sequences\")\n elif transaction:\n request.getfixturevalue(\"transactional_db\")\n else:\n request.getfixturevalue(\"db\")\n\n\[email protected](autouse=True, scope=\"class\")\ndef _django_setup_unittest(request, django_db_blocker):\n \"\"\"Setup a django unittest, internal to pytest-django.\"\"\"\n if not django_settings_is_configured() or not is_django_unittest(request):\n yield\n return\n\n from _pytest.unittest import TestCaseFunction\n\n if \"debug\" in TestCaseFunction.runtest.__code__.co_names:\n # Fix pytest (https://github.com/pytest-dev/pytest/issues/5991), only\n # if \"self._testcase.debug()\" is being used (forward compatible).\n from _pytest.monkeypatch import MonkeyPatch\n\n def non_debugging_runtest(self):\n self._testcase(result=self)\n\n mp_debug = MonkeyPatch()\n mp_debug.setattr(\"_pytest.unittest.TestCaseFunction.runtest\", non_debugging_runtest)\n else:\n mp_debug = None\n\n request.getfixturevalue(\"django_db_setup\")\n\n cls = request.node.cls\n\n with django_db_blocker.unblock():\n if _handle_unittest_methods:\n _restore_class_methods(cls)\n cls.setUpClass()\n _disable_class_methods(cls)\n\n yield\n\n _restore_class_methods(cls)\n cls.tearDownClass()\n else:\n yield\n\n if mp_debug:\n mp_debug.undo()\n\n\[email protected](scope=\"function\", autouse=True)\ndef _dj_autoclear_mailbox():\n if not django_settings_is_configured():\n return\n\n from django.core import mail\n\n del mail.outbox[:]\n\n\[email protected](scope=\"function\")\ndef mailoutbox(django_mail_patch_dns, _dj_autoclear_mailbox):\n if not django_settings_is_configured():\n return\n\n from django.core import mail\n\n return mail.outbox\n\n\[email protected](scope=\"function\")\ndef django_mail_patch_dns(monkeypatch, django_mail_dnsname):\n from django.core import mail\n\n monkeypatch.setattr(mail.message, \"DNS_NAME\", django_mail_dnsname)\n\n\[email protected](scope=\"function\")\ndef django_mail_dnsname():\n return \"fake-tests.example.com\"\n\n\[email protected](autouse=True, scope=\"function\")\ndef _django_set_urlconf(request):\n \"\"\"Apply the @pytest.mark.urls marker, internal to pytest-django.\"\"\"\n marker = request.node.get_closest_marker(\"urls\")\n if marker:\n skip_if_no_django()\n import django.conf\n\n try:\n from django.urls import clear_url_caches, set_urlconf\n except ImportError:\n # Removed in Django 2.0\n from django.core.urlresolvers import clear_url_caches, set_urlconf\n\n urls = validate_urls(marker)\n original_urlconf = django.conf.settings.ROOT_URLCONF\n django.conf.settings.ROOT_URLCONF = urls\n clear_url_caches()\n set_urlconf(None)\n\n def 
restore():\n django.conf.settings.ROOT_URLCONF = original_urlconf\n # Copy the pattern from\n # https://github.com/django/django/blob/master/django/test/signals.py#L152\n clear_url_caches()\n set_urlconf(None)\n\n request.addfinalizer(restore)\n\n\[email protected](autouse=True, scope=\"session\")\ndef _fail_for_invalid_template_variable():\n \"\"\"Fixture that fails for invalid variables in templates.\n\n This fixture will fail each test that uses django template rendering\n should a template contain an invalid template variable.\n The fail message will include the name of the invalid variable and\n in most cases the template name.\n\n It does not raise an exception, but fails, as the stack trace doesn't\n offer any helpful information to debug.\n This behavior can be switched off using the marker:\n ``pytest.mark.ignore_template_errors``\n \"\"\"\n\n class InvalidVarException(object):\n \"\"\"Custom handler for invalid strings in templates.\"\"\"\n\n def __init__(self):\n self.fail = True\n\n def __contains__(self, key):\n \"\"\"There is a test for '%s' in TEMPLATE_STRING_IF_INVALID.\"\"\"\n return key == \"%s\"\n\n @staticmethod\n def _get_origin():\n stack = inspect.stack()\n\n # Try to use topmost `self.origin` first (Django 1.9+, and with\n # TEMPLATE_DEBUG)..\n for f in stack[2:]:\n func = f[3]\n if func == \"render\":\n frame = f[0]\n try:\n origin = frame.f_locals[\"self\"].origin\n except (AttributeError, KeyError):\n continue\n if origin is not None:\n return origin\n\n from django.template import Template\n\n # finding the ``render`` needle in the stack\n frame = reduce(\n lambda x, y: y[3] == \"render\" and \"base.py\" in y[1] and y or x, stack\n )\n # assert 0, stack\n frame = frame[0]\n # finding only the frame locals in all frame members\n f_locals = reduce(\n lambda x, y: y[0] == \"f_locals\" and y or x, inspect.getmembers(frame)\n )[1]\n # ``django.template.base.Template``\n template = f_locals[\"self\"]\n if isinstance(template, Template):\n return template.name\n\n def __mod__(self, var):\n \"\"\"Handle TEMPLATE_STRING_IF_INVALID % var.\"\"\"\n origin = self._get_origin()\n if origin:\n msg = \"Undefined template variable '%s' in '%s'\" % (var, origin)\n else:\n msg = \"Undefined template variable '%s'\" % var\n if self.fail:\n pytest.fail(msg)\n else:\n return msg\n\n if (\n os.environ.get(INVALID_TEMPLATE_VARS_ENV, \"false\") == \"true\"\n and django_settings_is_configured()\n ):\n from django.conf import settings as dj_settings\n\n if dj_settings.TEMPLATES:\n dj_settings.TEMPLATES[0][\"OPTIONS\"][\n \"string_if_invalid\"\n ] = InvalidVarException()\n else:\n dj_settings.TEMPLATE_STRING_IF_INVALID = InvalidVarException()\n\n\[email protected](autouse=True)\ndef _template_string_if_invalid_marker(request):\n \"\"\"Apply the @pytest.mark.ignore_template_errors marker,\n internal to pytest-django.\"\"\"\n marker = request.keywords.get(\"ignore_template_errors\", None)\n if os.environ.get(INVALID_TEMPLATE_VARS_ENV, \"false\") == \"true\":\n if marker and django_settings_is_configured():\n from django.conf import settings as dj_settings\n\n if dj_settings.TEMPLATES:\n dj_settings.TEMPLATES[0][\"OPTIONS\"][\"string_if_invalid\"].fail = False\n else:\n dj_settings.TEMPLATE_STRING_IF_INVALID.fail = False\n\n\[email protected](autouse=True, scope=\"function\")\ndef _django_clear_site_cache():\n \"\"\"Clears ``django.contrib.sites.models.SITE_CACHE`` to avoid\n unexpected behavior with cached site objects.\n \"\"\"\n\n if django_settings_is_configured():\n from django.conf 
import settings as dj_settings\n\n if \"django.contrib.sites\" in dj_settings.INSTALLED_APPS:\n from django.contrib.sites.models import Site\n\n Site.objects.clear_cache()\n\n\n# ############### Helper Functions ################\n\n\nclass _DatabaseBlockerContextManager(object):\n def __init__(self, db_blocker):\n self._db_blocker = db_blocker\n\n def __enter__(self):\n pass\n\n def __exit__(self, exc_type, exc_value, traceback):\n self._db_blocker.restore()\n\n\nclass _DatabaseBlocker(object):\n \"\"\"Manager for django.db.backends.base.base.BaseDatabaseWrapper.\n\n This is the object returned by django_db_blocker.\n \"\"\"\n\n def __init__(self):\n self._history = []\n self._real_ensure_connection = None\n\n @property\n def _dj_db_wrapper(self):\n from django.db.backends.base.base import BaseDatabaseWrapper\n\n # The first time the _dj_db_wrapper is accessed, we will save a\n # reference to the real implementation.\n if self._real_ensure_connection is None:\n self._real_ensure_connection = BaseDatabaseWrapper.ensure_connection\n\n return BaseDatabaseWrapper\n\n def _save_active_wrapper(self):\n return self._history.append(self._dj_db_wrapper.ensure_connection)\n\n def _blocking_wrapper(*args, **kwargs):\n __tracebackhide__ = True\n __tracebackhide__ # Silence pyflakes\n raise RuntimeError(\n \"Database access not allowed, \"\n 'use the \"django_db\" mark, or the '\n '\"db\" or \"transactional_db\" fixtures to enable it.'\n )\n\n def unblock(self):\n \"\"\"Enable access to the Django database.\"\"\"\n self._save_active_wrapper()\n self._dj_db_wrapper.ensure_connection = self._real_ensure_connection\n return _DatabaseBlockerContextManager(self)\n\n def block(self):\n \"\"\"Disable access to the Django database.\"\"\"\n self._save_active_wrapper()\n self._dj_db_wrapper.ensure_connection = self._blocking_wrapper\n return _DatabaseBlockerContextManager(self)\n\n def restore(self):\n self._dj_db_wrapper.ensure_connection = self._history.pop()\n\n\n_blocking_manager = _DatabaseBlocker()\n\n\ndef validate_django_db(marker):\n \"\"\"Validate the django_db marker.\n\n It checks the signature and creates the ``transaction`` and\n ``reset_sequences`` attributes on the marker which will have the\n correct values.\n\n A sequence reset is only allowed when combined with a transaction.\n \"\"\"\n\n def apifun(transaction=False, reset_sequences=False):\n return transaction, reset_sequences\n\n return apifun(*marker.args, **marker.kwargs)\n\n\ndef validate_urls(marker):\n \"\"\"Validate the urls marker.\n\n It checks the signature and creates the `urls` attribute on the\n marker which will have the correct value.\n \"\"\"\n\n def apifun(urls):\n return urls\n\n return apifun(*marker.args, **marker.kwargs)\n", "path": "pytest_django/plugin.py" } ]
[ { "content": "\"\"\"A pytest plugin which helps testing Django applications\n\nThis plugin handles creating and destroying the test environment and\ntest database and provides some useful text fixtures.\n\"\"\"\n\nimport contextlib\nimport inspect\nfrom functools import reduce\nimport os\nimport sys\nimport types\n\nimport pytest\nfrom pkg_resources import parse_version\n\nfrom .django_compat import is_django_unittest # noqa\nfrom .fixtures import django_assert_num_queries # noqa\nfrom .fixtures import django_assert_max_num_queries # noqa\nfrom .fixtures import django_db_setup # noqa\nfrom .fixtures import django_db_use_migrations # noqa\nfrom .fixtures import django_db_keepdb # noqa\nfrom .fixtures import django_db_createdb # noqa\nfrom .fixtures import django_db_modify_db_settings # noqa\nfrom .fixtures import django_db_modify_db_settings_parallel_suffix # noqa\nfrom .fixtures import django_db_modify_db_settings_tox_suffix # noqa\nfrom .fixtures import django_db_modify_db_settings_xdist_suffix # noqa\nfrom .fixtures import _live_server_helper # noqa\nfrom .fixtures import admin_client # noqa\nfrom .fixtures import admin_user # noqa\nfrom .fixtures import client # noqa\nfrom .fixtures import db # noqa\nfrom .fixtures import django_user_model # noqa\nfrom .fixtures import django_username_field # noqa\nfrom .fixtures import live_server # noqa\nfrom .fixtures import django_db_reset_sequences # noqa\nfrom .fixtures import rf # noqa\nfrom .fixtures import settings # noqa\nfrom .fixtures import transactional_db # noqa\n\nfrom .lazy_django import django_settings_is_configured, skip_if_no_django\n\ntry:\n import pathlib\nexcept ImportError:\n import pathlib2 as pathlib\n\n\nSETTINGS_MODULE_ENV = \"DJANGO_SETTINGS_MODULE\"\nCONFIGURATION_ENV = \"DJANGO_CONFIGURATION\"\nINVALID_TEMPLATE_VARS_ENV = \"FAIL_INVALID_TEMPLATE_VARS\"\n\nPY2 = sys.version_info[0] == 2\n\n# pytest 4.2 handles unittest setup/teardown itself via wrapping fixtures.\n_handle_unittest_methods = parse_version(pytest.__version__) < parse_version(\"4.2\")\n\n_report_header = []\n\n\n# ############### pytest hooks ################\n\n\ndef pytest_addoption(parser):\n group = parser.getgroup(\"django\")\n group._addoption(\n \"--reuse-db\",\n action=\"store_true\",\n dest=\"reuse_db\",\n default=False,\n help=\"Re-use the testing database if it already exists, \"\n \"and do not remove it when the test finishes.\",\n )\n group._addoption(\n \"--create-db\",\n action=\"store_true\",\n dest=\"create_db\",\n default=False,\n help=\"Re-create the database, even if it exists. 
This \"\n \"option can be used to override --reuse-db.\",\n )\n group._addoption(\n \"--ds\",\n action=\"store\",\n type=str,\n dest=\"ds\",\n default=None,\n help=\"Set DJANGO_SETTINGS_MODULE.\",\n )\n group._addoption(\n \"--dc\",\n action=\"store\",\n type=str,\n dest=\"dc\",\n default=None,\n help=\"Set DJANGO_CONFIGURATION.\",\n )\n group._addoption(\n \"--nomigrations\",\n \"--no-migrations\",\n action=\"store_true\",\n dest=\"nomigrations\",\n default=False,\n help=\"Disable Django migrations on test setup\",\n )\n group._addoption(\n \"--migrations\",\n action=\"store_false\",\n dest=\"nomigrations\",\n default=False,\n help=\"Enable Django migrations on test setup\",\n )\n parser.addini(\n CONFIGURATION_ENV, \"django-configurations class to use by pytest-django.\"\n )\n group._addoption(\n \"--liveserver\",\n default=None,\n help=\"Address and port for the live_server fixture.\",\n )\n parser.addini(\n SETTINGS_MODULE_ENV, \"Django settings module to use by pytest-django.\"\n )\n\n parser.addini(\n \"django_find_project\",\n \"Automatically find and add a Django project to the \" \"Python path.\",\n type=\"bool\",\n default=True,\n )\n group._addoption(\n \"--fail-on-template-vars\",\n action=\"store_true\",\n dest=\"itv\",\n default=False,\n help=\"Fail for invalid variables in templates.\",\n )\n parser.addini(\n INVALID_TEMPLATE_VARS_ENV,\n \"Fail for invalid variables in templates.\",\n type=\"bool\",\n default=False,\n )\n\n\nPROJECT_FOUND = (\n \"pytest-django found a Django project in %s \"\n \"(it contains manage.py) and added it to the Python path.\\n\"\n 'If this is wrong, add \"django_find_project = false\" to '\n \"pytest.ini and explicitly manage your Python path.\"\n)\n\nPROJECT_NOT_FOUND = (\n \"pytest-django could not find a Django project \"\n \"(no manage.py file could be found). 
You must \"\n \"explicitly add your Django project to the Python path \"\n \"to have it picked up.\"\n)\n\nPROJECT_SCAN_DISABLED = (\n \"pytest-django did not search for Django \"\n \"projects since it is disabled in the configuration \"\n '(\"django_find_project = false\")'\n)\n\n\[email protected]\ndef _handle_import_error(extra_message):\n try:\n yield\n except ImportError as e:\n django_msg = (e.args[0] + \"\\n\\n\") if e.args else \"\"\n msg = django_msg + extra_message\n raise ImportError(msg)\n\n\ndef _add_django_project_to_path(args):\n def is_django_project(path):\n try:\n return path.is_dir() and (path / \"manage.py\").exists()\n except OSError:\n return False\n\n def arg_to_path(arg):\n # Test classes or functions can be appended to paths separated by ::\n arg = arg.split(\"::\", 1)[0]\n return pathlib.Path(arg)\n\n def find_django_path(args):\n args = map(str, args)\n args = [arg_to_path(x) for x in args if not x.startswith(\"-\")]\n\n cwd = pathlib.Path.cwd()\n if not args:\n args.append(cwd)\n elif cwd not in args:\n args.append(cwd)\n\n for arg in args:\n if is_django_project(arg):\n return arg\n for parent in arg.parents:\n if is_django_project(parent):\n return parent\n return None\n\n project_dir = find_django_path(args)\n if project_dir:\n sys.path.insert(0, str(project_dir.absolute()))\n return PROJECT_FOUND % project_dir\n return PROJECT_NOT_FOUND\n\n\ndef _setup_django():\n if \"django\" not in sys.modules:\n return\n\n import django.conf\n\n # Avoid force-loading Django when settings are not properly configured.\n if not django.conf.settings.configured:\n return\n\n import django.apps\n\n if not django.apps.apps.ready:\n django.setup()\n\n _blocking_manager.block()\n\n\ndef _get_boolean_value(x, name, default=None):\n if x is None:\n return default\n if x in (True, False):\n return x\n possible_values = {\"true\": True, \"false\": False, \"1\": True, \"0\": False}\n try:\n return possible_values[x.lower()]\n except KeyError:\n raise ValueError(\n \"{} is not a valid value for {}. \"\n \"It must be one of {}.\" % (x, name, \", \".join(possible_values.keys()))\n )\n\n\ndef pytest_load_initial_conftests(early_config, parser, args):\n # Register the marks\n early_config.addinivalue_line(\n \"markers\",\n \"django_db(transaction=False): Mark the test as using \"\n \"the Django test database. The *transaction* argument marks will \"\n \"allow you to use real transactions in the test like Django's \"\n \"TransactionTestCase.\",\n )\n early_config.addinivalue_line(\n \"markers\",\n \"urls(modstr): Use a different URLconf for this test, similar to \"\n \"the `urls` attribute of Django's `TestCase` objects. *modstr* is \"\n \"a string specifying the module of a URL config, e.g. 
\"\n '\"my_app.test_urls\".',\n )\n early_config.addinivalue_line(\n \"markers\",\n \"ignore_template_errors(): ignore errors from invalid template \"\n \"variables (if --fail-on-template-vars is used).\",\n )\n\n options = parser.parse_known_args(args)\n\n if options.version or options.help:\n return\n\n django_find_project = _get_boolean_value(\n early_config.getini(\"django_find_project\"), \"django_find_project\"\n )\n\n if django_find_project:\n _django_project_scan_outcome = _add_django_project_to_path(args)\n else:\n _django_project_scan_outcome = PROJECT_SCAN_DISABLED\n\n if (\n options.itv\n or _get_boolean_value(\n os.environ.get(INVALID_TEMPLATE_VARS_ENV), INVALID_TEMPLATE_VARS_ENV\n )\n or early_config.getini(INVALID_TEMPLATE_VARS_ENV)\n ):\n os.environ[INVALID_TEMPLATE_VARS_ENV] = \"true\"\n\n def _get_option_with_source(option, envname):\n if option:\n return option, \"option\"\n if envname in os.environ:\n return os.environ[envname], \"env\"\n cfgval = early_config.getini(envname)\n if cfgval:\n return cfgval, \"ini\"\n return None, None\n\n ds, ds_source = _get_option_with_source(options.ds, SETTINGS_MODULE_ENV)\n dc, dc_source = _get_option_with_source(options.dc, CONFIGURATION_ENV)\n\n if ds:\n _report_header.append(\"settings: %s (from %s)\" % (ds, ds_source))\n os.environ[SETTINGS_MODULE_ENV] = ds\n\n if dc:\n _report_header.append(\"configuration: %s (from %s)\" % (dc, dc_source))\n os.environ[CONFIGURATION_ENV] = dc\n\n # Install the django-configurations importer\n import configurations.importer\n\n configurations.importer.install()\n\n # Forcefully load Django settings, throws ImportError or\n # ImproperlyConfigured if settings cannot be loaded.\n from django.conf import settings as dj_settings\n\n with _handle_import_error(_django_project_scan_outcome):\n dj_settings.DATABASES\n\n _setup_django()\n\n\ndef pytest_report_header():\n if _report_header:\n return [\"django: \" + \", \".join(_report_header)]\n\n\[email protected]\ndef pytest_configure():\n # Allow Django settings to be configured in a user pytest_configure call,\n # but make sure we call django.setup()\n _setup_django()\n\n\ndef _classmethod_is_defined_at_leaf(cls, method_name):\n super_method = None\n\n for base_cls in cls.__mro__[1:]: # pragma: no branch\n super_method = base_cls.__dict__.get(method_name)\n if super_method is not None:\n break\n\n assert super_method is not None, (\n \"%s could not be found in base classes\" % method_name\n )\n\n method = getattr(cls, method_name)\n\n try:\n f = method.__func__\n except AttributeError:\n pytest.fail(\"%s.%s should be a classmethod\" % (cls, method_name))\n if PY2 and not (\n inspect.ismethod(method)\n and inspect.isclass(method.__self__)\n and issubclass(cls, method.__self__)\n ):\n pytest.fail(\"%s.%s should be a classmethod\" % (cls, method_name))\n return f is not super_method.__func__\n\n\n_disabled_classmethods = {}\n\n\ndef _disable_class_methods(cls):\n if cls in _disabled_classmethods:\n return\n\n _disabled_classmethods[cls] = (\n # Get the classmethod object (not the resulting bound method),\n # otherwise inheritance will be broken when restoring.\n cls.__dict__.get(\"setUpClass\"),\n _classmethod_is_defined_at_leaf(cls, \"setUpClass\"),\n cls.__dict__.get(\"tearDownClass\"),\n _classmethod_is_defined_at_leaf(cls, \"tearDownClass\"),\n )\n\n cls.setUpClass = types.MethodType(lambda cls: None, cls)\n cls.tearDownClass = types.MethodType(lambda cls: None, cls)\n\n\ndef _restore_class_methods(cls):\n (\n setUpClass,\n restore_setUpClass,\n 
tearDownClass,\n restore_tearDownClass,\n ) = _disabled_classmethods.pop(cls)\n\n try:\n del cls.setUpClass\n except AttributeError:\n raise\n\n try:\n del cls.tearDownClass\n except AttributeError:\n pass\n\n if restore_setUpClass:\n cls.setUpClass = setUpClass\n\n if restore_tearDownClass:\n cls.tearDownClass = tearDownClass\n\n\ndef pytest_runtest_setup(item):\n if _handle_unittest_methods:\n if django_settings_is_configured() and is_django_unittest(item):\n _disable_class_methods(item.cls)\n\n\[email protected](tryfirst=True)\ndef pytest_collection_modifyitems(items):\n def get_order_number(test):\n marker_db = test.get_closest_marker('django_db')\n if marker_db:\n transaction = validate_django_db(marker_db)[0]\n if transaction is True:\n return 1\n else:\n transaction = None\n\n fixtures = getattr(test, 'fixturenames', [])\n if \"transactional_db\" in fixtures:\n return 1\n\n if transaction is False:\n return 0\n if \"db\" in fixtures:\n return 0\n\n return 2\n\n items[:] = sorted(items, key=get_order_number)\n\n\[email protected](autouse=True, scope=\"session\")\ndef django_test_environment(request):\n \"\"\"\n Ensure that Django is loaded and has its testing environment setup.\n\n XXX It is a little dodgy that this is an autouse fixture. Perhaps\n an email fixture should be requested in order to be able to\n use the Django email machinery just like you need to request a\n db fixture for access to the Django database, etc. But\n without duplicating a lot more of Django's test support code\n we need to follow this model.\n \"\"\"\n if django_settings_is_configured():\n _setup_django()\n from django.conf import settings as dj_settings\n from django.test.utils import setup_test_environment, teardown_test_environment\n\n dj_settings.DEBUG = False\n setup_test_environment()\n request.addfinalizer(teardown_test_environment)\n\n\[email protected](scope=\"session\")\ndef django_db_blocker():\n \"\"\"Wrapper around Django's database access.\n\n This object can be used to re-enable database access. 
This fixture is used\n internally in pytest-django to build the other fixtures and can be used for\n special database handling.\n\n The object is a context manager and provides the methods\n .unblock()/.block() and .restore() to temporarily enable database access.\n\n This is an advanced feature that is meant to be used to implement database\n fixtures.\n \"\"\"\n if not django_settings_is_configured():\n return None\n\n return _blocking_manager\n\n\[email protected](autouse=True)\ndef _django_db_marker(request):\n \"\"\"Implement the django_db marker, internal to pytest-django.\n\n This will dynamically request the ``db``, ``transactional_db`` or\n ``django_db_reset_sequences`` fixtures as required by the django_db marker.\n \"\"\"\n marker = request.node.get_closest_marker(\"django_db\")\n if marker:\n transaction, reset_sequences = validate_django_db(marker)\n if reset_sequences:\n request.getfixturevalue(\"django_db_reset_sequences\")\n elif transaction:\n request.getfixturevalue(\"transactional_db\")\n else:\n request.getfixturevalue(\"db\")\n\n\[email protected](autouse=True, scope=\"class\")\ndef _django_setup_unittest(request, django_db_blocker):\n \"\"\"Setup a django unittest, internal to pytest-django.\"\"\"\n if not django_settings_is_configured() or not is_django_unittest(request):\n yield\n return\n\n from _pytest.unittest import TestCaseFunction\n\n if \"debug\" in TestCaseFunction.runtest.__code__.co_names:\n # Fix pytest (https://github.com/pytest-dev/pytest/issues/5991), only\n # if \"self._testcase.debug()\" is being used (forward compatible).\n from _pytest.monkeypatch import MonkeyPatch\n\n def non_debugging_runtest(self):\n self._testcase(result=self)\n\n mp_debug = MonkeyPatch()\n mp_debug.setattr(\"_pytest.unittest.TestCaseFunction.runtest\", non_debugging_runtest)\n else:\n mp_debug = None\n\n request.getfixturevalue(\"django_db_setup\")\n\n cls = request.node.cls\n\n with django_db_blocker.unblock():\n if _handle_unittest_methods:\n _restore_class_methods(cls)\n cls.setUpClass()\n _disable_class_methods(cls)\n\n yield\n\n _restore_class_methods(cls)\n cls.tearDownClass()\n else:\n yield\n\n if mp_debug:\n mp_debug.undo()\n\n\[email protected](scope=\"function\", autouse=True)\ndef _dj_autoclear_mailbox():\n if not django_settings_is_configured():\n return\n\n from django.core import mail\n\n del mail.outbox[:]\n\n\[email protected](scope=\"function\")\ndef mailoutbox(django_mail_patch_dns, _dj_autoclear_mailbox):\n if not django_settings_is_configured():\n return\n\n from django.core import mail\n\n return mail.outbox\n\n\[email protected](scope=\"function\")\ndef django_mail_patch_dns(monkeypatch, django_mail_dnsname):\n from django.core import mail\n\n monkeypatch.setattr(mail.message, \"DNS_NAME\", django_mail_dnsname)\n\n\[email protected](scope=\"function\")\ndef django_mail_dnsname():\n return \"fake-tests.example.com\"\n\n\[email protected](autouse=True, scope=\"function\")\ndef _django_set_urlconf(request):\n \"\"\"Apply the @pytest.mark.urls marker, internal to pytest-django.\"\"\"\n marker = request.node.get_closest_marker(\"urls\")\n if marker:\n skip_if_no_django()\n import django.conf\n\n try:\n from django.urls import clear_url_caches, set_urlconf\n except ImportError:\n # Removed in Django 2.0\n from django.core.urlresolvers import clear_url_caches, set_urlconf\n\n urls = validate_urls(marker)\n original_urlconf = django.conf.settings.ROOT_URLCONF\n django.conf.settings.ROOT_URLCONF = urls\n clear_url_caches()\n set_urlconf(None)\n\n def 
restore():\n django.conf.settings.ROOT_URLCONF = original_urlconf\n # Copy the pattern from\n # https://github.com/django/django/blob/master/django/test/signals.py#L152\n clear_url_caches()\n set_urlconf(None)\n\n request.addfinalizer(restore)\n\n\[email protected](autouse=True, scope=\"session\")\ndef _fail_for_invalid_template_variable():\n \"\"\"Fixture that fails for invalid variables in templates.\n\n This fixture will fail each test that uses django template rendering\n should a template contain an invalid template variable.\n The fail message will include the name of the invalid variable and\n in most cases the template name.\n\n It does not raise an exception, but fails, as the stack trace doesn't\n offer any helpful information to debug.\n This behavior can be switched off using the marker:\n ``pytest.mark.ignore_template_errors``\n \"\"\"\n\n class InvalidVarException(object):\n \"\"\"Custom handler for invalid strings in templates.\"\"\"\n\n def __init__(self):\n self.fail = True\n\n def __contains__(self, key):\n \"\"\"There is a test for '%s' in TEMPLATE_STRING_IF_INVALID.\"\"\"\n return key == \"%s\"\n\n @staticmethod\n def _get_origin():\n stack = inspect.stack()\n\n # Try to use topmost `self.origin` first (Django 1.9+, and with\n # TEMPLATE_DEBUG)..\n for f in stack[2:]:\n func = f[3]\n if func == \"render\":\n frame = f[0]\n try:\n origin = frame.f_locals[\"self\"].origin\n except (AttributeError, KeyError):\n continue\n if origin is not None:\n return origin\n\n from django.template import Template\n\n # finding the ``render`` needle in the stack\n frame = reduce(\n lambda x, y: y[3] == \"render\" and \"base.py\" in y[1] and y or x, stack\n )\n # assert 0, stack\n frame = frame[0]\n # finding only the frame locals in all frame members\n f_locals = reduce(\n lambda x, y: y[0] == \"f_locals\" and y or x, inspect.getmembers(frame)\n )[1]\n # ``django.template.base.Template``\n template = f_locals[\"self\"]\n if isinstance(template, Template):\n return template.name\n\n def __mod__(self, var):\n \"\"\"Handle TEMPLATE_STRING_IF_INVALID % var.\"\"\"\n origin = self._get_origin()\n if origin:\n msg = \"Undefined template variable '%s' in '%s'\" % (var, origin)\n else:\n msg = \"Undefined template variable '%s'\" % var\n if self.fail:\n pytest.fail(msg)\n else:\n return msg\n\n if (\n os.environ.get(INVALID_TEMPLATE_VARS_ENV, \"false\") == \"true\"\n and django_settings_is_configured()\n ):\n from django.conf import settings as dj_settings\n\n if dj_settings.TEMPLATES:\n dj_settings.TEMPLATES[0][\"OPTIONS\"][\n \"string_if_invalid\"\n ] = InvalidVarException()\n else:\n dj_settings.TEMPLATE_STRING_IF_INVALID = InvalidVarException()\n\n\[email protected](autouse=True)\ndef _template_string_if_invalid_marker(request):\n \"\"\"Apply the @pytest.mark.ignore_template_errors marker,\n internal to pytest-django.\"\"\"\n marker = request.keywords.get(\"ignore_template_errors\", None)\n if os.environ.get(INVALID_TEMPLATE_VARS_ENV, \"false\") == \"true\":\n if marker and django_settings_is_configured():\n from django.conf import settings as dj_settings\n\n if dj_settings.TEMPLATES:\n dj_settings.TEMPLATES[0][\"OPTIONS\"][\"string_if_invalid\"].fail = False\n else:\n dj_settings.TEMPLATE_STRING_IF_INVALID.fail = False\n\n\[email protected](autouse=True, scope=\"function\")\ndef _django_clear_site_cache():\n \"\"\"Clears ``django.contrib.sites.models.SITE_CACHE`` to avoid\n unexpected behavior with cached site objects.\n \"\"\"\n\n if django_settings_is_configured():\n from django.conf 
import settings as dj_settings\n\n if \"django.contrib.sites\" in dj_settings.INSTALLED_APPS:\n from django.contrib.sites.models import Site\n\n Site.objects.clear_cache()\n\n\n# ############### Helper Functions ################\n\n\nclass _DatabaseBlockerContextManager(object):\n def __init__(self, db_blocker):\n self._db_blocker = db_blocker\n\n def __enter__(self):\n pass\n\n def __exit__(self, exc_type, exc_value, traceback):\n self._db_blocker.restore()\n\n\nclass _DatabaseBlocker(object):\n \"\"\"Manager for django.db.backends.base.base.BaseDatabaseWrapper.\n\n This is the object returned by django_db_blocker.\n \"\"\"\n\n def __init__(self):\n self._history = []\n self._real_ensure_connection = None\n\n @property\n def _dj_db_wrapper(self):\n from django.db.backends.base.base import BaseDatabaseWrapper\n\n # The first time the _dj_db_wrapper is accessed, we will save a\n # reference to the real implementation.\n if self._real_ensure_connection is None:\n self._real_ensure_connection = BaseDatabaseWrapper.ensure_connection\n\n return BaseDatabaseWrapper\n\n def _save_active_wrapper(self):\n return self._history.append(self._dj_db_wrapper.ensure_connection)\n\n def _blocking_wrapper(*args, **kwargs):\n __tracebackhide__ = True\n __tracebackhide__ # Silence pyflakes\n raise RuntimeError(\n \"Database access not allowed, \"\n 'use the \"django_db\" mark, or the '\n '\"db\" or \"transactional_db\" fixtures to enable it.'\n )\n\n def unblock(self):\n \"\"\"Enable access to the Django database.\"\"\"\n self._save_active_wrapper()\n self._dj_db_wrapper.ensure_connection = self._real_ensure_connection\n return _DatabaseBlockerContextManager(self)\n\n def block(self):\n \"\"\"Disable access to the Django database.\"\"\"\n self._save_active_wrapper()\n self._dj_db_wrapper.ensure_connection = self._blocking_wrapper\n return _DatabaseBlockerContextManager(self)\n\n def restore(self):\n self._dj_db_wrapper.ensure_connection = self._history.pop()\n\n\n_blocking_manager = _DatabaseBlocker()\n\n\ndef validate_django_db(marker):\n \"\"\"Validate the django_db marker.\n\n It checks the signature and creates the ``transaction`` and\n ``reset_sequences`` attributes on the marker which will have the\n correct values.\n\n A sequence reset is only allowed when combined with a transaction.\n \"\"\"\n\n def apifun(transaction=False, reset_sequences=False):\n return transaction, reset_sequences\n\n return apifun(*marker.args, **marker.kwargs)\n\n\ndef validate_urls(marker):\n \"\"\"Validate the urls marker.\n\n It checks the signature and creates the `urls` attribute on the\n marker which will have the correct value.\n \"\"\"\n\n def apifun(urls):\n return urls\n\n return apifun(*marker.args, **marker.kwargs)\n", "path": "pytest_django/plugin.py" } ]
diff --git a/pytest_django/plugin.py b/pytest_django/plugin.py index e54a900ee..4deca87f3 100644 --- a/pytest_django/plugin.py +++ b/pytest_django/plugin.py @@ -415,6 +415,7 @@ def pytest_runtest_setup(item): _disable_class_methods(item.cls) [email protected](tryfirst=True) def pytest_collection_modifyitems(items): def get_order_number(test): marker_db = test.get_closest_marker('django_db')
CTFd__CTFd-1800
Invalid model identifier https://github.com/CTFd/CTFd/blob/master/CTFd/themes/core/templates/scoreboard.html#L26 This should change depending on the mode of the CTF
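The report above is terse; as a hedged illustration of the intent (not CTFd's actual code), a scoreboard standing should link to a team page when the CTF runs in teams mode and to a user page in users mode, rather than hard-coding one model. The helper name and routes below are hypothetical stand-ins:

```python
# Hypothetical sketch of mode-dependent scoreboard links; the helper name and
# routes are illustrative, not CTFd's real API.
def account_url(account_id: int, mode: str) -> str:
    """Resolve the public page for a scoreboard standing based on CTF mode."""
    if mode == "teams":
        return f"/teams/{account_id}"
    return f"/users/{account_id}"


assert account_url(42, "teams") == "/teams/42"
assert account_url(42, "users") == "/users/42"
```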
[ { "content": "import glob\nimport importlib\nimport os\nfrom collections import namedtuple\n\nfrom flask import current_app as app\nfrom flask import send_file, send_from_directory, url_for\n\nfrom CTFd.utils.config.pages import get_pages\nfrom CTFd.utils.decorators import admins_only as admins_only_wrapper\nfrom CTFd.utils.plugins import override_template as utils_override_template\nfrom CTFd.utils.plugins import (\n register_admin_script as utils_register_admin_plugin_script,\n)\nfrom CTFd.utils.plugins import (\n register_admin_stylesheet as utils_register_admin_plugin_stylesheet,\n)\nfrom CTFd.utils.plugins import register_script as utils_register_plugin_script\nfrom CTFd.utils.plugins import register_stylesheet as utils_register_plugin_stylesheet\n\nMenu = namedtuple(\"Menu\", [\"title\", \"route\"])\n\n\ndef register_plugin_assets_directory(app, base_path, admins_only=False, endpoint=None):\n \"\"\"\n Registers a directory to serve assets\n\n :param app: A CTFd application\n :param string base_path: The path to the directory\n :param boolean admins_only: Whether or not the assets served out of the directory should be accessible to the public\n :return:\n \"\"\"\n base_path = base_path.strip(\"/\")\n if endpoint is None:\n endpoint = base_path.replace(\"/\", \".\")\n\n def assets_handler(path):\n return send_from_directory(base_path, path)\n\n rule = \"/\" + base_path + \"/<path:path>\"\n app.add_url_rule(rule=rule, endpoint=endpoint, view_func=assets_handler)\n\n\ndef register_plugin_asset(app, asset_path, admins_only=False, endpoint=None):\n \"\"\"\n Registers an file path to be served by CTFd\n\n :param app: A CTFd application\n :param string asset_path: The path to the asset file\n :param boolean admins_only: Whether or not this file should be accessible to the public\n :return:\n \"\"\"\n asset_path = asset_path.strip(\"/\")\n if endpoint is None:\n endpoint = asset_path.replace(\"/\", \".\")\n\n def asset_handler():\n return send_file(asset_path)\n\n if admins_only:\n asset_handler = admins_only_wrapper(asset_handler)\n rule = \"/\" + asset_path\n app.add_url_rule(rule=rule, endpoint=endpoint, view_func=asset_handler)\n\n\ndef override_template(*args, **kwargs):\n \"\"\"\n Overrides a template with the provided html content.\n\n e.g. 
override_template('scoreboard.html', '<h1>scores</h1>')\n \"\"\"\n utils_override_template(*args, **kwargs)\n\n\ndef register_plugin_script(*args, **kwargs):\n \"\"\"\n Adds a given script to the base.html template which all pages inherit from\n \"\"\"\n utils_register_plugin_script(*args, **kwargs)\n\n\ndef register_plugin_stylesheet(*args, **kwargs):\n \"\"\"\n Adds a given stylesheet to the base.html template which all pages inherit from.\n \"\"\"\n utils_register_plugin_stylesheet(*args, **kwargs)\n\n\ndef register_admin_plugin_script(*args, **kwargs):\n \"\"\"\n Adds a given script to the base.html of the admin theme which all admin pages inherit from\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n utils_register_admin_plugin_script(*args, **kwargs)\n\n\ndef register_admin_plugin_stylesheet(*args, **kwargs):\n \"\"\"\n Adds a given stylesheet to the base.html of the admin theme which all admin pages inherit from\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n utils_register_admin_plugin_stylesheet(*args, **kwargs)\n\n\ndef register_admin_plugin_menu_bar(title, route):\n \"\"\"\n Registers links on the Admin Panel menubar/navbar\n\n :param name: A string that is shown on the navbar HTML\n :param route: A string that is the href used by the link\n :return:\n \"\"\"\n am = Menu(title=title, route=route)\n app.admin_plugin_menu_bar.append(am)\n\n\ndef get_admin_plugin_menu_bar():\n \"\"\"\n Access the list used to store the plugin menu bar\n\n :return: Returns a list of Menu namedtuples. They have name, and route attributes.\n \"\"\"\n return app.admin_plugin_menu_bar\n\n\ndef register_user_page_menu_bar(title, route):\n \"\"\"\n Registers links on the User side menubar/navbar\n\n :param name: A string that is shown on the navbar HTML\n :param route: A string that is the href used by the link\n :return:\n \"\"\"\n p = Menu(title=title, route=route)\n app.plugin_menu_bar.append(p)\n\n\ndef get_user_page_menu_bar():\n \"\"\"\n Access the list used to store the user page menu bar\n\n :return: Returns a list of Menu namedtuples. They have name, and route attributes.\n \"\"\"\n pages = []\n for p in get_pages() + app.plugin_menu_bar:\n if p.route.startswith(\"http\"):\n route = p.route\n else:\n route = url_for(\"views.static_html\", route=p.route)\n print(route)\n pages.append(Menu(title=p.title, route=route))\n return pages\n\n\ndef bypass_csrf_protection(f):\n \"\"\"\n Decorator that allows a route to bypass the need for a CSRF nonce on POST requests.\n\n This should be considered beta and may change in future versions.\n\n :param f: A function that needs to bypass CSRF protection\n :return: Returns a function with the _bypass_csrf attribute set which tells CTFd to not require CSRF protection.\n \"\"\"\n f._bypass_csrf = True\n return f\n\n\ndef get_plugin_names():\n modules = sorted(glob.glob(app.plugins_dir + \"/*\"))\n blacklist = {\"__pycache__\"}\n plugins = []\n for module in modules:\n module_name = os.path.basename(module)\n if os.path.isdir(module) and module_name not in blacklist:\n plugins.append(module_name)\n return plugins\n\n\ndef init_plugins(app):\n \"\"\"\n Searches for the load function in modules in the CTFd/plugins folder. This function is called with the current CTFd\n app as a parameter. 
This allows CTFd plugins to modify CTFd's behavior.\n\n :param app: A CTFd application\n :return:\n \"\"\"\n app.admin_plugin_scripts = []\n app.admin_plugin_stylesheets = []\n app.plugin_scripts = []\n app.plugin_stylesheets = []\n\n app.admin_plugin_menu_bar = []\n app.plugin_menu_bar = []\n app.plugins_dir = os.path.dirname(__file__)\n\n if app.config.get(\"SAFE_MODE\", False) is False:\n for plugin in get_plugin_names():\n module = \".\" + plugin\n module = importlib.import_module(module, package=\"CTFd.plugins\")\n module.load(app)\n print(\" * Loaded module, %s\" % module)\n\n app.jinja_env.globals.update(get_admin_plugin_menu_bar=get_admin_plugin_menu_bar)\n app.jinja_env.globals.update(get_user_page_menu_bar=get_user_page_menu_bar)\n", "path": "CTFd/plugins/__init__.py" } ]
[ { "content": "import glob\nimport importlib\nimport os\nfrom collections import namedtuple\n\nfrom flask import current_app as app\nfrom flask import send_file, send_from_directory, url_for\n\nfrom CTFd.utils.config.pages import get_pages\nfrom CTFd.utils.decorators import admins_only as admins_only_wrapper\nfrom CTFd.utils.plugins import override_template as utils_override_template\nfrom CTFd.utils.plugins import (\n register_admin_script as utils_register_admin_plugin_script,\n)\nfrom CTFd.utils.plugins import (\n register_admin_stylesheet as utils_register_admin_plugin_stylesheet,\n)\nfrom CTFd.utils.plugins import register_script as utils_register_plugin_script\nfrom CTFd.utils.plugins import register_stylesheet as utils_register_plugin_stylesheet\n\nMenu = namedtuple(\"Menu\", [\"title\", \"route\"])\n\n\ndef register_plugin_assets_directory(app, base_path, admins_only=False, endpoint=None):\n \"\"\"\n Registers a directory to serve assets\n\n :param app: A CTFd application\n :param string base_path: The path to the directory\n :param boolean admins_only: Whether or not the assets served out of the directory should be accessible to the public\n :return:\n \"\"\"\n base_path = base_path.strip(\"/\")\n if endpoint is None:\n endpoint = base_path.replace(\"/\", \".\")\n\n def assets_handler(path):\n return send_from_directory(base_path, path)\n\n rule = \"/\" + base_path + \"/<path:path>\"\n app.add_url_rule(rule=rule, endpoint=endpoint, view_func=assets_handler)\n\n\ndef register_plugin_asset(app, asset_path, admins_only=False, endpoint=None):\n \"\"\"\n Registers an file path to be served by CTFd\n\n :param app: A CTFd application\n :param string asset_path: The path to the asset file\n :param boolean admins_only: Whether or not this file should be accessible to the public\n :return:\n \"\"\"\n asset_path = asset_path.strip(\"/\")\n if endpoint is None:\n endpoint = asset_path.replace(\"/\", \".\")\n\n def asset_handler():\n return send_file(asset_path)\n\n if admins_only:\n asset_handler = admins_only_wrapper(asset_handler)\n rule = \"/\" + asset_path\n app.add_url_rule(rule=rule, endpoint=endpoint, view_func=asset_handler)\n\n\ndef override_template(*args, **kwargs):\n \"\"\"\n Overrides a template with the provided html content.\n\n e.g. 
override_template('scoreboard.html', '<h1>scores</h1>')\n \"\"\"\n utils_override_template(*args, **kwargs)\n\n\ndef register_plugin_script(*args, **kwargs):\n \"\"\"\n Adds a given script to the base.html template which all pages inherit from\n \"\"\"\n utils_register_plugin_script(*args, **kwargs)\n\n\ndef register_plugin_stylesheet(*args, **kwargs):\n \"\"\"\n Adds a given stylesheet to the base.html template which all pages inherit from.\n \"\"\"\n utils_register_plugin_stylesheet(*args, **kwargs)\n\n\ndef register_admin_plugin_script(*args, **kwargs):\n \"\"\"\n Adds a given script to the base.html of the admin theme which all admin pages inherit from\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n utils_register_admin_plugin_script(*args, **kwargs)\n\n\ndef register_admin_plugin_stylesheet(*args, **kwargs):\n \"\"\"\n Adds a given stylesheet to the base.html of the admin theme which all admin pages inherit from\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n utils_register_admin_plugin_stylesheet(*args, **kwargs)\n\n\ndef register_admin_plugin_menu_bar(title, route):\n \"\"\"\n Registers links on the Admin Panel menubar/navbar\n\n :param name: A string that is shown on the navbar HTML\n :param route: A string that is the href used by the link\n :return:\n \"\"\"\n am = Menu(title=title, route=route)\n app.admin_plugin_menu_bar.append(am)\n\n\ndef get_admin_plugin_menu_bar():\n \"\"\"\n Access the list used to store the plugin menu bar\n\n :return: Returns a list of Menu namedtuples. They have name, and route attributes.\n \"\"\"\n return app.admin_plugin_menu_bar\n\n\ndef register_user_page_menu_bar(title, route):\n \"\"\"\n Registers links on the User side menubar/navbar\n\n :param name: A string that is shown on the navbar HTML\n :param route: A string that is the href used by the link\n :return:\n \"\"\"\n p = Menu(title=title, route=route)\n app.plugin_menu_bar.append(p)\n\n\ndef get_user_page_menu_bar():\n \"\"\"\n Access the list used to store the user page menu bar\n\n :return: Returns a list of Menu namedtuples. They have name, and route attributes.\n \"\"\"\n pages = []\n for p in get_pages() + app.plugin_menu_bar:\n if p.route.startswith(\"http\"):\n route = p.route\n else:\n route = url_for(\"views.static_html\", route=p.route)\n pages.append(Menu(title=p.title, route=route))\n return pages\n\n\ndef bypass_csrf_protection(f):\n \"\"\"\n Decorator that allows a route to bypass the need for a CSRF nonce on POST requests.\n\n This should be considered beta and may change in future versions.\n\n :param f: A function that needs to bypass CSRF protection\n :return: Returns a function with the _bypass_csrf attribute set which tells CTFd to not require CSRF protection.\n \"\"\"\n f._bypass_csrf = True\n return f\n\n\ndef get_plugin_names():\n modules = sorted(glob.glob(app.plugins_dir + \"/*\"))\n blacklist = {\"__pycache__\"}\n plugins = []\n for module in modules:\n module_name = os.path.basename(module)\n if os.path.isdir(module) and module_name not in blacklist:\n plugins.append(module_name)\n return plugins\n\n\ndef init_plugins(app):\n \"\"\"\n Searches for the load function in modules in the CTFd/plugins folder. This function is called with the current CTFd\n app as a parameter. 
This allows CTFd plugins to modify CTFd's behavior.\n\n :param app: A CTFd application\n :return:\n \"\"\"\n app.admin_plugin_scripts = []\n app.admin_plugin_stylesheets = []\n app.plugin_scripts = []\n app.plugin_stylesheets = []\n\n app.admin_plugin_menu_bar = []\n app.plugin_menu_bar = []\n app.plugins_dir = os.path.dirname(__file__)\n\n if app.config.get(\"SAFE_MODE\", False) is False:\n for plugin in get_plugin_names():\n module = \".\" + plugin\n module = importlib.import_module(module, package=\"CTFd.plugins\")\n module.load(app)\n print(\" * Loaded module, %s\" % module)\n\n app.jinja_env.globals.update(get_admin_plugin_menu_bar=get_admin_plugin_menu_bar)\n app.jinja_env.globals.update(get_user_page_menu_bar=get_user_page_menu_bar)\n", "path": "CTFd/plugins/__init__.py" } ]
diff --git a/CTFd/plugins/__init__.py b/CTFd/plugins/__init__.py index a53892450..889002d16 100644 --- a/CTFd/plugins/__init__.py +++ b/CTFd/plugins/__init__.py @@ -151,7 +151,6 @@ def get_user_page_menu_bar(): route = p.route else: route = url_for("views.static_html", route=p.route) - print(route) pages.append(Menu(title=p.title, route=route)) return pages diff --git a/CTFd/themes/admin/templates/scoreboard.html b/CTFd/themes/admin/templates/scoreboard.html index 8ff0cc9d0..74f0ad7b1 100644 --- a/CTFd/themes/admin/templates/scoreboard.html +++ b/CTFd/themes/admin/templates/scoreboard.html @@ -29,7 +29,7 @@ <h1>Scoreboard</h1> </div> </th> <th class="sort-col text-center"><b>Place</b></th> - <th class="sort-col"><b>Team</b></th> + <th class="sort-col"><b>{{ get_mode_as_word(capitalize=True) }}</b></th> <th class="sort-col"><b>Score</b></th> <th class="sort-col"><b>Visibility</b></th> </tr> diff --git a/CTFd/themes/core/templates/scoreboard.html b/CTFd/themes/core/templates/scoreboard.html index d657409f9..b0e8ce014 100644 --- a/CTFd/themes/core/templates/scoreboard.html +++ b/CTFd/themes/core/templates/scoreboard.html @@ -23,7 +23,7 @@ <h1>Scoreboard</h1> <thead> <tr> <td scope="col" width="10px"><b>Place</b></td> - <td scope="col"><b>Team</b></td> + <td scope="col"><b>{{ get_mode_as_word(capitalize=True) }}</b></td> <td scope="col"><b>Score</b></td> </tr> </thead>
svthalia__concrexit-2710
Cannot create new shifts: ValueError: 'Shift' instance needs to have a primary key value before this relationship can be used. Sentry Issue: [CONCREXIT-KK](https://sentry.io/organizations/thalia/issues/3788518453/?referrer=github_integration) ``` ValueError: 'Shift' instance needs to have a primary key value before this relationship can be used. (14 additional frame(s) were not displayed) ... File "django/forms/models.py", line 492, in _post_clean self.instance.full_clean(exclude=exclude, validate_unique=False) File "django/db/models/base.py", line 1452, in full_clean self.clean() File "sales/models/shift.py", line 69, in clean if self.orders.filter(created_at__lt=self.start): File "django/db/models/manager.py", line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "django/db/models/fields/related_descriptors.py", line 687, in get_queryset raise ValueError( ```
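The traceback above comes from Django model validation: a model form (for example the admin) calls `full_clean()` before the new row is saved, so `self.pk` is still `None` and the reverse relation `self.orders` cannot be queried yet. Below is a minimal sketch of the guard, mirroring the check this record's diff applies (`self.pk is not None and ...`); the `Order` model is a stripped-down stand-in for illustration only, not the project's real one.

```python
# Sketch only: guard reverse-relation access in clean() so validating an
# unsaved Shift never touches self.orders before a primary key exists.
# In a real project these models live inside a Django app.
from django.core.exceptions import ValidationError
from django.db import models


class Shift(models.Model):
    start = models.DateTimeField()
    end = models.DateTimeField()

    def clean(self):
        super().clean()
        errors = {}
        # self.pk is None for a Shift that has not been saved yet; querying
        # self.orders in that state raises the ValueError from the report.
        if self.pk is not None and self.orders.filter(created_at__lt=self.start):
            errors["start"] = "There are already orders created in this shift before this start time."
        if self.end and self.start and self.end <= self.start:
            errors["end"] = "End cannot be before start."
        if errors:
            raise ValidationError(errors)


class Order(models.Model):
    # Stand-in for the real sales Order model, only present so the reverse
    # accessor Shift.orders exists in this sketch.
    shift = models.ForeignKey(Shift, related_name="orders", on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)
```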
[ { "content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import Count, Q, Sum\nfrom django.db.models.expressions import Value\nfrom django.db.models.functions import Coalesce\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import AggregateProperty, RangeCheckProperty\n\nfrom activemembers.models import MemberGroup\nfrom payments.models import PaymentAmountField\nfrom sales.models.product import ProductList\n\n\nclass Shift(models.Model):\n class Meta:\n permissions = [\n (\"override_manager\", _(\"Can access all shifts as manager\")),\n ]\n\n objects = QueryablePropertiesManager()\n\n start = models.DateTimeField(\n verbose_name=_(\"start\"),\n blank=False,\n null=False,\n )\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"The end time is only indicative and does not prevent orders being created after the shift has ended. This only happens after locking the shift.\"\n ),\n )\n\n title = models.CharField(\n verbose_name=_(\"title\"), blank=True, null=True, max_length=100\n )\n\n product_list = models.ForeignKey(\n ProductList,\n verbose_name=_(\"product list\"),\n blank=False,\n null=False,\n on_delete=models.PROTECT,\n )\n\n managers = models.ManyToManyField(\n MemberGroup, verbose_name=_(\"managers\"), related_name=\"manager_shifts\"\n )\n\n locked = models.BooleanField(\n verbose_name=_(\"locked\"),\n blank=False,\n null=False,\n default=False,\n help_text=_(\n \"Prevent orders being changed or created for this shift. This will also clean up all unpaid orders in this shift.\"\n ),\n )\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.orders.filter(created_at__lt=self.start):\n errors.update(\n {\n \"start\": _(\n \"There are already orders created in this shift before this start time.\"\n )\n }\n )\n\n if self.end and self.start and self.end <= self.start:\n errors.update({\"end\": _(\"End cannot be before start.\")})\n\n if errors:\n raise ValidationError(errors)\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.locked:\n self.orders.filter(\n (Q(payment__isnull=True) & Q(total_amount__gt=0))\n | Q(order_items__isnull=True)\n ).delete()\n\n return super().save(force_insert, force_update, using, update_fields)\n\n active = RangeCheckProperty(\"start\", \"end\", timezone.now)\n\n total_revenue = AggregateProperty(\n Sum(\n Coalesce(\"orders___total_amount\", Value(0.00)),\n output_field=PaymentAmountField(allow_zero=True),\n )\n )\n\n total_revenue_paid = AggregateProperty(\n Sum(\n Coalesce(\"orders__payment__amount\", Value(0.00)),\n output_field=PaymentAmountField(allow_zero=True),\n )\n )\n\n num_orders = AggregateProperty(\n Count(\n \"orders\",\n )\n )\n\n num_orders_paid = AggregateProperty(\n Count(\n \"orders\",\n filter=Q(orders___is_free=True)\n | Q(\n orders__payment__isnull=False, # or the order is free\n ),\n )\n )\n\n @property\n def product_sales(self):\n qs = (\n self.orders.exclude(order_items__isnull=True)\n .values(\"order_items__product\")\n .annotate(sold=Sum(\"order_items__amount\"))\n .order_by()\n )\n return {\n item[0]: item[1]\n for item in qs.values_list(\"order_items__product__product__name\", \"sold\")\n }\n\n @property\n def payment_method_sales(self):\n qs = (\n self.orders.values(\"payment__type\")\n 
.annotate(sold=Sum(\"order_items__total\"))\n .order_by()\n )\n return {item[0]: item[1] for item in qs.values_list(\"payment__type\", \"sold\")}\n\n @property\n def user_orders_allowed(self):\n return self.selforderperiod_set.filter(\n start__lte=timezone.now(), end__gt=timezone.now()\n ).exists()\n\n @property\n def user_order_period(self):\n qs = self.selforderperiod_set.filter(\n start__lte=timezone.now(), end__gt=timezone.now()\n )\n if qs.exists():\n return qs.first()\n return None\n\n def __str__(self):\n if self.title and self.title != \"\":\n return f\"Shift {self.pk} - {self.title}\"\n return f\"Shift {self.pk}\"\n\n\nclass SelfOrderPeriod(models.Model):\n class Meta:\n verbose_name = _(\"self-order period\")\n verbose_name_plural = _(\"self-order periods\")\n ordering = [\"start\"]\n\n shift = models.ForeignKey(Shift, blank=False, null=False, on_delete=models.CASCADE)\n start = models.DateTimeField(\n verbose_name=_(\"start\"),\n blank=False,\n null=False,\n )\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"After this moment, users cannot place orders themselves anymore in this shift.\"\n ),\n )\n\n def __str__(self):\n return f\"Self-order period for shift {self.shift.pk}\"\n", "path": "website/sales/models/shift.py" } ]
[ { "content": "from django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.db.models import Count, Q, Sum\nfrom django.db.models.expressions import Value\nfrom django.db.models.functions import Coalesce\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom queryable_properties.managers import QueryablePropertiesManager\nfrom queryable_properties.properties import AggregateProperty, RangeCheckProperty\n\nfrom activemembers.models import MemberGroup\nfrom payments.models import PaymentAmountField\nfrom sales.models.product import ProductList\n\n\nclass Shift(models.Model):\n class Meta:\n permissions = [\n (\"override_manager\", _(\"Can access all shifts as manager\")),\n ]\n\n objects = QueryablePropertiesManager()\n\n start = models.DateTimeField(\n verbose_name=_(\"start\"),\n blank=False,\n null=False,\n )\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"The end time is only indicative and does not prevent orders being created after the shift has ended. This only happens after locking the shift.\"\n ),\n )\n\n title = models.CharField(\n verbose_name=_(\"title\"), blank=True, null=True, max_length=100\n )\n\n product_list = models.ForeignKey(\n ProductList,\n verbose_name=_(\"product list\"),\n blank=False,\n null=False,\n on_delete=models.PROTECT,\n )\n\n managers = models.ManyToManyField(\n MemberGroup, verbose_name=_(\"managers\"), related_name=\"manager_shifts\"\n )\n\n locked = models.BooleanField(\n verbose_name=_(\"locked\"),\n blank=False,\n null=False,\n default=False,\n help_text=_(\n \"Prevent orders being changed or created for this shift. This will also clean up all unpaid orders in this shift.\"\n ),\n )\n\n def clean(self):\n super().clean()\n errors = {}\n\n if self.pk is not None and self.orders.filter(created_at__lt=self.start):\n errors.update(\n {\n \"start\": _(\n \"There are already orders created in this shift before this start time.\"\n )\n }\n )\n\n if self.end and self.start and self.end <= self.start:\n errors.update({\"end\": _(\"End cannot be before start.\")})\n\n if errors:\n raise ValidationError(errors)\n\n def save(\n self, force_insert=False, force_update=False, using=None, update_fields=None\n ):\n if self.locked:\n self.orders.filter(\n (Q(payment__isnull=True) & Q(total_amount__gt=0))\n | Q(order_items__isnull=True)\n ).delete()\n\n return super().save(force_insert, force_update, using, update_fields)\n\n active = RangeCheckProperty(\"start\", \"end\", timezone.now)\n\n total_revenue = AggregateProperty(\n Sum(\n Coalesce(\"orders___total_amount\", Value(0.00)),\n output_field=PaymentAmountField(allow_zero=True),\n )\n )\n\n total_revenue_paid = AggregateProperty(\n Sum(\n Coalesce(\"orders__payment__amount\", Value(0.00)),\n output_field=PaymentAmountField(allow_zero=True),\n )\n )\n\n num_orders = AggregateProperty(\n Count(\n \"orders\",\n )\n )\n\n num_orders_paid = AggregateProperty(\n Count(\n \"orders\",\n filter=Q(orders___is_free=True)\n | Q(\n orders__payment__isnull=False, # or the order is free\n ),\n )\n )\n\n @property\n def product_sales(self):\n qs = (\n self.orders.exclude(order_items__isnull=True)\n .values(\"order_items__product\")\n .annotate(sold=Sum(\"order_items__amount\"))\n .order_by()\n )\n return {\n item[0]: item[1]\n for item in qs.values_list(\"order_items__product__product__name\", \"sold\")\n }\n\n @property\n def payment_method_sales(self):\n qs = (\n 
self.orders.values(\"payment__type\")\n .annotate(sold=Sum(\"order_items__total\"))\n .order_by()\n )\n return {item[0]: item[1] for item in qs.values_list(\"payment__type\", \"sold\")}\n\n @property\n def user_orders_allowed(self):\n return self.selforderperiod_set.filter(\n start__lte=timezone.now(), end__gt=timezone.now()\n ).exists()\n\n @property\n def user_order_period(self):\n qs = self.selforderperiod_set.filter(\n start__lte=timezone.now(), end__gt=timezone.now()\n )\n if qs.exists():\n return qs.first()\n return None\n\n def __str__(self):\n if self.title and self.title != \"\":\n return f\"Shift {self.pk} - {self.title}\"\n return f\"Shift {self.pk}\"\n\n\nclass SelfOrderPeriod(models.Model):\n class Meta:\n verbose_name = _(\"self-order period\")\n verbose_name_plural = _(\"self-order periods\")\n ordering = [\"start\"]\n\n shift = models.ForeignKey(Shift, blank=False, null=False, on_delete=models.CASCADE)\n start = models.DateTimeField(\n verbose_name=_(\"start\"),\n blank=False,\n null=False,\n )\n end = models.DateTimeField(\n verbose_name=_(\"end\"),\n blank=False,\n null=False,\n help_text=_(\n \"After this moment, users cannot place orders themselves anymore in this shift.\"\n ),\n )\n\n def __str__(self):\n return f\"Self-order period for shift {self.shift.pk}\"\n", "path": "website/sales/models/shift.py" } ]
diff --git a/website/sales/models/shift.py b/website/sales/models/shift.py index 7d71aed94..261990639 100644 --- a/website/sales/models/shift.py +++ b/website/sales/models/shift.py @@ -66,7 +66,7 @@ def clean(self): super().clean() errors = {} - if self.orders.filter(created_at__lt=self.start): + if self.pk is not None and self.orders.filter(created_at__lt=self.start): errors.update( { "start": _(
ivy-llc__ivy-22632
amin: add an `amin` (minimum over the given axes) reduction to the Paddle frontend's `paddle.tensor.math` module.
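The record's diff implements this as a thin wrapper over `ivy.min`, in the same style as the existing `amax`/`max` wrappers in the file. A hedged sketch is below; the decorator names and the `ivy.min` signature are taken from the surrounding record, not re-verified against a live Ivy checkout.

```python
# Sketch of a Paddle-frontend amin wrapper in the style of the existing
# reductions in ivy/functional/frontends/paddle/tensor/math.py.
import ivy
from ivy.func_wrapper import with_supported_dtypes
from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back


@with_supported_dtypes(
    {"2.5.1 and below": ("float32", "float64", "int32", "int64")}, "paddle"
)
@to_ivy_arrays_and_back
def amin(x, axis=None, keepdim=False, name=None):
    # Paddle spells the flag keepdim; ivy expects keepdims.
    return ivy.min(x, axis=axis, keepdims=keepdim)
```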
[ { "content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acos(x, name=None):\n return ivy.acos(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acosh(x, name=None):\n return ivy.acosh(x)\n\n\n@with_unsupported_dtypes(\n {\"2.5.1 and below\": (\"bool\", \"unsigned\", \"int8\", \"float16\", \"bfloat16\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef add(x, y, name=None):\n return ivy.add(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef addmm(input, x, y, beta=1.0, alpha=1.0, name=None):\n value = alpha * ivy.matmul(x, y) + (beta * input)\n return value\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef amax(x, axis=None, keepdims=False):\n if axis is None:\n return ivy.max(x)\n if isinstance(axis, int):\n axis = [axis]\n for i in range(len(axis)):\n if axis[i] < 0:\n axis[i] += x.ndim\n for i in axis:\n if i < 0 or i >= x.ndim:\n raise ValueError(\"axis {} is out of range [-{}:{}]\".format(i, 0, x.ndim))\n return ivy.max(x, axis=axis, keepdims=keepdims)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef angle(x, name=None):\n return ivy.angle(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": \"bool\"}, \"paddle\")\n@to_ivy_arrays_and_back\ndef any(x, axis=None, keepdim=False, name=None):\n return ivy.any(x, axis=axis, keepdims=keepdim)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asin(x, name=None):\n return ivy.asin(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asinh(x, name=None):\n return ivy.asinh(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan(x, name=None):\n return ivy.atan(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan2(x, y, name=None):\n return ivy.atan2(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atanh(x, name=None):\n return ivy.atanh(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef ceil(x, name=None):\n return ivy.ceil(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"int16\", \"float16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef conj(x, name=None):\n return ivy.conj(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cos(x, name=None):\n return ivy.cos(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cosh(x, name=None):\n return 
ivy.cosh(x)\n\n\n@with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cumprod(x, dim=None, dtype=None, name=None):\n return ivy.cumprod(x, axis=dim, dtype=dtype)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef deg2rad(x, name=None):\n return ivy.deg2rad(x)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef diff(x, n=1, axis=-1, prepend=None, append=None, name=None):\n return ivy.diff(x, n=n, axis=axis, prepend=prepend, append=append)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef divide(x, y, name=None):\n return ivy.divide(x, y)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef erf(x, name=None):\n return ivy.erf(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef exp(x, name=None):\n return ivy.exp(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef expm1(x, name=None):\n return ivy.expm1(x)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"bfloat16\", \"float32\", \"float64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef floor(x, name=None):\n return ivy.floor(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": \"bfloat16\"}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fmax(x, y, name=None):\n return ivy.fmax(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": \"bfloat16\"}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fmin(x, y, name=None):\n return ivy.fmin(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef frac(x, name=None):\n y = ivy.trunc(x)\n return ivy.subtract(x, y)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"int32\", \"int64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef gcd(x, y, name=None):\n return ivy.gcd(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef heaviside(x, y, name=None):\n return ivy.heaviside(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef isfinite(x, name=None):\n return ivy.isfinite(x)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef isinf(x, name=None):\n return ivy.isinf(x)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef isnan(x, name=None):\n return ivy.isnan(x)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef kron(x, y, name=None):\n return ivy.kron(x, y)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"int32\", \"int64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef lcm(x, y, name=None):\n return ivy.lcm(x, y)\n\n\n@with_supported_dtypes({\"2.5.1 and 
below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef lerp(x, y, weight, name=None):\n return ivy.lerp(x, y, weight)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef lgamma(x, name=None):\n return ivy.lgamma(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log(x, name=None):\n return ivy.log(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log1p(x, name=None):\n return ivy.log1p(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log2(x, name=None):\n return ivy.log2(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef logit(x, eps=None, name=None):\n return ivy.logit(x, eps=eps)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef max(x, axis=None, keepdim=False, name=None):\n return ivy.max(x, axis=axis, keepdims=keepdim)\n\n\n# maximum\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef maximum(x, y, name=None):\n return ivy.maximum(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef minimum(x, y, name=None):\n return ivy.minimum(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef mm(input, mat2, name=None):\n return ivy.matmul(input, mat2)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef multiply(x, y, name=None):\n return ivy.multiply(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int8\", \"int16\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef neg(x, name=None):\n return ivy.negative(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef outer(x, y, name=None):\n return ivy.outer(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pow(x, y, name=None):\n return ivy.pow(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef prod(x, axis=None, keepdim=False, dtype=None, name=None):\n return ivy.prod(x, axis=axis, keepdims=keepdim, dtype=dtype)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef rad2deg(x, name=None):\n return ivy.rad2deg(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef reciprocal(x, name=None):\n return ivy.reciprocal(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef remainder(x, y, name=None):\n return ivy.remainder(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef round(x, name=None):\n return 
ivy.round(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef round_(x, name=None):\n return ivy.inplace_update(x, round(x))\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef rsqrt(x, name=None):\n return 1 / ivy.sqrt(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef rsqrt_(x, name=None):\n return ivy.inplace_update(x, ivy.reciprocal(ivy.inplace_update(x, ivy.sqrt(x))))\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sgn(x, name=None):\n return ivy.sign(x, np_variant=True)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sign(x, name=None):\n return ivy.sign(x, np_variant=False)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sin(x, name=None):\n return ivy.sin(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sinh(x, name=None):\n return ivy.sinh(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sqrt(x, name=None):\n return ivy.sqrt(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef square(x, name=None):\n return ivy.square(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef stanh(x, scale_a=0.67, scale_b=1.7159, name=None):\n # TODO this function will be simplified as soon as the ivy.stanh(x,a,b) is added\n exp_ax = ivy.exp(ivy.multiply(scale_a, x))\n exp_minus_ax = ivy.exp(ivy.multiply(-scale_a, x))\n numerator = ivy.subtract(exp_ax, exp_minus_ax)\n denominator = ivy.add(exp_ax, exp_minus_ax)\n ret = ivy.multiply(scale_b, ivy.divide(numerator, denominator))\n return ret\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef subtract(x, y, name=None):\n return ivy.subtract(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int6\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef take(\n x,\n index,\n mode=\"raise\",\n name=None,\n):\n if mode not in [\"raise\", \"wrap\", \"clip\"]:\n raise ValueError(\n \"'mode' in 'take' should be 'raise', 'wrap', 'clip', but received {}.\"\n .format(mode)\n )\n x = ivy.reshape(x, (-1,))\n if mode == \"clip\":\n index = ivy.clip(index, 0, x.shape[-1] - 1)\n elif mode == \"wrap\":\n index = ivy.where(index < 0, index % x.shape[-1], index)\n index = ivy.where(index >= x.shape[-1], index % x.shape[-1], index)\n return ivy.gather(x, index, axis=0)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tan(x, name=None):\n return ivy.tan(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanh(x, name=None):\n return ivy.tanh(x)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef trunc(x, name=None):\n return ivy.trunc(x)\n", "path": "ivy/functional/frontends/paddle/tensor/math.py" } ]
[ { "content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acos(x, name=None):\n return ivy.acos(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acosh(x, name=None):\n return ivy.acosh(x)\n\n\n@with_unsupported_dtypes(\n {\"2.5.1 and below\": (\"bool\", \"unsigned\", \"int8\", \"float16\", \"bfloat16\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef add(x, y, name=None):\n return ivy.add(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef addmm(input, x, y, beta=1.0, alpha=1.0, name=None):\n value = alpha * ivy.matmul(x, y) + (beta * input)\n return value\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef amax(x, axis=None, keepdims=False):\n if axis is None:\n return ivy.max(x)\n if isinstance(axis, int):\n axis = [axis]\n for i in range(len(axis)):\n if axis[i] < 0:\n axis[i] += x.ndim\n for i in axis:\n if i < 0 or i >= x.ndim:\n raise ValueError(\"axis {} is out of range [-{}:{}]\".format(i, 0, x.ndim))\n return ivy.max(x, axis=axis, keepdims=keepdims)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef angle(x, name=None):\n return ivy.angle(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": \"bool\"}, \"paddle\")\n@to_ivy_arrays_and_back\ndef any(x, axis=None, keepdim=False, name=None):\n return ivy.any(x, axis=axis, keepdims=keepdim)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asin(x, name=None):\n return ivy.asin(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asinh(x, name=None):\n return ivy.asinh(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan(x, name=None):\n return ivy.atan(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan2(x, y, name=None):\n return ivy.atan2(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atanh(x, name=None):\n return ivy.atanh(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef ceil(x, name=None):\n return ivy.ceil(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"int16\", \"float16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef conj(x, name=None):\n return ivy.conj(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cos(x, name=None):\n return ivy.cos(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cosh(x, name=None):\n return 
ivy.cosh(x)\n\n\n@with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cumprod(x, dim=None, dtype=None, name=None):\n return ivy.cumprod(x, axis=dim, dtype=dtype)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef deg2rad(x, name=None):\n return ivy.deg2rad(x)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef diff(x, n=1, axis=-1, prepend=None, append=None, name=None):\n return ivy.diff(x, n=n, axis=axis, prepend=prepend, append=append)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef divide(x, y, name=None):\n return ivy.divide(x, y)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef erf(x, name=None):\n return ivy.erf(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef exp(x, name=None):\n return ivy.exp(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef expm1(x, name=None):\n return ivy.expm1(x)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"bfloat16\", \"float32\", \"float64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef floor(x, name=None):\n return ivy.floor(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": \"bfloat16\"}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fmax(x, y, name=None):\n return ivy.fmax(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": \"bfloat16\"}, \"paddle\")\n@to_ivy_arrays_and_back\ndef fmin(x, y, name=None):\n return ivy.fmin(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef frac(x, name=None):\n y = ivy.trunc(x)\n return ivy.subtract(x, y)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"int32\", \"int64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef gcd(x, y, name=None):\n return ivy.gcd(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef heaviside(x, y, name=None):\n return ivy.heaviside(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef isfinite(x, name=None):\n return ivy.isfinite(x)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef isinf(x, name=None):\n return ivy.isinf(x)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef isnan(x, name=None):\n return ivy.isnan(x)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef kron(x, y, name=None):\n return ivy.kron(x, y)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"int32\", \"int64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef lcm(x, y, name=None):\n return ivy.lcm(x, y)\n\n\n@with_supported_dtypes({\"2.5.1 and 
below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef lerp(x, y, weight, name=None):\n return ivy.lerp(x, y, weight)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef lgamma(x, name=None):\n return ivy.lgamma(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log(x, name=None):\n return ivy.log(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log1p(x, name=None):\n return ivy.log1p(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log2(x, name=None):\n return ivy.log2(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef logit(x, eps=None, name=None):\n return ivy.logit(x, eps=eps)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef max(x, axis=None, keepdim=False, name=None):\n return ivy.max(x, axis=axis, keepdims=keepdim)\n\n\n# maximum\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef maximum(x, y, name=None):\n return ivy.maximum(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef minimum(x, y, name=None):\n return ivy.minimum(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef mm(input, mat2, name=None):\n return ivy.matmul(input, mat2)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef multiply(x, y, name=None):\n return ivy.multiply(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int8\", \"int16\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef neg(x, name=None):\n return ivy.negative(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef outer(x, y, name=None):\n return ivy.outer(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pow(x, y, name=None):\n return ivy.pow(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef prod(x, axis=None, keepdim=False, dtype=None, name=None):\n return ivy.prod(x, axis=axis, keepdims=keepdim, dtype=dtype)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef rad2deg(x, name=None):\n return ivy.rad2deg(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef reciprocal(x, name=None):\n return ivy.reciprocal(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef remainder(x, y, name=None):\n return ivy.remainder(x, y)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef round(x, name=None):\n return 
ivy.round(x)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef round_(x, name=None):\n return ivy.inplace_update(x, round(x))\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef rsqrt(x, name=None):\n return 1 / ivy.sqrt(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef rsqrt_(x, name=None):\n return ivy.inplace_update(x, ivy.reciprocal(ivy.inplace_update(x, ivy.sqrt(x))))\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sgn(x, name=None):\n return ivy.sign(x, np_variant=True)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sign(x, name=None):\n return ivy.sign(x, np_variant=False)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sin(x, name=None):\n return ivy.sin(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sinh(x, name=None):\n return ivy.sinh(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sqrt(x, name=None):\n return ivy.sqrt(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef square(x, name=None):\n return ivy.square(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef stanh(x, scale_a=0.67, scale_b=1.7159, name=None):\n # TODO this function will be simplified as soon as the ivy.stanh(x,a,b) is added\n exp_ax = ivy.exp(ivy.multiply(scale_a, x))\n exp_minus_ax = ivy.exp(ivy.multiply(-scale_a, x))\n numerator = ivy.subtract(exp_ax, exp_minus_ax)\n denominator = ivy.add(exp_ax, exp_minus_ax)\n ret = ivy.multiply(scale_b, ivy.divide(numerator, denominator))\n return ret\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef subtract(x, y, name=None):\n return ivy.subtract(x, y)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int6\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef take(\n x,\n index,\n mode=\"raise\",\n name=None,\n):\n if mode not in [\"raise\", \"wrap\", \"clip\"]:\n raise ValueError(\n \"'mode' in 'take' should be 'raise', 'wrap', 'clip', but received {}.\"\n .format(mode)\n )\n x = ivy.reshape(x, (-1,))\n if mode == \"clip\":\n index = ivy.clip(index, 0, x.shape[-1] - 1)\n elif mode == \"wrap\":\n index = ivy.where(index < 0, index % x.shape[-1], index)\n index = ivy.where(index >= x.shape[-1], index % x.shape[-1], index)\n return ivy.gather(x, index, axis=0)\n\n\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tan(x, name=None):\n return ivy.tan(x)\n\n\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanh(x, name=None):\n return ivy.tanh(x)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef trunc(x, name=None):\n return ivy.trunc(x)\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", 
\"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef amin(x, axis=None, keepdim=False, name=None): \n return ivy.min(x, axis=axis, keepdims=keepdim)\n", "path": "ivy/functional/frontends/paddle/tensor/math.py" } ]
diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py index 75242ca8e62a4..4df4724f52019 100644 --- a/ivy/functional/frontends/paddle/tensor/math.py +++ b/ivy/functional/frontends/paddle/tensor/math.py @@ -502,3 +502,10 @@ def tanh(x, name=None): @to_ivy_arrays_and_back def trunc(x, name=None): return ivy.trunc(x) + +@with_supported_dtypes( + {"2.5.1 and below": ("float32", "float64", "int32", "int64")}, "paddle" +) +@to_ivy_arrays_and_back +def amin(x, axis=None, keepdim=False, name=None): + return ivy.min(x, axis=axis, keepdims=keepdim) diff --git a/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_math.py b/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_math.py index 50b4e48de810d..2bac85f674262 100644 --- a/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_math.py +++ b/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_math.py @@ -2137,3 +2137,36 @@ def test_paddle_stanh( scale_a=scale_a, scale_b=scale_b, ) + + +# amin +@handle_frontend_test( + fn_tree="paddle.tensor.math.amin", + dtype_and_x=helpers.dtype_values_axis( + available_dtypes=helpers.get_dtypes("valid"), + valid_axis=True, + ), + keepdim=st.booleans(), +) +def test_paddle_amin( + *, + dtype_and_x, + keepdim, + on_device, + fn_tree, + backend_fw, + frontend, + test_flags, +): + input_dtype, x, axis = dtype_and_x + helpers.test_frontend_function( + input_dtypes=input_dtype, + frontend=frontend, + backend_to_test=backend_fw, + fn_tree=fn_tree, + test_flags=test_flags, + on_device=on_device, + x=x[0], + axis = axis, + keepdim = keepdim, + )
spack__spack-5099
spack find : always prompt 0 installed packages On a clean `develop` checkout : ``` $ git clone https://github.com/LLNL/spack.git Cloning into 'spack'... remote: Counting objects: 25613, done. remote: Compressing objects: 100% (42/42), done. remote: Total 25613 (delta 12), reused 3 (delta 3), pack-reused 25557 Receiving objects: 100% (25613/25613), 6.65 MiB | 6.46 MiB/s, done. Resolving deltas: 100% (13031/13031), done. Checking connectivity... done. $ cd spack $ . share/spack/setup-env.sh $ spack compilers ==> Available compilers -- gcc ---------------------------------------------------------- [email protected] $ spack install zlib ==> Installing zlib ==> Trying to fetch from file:///home/mculpo/production/spack-mirror/zlib/zlib-1.2.8.tar.gz ######################################################################## 100,0% ==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix/zlib-1.2.8.tar.gz ==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix ==> No patches needed for zlib ==> Building zlib ==> Successfully installed zlib Fetch: 0.01s. Build: 3.69s. Total: 3.70s. [+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix $ spack find ==> 0 installed packages. $ spack install szip ==> Installing szip ==> Trying to fetch from file:///home/mculpo/production/spack-mirror/szip/szip-2.1.tar.gz ######################################################################## 100,0% ==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq/szip-2.1.tar.gz ==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq ==> No patches needed for szip ==> Building szip ==> Successfully installed szip Fetch: 0.01s. Build: 8.09s. Total: 8.10s. [+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq $ spack find ==> 0 installed packages. ``` The db seems to be written correctly : ``` database: installs: d6pdl6xvnvap6ihrqcqtgvweghbszmix: explicit: true installed: true path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix ref_count: 0 spec: zlib: arch: linux-x86_64 compiler: name: gcc version: '4.8' dependencies: {} namespace: builtin parameters: cflags: [] cppflags: [] cxxflags: [] fflags: [] ldflags: [] ldlibs: [] version: 1.2.8 esfmhl54wbdb7nnnip6y6jbxlbmxs2jq: explicit: true installed: true path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq ref_count: 0 spec: szip: arch: linux-x86_64 compiler: name: gcc version: '4.8' dependencies: {} namespace: builtin parameters: cflags: [] cppflags: [] cxxflags: [] fflags: [] ldflags: [] ldlibs: [] version: '2.1' version: 0.9.1 ```
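The transcript shows that the on-disk database does contain both installs while `spack find` still reports zero, which points at the read path rather than the write path. One quick way to confirm this from outside Spack is to parse the index file directly and count the records. A hedged debugging sketch follows: the index location is an assumption (adjust it to wherever your install tree's `.spack-db/index.yaml` lives), and the key layout is taken from the YAML quoted above.

```python
# Debugging sketch: count installed records straight from the database
# index that `spack find` should be reading.
import yaml

INDEX = "opt/spack/.spack-db/index.yaml"  # assumed path, adjust as needed

with open(INDEX) as f:
    db = yaml.safe_load(f)

installs = db["database"]["installs"]
installed = {h: rec for h, rec in installs.items() if rec.get("installed")}
print("records in index:   ", len(installs))
print("marked as installed:", len(installed))
for h, rec in installed.items():
    name = next(iter(rec["spec"]))  # each spec is a one-key mapping {name: {...}}
    print("  {} {}".format(name, h[:8]))
```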
[ { "content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass H5zZfp(MakefilePackage):\n \"\"\"A highly flexible floating point and integer compression plugin for the\n HDF5 library using ZFP compression.\"\"\"\n\n homepage = \"http://h5z-zfp.readthedocs.io/en/latest\"\n url = \"https://github.com/LLNL/H5Z-ZFP\"\n\n version('develop', git='https://github.com/LLNL/H5Z-ZFP.git', tag='master')\n version('0.7.0', git='https://github.com/LLNL/H5Z-ZFP.git', commit='58ac811')\n\n variant('fortran', default=True, description='Enable Fortran support')\n\n depends_on('hdf5')\n# depends_on('zfp bsws=8')\n depends_on('zfp')\n\n @property\n def make_defs(self):\n make_defs = [\n 'PREFIX=%s' % prefix,\n 'CC=%s' % spack_cc,\n 'HDF5_HOME=%s' % self.spec['hdf5'].prefix,\n 'ZFP_HOME=%s' % self.spec['zfp'].prefix]\n\n if '+fortran' in self.spec and spack_fc:\n make_defs += ['FC=%s' % spack_fc]\n\n return make_defs\n\n @property\n def build_targets(self):\n targets = ['all']\n return self.make_defs + targets\n\n @property\n def install_targets(self):\n make_args = ['install']\n return make_args + self.make_defs\n", "path": "var/spack/repos/builtin/packages/h5z-zfp/package.py" } ]
[ { "content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass H5zZfp(MakefilePackage):\n \"\"\"A highly flexible floating point and integer compression plugin for the\n HDF5 library using ZFP compression.\"\"\"\n\n homepage = \"http://h5z-zfp.readthedocs.io/en/latest\"\n url = \"https://github.com/LLNL/H5Z-ZFP\"\n\n version('develop', git='https://github.com/LLNL/H5Z-ZFP.git', tag='master')\n version('0.7.0', git='https://github.com/LLNL/H5Z-ZFP.git', commit='58ac811')\n\n variant('fortran', default=True, description='Enable Fortran support')\n\n depends_on('hdf5')\n depends_on('zfp bsws=8')\n\n @property\n def make_defs(self):\n make_defs = [\n 'PREFIX=%s' % prefix,\n 'CC=%s' % spack_cc,\n 'HDF5_HOME=%s' % self.spec['hdf5'].prefix,\n 'ZFP_HOME=%s' % self.spec['zfp'].prefix]\n\n if '+fortran' in self.spec and spack_fc:\n make_defs += ['FC=%s' % spack_fc]\n\n return make_defs\n\n @property\n def build_targets(self):\n targets = ['all']\n return self.make_defs + targets\n\n @property\n def install_targets(self):\n make_args = ['install']\n return make_args + self.make_defs\n", "path": "var/spack/repos/builtin/packages/h5z-zfp/package.py" } ]
diff --git a/var/spack/repos/builtin/packages/h5z-zfp/package.py b/var/spack/repos/builtin/packages/h5z-zfp/package.py index 0063c2fd37d3c1..ddb03c64e0cbaa 100644 --- a/var/spack/repos/builtin/packages/h5z-zfp/package.py +++ b/var/spack/repos/builtin/packages/h5z-zfp/package.py @@ -38,8 +38,7 @@ class H5zZfp(MakefilePackage): variant('fortran', default=True, description='Enable Fortran support') depends_on('hdf5') -# depends_on('zfp bsws=8') - depends_on('zfp') + depends_on('zfp bsws=8') @property def make_defs(self):
buildbot__buildbot-5970
lazylogfiles broken on 3.0.2 I'm upgrading from 0.8.x to 3.0.2, and have this: ```python test_factory = util.BuildFactory() test_factory.addStep(ShellCommand( env={'PATH' : bin_path}, command=["runurl", bb_url + "bb-dependencies.sh"], decodeRC={0 : SUCCESS, 1 : FAILURE, 2 : WARNINGS, 3 : SKIPPED }, haltOnFailure=True, logEnviron=False, lazylogfiles=True, description=["installing dependencies"], descriptionDone=["installed dependencies"])) ``` Which gives me: ``` 2021-03-26 18:38:03+0000 [-] Invalid argument(s) passed to ShellCommand: lazylogfiles ``` According to the 3.0.2 documentation, `lazylogfiles` is a valid parameter for `ShellCommand` http://docs.buildbot.net/3.0.2/manual/configuration/steps/shell_command.html I originally opened a question for this (https://github.com/buildbot/buildbot/discussions/5954) but was told to open a bug instead.
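For reference, the intended use of the option per the ShellCommand documentation cited above: `logfiles` maps log names to worker-side files that are streamed into the build, and `lazylogfiles=True` defers creating those extra logs until they actually produce output. The following is a minimal sketch of that usage, not taken from the report; the command, log name, and file path are illustrative.

```python
from buildbot.plugins import steps, util

factory = util.BuildFactory()
factory.addStep(steps.ShellCommand(
    command=["make", "test"],                 # illustrative command
    logfiles={"testlog": "build/test.log"},   # extra worker-side file to follow
    # with lazylogfiles=True the 'testlog' log is only created once the file
    # starts producing output, instead of always appearing (possibly empty)
    lazylogfiles=True,
))
```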
[ { "content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport re\n\nfrom twisted.internet import defer\nfrom twisted.python.deprecate import deprecatedModuleAttribute\nfrom twisted.python.versions import Version\n\nfrom buildbot import config\nfrom buildbot.process import buildstep\nfrom buildbot.process import logobserver\n# for existing configurations that import WithProperties from here. We like\n# to move this class around just to keep our readers guessing.\nfrom buildbot.process.properties import WithProperties\nfrom buildbot.process.results import FAILURE\nfrom buildbot.process.results import SUCCESS\nfrom buildbot.process.results import WARNINGS\nfrom buildbot.process.results import Results\nfrom buildbot.process.results import worst_status\nfrom buildbot.steps.worker import CompositeStepMixin\nfrom buildbot.util import join_list\n\n_hush_pyflakes = [\n WithProperties,\n]\ndel _hush_pyflakes\n\n\nclass TreeSize(buildstep.ShellMixin, buildstep.BuildStep):\n name = \"treesize\"\n command = [\"du\", \"-s\", \"-k\", \".\"]\n description = [\"measuring\", \"tree\", \"size\"]\n\n def __init__(self, **kwargs):\n kwargs = self.setupShellMixin(kwargs)\n super().__init__(**kwargs)\n self.observer = logobserver.BufferLogObserver(wantStdout=True,\n wantStderr=True)\n self.addLogObserver('stdio', self.observer)\n\n @defer.inlineCallbacks\n def run(self):\n cmd = yield self.makeRemoteShellCommand()\n\n yield self.runCommand(cmd)\n\n stdio_log = yield self.getLog('stdio')\n yield stdio_log.finish()\n\n out = self.observer.getStdout()\n m = re.search(r'^(\\d+)', out)\n\n kib = None\n if m:\n kib = int(m.group(1))\n self.setProperty(\"tree-size-KiB\", kib, \"treesize\")\n self.descriptionDone = \"treesize {} KiB\".format(kib)\n else:\n self.descriptionDone = \"treesize unknown\"\n\n if cmd.didFail():\n return FAILURE\n if kib is None:\n return WARNINGS # not sure how 'du' could fail, but whatever\n return SUCCESS\n\n\nclass SetPropertyFromCommand(buildstep.ShellMixin, buildstep.BuildStep):\n name = \"setproperty\"\n renderables = ['property']\n\n def __init__(self, property=None, extract_fn=None, strip=True,\n includeStdout=True, includeStderr=False, **kwargs):\n\n kwargs = self.setupShellMixin(kwargs)\n\n self.property = property\n self.extract_fn = extract_fn\n self.strip = strip\n self.includeStdout = includeStdout\n self.includeStderr = includeStderr\n\n if not ((property is not None) ^ (extract_fn is not None)):\n config.error(\n \"Exactly one of property and extract_fn must be set\")\n\n super().__init__(**kwargs)\n\n if self.extract_fn:\n self.includeStderr = True\n\n self.observer = logobserver.BufferLogObserver(\n wantStdout=self.includeStdout,\n wantStderr=self.includeStderr)\n self.addLogObserver('stdio', self.observer)\n\n @defer.inlineCallbacks\n def run(self):\n cmd = yield self.makeRemoteShellCommand()\n\n yield 
self.runCommand(cmd)\n\n stdio_log = yield self.getLog('stdio')\n yield stdio_log.finish()\n\n property_changes = {}\n\n if self.property:\n if cmd.didFail():\n return FAILURE\n result = self.observer.getStdout()\n if self.strip:\n result = result.strip()\n propname = self.property\n self.setProperty(propname, result, \"SetPropertyFromCommand Step\")\n property_changes[propname] = result\n else:\n new_props = self.extract_fn(cmd.rc,\n self.observer.getStdout(),\n self.observer.getStderr())\n for k, v in new_props.items():\n self.setProperty(k, v, \"SetPropertyFromCommand Step\")\n property_changes = new_props\n\n props_set = [\"{}: {}\".format(k, repr(v))\n for k, v in sorted(property_changes.items())]\n yield self.addCompleteLog('property changes', \"\\n\".join(props_set))\n\n if len(property_changes) > 1:\n self.descriptionDone = '{} properties set'.format(len(property_changes))\n elif len(property_changes) == 1:\n self.descriptionDone = 'property \\'{}\\' set'.format(list(property_changes)[0])\n if cmd.didFail():\n return FAILURE\n return SUCCESS\n\n\nSetPropertyFromCommandNewStyle = SetPropertyFromCommand\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use SetPropertyFromCommand instead. This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"SetPropertyFromCommandNewStyle\",\n)\n\n\nSetProperty = SetPropertyFromCommand\ndeprecatedModuleAttribute(Version(\"Buildbot\", 0, 8, 8),\n \"It has been renamed to SetPropertyFromCommand\",\n \"buildbot.steps.shell\", \"SetProperty\")\n\n\nclass ShellCommand(buildstep.ShellMixin, buildstep.BuildStep):\n name = 'shell'\n\n def __init__(self, **kwargs):\n\n if self.__class__ is ShellCommand:\n if 'command' not in kwargs:\n config.error(\"ShellCommand's `command' argument is not specified\")\n\n # check validity of arguments being passed to RemoteShellCommand\n valid_rsc_args = [\n 'command',\n 'env',\n 'want_stdout',\n 'want_stderr',\n 'timeout',\n 'maxTime',\n 'sigtermTime',\n 'logfiles',\n 'usePTY',\n 'logEnviron',\n 'collectStdout',\n 'collectStderr',\n 'interruptSignal',\n 'initialStdin',\n 'decodeRC',\n 'stdioLogName',\n 'workdir',\n ] + buildstep.BuildStep.parms\n\n invalid_args = []\n for arg in kwargs:\n if arg not in valid_rsc_args:\n invalid_args.append(arg)\n\n if invalid_args:\n config.error(\"Invalid argument(s) passed to ShellCommand: \" +\n ', '.join(invalid_args))\n\n kwargs = self.setupShellMixin(kwargs)\n super().__init__(**kwargs)\n\n @defer.inlineCallbacks\n def run(self):\n cmd = yield self.makeRemoteShellCommand()\n yield self.runCommand(cmd)\n return cmd.results()\n\n\nShellCommandNewStyle = ShellCommand\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use ShellCommand instead. This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"ShellCommandNewStyle\",\n)\n\n\nclass Configure(ShellCommand):\n name = \"configure\"\n haltOnFailure = 1\n flunkOnFailure = 1\n description = \"configuring\"\n descriptionDone = \"configure\"\n command = [\"./configure\"]\n\n\nConfigureNewStyle = Configure\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use Configure instead. 
This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"ConfigureNewStyle\",\n)\n\n\nclass WarningCountingShellCommand(buildstep.ShellMixin, CompositeStepMixin, buildstep.BuildStep):\n renderables = [\n 'suppressionFile',\n 'suppressionList',\n 'warningPattern',\n 'directoryEnterPattern',\n 'directoryLeavePattern',\n 'maxWarnCount',\n ]\n\n warnCount = 0\n warningPattern = '(?i).*warning[: ].*'\n # The defaults work for GNU Make.\n directoryEnterPattern = (\"make.*: Entering directory \"\n \"[\\u2019\\\"`'](.*)[\\u2019'`\\\"]\")\n directoryLeavePattern = \"make.*: Leaving directory\"\n suppressionFile = None\n\n commentEmptyLineRe = re.compile(r\"^\\s*(#.*)?$\")\n suppressionLineRe = re.compile(\n r\"^\\s*(.+?)\\s*:\\s*(.+?)\\s*(?:[:]\\s*([0-9]+)(?:-([0-9]+))?\\s*)?$\")\n\n def __init__(self,\n warningPattern=None, warningExtractor=None, maxWarnCount=None,\n directoryEnterPattern=None, directoryLeavePattern=None,\n suppressionFile=None, suppressionList=None, **kwargs):\n # See if we've been given a regular expression to use to match\n # warnings. If not, use a default that assumes any line with \"warning\"\n # present is a warning. This may lead to false positives in some cases.\n if warningPattern:\n self.warningPattern = warningPattern\n if directoryEnterPattern:\n self.directoryEnterPattern = directoryEnterPattern\n if directoryLeavePattern:\n self.directoryLeavePattern = directoryLeavePattern\n if suppressionFile:\n self.suppressionFile = suppressionFile\n # self.suppressions is already taken, so use something else\n self.suppressionList = suppressionList\n if warningExtractor:\n self.warningExtractor = warningExtractor\n else:\n self.warningExtractor = WarningCountingShellCommand.warnExtractWholeLine\n self.maxWarnCount = maxWarnCount\n\n if self.__class__ is WarningCountingShellCommand and not kwargs.get('command'):\n # WarningCountingShellCommand class is directly instantiated.\n # Explicitly check that command is set to prevent runtime error\n # later.\n config.error(\"WarningCountingShellCommand's 'command' argument is not specified\")\n\n kwargs = self.setupShellMixin(kwargs)\n super().__init__(**kwargs)\n\n self.suppressions = []\n self.directoryStack = []\n\n self.warnCount = 0\n self.loggedWarnings = []\n\n self.addLogObserver(\n 'stdio',\n logobserver.LineConsumerLogObserver(self.warningLogConsumer))\n\n def addSuppression(self, suppressionList):\n \"\"\"\n This method can be used to add patters of warnings that should\n not be counted.\n\n It takes a single argument, a list of patterns.\n\n Each pattern is a 4-tuple (FILE-RE, WARN-RE, START, END).\n\n FILE-RE is a regular expression (string or compiled regexp), or None.\n If None, the pattern matches all files, else only files matching the\n regexp. If directoryEnterPattern is specified in the class constructor,\n matching is against the full path name, eg. src/main.c.\n\n WARN-RE is similarly a regular expression matched against the\n text of the warning, or None to match all warnings.\n\n START and END form an inclusive line number range to match against. 
If\n START is None, there is no lower bound, similarly if END is none there\n is no upper bound.\"\"\"\n\n for fileRe, warnRe, start, end in suppressionList:\n if fileRe is not None and isinstance(fileRe, str):\n fileRe = re.compile(fileRe)\n if warnRe is not None and isinstance(warnRe, str):\n warnRe = re.compile(warnRe)\n self.suppressions.append((fileRe, warnRe, start, end))\n\n def warnExtractWholeLine(self, line, match):\n \"\"\"\n Extract warning text as the whole line.\n No file names or line numbers.\"\"\"\n return (None, None, line)\n\n def warnExtractFromRegexpGroups(self, line, match):\n \"\"\"\n Extract file name, line number, and warning text as groups (1,2,3)\n of warningPattern match.\"\"\"\n file = match.group(1)\n lineNo = match.group(2)\n if lineNo is not None:\n lineNo = int(lineNo)\n text = match.group(3)\n return (file, lineNo, text)\n\n def warningLogConsumer(self):\n # Now compile a regular expression from whichever warning pattern we're\n # using\n wre = self.warningPattern\n if isinstance(wre, str):\n wre = re.compile(wre)\n\n directoryEnterRe = self.directoryEnterPattern\n if (directoryEnterRe is not None and\n isinstance(directoryEnterRe, str)):\n directoryEnterRe = re.compile(directoryEnterRe)\n\n directoryLeaveRe = self.directoryLeavePattern\n if (directoryLeaveRe is not None and\n isinstance(directoryLeaveRe, str)):\n directoryLeaveRe = re.compile(directoryLeaveRe)\n\n # Check if each line in the output from this command matched our\n # warnings regular expressions. If did, bump the warnings count and\n # add the line to the collection of lines with warnings\n self.loggedWarnings = []\n while True:\n stream, line = yield\n if directoryEnterRe:\n match = directoryEnterRe.search(line)\n if match:\n self.directoryStack.append(match.group(1))\n continue\n if (directoryLeaveRe and\n self.directoryStack and\n directoryLeaveRe.search(line)):\n self.directoryStack.pop()\n continue\n\n match = wre.match(line)\n if match:\n self.maybeAddWarning(self.loggedWarnings, line, match)\n\n def maybeAddWarning(self, warnings, line, match):\n if self.suppressions:\n (file, lineNo, text) = self.warningExtractor(self, line, match)\n lineNo = lineNo and int(lineNo)\n\n if file is not None and file != \"\" and self.directoryStack:\n currentDirectory = '/'.join(self.directoryStack)\n if currentDirectory is not None and currentDirectory != \"\":\n file = \"{}/{}\".format(currentDirectory, file)\n\n # Skip adding the warning if any suppression matches.\n for fileRe, warnRe, start, end in self.suppressions:\n if not (file is None or fileRe is None or fileRe.match(file)):\n continue\n if not (warnRe is None or warnRe.search(text)):\n continue\n if ((start is not None and end is not None) and\n not (lineNo is not None and start <= lineNo <= end)):\n continue\n return\n\n warnings.append(line)\n self.warnCount += 1\n\n @defer.inlineCallbacks\n def setup_suppression(self):\n if self.suppressionList is not None:\n self.addSuppression(self.suppressionList)\n\n if self.suppressionFile is not None:\n data = yield self.getFileContentFromWorker(self.suppressionFile, abandonOnFailure=True)\n lines = data.split(\"\\n\")\n\n list = []\n for line in lines:\n if self.commentEmptyLineRe.match(line):\n continue\n match = self.suppressionLineRe.match(line)\n if (match):\n file, test, start, end = match.groups()\n if (end is not None):\n end = int(end)\n if (start is not None):\n start = int(start)\n if end is None:\n end = start\n list.append((file, test, start, end))\n\n self.addSuppression(list)\n\n 
@defer.inlineCallbacks\n def run(self):\n yield self.setup_suppression()\n\n cmd = yield self.makeRemoteShellCommand()\n yield self.runCommand(cmd)\n\n yield self.finish_logs()\n yield self.createSummary()\n return self.evaluateCommand(cmd)\n\n @defer.inlineCallbacks\n def finish_logs(self):\n stdio_log = yield self.getLog('stdio')\n yield stdio_log.finish()\n\n @defer.inlineCallbacks\n def createSummary(self):\n \"\"\"\n Match log lines against warningPattern.\n\n Warnings are collected into another log for this step, and the\n build-wide 'warnings-count' is updated.\"\"\"\n\n # If there were any warnings, make the log if lines with warnings\n # available\n if self.warnCount:\n yield self.addCompleteLog(\"warnings (%d)\" % self.warnCount,\n \"\\n\".join(self.loggedWarnings) + \"\\n\")\n\n warnings_stat = self.getStatistic('warnings', 0)\n self.setStatistic('warnings', warnings_stat + self.warnCount)\n\n old_count = self.getProperty(\"warnings-count\", 0)\n self.setProperty(\n \"warnings-count\", old_count + self.warnCount, \"WarningCountingShellCommand\")\n\n def evaluateCommand(self, cmd):\n result = cmd.results()\n if (self.maxWarnCount is not None and self.warnCount > self.maxWarnCount):\n result = worst_status(result, FAILURE)\n elif self.warnCount:\n result = worst_status(result, WARNINGS)\n return result\n\n\nWarningCountingShellCommandNewStyle = WarningCountingShellCommand\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use WarningCountingShellCommand instead. This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"WarningCountingShellCommandNewStyle\",\n)\n\n\nclass Compile(WarningCountingShellCommand):\n\n name = \"compile\"\n haltOnFailure = 1\n flunkOnFailure = 1\n description = [\"compiling\"]\n descriptionDone = [\"compile\"]\n command = [\"make\", \"all\"]\n\n\nCompileNewStyle = Compile\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use Compile instead. 
This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"CompileNewStyle\",\n)\n\n\nclass Test(WarningCountingShellCommand):\n\n name = \"test\"\n warnOnFailure = 1\n description = [\"testing\"]\n descriptionDone = [\"test\"]\n command = [\"make\", \"test\"]\n\n def setTestResults(self, total=0, failed=0, passed=0, warnings=0):\n \"\"\"\n Called by subclasses to set the relevant statistics; this actually\n adds to any statistics already present\n \"\"\"\n total += self.getStatistic('tests-total', 0)\n self.setStatistic('tests-total', total)\n failed += self.getStatistic('tests-failed', 0)\n self.setStatistic('tests-failed', failed)\n warnings += self.getStatistic('tests-warnings', 0)\n self.setStatistic('tests-warnings', warnings)\n passed += self.getStatistic('tests-passed', 0)\n self.setStatistic('tests-passed', passed)\n\n def getResultSummary(self):\n description = []\n\n if self.hasStatistic('tests-total'):\n total = self.getStatistic(\"tests-total\", 0)\n failed = self.getStatistic(\"tests-failed\", 0)\n passed = self.getStatistic(\"tests-passed\", 0)\n warnings = self.getStatistic(\"tests-warnings\", 0)\n if not total:\n total = failed + passed + warnings\n\n if total:\n description += [str(total), 'tests']\n if passed:\n description += [str(passed), 'passed']\n if warnings:\n description += [str(warnings), 'warnings']\n if failed:\n description += [str(failed), 'failed']\n\n if description:\n summary = join_list(description)\n if self.results != SUCCESS:\n summary += ' ({})'.format(Results[self.results])\n return {'step': summary}\n\n return super().getResultSummary()\n\n\nTestNewStyle = Test\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use Test instead. This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"TestNewStyle\",\n)\n\n\nclass PerlModuleTestObserver(logobserver.LogLineObserver):\n\n def __init__(self, warningPattern):\n super().__init__()\n if warningPattern:\n self.warningPattern = re.compile(warningPattern)\n else:\n self.warningPattern = None\n self.rc = SUCCESS\n self.total = 0\n self.failed = 0\n self.warnings = 0\n self.newStyle = False\n self.complete = False\n\n failedRe = re.compile(r\"Tests: \\d+ Failed: (\\d+)\\)\")\n testsRe = re.compile(r\"Files=\\d+, Tests=(\\d+)\")\n oldFailureCountsRe = re.compile(r\"(\\d+)/(\\d+) subtests failed\")\n oldSuccessCountsRe = re.compile(r\"Files=\\d+, Tests=(\\d+),\")\n\n def outLineReceived(self, line):\n if self.warningPattern.match(line):\n self.warnings += 1\n if self.newStyle:\n if line.startswith('Result: FAIL'):\n self.rc = FAILURE\n mo = self.failedRe.search(line)\n if mo:\n self.failed += int(mo.group(1))\n if self.failed:\n self.rc = FAILURE\n mo = self.testsRe.search(line)\n if mo:\n self.total = int(mo.group(1))\n else:\n if line.startswith('Test Summary Report'):\n self.newStyle = True\n mo = self.oldFailureCountsRe.search(line)\n if mo:\n self.failed = int(mo.group(1))\n self.total = int(mo.group(2))\n self.rc = FAILURE\n mo = self.oldSuccessCountsRe.search(line)\n if mo:\n self.total = int(mo.group(1))\n\n\nclass PerlModuleTest(Test):\n command = [\"prove\", \"--lib\", \"lib\", \"-r\", \"t\"]\n total = 0\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.observer = PerlModuleTestObserver(\n warningPattern=self.warningPattern)\n self.addLogObserver('stdio', self.observer)\n\n def evaluateCommand(self, cmd):\n if self.observer.total:\n passed = self.observer.total - 
self.observer.failed\n\n self.setTestResults(\n total=self.observer.total,\n failed=self.observer.failed,\n passed=passed,\n warnings=self.observer.warnings)\n\n rc = self.observer.rc\n if rc == SUCCESS and self.observer.warnings:\n rc = WARNINGS\n return rc\n", "path": "master/buildbot/steps/shell.py" } ]
[ { "content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport re\n\nfrom twisted.internet import defer\nfrom twisted.python.deprecate import deprecatedModuleAttribute\nfrom twisted.python.versions import Version\n\nfrom buildbot import config\nfrom buildbot.process import buildstep\nfrom buildbot.process import logobserver\n# for existing configurations that import WithProperties from here. We like\n# to move this class around just to keep our readers guessing.\nfrom buildbot.process.properties import WithProperties\nfrom buildbot.process.results import FAILURE\nfrom buildbot.process.results import SUCCESS\nfrom buildbot.process.results import WARNINGS\nfrom buildbot.process.results import Results\nfrom buildbot.process.results import worst_status\nfrom buildbot.steps.worker import CompositeStepMixin\nfrom buildbot.util import join_list\n\n_hush_pyflakes = [\n WithProperties,\n]\ndel _hush_pyflakes\n\n\nclass TreeSize(buildstep.ShellMixin, buildstep.BuildStep):\n name = \"treesize\"\n command = [\"du\", \"-s\", \"-k\", \".\"]\n description = [\"measuring\", \"tree\", \"size\"]\n\n def __init__(self, **kwargs):\n kwargs = self.setupShellMixin(kwargs)\n super().__init__(**kwargs)\n self.observer = logobserver.BufferLogObserver(wantStdout=True,\n wantStderr=True)\n self.addLogObserver('stdio', self.observer)\n\n @defer.inlineCallbacks\n def run(self):\n cmd = yield self.makeRemoteShellCommand()\n\n yield self.runCommand(cmd)\n\n stdio_log = yield self.getLog('stdio')\n yield stdio_log.finish()\n\n out = self.observer.getStdout()\n m = re.search(r'^(\\d+)', out)\n\n kib = None\n if m:\n kib = int(m.group(1))\n self.setProperty(\"tree-size-KiB\", kib, \"treesize\")\n self.descriptionDone = \"treesize {} KiB\".format(kib)\n else:\n self.descriptionDone = \"treesize unknown\"\n\n if cmd.didFail():\n return FAILURE\n if kib is None:\n return WARNINGS # not sure how 'du' could fail, but whatever\n return SUCCESS\n\n\nclass SetPropertyFromCommand(buildstep.ShellMixin, buildstep.BuildStep):\n name = \"setproperty\"\n renderables = ['property']\n\n def __init__(self, property=None, extract_fn=None, strip=True,\n includeStdout=True, includeStderr=False, **kwargs):\n\n kwargs = self.setupShellMixin(kwargs)\n\n self.property = property\n self.extract_fn = extract_fn\n self.strip = strip\n self.includeStdout = includeStdout\n self.includeStderr = includeStderr\n\n if not ((property is not None) ^ (extract_fn is not None)):\n config.error(\n \"Exactly one of property and extract_fn must be set\")\n\n super().__init__(**kwargs)\n\n if self.extract_fn:\n self.includeStderr = True\n\n self.observer = logobserver.BufferLogObserver(\n wantStdout=self.includeStdout,\n wantStderr=self.includeStderr)\n self.addLogObserver('stdio', self.observer)\n\n @defer.inlineCallbacks\n def run(self):\n cmd = yield self.makeRemoteShellCommand()\n\n yield 
self.runCommand(cmd)\n\n stdio_log = yield self.getLog('stdio')\n yield stdio_log.finish()\n\n property_changes = {}\n\n if self.property:\n if cmd.didFail():\n return FAILURE\n result = self.observer.getStdout()\n if self.strip:\n result = result.strip()\n propname = self.property\n self.setProperty(propname, result, \"SetPropertyFromCommand Step\")\n property_changes[propname] = result\n else:\n new_props = self.extract_fn(cmd.rc,\n self.observer.getStdout(),\n self.observer.getStderr())\n for k, v in new_props.items():\n self.setProperty(k, v, \"SetPropertyFromCommand Step\")\n property_changes = new_props\n\n props_set = [\"{}: {}\".format(k, repr(v))\n for k, v in sorted(property_changes.items())]\n yield self.addCompleteLog('property changes', \"\\n\".join(props_set))\n\n if len(property_changes) > 1:\n self.descriptionDone = '{} properties set'.format(len(property_changes))\n elif len(property_changes) == 1:\n self.descriptionDone = 'property \\'{}\\' set'.format(list(property_changes)[0])\n if cmd.didFail():\n return FAILURE\n return SUCCESS\n\n\nSetPropertyFromCommandNewStyle = SetPropertyFromCommand\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use SetPropertyFromCommand instead. This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"SetPropertyFromCommandNewStyle\",\n)\n\n\nSetProperty = SetPropertyFromCommand\ndeprecatedModuleAttribute(Version(\"Buildbot\", 0, 8, 8),\n \"It has been renamed to SetPropertyFromCommand\",\n \"buildbot.steps.shell\", \"SetProperty\")\n\n\nclass ShellCommand(buildstep.ShellMixin, buildstep.BuildStep):\n name = 'shell'\n\n def __init__(self, **kwargs):\n\n if self.__class__ is ShellCommand:\n if 'command' not in kwargs:\n config.error(\"ShellCommand's `command' argument is not specified\")\n\n # check validity of arguments being passed to RemoteShellCommand\n valid_rsc_args = [\n 'command',\n 'env',\n 'want_stdout',\n 'want_stderr',\n 'timeout',\n 'maxTime',\n 'sigtermTime',\n 'logfiles',\n 'lazylogfiles',\n 'usePTY',\n 'logEnviron',\n 'collectStdout',\n 'collectStderr',\n 'interruptSignal',\n 'initialStdin',\n 'decodeRC',\n 'stdioLogName',\n 'workdir',\n ] + buildstep.BuildStep.parms\n\n invalid_args = []\n for arg in kwargs:\n if arg not in valid_rsc_args:\n invalid_args.append(arg)\n\n if invalid_args:\n config.error(\"Invalid argument(s) passed to ShellCommand: \" +\n ', '.join(invalid_args))\n\n kwargs = self.setupShellMixin(kwargs)\n super().__init__(**kwargs)\n\n @defer.inlineCallbacks\n def run(self):\n cmd = yield self.makeRemoteShellCommand()\n yield self.runCommand(cmd)\n return cmd.results()\n\n\nShellCommandNewStyle = ShellCommand\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use ShellCommand instead. This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"ShellCommandNewStyle\",\n)\n\n\nclass Configure(ShellCommand):\n name = \"configure\"\n haltOnFailure = 1\n flunkOnFailure = 1\n description = \"configuring\"\n descriptionDone = \"configure\"\n command = [\"./configure\"]\n\n\nConfigureNewStyle = Configure\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use Configure instead. 
This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"ConfigureNewStyle\",\n)\n\n\nclass WarningCountingShellCommand(buildstep.ShellMixin, CompositeStepMixin, buildstep.BuildStep):\n renderables = [\n 'suppressionFile',\n 'suppressionList',\n 'warningPattern',\n 'directoryEnterPattern',\n 'directoryLeavePattern',\n 'maxWarnCount',\n ]\n\n warnCount = 0\n warningPattern = '(?i).*warning[: ].*'\n # The defaults work for GNU Make.\n directoryEnterPattern = (\"make.*: Entering directory \"\n \"[\\u2019\\\"`'](.*)[\\u2019'`\\\"]\")\n directoryLeavePattern = \"make.*: Leaving directory\"\n suppressionFile = None\n\n commentEmptyLineRe = re.compile(r\"^\\s*(#.*)?$\")\n suppressionLineRe = re.compile(\n r\"^\\s*(.+?)\\s*:\\s*(.+?)\\s*(?:[:]\\s*([0-9]+)(?:-([0-9]+))?\\s*)?$\")\n\n def __init__(self,\n warningPattern=None, warningExtractor=None, maxWarnCount=None,\n directoryEnterPattern=None, directoryLeavePattern=None,\n suppressionFile=None, suppressionList=None, **kwargs):\n # See if we've been given a regular expression to use to match\n # warnings. If not, use a default that assumes any line with \"warning\"\n # present is a warning. This may lead to false positives in some cases.\n if warningPattern:\n self.warningPattern = warningPattern\n if directoryEnterPattern:\n self.directoryEnterPattern = directoryEnterPattern\n if directoryLeavePattern:\n self.directoryLeavePattern = directoryLeavePattern\n if suppressionFile:\n self.suppressionFile = suppressionFile\n # self.suppressions is already taken, so use something else\n self.suppressionList = suppressionList\n if warningExtractor:\n self.warningExtractor = warningExtractor\n else:\n self.warningExtractor = WarningCountingShellCommand.warnExtractWholeLine\n self.maxWarnCount = maxWarnCount\n\n if self.__class__ is WarningCountingShellCommand and not kwargs.get('command'):\n # WarningCountingShellCommand class is directly instantiated.\n # Explicitly check that command is set to prevent runtime error\n # later.\n config.error(\"WarningCountingShellCommand's 'command' argument is not specified\")\n\n kwargs = self.setupShellMixin(kwargs)\n super().__init__(**kwargs)\n\n self.suppressions = []\n self.directoryStack = []\n\n self.warnCount = 0\n self.loggedWarnings = []\n\n self.addLogObserver(\n 'stdio',\n logobserver.LineConsumerLogObserver(self.warningLogConsumer))\n\n def addSuppression(self, suppressionList):\n \"\"\"\n This method can be used to add patters of warnings that should\n not be counted.\n\n It takes a single argument, a list of patterns.\n\n Each pattern is a 4-tuple (FILE-RE, WARN-RE, START, END).\n\n FILE-RE is a regular expression (string or compiled regexp), or None.\n If None, the pattern matches all files, else only files matching the\n regexp. If directoryEnterPattern is specified in the class constructor,\n matching is against the full path name, eg. src/main.c.\n\n WARN-RE is similarly a regular expression matched against the\n text of the warning, or None to match all warnings.\n\n START and END form an inclusive line number range to match against. 
If\n START is None, there is no lower bound, similarly if END is none there\n is no upper bound.\"\"\"\n\n for fileRe, warnRe, start, end in suppressionList:\n if fileRe is not None and isinstance(fileRe, str):\n fileRe = re.compile(fileRe)\n if warnRe is not None and isinstance(warnRe, str):\n warnRe = re.compile(warnRe)\n self.suppressions.append((fileRe, warnRe, start, end))\n\n def warnExtractWholeLine(self, line, match):\n \"\"\"\n Extract warning text as the whole line.\n No file names or line numbers.\"\"\"\n return (None, None, line)\n\n def warnExtractFromRegexpGroups(self, line, match):\n \"\"\"\n Extract file name, line number, and warning text as groups (1,2,3)\n of warningPattern match.\"\"\"\n file = match.group(1)\n lineNo = match.group(2)\n if lineNo is not None:\n lineNo = int(lineNo)\n text = match.group(3)\n return (file, lineNo, text)\n\n def warningLogConsumer(self):\n # Now compile a regular expression from whichever warning pattern we're\n # using\n wre = self.warningPattern\n if isinstance(wre, str):\n wre = re.compile(wre)\n\n directoryEnterRe = self.directoryEnterPattern\n if (directoryEnterRe is not None and\n isinstance(directoryEnterRe, str)):\n directoryEnterRe = re.compile(directoryEnterRe)\n\n directoryLeaveRe = self.directoryLeavePattern\n if (directoryLeaveRe is not None and\n isinstance(directoryLeaveRe, str)):\n directoryLeaveRe = re.compile(directoryLeaveRe)\n\n # Check if each line in the output from this command matched our\n # warnings regular expressions. If did, bump the warnings count and\n # add the line to the collection of lines with warnings\n self.loggedWarnings = []\n while True:\n stream, line = yield\n if directoryEnterRe:\n match = directoryEnterRe.search(line)\n if match:\n self.directoryStack.append(match.group(1))\n continue\n if (directoryLeaveRe and\n self.directoryStack and\n directoryLeaveRe.search(line)):\n self.directoryStack.pop()\n continue\n\n match = wre.match(line)\n if match:\n self.maybeAddWarning(self.loggedWarnings, line, match)\n\n def maybeAddWarning(self, warnings, line, match):\n if self.suppressions:\n (file, lineNo, text) = self.warningExtractor(self, line, match)\n lineNo = lineNo and int(lineNo)\n\n if file is not None and file != \"\" and self.directoryStack:\n currentDirectory = '/'.join(self.directoryStack)\n if currentDirectory is not None and currentDirectory != \"\":\n file = \"{}/{}\".format(currentDirectory, file)\n\n # Skip adding the warning if any suppression matches.\n for fileRe, warnRe, start, end in self.suppressions:\n if not (file is None or fileRe is None or fileRe.match(file)):\n continue\n if not (warnRe is None or warnRe.search(text)):\n continue\n if ((start is not None and end is not None) and\n not (lineNo is not None and start <= lineNo <= end)):\n continue\n return\n\n warnings.append(line)\n self.warnCount += 1\n\n @defer.inlineCallbacks\n def setup_suppression(self):\n if self.suppressionList is not None:\n self.addSuppression(self.suppressionList)\n\n if self.suppressionFile is not None:\n data = yield self.getFileContentFromWorker(self.suppressionFile, abandonOnFailure=True)\n lines = data.split(\"\\n\")\n\n list = []\n for line in lines:\n if self.commentEmptyLineRe.match(line):\n continue\n match = self.suppressionLineRe.match(line)\n if (match):\n file, test, start, end = match.groups()\n if (end is not None):\n end = int(end)\n if (start is not None):\n start = int(start)\n if end is None:\n end = start\n list.append((file, test, start, end))\n\n self.addSuppression(list)\n\n 
@defer.inlineCallbacks\n def run(self):\n yield self.setup_suppression()\n\n cmd = yield self.makeRemoteShellCommand()\n yield self.runCommand(cmd)\n\n yield self.finish_logs()\n yield self.createSummary()\n return self.evaluateCommand(cmd)\n\n @defer.inlineCallbacks\n def finish_logs(self):\n stdio_log = yield self.getLog('stdio')\n yield stdio_log.finish()\n\n @defer.inlineCallbacks\n def createSummary(self):\n \"\"\"\n Match log lines against warningPattern.\n\n Warnings are collected into another log for this step, and the\n build-wide 'warnings-count' is updated.\"\"\"\n\n # If there were any warnings, make the log if lines with warnings\n # available\n if self.warnCount:\n yield self.addCompleteLog(\"warnings (%d)\" % self.warnCount,\n \"\\n\".join(self.loggedWarnings) + \"\\n\")\n\n warnings_stat = self.getStatistic('warnings', 0)\n self.setStatistic('warnings', warnings_stat + self.warnCount)\n\n old_count = self.getProperty(\"warnings-count\", 0)\n self.setProperty(\n \"warnings-count\", old_count + self.warnCount, \"WarningCountingShellCommand\")\n\n def evaluateCommand(self, cmd):\n result = cmd.results()\n if (self.maxWarnCount is not None and self.warnCount > self.maxWarnCount):\n result = worst_status(result, FAILURE)\n elif self.warnCount:\n result = worst_status(result, WARNINGS)\n return result\n\n\nWarningCountingShellCommandNewStyle = WarningCountingShellCommand\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use WarningCountingShellCommand instead. This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"WarningCountingShellCommandNewStyle\",\n)\n\n\nclass Compile(WarningCountingShellCommand):\n\n name = \"compile\"\n haltOnFailure = 1\n flunkOnFailure = 1\n description = [\"compiling\"]\n descriptionDone = [\"compile\"]\n command = [\"make\", \"all\"]\n\n\nCompileNewStyle = Compile\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use Compile instead. 
This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"CompileNewStyle\",\n)\n\n\nclass Test(WarningCountingShellCommand):\n\n name = \"test\"\n warnOnFailure = 1\n description = [\"testing\"]\n descriptionDone = [\"test\"]\n command = [\"make\", \"test\"]\n\n def setTestResults(self, total=0, failed=0, passed=0, warnings=0):\n \"\"\"\n Called by subclasses to set the relevant statistics; this actually\n adds to any statistics already present\n \"\"\"\n total += self.getStatistic('tests-total', 0)\n self.setStatistic('tests-total', total)\n failed += self.getStatistic('tests-failed', 0)\n self.setStatistic('tests-failed', failed)\n warnings += self.getStatistic('tests-warnings', 0)\n self.setStatistic('tests-warnings', warnings)\n passed += self.getStatistic('tests-passed', 0)\n self.setStatistic('tests-passed', passed)\n\n def getResultSummary(self):\n description = []\n\n if self.hasStatistic('tests-total'):\n total = self.getStatistic(\"tests-total\", 0)\n failed = self.getStatistic(\"tests-failed\", 0)\n passed = self.getStatistic(\"tests-passed\", 0)\n warnings = self.getStatistic(\"tests-warnings\", 0)\n if not total:\n total = failed + passed + warnings\n\n if total:\n description += [str(total), 'tests']\n if passed:\n description += [str(passed), 'passed']\n if warnings:\n description += [str(warnings), 'warnings']\n if failed:\n description += [str(failed), 'failed']\n\n if description:\n summary = join_list(description)\n if self.results != SUCCESS:\n summary += ' ({})'.format(Results[self.results])\n return {'step': summary}\n\n return super().getResultSummary()\n\n\nTestNewStyle = Test\ndeprecatedModuleAttribute(\n Version(\"buildbot\", 3, 0, 0),\n message=\"Use Test instead. This step will be removed in Buildbot 3.2.\",\n moduleName=\"buildbot.steps.shell\",\n name=\"TestNewStyle\",\n)\n\n\nclass PerlModuleTestObserver(logobserver.LogLineObserver):\n\n def __init__(self, warningPattern):\n super().__init__()\n if warningPattern:\n self.warningPattern = re.compile(warningPattern)\n else:\n self.warningPattern = None\n self.rc = SUCCESS\n self.total = 0\n self.failed = 0\n self.warnings = 0\n self.newStyle = False\n self.complete = False\n\n failedRe = re.compile(r\"Tests: \\d+ Failed: (\\d+)\\)\")\n testsRe = re.compile(r\"Files=\\d+, Tests=(\\d+)\")\n oldFailureCountsRe = re.compile(r\"(\\d+)/(\\d+) subtests failed\")\n oldSuccessCountsRe = re.compile(r\"Files=\\d+, Tests=(\\d+),\")\n\n def outLineReceived(self, line):\n if self.warningPattern.match(line):\n self.warnings += 1\n if self.newStyle:\n if line.startswith('Result: FAIL'):\n self.rc = FAILURE\n mo = self.failedRe.search(line)\n if mo:\n self.failed += int(mo.group(1))\n if self.failed:\n self.rc = FAILURE\n mo = self.testsRe.search(line)\n if mo:\n self.total = int(mo.group(1))\n else:\n if line.startswith('Test Summary Report'):\n self.newStyle = True\n mo = self.oldFailureCountsRe.search(line)\n if mo:\n self.failed = int(mo.group(1))\n self.total = int(mo.group(2))\n self.rc = FAILURE\n mo = self.oldSuccessCountsRe.search(line)\n if mo:\n self.total = int(mo.group(1))\n\n\nclass PerlModuleTest(Test):\n command = [\"prove\", \"--lib\", \"lib\", \"-r\", \"t\"]\n total = 0\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.observer = PerlModuleTestObserver(\n warningPattern=self.warningPattern)\n self.addLogObserver('stdio', self.observer)\n\n def evaluateCommand(self, cmd):\n if self.observer.total:\n passed = self.observer.total - 
self.observer.failed\n\n self.setTestResults(\n total=self.observer.total,\n failed=self.observer.failed,\n passed=passed,\n warnings=self.observer.warnings)\n\n rc = self.observer.rc\n if rc == SUCCESS and self.observer.warnings:\n rc = WARNINGS\n return rc\n", "path": "master/buildbot/steps/shell.py" } ]
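Comparing the two file dumps above, the substantive difference in `master/buildbot/steps/shell.py` is a single entry: `'lazylogfiles'` is added to the list of argument names that `ShellCommand.__init__` accepts before reporting "Invalid argument(s)". A condensed sketch of that validation pattern follows; it is heavily simplified and the real whitelist in the file above contains many more entries.

```python
# Simplified illustration of ShellCommand's argument whitelist check;
# the actual list in buildbot.steps.shell is much longer.
VALID_ARGS = {
    'command', 'env', 'timeout', 'logfiles',
    'lazylogfiles',   # the entry whose absence triggered the reported error
    'usePTY', 'logEnviron', 'decodeRC', 'workdir',
}

def check_shell_args(kwargs):
    invalid = [arg for arg in kwargs if arg not in VALID_ARGS]
    if invalid:
        raise ValueError("Invalid argument(s) passed to ShellCommand: " +
                         ', '.join(invalid))
```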
diff --git a/master/buildbot/newsfragments/shellcommand-lazylogfiles.bugfix b/master/buildbot/newsfragments/shellcommand-lazylogfiles.bugfix new file mode 100644 index 000000000000..d813bd65ddf1 --- /dev/null +++ b/master/buildbot/newsfragments/shellcommand-lazylogfiles.bugfix @@ -0,0 +1 @@ +Re-added support for ``lazylogfiles`` argument of ``ShellCommand`` that was available in old style steps. diff --git a/master/buildbot/steps/shell.py b/master/buildbot/steps/shell.py index d983d4daa4db..8099c030e635 100644 --- a/master/buildbot/steps/shell.py +++ b/master/buildbot/steps/shell.py @@ -182,6 +182,7 @@ def __init__(self, **kwargs): 'maxTime', 'sigtermTime', 'logfiles', + 'lazylogfiles', 'usePTY', 'logEnviron', 'collectStdout', diff --git a/master/buildbot/test/fake/remotecommand.py b/master/buildbot/test/fake/remotecommand.py index dd2487b8f405..152390e25990 100644 --- a/master/buildbot/test/fake/remotecommand.py +++ b/master/buildbot/test/fake/remotecommand.py @@ -21,7 +21,6 @@ from buildbot.process.results import CANCELLED from buildbot.process.results import FAILURE from buildbot.process.results import SUCCESS -from buildbot.test.fake import logfile class FakeRemoteCommand: @@ -75,12 +74,47 @@ def run(self, step, conn, builder_name): def useLog(self, log_, closeWhenFinished=False, logfileName=None): if not logfileName: logfileName = log_.getName() + assert logfileName not in self.logs + assert logfileName not in self.delayedLogs self.logs[logfileName] = log_ self._log_close_when_finished[logfileName] = closeWhenFinished def useLogDelayed(self, logfileName, activateCallBack, closeWhenFinished=False): + assert logfileName not in self.logs + assert logfileName not in self.delayedLogs self.delayedLogs[logfileName] = (activateCallBack, closeWhenFinished) + def addStdout(self, data): + if self.collectStdout: + self.stdout += data + if self.stdioLogName is not None and self.stdioLogName in self.logs: + self.logs[self.stdioLogName].addStdout(data) + + def addStderr(self, data): + if self.collectStderr: + self.stderr += data + if self.stdioLogName is not None and self.stdioLogName in self.logs: + self.logs[self.stdioLogName].addStderr(data) + + def addHeader(self, data): + if self.stdioLogName is not None and self.stdioLogName in self.logs: + self.logs[self.stdioLogName].addHeader(data) + + @defer.inlineCallbacks + def addToLog(self, logname, data): + # Activate delayed logs on first data. 
+ if logname in self.delayedLogs: + (activate_callback, close_when_finished) = self.delayedLogs[logname] + del self.delayedLogs[logname] + loog = yield activate_callback(self) + self.logs[logname] = loog + self._log_close_when_finished[logname] = close_when_finished + + if logname in self.logs: + self.logs[logname].addStdout(data) + else: + raise Exception("{}.addToLog: no such log {}".format(self, logname)) + def interrupt(self, why): if not self._waiting_for_interrupt: raise RuntimeError("Got interrupt, but FakeRemoteCommand was not expecting it") @@ -97,12 +131,6 @@ def results(self): def didFail(self): return self.results() == FAILURE - def fakeLogData(self, step, log, header='', stdout='', stderr=''): - # note that this should not be used in the same test as useLog(Delayed) - self.logs[log] = fakelog = logfile.FakeLogFile(log) - self._log_close_when_finished[log] = False - fakelog.fakeData(header=header, stdout=stdout, stderr=stderr) - def set_run_interrupt(self): self._waiting_for_interrupt = True @@ -128,6 +156,9 @@ def __init__(self, workdir, command, env=None, initial_stdin=initialStdin, timeout=timeout, maxTime=maxTime, logfiles=logfiles, usePTY=usePTY, logEnviron=logEnviron) + + if interruptSignal is not None and interruptSignal != 'KILL': + args['interruptSignal'] = interruptSignal super().__init__("shell", args, collectStdout=collectStdout, collectStderr=collectStderr, @@ -235,16 +266,21 @@ def runBehavior(self, behavior, args, command): command.updates.setdefault(args[0], []).append(args[1]) elif behavior == 'log': name, streams = args - if 'header' in streams: - command.logs[name].addHeader(streams['header']) - if 'stdout' in streams: - command.logs[name].addStdout(streams['stdout']) - if command.collectStdout: - command.stdout += streams['stdout'] - if 'stderr' in streams: - command.logs[name].addStderr(streams['stderr']) - if command.collectStderr: - command.stderr += streams['stderr'] + for stream in streams: + if stream not in ['header', 'stdout', 'stderr']: + raise Exception('Log stream {} is not recognized'.format(stream)) + + if name == command.stdioLogName: + if 'header' in streams: + command.addHeader(streams['header']) + if 'stdout' in streams: + command.addStdout(streams['stdout']) + if 'stderr' in streams: + command.addStderr(streams['stderr']) + else: + if 'header' in streams or 'stderr' in streams: + raise Exception('Non stdio streams only support stdout') + return command.addToLog(name, streams['stdout']) elif behavior == 'callable': return defer.maybeDeferred(lambda: args[0](command)) else: @@ -325,7 +361,7 @@ class ExpectShell(Expect): def __init__(self, workdir, command, env=None, want_stdout=1, want_stderr=1, initialStdin=None, timeout=20 * 60, maxTime=None, logfiles=None, - usePTY=None, logEnviron=True): + usePTY=None, logEnviron=True, interruptSignal=None): if env is None: env = {} if logfiles is None: @@ -335,6 +371,8 @@ def __init__(self, workdir, command, env=None, initial_stdin=initialStdin, timeout=timeout, maxTime=maxTime, logfiles=logfiles, usePTY=usePTY, logEnviron=logEnviron) + if interruptSignal is not None: + args['interruptSignal'] = interruptSignal super().__init__("shell", args) def __repr__(self): diff --git a/master/buildbot/test/unit/process/test_buildstep.py b/master/buildbot/test/unit/process/test_buildstep.py index 2836b4ca457d..578e9e3ca981 100644 --- a/master/buildbot/test/unit/process/test_buildstep.py +++ b/master/buildbot/test/unit/process/test_buildstep.py @@ -998,38 +998,17 @@ def test_glob_fail(self): return self.runStep() 
-class ShellMixinExample(buildstep.ShellMixin, buildstep.BuildStep): - # note that this is straight out of cls-buildsteps.rst - - def __init__(self, cleanupScript='./cleanup.sh', **kwargs): - self.cleanupScript = cleanupScript - kwargs = self.setupShellMixin(kwargs, prohibitArgs=['command']) - super().__init__(**kwargs) - - @defer.inlineCallbacks - def run(self): - cmd = yield self.makeRemoteShellCommand( - command=[self.cleanupScript]) - yield self.runCommand(cmd) - if cmd.didFail(): - cmd = yield self.makeRemoteShellCommand( - command=[self.cleanupScript, '--force'], - logEnviron=False) - yield self.runCommand(cmd) - return cmd.results() - - class SimpleShellCommand(buildstep.ShellMixin, buildstep.BuildStep): - def __init__(self, makeRemoteShellCommandKwargs=None, **kwargs): - self.makeRemoteShellCommandKwargs = makeRemoteShellCommandKwargs or {} + def __init__(self, make_cmd_kwargs=None, prohibit_args=None, **kwargs): + self.make_cmd_kwargs = make_cmd_kwargs or {} - kwargs = self.setupShellMixin(kwargs) + kwargs = self.setupShellMixin(kwargs, prohibitArgs=prohibit_args) super().__init__(**kwargs) @defer.inlineCallbacks def run(self): - cmd = yield self.makeRemoteShellCommand(**self.makeRemoteShellCommandKwargs) + cmd = yield self.makeRemoteShellCommand(**self.make_cmd_kwargs) yield self.runCommand(cmd) return cmd.results() @@ -1048,20 +1027,18 @@ def tearDown(self): return self.tearDownBuildStep() def test_setupShellMixin_bad_arg(self): - mixin = ShellMixinExample() - with self.assertRaisesConfigError( - "invalid ShellMixinExample argument invarg"): + mixin = SimpleShellCommand() + with self.assertRaisesConfigError("invalid SimpleShellCommand argument invarg"): mixin.setupShellMixin({'invarg': 13}) def test_setupShellMixin_prohibited_arg(self): - mixin = ShellMixinExample() - with self.assertRaisesConfigError( - "invalid ShellMixinExample argument logfiles"): + mixin = SimpleShellCommand() + with self.assertRaisesConfigError("invalid SimpleShellCommand argument logfiles"): mixin.setupShellMixin({'logfiles': None}, prohibitArgs=['logfiles']) def test_constructor_defaults(self): - class MySubclass(ShellMixinExample): + class MySubclass(SimpleShellCommand): timeout = 9999 # ShellMixin arg self.assertEqual(MySubclass().timeout, 9999) @@ -1075,121 +1052,180 @@ class MySubclass(ShellMixinExample): ['charming']) @defer.inlineCallbacks - def test_example(self): - self.setupStep(ShellMixinExample(), wantDefaultWorkdir=False) + def test_prohibit_args(self): + self.setupStep(SimpleShellCommand(prohibit_args=['command'], + make_cmd_kwargs={'command': ['cmd', 'arg']})) self.expectCommands( - ExpectShell(workdir='build', command=['./cleanup.sh']) + - Expect.log('stdio', stderr="didn't go so well\n") + - 1, - ExpectShell(workdir='build', command=['./cleanup.sh', '--force'], - logEnviron=False) + + ExpectShell(workdir='wkdir', command=['cmd', 'arg']) + 0, ) self.expectOutcome(result=SUCCESS) yield self.runStep() @defer.inlineCallbacks - def test_example_extra_logfile(self): - self.setupStep(ShellMixinExample( - logfiles={'cleanup': 'cleanup.log'}), wantDefaultWorkdir=False) + def test_no_default_workdir(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg']), wantDefaultWorkdir=False) self.expectCommands( - ExpectShell(workdir='build', command=['./cleanup.sh'], - logfiles={'cleanup': 'cleanup.log'}) + - Expect.log('cleanup', stdout='cleaning\ncleaned\n') + + ExpectShell(workdir='build', command=['cmd', 'arg']) + 0, ) self.expectOutcome(result=SUCCESS) yield self.runStep() - 
self.assertEqual(self.step.getLog('cleanup').stdout, - 'cleaning\ncleaned\n') @defer.inlineCallbacks - def test_example_build_workdir(self): - self.setupStep(ShellMixinExample(), wantDefaultWorkdir=False) + def test_build_workdir(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg']), wantDefaultWorkdir=False) self.build.workdir = '/alternate' self.expectCommands( - ExpectShell(workdir='/alternate', command=['./cleanup.sh']) + + ExpectShell(workdir='/alternate', command=['cmd', 'arg']) + 0, ) self.expectOutcome(result=SUCCESS) yield self.runStep() @defer.inlineCallbacks - def test_example_build_workdir_callable(self): - self.setupStep(ShellMixinExample(), wantDefaultWorkdir=False) + def test_build_workdir_callable(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg']), wantDefaultWorkdir=False) self.build.workdir = lambda x: '/alternate' self.expectCommands( - ExpectShell(workdir='/alternate', command=['./cleanup.sh']) + + ExpectShell(workdir='/alternate', command=['cmd', 'arg']) + 0, ) self.expectOutcome(result=SUCCESS) yield self.runStep() @defer.inlineCallbacks - def test_example_build_workdir_rendereable(self): - self.setupStep(ShellMixinExample(), wantDefaultWorkdir=False) + def test_build_workdir_callable_error(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg']), wantDefaultWorkdir=False) + self.build.workdir = lambda x: x.nosuchattribute # will raise AttributeError + self.expectException(buildstep.CallableAttributeError) + yield self.runStep() + + @defer.inlineCallbacks + def test_build_workdir_renderable(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg']), wantDefaultWorkdir=False) self.build.workdir = properties.Property("myproperty") self.properties.setProperty("myproperty", "/myproperty", "test") self.expectCommands( - ExpectShell(workdir='/myproperty', command=['./cleanup.sh']) + + ExpectShell(workdir='/myproperty', command=['cmd', 'arg']) + 0, ) self.expectOutcome(result=SUCCESS) yield self.runStep() @defer.inlineCallbacks - def test_example_build_workdir_callable_attribute_error(self): - self.setupStep(ShellMixinExample(), wantDefaultWorkdir=False) - self.build.workdir = lambda x: x.p # will raise AttributeError - self.expectException(buildstep.CallableAttributeError) + def test_step_workdir(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], workdir='/stepdir')) + self.build.workdir = '/builddir' + self.expectCommands( + ExpectShell(workdir='/stepdir', command=['cmd', 'arg']) + + 0, + ) + self.expectOutcome(result=SUCCESS) yield self.runStep() @defer.inlineCallbacks - def test_example_step_workdir(self): - self.setupStep(ShellMixinExample(workdir='/alternate')) - self.build.workdir = '/overridden' + def test_step_renderable_workdir(self): + @renderer + def rendered_workdir(_): + return '/stepdir' + + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], workdir=rendered_workdir)) + self.build.workdir = '/builddir' self.expectCommands( - ExpectShell(workdir='/alternate', command=['./cleanup.sh']) + + ExpectShell(workdir='/stepdir', command=['cmd', 'arg']) + 0, ) self.expectOutcome(result=SUCCESS) yield self.runStep() @defer.inlineCallbacks - def test_example_step_renderable_workdir(self): - @renderer - def rendered_workdir(_): - return '/alternate' + def test_step_workdir_overridden(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], workdir='/stepdir', + make_cmd_kwargs={'workdir': '/overridden'})) + self.build.workdir = '/builddir' + self.expectCommands( + 
ExpectShell(workdir='/overridden', command=['cmd', 'arg']) + + 0, + ) + self.expectOutcome(result=SUCCESS) + yield self.runStep() - self.setupStep(ShellMixinExample(workdir=rendered_workdir)) - self.build.workdir = '/overridden' + @defer.inlineCallbacks + def test_extra_logfile(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], + logfiles={'logname': 'logpath.log'})) self.expectCommands( - ExpectShell(workdir='/alternate', command=['./cleanup.sh']) + + ExpectShell(workdir='wkdir', command=['cmd', 'arg'], + logfiles={'logname': 'logpath.log'}) + + Expect.log('logname', stdout='logline\nlogline2\n') + + Expect.log('stdio', stdout="some log\n") + 0, ) self.expectOutcome(result=SUCCESS) yield self.runStep() + self.assertEqual(self.step.getLog('logname').stdout, + 'logline\nlogline2\n') @defer.inlineCallbacks - def test_example_override_workdir(self): - # Test that makeRemoteShellCommand(workdir=X) works. - self.setupStep(SimpleShellCommand( - makeRemoteShellCommandKwargs={'workdir': '/alternate'}, - command=['foo', properties.Property('bar', 'BAR')])) + def test_lazy_logfiles_stdout_has_stdout(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], lazylogfiles=True)) self.expectCommands( - ExpectShell(workdir='/alternate', command=['foo', 'BAR']) + + ExpectShell(workdir='wkdir', command=['cmd', 'arg']) + + Expect.log('stdio', stdout="some log\n") + 0, ) self.expectOutcome(result=SUCCESS) yield self.runStep() + self.assertEqual(self.step.getLog('stdio').stdout, 'some log\n') @defer.inlineCallbacks - def test_example_env(self): - self.setupStep( - ShellMixinExample(env={'BAR': 'BAR'}), wantDefaultWorkdir=False) + def test_lazy_logfiles_stdout_no_stdout(self): + # lazy log files do not apply to stdout + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], lazylogfiles=True)) + self.expectCommands( + ExpectShell(workdir='wkdir', command=['cmd', 'arg']) + + 0, + ) + self.expectOutcome(result=SUCCESS) + yield self.runStep() + self.assertEqual(self.step.getLog('stdio').stdout, '') + + @defer.inlineCallbacks + def test_lazy_logfiles_logfile(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], lazylogfiles=True, + logfiles={'logname': 'logpath.log'})) + self.expectCommands( + ExpectShell(workdir='wkdir', command=['cmd', 'arg'], + logfiles={'logname': 'logpath.log'}) + + Expect.log('logname', stdout='logline\nlogline2\n') + + 0, + ) + self.expectOutcome(result=SUCCESS) + yield self.runStep() + self.assertEqual(self.step.getLog('logname').stdout, + 'logline\nlogline2\n') + + @defer.inlineCallbacks + def test_lazy_logfiles_no_logfile(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], lazylogfiles=True, + logfiles={'logname': 'logpath.log'})) + self.expectCommands( + ExpectShell(workdir='wkdir', command=['cmd', 'arg'], + logfiles={'logname': 'logpath.log'}) + + 0, + ) + self.expectOutcome(result=SUCCESS) + yield self.runStep() + with self.assertRaises(KeyError): + self.step.getLog('logname') + + @defer.inlineCallbacks + def test_env(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], env={'BAR': 'BAR'})) self.build.builder.config.env = {'FOO': 'FOO'} self.expectCommands( - ExpectShell(workdir='build', command=['./cleanup.sh'], + ExpectShell(workdir='wkdir', command=['cmd', 'arg'], env={'FOO': 'FOO', 'BAR': 'BAR'}) + 0, ) @@ -1197,11 +1233,12 @@ def test_example_env(self): yield self.runStep() @defer.inlineCallbacks - def test_example_old_worker(self): - self.setupStep(ShellMixinExample(usePTY=False, interruptSignal='DIE'), - 
worker_version={'*': "1.1"}, wantDefaultWorkdir=False) + def test_old_worker_args(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], usePTY=False, + interruptSignal='DIE'), + worker_version={'*': "1.1"}) self.expectCommands( - ExpectShell(workdir='build', command=['./cleanup.sh']) + + ExpectShell(workdir='wkdir', command=['cmd', 'arg']) + # note missing parameters 0, ) @@ -1212,25 +1249,24 @@ def test_example_old_worker(self): 'NOTE: worker does not allow master to specify interruptSignal\n') @defer.inlineCallbacks - def test_example_new_worker(self): - self.setupStep(ShellMixinExample(usePTY=False, interruptSignal='DIE'), - worker_version={'*': "3.0"}, wantDefaultWorkdir=False) + def test_new_worker_args(self): + self.setupStep(SimpleShellCommand(command=['cmd', 'arg'], usePTY=False, + interruptSignal='DIE'), + worker_version={'*': "3.0"}) self.expectCommands( - ExpectShell(workdir='build', usePTY=False, command=['./cleanup.sh']) + - # note missing parameters + ExpectShell(workdir='wkdir', usePTY=False, interruptSignal='DIE', + command=['cmd', 'arg']) + 0, ) self.expectOutcome(result=SUCCESS) yield self.runStep() - self.assertEqual(self.step.getLog('stdio').header, - '') + self.assertEqual(self.step.getLog('stdio').header, '') @defer.inlineCallbacks def test_description(self): - self.setupStep(SimpleShellCommand( - command=['foo', properties.Property('bar', 'BAR')]), wantDefaultWorkdir=False) + self.setupStep(SimpleShellCommand(command=['foo', properties.Property('bar', 'BAR')])) self.expectCommands( - ExpectShell(workdir='build', command=['foo', 'BAR']) + + ExpectShell(workdir='wkdir', command=['foo', 'BAR']) + 0, ) self.expectOutcome(result=SUCCESS, state_string="'foo BAR'") diff --git a/master/buildbot/test/unit/steps/test_shell.py b/master/buildbot/test/unit/steps/test_shell.py index 7f84a9a265c2..9cf870b95fb4 100644 --- a/master/buildbot/test/unit/steps/test_shell.py +++ b/master/buildbot/test/unit/steps/test_shell.py @@ -228,7 +228,7 @@ def test_run_misparsed(self): self.expectCommands( ExpectShell(workdir='wkdir', command=['du', '-s', '-k', '.']) - + ExpectShell.log('stdio', stdio='abcdef\n') + + ExpectShell.log('stdio', stdout='abcdef\n') + 0 ) self.expectOutcome(result=WARNINGS, diff --git a/master/buildbot/test/unit/steps/test_shellsequence.py b/master/buildbot/test/unit/steps/test_shellsequence.py index f644256a5830..0463715ea2a5 100644 --- a/master/buildbot/test/unit/steps/test_shellsequence.py +++ b/master/buildbot/test/unit/steps/test_shellsequence.py @@ -21,7 +21,6 @@ from buildbot.process.results import SUCCESS from buildbot.process.results import WARNINGS from buildbot.steps import shellsequence -from buildbot.test.fake.remotecommand import Expect from buildbot.test.fake.remotecommand import ExpectShell from buildbot.test.util import config as configmixin from buildbot.test.util import steps @@ -76,14 +75,12 @@ def testShellArgInput(self): arg.validateAttributes() def testShellArgsAreRendered(self): - arg1 = shellsequence.ShellArg(command=WithProperties('make %s', 'project'), - logname=WithProperties('make %s', 'project')) + arg1 = shellsequence.ShellArg(command=WithProperties('make %s', 'project')) self.setupStep( shellsequence.ShellSequence(commands=[arg1], workdir='build')) self.properties.setProperty("project", "BUILDBOT-TEST", "TEST") - self.expectCommands(ExpectShell(workdir='build', command='make BUILDBOT-TEST') - + 0 + Expect.log('stdio make BUILDBOT-TEST')) + self.expectCommands(ExpectShell(workdir='build', command='make BUILDBOT-TEST') + 0) # 
TODO: need to factor command-summary stuff into a utility method and # use it here self.expectOutcome(result=SUCCESS, state_string="'make BUILDBOT-TEST'") @@ -114,13 +111,12 @@ def testSanityChecksAreDoneInRuntimeWhenDynamicCmdIsInvalidShellArg(self): def testMultipleCommandsAreRun(self): arg1 = shellsequence.ShellArg(command='make p1') - arg2 = shellsequence.ShellArg(command='deploy p1', logname='deploy') + arg2 = shellsequence.ShellArg(command='deploy p1') self.setupStep( shellsequence.ShellSequence(commands=[arg1, arg2], workdir='build')) self.expectCommands(ExpectShell(workdir='build', command='make p1') + 0, - ExpectShell(workdir='build', command='deploy p1') + 0 + - Expect.log('stdio deploy p1')) + ExpectShell(workdir='build', command='deploy p1') + 0) self.expectOutcome(result=SUCCESS, state_string="'deploy p1'") return self.runStep() @@ -166,16 +162,13 @@ def testShellArgsAreRenderedAnewAtEachBuild(self): This unit test makes sure that ShellArg instances are rendered anew at each new build. """ - arg = shellsequence.ShellArg(command=WithProperties('make %s', 'project'), - logname=WithProperties('make %s', 'project')) + arg = shellsequence.ShellArg(command=WithProperties('make %s', 'project')) step = shellsequence.ShellSequence(commands=[arg], workdir='build') # First "build" self.setupStep(step) self.properties.setProperty("project", "BUILDBOT-TEST-1", "TEST") - self.expectCommands(ExpectShell(workdir='build', - command='make BUILDBOT-TEST-1') + 0 + - Expect.log('stdio make BUILDBOT-TEST-1')) + self.expectCommands(ExpectShell(workdir='build', command='make BUILDBOT-TEST-1') + 0) self.expectOutcome(result=SUCCESS, state_string="'make BUILDBOT-TEST-1'") self.runStep() @@ -183,9 +176,7 @@ def testShellArgsAreRenderedAnewAtEachBuild(self): # Second "build" self.setupStep(step) self.properties.setProperty("project", "BUILDBOT-TEST-2", "TEST") - self.expectCommands(ExpectShell(workdir='build', - command='make BUILDBOT-TEST-2') + 0 + - Expect.log('stdio make BUILDBOT-TEST-2')) + self.expectCommands(ExpectShell(workdir='build', command='make BUILDBOT-TEST-2') + 0) self.expectOutcome(result=SUCCESS, state_string="'make BUILDBOT-TEST-2'") diff --git a/master/buildbot/test/unit/steps/test_source_git.py b/master/buildbot/test/unit/steps/test_source_git.py index 9be4fcb4a46c..8058e897b073 100644 --- a/master/buildbot/test/unit/steps/test_source_git.py +++ b/master/buildbot/test/unit/steps/test_source_git.py @@ -1674,7 +1674,7 @@ def test_mode_incremental_oldworker(self): mode='incremental', progress=True)) self.step.build.getWorkerCommandVersion = lambda cmd, oldversion: "2.15" self.expectCommands( - ExpectShell(workdir='wkdir', + ExpectShell(workdir='wkdir', interruptSignal='TERM', command=['git', '--version']) + ExpectShell.log('stdio', stdout='git version 1.7.5') @@ -1685,15 +1685,15 @@ def test_mode_incremental_oldworker(self): Expect('stat', dict(file='wkdir/.git', logEnviron=True)) + 0, - ExpectShell(workdir='wkdir', + ExpectShell(workdir='wkdir', interruptSignal='TERM', command=['git', 'fetch', '-f', '-t', 'http://github.com/buildbot/buildbot.git', 'HEAD', '--progress']) + 0, - ExpectShell(workdir='wkdir', + ExpectShell(workdir='wkdir', interruptSignal='TERM', command=['git', 'checkout', '-f', 'FETCH_HEAD']) + 0, - ExpectShell(workdir='wkdir', + ExpectShell(workdir='wkdir', interruptSignal='TERM', command=['git', 'rev-parse', 'HEAD']) + ExpectShell.log('stdio', stdout='f6ad368298bd941e934a41f3babc827b2aa95a1d') @@ -2675,7 +2675,7 @@ def 
test_mode_incremental_no_existing_repo_oldworker(self): mode='incremental')) self.step.build.getWorkerCommandVersion = lambda cmd, oldversion: "2.15" self.expectCommands( - ExpectShell(workdir='wkdir', + ExpectShell(workdir='wkdir', interruptSignal='TERM', command=['git', '--version']) + ExpectShell.log('stdio', stdout='git version 1.7.5') @@ -2686,12 +2686,12 @@ def test_mode_incremental_no_existing_repo_oldworker(self): Expect('stat', dict(file='wkdir/.git', logEnviron=True)) + 1, - ExpectShell(workdir='wkdir', + ExpectShell(workdir='wkdir', interruptSignal='TERM', command=['git', 'clone', 'http://github.com/buildbot/buildbot.git', '.', '--progress']) + 0, - ExpectShell(workdir='wkdir', + ExpectShell(workdir='wkdir', interruptSignal='TERM', command=['git', 'rev-parse', 'HEAD']) + ExpectShell.log('stdio', stdout='f6ad368298bd941e934a41f3babc827b2aa95a1d')
googleapis__python-bigquery-745
missing docs for enums: switch to module-based reference docs for enums module

There are several enums in https://github.com/googleapis/python-bigquery/blob/master/google/cloud/bigquery/enums.py which are undocumented. I recall we were doing class-based docs mostly because the "jobs" module was too huge to properly browse. It should be safe to use module-based docs for the smaller modules like enums, so we don't have to keep remembering to keep `reference.rst` in sync.
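As a quick sanity check of what module-based docs would cover, the public enum classes in that module can be listed directly. This is only an illustrative sketch: it assumes `google-cloud-bigquery` is installed locally, and the filter on `enum.Enum` subclasses is an assumption, since some names in `enums.py` may be plain classes of constants depending on the version.

```python
import enum
import inspect

from google.cloud.bigquery import enums as bq_enums

# Roughly the set a module-level automodule directive would document:
# public Enum subclasses defined in google.cloud.bigquery.enums itself.
for name, obj in inspect.getmembers(bq_enums, inspect.isclass):
    if obj.__module__ == bq_enums.__name__ and issubclass(obj, enum.Enum):
        print(name, [member.name for member in obj])
```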
[ { "content": "# -*- coding: utf-8 -*-\n# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# google-cloud-bigquery documentation build configuration file\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\nimport shlex\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(\"..\"))\n\n# For plugins that can not read conf.py.\n# See also: https://github.com/docascode/sphinx-docfx-yaml/issues/85\nsys.path.insert(0, os.path.abspath(\".\"))\n\n__version__ = \"\"\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = \"1.5.5\"\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n \"recommonmark\",\n]\n\n# autodoc/autosummary flags\nautoclass_content = \"both\"\nautodoc_default_options = {\"members\": True, \"inherited-members\": True}\nautosummary_generate = True\n\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = [\".rst\", \".md\"]\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"google-cloud-bigquery\"\ncopyright = \"2019, Google\"\nauthor = \"Google APIs\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n# The short X.Y version.\nversion = \".\".join(release.split(\".\")[0:2])\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\n \"_build\",\n \"samples/AUTHORING_GUIDE.md\",\n \"samples/CONTRIBUTING.md\",\n \"samples/snippets/README.rst\",\n \"bigquery_v2/services.rst\", # generated by the code generator\n]\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"alabaster\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"description\": \"Google Cloud Client Libraries for google-cloud-bigquery\",\n \"github_user\": \"googleapis\",\n \"github_repo\": \"python-bigquery\",\n \"github_banner\": True,\n \"font_family\": \"'Roboto', Georgia, sans\",\n \"head_font_family\": \"'Roboto', Georgia, serif\",\n \"code_font_family\": \"'Roboto Mono', 'Consolas', monospace\",\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n# html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"google-cloud-bigquery-doc\"\n\n# -- Options for warnings ------------------------------------------------------\n\n\nsuppress_warnings = [\n # Temporarily suppress this to avoid \"more than one target found for\n # cross-reference\" warning, which are intractable for us to avoid while in\n # a mono-repo.\n # See https://github.com/sphinx-doc/sphinx/blob\n # /2a65ffeef5c107c19084fabdd706cdff3f52d93c/sphinx/domains/python.py#L843\n \"ref.python\"\n]\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #'preamble': '',\n # Latex figure (float) alignment\n #'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"google-cloud-bigquery.tex\",\n \"google-cloud-bigquery Documentation\",\n author,\n \"manual\",\n )\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (\n master_doc,\n \"google-cloud-bigquery\",\n \"google-cloud-bigquery Documentation\",\n [author],\n 1,\n )\n]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"google-cloud-bigquery\",\n \"google-cloud-bigquery Documentation\",\n author,\n \"google-cloud-bigquery\",\n \"google-cloud-bigquery Library\",\n \"APIs\",\n )\n]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n# texinfo_no_detailmenu = False\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n \"python\": (\"https://python.readthedocs.org/en/latest/\", None),\n \"google-auth\": (\"https://googleapis.dev/python/google-auth/latest/\", None),\n \"google.api_core\": (\"https://googleapis.dev/python/google-api-core/latest/\", None,),\n \"grpc\": (\"https://grpc.github.io/grpc/python/\", None),\n \"proto-plus\": (\"https://proto-plus-python.readthedocs.io/en/latest/\", None),\n \"protobuf\": (\"https://googleapis.dev/python/protobuf/latest/\", None),\n}\n\n\n# Napoleon settings\nnapoleon_google_docstring = True\nnapoleon_numpy_docstring = True\nnapoleon_include_private_with_doc = False\nnapoleon_include_special_with_doc = True\nnapoleon_use_admonition_for_examples = False\nnapoleon_use_admonition_for_notes = False\nnapoleon_use_admonition_for_references = False\nnapoleon_use_ivar = False\nnapoleon_use_param = True\nnapoleon_use_rtype = True\n", "path": "docs/conf.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# google-cloud-bigquery documentation build configuration file\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\nimport shlex\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath(\"..\"))\n\n# For plugins that can not read conf.py.\n# See also: https://github.com/docascode/sphinx-docfx-yaml/issues/85\nsys.path.insert(0, os.path.abspath(\".\"))\n\n__version__ = \"\"\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = \"1.5.5\"\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.viewcode\",\n \"recommonmark\",\n]\n\n# autodoc/autosummary flags\nautoclass_content = \"both\"\nautodoc_default_options = {\"members\": True, \"inherited-members\": True}\nautosummary_generate = True\n\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = [\".rst\", \".md\"]\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"google-cloud-bigquery\"\ncopyright = \"2019, Google\"\nauthor = \"Google APIs\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n# The short X.Y version.\nversion = \".\".join(release.split(\".\")[0:2])\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\n \"_build\",\n \"**/.nox/**/*\",\n \"samples/AUTHORING_GUIDE.md\",\n \"samples/CONTRIBUTING.md\",\n \"samples/snippets/README.rst\",\n \"bigquery_v2/services.rst\", # generated by the code generator\n]\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"alabaster\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"description\": \"Google Cloud Client Libraries for google-cloud-bigquery\",\n \"github_user\": \"googleapis\",\n \"github_repo\": \"python-bigquery\",\n \"github_banner\": True,\n \"font_family\": \"'Roboto', Georgia, sans\",\n \"head_font_family\": \"'Roboto', Georgia, serif\",\n \"code_font_family\": \"'Roboto Mono', 'Consolas', monospace\",\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n# html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"google-cloud-bigquery-doc\"\n\n# -- Options for warnings ------------------------------------------------------\n\n\nsuppress_warnings = [\n # Temporarily suppress this to avoid \"more than one target found for\n # cross-reference\" warning, which are intractable for us to avoid while in\n # a mono-repo.\n # See https://github.com/sphinx-doc/sphinx/blob\n # /2a65ffeef5c107c19084fabdd706cdff3f52d93c/sphinx/domains/python.py#L843\n \"ref.python\"\n]\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #'preamble': '',\n # Latex figure (float) alignment\n #'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"google-cloud-bigquery.tex\",\n \"google-cloud-bigquery Documentation\",\n author,\n \"manual\",\n )\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (\n master_doc,\n \"google-cloud-bigquery\",\n \"google-cloud-bigquery Documentation\",\n [author],\n 1,\n )\n]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"google-cloud-bigquery\",\n \"google-cloud-bigquery Documentation\",\n author,\n \"google-cloud-bigquery\",\n \"google-cloud-bigquery Library\",\n \"APIs\",\n )\n]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n# texinfo_no_detailmenu = False\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\n \"python\": (\"https://python.readthedocs.org/en/latest/\", None),\n \"google-auth\": (\"https://googleapis.dev/python/google-auth/latest/\", None),\n \"google.api_core\": (\"https://googleapis.dev/python/google-api-core/latest/\", None,),\n \"grpc\": (\"https://grpc.github.io/grpc/python/\", None),\n \"proto-plus\": (\"https://proto-plus-python.readthedocs.io/en/latest/\", None),\n \"protobuf\": (\"https://googleapis.dev/python/protobuf/latest/\", None),\n}\n\n\n# Napoleon settings\nnapoleon_google_docstring = True\nnapoleon_numpy_docstring = True\nnapoleon_include_private_with_doc = False\nnapoleon_include_special_with_doc = True\nnapoleon_use_admonition_for_examples = False\nnapoleon_use_admonition_for_notes = False\nnapoleon_use_admonition_for_references = False\nnapoleon_use_ivar = False\nnapoleon_use_param = True\nnapoleon_use_rtype = True\n", "path": "docs/conf.py" } ]
diff --git a/docs/conf.py b/docs/conf.py index cb347160d..09f7ea414 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -110,6 +110,7 @@ # directories to ignore when looking for source files. exclude_patterns = [ "_build", + "**/.nox/**/*", "samples/AUTHORING_GUIDE.md", "samples/CONTRIBUTING.md", "samples/snippets/README.rst", diff --git a/docs/enums.rst b/docs/enums.rst new file mode 100644 index 000000000..57608968a --- /dev/null +++ b/docs/enums.rst @@ -0,0 +1,6 @@ +BigQuery Enums +============== + +.. automodule:: google.cloud.bigquery.enums + :members: + :undoc-members: diff --git a/docs/reference.rst b/docs/reference.rst index 52d916f96..694379cd2 100644 --- a/docs/reference.rst +++ b/docs/reference.rst @@ -173,10 +173,11 @@ Magics Enums ===== -.. autosummary:: - :toctree: generated +.. toctree:: + :maxdepth: 2 + + enums - enums.StandardSqlDataTypes Encryption Configuration ========================
avocado-framework__avocado-4585
Empty distro file with `avocado distro` When running `avocado distro` to generate a definition file as indicated in the manpage, there is a problem and the resulting distro file is empty. ``` $ avocado distro --distro-def-create --distro-def-name avocadix --distro-def-version 1 --distro-def-arch x86_64 --distro-def-type rpm --distro-def-path /mnt/dvd Loading distro information from tree... Please wait... Avocado crashed unexpectedly: a bytes-like object is required, not 'str' You can find details in /home/anguerre/avocado/data/crashes/avocado-traceback-2021-05-12_22:29:25-3_t5qeh8.log ``` ``` $ cat /home/anguerre/avocado/data/crashes/avocado-traceback-2021-05-12_22:29:25-3_t5qeh8.log Avocado crashed: Traceback (most recent call last): File "/usr/bin/avocado", line 11, in <module> load_entry_point('avocado-framework==85.0', 'console_scripts', 'avocado')() File "/usr/lib/python3.6/site-packages/avocado/core/main.py", line 76, in main return app.run() File "/usr/lib/python3.6/site-packages/avocado/core/app.py", line 112, in run return method(self.parser.config) File "/usr/lib/python3.6/site-packages/avocado/plugins/distro.py", line 403, in run save_distro(distro, output_file_name) File "/usr/lib/python3.6/site-packages/avocado/plugins/distro.py", line 237, in save_distro output.write(bz2.compress(linux_distro.to_json())) File "/usr/lib64/python3.6/bz2.py", line 338, in compress return comp.compress(data) + comp.flush() TypeError: a bytes-like object is required, not 'str' ``` And the file `avocadix-1-x86_64.distro` is created empty.
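The traceback points at `save_distro()`: `bz2.compress()` only accepts bytes, while `DistroDef.to_json()` returns a `str`, and the output file is opened in text mode. Below is a minimal sketch of the failure and the usual fix (encode the JSON before compressing and write in binary mode); the literal values are only illustrative, not taken from the plugin:

```python
import bz2
import json

payload = json.dumps({"name": "avocadix", "version": "1", "arch": "x86_64"})

# Reproduces the crash: bz2.compress() requires a bytes-like object.
#   bz2.compress(payload)
#   TypeError: a bytes-like object is required, not 'str'

# Fix: encode the JSON text first, then write the compressed bytes in binary mode.
with open("avocadix-1-x86_64.distro", "wb") as output:
    output.write(bz2.compress(payload.encode("utf-8")))

# Reading it back mirrors load_distro(): binary read, decompress, json.loads.
with open("avocadix-1-x86_64.distro", "rb") as distro_file:
    data = json.loads(bz2.decompress(distro_file.read()))
assert data["name"] == "avocadix"
```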
[ { "content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: Red Hat Inc. 2015\n# Author: Cleber Rosa <[email protected]>\n\nimport bz2\nimport json\nimport os\nimport sys\n\nfrom avocado.core import exit_codes\nfrom avocado.core.output import LOG_UI\nfrom avocado.core.plugin_interfaces import CLICmd\nfrom avocado.core.settings import settings\nfrom avocado.utils import distro as utils_distro\nfrom avocado.utils import path as utils_path\nfrom avocado.utils import process\n\n\nclass SoftwarePackage:\n\n \"\"\"\n Definition of relevant information on a software package\n \"\"\"\n\n def __init__(self, name, version, release, checksum, arch):\n self.name = name\n self.version = version\n self.release = release\n self.checksum = checksum\n self.arch = arch\n\n def to_dict(self):\n \"\"\"\n Returns the representation as a dictionary\n \"\"\"\n return {'name': self.name,\n 'version': self.version,\n 'release': self.release,\n 'checksum': self.checksum,\n 'arch': self.arch}\n\n def to_json(self):\n \"\"\"\n Returns the representation of the distro as JSON\n \"\"\"\n return json.dumps(self.to_dict())\n\n\nclass DistroDef(utils_distro.LinuxDistro):\n\n \"\"\"\n More complete information on a given Linux Distribution\n\n Can and should include all the software packages that ship with the distro,\n so that an analysis can be made on whether a given package that may be\n responsible for a regression is part of the official set or an external\n package.\n \"\"\"\n\n def __init__(self, name, version, release, arch):\n super(DistroDef, self).__init__(name, version, release, arch)\n\n #: All the software packages that ship with this Linux distro\n self.software_packages = []\n\n #: A simple text that denotes the software type that makes this distro\n self.software_packages_type = 'unknown'\n\n def to_dict(self):\n \"\"\"\n Returns the representation as a dictionary\n \"\"\"\n d = {'name': self.name,\n 'version': self.version,\n 'release': self.release,\n 'arch': self.arch,\n 'software_packages_type': self.software_packages_type,\n 'software_packages': []}\n\n for package in self.software_packages:\n d['software_packages'].append(package.to_dict())\n\n return d\n\n def to_json(self):\n \"\"\"\n Returns the representation of the distro as JSON\n \"\"\"\n return json.dumps(self.to_dict())\n\n\nclass DistroPkgInfoLoader:\n\n \"\"\"\n Loads information from the distro installation tree into a DistroDef\n\n It will go through all package files and inspect them with specific\n package utilities, collecting the necessary information.\n \"\"\"\n\n def __init__(self, path):\n self.path = path\n\n def get_packages_info(self):\n \"\"\"\n This method will go through each file, checking if it's a valid\n software package file by calling :meth:`is_software_package` and\n calling :meth:`load_package_info` if it's so.\n \"\"\"\n packages_info = set()\n for dirpath, _, filenames in os.walk(self.path):\n for filename in filenames:\n path = os.path.join(dirpath, filename)\n if self.is_software_package(path):\n packages_info.add(self.get_package_info(path))\n\n # because we do not 
track of locations or how many copies of a given\n # package file exists in the installation tree, packages should be\n # comprised of unique entries\n return list(packages_info)\n\n def is_software_package(self, path):\n \"\"\"\n Determines if the given file at `path` is a software package\n\n This check will be used to determine if :meth:`load_package_info`\n will be called for file at `path`. This method should be\n implemented by classes inheriting from :class:`DistroPkgInfoLoader` and\n could be as simple as checking for a file suffix.\n\n :param path: path to the software package file\n :type path: str\n :return: either True if the file is a valid software package or False\n otherwise\n :rtype: bool\n \"\"\"\n raise NotImplementedError\n\n def get_package_info(self, path):\n \"\"\"\n Returns information about a given software package\n\n Should be implemented by classes inheriting from\n :class:`DistroDefinitionLoader`.\n\n :param path: path to the software package file\n :type path: str\n :returns: tuple with name, version, release, checksum and arch\n :rtype: tuple\n \"\"\"\n raise NotImplementedError\n\n\nclass DistroPkgInfoLoaderRpm(DistroPkgInfoLoader):\n\n \"\"\"\n Loads package information for RPM files\n \"\"\"\n\n def __init__(self, path):\n super(DistroPkgInfoLoaderRpm, self).__init__(path)\n try:\n utils_path.find_command('rpm')\n self.capable = True\n except utils_path.CmdNotFoundError:\n self.capable = False\n\n def is_software_package(self, path):\n \"\"\"\n Systems needs to be able to run the rpm binary in order to fetch\n information on package files. If the rpm binary is not available\n on this system, we simply ignore the rpm files found\n \"\"\"\n return self.capable and path.endswith('.rpm')\n\n def get_package_info(self, path):\n cmd = \"rpm -qp --qf '%{NAME} %{VERSION} %{RELEASE} %{SIGMD5} %{ARCH}' \"\n cmd += path\n info = process.system_output(cmd, ignore_status=True)\n info = tuple(info.split(' '))\n return info\n\n\nclass DistroPkgInfoLoaderDeb(DistroPkgInfoLoader):\n\n \"\"\"\n Loads package information for DEB files\n \"\"\"\n\n def __init__(self, path):\n super(DistroPkgInfoLoaderDeb, self).__init__(path)\n try:\n utils_path.find_command('dpkg-deb')\n self.capable = True\n except utils_path.CmdNotFoundError:\n self.capable = False\n\n def is_software_package(self, path):\n return self.capable and (path.endswith('.deb') or\n path.endswith('.udeb'))\n\n def get_package_info(self, path):\n cmd = (\"dpkg-deb --showformat '${Package} ${Version} ${Architecture}' \"\n \"--show \")\n cmd += path\n info = process.system_output(cmd, ignore_status=True)\n name, version, arch = info.split(' ')\n return (name, version, '', '', arch)\n\n\n#: the type of distro that will determine what loader will be used\nDISTRO_PKG_INFO_LOADERS = {'rpm': DistroPkgInfoLoaderRpm,\n 'deb': DistroPkgInfoLoaderDeb}\n\n\ndef save_distro(linux_distro, path):\n \"\"\"\n Saves the linux_distro to an external file format\n\n :param linux_distro: an :class:`DistroDef` instance\n :type linux_distro: DistroDef\n :param path: the location for the output file\n :type path: str\n :return: None\n \"\"\"\n with open(path, 'w') as output:\n output.write(bz2.compress(linux_distro.to_json()))\n\n\ndef load_distro(path):\n \"\"\"\n Loads the distro from an external file\n\n :param path: the location for the input file\n :type path: str\n :return: a dict with the distro definition data\n :rtype: dict\n \"\"\"\n with open(path, 'rb') as distro_file:\n json_data = 
json.loads(bz2.decompress(distro_file.read()))\n return json_data\n\n\ndef load_from_tree(name, version, release, arch, package_type, path):\n \"\"\"\n Loads a DistroDef from an installable tree\n\n :param name: a short name that precisely distinguishes this Linux\n Distribution among all others.\n :type name: str\n :param version: the major version of the distribution. Usually this\n is a single number that denotes a large development\n cycle and support file.\n :type version: str\n :param release: the release or minor version of the distribution.\n Usually this is also a single number, that is often\n omitted or starts with a 0 when the major version\n is initially release. It's often associated with a\n shorter development cycle that contains incremental\n a collection of improvements and fixes.\n :type release: str\n :param arch: the main target for this Linux Distribution. It's common\n for some architectures to ship with packages for\n previous and still compatible architectures, such as it's\n the case with Intel/AMD 64 bit architecture that support\n 32 bit code. In cases like this, this should be set to\n the 64 bit architecture name.\n :type arch: str\n :param package_type: one of the available package info loader types\n :type package_type: str\n :param path: top level directory of the distro installation tree files\n :type path: str\n \"\"\"\n distro_def = DistroDef(name, version, release, arch)\n\n loader_class = DISTRO_PKG_INFO_LOADERS.get(package_type, None)\n if loader_class is not None:\n loader = loader_class(path)\n distro_def.software_packages = [SoftwarePackage(*args)\n for args in loader.get_packages_info()]\n distro_def.software_packages_type = package_type\n return distro_def\n\n\nclass Distro(CLICmd):\n\n \"\"\"\n Implements the avocado 'distro' subcommand\n \"\"\"\n\n name = 'distro'\n description = 'Shows detected Linux distribution'\n\n def configure(self, parser):\n parser = super(Distro, self).configure(parser)\n\n help_msg = 'Cretes a distro definition file based on the path given.'\n settings.register_option(section='distro',\n key='distro_def_create',\n default=False,\n help_msg=help_msg,\n key_type=bool,\n parser=parser,\n long_arg='--distro-def-create')\n\n help_msg = 'Distribution short name'\n settings.register_option(section='distro',\n key='distro_def_name',\n default='',\n help_msg=help_msg,\n parser=parser,\n long_arg='--distro-def-name')\n\n help_msg = 'Distribution major version name'\n settings.register_option(section='distro',\n key='distro_def_version',\n default='',\n help_msg=help_msg,\n parser=parser,\n long_arg='--distro-def-version')\n\n help_msg = 'Distribution release version number'\n settings.register_option(section='distro',\n key='distro_def_release',\n default='',\n help_msg=help_msg,\n parser=parser,\n long_arg='--distro-def-release')\n\n help_msg = 'Primary architecture that the distro targets'\n settings.register_option(section='distro',\n key='distro_def_arch',\n default='',\n help_msg=help_msg,\n parser=parser,\n long_arg='--distro-def-arch')\n\n help_msg = 'Top level directory of the distro installation files'\n settings.register_option(section='distro',\n key='distro_def_path',\n default='',\n help_msg=help_msg,\n parser=parser,\n long_arg='--distro-def-path')\n\n type_choices = tuple(DISTRO_PKG_INFO_LOADERS.keys())\n type_choices_hlp = ', '.join(type_choices)\n help_msg = 'Distro type (one of: %s)' % type_choices_hlp\n settings.register_option(section='distro',\n key='distro_def_type',\n default='',\n help_msg=help_msg,\n 
choices=type_choices,\n parser=parser,\n long_arg='--distro-def-type')\n\n @staticmethod\n def _get_output_file_name(name, version, arch, release=None):\n \"\"\"\n Adapt the output file name based on given args\n\n It's not uncommon for some distros to not have a release number, so\n adapt the output file name to that\n \"\"\"\n if release:\n return '%s-%s.%s-%s.distro' % (name, version, release, arch)\n else:\n return '%s-%s-%s.distro' % (name, version, arch)\n\n def run(self, config):\n name = config.get('distro.distro_def_name')\n version = config.get('distro.distro_def_version')\n release = config.get('distro.distro_def_release')\n arch = config.get('distro.distro_def_arch')\n distro_type = config.get('distro.distro_def_type')\n path = config.get('distro.distro_def_path')\n if config.get('distro.distro_def_create'):\n if not (name and version and arch and distro_type and path):\n LOG_UI.error('Required arguments: name, version, arch, type '\n 'and path')\n sys.exit(exit_codes.AVOCADO_FAIL)\n\n output_file_name = self._get_output_file_name(name, version,\n arch, release)\n if os.path.exists(output_file_name):\n error_msg = ('Output file \"%s\" already exists, will not '\n 'overwrite it', output_file_name)\n LOG_UI.error(error_msg)\n else:\n LOG_UI.debug(\"Loading distro information from tree... \"\n \"Please wait...\")\n distro = load_from_tree(name, version, release, arch,\n distro_type, path)\n save_distro(distro, output_file_name)\n LOG_UI.debug('Distro information saved to \"%s\"',\n output_file_name)\n else:\n detected = utils_distro.detect()\n LOG_UI.debug('Detected distribution: %s (%s) version %s release '\n '%s', detected.name, detected.arch, detected.version,\n detected.release)\n", "path": "avocado/plugins/distro.py" } ]
[ { "content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: Red Hat Inc. 2015\n# Author: Cleber Rosa <[email protected]>\n\nimport bz2\nimport json\nimport os\nimport sys\n\nfrom avocado.core import exit_codes\nfrom avocado.core.output import LOG_UI\nfrom avocado.core.plugin_interfaces import CLICmd\nfrom avocado.core.settings import settings\nfrom avocado.utils import distro as utils_distro\nfrom avocado.utils import path as utils_path\nfrom avocado.utils import process\n\n\nclass SoftwarePackage:\n\n \"\"\"\n Definition of relevant information on a software package\n \"\"\"\n\n def __init__(self, name, version, release, checksum, arch):\n self.name = name\n self.version = version\n self.release = release\n self.checksum = checksum\n self.arch = arch\n\n def to_dict(self):\n \"\"\"\n Returns the representation as a dictionary\n \"\"\"\n return {'name': self.name,\n 'version': self.version,\n 'release': self.release,\n 'checksum': self.checksum,\n 'arch': self.arch}\n\n def to_json(self):\n \"\"\"\n Returns the representation of the distro as JSON\n \"\"\"\n return json.dumps(self.to_dict())\n\n\nclass DistroDef(utils_distro.LinuxDistro):\n\n \"\"\"\n More complete information on a given Linux Distribution\n\n Can and should include all the software packages that ship with the distro,\n so that an analysis can be made on whether a given package that may be\n responsible for a regression is part of the official set or an external\n package.\n \"\"\"\n\n def __init__(self, name, version, release, arch):\n super(DistroDef, self).__init__(name, version, release, arch)\n\n #: All the software packages that ship with this Linux distro\n self.software_packages = []\n\n #: A simple text that denotes the software type that makes this distro\n self.software_packages_type = 'unknown'\n\n def to_dict(self):\n \"\"\"\n Returns the representation as a dictionary\n \"\"\"\n d = {'name': self.name,\n 'version': self.version,\n 'release': self.release,\n 'arch': self.arch,\n 'software_packages_type': self.software_packages_type,\n 'software_packages': []}\n\n for package in self.software_packages:\n d['software_packages'].append(package.to_dict())\n\n return d\n\n def to_json(self):\n \"\"\"\n Returns the representation of the distro as JSON\n \"\"\"\n return json.dumps(self.to_dict())\n\n\nclass DistroPkgInfoLoader:\n\n \"\"\"\n Loads information from the distro installation tree into a DistroDef\n\n It will go through all package files and inspect them with specific\n package utilities, collecting the necessary information.\n \"\"\"\n\n def __init__(self, path):\n self.path = path\n\n def get_packages_info(self):\n \"\"\"\n This method will go through each file, checking if it's a valid\n software package file by calling :meth:`is_software_package` and\n calling :meth:`load_package_info` if it's so.\n \"\"\"\n packages_info = set()\n for dirpath, _, filenames in os.walk(self.path):\n for filename in filenames:\n path = os.path.join(dirpath, filename)\n if self.is_software_package(path):\n packages_info.add(self.get_package_info(path))\n\n # because we do not 
track of locations or how many copies of a given\n # package file exists in the installation tree, packages should be\n # comprised of unique entries\n return list(packages_info)\n\n def is_software_package(self, path):\n \"\"\"\n Determines if the given file at `path` is a software package\n\n This check will be used to determine if :meth:`load_package_info`\n will be called for file at `path`. This method should be\n implemented by classes inheriting from :class:`DistroPkgInfoLoader` and\n could be as simple as checking for a file suffix.\n\n :param path: path to the software package file\n :type path: str\n :return: either True if the file is a valid software package or False\n otherwise\n :rtype: bool\n \"\"\"\n raise NotImplementedError\n\n def get_package_info(self, path):\n \"\"\"\n Returns information about a given software package\n\n Should be implemented by classes inheriting from\n :class:`DistroDefinitionLoader`.\n\n :param path: path to the software package file\n :type path: str\n :returns: tuple with name, version, release, checksum and arch\n :rtype: tuple\n \"\"\"\n raise NotImplementedError\n\n\nclass DistroPkgInfoLoaderRpm(DistroPkgInfoLoader):\n\n \"\"\"\n Loads package information for RPM files\n \"\"\"\n\n def __init__(self, path):\n super(DistroPkgInfoLoaderRpm, self).__init__(path)\n try:\n utils_path.find_command('rpm')\n self.capable = True\n except utils_path.CmdNotFoundError:\n self.capable = False\n\n def is_software_package(self, path):\n \"\"\"\n Systems needs to be able to run the rpm binary in order to fetch\n information on package files. If the rpm binary is not available\n on this system, we simply ignore the rpm files found\n \"\"\"\n return self.capable and path.endswith('.rpm')\n\n def get_package_info(self, path):\n cmd = \"rpm -qp --qf '%{NAME} %{VERSION} %{RELEASE} %{SIGMD5} %{ARCH}' \"\n cmd += path\n info = process.system_output(cmd, ignore_status=True)\n info = tuple(info.split(' '))\n return info\n\n\nclass DistroPkgInfoLoaderDeb(DistroPkgInfoLoader):\n\n \"\"\"\n Loads package information for DEB files\n \"\"\"\n\n def __init__(self, path):\n super(DistroPkgInfoLoaderDeb, self).__init__(path)\n try:\n utils_path.find_command('dpkg-deb')\n self.capable = True\n except utils_path.CmdNotFoundError:\n self.capable = False\n\n def is_software_package(self, path):\n return self.capable and (path.endswith('.deb') or\n path.endswith('.udeb'))\n\n def get_package_info(self, path):\n cmd = (\"dpkg-deb --showformat '${Package} ${Version} ${Architecture}' \"\n \"--show \")\n cmd += path\n info = process.system_output(cmd, ignore_status=True)\n name, version, arch = info.split(' ')\n return (name, version, '', '', arch)\n\n\n#: the type of distro that will determine what loader will be used\nDISTRO_PKG_INFO_LOADERS = {'rpm': DistroPkgInfoLoaderRpm,\n 'deb': DistroPkgInfoLoaderDeb}\n\n\ndef save_distro(linux_distro, path):\n \"\"\"\n Saves the linux_distro to an external file format\n\n :param linux_distro: an :class:`DistroDef` instance\n :type linux_distro: DistroDef\n :param path: the location for the output file\n :type path: str\n :return: None\n \"\"\"\n with open(path, 'wb') as output:\n buff = linux_distro.to_json()\n output.write(bz2.compress(buff.encode('utf-8')))\n\n\ndef load_distro(path):\n \"\"\"\n Loads the distro from an external file\n\n :param path: the location for the input file\n :type path: str\n :return: a dict with the distro definition data\n :rtype: dict\n \"\"\"\n with open(path, 'rb') as distro_file:\n json_data = 
json.loads(bz2.decompress(distro_file.read()))\n return json_data\n\n\ndef load_from_tree(name, version, release, arch, package_type, path):\n \"\"\"\n Loads a DistroDef from an installable tree\n\n :param name: a short name that precisely distinguishes this Linux\n Distribution among all others.\n :type name: str\n :param version: the major version of the distribution. Usually this\n is a single number that denotes a large development\n cycle and support file.\n :type version: str\n :param release: the release or minor version of the distribution.\n Usually this is also a single number, that is often\n omitted or starts with a 0 when the major version\n is initially release. It's often associated with a\n shorter development cycle that contains incremental\n a collection of improvements and fixes.\n :type release: str\n :param arch: the main target for this Linux Distribution. It's common\n for some architectures to ship with packages for\n previous and still compatible architectures, such as it's\n the case with Intel/AMD 64 bit architecture that support\n 32 bit code. In cases like this, this should be set to\n the 64 bit architecture name.\n :type arch: str\n :param package_type: one of the available package info loader types\n :type package_type: str\n :param path: top level directory of the distro installation tree files\n :type path: str\n \"\"\"\n distro_def = DistroDef(name, version, release, arch)\n\n loader_class = DISTRO_PKG_INFO_LOADERS.get(package_type, None)\n if loader_class is not None:\n loader = loader_class(path)\n distro_def.software_packages = [SoftwarePackage(*args)\n for args in loader.get_packages_info()]\n distro_def.software_packages_type = package_type\n return distro_def\n\n\nclass Distro(CLICmd):\n\n \"\"\"\n Implements the avocado 'distro' subcommand\n \"\"\"\n\n name = 'distro'\n description = 'Shows detected Linux distribution'\n\n def configure(self, parser):\n parser = super(Distro, self).configure(parser)\n\n help_msg = 'Cretes a distro definition file based on the path given.'\n settings.register_option(section='distro',\n key='distro_def_create',\n default=False,\n help_msg=help_msg,\n key_type=bool,\n parser=parser,\n long_arg='--distro-def-create')\n\n help_msg = 'Distribution short name'\n settings.register_option(section='distro',\n key='distro_def_name',\n default='',\n help_msg=help_msg,\n parser=parser,\n long_arg='--distro-def-name')\n\n help_msg = 'Distribution major version name'\n settings.register_option(section='distro',\n key='distro_def_version',\n default='',\n help_msg=help_msg,\n parser=parser,\n long_arg='--distro-def-version')\n\n help_msg = 'Distribution release version number'\n settings.register_option(section='distro',\n key='distro_def_release',\n default='',\n help_msg=help_msg,\n parser=parser,\n long_arg='--distro-def-release')\n\n help_msg = 'Primary architecture that the distro targets'\n settings.register_option(section='distro',\n key='distro_def_arch',\n default='',\n help_msg=help_msg,\n parser=parser,\n long_arg='--distro-def-arch')\n\n help_msg = 'Top level directory of the distro installation files'\n settings.register_option(section='distro',\n key='distro_def_path',\n default='',\n help_msg=help_msg,\n parser=parser,\n long_arg='--distro-def-path')\n\n type_choices = tuple(DISTRO_PKG_INFO_LOADERS.keys())\n type_choices_hlp = ', '.join(type_choices)\n help_msg = 'Distro type (one of: %s)' % type_choices_hlp\n settings.register_option(section='distro',\n key='distro_def_type',\n default='',\n help_msg=help_msg,\n 
choices=type_choices,\n parser=parser,\n long_arg='--distro-def-type')\n\n @staticmethod\n def _get_output_file_name(name, version, arch, release=None):\n \"\"\"\n Adapt the output file name based on given args\n\n It's not uncommon for some distros to not have a release number, so\n adapt the output file name to that\n \"\"\"\n if release:\n return '%s-%s.%s-%s.distro' % (name, version, release, arch)\n else:\n return '%s-%s-%s.distro' % (name, version, arch)\n\n def run(self, config):\n name = config.get('distro.distro_def_name')\n version = config.get('distro.distro_def_version')\n release = config.get('distro.distro_def_release')\n arch = config.get('distro.distro_def_arch')\n distro_type = config.get('distro.distro_def_type')\n path = config.get('distro.distro_def_path')\n if config.get('distro.distro_def_create'):\n if not (name and version and arch and distro_type and path):\n LOG_UI.error('Required arguments: name, version, arch, type '\n 'and path')\n sys.exit(exit_codes.AVOCADO_FAIL)\n\n output_file_name = self._get_output_file_name(name, version,\n arch, release)\n if os.path.exists(output_file_name):\n error_msg = ('Output file \"%s\" already exists, will not '\n 'overwrite it', output_file_name)\n LOG_UI.error(error_msg)\n else:\n LOG_UI.debug(\"Loading distro information from tree... \"\n \"Please wait...\")\n distro = load_from_tree(name, version, release, arch,\n distro_type, path)\n save_distro(distro, output_file_name)\n LOG_UI.debug('Distro information saved to \"%s\"',\n output_file_name)\n else:\n detected = utils_distro.detect()\n LOG_UI.debug('Detected distribution: %s (%s) version %s release '\n '%s', detected.name, detected.arch, detected.version,\n detected.release)\n", "path": "avocado/plugins/distro.py" } ]
diff --git a/avocado/plugins/distro.py b/avocado/plugins/distro.py
index ffe339c9a5..15478d031e 100644
--- a/avocado/plugins/distro.py
+++ b/avocado/plugins/distro.py
@@ -233,8 +233,9 @@ def save_distro(linux_distro, path):
     :type path: str
     :return: None
     """
-    with open(path, 'w') as output:
-        output.write(bz2.compress(linux_distro.to_json()))
+    with open(path, 'wb') as output:
+        buff = linux_distro.to_json()
+        output.write(bz2.compress(buff.encode('utf-8')))
 
 
 def load_distro(path):
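For context, the change above works around Python 3's requirement that `bz2.compress()` receive bytes rather than `str`: `to_json()` returns a string, so it must be encoded (and the output file opened in binary mode) before compressing. A minimal sketch of that round-trip, using an illustrative JSON string rather than the avocado `DistroDef` API:

```python
import bz2

# Illustrative payload; the real code compresses DistroDef.to_json(), which
# returns a str in Python 3 and therefore must be encoded before bz2.compress.
json_text = '{"name": "fedora", "version": "26", "release": "0", "arch": "x86_64"}'

compressed = bz2.compress(json_text.encode('utf-8'))   # str -> bytes -> bz2 data
restored = bz2.decompress(compressed).decode('utf-8')  # bytes -> str again

assert restored == json_text
```

Opening the output file in `'wb'` mode is the matching half of the fix, since the compressed payload is binary.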
ivy-llc__ivy-23588
ifft2 — add an `ifft2` function to the JAX NumPy FFT frontend (`ivy/functional/frontends/jax/numpy/fft.py`), mirroring `jax.numpy.fft.ifft2`.
[ { "content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.fft(a, axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\ndef fft2(a, s=None, axes=(-2, -1), norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.array(ivy.fft2(a, s=s, dim=axes, norm=norm), dtype=ivy.dtype(a))\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\ndef fftshift(x, axes=None, name=None):\n shape = x.shape\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shifts = [(dim // 2) for dim in shape]\n elif isinstance(axes, int):\n shifts = shape[axes] // 2\n else:\n shifts = [shape[ax] // 2 for ax in axes]\n\n roll = ivy.roll(x, shifts, axis=axes)\n\n return roll\n\n\n@to_ivy_arrays_and_back\ndef ifft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.ifft(a, axis, norm=norm, n=n)\n", "path": "ivy/functional/frontends/jax/numpy/fft.py" } ]
[ { "content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.fft(a, axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\ndef fft2(a, s=None, axes=(-2, -1), norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.array(ivy.fft2(a, s=s, dim=axes, norm=norm), dtype=ivy.dtype(a))\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\ndef fftshift(x, axes=None, name=None):\n shape = x.shape\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shifts = [(dim // 2) for dim in shape]\n elif isinstance(axes, int):\n shifts = shape[axes] // 2\n else:\n shifts = [shape[ax] // 2 for ax in axes]\n\n roll = ivy.roll(x, shifts, axis=axes)\n\n return roll\n\n\n@to_ivy_arrays_and_back\ndef ifft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.ifft(a, axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\ndef ifft2(a, s=None, axes=(-2, -1), norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.array(ivy.ifft2(a, s=s, dim=axes, norm=norm), dtype=ivy.dtype(a))\n", "path": "ivy/functional/frontends/jax/numpy/fft.py" } ]
diff --git a/ivy/functional/frontends/jax/numpy/fft.py b/ivy/functional/frontends/jax/numpy/fft.py
index 0ba0fa31f9c94..8b2d0e17aebf6 100644
--- a/ivy/functional/frontends/jax/numpy/fft.py
+++ b/ivy/functional/frontends/jax/numpy/fft.py
@@ -41,3 +41,10 @@ def ifft(a, n=None, axis=-1, norm=None):
     if norm is None:
         norm = "backward"
     return ivy.ifft(a, axis, norm=norm, n=n)
+
+
+@to_ivy_arrays_and_back
+def ifft2(a, s=None, axes=(-2, -1), norm=None):
+    if norm is None:
+        norm = "backward"
+    return ivy.array(ivy.ifft2(a, s=s, dim=axes, norm=norm), dtype=ivy.dtype(a))
diff --git a/ivy_tests/test_ivy/test_frontends/test_jax/test_numpy/test_fft.py b/ivy_tests/test_ivy/test_frontends/test_jax/test_numpy/test_fft.py
index f4fc0911ae534..a13ea4f44bb6d 100644
--- a/ivy_tests/test_ivy/test_frontends/test_jax/test_numpy/test_fft.py
+++ b/ivy_tests/test_ivy/test_frontends/test_jax/test_numpy/test_fft.py
@@ -163,3 +163,54 @@ def test_jax_numpy_ifft(
         atol=1e-02,
         rtol=1e-02,
     )
+
+
+# ifft2
+@handle_frontend_test(
+    fn_tree="jax.numpy.fft.ifft2",
+    dtype_values=helpers.dtype_and_values(
+        available_dtypes=helpers.get_dtypes("valid"),
+        num_arrays=1,
+        min_value=-1e5,
+        max_value=1e5,
+        min_num_dims=2,
+        max_num_dims=5,
+        min_dim_size=2,
+        max_dim_size=5,
+        allow_inf=False,
+        large_abs_safety_factor=2.5,
+        small_abs_safety_factor=2.5,
+        safety_factor_scale="log",
+    ),
+    axes=st.sampled_from([(0, 1), (-1, -2), (1, 0)]),
+    s=st.tuples(
+        st.integers(min_value=2, max_value=256), st.integers(min_value=2, max_value=256)
+    ),
+    norm=st.sampled_from(["backward", "ortho", "forward", None]),
+)
+def test_jax_numpy_ifft2(
+    dtype_values,
+    s,
+    axes,
+    norm,
+    frontend,
+    backend_fw,
+    test_flags,
+    fn_tree,
+    on_device,
+):
+    dtype, values = dtype_values
+    helpers.test_frontend_function(
+        input_dtypes=dtype,
+        frontend=frontend,
+        backend_to_test=backend_fw,
+        test_flags=test_flags,
+        fn_tree=fn_tree,
+        on_device=on_device,
+        a=values[0],
+        s=s,
+        axes=axes,
+        norm=norm,
+        atol=1e-02,
+        rtol=1e-02,
+    )
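As a quick sanity check on the semantics the new frontend in the diff above mirrors, the behaviour of `ifft2` can be illustrated with plain NumPy (this is only an illustration of the expected maths, not Ivy's implementation):

```python
import numpy as np

# jax.numpy.fft.ifft2 follows numpy.fft.ifft2: it inverts fft2 over the last
# two axes, using "backward" normalisation when norm is None.
x = np.random.rand(4, 4)
roundtrip = np.fft.ifft2(np.fft.fft2(x))

assert np.allclose(roundtrip, x)
```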
PrefectHQ__prefect-2959
Undefined name: make_env in ./server/src/prefect_server/cli/dev.py

## Description

An _undefined name_ like #2235 and #1199

https://github.com/PrefectHQ/prefect/blob/master/server/src/prefect_server/cli/dev.py#L88

`make_env` is an undefined name in this context, which will raise a `NameError` at runtime. Should this be `make_dev_env()`, defined on line 36, or is `from prefect.cli.server import make_env` the right solution?

[flake8](http://flake8.pycqa.org) testing of https://github.com/PrefectHQ/prefect on Python 3.8.3

$ __flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics__
```
./server/src/prefect_server/cli/dev.py:88:11: F821 undefined name 'make_env'
    env = make_env()
          ^
1     F821 undefined name 'make_env'
1
```
https://flake8.pycqa.org/en/latest/user/error-codes.html

On the flake8 test selection, this PR does _not_ focus on "_style violations_" (the majority of flake8 error codes that [__psf/black__](https://github.com/psf/black) can autocorrect). Instead these tests focus on runtime safety and correctness:

* E9 tests are about Python syntax errors, usually raised because flake8 cannot build an Abstract Syntax Tree (AST). Often these issues are a sign of unused code or code that has not been ported to Python 3. These would be compile-time errors in a compiled language, but in a dynamic language like Python they result in the script halting/crashing on the user.
* F63 tests are usually about the confusion between identity and equality in Python. "Use ==/!= to compare str, bytes, and int literals" is the classic case. These are areas where __a == b__ is True but __a is b__ is False (or vice versa). Python >= 3.8 will raise SyntaxWarnings on these instances.
* F7 tests cover logic errors and syntax errors in type hints.
* F82 tests are almost always _undefined names_, which are usually a sign of a typo, missing imports, or code that has not been ported to Python 3. These also would be compile-time errors in a compiled language, but in Python a __NameError__ is raised, which will halt/crash the script on the user.
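To see why an F821 hit matters at runtime, here is a stripped-down sketch (hypothetical module, not the Prefect source): the module imports cleanly, and the `NameError` only fires when the affected function body executes:

```python
def build():
    # 'make_env' is never defined or imported in this module, mirroring the
    # situation flake8 flags with F821.
    env = make_env()
    return env

try:
    build()
except NameError as exc:
    print(exc)  # name 'make_env' is not defined
```

The fix that was merged (see the diff further below) simply calls the existing `make_dev_env()` helper instead.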
[ { "content": "# Licensed under the Prefect Community License, available at\n# https://www.prefect.io/legal/prefect-community-license\n\n\nimport glob\nimport os\nimport shutil\nimport signal\nimport subprocess\nimport time\nfrom pathlib import Path\n\nimport click\n\nimport prefect\nimport prefect_server\nfrom prefect_server import api, cli, config, utilities\nfrom prefect_server.database import models\n\n\[email protected]()\ndef dev():\n \"\"\"\n Commands for developing Server\n\n \\b\n Usage:\n $ prefect-server ...\n \n \\b\n Arguments:\n build builds prefect server, ui, apollo from source\n \"\"\"\n\n\ndef make_dev_env(fname=None):\n\n # replace localhost with postgres to use docker-compose dns\n PREFECT_ENV = dict(\n DB_CONNECTION_URL=config.database.connection_url.replace(\n \"localhost\", \"postgres\"\n )\n )\n\n APOLLO_ENV = dict(\n HASURA_API_URL=f\"http://{config.hasura.host}:{config.hasura.port}/v1alpha1/graphql\",\n HASURA_WS_URL=f\"ws://{config.hasura.host}:{config.hasura.port}/v1alpha1/graphql\",\n PREFECT_API_URL=f\"http://{config.services.graphql.host}:{config.services.graphql.port}{config.services.graphql.path}\",\n PREFECT_API_HEALTH_URL=f\"http://{config.services.graphql.host}:{config.services.graphql.port}/health\",\n )\n\n POSTGRES_ENV = dict(\n POSTGRES_USER=config.database.username,\n POSTGRES_PASSWORD=config.database.password,\n POSTGRES_DB=config.database.name,\n )\n\n HASURA_ENV = dict()\n\n UI_ENV = dict(GRAPHQL_URL=config.services.ui.graphql_url)\n\n ENV = os.environ.copy()\n ENV.update(**PREFECT_ENV, **APOLLO_ENV, **POSTGRES_ENV, **UI_ENV, **HASURA_ENV)\n\n if fname is not None:\n list_of_pairs = [\n f\"{k}={repr(v)}\" if \"\\n\" in v else f\"{k}={v}\" for k, v in ENV.items()\n ]\n with open(fname, \"w\") as f:\n f.write(\"\\n\".join(list_of_pairs))\n return ENV.copy()\n\n\[email protected](hidden=True)\[email protected](\n \"--version\",\n \"-v\",\n help=\"The server image versions to build (for example, '0.10.0' or 'master')\",\n # TODO: update this default to use prefect.__version__ logic\n default=\"latest\",\n)\ndef build(version):\n \"\"\"\n foobar\n \"\"\"\n docker_dir = Path(prefect_server.__file__).parents[2] / \"docker\"\n\n env = make_env()\n\n if \"PREFECT_SERVER_TAG\" not in env:\n env.update(PREFECT_SERVER_TAG=version)\n\n proc = None\n cmd = [\"docker-compose\", \"build\"]\n proc = subprocess.Popen(cmd, cwd=docker_dir, env=env)\n\n\[email protected]()\[email protected](\"--tag\", \"-t\", help=\"The server image/tag to use\", default=\"latest\")\[email protected](\n \"--skip-pull\",\n help=\"Pass this flag to skip pulling new images (if available)\",\n is_flag=True,\n)\ndef infrastructure(tag, skip_pull):\n \"\"\"\n This command:\n - starts a PostgreSQL database\n - starts Hasura\n \"\"\"\n docker_dir = Path(prefect_server.__file__).parents[2] / \"docker\"\n\n env = make_dev_env()\n\n proc = None\n try:\n if not skip_pull:\n subprocess.check_call(\n [\"docker-compose\", \"pull\", \"postgres\", \"hasura\"],\n cwd=docker_dir,\n env=env,\n )\n proc = subprocess.Popen(\n [\"docker-compose\", \"up\", \"postgres\", \"hasura\"], cwd=docker_dir, env=env\n )\n\n # if not initialize, just run hasura (and dependencies), which will skip the init step\n while True:\n time.sleep(0.5)\n except:\n click.secho(\n \"Exception caught; killing services (press ctrl-C to force)\",\n fg=\"white\",\n bg=\"red\",\n )\n subprocess.check_output([\"docker-compose\", \"down\"], cwd=docker_dir, env=env)\n if proc:\n proc.kill()\n raise\n\n\[email protected]()\[email 
protected](\"--skip-ui\", help=\"Pass this flag to skip UI dependencies\", is_flag=True)\[email protected](\n \"--skip-apollo\", help=\"Pass this flag to skip Apollo dependencies\", is_flag=True\n)\ndef install_dependencies(skip_ui, skip_apollo):\n \"\"\"\n This command:\n - installs Apollo dependencies\n - install UI dependencies\n \"\"\"\n if not skip_ui:\n click.secho(\"Installing UI dependencies...\")\n time.sleep(0.5)\n install_ui_dependencies()\n\n if not skip_apollo:\n click.secho(\"Installing Apollo dependencies...\")\n time.sleep(0.5)\n install_apollo_dependencies()\n\n if skip_ui and skip_apollo:\n click.secho(\"No dependencies were installed because all were skipped.\")\n\n\ndef install_apollo_dependencies():\n apollo_dir = Path(prefect_server.__file__).parents[2] / \"services\" / \"apollo\"\n\n proc = None\n try:\n proc = subprocess.check_call([\"npm\", \"install\"], cwd=apollo_dir)\n click.secho(\"Apollo dependencies installed! 🚀🚀🚀\")\n except:\n click.secho(\n \"Exception caught while installing Apollo dependencies.\",\n fg=\"white\",\n bg=\"red\",\n )\n if proc:\n proc.kill()\n raise\n\n\ndef install_ui_dependencies():\n ui_dir = Path(prefect_server.__file__).parents[2] / \"services\" / \"ui\"\n\n proc = None\n try:\n proc = subprocess.check_call([\"npm\", \"install\"], cwd=ui_dir)\n click.secho(\"UI dependencies installed! 🕹🕹🕹\")\n except:\n click.secho(\n \"Exception caught while installing UI dependencies.\", fg=\"white\", bg=\"red\"\n )\n if proc:\n proc.kill()\n raise\n\n\ndef is_process_group_empty(pgid: int):\n proc = subprocess.Popen(\n [\"pgrep\", \"-g\", str(pgid)], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL\n )\n proc.wait()\n return proc.returncode != 0\n\n\ndef kill_process_group(proc, timeout: int = 3):\n try:\n pgid = os.getpgid(proc.pid)\n os.killpg(pgid, signal.SIGTERM)\n proc.terminate()\n\n for _ in range(timeout):\n if is_process_group_empty(pgid):\n return\n click.secho(\"Waiting for process group to exit...\", fg=\"white\", bg=\"blue\")\n time.sleep(1)\n\n click.secho(\"Timeout while shutting down, killing!\", fg=\"white\", bg=\"red\")\n os.killpg(pgid, signal.SIGKILL)\n proc.kill()\n except Exception as exc:\n click.secho(exc)\n\n\[email protected]()\[email protected](\n \"--include\", \"-i\", help=\"A comma-seperated list of serivces that should be run\"\n)\[email protected](\n \"--exclude\", \"-e\", help=\"A comma-seperated list of services that should not be run\"\n)\ndef services(include, exclude):\n \"\"\"\n This command starts services\n \"\"\"\n\n all_services = [\"graphql\", \"scheduler\", \"apollo\", \"ui\"]\n if not include:\n include = all_services\n else:\n include = include.split(\",\")\n if not exclude:\n exclude = \"\"\n run_services = sorted(set(include).difference(exclude.split(\",\")))\n\n click.secho(\n f\"\\n\\nStarting Prefect Server services: {' '.join(run_services)}\\n\\n\",\n fg=\"green\",\n )\n\n procs = []\n for service in run_services:\n procs.append(\n subprocess.Popen(\n [\"prefect-server\", \"services\", service],\n env=make_dev_env(),\n preexec_fn=os.setsid,\n )\n )\n\n try:\n while True:\n time.sleep(1)\n except:\n click.secho(\"Exception caught; shutting down!\", fg=\"white\", bg=\"red\")\n for proc in procs:\n kill_process_group(proc)\n\n\ndef config_to_dict(config):\n if isinstance(config, (list, tuple, set)):\n return type(config)([config_to_dict(d) for d in config])\n elif isinstance(config, prefect.configuration.Config):\n return dict({k: config_to_dict(v) for k, v in config.items()})\n return 
config\n\n\ndef set_nested(dictionary, path: str, value: str):\n path = path.split(\".\")\n for level in path[:-1]:\n dictionary = dictionary.setdefault(level, {})\n dictionary[path[-1]] = value\n\n\[email protected]()\[email protected](\"-m\", \"--migration-message\", required=True)\ndef generate_migration(migration_message):\n # ensure this is called from the root server directory\n if Path(prefect_server.__file__).parents[2] != Path(os.getcwd()):\n raise click.ClickException(\n \"generate-migration must be run from the server root directory.\"\n )\n # find the most recent revision\n alembic_migrations_path = \"../../../services/postgres/alembic/versions\"\n versions = glob.glob(\n os.path.join(os.path.dirname(__file__), alembic_migrations_path, \"*.py\")\n )\n versions.sort()\n most_recent_migration = versions[-1]\n with open(\n os.path.join(\n os.path.dirname(__file__), alembic_migrations_path, most_recent_migration\n )\n ) as migration:\n for line in migration.readlines():\n if line.startswith(\"Revision ID:\"):\n revision = line.split(\": \")[1].strip()\n click.echo(f\"Most recent Alembic revision is {revision}\")\n # copy metadata to a backup for corresponding revision\n hasura_migrations_path = \"../../../services/hasura/migrations\"\n backup_metadata_file = f\"metadata-{revision}.yaml\"\n backup_metadata_destination = os.path.abspath(\n os.path.join(\n os.path.dirname(__file__),\n hasura_migrations_path,\n \"versions\",\n backup_metadata_file,\n )\n )\n shutil.copy(\n os.path.join(\n os.path.dirname(__file__), hasura_migrations_path, \"metadata.yaml\"\n ),\n backup_metadata_destination,\n )\n click.echo(f\"Copied metadata to {backup_metadata_destination}\")\n # create a new revision\n click.echo(\n subprocess.check_output([\"alembic\", \"revision\", \"-m\", migration_message])\n )\n click.secho(\"Prefect Server migration generated!\", fg=\"green\")\n", "path": "server/src/prefect_server/cli/dev.py" } ]
[ { "content": "# Licensed under the Prefect Community License, available at\n# https://www.prefect.io/legal/prefect-community-license\n\n\nimport glob\nimport os\nimport shutil\nimport signal\nimport subprocess\nimport time\nfrom pathlib import Path\n\nimport click\n\nimport prefect\nimport prefect_server\nfrom prefect_server import api, cli, config, utilities\nfrom prefect_server.database import models\n\n\[email protected]()\ndef dev():\n \"\"\"\n Commands for developing Server\n\n \\b\n Usage:\n $ prefect-server ...\n \n \\b\n Arguments:\n build builds prefect server, ui, apollo from source\n \"\"\"\n\n\ndef make_dev_env(fname=None):\n\n # replace localhost with postgres to use docker-compose dns\n PREFECT_ENV = dict(\n DB_CONNECTION_URL=config.database.connection_url.replace(\n \"localhost\", \"postgres\"\n )\n )\n\n APOLLO_ENV = dict(\n HASURA_API_URL=f\"http://{config.hasura.host}:{config.hasura.port}/v1alpha1/graphql\",\n HASURA_WS_URL=f\"ws://{config.hasura.host}:{config.hasura.port}/v1alpha1/graphql\",\n PREFECT_API_URL=f\"http://{config.services.graphql.host}:{config.services.graphql.port}{config.services.graphql.path}\",\n PREFECT_API_HEALTH_URL=f\"http://{config.services.graphql.host}:{config.services.graphql.port}/health\",\n )\n\n POSTGRES_ENV = dict(\n POSTGRES_USER=config.database.username,\n POSTGRES_PASSWORD=config.database.password,\n POSTGRES_DB=config.database.name,\n )\n\n HASURA_ENV = dict()\n\n UI_ENV = dict(GRAPHQL_URL=config.services.ui.graphql_url)\n\n ENV = os.environ.copy()\n ENV.update(**PREFECT_ENV, **APOLLO_ENV, **POSTGRES_ENV, **UI_ENV, **HASURA_ENV)\n\n if fname is not None:\n list_of_pairs = [\n f\"{k}={repr(v)}\" if \"\\n\" in v else f\"{k}={v}\" for k, v in ENV.items()\n ]\n with open(fname, \"w\") as f:\n f.write(\"\\n\".join(list_of_pairs))\n return ENV.copy()\n\n\[email protected](hidden=True)\[email protected](\n \"--version\",\n \"-v\",\n help=\"The server image versions to build (for example, '0.10.0' or 'master')\",\n # TODO: update this default to use prefect.__version__ logic\n default=\"latest\",\n)\ndef build(version):\n \"\"\"\n foobar\n \"\"\"\n docker_dir = Path(prefect_server.__file__).parents[2] / \"docker\"\n\n env = make_dev_env()\n\n if \"PREFECT_SERVER_TAG\" not in env:\n env.update(PREFECT_SERVER_TAG=version)\n\n proc = None\n cmd = [\"docker-compose\", \"build\"]\n proc = subprocess.Popen(cmd, cwd=docker_dir, env=env)\n\n\[email protected]()\[email protected](\"--tag\", \"-t\", help=\"The server image/tag to use\", default=\"latest\")\[email protected](\n \"--skip-pull\",\n help=\"Pass this flag to skip pulling new images (if available)\",\n is_flag=True,\n)\ndef infrastructure(tag, skip_pull):\n \"\"\"\n This command:\n - starts a PostgreSQL database\n - starts Hasura\n \"\"\"\n docker_dir = Path(prefect_server.__file__).parents[2] / \"docker\"\n\n env = make_dev_env()\n\n proc = None\n try:\n if not skip_pull:\n subprocess.check_call(\n [\"docker-compose\", \"pull\", \"postgres\", \"hasura\"],\n cwd=docker_dir,\n env=env,\n )\n proc = subprocess.Popen(\n [\"docker-compose\", \"up\", \"postgres\", \"hasura\"], cwd=docker_dir, env=env\n )\n\n # if not initialize, just run hasura (and dependencies), which will skip the init step\n while True:\n time.sleep(0.5)\n except:\n click.secho(\n \"Exception caught; killing services (press ctrl-C to force)\",\n fg=\"white\",\n bg=\"red\",\n )\n subprocess.check_output([\"docker-compose\", \"down\"], cwd=docker_dir, env=env)\n if proc:\n proc.kill()\n raise\n\n\[email protected]()\[email 
protected](\"--skip-ui\", help=\"Pass this flag to skip UI dependencies\", is_flag=True)\[email protected](\n \"--skip-apollo\", help=\"Pass this flag to skip Apollo dependencies\", is_flag=True\n)\ndef install_dependencies(skip_ui, skip_apollo):\n \"\"\"\n This command:\n - installs Apollo dependencies\n - install UI dependencies\n \"\"\"\n if not skip_ui:\n click.secho(\"Installing UI dependencies...\")\n time.sleep(0.5)\n install_ui_dependencies()\n\n if not skip_apollo:\n click.secho(\"Installing Apollo dependencies...\")\n time.sleep(0.5)\n install_apollo_dependencies()\n\n if skip_ui and skip_apollo:\n click.secho(\"No dependencies were installed because all were skipped.\")\n\n\ndef install_apollo_dependencies():\n apollo_dir = Path(prefect_server.__file__).parents[2] / \"services\" / \"apollo\"\n\n proc = None\n try:\n proc = subprocess.check_call([\"npm\", \"install\"], cwd=apollo_dir)\n click.secho(\"Apollo dependencies installed! 🚀🚀🚀\")\n except:\n click.secho(\n \"Exception caught while installing Apollo dependencies.\",\n fg=\"white\",\n bg=\"red\",\n )\n if proc:\n proc.kill()\n raise\n\n\ndef install_ui_dependencies():\n ui_dir = Path(prefect_server.__file__).parents[2] / \"services\" / \"ui\"\n\n proc = None\n try:\n proc = subprocess.check_call([\"npm\", \"install\"], cwd=ui_dir)\n click.secho(\"UI dependencies installed! 🕹🕹🕹\")\n except:\n click.secho(\n \"Exception caught while installing UI dependencies.\", fg=\"white\", bg=\"red\"\n )\n if proc:\n proc.kill()\n raise\n\n\ndef is_process_group_empty(pgid: int):\n proc = subprocess.Popen(\n [\"pgrep\", \"-g\", str(pgid)], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL\n )\n proc.wait()\n return proc.returncode != 0\n\n\ndef kill_process_group(proc, timeout: int = 3):\n try:\n pgid = os.getpgid(proc.pid)\n os.killpg(pgid, signal.SIGTERM)\n proc.terminate()\n\n for _ in range(timeout):\n if is_process_group_empty(pgid):\n return\n click.secho(\"Waiting for process group to exit...\", fg=\"white\", bg=\"blue\")\n time.sleep(1)\n\n click.secho(\"Timeout while shutting down, killing!\", fg=\"white\", bg=\"red\")\n os.killpg(pgid, signal.SIGKILL)\n proc.kill()\n except Exception as exc:\n click.secho(exc)\n\n\[email protected]()\[email protected](\n \"--include\", \"-i\", help=\"A comma-seperated list of serivces that should be run\"\n)\[email protected](\n \"--exclude\", \"-e\", help=\"A comma-seperated list of services that should not be run\"\n)\ndef services(include, exclude):\n \"\"\"\n This command starts services\n \"\"\"\n\n all_services = [\"graphql\", \"scheduler\", \"apollo\", \"ui\"]\n if not include:\n include = all_services\n else:\n include = include.split(\",\")\n if not exclude:\n exclude = \"\"\n run_services = sorted(set(include).difference(exclude.split(\",\")))\n\n click.secho(\n f\"\\n\\nStarting Prefect Server services: {' '.join(run_services)}\\n\\n\",\n fg=\"green\",\n )\n\n procs = []\n for service in run_services:\n procs.append(\n subprocess.Popen(\n [\"prefect-server\", \"services\", service],\n env=make_dev_env(),\n preexec_fn=os.setsid,\n )\n )\n\n try:\n while True:\n time.sleep(1)\n except:\n click.secho(\"Exception caught; shutting down!\", fg=\"white\", bg=\"red\")\n for proc in procs:\n kill_process_group(proc)\n\n\ndef config_to_dict(config):\n if isinstance(config, (list, tuple, set)):\n return type(config)([config_to_dict(d) for d in config])\n elif isinstance(config, prefect.configuration.Config):\n return dict({k: config_to_dict(v) for k, v in config.items()})\n return 
config\n\n\ndef set_nested(dictionary, path: str, value: str):\n path = path.split(\".\")\n for level in path[:-1]:\n dictionary = dictionary.setdefault(level, {})\n dictionary[path[-1]] = value\n\n\[email protected]()\[email protected](\"-m\", \"--migration-message\", required=True)\ndef generate_migration(migration_message):\n # ensure this is called from the root server directory\n if Path(prefect_server.__file__).parents[2] != Path(os.getcwd()):\n raise click.ClickException(\n \"generate-migration must be run from the server root directory.\"\n )\n # find the most recent revision\n alembic_migrations_path = \"../../../services/postgres/alembic/versions\"\n versions = glob.glob(\n os.path.join(os.path.dirname(__file__), alembic_migrations_path, \"*.py\")\n )\n versions.sort()\n most_recent_migration = versions[-1]\n with open(\n os.path.join(\n os.path.dirname(__file__), alembic_migrations_path, most_recent_migration\n )\n ) as migration:\n for line in migration.readlines():\n if line.startswith(\"Revision ID:\"):\n revision = line.split(\": \")[1].strip()\n click.echo(f\"Most recent Alembic revision is {revision}\")\n # copy metadata to a backup for corresponding revision\n hasura_migrations_path = \"../../../services/hasura/migrations\"\n backup_metadata_file = f\"metadata-{revision}.yaml\"\n backup_metadata_destination = os.path.abspath(\n os.path.join(\n os.path.dirname(__file__),\n hasura_migrations_path,\n \"versions\",\n backup_metadata_file,\n )\n )\n shutil.copy(\n os.path.join(\n os.path.dirname(__file__), hasura_migrations_path, \"metadata.yaml\"\n ),\n backup_metadata_destination,\n )\n click.echo(f\"Copied metadata to {backup_metadata_destination}\")\n # create a new revision\n click.echo(\n subprocess.check_output([\"alembic\", \"revision\", \"-m\", migration_message])\n )\n click.secho(\"Prefect Server migration generated!\", fg=\"green\")\n", "path": "server/src/prefect_server/cli/dev.py" } ]
diff --git a/server/src/prefect_server/cli/dev.py b/server/src/prefect_server/cli/dev.py
index 440f96e6e5e8..2a6a4fdee5fb 100644
--- a/server/src/prefect_server/cli/dev.py
+++ b/server/src/prefect_server/cli/dev.py
@@ -85,7 +85,7 @@ def build(version):
     """
     docker_dir = Path(prefect_server.__file__).parents[2] / "docker"
 
-    env = make_env()
+    env = make_dev_env()
 
     if "PREFECT_SERVER_TAG" not in env:
         env.update(PREFECT_SERVER_TAG=version)
gwastro__pycbc-3663
Bug in command line option of pycbc_multi_inspiral code

Hi,

I was trying to run the pycbc_multi_inspiral code with the command line option `injection-f-ref 10`, and I get this error:

```
ValueError: Issue with option: injection_f_ref
Received value: 1 0
If you are supplying a value for all ifos, you must only supply one value.
```

Probing further, it turns out that if this command line option is removed, the code does run. I have also pasted the full error below.

To give some context: I was trying to use the pycbc_multi_inspiral code with an injection parameter file. I created an injection.hdf file using pycbc_create_injections, and I am using template bank files from previous successful PyGRB runs. If further information is needed, please tell me.

This is the full error I received:

```
  File "pycbc_multi_inspiral", line 118, in <module>
    opt = parser.parse_args()
  File "/usr/lib64/python3.6/argparse.py", line 1734, in parse_args
    args, argv = self.parse_known_args(args, namespace)
  File "/usr/lib64/python3.6/argparse.py", line 1766, in parse_known_args
    namespace, args = self._parse_known_args(args, namespace)
  File "/usr/lib64/python3.6/argparse.py", line 1972, in _parse_known_args
    start_index = consume_optional(start_index)
  File "/usr/lib64/python3.6/argparse.py", line 1912, in consume_optional
    take_action(action, args, option_string)
  File "/usr/lib64/python3.6/argparse.py", line 1840, in take_action
    action(self, namespace, argument_values, option_string)
  File "/home/jam.sadiq/SearchPipeline/env/lib/python3.6/site-packages/PyCBC-0.0a8053-py3.6-linux-x86_64.egg/pycbc/types/optparse.py", line 102, in __call__
    raise ValueError(err_msg)
ValueError: Issue with option: injection_f_ref
Received value: 1 0
If you are supplying a value for all ifos, you must only supply one value.
```
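The confusing `Received value: 1 0` can be reproduced with a small standalone argparse sketch (hypothetical option names, not the PyCBC code). When a custom `Action` is registered with the default `nargs=None`, argparse passes `__call__` a single *string*, so joining or iterating over `values` walks over its characters and `10` turns into `1 0`:

```python
import argparse

class EchoAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        # With nargs=None, 'values' is the bare string "10"; with nargs='+'
        # it is the list ["10"], which is what multi-ifo parsing expects.
        print(option_string, "received:", ' '.join(values))

parser = argparse.ArgumentParser()
parser.add_argument('--broken', action=EchoAction)            # nargs defaults to None
parser.add_argument('--fixed', action=EchoAction, nargs='+')  # the change made in the fix

parser.parse_args(['--broken', '10', '--fixed', '10'])
# --broken received: 1 0
# --fixed received: 10
```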
[ { "content": "# Copyright (C) 2015 Ian Harry\n#\n# This program is free software; you can redistribute it and/or modify it\n# under the terms of the GNU General Public License as published by the\n# Free Software Foundation; either version 3 of the License, or (at your\n# option) any later version.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Generals\n# Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along\n# with this program; if not, write to the Free Software Foundation, Inc.,\n# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nThis modules contains extensions for use with argparse\n\"\"\"\nimport copy\nimport argparse\nfrom collections import defaultdict\n\nclass DictWithDefaultReturn(defaultdict):\n default_set = False\n ifo_set = False\n def __bool__(self):\n if self.items() and not all(entry is None for entry in self.values()):\n # True if any values are explictly set.\n return True\n elif self['RANDOM_STRING_314324'] is not None:\n # Or true if the default value was set\n # NOTE: This stores the string RANDOM_STRING_314324 in the dict\n # so subsequent calls will be caught in the first test here.\n return True\n else:\n # Else false\n return False\n # Python 2 and 3 have different conventions for boolean method\n __nonzero__ = __bool__\n\nclass MultiDetOptionAction(argparse.Action):\n # Initialise the same as the standard 'append' action\n def __init__(self,\n option_strings,\n dest,\n nargs=None,\n const=None,\n default=None,\n type=None,\n choices=None,\n required=False,\n help=None,\n metavar=None):\n if type is not None:\n self.internal_type = type\n else:\n self.internal_type = str\n new_default = DictWithDefaultReturn(lambda: default)\n #new_default.default_value=default\n if nargs == 0:\n raise ValueError('nargs for append actions must be > 0; if arg '\n 'strings are not supplying the value to append, '\n 'the append const action may be more appropriate')\n if const is not None and nargs != argparse.OPTIONAL:\n raise ValueError('nargs must be %r to supply const'\n % argparse.OPTIONAL)\n super(MultiDetOptionAction, self).__init__(\n option_strings=option_strings,\n dest=dest,\n nargs=nargs,\n const=const,\n default=new_default,\n type=str,\n choices=choices,\n required=required,\n help=help,\n metavar=metavar)\n\n def __call__(self, parser, namespace, values, option_string=None):\n # Again this is modified from the standard argparse 'append' action\n err_msg = \"Issue with option: %s \\n\" %(self.dest,)\n err_msg += \"Received value: %s \\n\" %(' '.join(values),)\n if getattr(namespace, self.dest, None) is None:\n setattr(namespace, self.dest, DictWithDefaultReturn())\n items = getattr(namespace, self.dest)\n items = copy.copy(items)\n for value in values:\n value = value.split(':')\n if len(value) == 2:\n # \"Normal\" case, all ifos supplied independently as \"H1:VALUE\"\n if items.default_set:\n err_msg += \"If you are supplying a value for all ifos, you \"\n err_msg += \"cannot also supply values for specific ifos.\"\n raise ValueError(err_msg)\n items[value[0]] = self.internal_type(value[1])\n items.ifo_set = True\n elif len(value) == 1:\n # OR supply only one value and use this for all ifos\n if items.default_set:\n err_msg += \"If you are supplying a value for all ifos, you \"\n err_msg += \"must only supply one value.\"\n raise 
ValueError(err_msg)\n # Can't use a global and ifo specific options\n if items.ifo_set:\n err_msg += \"If you are supplying a value for all ifos, you \"\n err_msg += \"cannot also supply values for specific ifos.\"\n raise ValueError(err_msg)\n #items.default_value = self.internal_type(value[0])\n new_default = self.internal_type(value[0])\n items.default_factory = lambda: new_default\n items.default_set = True\n else:\n err_msg += \"The character ':' is used to deliminate the \"\n err_msg += \"ifo and the value. Please do not use it more than \"\n err_msg += \"once.\"\n raise ValueError(err_msg)\n setattr(namespace, self.dest, items)\n\nclass MultiDetOptionActionSpecial(MultiDetOptionAction):\n \"\"\"\n This class in an extension of the MultiDetOptionAction class to handle\n cases where the : is already a special character. For example the channel\n name is something like H1:CHANNEL_NAME. Here the channel name *must*\n be provided uniquely for each ifo. The dictionary key is set to H1 and the\n value to H1:CHANNEL_NAME for this example.\n \"\"\"\n def __call__(self, parser, namespace, values, option_string=None):\n # Again this is modified from the standard argparse 'append' action\n err_msg = \"Issue with option: %s \\n\" %(self.dest,)\n err_msg += \"Received value: %s \\n\" %(' '.join(values),)\n if getattr(namespace, self.dest, None) is None:\n setattr(namespace, self.dest, {})\n items = getattr(namespace, self.dest)\n items = copy.copy(items)\n for value in values:\n value_split = value.split(':')\n if len(value_split) == 2:\n # \"Normal\" case, all ifos supplied independently as \"H1:VALUE\"\n if value_split[0] in items:\n err_msg += \"Multiple values supplied for ifo %s.\\n\" \\\n %(value_split[0],)\n err_msg += \"Already have %s.\" %(items[value_split[0]])\n raise ValueError(err_msg)\n else:\n items[value_split[0]] = value\n elif len(value_split) == 3:\n # This is an unadvertised feature. It is used for cases where I\n # want to pretend H1 data is actually L1 (or similar). So if I\n # supply --channel-name H1:L1:LDAS-STRAIN I can use L1 data and\n # pretend it is H1 internally.\n if value_split[0] in items:\n err_msg += \"Multiple values supplied for ifo %s.\\n\" \\\n %(value_split[0],)\n err_msg += \"Already have %s.\" %(items[value_split[0]])\n raise ValueError(err_msg)\n else:\n items[value_split[0]] = ':'.join(value_split[1:3])\n else:\n err_msg += \"The character ':' is used to deliminate the \"\n err_msg += \"ifo and the value. It must appear exactly \"\n err_msg += \"once.\"\n raise ValueError(err_msg)\n setattr(namespace, self.dest, items)\n\nclass MultiDetMultiColonOptionAction(MultiDetOptionAction):\n \"\"\"A special case of `MultiDetOptionAction` which allows one to use\n arguments containing colons, such as `V1:FOOBAR:1`. The first colon is\n assumed to be the separator between the detector and the argument.\n All subsequent colons are kept as part of the argument. 
Unlike\n `MultiDetOptionAction`, all arguments must be prefixed by the\n corresponding detector.\n \"\"\"\n def __call__(self, parser, namespace, values, option_string=None):\n err_msg = ('Issue with option: {}\\n'\n 'Received value: {}\\n').format(self.dest, ' '.join(values))\n if getattr(namespace, self.dest, None) is None:\n setattr(namespace, self.dest, {})\n items = copy.copy(getattr(namespace, self.dest))\n for value in values:\n if ':' not in value:\n err_msg += (\"Each argument must contain at least one ':' \"\n \"character\")\n raise ValueError(err_msg)\n detector, argument = value.split(':', 1)\n if detector in items:\n err_msg += ('Multiple values supplied for detector {},\\n'\n 'already have {}.')\n err_msg = err_msg.format(detector, items[detector])\n raise ValueError(err_msg)\n items[detector] = self.internal_type(argument)\n setattr(namespace, self.dest, items)\n\nclass MultiDetOptionAppendAction(MultiDetOptionAction):\n def __call__(self, parser, namespace, values, option_string=None):\n # Again this is modified from the standard argparse 'append' action\n if getattr(namespace, self.dest, None) is None:\n setattr(namespace, self.dest, {})\n items = getattr(namespace, self.dest)\n items = copy.copy(items)\n for value in values:\n value = value.split(':')\n if len(value) == 2:\n # \"Normal\" case, all ifos supplied independetly as \"H1:VALUE\"\n if value[0] in items:\n items[value[0]].append(self.internal_type(value[1]))\n else:\n items[value[0]] = [self.internal_type(value[1])]\n else:\n err_msg = \"Issue with option: %s \\n\" %(self.dest,)\n err_msg += \"Received value: %s \\n\" %(' '.join(values),)\n err_msg += \"The character ':' is used to distinguish the \"\n err_msg += \"ifo and the value. It must be given exactly once \"\n err_msg += \"for all entries\"\n raise ValueError(err_msg)\n setattr(namespace, self.dest, items)\n\ndef required_opts(opt, parser, opt_list, required_by=None):\n \"\"\"Check that all the opts are defined\n\n Parameters\n ----------\n opt : object\n Result of option parsing\n parser : object\n OptionParser instance.\n opt_list : list of strings\n required_by : string, optional\n the option that requires these options (if applicable)\n \"\"\"\n for name in opt_list:\n attr = name[2:].replace('-', '_')\n if not hasattr(opt, attr) or (getattr(opt, attr) is None):\n err_str = \"%s is missing \" % name\n if required_by is not None:\n err_str += \", required by %s\" % required_by\n parser.error(err_str)\n\ndef required_opts_multi_ifo(opt, parser, ifo, opt_list, required_by=None):\n \"\"\"Check that all the opts are defined\n\n Parameters\n ----------\n opt : object\n Result of option parsing\n parser : object\n OptionParser instance.\n ifo : string\n opt_list : list of strings\n required_by : string, optional\n the option that requires these options (if applicable)\n \"\"\"\n for name in opt_list:\n attr = name[2:].replace('-', '_')\n try:\n if getattr(opt, attr)[ifo] is None:\n raise KeyError\n except KeyError:\n err_str = \"%s is missing \" % name\n if required_by is not None:\n err_str += \", required by %s\" % required_by\n parser.error(err_str)\n\ndef ensure_one_opt(opt, parser, opt_list):\n \"\"\" Check that one and only one in the opt_list is defined in opt\n\n Parameters\n ----------\n opt : object\n Result of option parsing\n parser : object\n OptionParser instance.\n opt_list : list of strings\n \"\"\"\n\n the_one = None\n for name in opt_list:\n attr = name[2:].replace('-', '_')\n if hasattr(opt, attr) and (getattr(opt, attr) is not None):\n if 
the_one is None:\n the_one = name\n else:\n parser.error(\"%s and %s are mutually exculsive\" \\\n % (the_one, name))\n\n if the_one is None:\n parser.error(\"you must supply one of the following %s\" \\\n % (', '.join(opt_list)))\n\ndef ensure_one_opt_multi_ifo(opt, parser, ifo, opt_list):\n \"\"\" Check that one and only one in the opt_list is defined in opt\n\n Parameters\n ----------\n opt : object\n Result of option parsing\n parser : object\n OptionParser instance.\n opt_list : list of strings\n \"\"\"\n\n the_one = None\n for name in opt_list:\n attr = name[2:].replace('-', '_')\n try:\n if getattr(opt, attr)[ifo] is None:\n raise KeyError\n except KeyError:\n pass\n else:\n if the_one is None:\n the_one = name\n else:\n parser.error(\"%s and %s are mutually exculsive\" \\\n % (the_one, name))\n\n if the_one is None:\n parser.error(\"you must supply one of the following %s\" \\\n % (', '.join(opt_list)))\n\ndef copy_opts_for_single_ifo(opt, ifo):\n \"\"\"\n Takes the namespace object (opt) from the multi-detector interface and\n returns a namespace object for a single ifo that can be used with\n functions expecting output from the single-detector interface.\n \"\"\"\n opt = copy.deepcopy(opt)\n for arg, val in vars(opt).items():\n if isinstance(val, DictWithDefaultReturn):\n setattr(opt, arg, getattr(opt, arg)[ifo])\n return opt\n\ndef convert_to_process_params_dict(opt):\n \"\"\"\n Takes the namespace object (opt) from the multi-detector interface and\n returns a dictionary of command line options that will be handled correctly\n by the register_to_process_params ligolw function.\n \"\"\"\n opt = copy.deepcopy(opt)\n for arg, val in vars(opt).items():\n if isinstance(val, DictWithDefaultReturn):\n new_val = []\n for key in val.keys():\n if isinstance(val[key], list):\n for item in val[key]:\n if item is not None:\n new_val.append(':'.join([key, str(item)]))\n else:\n if val[key] is not None:\n new_val.append(':'.join([key, str(val[key])]))\n setattr(opt, arg, new_val)\n return vars(opt)\n\ndef positive_float(s):\n \"\"\"\n Ensure argument is a positive real number and return it as float.\n\n To be used as type in argparse arguments.\n \"\"\"\n err_msg = \"must be a positive number, not %r\" % s\n try:\n value = float(s)\n except ValueError:\n raise argparse.ArgumentTypeError(err_msg)\n if value <= 0:\n raise argparse.ArgumentTypeError(err_msg)\n return value\n\ndef nonnegative_float(s):\n \"\"\"\n Ensure argument is a positive real number or zero and return it as float.\n\n To be used as type in argparse arguments.\n \"\"\"\n err_msg = \"must be either positive or zero, not %r\" % s\n try:\n value = float(s)\n except ValueError:\n raise argparse.ArgumentTypeError(err_msg)\n if value < 0:\n raise argparse.ArgumentTypeError(err_msg)\n return value\n", "path": "pycbc/types/optparse.py" } ]
[ { "content": "# Copyright (C) 2015 Ian Harry\n#\n# This program is free software; you can redistribute it and/or modify it\n# under the terms of the GNU General Public License as published by the\n# Free Software Foundation; either version 3 of the License, or (at your\n# option) any later version.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Generals\n# Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along\n# with this program; if not, write to the Free Software Foundation, Inc.,\n# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nThis modules contains extensions for use with argparse\n\"\"\"\nimport copy\nimport argparse\nfrom collections import defaultdict\n\nclass DictWithDefaultReturn(defaultdict):\n default_set = False\n ifo_set = False\n def __bool__(self):\n if self.items() and not all(entry is None for entry in self.values()):\n # True if any values are explictly set.\n return True\n elif self['RANDOM_STRING_314324'] is not None:\n # Or true if the default value was set\n # NOTE: This stores the string RANDOM_STRING_314324 in the dict\n # so subsequent calls will be caught in the first test here.\n return True\n else:\n # Else false\n return False\n # Python 2 and 3 have different conventions for boolean method\n __nonzero__ = __bool__\n\nclass MultiDetOptionAction(argparse.Action):\n # Initialise the same as the standard 'append' action\n def __init__(self,\n option_strings,\n dest,\n nargs='+',\n const=None,\n default=None,\n type=None,\n choices=None,\n required=False,\n help=None,\n metavar=None):\n if type is not None:\n self.internal_type = type\n else:\n self.internal_type = str\n new_default = DictWithDefaultReturn(lambda: default)\n #new_default.default_value=default\n if nargs == 0:\n raise ValueError('nargs for append actions must be > 0; if arg '\n 'strings are not supplying the value to append, '\n 'the append const action may be more appropriate')\n if const is not None and nargs != argparse.OPTIONAL:\n raise ValueError('nargs must be %r to supply const'\n % argparse.OPTIONAL)\n super(MultiDetOptionAction, self).__init__(\n option_strings=option_strings,\n dest=dest,\n nargs=nargs,\n const=const,\n default=new_default,\n type=str,\n choices=choices,\n required=required,\n help=help,\n metavar=metavar)\n\n def __call__(self, parser, namespace, values, option_string=None):\n # Again this is modified from the standard argparse 'append' action\n err_msg = \"Issue with option: %s \\n\" %(self.dest,)\n err_msg += \"Received value: %s \\n\" %(' '.join(values),)\n if getattr(namespace, self.dest, None) is None:\n setattr(namespace, self.dest, DictWithDefaultReturn())\n items = getattr(namespace, self.dest)\n items = copy.copy(items)\n for value in values:\n value = value.split(':')\n if len(value) == 2:\n # \"Normal\" case, all ifos supplied independently as \"H1:VALUE\"\n if items.default_set:\n err_msg += \"If you are supplying a value for all ifos, you \"\n err_msg += \"cannot also supply values for specific ifos.\"\n raise ValueError(err_msg)\n items[value[0]] = self.internal_type(value[1])\n items.ifo_set = True\n elif len(value) == 1:\n # OR supply only one value and use this for all ifos\n if items.default_set:\n err_msg += \"If you are supplying a value for all ifos, you \"\n err_msg += \"must only supply one value.\"\n raise 
ValueError(err_msg)\n # Can't use a global and ifo specific options\n if items.ifo_set:\n err_msg += \"If you are supplying a value for all ifos, you \"\n err_msg += \"cannot also supply values for specific ifos.\"\n raise ValueError(err_msg)\n #items.default_value = self.internal_type(value[0])\n new_default = self.internal_type(value[0])\n items.default_factory = lambda: new_default\n items.default_set = True\n else:\n err_msg += \"The character ':' is used to deliminate the \"\n err_msg += \"ifo and the value. Please do not use it more than \"\n err_msg += \"once.\"\n raise ValueError(err_msg)\n setattr(namespace, self.dest, items)\n\nclass MultiDetOptionActionSpecial(MultiDetOptionAction):\n \"\"\"\n This class in an extension of the MultiDetOptionAction class to handle\n cases where the : is already a special character. For example the channel\n name is something like H1:CHANNEL_NAME. Here the channel name *must*\n be provided uniquely for each ifo. The dictionary key is set to H1 and the\n value to H1:CHANNEL_NAME for this example.\n \"\"\"\n def __call__(self, parser, namespace, values, option_string=None):\n # Again this is modified from the standard argparse 'append' action\n err_msg = \"Issue with option: %s \\n\" %(self.dest,)\n err_msg += \"Received value: %s \\n\" %(' '.join(values),)\n if getattr(namespace, self.dest, None) is None:\n setattr(namespace, self.dest, {})\n items = getattr(namespace, self.dest)\n items = copy.copy(items)\n for value in values:\n value_split = value.split(':')\n if len(value_split) == 2:\n # \"Normal\" case, all ifos supplied independently as \"H1:VALUE\"\n if value_split[0] in items:\n err_msg += \"Multiple values supplied for ifo %s.\\n\" \\\n %(value_split[0],)\n err_msg += \"Already have %s.\" %(items[value_split[0]])\n raise ValueError(err_msg)\n else:\n items[value_split[0]] = value\n elif len(value_split) == 3:\n # This is an unadvertised feature. It is used for cases where I\n # want to pretend H1 data is actually L1 (or similar). So if I\n # supply --channel-name H1:L1:LDAS-STRAIN I can use L1 data and\n # pretend it is H1 internally.\n if value_split[0] in items:\n err_msg += \"Multiple values supplied for ifo %s.\\n\" \\\n %(value_split[0],)\n err_msg += \"Already have %s.\" %(items[value_split[0]])\n raise ValueError(err_msg)\n else:\n items[value_split[0]] = ':'.join(value_split[1:3])\n else:\n err_msg += \"The character ':' is used to deliminate the \"\n err_msg += \"ifo and the value. It must appear exactly \"\n err_msg += \"once.\"\n raise ValueError(err_msg)\n setattr(namespace, self.dest, items)\n\nclass MultiDetMultiColonOptionAction(MultiDetOptionAction):\n \"\"\"A special case of `MultiDetOptionAction` which allows one to use\n arguments containing colons, such as `V1:FOOBAR:1`. The first colon is\n assumed to be the separator between the detector and the argument.\n All subsequent colons are kept as part of the argument. 
Unlike\n `MultiDetOptionAction`, all arguments must be prefixed by the\n corresponding detector.\n \"\"\"\n def __call__(self, parser, namespace, values, option_string=None):\n err_msg = ('Issue with option: {}\\n'\n 'Received value: {}\\n').format(self.dest, ' '.join(values))\n if getattr(namespace, self.dest, None) is None:\n setattr(namespace, self.dest, {})\n items = copy.copy(getattr(namespace, self.dest))\n for value in values:\n if ':' not in value:\n err_msg += (\"Each argument must contain at least one ':' \"\n \"character\")\n raise ValueError(err_msg)\n detector, argument = value.split(':', 1)\n if detector in items:\n err_msg += ('Multiple values supplied for detector {},\\n'\n 'already have {}.')\n err_msg = err_msg.format(detector, items[detector])\n raise ValueError(err_msg)\n items[detector] = self.internal_type(argument)\n setattr(namespace, self.dest, items)\n\nclass MultiDetOptionAppendAction(MultiDetOptionAction):\n def __call__(self, parser, namespace, values, option_string=None):\n # Again this is modified from the standard argparse 'append' action\n if getattr(namespace, self.dest, None) is None:\n setattr(namespace, self.dest, {})\n items = getattr(namespace, self.dest)\n items = copy.copy(items)\n for value in values:\n value = value.split(':')\n if len(value) == 2:\n # \"Normal\" case, all ifos supplied independetly as \"H1:VALUE\"\n if value[0] in items:\n items[value[0]].append(self.internal_type(value[1]))\n else:\n items[value[0]] = [self.internal_type(value[1])]\n else:\n err_msg = \"Issue with option: %s \\n\" %(self.dest,)\n err_msg += \"Received value: %s \\n\" %(' '.join(values),)\n err_msg += \"The character ':' is used to distinguish the \"\n err_msg += \"ifo and the value. It must be given exactly once \"\n err_msg += \"for all entries\"\n raise ValueError(err_msg)\n setattr(namespace, self.dest, items)\n\ndef required_opts(opt, parser, opt_list, required_by=None):\n \"\"\"Check that all the opts are defined\n\n Parameters\n ----------\n opt : object\n Result of option parsing\n parser : object\n OptionParser instance.\n opt_list : list of strings\n required_by : string, optional\n the option that requires these options (if applicable)\n \"\"\"\n for name in opt_list:\n attr = name[2:].replace('-', '_')\n if not hasattr(opt, attr) or (getattr(opt, attr) is None):\n err_str = \"%s is missing \" % name\n if required_by is not None:\n err_str += \", required by %s\" % required_by\n parser.error(err_str)\n\ndef required_opts_multi_ifo(opt, parser, ifo, opt_list, required_by=None):\n \"\"\"Check that all the opts are defined\n\n Parameters\n ----------\n opt : object\n Result of option parsing\n parser : object\n OptionParser instance.\n ifo : string\n opt_list : list of strings\n required_by : string, optional\n the option that requires these options (if applicable)\n \"\"\"\n for name in opt_list:\n attr = name[2:].replace('-', '_')\n try:\n if getattr(opt, attr)[ifo] is None:\n raise KeyError\n except KeyError:\n err_str = \"%s is missing \" % name\n if required_by is not None:\n err_str += \", required by %s\" % required_by\n parser.error(err_str)\n\ndef ensure_one_opt(opt, parser, opt_list):\n \"\"\" Check that one and only one in the opt_list is defined in opt\n\n Parameters\n ----------\n opt : object\n Result of option parsing\n parser : object\n OptionParser instance.\n opt_list : list of strings\n \"\"\"\n\n the_one = None\n for name in opt_list:\n attr = name[2:].replace('-', '_')\n if hasattr(opt, attr) and (getattr(opt, attr) is not None):\n if 
the_one is None:\n the_one = name\n else:\n parser.error(\"%s and %s are mutually exculsive\" \\\n % (the_one, name))\n\n if the_one is None:\n parser.error(\"you must supply one of the following %s\" \\\n % (', '.join(opt_list)))\n\ndef ensure_one_opt_multi_ifo(opt, parser, ifo, opt_list):\n \"\"\" Check that one and only one in the opt_list is defined in opt\n\n Parameters\n ----------\n opt : object\n Result of option parsing\n parser : object\n OptionParser instance.\n opt_list : list of strings\n \"\"\"\n\n the_one = None\n for name in opt_list:\n attr = name[2:].replace('-', '_')\n try:\n if getattr(opt, attr)[ifo] is None:\n raise KeyError\n except KeyError:\n pass\n else:\n if the_one is None:\n the_one = name\n else:\n parser.error(\"%s and %s are mutually exculsive\" \\\n % (the_one, name))\n\n if the_one is None:\n parser.error(\"you must supply one of the following %s\" \\\n % (', '.join(opt_list)))\n\ndef copy_opts_for_single_ifo(opt, ifo):\n \"\"\"\n Takes the namespace object (opt) from the multi-detector interface and\n returns a namespace object for a single ifo that can be used with\n functions expecting output from the single-detector interface.\n \"\"\"\n opt = copy.deepcopy(opt)\n for arg, val in vars(opt).items():\n if isinstance(val, DictWithDefaultReturn):\n setattr(opt, arg, getattr(opt, arg)[ifo])\n return opt\n\ndef convert_to_process_params_dict(opt):\n \"\"\"\n Takes the namespace object (opt) from the multi-detector interface and\n returns a dictionary of command line options that will be handled correctly\n by the register_to_process_params ligolw function.\n \"\"\"\n opt = copy.deepcopy(opt)\n for arg, val in vars(opt).items():\n if isinstance(val, DictWithDefaultReturn):\n new_val = []\n for key in val.keys():\n if isinstance(val[key], list):\n for item in val[key]:\n if item is not None:\n new_val.append(':'.join([key, str(item)]))\n else:\n if val[key] is not None:\n new_val.append(':'.join([key, str(val[key])]))\n setattr(opt, arg, new_val)\n return vars(opt)\n\ndef positive_float(s):\n \"\"\"\n Ensure argument is a positive real number and return it as float.\n\n To be used as type in argparse arguments.\n \"\"\"\n err_msg = \"must be a positive number, not %r\" % s\n try:\n value = float(s)\n except ValueError:\n raise argparse.ArgumentTypeError(err_msg)\n if value <= 0:\n raise argparse.ArgumentTypeError(err_msg)\n return value\n\ndef nonnegative_float(s):\n \"\"\"\n Ensure argument is a positive real number or zero and return it as float.\n\n To be used as type in argparse arguments.\n \"\"\"\n err_msg = \"must be either positive or zero, not %r\" % s\n try:\n value = float(s)\n except ValueError:\n raise argparse.ArgumentTypeError(err_msg)\n if value < 0:\n raise argparse.ArgumentTypeError(err_msg)\n return value\n", "path": "pycbc/types/optparse.py" } ]
diff --git a/pycbc/types/optparse.py b/pycbc/types/optparse.py
index c8299bcfcc2..1ff934c827e 100644
--- a/pycbc/types/optparse.py
+++ b/pycbc/types/optparse.py
@@ -43,7 +43,7 @@ class MultiDetOptionAction(argparse.Action):
     def __init__(self,
                  option_strings,
                  dest,
-                 nargs=None,
+                 nargs='+',
                  const=None,
                  default=None,
                  type=None,
scikit-hep__pyhf-941
use short URL for better help message

The current help message has a long URL, but it is wrapped across line breaks, which makes it hard to copy.

```
pyhf cls --help
Usage: pyhf cls [OPTIONS] [WORKSPACE]

  Compute CLs value(s) for a given pyhf workspace.

  Example:

  .. code-block:: shell

      $ curl -sL https://raw.githubusercontent.com/scikit-
  hep/pyhf/master/docs/examples/json/2-bin_1-channel.json | pyhf cls
      {
          "CLs_exp": [
              0.07807427911686156,
              0.17472571775474618,
              0.35998495263681285,
              0.6343568235898907,
              0.8809947004472013
          ],
          "CLs_obs": 0.3599845631401915
      }

Options:
  --output-file TEXT              The location of the output json file. If not
                                  specified, prints to screen.
  --measurement TEXT
  -p, --patch TEXT
  --testpoi FLOAT
  --teststat [q|qtilde]
  --backend [numpy|pytorch|tensorflow|jax|np|torch|tf]
                                  The tensor backend used for the calculation.
  --optimizer TEXT
  --optconf EQUAL-DELIMITED OPTION
  -h, --help                      Show this message and exit.
```
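The eventual fix (see the diff below) shortens the URL and marks the example block with click's `\b` escape so it is not rewrapped. A minimal sketch of that click behaviour, using a toy command rather than the real pyhf CLI:

```python
import click
from click.testing import CliRunner

@click.command()
def cls():
    """Compute CLs value(s) for a given pyhf workspace.

    Example:

    \b
        $ curl -sL https://git.io/JJYDE | pyhf cls
    """

# The paragraph after the lone \b line is printed verbatim in --help output,
# so the one-line curl command stays on a single, copyable line.
print(CliRunner().invoke(cls, ['--help']).output)
```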
[ { "content": "\"\"\"The inference CLI group.\"\"\"\nimport logging\n\nimport click\nimport json\n\nfrom ..utils import EqDelimStringParamType\nfrom ..infer import hypotest\nfrom ..workspace import Workspace\nfrom .. import tensor, get_backend, set_backend, optimize\n\nlog = logging.getLogger(__name__)\n\n\[email protected](name='infer')\ndef cli():\n \"\"\"Infererence CLI group.\"\"\"\n\n\[email protected]()\[email protected]('workspace', default='-')\[email protected](\n '--output-file',\n help='The location of the output json file. If not specified, prints to screen.',\n default=None,\n)\[email protected]('--measurement', default=None)\[email protected]('-p', '--patch', multiple=True)\[email protected]('--testpoi', default=1.0)\[email protected]('--teststat', type=click.Choice(['q', 'qtilde']), default='qtilde')\[email protected](\n '--backend',\n type=click.Choice(['numpy', 'pytorch', 'tensorflow', 'jax', 'np', 'torch', 'tf']),\n help='The tensor backend used for the calculation.',\n default='numpy',\n)\[email protected]('--optimizer')\[email protected]('--optconf', type=EqDelimStringParamType(), multiple=True)\ndef cls(\n workspace,\n output_file,\n measurement,\n patch,\n testpoi,\n teststat,\n backend,\n optimizer,\n optconf,\n):\n \"\"\"\n Compute CLs value(s) for a given pyhf workspace.\n\n Example:\n\n .. code-block:: shell\n\n $ curl -sL https://raw.githubusercontent.com/scikit-hep/pyhf/master/docs/examples/json/2-bin_1-channel.json | pyhf cls\n {\n \"CLs_exp\": [\n 0.07807427911686156,\n 0.17472571775474618,\n 0.35998495263681285,\n 0.6343568235898907,\n 0.8809947004472013\n ],\n \"CLs_obs\": 0.3599845631401915\n }\n \"\"\"\n with click.open_file(workspace, 'r') as specstream:\n spec = json.load(specstream)\n\n ws = Workspace(spec)\n\n is_qtilde = teststat == 'qtilde'\n\n patches = [json.loads(click.open_file(pfile, 'r').read()) for pfile in patch]\n model = ws.model(\n measurement_name=measurement,\n patches=patches,\n modifier_settings={\n 'normsys': {'interpcode': 'code4'},\n 'histosys': {'interpcode': 'code4p'},\n },\n )\n\n # set the backend if not NumPy\n if backend in ['pytorch', 'torch']:\n set_backend(tensor.pytorch_backend(precision='64b'))\n elif backend in ['tensorflow', 'tf']:\n set_backend(tensor.tensorflow_backend(precision='64b'))\n elif backend in ['jax']:\n set_backend(tensor.jax_backend())\n tensorlib, _ = get_backend()\n\n optconf = {k: v for item in optconf for k, v in item.items()}\n\n # set the new optimizer\n if optimizer:\n new_optimizer = getattr(optimize, optimizer)\n set_backend(tensorlib, new_optimizer(**optconf))\n\n result = hypotest(\n testpoi, ws.data(model), model, qtilde=is_qtilde, return_expected_set=True\n )\n result = {\n 'CLs_obs': tensorlib.tolist(result[0])[0],\n 'CLs_exp': tensorlib.tolist(tensorlib.reshape(result[-1], [-1])),\n }\n\n if output_file is None:\n click.echo(json.dumps(result, indent=4, sort_keys=True))\n else:\n with open(output_file, 'w+') as out_file:\n json.dump(result, out_file, indent=4, sort_keys=True)\n log.debug(\"Written to {0:s}\".format(output_file))\n", "path": "src/pyhf/cli/infer.py" } ]
[ { "content": "\"\"\"The inference CLI group.\"\"\"\nimport logging\n\nimport click\nimport json\n\nfrom ..utils import EqDelimStringParamType\nfrom ..infer import hypotest\nfrom ..workspace import Workspace\nfrom .. import tensor, get_backend, set_backend, optimize\n\nlog = logging.getLogger(__name__)\n\n\[email protected](name='infer')\ndef cli():\n \"\"\"Infererence CLI group.\"\"\"\n\n\[email protected]()\[email protected]('workspace', default='-')\[email protected](\n '--output-file',\n help='The location of the output json file. If not specified, prints to screen.',\n default=None,\n)\[email protected]('--measurement', default=None)\[email protected]('-p', '--patch', multiple=True)\[email protected]('--testpoi', default=1.0)\[email protected]('--teststat', type=click.Choice(['q', 'qtilde']), default='qtilde')\[email protected](\n '--backend',\n type=click.Choice(['numpy', 'pytorch', 'tensorflow', 'jax', 'np', 'torch', 'tf']),\n help='The tensor backend used for the calculation.',\n default='numpy',\n)\[email protected]('--optimizer')\[email protected]('--optconf', type=EqDelimStringParamType(), multiple=True)\ndef cls(\n workspace,\n output_file,\n measurement,\n patch,\n testpoi,\n teststat,\n backend,\n optimizer,\n optconf,\n):\n \"\"\"\n Compute CLs value(s) for a given pyhf workspace.\n\n Example:\n\n .. code-block:: shell\n\n $ curl -sL https://git.io/JJYDE | pyhf cls\n\n \\b\n {\n \"CLs_exp\": [\n 0.07807427911686156,\n 0.17472571775474618,\n 0.35998495263681285,\n 0.6343568235898907,\n 0.8809947004472013\n ],\n \"CLs_obs\": 0.3599845631401915\n }\n \"\"\"\n with click.open_file(workspace, 'r') as specstream:\n spec = json.load(specstream)\n\n ws = Workspace(spec)\n\n is_qtilde = teststat == 'qtilde'\n\n patches = [json.loads(click.open_file(pfile, 'r').read()) for pfile in patch]\n model = ws.model(\n measurement_name=measurement,\n patches=patches,\n modifier_settings={\n 'normsys': {'interpcode': 'code4'},\n 'histosys': {'interpcode': 'code4p'},\n },\n )\n\n # set the backend if not NumPy\n if backend in ['pytorch', 'torch']:\n set_backend(tensor.pytorch_backend(precision='64b'))\n elif backend in ['tensorflow', 'tf']:\n set_backend(tensor.tensorflow_backend(precision='64b'))\n elif backend in ['jax']:\n set_backend(tensor.jax_backend())\n tensorlib, _ = get_backend()\n\n optconf = {k: v for item in optconf for k, v in item.items()}\n\n # set the new optimizer\n if optimizer:\n new_optimizer = getattr(optimize, optimizer)\n set_backend(tensorlib, new_optimizer(**optconf))\n\n result = hypotest(\n testpoi, ws.data(model), model, qtilde=is_qtilde, return_expected_set=True\n )\n result = {\n 'CLs_obs': tensorlib.tolist(result[0])[0],\n 'CLs_exp': tensorlib.tolist(tensorlib.reshape(result[-1], [-1])),\n }\n\n if output_file is None:\n click.echo(json.dumps(result, indent=4, sort_keys=True))\n else:\n with open(output_file, 'w+') as out_file:\n json.dump(result, out_file, indent=4, sort_keys=True)\n log.debug(\"Written to {0:s}\".format(output_file))\n", "path": "src/pyhf/cli/infer.py" } ]
diff --git a/src/pyhf/cli/infer.py b/src/pyhf/cli/infer.py index f6381131b0..db7f624068 100644 --- a/src/pyhf/cli/infer.py +++ b/src/pyhf/cli/infer.py @@ -54,7 +54,9 @@ def cls( .. code-block:: shell - $ curl -sL https://raw.githubusercontent.com/scikit-hep/pyhf/master/docs/examples/json/2-bin_1-channel.json | pyhf cls + $ curl -sL https://git.io/JJYDE | pyhf cls + + \b { "CLs_exp": [ 0.07807427911686156,
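For context on the `\b` marker added in the diff above: Click rewraps docstring paragraphs when rendering `--help`, which is what broke the long URL across lines in the first place. A paragraph preceded by a line containing only the `\b` escape marker is printed verbatim instead of being rewrapped. A minimal standalone sketch follows (illustrative only; the command name and docstring are not from the pyhf source):

```python
# Minimal sketch of Click's ``\b`` escape marker (illustrative, not pyhf code).
# The paragraph after the lone ``\b`` line is emitted verbatim by the help
# formatter instead of being rewrapped.
import click


@click.command()
def demo():
    """Show a JSON example in ``--help`` output.

    \b
    {
        "CLs_obs": 0.3599845631401915
    }
    """


if __name__ == "__main__":
    demo()  # run with: python demo.py --help
```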
roboflow__supervision-901
[Detections] - `from_inference` should include `'class_name'` key in `Detections.data` even if result is empty

### Bug

[`from_inference`](https://github.com/roboflow/supervision/blob/0ccb0b85adee4202f5fe96834a374a057bbbd9da/supervision/detection/core.py#L448) should include `'class_name'` key in `Detections.data` even if `roboflow` inference result is empty.

### Environment

- Latest version of supervision
- macOS

### Minimal Reproducible Example

```python
import cv2
import roboflow
import numpy as np
import supervision as sv
from tempfile import NamedTemporaryFile

roboflow.login()

rf = roboflow.Roboflow()
project = rf.workspace().project("people-detection-general")
model = project.version(5).model

x = np.zeros((1000, 1000, 3), dtype=np.uint8)

with NamedTemporaryFile(suffix=".jpeg") as f:
    cv2.imwrite(f.name, x)
    result = model.predict(f.name).json()
    detections = sv.Detections.from_inference(result)
    print(detections)
```

Here is the result:

```
Detections(xyxy=array([], shape=(0, 4), dtype=float32), mask=None, confidence=array([], dtype=float32), class_id=array([], dtype=int64), tracker_id=None, data={})
```

Note `data={}` and it should be `data={'class_name': []}`.

### Additional

- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻
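A minimal sketch of the expected behaviour described in the report, without the hosted model call. The payload shape used here (an empty `"predictions"` list plus `"image"` metadata) is an assumption about the Roboflow API response format, not taken from the issue itself:

```python
# Hypothetical check for the expected behaviour: even with zero predictions,
# `from_inference` should still expose the 'class_name' key in `data`.
# The payload below is an assumed approximation of an empty Roboflow result.
import supervision as sv

empty_result = {
    "predictions": [],
    "image": {"width": 1000, "height": 1000},
}

detections = sv.Detections.from_inference(empty_result)

assert len(detections) == 0
assert "class_name" in detections.data  # fails before the fix: data == {}
```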
[ { "content": "from __future__ import annotations\n\nfrom contextlib import suppress\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, Iterator, List, Optional, Tuple, Union\n\nimport numpy as np\n\nfrom supervision.config import CLASS_NAME_DATA_FIELD, ORIENTED_BOX_COORDINATES\nfrom supervision.detection.utils import (\n box_non_max_suppression,\n calculate_masks_centroids,\n extract_ultralytics_masks,\n get_data_item,\n is_data_equal,\n mask_non_max_suppression,\n merge_data,\n process_roboflow_result,\n validate_detections_fields,\n xywh_to_xyxy,\n)\nfrom supervision.geometry.core import Position\nfrom supervision.utils.internal import deprecated\n\n\n@dataclass\nclass Detections:\n \"\"\"\n The `sv.Detections` allows you to convert results from a variety of object detection\n and segmentation models into a single, unified format. The `sv.Detections` class\n enables easy data manipulation and filtering, and provides a consistent API for\n Supervision's tools like trackers, annotators, and zones.\n\n ```python\n import cv2\n import supervision as sv\n from ultralytics import YOLO\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = YOLO('yolov8s.pt')\n annotator = sv.BoundingBoxAnnotator()\n\n result = model(image)[0]\n detections = sv.Detections.from_ultralytics(result)\n\n annotated_image = annotator.annotate(image, detections)\n ```\n\n !!! tip\n\n In `sv.Detections`, detection data is categorized into two main field types:\n fixed and custom. The fixed fields include `xyxy`, `mask`, `confidence`,\n `class_id`, and `tracker_id`. For any additional data requirements, custom\n fields come into play, stored in the data field. These custom fields are easily\n accessible using the `detections[<FIELD_NAME>]` syntax, providing flexibility\n for diverse data handling needs.\n\n Attributes:\n xyxy (np.ndarray): An array of shape `(n, 4)` containing\n the bounding boxes coordinates in format `[x1, y1, x2, y2]`\n mask: (Optional[np.ndarray]): An array of shape\n `(n, H, W)` containing the segmentation masks.\n confidence (Optional[np.ndarray]): An array of shape\n `(n,)` containing the confidence scores of the detections.\n class_id (Optional[np.ndarray]): An array of shape\n `(n,)` containing the class ids of the detections.\n tracker_id (Optional[np.ndarray]): An array of shape\n `(n,)` containing the tracker ids of the detections.\n data (Dict[str, Union[np.ndarray, List]]): A dictionary containing additional\n data where each key is a string representing the data type, and the value\n is either a NumPy array or a list of corresponding data.\n\n !!! warning\n\n The `data` field in the `sv.Detections` class is currently in an experimental\n phase. 
Please be aware that its API and functionality are subject to change in\n future updates as we continue to refine and improve its capabilities.\n We encourage users to experiment with this feature and provide feedback, but\n also to be prepared for potential modifications in upcoming releases.\n \"\"\"\n\n xyxy: np.ndarray\n mask: Optional[np.ndarray] = None\n confidence: Optional[np.ndarray] = None\n class_id: Optional[np.ndarray] = None\n tracker_id: Optional[np.ndarray] = None\n data: Dict[str, Union[np.ndarray, List]] = field(default_factory=dict)\n\n def __post_init__(self):\n validate_detections_fields(\n xyxy=self.xyxy,\n mask=self.mask,\n confidence=self.confidence,\n class_id=self.class_id,\n tracker_id=self.tracker_id,\n data=self.data,\n )\n\n def __len__(self):\n \"\"\"\n Returns the number of detections in the Detections object.\n \"\"\"\n return len(self.xyxy)\n\n def __iter__(\n self,\n ) -> Iterator[\n Tuple[\n np.ndarray,\n Optional[np.ndarray],\n Optional[float],\n Optional[int],\n Optional[int],\n Dict[str, Union[np.ndarray, List]],\n ]\n ]:\n \"\"\"\n Iterates over the Detections object and yield a tuple of\n `(xyxy, mask, confidence, class_id, tracker_id, data)` for each detection.\n \"\"\"\n for i in range(len(self.xyxy)):\n yield (\n self.xyxy[i],\n self.mask[i] if self.mask is not None else None,\n self.confidence[i] if self.confidence is not None else None,\n self.class_id[i] if self.class_id is not None else None,\n self.tracker_id[i] if self.tracker_id is not None else None,\n get_data_item(self.data, i),\n )\n\n def __eq__(self, other: Detections):\n return all(\n [\n np.array_equal(self.xyxy, other.xyxy),\n np.array_equal(self.mask, other.mask),\n np.array_equal(self.class_id, other.class_id),\n np.array_equal(self.confidence, other.confidence),\n np.array_equal(self.tracker_id, other.tracker_id),\n is_data_equal(self.data, other.data),\n ]\n )\n\n @classmethod\n def from_yolov5(cls, yolov5_results) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [YOLOv5](https://github.com/ultralytics/yolov5) inference result.\n\n Args:\n yolov5_results (yolov5.models.common.Detections):\n The output Detections instance from YOLOv5\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import cv2\n import torch\n import supervision as sv\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = torch.hub.load('ultralytics/yolov5', 'yolov5s')\n result = model(image)\n detections = sv.Detections.from_yolov5(result)\n ```\n \"\"\"\n yolov5_detections_predictions = yolov5_results.pred[0].cpu().cpu().numpy()\n\n return cls(\n xyxy=yolov5_detections_predictions[:, :4],\n confidence=yolov5_detections_predictions[:, 4],\n class_id=yolov5_detections_predictions[:, 5].astype(int),\n )\n\n @classmethod\n def from_ultralytics(cls, ultralytics_results) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [YOLOv8](https://github.com/ultralytics/ultralytics) inference result.\n\n !!! 
Note\n\n `from_ultralytics` is compatible with\n [detection](https://docs.ultralytics.com/tasks/detect/),\n [segmentation](https://docs.ultralytics.com/tasks/segment/), and\n [OBB](https://docs.ultralytics.com/tasks/obb/) models.\n\n Args:\n ultralytics_results (ultralytics.yolo.engine.results.Results):\n The output Results instance from YOLOv8\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import cv2\n import supervision as sv\n from ultralytics import YOLO\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = YOLO('yolov8s.pt')\n\n result = model(image)[0]\n detections = sv.Detections.from_ultralytics(result)\n ```\n \"\"\" # noqa: E501 // docs\n\n if ultralytics_results.obb is not None:\n class_id = ultralytics_results.obb.cls.cpu().numpy().astype(int)\n class_names = np.array([ultralytics_results.names[i] for i in class_id])\n oriented_box_coordinates = ultralytics_results.obb.xyxyxyxy.cpu().numpy()\n return cls(\n xyxy=ultralytics_results.obb.xyxy.cpu().numpy(),\n confidence=ultralytics_results.obb.conf.cpu().numpy(),\n class_id=class_id,\n tracker_id=ultralytics_results.obb.id.int().cpu().numpy()\n if ultralytics_results.obb.id is not None\n else None,\n data={\n ORIENTED_BOX_COORDINATES: oriented_box_coordinates,\n CLASS_NAME_DATA_FIELD: class_names,\n },\n )\n\n class_id = ultralytics_results.boxes.cls.cpu().numpy().astype(int)\n class_names = np.array([ultralytics_results.names[i] for i in class_id])\n return cls(\n xyxy=ultralytics_results.boxes.xyxy.cpu().numpy(),\n confidence=ultralytics_results.boxes.conf.cpu().numpy(),\n class_id=class_id,\n mask=extract_ultralytics_masks(ultralytics_results),\n tracker_id=ultralytics_results.boxes.id.int().cpu().numpy()\n if ultralytics_results.boxes.id is not None\n else None,\n data={CLASS_NAME_DATA_FIELD: class_names},\n )\n\n @classmethod\n def from_yolo_nas(cls, yolo_nas_results) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md)\n inference result.\n\n Args:\n yolo_nas_results (ImageDetectionPrediction):\n The output Results instance from YOLO-NAS\n ImageDetectionPrediction is coming from\n 'super_gradients.training.models.prediction_results'\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import cv2\n from super_gradients.training import models\n import supervision as sv\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = models.get('yolo_nas_l', pretrained_weights=\"coco\")\n\n result = list(model.predict(image, conf=0.35))[0]\n detections = sv.Detections.from_yolo_nas(result)\n ```\n \"\"\"\n if np.asarray(yolo_nas_results.prediction.bboxes_xyxy).shape[0] == 0:\n return cls.empty()\n\n return cls(\n xyxy=yolo_nas_results.prediction.bboxes_xyxy,\n confidence=yolo_nas_results.prediction.confidence,\n class_id=yolo_nas_results.prediction.labels.astype(int),\n )\n\n @classmethod\n def from_tensorflow(\n cls, tensorflow_results: dict, resolution_wh: tuple\n ) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [Tensorflow Hub](https://www.tensorflow.org/hub/tutorials/tf2_object_detection)\n inference result.\n\n Args:\n tensorflow_results (dict):\n The output results from Tensorflow Hub.\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import tensorflow as tf\n import tensorflow_hub as hub\n import numpy as np\n import cv2\n\n module_handle = \"https://tfhub.dev/tensorflow/centernet/hourglass_512x512_kpts/1\"\n model = 
hub.load(module_handle)\n img = np.array(cv2.imread(SOURCE_IMAGE_PATH))\n result = model(img)\n detections = sv.Detections.from_tensorflow(result)\n ```\n \"\"\" # noqa: E501 // docs\n\n boxes = tensorflow_results[\"detection_boxes\"][0].numpy()\n boxes[:, [0, 2]] *= resolution_wh[0]\n boxes[:, [1, 3]] *= resolution_wh[1]\n boxes = boxes[:, [1, 0, 3, 2]]\n return cls(\n xyxy=boxes,\n confidence=tensorflow_results[\"detection_scores\"][0].numpy(),\n class_id=tensorflow_results[\"detection_classes\"][0].numpy().astype(int),\n )\n\n @classmethod\n def from_deepsparse(cls, deepsparse_results) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [DeepSparse](https://github.com/neuralmagic/deepsparse)\n inference result.\n\n Args:\n deepsparse_results (deepsparse.yolo.schemas.YOLOOutput):\n The output Results instance from DeepSparse.\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import supervision as sv\n from deepsparse import Pipeline\n\n yolo_pipeline = Pipeline.create(\n task=\"yolo\",\n model_path = \"zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/pruned80_quant-none\"\n )\n result = yolo_pipeline(<SOURCE IMAGE PATH>)\n detections = sv.Detections.from_deepsparse(result)\n ```\n \"\"\" # noqa: E501 // docs\n\n if np.asarray(deepsparse_results.boxes[0]).shape[0] == 0:\n return cls.empty()\n\n return cls(\n xyxy=np.array(deepsparse_results.boxes[0]),\n confidence=np.array(deepsparse_results.scores[0]),\n class_id=np.array(deepsparse_results.labels[0]).astype(float).astype(int),\n )\n\n @classmethod\n def from_mmdetection(cls, mmdet_results) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [mmdetection](https://github.com/open-mmlab/mmdetection) and\n [mmyolo](https://github.com/open-mmlab/mmyolo) inference result.\n\n Args:\n mmdet_results (mmdet.structures.DetDataSample):\n The output Results instance from MMDetection.\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import cv2\n import supervision as sv\n from mmdet.apis import init_detector, inference_detector\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = init_detector(<CONFIG_PATH>, <WEIGHTS_PATH>, device=<DEVICE>)\n\n result = inference_detector(model, image)\n detections = sv.Detections.from_mmdetection(result)\n ```\n \"\"\" # noqa: E501 // docs\n\n return cls(\n xyxy=mmdet_results.pred_instances.bboxes.cpu().numpy(),\n confidence=mmdet_results.pred_instances.scores.cpu().numpy(),\n class_id=mmdet_results.pred_instances.labels.cpu().numpy().astype(int),\n )\n\n @classmethod\n def from_transformers(cls, transformers_results: dict) -> Detections:\n \"\"\"\n Creates a Detections instance from object detection\n [transformer](https://github.com/huggingface/transformers) inference result.\n\n Returns:\n Detections: A new Detections object.\n \"\"\"\n\n return cls(\n xyxy=transformers_results[\"boxes\"].cpu().numpy(),\n confidence=transformers_results[\"scores\"].cpu().numpy(),\n class_id=transformers_results[\"labels\"].cpu().numpy().astype(int),\n )\n\n @classmethod\n def from_detectron2(cls, detectron2_results) -> Detections:\n \"\"\"\n Create a Detections object from the\n [Detectron2](https://github.com/facebookresearch/detectron2) inference result.\n\n Args:\n detectron2_results: The output of a\n Detectron2 model containing instances with prediction data.\n\n Returns:\n (Detections): A Detections object containing the bounding boxes,\n class IDs, and confidences of the predictions.\n\n Example:\n ```python\n import cv2\n import 
supervision as sv\n from detectron2.engine import DefaultPredictor\n from detectron2.config import get_cfg\n\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n cfg = get_cfg()\n cfg.merge_from_file(<CONFIG_PATH>)\n cfg.MODEL.WEIGHTS = <WEIGHTS_PATH>\n predictor = DefaultPredictor(cfg)\n\n result = predictor(image)\n detections = sv.Detections.from_detectron2(result)\n ```\n \"\"\"\n\n return cls(\n xyxy=detectron2_results[\"instances\"].pred_boxes.tensor.cpu().numpy(),\n confidence=detectron2_results[\"instances\"].scores.cpu().numpy(),\n class_id=detectron2_results[\"instances\"]\n .pred_classes.cpu()\n .numpy()\n .astype(int),\n )\n\n @classmethod\n def from_inference(cls, roboflow_result: Union[dict, Any]) -> Detections:\n \"\"\"\n Create a Detections object from the [Roboflow](https://roboflow.com/)\n API inference result or the [Inference](https://inference.roboflow.com/)\n package results. This method extracts bounding boxes, class IDs,\n confidences, and class names from the Roboflow API result and encapsulates\n them into a Detections object.\n\n !!! note\n\n Class names can be accessed using the key 'class_name' in the returned\n object's data attribute.\n\n Args:\n roboflow_result (dict, any): The result from the\n Roboflow API or Inference package containing predictions.\n\n Returns:\n (Detections): A Detections object containing the bounding boxes, class IDs,\n and confidences of the predictions.\n\n Example:\n ```python\n import cv2\n import supervision as sv\n from inference.models.utils import get_roboflow_model\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = get_roboflow_model(model_id=\"yolov8s-640\")\n\n result = model.infer(image)[0]\n detections = sv.Detections.from_inference(result)\n ```\n \"\"\"\n with suppress(AttributeError):\n roboflow_result = roboflow_result.dict(exclude_none=True, by_alias=True)\n xyxy, confidence, class_id, masks, trackers, data = process_roboflow_result(\n roboflow_result=roboflow_result\n )\n\n if np.asarray(xyxy).shape[0] == 0:\n return cls.empty()\n\n return cls(\n xyxy=xyxy,\n confidence=confidence,\n class_id=class_id,\n mask=masks,\n tracker_id=trackers,\n data=data,\n )\n\n @classmethod\n @deprecated(\n \"`Detections.from_roboflow` is deprecated and will be removed in \"\n \"`supervision-0.22.0`. Use `Detections.from_inference` instead.\"\n )\n def from_roboflow(cls, roboflow_result: Union[dict, Any]) -> Detections:\n \"\"\"\n !!! failure \"Deprecated\"\n\n `Detections.from_roboflow` is deprecated and will be removed in\n `supervision-0.22.0`. 
Use `Detections.from_inference` instead.\n\n Create a Detections object from the [Roboflow](https://roboflow.com/)\n API inference result or the [Inference](https://inference.roboflow.com/)\n package results.\n\n Args:\n roboflow_result (dict): The result from the\n Roboflow API containing predictions.\n\n Returns:\n (Detections): A Detections object containing the bounding boxes, class IDs,\n and confidences of the predictions.\n\n Example:\n ```python\n import cv2\n import supervision as sv\n from inference.models.utils import get_roboflow_model\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = get_roboflow_model(model_id=\"yolov8s-640\")\n\n result = model.infer(image)[0]\n detections = sv.Detections.from_roboflow(result)\n ```\n \"\"\"\n return cls.from_inference(roboflow_result)\n\n @classmethod\n def from_sam(cls, sam_result: List[dict]) -> Detections:\n \"\"\"\n Creates a Detections instance from\n [Segment Anything Model](https://github.com/facebookresearch/segment-anything)\n inference result.\n\n Args:\n sam_result (List[dict]): The output Results instance from SAM\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import supervision as sv\n from segment_anything import (\n sam_model_registry,\n SamAutomaticMaskGenerator\n )\n\n sam_model_reg = sam_model_registry[MODEL_TYPE]\n sam = sam_model_reg(checkpoint=CHECKPOINT_PATH).to(device=DEVICE)\n mask_generator = SamAutomaticMaskGenerator(sam)\n sam_result = mask_generator.generate(IMAGE)\n detections = sv.Detections.from_sam(sam_result=sam_result)\n ```\n \"\"\"\n\n sorted_generated_masks = sorted(\n sam_result, key=lambda x: x[\"area\"], reverse=True\n )\n\n xywh = np.array([mask[\"bbox\"] for mask in sorted_generated_masks])\n mask = np.array([mask[\"segmentation\"] for mask in sorted_generated_masks])\n\n if np.asarray(xywh).shape[0] == 0:\n return cls.empty()\n\n xyxy = xywh_to_xyxy(boxes_xywh=xywh)\n return cls(xyxy=xyxy, mask=mask)\n\n @classmethod\n def from_azure_analyze_image(\n cls, azure_result: dict, class_map: Optional[Dict[int, str]] = None\n ) -> Detections:\n \"\"\"\n Creates a Detections instance from [Azure Image Analysis 4.0](\n https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/\n concept-object-detection-40).\n\n Args:\n azure_result (dict): The result from Azure Image Analysis. It should\n contain detected objects and their bounding box coordinates.\n class_map (Optional[Dict[int, str]]): A mapping ofclass IDs (int) to class\n names (str). 
If None, a new mapping is created dynamically.\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import requests\n import supervision as sv\n\n image = open(input, \"rb\").read()\n\n endpoint = \"https://.cognitiveservices.azure.com/\"\n subscription_key = \"\"\n\n headers = {\n \"Content-Type\": \"application/octet-stream\",\n \"Ocp-Apim-Subscription-Key\": subscription_key\n }\n\n response = requests.post(endpoint,\n headers=self.headers,\n data=image\n ).json()\n\n detections = sv.Detections.from_azure_analyze_image(response)\n ```\n \"\"\"\n if \"error\" in azure_result:\n raise ValueError(\n f'Azure API returned an error {azure_result[\"error\"][\"message\"]}'\n )\n\n xyxy, confidences, class_ids = [], [], []\n\n is_dynamic_mapping = class_map is None\n if is_dynamic_mapping:\n class_map = {}\n\n class_map = {value: key for key, value in class_map.items()}\n\n for detection in azure_result[\"objectsResult\"][\"values\"]:\n bbox = detection[\"boundingBox\"]\n\n tags = detection[\"tags\"]\n\n x0 = bbox[\"x\"]\n y0 = bbox[\"y\"]\n x1 = x0 + bbox[\"w\"]\n y1 = y0 + bbox[\"h\"]\n\n for tag in tags:\n confidence = tag[\"confidence\"]\n class_name = tag[\"name\"]\n class_id = class_map.get(class_name, None)\n\n if is_dynamic_mapping and class_id is None:\n class_id = len(class_map)\n class_map[class_name] = class_id\n\n if class_id is not None:\n xyxy.append([x0, y0, x1, y1])\n confidences.append(confidence)\n class_ids.append(class_id)\n\n if len(xyxy) == 0:\n return Detections.empty()\n\n return cls(\n xyxy=np.array(xyxy),\n class_id=np.array(class_ids),\n confidence=np.array(confidences),\n )\n\n @classmethod\n def from_paddledet(cls, paddledet_result) -> Detections:\n \"\"\"\n Creates a Detections instance from\n [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)\n inference result.\n\n Args:\n paddledet_result (List[dict]): The output Results instance from PaddleDet\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import supervision as sv\n import paddle\n from ppdet.engine import Trainer\n from ppdet.core.workspace import load_config\n\n weights = ()\n config = ()\n\n cfg = load_config(config)\n trainer = Trainer(cfg, mode='test')\n trainer.load_weights(weights)\n\n paddledet_result = trainer.predict([images])[0]\n\n detections = sv.Detections.from_paddledet(paddledet_result)\n ```\n \"\"\"\n\n if np.asarray(paddledet_result[\"bbox\"][:, 2:6]).shape[0] == 0:\n return cls.empty()\n\n return cls(\n xyxy=paddledet_result[\"bbox\"][:, 2:6],\n confidence=paddledet_result[\"bbox\"][:, 1],\n class_id=paddledet_result[\"bbox\"][:, 0].astype(int),\n )\n\n @classmethod\n def empty(cls) -> Detections:\n \"\"\"\n Create an empty Detections object with no bounding boxes,\n confidences, or class IDs.\n\n Returns:\n (Detections): An empty Detections object.\n\n Example:\n ```python\n from supervision import Detections\n\n empty_detections = Detections.empty()\n ```\n \"\"\"\n return cls(\n xyxy=np.empty((0, 4), dtype=np.float32),\n confidence=np.array([], dtype=np.float32),\n class_id=np.array([], dtype=int),\n )\n\n @classmethod\n def merge(cls, detections_list: List[Detections]) -> Detections:\n \"\"\"\n Merge a list of Detections objects into a single Detections object.\n\n This method takes a list of Detections objects and combines their\n respective fields (`xyxy`, `mask`, `confidence`, `class_id`, and `tracker_id`)\n into a single Detections object. 
If all elements in a field are not\n `None`, the corresponding field will be stacked.\n Otherwise, the field will be set to `None`.\n\n Args:\n detections_list (List[Detections]): A list of Detections objects to merge.\n\n Returns:\n (Detections): A single Detections object containing\n the merged data from the input list.\n\n Example:\n ```python\n import numpy as np\n import supervision as sv\n\n detections_1 = sv.Detections(\n xyxy=np.array([[15, 15, 100, 100], [200, 200, 300, 300]]),\n class_id=np.array([1, 2]),\n data={'feature_vector': np.array([0.1, 0.2)])}\n )\n\n detections_2 = sv.Detections(\n xyxy=np.array([[30, 30, 120, 120]]),\n class_id=np.array([1]),\n data={'feature_vector': [np.array([0.3])]}\n )\n\n merged_detections = Detections.merge([detections_1, detections_2])\n\n merged_detections.xyxy\n array([[ 15, 15, 100, 100],\n [200, 200, 300, 300],\n [ 30, 30, 120, 120]])\n\n merged_detections.class_id\n array([1, 2, 1])\n\n merged_detections.data['feature_vector']\n array([0.1, 0.2, 0.3])\n ```\n \"\"\"\n if len(detections_list) == 0:\n return Detections.empty()\n\n for detections in detections_list:\n validate_detections_fields(\n xyxy=detections.xyxy,\n mask=detections.mask,\n confidence=detections.confidence,\n class_id=detections.class_id,\n tracker_id=detections.tracker_id,\n data=detections.data,\n )\n\n xyxy = np.vstack([d.xyxy for d in detections_list])\n\n def stack_or_none(name: str):\n if all(d.__getattribute__(name) is None for d in detections_list):\n return None\n if any(d.__getattribute__(name) is None for d in detections_list):\n raise ValueError(f\"All or none of the '{name}' fields must be None\")\n return (\n np.vstack([d.__getattribute__(name) for d in detections_list])\n if name == \"mask\"\n else np.hstack([d.__getattribute__(name) for d in detections_list])\n )\n\n mask = stack_or_none(\"mask\")\n confidence = stack_or_none(\"confidence\")\n class_id = stack_or_none(\"class_id\")\n tracker_id = stack_or_none(\"tracker_id\")\n\n data = merge_data([d.data for d in detections_list])\n\n return cls(\n xyxy=xyxy,\n mask=mask,\n confidence=confidence,\n class_id=class_id,\n tracker_id=tracker_id,\n data=data,\n )\n\n def get_anchors_coordinates(self, anchor: Position) -> np.ndarray:\n \"\"\"\n Calculates and returns the coordinates of a specific anchor point\n within the bounding boxes defined by the `xyxy` attribute. The anchor\n point can be any of the predefined positions in the `Position` enum,\n such as `CENTER`, `CENTER_LEFT`, `BOTTOM_RIGHT`, etc.\n\n Args:\n anchor (Position): An enum specifying the position of the anchor point\n within the bounding box. Supported positions are defined in the\n `Position` enum.\n\n Returns:\n np.ndarray: An array of shape `(n, 2)`, where `n` is the number of bounding\n boxes. 
Each row contains the `[x, y]` coordinates of the specified\n anchor point for the corresponding bounding box.\n\n Raises:\n ValueError: If the provided `anchor` is not supported.\n \"\"\"\n if anchor == Position.CENTER:\n return np.array(\n [\n (self.xyxy[:, 0] + self.xyxy[:, 2]) / 2,\n (self.xyxy[:, 1] + self.xyxy[:, 3]) / 2,\n ]\n ).transpose()\n elif anchor == Position.CENTER_OF_MASS:\n if self.mask is None:\n raise ValueError(\n \"Cannot use `Position.CENTER_OF_MASS` without a detection mask.\"\n )\n return calculate_masks_centroids(masks=self.mask)\n elif anchor == Position.CENTER_LEFT:\n return np.array(\n [\n self.xyxy[:, 0],\n (self.xyxy[:, 1] + self.xyxy[:, 3]) / 2,\n ]\n ).transpose()\n elif anchor == Position.CENTER_RIGHT:\n return np.array(\n [\n self.xyxy[:, 2],\n (self.xyxy[:, 1] + self.xyxy[:, 3]) / 2,\n ]\n ).transpose()\n elif anchor == Position.BOTTOM_CENTER:\n return np.array(\n [(self.xyxy[:, 0] + self.xyxy[:, 2]) / 2, self.xyxy[:, 3]]\n ).transpose()\n elif anchor == Position.BOTTOM_LEFT:\n return np.array([self.xyxy[:, 0], self.xyxy[:, 3]]).transpose()\n elif anchor == Position.BOTTOM_RIGHT:\n return np.array([self.xyxy[:, 2], self.xyxy[:, 3]]).transpose()\n elif anchor == Position.TOP_CENTER:\n return np.array(\n [(self.xyxy[:, 0] + self.xyxy[:, 2]) / 2, self.xyxy[:, 1]]\n ).transpose()\n elif anchor == Position.TOP_LEFT:\n return np.array([self.xyxy[:, 0], self.xyxy[:, 1]]).transpose()\n elif anchor == Position.TOP_RIGHT:\n return np.array([self.xyxy[:, 2], self.xyxy[:, 1]]).transpose()\n\n raise ValueError(f\"{anchor} is not supported.\")\n\n def __getitem__(\n self, index: Union[int, slice, List[int], np.ndarray, str]\n ) -> Union[Detections, List, np.ndarray, None]:\n \"\"\"\n Get a subset of the Detections object or access an item from its data field.\n\n When provided with an integer, slice, list of integers, or a numpy array, this\n method returns a new Detections object that represents a subset of the original\n detections. 
When provided with a string, it accesses the corresponding item in\n the data dictionary.\n\n Args:\n index (Union[int, slice, List[int], np.ndarray, str]): The index, indices,\n or key to access a subset of the Detections or an item from the data.\n\n Returns:\n Union[Detections, Any]: A subset of the Detections object or an item from\n the data field.\n\n Example:\n ```python\n import supervision as sv\n\n detections = sv.Detections()\n\n first_detection = detections[0]\n first_10_detections = detections[0:10]\n some_detections = detections[[0, 2, 4]]\n class_0_detections = detections[detections.class_id == 0]\n high_confidence_detections = detections[detections.confidence > 0.5]\n\n feature_vector = detections['feature_vector']\n ```\n \"\"\"\n if isinstance(index, str):\n return self.data.get(index)\n if isinstance(index, int):\n index = [index]\n return Detections(\n xyxy=self.xyxy[index],\n mask=self.mask[index] if self.mask is not None else None,\n confidence=self.confidence[index] if self.confidence is not None else None,\n class_id=self.class_id[index] if self.class_id is not None else None,\n tracker_id=self.tracker_id[index] if self.tracker_id is not None else None,\n data=get_data_item(self.data, index),\n )\n\n def __setitem__(self, key: str, value: Union[np.ndarray, List]):\n \"\"\"\n Set a value in the data dictionary of the Detections object.\n\n Args:\n key (str): The key in the data dictionary to set.\n value (Union[np.ndarray, List]): The value to set for the key.\n\n Example:\n ```python\n import cv2\n import supervision as sv\n from ultralytics import YOLO\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = YOLO('yolov8s.pt')\n\n result = model(image)[0]\n detections = sv.Detections.from_ultralytics(result)\n\n detections['names'] = [\n model.model.names[class_id]\n for class_id\n in detections.class_id\n ]\n ```\n \"\"\"\n if not isinstance(value, (np.ndarray, list)):\n raise TypeError(\"Value must be a np.ndarray or a list\")\n\n if isinstance(value, list):\n value = np.array(value)\n\n self.data[key] = value\n\n @property\n def area(self) -> np.ndarray:\n \"\"\"\n Calculate the area of each detection in the set of object detections.\n If masks field is defined property returns are of each mask.\n If only box is given property return area of each box.\n\n Returns:\n np.ndarray: An array of floats containing the area of each detection\n in the format of `(area_1, area_2, , area_n)`,\n where n is the number of detections.\n \"\"\"\n if self.mask is not None:\n return np.array([np.sum(mask) for mask in self.mask])\n else:\n return self.box_area\n\n @property\n def box_area(self) -> np.ndarray:\n \"\"\"\n Calculate the area of each bounding box in the set of object detections.\n\n Returns:\n np.ndarray: An array of floats containing the area of each bounding\n box in the format of `(area_1, area_2, , area_n)`,\n where n is the number of detections.\n \"\"\"\n return (self.xyxy[:, 3] - self.xyxy[:, 1]) * (self.xyxy[:, 2] - self.xyxy[:, 0])\n\n def with_nms(\n self, threshold: float = 0.5, class_agnostic: bool = False\n ) -> Detections:\n \"\"\"\n Performs non-max suppression on detection set. If the detections result\n from a segmentation model, the IoU mask is applied. Otherwise, box IoU is used.\n\n Args:\n threshold (float, optional): The intersection-over-union threshold\n to use for non-maximum suppression. I'm the lower the value the more\n restrictive the NMS becomes. 
Defaults to 0.5.\n class_agnostic (bool, optional): Whether to perform class-agnostic\n non-maximum suppression. If True, the class_id of each detection\n will be ignored. Defaults to False.\n\n Returns:\n Detections: A new Detections object containing the subset of detections\n after non-maximum suppression.\n\n Raises:\n AssertionError: If `confidence` is None and class_agnostic is False.\n If `class_id` is None and class_agnostic is False.\n \"\"\"\n if len(self) == 0:\n return self\n\n assert (\n self.confidence is not None\n ), \"Detections confidence must be given for NMS to be executed.\"\n\n if class_agnostic:\n predictions = np.hstack((self.xyxy, self.confidence.reshape(-1, 1)))\n else:\n assert self.class_id is not None, (\n \"Detections class_id must be given for NMS to be executed. If you\"\n \" intended to perform class agnostic NMS set class_agnostic=True.\"\n )\n predictions = np.hstack(\n (\n self.xyxy,\n self.confidence.reshape(-1, 1),\n self.class_id.reshape(-1, 1),\n )\n )\n\n if self.mask is not None:\n indices = mask_non_max_suppression(\n predictions=predictions, masks=self.mask, iou_threshold=threshold\n )\n else:\n indices = box_non_max_suppression(\n predictions=predictions, iou_threshold=threshold\n )\n\n return self[indices]\n", "path": "supervision/detection/core.py" } ]
[ { "content": "from __future__ import annotations\n\nfrom contextlib import suppress\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, Iterator, List, Optional, Tuple, Union\n\nimport numpy as np\n\nfrom supervision.config import CLASS_NAME_DATA_FIELD, ORIENTED_BOX_COORDINATES\nfrom supervision.detection.utils import (\n box_non_max_suppression,\n calculate_masks_centroids,\n extract_ultralytics_masks,\n get_data_item,\n is_data_equal,\n mask_non_max_suppression,\n merge_data,\n process_roboflow_result,\n validate_detections_fields,\n xywh_to_xyxy,\n)\nfrom supervision.geometry.core import Position\nfrom supervision.utils.internal import deprecated\n\n\n@dataclass\nclass Detections:\n \"\"\"\n The `sv.Detections` allows you to convert results from a variety of object detection\n and segmentation models into a single, unified format. The `sv.Detections` class\n enables easy data manipulation and filtering, and provides a consistent API for\n Supervision's tools like trackers, annotators, and zones.\n\n ```python\n import cv2\n import supervision as sv\n from ultralytics import YOLO\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = YOLO('yolov8s.pt')\n annotator = sv.BoundingBoxAnnotator()\n\n result = model(image)[0]\n detections = sv.Detections.from_ultralytics(result)\n\n annotated_image = annotator.annotate(image, detections)\n ```\n\n !!! tip\n\n In `sv.Detections`, detection data is categorized into two main field types:\n fixed and custom. The fixed fields include `xyxy`, `mask`, `confidence`,\n `class_id`, and `tracker_id`. For any additional data requirements, custom\n fields come into play, stored in the data field. These custom fields are easily\n accessible using the `detections[<FIELD_NAME>]` syntax, providing flexibility\n for diverse data handling needs.\n\n Attributes:\n xyxy (np.ndarray): An array of shape `(n, 4)` containing\n the bounding boxes coordinates in format `[x1, y1, x2, y2]`\n mask: (Optional[np.ndarray]): An array of shape\n `(n, H, W)` containing the segmentation masks.\n confidence (Optional[np.ndarray]): An array of shape\n `(n,)` containing the confidence scores of the detections.\n class_id (Optional[np.ndarray]): An array of shape\n `(n,)` containing the class ids of the detections.\n tracker_id (Optional[np.ndarray]): An array of shape\n `(n,)` containing the tracker ids of the detections.\n data (Dict[str, Union[np.ndarray, List]]): A dictionary containing additional\n data where each key is a string representing the data type, and the value\n is either a NumPy array or a list of corresponding data.\n\n !!! warning\n\n The `data` field in the `sv.Detections` class is currently in an experimental\n phase. 
Please be aware that its API and functionality are subject to change in\n future updates as we continue to refine and improve its capabilities.\n We encourage users to experiment with this feature and provide feedback, but\n also to be prepared for potential modifications in upcoming releases.\n \"\"\"\n\n xyxy: np.ndarray\n mask: Optional[np.ndarray] = None\n confidence: Optional[np.ndarray] = None\n class_id: Optional[np.ndarray] = None\n tracker_id: Optional[np.ndarray] = None\n data: Dict[str, Union[np.ndarray, List]] = field(default_factory=dict)\n\n def __post_init__(self):\n validate_detections_fields(\n xyxy=self.xyxy,\n mask=self.mask,\n confidence=self.confidence,\n class_id=self.class_id,\n tracker_id=self.tracker_id,\n data=self.data,\n )\n\n def __len__(self):\n \"\"\"\n Returns the number of detections in the Detections object.\n \"\"\"\n return len(self.xyxy)\n\n def __iter__(\n self,\n ) -> Iterator[\n Tuple[\n np.ndarray,\n Optional[np.ndarray],\n Optional[float],\n Optional[int],\n Optional[int],\n Dict[str, Union[np.ndarray, List]],\n ]\n ]:\n \"\"\"\n Iterates over the Detections object and yield a tuple of\n `(xyxy, mask, confidence, class_id, tracker_id, data)` for each detection.\n \"\"\"\n for i in range(len(self.xyxy)):\n yield (\n self.xyxy[i],\n self.mask[i] if self.mask is not None else None,\n self.confidence[i] if self.confidence is not None else None,\n self.class_id[i] if self.class_id is not None else None,\n self.tracker_id[i] if self.tracker_id is not None else None,\n get_data_item(self.data, i),\n )\n\n def __eq__(self, other: Detections):\n return all(\n [\n np.array_equal(self.xyxy, other.xyxy),\n np.array_equal(self.mask, other.mask),\n np.array_equal(self.class_id, other.class_id),\n np.array_equal(self.confidence, other.confidence),\n np.array_equal(self.tracker_id, other.tracker_id),\n is_data_equal(self.data, other.data),\n ]\n )\n\n @classmethod\n def from_yolov5(cls, yolov5_results) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [YOLOv5](https://github.com/ultralytics/yolov5) inference result.\n\n Args:\n yolov5_results (yolov5.models.common.Detections):\n The output Detections instance from YOLOv5\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import cv2\n import torch\n import supervision as sv\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = torch.hub.load('ultralytics/yolov5', 'yolov5s')\n result = model(image)\n detections = sv.Detections.from_yolov5(result)\n ```\n \"\"\"\n yolov5_detections_predictions = yolov5_results.pred[0].cpu().cpu().numpy()\n\n return cls(\n xyxy=yolov5_detections_predictions[:, :4],\n confidence=yolov5_detections_predictions[:, 4],\n class_id=yolov5_detections_predictions[:, 5].astype(int),\n )\n\n @classmethod\n def from_ultralytics(cls, ultralytics_results) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [YOLOv8](https://github.com/ultralytics/ultralytics) inference result.\n\n !!! 
Note\n\n `from_ultralytics` is compatible with\n [detection](https://docs.ultralytics.com/tasks/detect/),\n [segmentation](https://docs.ultralytics.com/tasks/segment/), and\n [OBB](https://docs.ultralytics.com/tasks/obb/) models.\n\n Args:\n ultralytics_results (ultralytics.yolo.engine.results.Results):\n The output Results instance from YOLOv8\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import cv2\n import supervision as sv\n from ultralytics import YOLO\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = YOLO('yolov8s.pt')\n\n result = model(image)[0]\n detections = sv.Detections.from_ultralytics(result)\n ```\n \"\"\" # noqa: E501 // docs\n\n if ultralytics_results.obb is not None:\n class_id = ultralytics_results.obb.cls.cpu().numpy().astype(int)\n class_names = np.array([ultralytics_results.names[i] for i in class_id])\n oriented_box_coordinates = ultralytics_results.obb.xyxyxyxy.cpu().numpy()\n return cls(\n xyxy=ultralytics_results.obb.xyxy.cpu().numpy(),\n confidence=ultralytics_results.obb.conf.cpu().numpy(),\n class_id=class_id,\n tracker_id=ultralytics_results.obb.id.int().cpu().numpy()\n if ultralytics_results.obb.id is not None\n else None,\n data={\n ORIENTED_BOX_COORDINATES: oriented_box_coordinates,\n CLASS_NAME_DATA_FIELD: class_names,\n },\n )\n\n class_id = ultralytics_results.boxes.cls.cpu().numpy().astype(int)\n class_names = np.array([ultralytics_results.names[i] for i in class_id])\n return cls(\n xyxy=ultralytics_results.boxes.xyxy.cpu().numpy(),\n confidence=ultralytics_results.boxes.conf.cpu().numpy(),\n class_id=class_id,\n mask=extract_ultralytics_masks(ultralytics_results),\n tracker_id=ultralytics_results.boxes.id.int().cpu().numpy()\n if ultralytics_results.boxes.id is not None\n else None,\n data={CLASS_NAME_DATA_FIELD: class_names},\n )\n\n @classmethod\n def from_yolo_nas(cls, yolo_nas_results) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md)\n inference result.\n\n Args:\n yolo_nas_results (ImageDetectionPrediction):\n The output Results instance from YOLO-NAS\n ImageDetectionPrediction is coming from\n 'super_gradients.training.models.prediction_results'\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import cv2\n from super_gradients.training import models\n import supervision as sv\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = models.get('yolo_nas_l', pretrained_weights=\"coco\")\n\n result = list(model.predict(image, conf=0.35))[0]\n detections = sv.Detections.from_yolo_nas(result)\n ```\n \"\"\"\n if np.asarray(yolo_nas_results.prediction.bboxes_xyxy).shape[0] == 0:\n return cls.empty()\n\n return cls(\n xyxy=yolo_nas_results.prediction.bboxes_xyxy,\n confidence=yolo_nas_results.prediction.confidence,\n class_id=yolo_nas_results.prediction.labels.astype(int),\n )\n\n @classmethod\n def from_tensorflow(\n cls, tensorflow_results: dict, resolution_wh: tuple\n ) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [Tensorflow Hub](https://www.tensorflow.org/hub/tutorials/tf2_object_detection)\n inference result.\n\n Args:\n tensorflow_results (dict):\n The output results from Tensorflow Hub.\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import tensorflow as tf\n import tensorflow_hub as hub\n import numpy as np\n import cv2\n\n module_handle = \"https://tfhub.dev/tensorflow/centernet/hourglass_512x512_kpts/1\"\n model = 
hub.load(module_handle)\n img = np.array(cv2.imread(SOURCE_IMAGE_PATH))\n result = model(img)\n detections = sv.Detections.from_tensorflow(result)\n ```\n \"\"\" # noqa: E501 // docs\n\n boxes = tensorflow_results[\"detection_boxes\"][0].numpy()\n boxes[:, [0, 2]] *= resolution_wh[0]\n boxes[:, [1, 3]] *= resolution_wh[1]\n boxes = boxes[:, [1, 0, 3, 2]]\n return cls(\n xyxy=boxes,\n confidence=tensorflow_results[\"detection_scores\"][0].numpy(),\n class_id=tensorflow_results[\"detection_classes\"][0].numpy().astype(int),\n )\n\n @classmethod\n def from_deepsparse(cls, deepsparse_results) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [DeepSparse](https://github.com/neuralmagic/deepsparse)\n inference result.\n\n Args:\n deepsparse_results (deepsparse.yolo.schemas.YOLOOutput):\n The output Results instance from DeepSparse.\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import supervision as sv\n from deepsparse import Pipeline\n\n yolo_pipeline = Pipeline.create(\n task=\"yolo\",\n model_path = \"zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/pruned80_quant-none\"\n )\n result = yolo_pipeline(<SOURCE IMAGE PATH>)\n detections = sv.Detections.from_deepsparse(result)\n ```\n \"\"\" # noqa: E501 // docs\n\n if np.asarray(deepsparse_results.boxes[0]).shape[0] == 0:\n return cls.empty()\n\n return cls(\n xyxy=np.array(deepsparse_results.boxes[0]),\n confidence=np.array(deepsparse_results.scores[0]),\n class_id=np.array(deepsparse_results.labels[0]).astype(float).astype(int),\n )\n\n @classmethod\n def from_mmdetection(cls, mmdet_results) -> Detections:\n \"\"\"\n Creates a Detections instance from a\n [mmdetection](https://github.com/open-mmlab/mmdetection) and\n [mmyolo](https://github.com/open-mmlab/mmyolo) inference result.\n\n Args:\n mmdet_results (mmdet.structures.DetDataSample):\n The output Results instance from MMDetection.\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import cv2\n import supervision as sv\n from mmdet.apis import init_detector, inference_detector\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = init_detector(<CONFIG_PATH>, <WEIGHTS_PATH>, device=<DEVICE>)\n\n result = inference_detector(model, image)\n detections = sv.Detections.from_mmdetection(result)\n ```\n \"\"\" # noqa: E501 // docs\n\n return cls(\n xyxy=mmdet_results.pred_instances.bboxes.cpu().numpy(),\n confidence=mmdet_results.pred_instances.scores.cpu().numpy(),\n class_id=mmdet_results.pred_instances.labels.cpu().numpy().astype(int),\n )\n\n @classmethod\n def from_transformers(cls, transformers_results: dict) -> Detections:\n \"\"\"\n Creates a Detections instance from object detection\n [transformer](https://github.com/huggingface/transformers) inference result.\n\n Returns:\n Detections: A new Detections object.\n \"\"\"\n\n return cls(\n xyxy=transformers_results[\"boxes\"].cpu().numpy(),\n confidence=transformers_results[\"scores\"].cpu().numpy(),\n class_id=transformers_results[\"labels\"].cpu().numpy().astype(int),\n )\n\n @classmethod\n def from_detectron2(cls, detectron2_results) -> Detections:\n \"\"\"\n Create a Detections object from the\n [Detectron2](https://github.com/facebookresearch/detectron2) inference result.\n\n Args:\n detectron2_results: The output of a\n Detectron2 model containing instances with prediction data.\n\n Returns:\n (Detections): A Detections object containing the bounding boxes,\n class IDs, and confidences of the predictions.\n\n Example:\n ```python\n import cv2\n import 
supervision as sv\n from detectron2.engine import DefaultPredictor\n from detectron2.config import get_cfg\n\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n cfg = get_cfg()\n cfg.merge_from_file(<CONFIG_PATH>)\n cfg.MODEL.WEIGHTS = <WEIGHTS_PATH>\n predictor = DefaultPredictor(cfg)\n\n result = predictor(image)\n detections = sv.Detections.from_detectron2(result)\n ```\n \"\"\"\n\n return cls(\n xyxy=detectron2_results[\"instances\"].pred_boxes.tensor.cpu().numpy(),\n confidence=detectron2_results[\"instances\"].scores.cpu().numpy(),\n class_id=detectron2_results[\"instances\"]\n .pred_classes.cpu()\n .numpy()\n .astype(int),\n )\n\n @classmethod\n def from_inference(cls, roboflow_result: Union[dict, Any]) -> Detections:\n \"\"\"\n Create a Detections object from the [Roboflow](https://roboflow.com/)\n API inference result or the [Inference](https://inference.roboflow.com/)\n package results. This method extracts bounding boxes, class IDs,\n confidences, and class names from the Roboflow API result and encapsulates\n them into a Detections object.\n\n !!! note\n\n Class names can be accessed using the key 'class_name' in the returned\n object's data attribute.\n\n Args:\n roboflow_result (dict, any): The result from the\n Roboflow API or Inference package containing predictions.\n\n Returns:\n (Detections): A Detections object containing the bounding boxes, class IDs,\n and confidences of the predictions.\n\n Example:\n ```python\n import cv2\n import supervision as sv\n from inference.models.utils import get_roboflow_model\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = get_roboflow_model(model_id=\"yolov8s-640\")\n\n result = model.infer(image)[0]\n detections = sv.Detections.from_inference(result)\n ```\n \"\"\"\n with suppress(AttributeError):\n roboflow_result = roboflow_result.dict(exclude_none=True, by_alias=True)\n xyxy, confidence, class_id, masks, trackers, data = process_roboflow_result(\n roboflow_result=roboflow_result\n )\n\n if np.asarray(xyxy).shape[0] == 0:\n empty_detection = cls.empty()\n empty_detection.data = {CLASS_NAME_DATA_FIELD: np.empty(0)}\n return empty_detection\n\n return cls(\n xyxy=xyxy,\n confidence=confidence,\n class_id=class_id,\n mask=masks,\n tracker_id=trackers,\n data=data,\n )\n\n @classmethod\n @deprecated(\n \"`Detections.from_roboflow` is deprecated and will be removed in \"\n \"`supervision-0.22.0`. Use `Detections.from_inference` instead.\"\n )\n def from_roboflow(cls, roboflow_result: Union[dict, Any]) -> Detections:\n \"\"\"\n !!! failure \"Deprecated\"\n\n `Detections.from_roboflow` is deprecated and will be removed in\n `supervision-0.22.0`. 
Use `Detections.from_inference` instead.\n\n Create a Detections object from the [Roboflow](https://roboflow.com/)\n API inference result or the [Inference](https://inference.roboflow.com/)\n package results.\n\n Args:\n roboflow_result (dict): The result from the\n Roboflow API containing predictions.\n\n Returns:\n (Detections): A Detections object containing the bounding boxes, class IDs,\n and confidences of the predictions.\n\n Example:\n ```python\n import cv2\n import supervision as sv\n from inference.models.utils import get_roboflow_model\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = get_roboflow_model(model_id=\"yolov8s-640\")\n\n result = model.infer(image)[0]\n detections = sv.Detections.from_roboflow(result)\n ```\n \"\"\"\n return cls.from_inference(roboflow_result)\n\n @classmethod\n def from_sam(cls, sam_result: List[dict]) -> Detections:\n \"\"\"\n Creates a Detections instance from\n [Segment Anything Model](https://github.com/facebookresearch/segment-anything)\n inference result.\n\n Args:\n sam_result (List[dict]): The output Results instance from SAM\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import supervision as sv\n from segment_anything import (\n sam_model_registry,\n SamAutomaticMaskGenerator\n )\n\n sam_model_reg = sam_model_registry[MODEL_TYPE]\n sam = sam_model_reg(checkpoint=CHECKPOINT_PATH).to(device=DEVICE)\n mask_generator = SamAutomaticMaskGenerator(sam)\n sam_result = mask_generator.generate(IMAGE)\n detections = sv.Detections.from_sam(sam_result=sam_result)\n ```\n \"\"\"\n\n sorted_generated_masks = sorted(\n sam_result, key=lambda x: x[\"area\"], reverse=True\n )\n\n xywh = np.array([mask[\"bbox\"] for mask in sorted_generated_masks])\n mask = np.array([mask[\"segmentation\"] for mask in sorted_generated_masks])\n\n if np.asarray(xywh).shape[0] == 0:\n return cls.empty()\n\n xyxy = xywh_to_xyxy(boxes_xywh=xywh)\n return cls(xyxy=xyxy, mask=mask)\n\n @classmethod\n def from_azure_analyze_image(\n cls, azure_result: dict, class_map: Optional[Dict[int, str]] = None\n ) -> Detections:\n \"\"\"\n Creates a Detections instance from [Azure Image Analysis 4.0](\n https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/\n concept-object-detection-40).\n\n Args:\n azure_result (dict): The result from Azure Image Analysis. It should\n contain detected objects and their bounding box coordinates.\n class_map (Optional[Dict[int, str]]): A mapping ofclass IDs (int) to class\n names (str). 
If None, a new mapping is created dynamically.\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import requests\n import supervision as sv\n\n image = open(input, \"rb\").read()\n\n endpoint = \"https://.cognitiveservices.azure.com/\"\n subscription_key = \"\"\n\n headers = {\n \"Content-Type\": \"application/octet-stream\",\n \"Ocp-Apim-Subscription-Key\": subscription_key\n }\n\n response = requests.post(endpoint,\n headers=self.headers,\n data=image\n ).json()\n\n detections = sv.Detections.from_azure_analyze_image(response)\n ```\n \"\"\"\n if \"error\" in azure_result:\n raise ValueError(\n f'Azure API returned an error {azure_result[\"error\"][\"message\"]}'\n )\n\n xyxy, confidences, class_ids = [], [], []\n\n is_dynamic_mapping = class_map is None\n if is_dynamic_mapping:\n class_map = {}\n\n class_map = {value: key for key, value in class_map.items()}\n\n for detection in azure_result[\"objectsResult\"][\"values\"]:\n bbox = detection[\"boundingBox\"]\n\n tags = detection[\"tags\"]\n\n x0 = bbox[\"x\"]\n y0 = bbox[\"y\"]\n x1 = x0 + bbox[\"w\"]\n y1 = y0 + bbox[\"h\"]\n\n for tag in tags:\n confidence = tag[\"confidence\"]\n class_name = tag[\"name\"]\n class_id = class_map.get(class_name, None)\n\n if is_dynamic_mapping and class_id is None:\n class_id = len(class_map)\n class_map[class_name] = class_id\n\n if class_id is not None:\n xyxy.append([x0, y0, x1, y1])\n confidences.append(confidence)\n class_ids.append(class_id)\n\n if len(xyxy) == 0:\n return Detections.empty()\n\n return cls(\n xyxy=np.array(xyxy),\n class_id=np.array(class_ids),\n confidence=np.array(confidences),\n )\n\n @classmethod\n def from_paddledet(cls, paddledet_result) -> Detections:\n \"\"\"\n Creates a Detections instance from\n [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)\n inference result.\n\n Args:\n paddledet_result (List[dict]): The output Results instance from PaddleDet\n\n Returns:\n Detections: A new Detections object.\n\n Example:\n ```python\n import supervision as sv\n import paddle\n from ppdet.engine import Trainer\n from ppdet.core.workspace import load_config\n\n weights = ()\n config = ()\n\n cfg = load_config(config)\n trainer = Trainer(cfg, mode='test')\n trainer.load_weights(weights)\n\n paddledet_result = trainer.predict([images])[0]\n\n detections = sv.Detections.from_paddledet(paddledet_result)\n ```\n \"\"\"\n\n if np.asarray(paddledet_result[\"bbox\"][:, 2:6]).shape[0] == 0:\n return cls.empty()\n\n return cls(\n xyxy=paddledet_result[\"bbox\"][:, 2:6],\n confidence=paddledet_result[\"bbox\"][:, 1],\n class_id=paddledet_result[\"bbox\"][:, 0].astype(int),\n )\n\n @classmethod\n def empty(cls) -> Detections:\n \"\"\"\n Create an empty Detections object with no bounding boxes,\n confidences, or class IDs.\n\n Returns:\n (Detections): An empty Detections object.\n\n Example:\n ```python\n from supervision import Detections\n\n empty_detections = Detections.empty()\n ```\n \"\"\"\n return cls(\n xyxy=np.empty((0, 4), dtype=np.float32),\n confidence=np.array([], dtype=np.float32),\n class_id=np.array([], dtype=int),\n )\n\n @classmethod\n def merge(cls, detections_list: List[Detections]) -> Detections:\n \"\"\"\n Merge a list of Detections objects into a single Detections object.\n\n This method takes a list of Detections objects and combines their\n respective fields (`xyxy`, `mask`, `confidence`, `class_id`, and `tracker_id`)\n into a single Detections object. 
If all elements in a field are not\n `None`, the corresponding field will be stacked.\n Otherwise, the field will be set to `None`.\n\n Args:\n detections_list (List[Detections]): A list of Detections objects to merge.\n\n Returns:\n (Detections): A single Detections object containing\n the merged data from the input list.\n\n Example:\n ```python\n import numpy as np\n import supervision as sv\n\n detections_1 = sv.Detections(\n xyxy=np.array([[15, 15, 100, 100], [200, 200, 300, 300]]),\n class_id=np.array([1, 2]),\n data={'feature_vector': np.array([0.1, 0.2)])}\n )\n\n detections_2 = sv.Detections(\n xyxy=np.array([[30, 30, 120, 120]]),\n class_id=np.array([1]),\n data={'feature_vector': [np.array([0.3])]}\n )\n\n merged_detections = Detections.merge([detections_1, detections_2])\n\n merged_detections.xyxy\n array([[ 15, 15, 100, 100],\n [200, 200, 300, 300],\n [ 30, 30, 120, 120]])\n\n merged_detections.class_id\n array([1, 2, 1])\n\n merged_detections.data['feature_vector']\n array([0.1, 0.2, 0.3])\n ```\n \"\"\"\n if len(detections_list) == 0:\n return Detections.empty()\n\n for detections in detections_list:\n validate_detections_fields(\n xyxy=detections.xyxy,\n mask=detections.mask,\n confidence=detections.confidence,\n class_id=detections.class_id,\n tracker_id=detections.tracker_id,\n data=detections.data,\n )\n\n xyxy = np.vstack([d.xyxy for d in detections_list])\n\n def stack_or_none(name: str):\n if all(d.__getattribute__(name) is None for d in detections_list):\n return None\n if any(d.__getattribute__(name) is None for d in detections_list):\n raise ValueError(f\"All or none of the '{name}' fields must be None\")\n return (\n np.vstack([d.__getattribute__(name) for d in detections_list])\n if name == \"mask\"\n else np.hstack([d.__getattribute__(name) for d in detections_list])\n )\n\n mask = stack_or_none(\"mask\")\n confidence = stack_or_none(\"confidence\")\n class_id = stack_or_none(\"class_id\")\n tracker_id = stack_or_none(\"tracker_id\")\n\n data = merge_data([d.data for d in detections_list])\n\n return cls(\n xyxy=xyxy,\n mask=mask,\n confidence=confidence,\n class_id=class_id,\n tracker_id=tracker_id,\n data=data,\n )\n\n def get_anchors_coordinates(self, anchor: Position) -> np.ndarray:\n \"\"\"\n Calculates and returns the coordinates of a specific anchor point\n within the bounding boxes defined by the `xyxy` attribute. The anchor\n point can be any of the predefined positions in the `Position` enum,\n such as `CENTER`, `CENTER_LEFT`, `BOTTOM_RIGHT`, etc.\n\n Args:\n anchor (Position): An enum specifying the position of the anchor point\n within the bounding box. Supported positions are defined in the\n `Position` enum.\n\n Returns:\n np.ndarray: An array of shape `(n, 2)`, where `n` is the number of bounding\n boxes. 
Each row contains the `[x, y]` coordinates of the specified\n anchor point for the corresponding bounding box.\n\n Raises:\n ValueError: If the provided `anchor` is not supported.\n \"\"\"\n if anchor == Position.CENTER:\n return np.array(\n [\n (self.xyxy[:, 0] + self.xyxy[:, 2]) / 2,\n (self.xyxy[:, 1] + self.xyxy[:, 3]) / 2,\n ]\n ).transpose()\n elif anchor == Position.CENTER_OF_MASS:\n if self.mask is None:\n raise ValueError(\n \"Cannot use `Position.CENTER_OF_MASS` without a detection mask.\"\n )\n return calculate_masks_centroids(masks=self.mask)\n elif anchor == Position.CENTER_LEFT:\n return np.array(\n [\n self.xyxy[:, 0],\n (self.xyxy[:, 1] + self.xyxy[:, 3]) / 2,\n ]\n ).transpose()\n elif anchor == Position.CENTER_RIGHT:\n return np.array(\n [\n self.xyxy[:, 2],\n (self.xyxy[:, 1] + self.xyxy[:, 3]) / 2,\n ]\n ).transpose()\n elif anchor == Position.BOTTOM_CENTER:\n return np.array(\n [(self.xyxy[:, 0] + self.xyxy[:, 2]) / 2, self.xyxy[:, 3]]\n ).transpose()\n elif anchor == Position.BOTTOM_LEFT:\n return np.array([self.xyxy[:, 0], self.xyxy[:, 3]]).transpose()\n elif anchor == Position.BOTTOM_RIGHT:\n return np.array([self.xyxy[:, 2], self.xyxy[:, 3]]).transpose()\n elif anchor == Position.TOP_CENTER:\n return np.array(\n [(self.xyxy[:, 0] + self.xyxy[:, 2]) / 2, self.xyxy[:, 1]]\n ).transpose()\n elif anchor == Position.TOP_LEFT:\n return np.array([self.xyxy[:, 0], self.xyxy[:, 1]]).transpose()\n elif anchor == Position.TOP_RIGHT:\n return np.array([self.xyxy[:, 2], self.xyxy[:, 1]]).transpose()\n\n raise ValueError(f\"{anchor} is not supported.\")\n\n def __getitem__(\n self, index: Union[int, slice, List[int], np.ndarray, str]\n ) -> Union[Detections, List, np.ndarray, None]:\n \"\"\"\n Get a subset of the Detections object or access an item from its data field.\n\n When provided with an integer, slice, list of integers, or a numpy array, this\n method returns a new Detections object that represents a subset of the original\n detections. 
When provided with a string, it accesses the corresponding item in\n the data dictionary.\n\n Args:\n index (Union[int, slice, List[int], np.ndarray, str]): The index, indices,\n or key to access a subset of the Detections or an item from the data.\n\n Returns:\n Union[Detections, Any]: A subset of the Detections object or an item from\n the data field.\n\n Example:\n ```python\n import supervision as sv\n\n detections = sv.Detections()\n\n first_detection = detections[0]\n first_10_detections = detections[0:10]\n some_detections = detections[[0, 2, 4]]\n class_0_detections = detections[detections.class_id == 0]\n high_confidence_detections = detections[detections.confidence > 0.5]\n\n feature_vector = detections['feature_vector']\n ```\n \"\"\"\n if isinstance(index, str):\n return self.data.get(index)\n if isinstance(index, int):\n index = [index]\n return Detections(\n xyxy=self.xyxy[index],\n mask=self.mask[index] if self.mask is not None else None,\n confidence=self.confidence[index] if self.confidence is not None else None,\n class_id=self.class_id[index] if self.class_id is not None else None,\n tracker_id=self.tracker_id[index] if self.tracker_id is not None else None,\n data=get_data_item(self.data, index),\n )\n\n def __setitem__(self, key: str, value: Union[np.ndarray, List]):\n \"\"\"\n Set a value in the data dictionary of the Detections object.\n\n Args:\n key (str): The key in the data dictionary to set.\n value (Union[np.ndarray, List]): The value to set for the key.\n\n Example:\n ```python\n import cv2\n import supervision as sv\n from ultralytics import YOLO\n\n image = cv2.imread(<SOURCE_IMAGE_PATH>)\n model = YOLO('yolov8s.pt')\n\n result = model(image)[0]\n detections = sv.Detections.from_ultralytics(result)\n\n detections['names'] = [\n model.model.names[class_id]\n for class_id\n in detections.class_id\n ]\n ```\n \"\"\"\n if not isinstance(value, (np.ndarray, list)):\n raise TypeError(\"Value must be a np.ndarray or a list\")\n\n if isinstance(value, list):\n value = np.array(value)\n\n self.data[key] = value\n\n @property\n def area(self) -> np.ndarray:\n \"\"\"\n Calculate the area of each detection in the set of object detections.\n If masks field is defined property returns are of each mask.\n If only box is given property return area of each box.\n\n Returns:\n np.ndarray: An array of floats containing the area of each detection\n in the format of `(area_1, area_2, , area_n)`,\n where n is the number of detections.\n \"\"\"\n if self.mask is not None:\n return np.array([np.sum(mask) for mask in self.mask])\n else:\n return self.box_area\n\n @property\n def box_area(self) -> np.ndarray:\n \"\"\"\n Calculate the area of each bounding box in the set of object detections.\n\n Returns:\n np.ndarray: An array of floats containing the area of each bounding\n box in the format of `(area_1, area_2, , area_n)`,\n where n is the number of detections.\n \"\"\"\n return (self.xyxy[:, 3] - self.xyxy[:, 1]) * (self.xyxy[:, 2] - self.xyxy[:, 0])\n\n def with_nms(\n self, threshold: float = 0.5, class_agnostic: bool = False\n ) -> Detections:\n \"\"\"\n Performs non-max suppression on detection set. If the detections result\n from a segmentation model, the IoU mask is applied. Otherwise, box IoU is used.\n\n Args:\n threshold (float, optional): The intersection-over-union threshold\n to use for non-maximum suppression. I'm the lower the value the more\n restrictive the NMS becomes. 
Defaults to 0.5.\n class_agnostic (bool, optional): Whether to perform class-agnostic\n non-maximum suppression. If True, the class_id of each detection\n will be ignored. Defaults to False.\n\n Returns:\n Detections: A new Detections object containing the subset of detections\n after non-maximum suppression.\n\n Raises:\n AssertionError: If `confidence` is None and class_agnostic is False.\n If `class_id` is None and class_agnostic is False.\n \"\"\"\n if len(self) == 0:\n return self\n\n assert (\n self.confidence is not None\n ), \"Detections confidence must be given for NMS to be executed.\"\n\n if class_agnostic:\n predictions = np.hstack((self.xyxy, self.confidence.reshape(-1, 1)))\n else:\n assert self.class_id is not None, (\n \"Detections class_id must be given for NMS to be executed. If you\"\n \" intended to perform class agnostic NMS set class_agnostic=True.\"\n )\n predictions = np.hstack(\n (\n self.xyxy,\n self.confidence.reshape(-1, 1),\n self.class_id.reshape(-1, 1),\n )\n )\n\n if self.mask is not None:\n indices = mask_non_max_suppression(\n predictions=predictions, masks=self.mask, iou_threshold=threshold\n )\n else:\n indices = box_non_max_suppression(\n predictions=predictions, iou_threshold=threshold\n )\n\n return self[indices]\n", "path": "supervision/detection/core.py" } ]
diff --git a/supervision/detection/core.py b/supervision/detection/core.py index e727426bf..f170563ca 100644 --- a/supervision/detection/core.py +++ b/supervision/detection/core.py @@ -487,7 +487,9 @@ def from_inference(cls, roboflow_result: Union[dict, Any]) -> Detections: ) if np.asarray(xyxy).shape[0] == 0: - return cls.empty() + empty_detection = cls.empty() + empty_detection.data = {CLASS_NAME_DATA_FIELD: np.empty(0)} + return empty_detection return cls( xyxy=xyxy,
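For quick reference, the `Detections.merge` behaviour documented in the docstrings above can be exercised directly. This is a minimal sketch assuming a recent `supervision` release where `merge` stacks `xyxy`/`class_id` and concatenates matching `data` keys as described; it also fixes the stray bracket that appears in the docstring's `data` argument example.

```python
import numpy as np
import supervision as sv

# Two detection sets sharing the same custom data key.
detections_1 = sv.Detections(
    xyxy=np.array([[15, 15, 100, 100], [200, 200, 300, 300]], dtype=float),
    class_id=np.array([1, 2]),
    data={"feature_vector": np.array([0.1, 0.2])},
)
detections_2 = sv.Detections(
    xyxy=np.array([[30, 30, 120, 120]], dtype=float),
    class_id=np.array([1]),
    data={"feature_vector": np.array([0.3])},
)

merged = sv.Detections.merge([detections_1, detections_2])

print(merged.xyxy)                        # three stacked boxes
print(merged.class_id)                    # [1 2 1]
print(merged.data["feature_vector"])      # [0.1 0.2 0.3]
print(sv.Detections.empty().xyxy.shape)   # (0, 4) -- the empty() path touched by the diff above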
ocf__ocfweb-72
Add "edit this page" link on docs? It would link to the GitHub editor page.
[ { "content": "from collections import namedtuple\n\n\nclass Document(namedtuple('Document', ['name', 'title', 'render'])):\n\n @property\n def category(self):\n \"\"\"Return full category path of the document.\n\n For example, \"/\" or \"/staff/backend/\".\n \"\"\"\n return self.name.rsplit('/', 1)[0] + '/'\n\n @property\n def category_for_sidebar(self):\n \"\"\"Return the category to show similar pages for in the sidebar.\n\n If this page isn't at the root category, we just return this page's\n category.\n\n If this page is at the root category, we return the category rooted at\n this page (which may or may not have any pages in it).\n \"\"\"\n if self.category == '/':\n return self.name + '/'\n else:\n return self.category\n", "path": "ocfweb/docs/doc.py" } ]
[ { "content": "from collections import namedtuple\n\n\nclass Document(namedtuple('Document', ['name', 'title', 'render'])):\n\n @property\n def category(self):\n \"\"\"Return full category path of the document.\n\n For example, \"/\" or \"/staff/backend/\".\n \"\"\"\n return self.name.rsplit('/', 1)[0] + '/'\n\n @property\n def category_for_sidebar(self):\n \"\"\"Return the category to show similar pages for in the sidebar.\n\n If this page isn't at the root category, we just return this page's\n category.\n\n If this page is at the root category, we return the category rooted at\n this page (which may or may not have any pages in it).\n \"\"\"\n if self.category == '/':\n return self.name + '/'\n else:\n return self.category\n\n @property\n def edit_url(self):\n \"\"\"Return a GitHub edit URL for this page.\"\"\"\n return (\n 'https://github.com/ocf/ocfweb/edit/master/ocfweb/docs/docs' +\n self.name +\n '.md'\n )\n", "path": "ocfweb/docs/doc.py" } ]
diff --git a/ocfweb/docs/doc.py b/ocfweb/docs/doc.py index 11cb55d6d..6a0022d89 100644 --- a/ocfweb/docs/doc.py +++ b/ocfweb/docs/doc.py @@ -25,3 +25,12 @@ def category_for_sidebar(self): return self.name + '/' else: return self.category + + @property + def edit_url(self): + """Return a GitHub edit URL for this page.""" + return ( + 'https://github.com/ocf/ocfweb/edit/master/ocfweb/docs/docs' + + self.name + + '.md' + ) diff --git a/ocfweb/docs/templates/doc.html b/ocfweb/docs/templates/doc.html index 3ed35ea31..66b06f69b 100644 --- a/ocfweb/docs/templates/doc.html +++ b/ocfweb/docs/templates/doc.html @@ -13,6 +13,13 @@ <div class="col-sm-4 ocf-sidebar"> {% block sidebar %} + <p> + <a class="edit-this-page" href="{{doc.edit_url}}"> + <span class="glyphicon glyphicon-pencil" aria-hidden="true"></span> + Edit this Page + </a> + </p> + <hr class="edit-page-border" /> {% doc_toc toc=toc %} <h3>More in this category</h3> diff --git a/ocfweb/static/scss/pages/docs.scss b/ocfweb/static/scss/pages/docs.scss index 601c9ae5e..58916db54 100644 --- a/ocfweb/static/scss/pages/docs.scss +++ b/ocfweb/static/scss/pages/docs.scss @@ -44,6 +44,27 @@ h6 .anchor { font-size: 10px; } + + .edit-this-page { + display: block; + padding: 5px; + font-size: 13px; + + &:hover { + background-color: #f7f7f7; + text-decoration: none; + } + + .glyphicon { + font-size: 10px; + margin-left: 5px; + margin-right: 2px; + } + } + + .edit-page-border { + margin-top: 0; + } } .doc-collapse-toggle {
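The patch adds an `edit_url` property that simply joins the document name onto a fixed GitHub path. A self-contained sketch of the same property, using a hypothetical document name purely for illustration:

```python
from collections import namedtuple


class Document(namedtuple('Document', ['name', 'title', 'render'])):

    @property
    def edit_url(self):
        """Return a GitHub edit URL for this page."""
        return (
            'https://github.com/ocf/ocfweb/edit/master/ocfweb/docs/docs' +
            self.name +
            '.md'
        )


# Hypothetical document name, used only to show the resulting URL.
doc = Document(name='/staff/backend/servers', title='Servers', render=None)
print(doc.edit_url)
# -> https://github.com/ocf/ocfweb/edit/master/ocfweb/docs/docs/staff/backend/servers.md
```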
openfun__marsha-2577
Sending an xAPI statement to an LRS not working anymore

## Bug Report

**Problematic Behavior**

When an LRS is configured in a consumer site, sending an xAPI statement fails:
https://gip-fun-mooc.sentry.io/share/issue/081e7857e01544d3bd5b5f93d573c428/

**Expected behavior/code**

When an LRS is correctly configured for a given consumer site, the statement should be sent to the LRS.

**Steps to Reproduce**

1. Configure an LRS in a consumer site
2. Navigate to an existing video
3. And then the bug happens!
[ { "content": "\"\"\"XAPI module.\"\"\"\n\nimport re\nimport uuid\n\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom django.utils.translation import to_locale\n\nimport requests\n\n\ndef get_xapi_statement(resource):\n \"\"\"Return the xapi object statement based on the required resource type.\"\"\"\n if resource == \"video\":\n return XAPIVideoStatement()\n\n if resource == \"document\":\n return XAPIDocumentStatement()\n\n raise NotImplementedError\n\n\nclass XAPIStatementMixin:\n \"\"\"Mixin used by xapi statements.\"\"\"\n\n @staticmethod\n def get_user_id(jwt_token):\n \"\"\"Return the user id if present in the JWT token or the session_is otherwise.\"\"\"\n return (\n jwt_token.payload[\"user\"].get(\"id\")\n if jwt_token.payload.get(\"user\")\n else jwt_token.payload[\"session_id\"]\n )\n\n @staticmethod\n def get_homepage(resource):\n \"\"\"Return the domain associated to the playlist consumer site.\"\"\"\n return resource.playlist.consumer_site.domain\n\n def get_locale(self):\n \"\"\"Return the locale formatted with a - instead of _\"\"\"\n\n return to_locale(settings.LANGUAGE_CODE).replace(\"_\", \"-\")\n\n def get_actor_from_website(self, homepage, user):\n \"\"\"Return the actor property from a website context\"\"\"\n return {\n \"objectType\": \"Agent\",\n \"account\": {\n \"homePage\": homepage,\n \"mbox\": f\"mailto:{user.email}\",\n \"name\": str(user.id),\n },\n }\n\n def get_actor_from_lti(self, homepage, user_id):\n \"\"\"Return the actor property from a LTI context\"\"\"\n return {\n \"objectType\": \"Agent\",\n \"account\": {\"name\": user_id, \"homePage\": homepage},\n }\n\n def build_common_statement_properties(\n self, statement, homepage, user=None, user_id=None\n ):\n \"\"\"build statement properties common to all resources.\"\"\"\n if \"id\" not in statement:\n statement[\"id\"] = str(uuid.uuid4())\n\n statement[\"timestamp\"] = timezone.now().isoformat()\n\n statement[\"actor\"] = (\n self.get_actor_from_website(homepage, user)\n if user\n else self.get_actor_from_lti(homepage, user_id)\n )\n\n return statement\n\n\nclass XAPIDocumentStatement(XAPIStatementMixin):\n \"\"\"Object managing statement for document objects.\"\"\"\n\n # pylint: disable=too-many-arguments\n def _build_statement(self, document, statement, homepage, user=None, user_id=None):\n \"\"\"Build all common properties for a document.\"\"\"\n\n if re.match(r\"^http(s?):\\/\\/.*\", homepage) is None:\n homepage = f\"http://{homepage}\"\n\n statement = self.build_common_statement_properties(\n statement, homepage, user=user, user_id=user_id\n )\n\n statement[\"context\"].update(\n {\"contextActivities\": {\"category\": [{\"id\": \"https://w3id.org/xapi/lms\"}]}}\n )\n\n statement[\"object\"] = {\n \"definition\": {\n \"type\": \"http://id.tincanapi.com/activitytype/document\",\n \"name\": {self.get_locale(): document.title},\n },\n \"id\": f\"uuid://{document.id}\",\n \"objectType\": \"Activity\",\n }\n\n return statement\n\n def from_website(self, document, statement, current_site, user):\n \"\"\"Compute a valid xapi statement in a website context.\n\n Parameters\n ----------\n document : Type[marsha.core.models.Document]\n The document object used in the xAPI statement\n\n statement : dictionary\n Statement containing base information to send to the LRS\n An example of expected statement:\n {\n \"verb\": {\n \"id\": \"http://adlnet.gov/expapi/verbs/initialized\",\n \"display\": {\n \"en-US\": \"initialized\"\n }\n },\n }\n\n current_site : 
Type[django.contrib.sites.models.Site]\n The current site used to send the XAPI request\n\n user: Type[marsha.core.models.User]\n The connected user who sent the XAPI request\n\n \"\"\"\n\n return self._build_statement(\n document, statement, homepage=current_site.domain, user=user\n )\n\n def from_lti(self, document, statement, jwt_token):\n \"\"\"Compute a valid xapi download activity statement.\"\"\"\n\n statement = self._build_statement(\n document,\n statement,\n homepage=self.get_homepage(document),\n user_id=self.get_user_id(jwt_token),\n )\n\n if jwt_token.payload.get(\"context_id\"):\n statement[\"context\"][\"contextActivities\"].update(\n {\n \"parent\": [\n {\n \"id\": jwt_token.payload[\"context_id\"],\n \"objectType\": \"Activity\",\n \"definition\": {\n \"type\": \"http://adlnet.gov/expapi/activities/course\"\n },\n }\n ]\n }\n )\n\n return statement\n\n\nclass XAPIVideoStatement(XAPIStatementMixin):\n \"\"\"Object managing statement for video objects.\"\"\"\n\n def _get_activity_type(self, video):\n \"\"\"Return the activity type for a given video\"\"\"\n\n activity_type = \"https://w3id.org/xapi/video/activity-type/video\"\n\n # When the video is a live we change the activity to webinar\n if video.is_live:\n activity_type = \"http://id.tincanapi.com/activitytype/webinar\"\n\n return activity_type\n\n # pylint: disable=too-many-arguments\n def _build_statement(self, video, statement, homepage, user=None, user_id=None):\n \"\"\"Build all common properties for a video.\"\"\"\n if re.match(r\"^http(s?):\\/\\/.*\", homepage) is None:\n homepage = f\"http://{homepage}\"\n\n statement = self.build_common_statement_properties(\n statement, homepage, user=user, user_id=user_id\n )\n\n category_id = (\n \"https://w3id.org/xapi/lms\"\n if statement[\"verb\"][\"id\"] == \"http://id.tincanapi.com/verb/downloaded\"\n else \"https://w3id.org/xapi/video\"\n )\n\n statement[\"context\"].update(\n {\"contextActivities\": {\"category\": [{\"id\": category_id}]}}\n )\n\n statement[\"object\"] = {\n \"definition\": {\n \"type\": self._get_activity_type(video),\n \"name\": {self.get_locale(): video.title},\n },\n \"id\": f\"uuid://{video.id}\",\n \"objectType\": \"Activity\",\n }\n\n return statement\n\n def from_website(self, video, statement, current_site, user):\n \"\"\"Compute a valid xapi statement in a website context.\n\n Parameters\n ----------\n video : Type[.models/videos]\n The video object used in the xAPI statement\n\n statement : dictionary\n Statement containing base information to send to the LRS\n An example of expected statement:\n {\n \"verb\": {\n \"id\": \"http://adlnet.gov/expapi/verbs/initialized\",\n \"display\": {\n \"en-US\": \"initialized\"\n }\n },\n \"context\": {\n \"extensions\": {\n \"https://w3id.org/xapi/video/extensions/volume\": 1,\n \"https://w3id.org/xapi/video/extensions/video-playback-size\": \"640x264\",\n }\n }\n }\n\n current_site : Type[django.contrib.sites.models.Site]\n The current site used to send the XAPI request\n\n user: Type[marsha.core.models.User]\n The connected user who sent the XAPI request\n\n \"\"\"\n\n return self._build_statement(\n video, statement, homepage=current_site.domain, user=user\n )\n\n def from_lti(self, video, statement, jwt_token):\n \"\"\"Compute a valid xapi statement in an LTI context.\n\n Parameters\n ----------\n video : Type[.models/videos]\n The video object used in the xAPI statement\n\n statement : dictionary\n Statement containing base information to send to the LRS\n An example of expected statement:\n {\n 
\"verb\": {\n \"id\": \"http://adlnet.gov/expapi/verbs/initialized\",\n \"display\": {\n \"en-US\": \"initialized\"\n }\n },\n \"context\": {\n \"extensions\": {\n \"https://w3id.org/xapi/video/extensions/volume\": 1,\n \"https://w3id.org/xapi/video/extensions/video-playback-size\": \"640x264\",\n }\n }\n }\n\n jwt_token : Type[rest_framework_simplejwt.tokens.AccessToken]\n A jwt token containing the context used to enrich the xapi statement\n\n \"\"\"\n statement = self._build_statement(\n video,\n statement,\n homepage=self.get_homepage(video),\n user_id=self.get_user_id(jwt_token),\n )\n\n if jwt_token.payload.get(\"context_id\"):\n statement[\"context\"][\"contextActivities\"].update(\n {\n \"parent\": [\n {\n \"id\": jwt_token.payload[\"context_id\"],\n \"objectType\": \"Activity\",\n \"definition\": {\n \"type\": \"http://adlnet.gov/expapi/activities/course\"\n },\n }\n ]\n }\n )\n\n return statement\n\n\nclass XAPI:\n \"\"\"The XAPI object compute statements and send them to a LRS.\"\"\"\n\n def __init__(self, url, auth_token, xapi_version=\"1.0.3\"):\n \"\"\"Initialize the XAPI module.\n\n Parameters\n ----------\n url: string\n The LRS endpoint to fetch\n\n auth_token: string\n The basic_auth token used to authenticate on the LRS\n\n xapi_version: string\n The xAPI version used.\n\n \"\"\"\n self.url = url\n self.auth_token = auth_token\n self.xapi_version = xapi_version\n\n def send(self, xapi_statement):\n \"\"\"Send the statement to a LRS.\n\n Parameters\n ----------\n statement : Type[.XAPIStatement]\n\n \"\"\"\n headers = {\n \"Authorization\": self.auth_token,\n \"Content-Type\": \"application/json\",\n \"X-Experience-API-Version\": self.xapi_version,\n }\n\n response = requests.post(\n self.url,\n json=xapi_statement.get_statement(),\n headers=headers,\n timeout=settings.STAT_BACKEND_TIMEOUT,\n )\n\n response.raise_for_status()\n", "path": "src/backend/marsha/core/xapi.py" } ]
[ { "content": "\"\"\"XAPI module.\"\"\"\n\nimport re\nimport uuid\n\nfrom django.conf import settings\nfrom django.utils import timezone\nfrom django.utils.translation import to_locale\n\nimport requests\n\n\ndef get_xapi_statement(resource):\n \"\"\"Return the xapi object statement based on the required resource type.\"\"\"\n if resource == \"video\":\n return XAPIVideoStatement()\n\n if resource == \"document\":\n return XAPIDocumentStatement()\n\n raise NotImplementedError\n\n\nclass XAPIStatementMixin:\n \"\"\"Mixin used by xapi statements.\"\"\"\n\n @staticmethod\n def get_user_id(jwt_token):\n \"\"\"Return the user id if present in the JWT token or the session_is otherwise.\"\"\"\n return (\n jwt_token.payload[\"user\"].get(\"id\")\n if jwt_token.payload.get(\"user\")\n else jwt_token.payload[\"session_id\"]\n )\n\n @staticmethod\n def get_homepage(resource):\n \"\"\"Return the domain associated to the playlist consumer site.\"\"\"\n return resource.playlist.consumer_site.domain\n\n def get_locale(self):\n \"\"\"Return the locale formatted with a - instead of _\"\"\"\n\n return to_locale(settings.LANGUAGE_CODE).replace(\"_\", \"-\")\n\n def get_actor_from_website(self, homepage, user):\n \"\"\"Return the actor property from a website context\"\"\"\n return {\n \"objectType\": \"Agent\",\n \"account\": {\n \"homePage\": homepage,\n \"mbox\": f\"mailto:{user.email}\",\n \"name\": str(user.id),\n },\n }\n\n def get_actor_from_lti(self, homepage, user_id):\n \"\"\"Return the actor property from a LTI context\"\"\"\n return {\n \"objectType\": \"Agent\",\n \"account\": {\"name\": user_id, \"homePage\": homepage},\n }\n\n def build_common_statement_properties(\n self, statement, homepage, user=None, user_id=None\n ):\n \"\"\"build statement properties common to all resources.\"\"\"\n if \"id\" not in statement:\n statement[\"id\"] = str(uuid.uuid4())\n\n statement[\"timestamp\"] = timezone.now().isoformat()\n\n statement[\"actor\"] = (\n self.get_actor_from_website(homepage, user)\n if user\n else self.get_actor_from_lti(homepage, user_id)\n )\n\n return statement\n\n\nclass XAPIDocumentStatement(XAPIStatementMixin):\n \"\"\"Object managing statement for document objects.\"\"\"\n\n # pylint: disable=too-many-arguments\n def _build_statement(self, document, statement, homepage, user=None, user_id=None):\n \"\"\"Build all common properties for a document.\"\"\"\n\n if re.match(r\"^http(s?):\\/\\/.*\", homepage) is None:\n homepage = f\"http://{homepage}\"\n\n statement = self.build_common_statement_properties(\n statement, homepage, user=user, user_id=user_id\n )\n\n statement[\"context\"].update(\n {\"contextActivities\": {\"category\": [{\"id\": \"https://w3id.org/xapi/lms\"}]}}\n )\n\n statement[\"object\"] = {\n \"definition\": {\n \"type\": \"http://id.tincanapi.com/activitytype/document\",\n \"name\": {self.get_locale(): document.title},\n },\n \"id\": f\"uuid://{document.id}\",\n \"objectType\": \"Activity\",\n }\n\n return statement\n\n def from_website(self, document, statement, current_site, user):\n \"\"\"Compute a valid xapi statement in a website context.\n\n Parameters\n ----------\n document : Type[marsha.core.models.Document]\n The document object used in the xAPI statement\n\n statement : dictionary\n Statement containing base information to send to the LRS\n An example of expected statement:\n {\n \"verb\": {\n \"id\": \"http://adlnet.gov/expapi/verbs/initialized\",\n \"display\": {\n \"en-US\": \"initialized\"\n }\n },\n }\n\n current_site : 
Type[django.contrib.sites.models.Site]\n The current site used to send the XAPI request\n\n user: Type[marsha.core.models.User]\n The connected user who sent the XAPI request\n\n \"\"\"\n\n return self._build_statement(\n document, statement, homepage=current_site.domain, user=user\n )\n\n def from_lti(self, document, statement, jwt_token):\n \"\"\"Compute a valid xapi download activity statement.\"\"\"\n\n statement = self._build_statement(\n document,\n statement,\n homepage=self.get_homepage(document),\n user_id=self.get_user_id(jwt_token),\n )\n\n if jwt_token.payload.get(\"context_id\"):\n statement[\"context\"][\"contextActivities\"].update(\n {\n \"parent\": [\n {\n \"id\": jwt_token.payload[\"context_id\"],\n \"objectType\": \"Activity\",\n \"definition\": {\n \"type\": \"http://adlnet.gov/expapi/activities/course\"\n },\n }\n ]\n }\n )\n\n return statement\n\n\nclass XAPIVideoStatement(XAPIStatementMixin):\n \"\"\"Object managing statement for video objects.\"\"\"\n\n def _get_activity_type(self, video):\n \"\"\"Return the activity type for a given video\"\"\"\n\n activity_type = \"https://w3id.org/xapi/video/activity-type/video\"\n\n # When the video is a live we change the activity to webinar\n if video.is_live:\n activity_type = \"http://id.tincanapi.com/activitytype/webinar\"\n\n return activity_type\n\n # pylint: disable=too-many-arguments\n def _build_statement(self, video, statement, homepage, user=None, user_id=None):\n \"\"\"Build all common properties for a video.\"\"\"\n if re.match(r\"^http(s?):\\/\\/.*\", homepage) is None:\n homepage = f\"http://{homepage}\"\n\n statement = self.build_common_statement_properties(\n statement, homepage, user=user, user_id=user_id\n )\n\n category_id = (\n \"https://w3id.org/xapi/lms\"\n if statement[\"verb\"][\"id\"] == \"http://id.tincanapi.com/verb/downloaded\"\n else \"https://w3id.org/xapi/video\"\n )\n\n statement[\"context\"].update(\n {\"contextActivities\": {\"category\": [{\"id\": category_id}]}}\n )\n\n statement[\"object\"] = {\n \"definition\": {\n \"type\": self._get_activity_type(video),\n \"name\": {self.get_locale(): video.title},\n },\n \"id\": f\"uuid://{video.id}\",\n \"objectType\": \"Activity\",\n }\n\n return statement\n\n def from_website(self, video, statement, current_site, user):\n \"\"\"Compute a valid xapi statement in a website context.\n\n Parameters\n ----------\n video : Type[.models/videos]\n The video object used in the xAPI statement\n\n statement : dictionary\n Statement containing base information to send to the LRS\n An example of expected statement:\n {\n \"verb\": {\n \"id\": \"http://adlnet.gov/expapi/verbs/initialized\",\n \"display\": {\n \"en-US\": \"initialized\"\n }\n },\n \"context\": {\n \"extensions\": {\n \"https://w3id.org/xapi/video/extensions/volume\": 1,\n \"https://w3id.org/xapi/video/extensions/video-playback-size\": \"640x264\",\n }\n }\n }\n\n current_site : Type[django.contrib.sites.models.Site]\n The current site used to send the XAPI request\n\n user: Type[marsha.core.models.User]\n The connected user who sent the XAPI request\n\n \"\"\"\n\n return self._build_statement(\n video, statement, homepage=current_site.domain, user=user\n )\n\n def from_lti(self, video, statement, jwt_token):\n \"\"\"Compute a valid xapi statement in an LTI context.\n\n Parameters\n ----------\n video : Type[.models/videos]\n The video object used in the xAPI statement\n\n statement : dictionary\n Statement containing base information to send to the LRS\n An example of expected statement:\n {\n 
\"verb\": {\n \"id\": \"http://adlnet.gov/expapi/verbs/initialized\",\n \"display\": {\n \"en-US\": \"initialized\"\n }\n },\n \"context\": {\n \"extensions\": {\n \"https://w3id.org/xapi/video/extensions/volume\": 1,\n \"https://w3id.org/xapi/video/extensions/video-playback-size\": \"640x264\",\n }\n }\n }\n\n jwt_token : Type[rest_framework_simplejwt.tokens.AccessToken]\n A jwt token containing the context used to enrich the xapi statement\n\n \"\"\"\n statement = self._build_statement(\n video,\n statement,\n homepage=self.get_homepage(video),\n user_id=self.get_user_id(jwt_token),\n )\n\n if jwt_token.payload.get(\"context_id\"):\n statement[\"context\"][\"contextActivities\"].update(\n {\n \"parent\": [\n {\n \"id\": jwt_token.payload[\"context_id\"],\n \"objectType\": \"Activity\",\n \"definition\": {\n \"type\": \"http://adlnet.gov/expapi/activities/course\"\n },\n }\n ]\n }\n )\n\n return statement\n\n\nclass XAPI:\n \"\"\"The XAPI object compute statements and send them to a LRS.\"\"\"\n\n def __init__(self, url, auth_token, xapi_version=\"1.0.3\"):\n \"\"\"Initialize the XAPI module.\n\n Parameters\n ----------\n url: string\n The LRS endpoint to fetch\n\n auth_token: string\n The basic_auth token used to authenticate on the LRS\n\n xapi_version: string\n The xAPI version used.\n\n \"\"\"\n self.url = url\n self.auth_token = auth_token\n self.xapi_version = xapi_version\n\n def send(self, xapi_statement):\n \"\"\"Send the statement to a LRS.\n\n Parameters\n ----------\n statement : Type[.XAPIStatement]\n\n \"\"\"\n headers = {\n \"Authorization\": self.auth_token,\n \"Content-Type\": \"application/json\",\n \"X-Experience-API-Version\": self.xapi_version,\n }\n\n response = requests.post(\n self.url,\n json=xapi_statement,\n headers=headers,\n timeout=settings.STAT_BACKEND_TIMEOUT,\n )\n\n response.raise_for_status()\n", "path": "src/backend/marsha/core/xapi.py" } ]
diff --git a/CHANGELOG.md b/CHANGELOG.md index f577955d2f..308b97d08c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -28,6 +28,7 @@ Versioning](https://semver.org/spec/v2.0.0.html). ### Fixed - Remove non existing fields in PortabilityRequestAdmin +- Correctly send xapi statement to a configured LRS ## [4.9.0] - 2023-12-04 diff --git a/src/backend/marsha/core/tests/test_xapi.py b/src/backend/marsha/core/tests/test_xapi.py index 38088cf683..5d76034d08 100644 --- a/src/backend/marsha/core/tests/test_xapi.py +++ b/src/backend/marsha/core/tests/test_xapi.py @@ -1,9 +1,9 @@ """Tests for the xapi module of the Marsha project.""" -from unittest import mock - from django.test import TestCase, override_settings +import responses + from marsha.core.defaults import ENDED, RAW, READY, RUNNING from marsha.core.factories import DocumentFactory, VideoFactory from marsha.core.simple_jwt.factories import LTIPlaylistAccessTokenFactory @@ -12,10 +12,12 @@ XAPIDocumentStatement, XAPIVideoStatement, get_xapi_statement, - requests, ) +# pylint: disable=unexpected-keyword-arg,no-value-for-parameter + + class XAPIVideoStatementTest(TestCase): """Test the XAPIVideoStatement class.""" @@ -684,35 +686,69 @@ def test_xapi_statement_missing_user_id(self): class XAPITest(TestCase): """Test the xapi module.""" - @mock.patch.object(requests, "post") - def test_xapi_enrich_and_send_statement(self, mock_requests_post): + @responses.activate(assert_all_requests_are_fired=True) + def test_xapi_enrich_and_send_statement(self): """XAPI statement sent by the front application should be enriched. Before sending a statement, the xapi module is responsible for enriching it. """ - xapi = XAPI("https://lrs.example.com", "auth_token") - - mock_response = mock.MagicMock() - mock_response.raise_for_status.return_value = 200 - mock_requests_post.return_value = mock_response + xapi = XAPI("https://lrs.example.com", "Basic auth_token") - statement = {"foo": "bar"} - mock_xapi_statement = mock.MagicMock() - mock_xapi_statement.get_statement.return_value = statement + video = VideoFactory( + id="68333c45-4b8c-4018-a195-5d5e1706b838", + playlist__consumer_site__domain="example.com", + title="test video xapi", + ) - xapi.send(mock_xapi_statement) + jwt_token = LTIPlaylistAccessTokenFactory( + session_id="326c0689-48c1-493e-8d2d-9fb0c289de7f", + context_id="course-v1:ufr+mathematics+0001", + user__id="b2584aa405540758db2a6278521b6478", + ) - args, kwargs = mock_requests_post.call_args_list[0] - self.assertEqual(args[0], "https://lrs.example.com") - self.assertEqual( - kwargs["headers"], - { - "Authorization": "auth_token", - "Content-Type": "application/json", - "X-Experience-API-Version": "1.0.3", + base_statement = { + "context": { + "extensions": { + "https://w3id.org/xapi/video/extensions/session-id": "a6151456-18b7-" + "43b4-8452-2037fed588df" + } }, - ) - self.assertEqual(kwargs["json"], statement) + "result": { + "extensions": { + "https://w3id.org/xapi/video/extensions/time-from": 0, + "https://w3id.org/xapi/video/extensions/time-to": 0, + "https://w3id.org/xapi/video/extensions/length": 104.304, + "https://w3id.org/xapi/video/extensions/progress": 0, + "https://w3id.org/xapi/video/extensions/played-segments": "0", + } + }, + "verb": { + "display": {"en-US": "seeked"}, + "id": "https://w3id.org/xapi/video/verbs/seeked", + }, + "id": "17dfcd44-b3e0-403d-ab96-e3ef7da616d4", + } + + xapi_statement = XAPIVideoStatement() + statement = xapi_statement.from_lti(video, base_statement, jwt_token) + + responses.add( + responses.POST, + 
"https://lrs.example.com", + match=[ + responses.matchers.json_params_matcher(statement), + responses.matchers.header_matcher( + { + "Authorization": "Basic auth_token", + "Content-Type": "application/json", + "X-Experience-API-Version": "1.0.3", + } + ), + ], + status=204, + ) + + xapi.send(statement) class GetXapiStatementTest(TestCase): diff --git a/src/backend/marsha/core/xapi.py b/src/backend/marsha/core/xapi.py index c99bf91df7..dd019c6ebf 100644 --- a/src/backend/marsha/core/xapi.py +++ b/src/backend/marsha/core/xapi.py @@ -344,7 +344,7 @@ def send(self, xapi_statement): response = requests.post( self.url, - json=xapi_statement.get_statement(), + json=xapi_statement, headers=headers, timeout=settings.STAT_BACKEND_TIMEOUT, )
napari__napari-277
blending mode update error

## 🐛 Bug

When viewing multiple layers with blending, I am experiencing a bug whereby changing the blending mode doesn't result in an immediate update. The update does occur when I change the opacity (at which point it happens immediately).

## To Reproduce

Steps to reproduce the behavior:

1. Open the viewer with multiple layers (e.g. `examples/layers.py`)
2. Reduce the opacity of the topmost layer to 0.5
3. Change the blending mode (e.g. `translucent` -> `opaque`)

## Expected behavior

The update to what is rendered should happen immediately upon updating the blending mode.

## Environment

- napari 0.18
- OS X 10.14.3
- Python version: 3.7.2
[ { "content": "# TODO: create & use our own transform class\nfrom vispy.visuals.transforms import STTransform\nfrom vispy.gloo import get_state_presets\nfrom ...util.event import EmitterGroup, Event\n\n\nclass VisualWrapper:\n \"\"\"Wrapper around ``vispy.scene.VisualNode`` objects.\n Meant to be subclassed.\n\n \"Hidden\" properties:\n * ``_master_transform``\n * ``_order``\n * ``_parent``\n\n Parameters\n ----------\n central_node : vispy.scene.VisualNode\n Central node/control point with which to interact with the visual.\n Stored as ``_node``.\n\n Attributes\n ----------\n opacity\n visible\n scale\n blending\n translate\n z_index\n\n Notes\n -----\n It is recommended to use the backported ``vispy`` nodes\n at ``_vispy.scene.visuals`` for various bug fixes.\n \"\"\"\n def __init__(self, central_node):\n self._node = central_node\n self._blending = 'translucent'\n self.events = EmitterGroup(source=self,\n auto_connect=True,\n blending=Event,\n opacity=Event,\n visible=Event)\n\n _blending_modes = set(get_state_presets().keys())\n\n @property\n def _master_transform(self):\n \"\"\"vispy.visuals.transforms.STTransform:\n Central node's firstmost transform.\n \"\"\"\n # whenever a new parent is set, the transform is reset\n # to a NullTransform so we reset it here\n if not isinstance(self._node.transform, STTransform):\n self._node.transform = STTransform()\n\n return self._node.transform\n\n @property\n def _order(self):\n \"\"\"int: Order in which the visual is drawn in the scenegraph.\n Lower values are closer to the viewer.\n \"\"\"\n return self._node.order\n\n @_order.setter\n def _order(self, order):\n # workaround for opacity (see: #22)\n order = -order\n self.z_index = order\n # end workaround\n self._node.order = order\n\n @property\n def _parent(self):\n \"\"\"vispy.scene.Node: Parent node.\n \"\"\"\n return self._node.parent\n\n @_parent.setter\n def _parent(self, parent):\n self._node.parent = parent\n\n @property\n def opacity(self):\n \"\"\"float: Opacity value between 0.0 and 1.0.\n \"\"\"\n return self._node.opacity\n\n @opacity.setter\n def opacity(self, opacity):\n if not 0.0 <= opacity <= 1.0:\n raise ValueError('opacity must be between 0.0 and 1.0; '\n f'got {opacity}')\n\n self._node.opacity = opacity\n self.events.opacity()\n\n @property\n def blending(self):\n \"\"\"{'opaque', 'translucent', 'additive'}: Blending mode.\n Selects a preset blending mode in vispy that determines how\n RGB and alpha values get mixed.\n 'opaque'\n Allows for only the top layer to be visible and corresponds to\n depth_test=True, cull_face=False, blend=False.\n 'translucent'\n Allows for multiple layers to be blended with different opacity\n and corresponds to depth_test=True, cull_face=False,\n blend=True, blend_func=('src_alpha', 'one_minus_src_alpha').\n 'additive'\n Allows for multiple layers to be blended together with\n different colors and opacity. Useful for creating overlays. 
It\n corresponds to depth_test=False, cull_face=False, blend=True,\n blend_func=('src_alpha', 'one').\n \"\"\"\n return self._blending\n\n @blending.setter\n def blending(self, blending):\n if blending not in self._blending_modes:\n raise ValueError('expected one of '\n \"{'opaque', 'translucent', 'additive'}; \"\n f'got {blending}')\n self._node.set_gl_state(blending)\n self._blending = blending\n self.events.blending()\n\n @property\n def visible(self):\n \"\"\"bool: Whether the visual is currently being displayed.\n \"\"\"\n return self._node.visible\n\n @visible.setter\n def visible(self, visibility):\n self._node.visible = visibility\n self.events.visible()\n\n @property\n def scale(self):\n \"\"\"sequence of float: Scale factors.\n \"\"\"\n return self._master_transform.scale\n\n @scale.setter\n def scale(self, scale):\n self._master_transform.scale = scale\n\n @property\n def translate(self):\n \"\"\"sequence of float: Translation values.\n \"\"\"\n return self._master_transform.translate\n\n @translate.setter\n def translate(self, translate):\n self._master_transform.translate = translate\n\n @property\n def z_index(self):\n return -self._master_transform.translate[2]\n\n @z_index.setter\n def z_index(self, index):\n tr = self._master_transform\n tl = tr.translate\n tl[2] = -index\n\n tr.translate = tl\n", "path": "napari/layers/_base_layer/_visual_wrapper.py" } ]
[ { "content": "# TODO: create & use our own transform class\nfrom vispy.visuals.transforms import STTransform\nfrom vispy.gloo import get_state_presets\nfrom ...util.event import EmitterGroup, Event\n\n\nclass VisualWrapper:\n \"\"\"Wrapper around ``vispy.scene.VisualNode`` objects.\n Meant to be subclassed.\n\n \"Hidden\" properties:\n * ``_master_transform``\n * ``_order``\n * ``_parent``\n\n Parameters\n ----------\n central_node : vispy.scene.VisualNode\n Central node/control point with which to interact with the visual.\n Stored as ``_node``.\n\n Attributes\n ----------\n opacity\n visible\n scale\n blending\n translate\n z_index\n\n Notes\n -----\n It is recommended to use the backported ``vispy`` nodes\n at ``_vispy.scene.visuals`` for various bug fixes.\n \"\"\"\n def __init__(self, central_node):\n self._node = central_node\n self._blending = 'translucent'\n self.events = EmitterGroup(source=self,\n auto_connect=True,\n blending=Event,\n opacity=Event,\n visible=Event)\n\n _blending_modes = set(get_state_presets().keys())\n\n @property\n def _master_transform(self):\n \"\"\"vispy.visuals.transforms.STTransform:\n Central node's firstmost transform.\n \"\"\"\n # whenever a new parent is set, the transform is reset\n # to a NullTransform so we reset it here\n if not isinstance(self._node.transform, STTransform):\n self._node.transform = STTransform()\n\n return self._node.transform\n\n @property\n def _order(self):\n \"\"\"int: Order in which the visual is drawn in the scenegraph.\n Lower values are closer to the viewer.\n \"\"\"\n return self._node.order\n\n @_order.setter\n def _order(self, order):\n # workaround for opacity (see: #22)\n order = -order\n self.z_index = order\n # end workaround\n self._node.order = order\n\n @property\n def _parent(self):\n \"\"\"vispy.scene.Node: Parent node.\n \"\"\"\n return self._node.parent\n\n @_parent.setter\n def _parent(self, parent):\n self._node.parent = parent\n\n @property\n def opacity(self):\n \"\"\"float: Opacity value between 0.0 and 1.0.\n \"\"\"\n return self._node.opacity\n\n @opacity.setter\n def opacity(self, opacity):\n if not 0.0 <= opacity <= 1.0:\n raise ValueError('opacity must be between 0.0 and 1.0; '\n f'got {opacity}')\n\n self._node.opacity = opacity\n self.events.opacity()\n\n @property\n def blending(self):\n \"\"\"{'opaque', 'translucent', 'additive'}: Blending mode.\n Selects a preset blending mode in vispy that determines how\n RGB and alpha values get mixed.\n 'opaque'\n Allows for only the top layer to be visible and corresponds to\n depth_test=True, cull_face=False, blend=False.\n 'translucent'\n Allows for multiple layers to be blended with different opacity\n and corresponds to depth_test=True, cull_face=False,\n blend=True, blend_func=('src_alpha', 'one_minus_src_alpha').\n 'additive'\n Allows for multiple layers to be blended together with\n different colors and opacity. Useful for creating overlays. 
It\n corresponds to depth_test=False, cull_face=False, blend=True,\n blend_func=('src_alpha', 'one').\n \"\"\"\n return self._blending\n\n @blending.setter\n def blending(self, blending):\n if blending not in self._blending_modes:\n raise ValueError('expected one of '\n \"{'opaque', 'translucent', 'additive'}; \"\n f'got {blending}')\n self._node.set_gl_state(blending)\n self._blending = blending\n self._node.update()\n self.events.blending()\n\n @property\n def visible(self):\n \"\"\"bool: Whether the visual is currently being displayed.\n \"\"\"\n return self._node.visible\n\n @visible.setter\n def visible(self, visibility):\n self._node.visible = visibility\n self.events.visible()\n\n @property\n def scale(self):\n \"\"\"sequence of float: Scale factors.\n \"\"\"\n return self._master_transform.scale\n\n @scale.setter\n def scale(self, scale):\n self._master_transform.scale = scale\n\n @property\n def translate(self):\n \"\"\"sequence of float: Translation values.\n \"\"\"\n return self._master_transform.translate\n\n @translate.setter\n def translate(self, translate):\n self._master_transform.translate = translate\n\n @property\n def z_index(self):\n return -self._master_transform.translate[2]\n\n @z_index.setter\n def z_index(self, index):\n tr = self._master_transform\n tl = tr.translate\n tl[2] = -index\n\n tr.translate = tl\n", "path": "napari/layers/_base_layer/_visual_wrapper.py" } ]
diff --git a/napari/layers/_base_layer/_visual_wrapper.py b/napari/layers/_base_layer/_visual_wrapper.py index 8408ff8b798..18fa23c33ac 100644 --- a/napari/layers/_base_layer/_visual_wrapper.py +++ b/napari/layers/_base_layer/_visual_wrapper.py @@ -124,6 +124,7 @@ def blending(self, blending): f'got {blending}') self._node.set_gl_state(blending) self._blending = blending + self._node.update() self.events.blending() @property
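The one-line fix asks the vispy node to redraw (`self._node.update()`) immediately after the GL state preset changes, instead of waiting for the next unrelated update such as an opacity change. The pattern, sketched outside the wrapper class (`node` stands in for any vispy `VisualNode`; the preset names come from `vispy.gloo.get_state_presets()` as in the wrapper above):

```python
# Sketch only: `node` is assumed to be a vispy VisualNode.
VALID_BLENDING = {'opaque', 'translucent', 'additive'}


def set_blending(node, blending):
    if blending not in VALID_BLENDING:
        raise ValueError(f"expected one of {sorted(VALID_BLENDING)}; got {blending}")
    node.set_gl_state(blending)  # switch the GL state preset
    node.update()                # request an immediate redraw
```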
pyqtgraph__pyqtgraph-1066
AttributeError: module 'pyqtgraph.widgets' has no attribute 'RemoteGraphicsView'

### Short description

Trying to use RemoteGraphicsView throws an error saying there is no RemoteGraphicsView module.

### Code to reproduce

```python
import pyqtgraph as pg
import numpy as np

view = pg.widgets.RemoteGraphicsView()
```

### Expected behavior

Create a RemoteGraphicsView object.

### Real behavior

```
>>> view = pg.widgets.RemoteGraphicsView()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'pyqtgraph.widgets' has no attribute 'RemoteGraphicsView'
```

### Tested environment(s)

* PyQtGraph version: 0.10.0
* Qt Python binding: PyQt5
* Python version: 3.7.4
* NumPy version: 1.16.4
* Operating system: Windows 10
* Installation method: pip

### Additional context

If I look in the site-packages folder for pyqtgraph, I can see the RemoteGraphicsView file in the widgets folder, and other widgets load normally.
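As the package `__init__` below shows, pyqtgraph deliberately excludes `RemoteGraphicsView` from the names it re-exports, so the submodule has to be imported explicitly. A minimal sketch of the usual workaround, assuming PyQtGraph 0.10 with its multiprocess support working on the host system:

```python
import pyqtgraph as pg
from pyqtgraph.widgets.RemoteGraphicsView import RemoteGraphicsView

app = pg.mkQApp()            # make sure a QApplication exists first
view = RemoteGraphicsView()  # spawns the remote render process
view.show()
```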
[ { "content": "# -*- coding: utf-8 -*-\n\"\"\"\nPyQtGraph - Scientific Graphics and GUI Library for Python\nwww.pyqtgraph.org\n\"\"\"\n\n__version__ = '0.11.0.dev0'\n\n### import all the goodies and add some helper functions for easy CLI use\n\n## 'Qt' is a local module; it is intended mainly to cover up the differences\n## between PyQt4 and PySide.\nfrom .Qt import QtGui, mkQApp\n\n## not really safe--If we accidentally create another QApplication, the process hangs (and it is very difficult to trace the cause)\n#if QtGui.QApplication.instance() is None:\n #app = QtGui.QApplication([])\n\nimport numpy ## pyqtgraph requires numpy\n ## (import here to avoid massive error dump later on if numpy is not available)\n\nimport os, sys\n\n## check python version\n## Allow anything >= 2.7\nif sys.version_info[0] < 2 or (sys.version_info[0] == 2 and sys.version_info[1] < 6):\n raise Exception(\"Pyqtgraph requires Python version 2.6 or greater (this is %d.%d)\" % (sys.version_info[0], sys.version_info[1]))\n\n## helpers for 2/3 compatibility\nfrom . import python2_3\n\n## in general openGL is poorly supported with Qt+GraphicsView.\n## we only enable it where the performance benefit is critical.\n## Note this only applies to 2D graphics; 3D graphics always use OpenGL.\nif 'linux' in sys.platform: ## linux has numerous bugs in opengl implementation\n useOpenGL = False\nelif 'darwin' in sys.platform: ## openGL can have a major impact on mac, but also has serious bugs\n useOpenGL = False\n if QtGui.QApplication.instance() is not None:\n print('Warning: QApplication was created before pyqtgraph was imported; there may be problems (to avoid bugs, call QApplication.setGraphicsSystem(\"raster\") before the QApplication is created).')\n if QtGui.QApplication.setGraphicsSystem:\n QtGui.QApplication.setGraphicsSystem('raster') ## work around a variety of bugs in the native graphics system \nelse:\n useOpenGL = False ## on windows there's a more even performance / bugginess tradeoff. \n \nCONFIG_OPTIONS = {\n 'useOpenGL': useOpenGL, ## by default, this is platform-dependent (see widgets/GraphicsView). 
Set to True or False to explicitly enable/disable opengl.\n 'leftButtonPan': True, ## if false, left button drags a rubber band for zooming in viewbox\n # foreground/background take any arguments to the 'mkColor' in /pyqtgraph/functions.py\n 'foreground': 'd', ## default foreground color for axes, labels, etc.\n 'background': 'k', ## default background for GraphicsWidget\n 'antialias': False,\n 'editorCommand': None, ## command used to invoke code editor from ConsoleWidgets\n 'useWeave': False, ## Use weave to speed up some operations, if it is available\n 'weaveDebug': False, ## Print full error message if weave compile fails\n 'exitCleanup': True, ## Attempt to work around some exit crash bugs in PyQt and PySide\n 'enableExperimental': False, ## Enable experimental features (the curious can search for this key in the code)\n 'crashWarning': False, # If True, print warnings about situations that may result in a crash\n 'imageAxisOrder': 'col-major', # For 'row-major', image data is expected in the standard (row, col) order.\n # For 'col-major', image data is expected in reversed (col, row) order.\n # The default is 'col-major' for backward compatibility, but this may\n # change in the future.\n} \n\n\ndef setConfigOption(opt, value):\n if opt not in CONFIG_OPTIONS:\n raise KeyError('Unknown configuration option \"%s\"' % opt)\n if opt == 'imageAxisOrder' and value not in ('row-major', 'col-major'):\n raise ValueError('imageAxisOrder must be either \"row-major\" or \"col-major\"')\n CONFIG_OPTIONS[opt] = value\n\ndef setConfigOptions(**opts):\n \"\"\"Set global configuration options. \n \n Each keyword argument sets one global option. \n \"\"\"\n for k,v in opts.items():\n setConfigOption(k, v)\n\ndef getConfigOption(opt):\n \"\"\"Return the value of a single global configuration option.\n \"\"\"\n return CONFIG_OPTIONS[opt]\n\n\ndef systemInfo():\n print(\"sys.platform: %s\" % sys.platform)\n print(\"sys.version: %s\" % sys.version)\n from .Qt import VERSION_INFO\n print(\"qt bindings: %s\" % VERSION_INFO)\n \n global __version__\n rev = None\n if __version__ is None: ## this code was probably checked out from bzr; look up the last-revision file\n lastRevFile = os.path.join(os.path.dirname(__file__), '..', '.bzr', 'branch', 'last-revision')\n if os.path.exists(lastRevFile):\n rev = open(lastRevFile, 'r').read().strip()\n \n print(\"pyqtgraph: %s; %s\" % (__version__, rev))\n print(\"config:\")\n import pprint\n pprint.pprint(CONFIG_OPTIONS)\n\n## Rename orphaned .pyc files. This is *probably* safe :)\n## We only do this if __version__ is None, indicating the code was probably pulled\n## from the repository. \ndef renamePyc(startDir):\n ### Used to rename orphaned .pyc files\n ### When a python file changes its location in the repository, usually the .pyc file\n ### is left behind, possibly causing mysterious and difficult to track bugs. \n\n ### Note that this is no longer necessary for python 3.2; from PEP 3147:\n ### \"If the py source file is missing, the pyc file inside __pycache__ will be ignored. 
\n ### This eliminates the problem of accidental stale pyc file imports.\"\n \n printed = False\n startDir = os.path.abspath(startDir)\n for path, dirs, files in os.walk(startDir):\n if '__pycache__' in path:\n continue\n for f in files:\n fileName = os.path.join(path, f)\n base, ext = os.path.splitext(fileName)\n py = base + \".py\"\n if ext == '.pyc' and not os.path.isfile(py):\n if not printed:\n print(\"NOTE: Renaming orphaned .pyc files:\")\n printed = True\n n = 1\n while True:\n name2 = fileName + \".renamed%d\" % n\n if not os.path.exists(name2):\n break\n n += 1\n print(\" \" + fileName + \" ==>\")\n print(\" \" + name2)\n os.rename(fileName, name2)\n \npath = os.path.split(__file__)[0]\nif __version__ is None and not hasattr(sys, 'frozen') and sys.version_info[0] == 2: ## If we are frozen, there's a good chance we don't have the original .py files anymore.\n renamePyc(path)\n\n\n## Import almost everything to make it available from a single namespace\n## don't import the more complex systems--canvas, parametertree, flowchart, dockarea\n## these must be imported separately.\n#from . import frozenSupport\n#def importModules(path, globals, locals, excludes=()):\n #\"\"\"Import all modules residing within *path*, return a dict of name: module pairs.\n \n #Note that *path* MUST be relative to the module doing the import. \n #\"\"\"\n #d = os.path.join(os.path.split(globals['__file__'])[0], path)\n #files = set()\n #for f in frozenSupport.listdir(d):\n #if frozenSupport.isdir(os.path.join(d, f)) and f not in ['__pycache__', 'tests']:\n #files.add(f)\n #elif f[-3:] == '.py' and f != '__init__.py':\n #files.add(f[:-3])\n #elif f[-4:] == '.pyc' and f != '__init__.pyc':\n #files.add(f[:-4])\n \n #mods = {}\n #path = path.replace(os.sep, '.')\n #for modName in files:\n #if modName in excludes:\n #continue\n #try:\n #if len(path) > 0:\n #modName = path + '.' + modName\n #print( \"from .%s import * \" % modName)\n #mod = __import__(modName, globals, locals, ['*'], 1)\n #mods[modName] = mod\n #except:\n #import traceback\n #traceback.print_stack()\n #sys.excepthook(*sys.exc_info())\n #print(\"[Error importing module: %s]\" % modName)\n \n #return mods\n\n#def importAll(path, globals, locals, excludes=()):\n #\"\"\"Given a list of modules, import all names from each module into the global namespace.\"\"\"\n #mods = importModules(path, globals, locals, excludes)\n #for mod in mods.values():\n #if hasattr(mod, '__all__'):\n #names = mod.__all__\n #else:\n #names = [n for n in dir(mod) if n[0] != '_']\n #for k in names:\n #if hasattr(mod, k):\n #globals[k] = getattr(mod, k)\n\n# Dynamic imports are disabled. 
This causes too many problems.\n#importAll('graphicsItems', globals(), locals())\n#importAll('widgets', globals(), locals(),\n #excludes=['MatplotlibWidget', 'RawImageWidget', 'RemoteGraphicsView'])\n\nfrom .graphicsItems.VTickGroup import * \nfrom .graphicsItems.GraphicsWidget import * \nfrom .graphicsItems.ScaleBar import * \nfrom .graphicsItems.PlotDataItem import * \nfrom .graphicsItems.GraphItem import * \nfrom .graphicsItems.TextItem import * \nfrom .graphicsItems.GraphicsLayout import * \nfrom .graphicsItems.UIGraphicsItem import * \nfrom .graphicsItems.GraphicsObject import * \nfrom .graphicsItems.PlotItem import * \nfrom .graphicsItems.ROI import * \nfrom .graphicsItems.InfiniteLine import * \nfrom .graphicsItems.HistogramLUTItem import * \nfrom .graphicsItems.GridItem import * \nfrom .graphicsItems.GradientLegend import * \nfrom .graphicsItems.GraphicsItem import * \nfrom .graphicsItems.BarGraphItem import * \nfrom .graphicsItems.ViewBox import * \nfrom .graphicsItems.ArrowItem import * \nfrom .graphicsItems.ImageItem import * \nfrom .graphicsItems.AxisItem import * \nfrom .graphicsItems.LabelItem import * \nfrom .graphicsItems.CurvePoint import * \nfrom .graphicsItems.GraphicsWidgetAnchor import * \nfrom .graphicsItems.PlotCurveItem import * \nfrom .graphicsItems.ButtonItem import * \nfrom .graphicsItems.GradientEditorItem import * \nfrom .graphicsItems.MultiPlotItem import * \nfrom .graphicsItems.ErrorBarItem import * \nfrom .graphicsItems.IsocurveItem import * \nfrom .graphicsItems.LinearRegionItem import * \nfrom .graphicsItems.FillBetweenItem import * \nfrom .graphicsItems.LegendItem import * \nfrom .graphicsItems.ScatterPlotItem import * \nfrom .graphicsItems.ItemGroup import * \n\nfrom .widgets.MultiPlotWidget import * \nfrom .widgets.ScatterPlotWidget import * \nfrom .widgets.ColorMapWidget import * \nfrom .widgets.FileDialog import * \nfrom .widgets.ValueLabel import * \nfrom .widgets.HistogramLUTWidget import * \nfrom .widgets.CheckTable import * \nfrom .widgets.BusyCursor import * \nfrom .widgets.PlotWidget import * \nfrom .widgets.ComboBox import * \nfrom .widgets.GradientWidget import * \nfrom .widgets.DataFilterWidget import * \nfrom .widgets.SpinBox import * \nfrom .widgets.JoystickButton import * \nfrom .widgets.GraphicsLayoutWidget import * \nfrom .widgets.TreeWidget import * \nfrom .widgets.PathButton import * \nfrom .widgets.VerticalLabel import * \nfrom .widgets.FeedbackButton import * \nfrom .widgets.ColorButton import * \nfrom .widgets.DataTreeWidget import * \nfrom .widgets.DiffTreeWidget import * \nfrom .widgets.GraphicsView import * \nfrom .widgets.LayoutWidget import * \nfrom .widgets.TableWidget import * \nfrom .widgets.ProgressDialog import *\nfrom .widgets.GroupBox import GroupBox\n\nfrom .imageview import *\nfrom .WidgetGroup import *\nfrom .Point import Point\nfrom .Vector import Vector\nfrom .SRTTransform import SRTTransform\nfrom .Transform3D import Transform3D\nfrom .SRTTransform3D import SRTTransform3D\nfrom .functions import *\nfrom .graphicsWindows import *\nfrom .SignalProxy import *\nfrom .colormap import *\nfrom .ptime import time\nfrom .Qt import isQObjectAlive\n\n\n##############################################################\n## PyQt and PySide both are prone to crashing on exit. \n## There are two general approaches to dealing with this:\n## 1. Install atexit handlers that assist in tearing down to avoid crashes.\n## This helps, but is never perfect.\n## 2. 
Terminate the process before python starts tearing down\n## This is potentially dangerous\n\n## Attempts to work around exit crashes:\nimport atexit\n_cleanupCalled = False\ndef cleanup():\n global _cleanupCalled\n if _cleanupCalled:\n return\n \n if not getConfigOption('exitCleanup'):\n return\n \n ViewBox.quit() ## tell ViewBox that it doesn't need to deregister views anymore.\n \n ## Workaround for Qt exit crash:\n ## ALL QGraphicsItems must have a scene before they are deleted.\n ## This is potentially very expensive, but preferred over crashing.\n ## Note: this appears to be fixed in PySide as of 2012.12, but it should be left in for a while longer..\n app = QtGui.QApplication.instance()\n if app is None or not isinstance(app, QtGui.QApplication):\n # app was never constructed is already deleted or is an\n # QCoreApplication/QGuiApplication and not a full QApplication\n return\n import gc\n s = QtGui.QGraphicsScene()\n for o in gc.get_objects():\n try:\n if isinstance(o, QtGui.QGraphicsItem) and isQObjectAlive(o) and o.scene() is None:\n if getConfigOption('crashWarning'):\n sys.stderr.write('Error: graphics item without scene. '\n 'Make sure ViewBox.close() and GraphicsView.close() '\n 'are properly called before app shutdown (%s)\\n' % (o,))\n \n s.addItem(o)\n except (RuntimeError, ReferenceError): ## occurs if a python wrapper no longer has its underlying C++ object\n continue\n _cleanupCalled = True\n\natexit.register(cleanup)\n\n# Call cleanup when QApplication quits. This is necessary because sometimes\n# the QApplication will quit before the atexit callbacks are invoked.\n# Note: cannot connect this function until QApplication has been created, so\n# instead we have GraphicsView.__init__ call this for us.\n_cleanupConnected = False\ndef _connectCleanup():\n global _cleanupConnected\n if _cleanupConnected:\n return\n QtGui.QApplication.instance().aboutToQuit.connect(cleanup)\n _cleanupConnected = True\n\n\n## Optional function for exiting immediately (with some manual teardown)\ndef exit():\n \"\"\"\n Causes python to exit without garbage-collecting any objects, and thus avoids\n calling object destructor methods. This is a sledgehammer workaround for \n a variety of bugs in PyQt and Pyside that cause crashes on exit.\n \n This function does the following in an attempt to 'safely' terminate\n the process:\n \n * Invoke atexit callbacks\n * Close all open file handles\n * os._exit()\n \n Note: there is some potential for causing damage with this function if you\n are using objects that _require_ their destructors to be called (for example,\n to properly terminate log files, disconnect from devices, etc). 
Situations\n like this are probably quite rare, but use at your own risk.\n \"\"\"\n \n ## first disable our own cleanup function; won't be needing it.\n setConfigOptions(exitCleanup=False)\n \n ## invoke atexit callbacks\n atexit._run_exitfuncs()\n \n ## close file handles\n if sys.platform == 'darwin':\n for fd in range(3, 4096):\n if fd in [7]: # trying to close 7 produces an illegal instruction on the Mac.\n continue\n try:\n os.close(fd)\n except OSError:\n pass\n else:\n os.closerange(3, 4096) ## just guessing on the maximum descriptor count..\n\n os._exit(0)\n \n\n\n## Convenience functions for command-line use\n\nplots = []\nimages = []\nQAPP = None\n\ndef plot(*args, **kargs):\n \"\"\"\n Create and return a :class:`PlotWindow <pyqtgraph.PlotWindow>` \n (this is just a window with :class:`PlotWidget <pyqtgraph.PlotWidget>` inside), plot data in it.\n Accepts a *title* argument to set the title of the window.\n All other arguments are used to plot data. (see :func:`PlotItem.plot() <pyqtgraph.PlotItem.plot>`)\n \"\"\"\n mkQApp()\n #if 'title' in kargs:\n #w = PlotWindow(title=kargs['title'])\n #del kargs['title']\n #else:\n #w = PlotWindow()\n #if len(args)+len(kargs) > 0:\n #w.plot(*args, **kargs)\n \n pwArgList = ['title', 'labels', 'name', 'left', 'right', 'top', 'bottom', 'background']\n pwArgs = {}\n dataArgs = {}\n for k in kargs:\n if k in pwArgList:\n pwArgs[k] = kargs[k]\n else:\n dataArgs[k] = kargs[k]\n \n w = PlotWindow(**pwArgs)\n if len(args) > 0 or len(dataArgs) > 0:\n w.plot(*args, **dataArgs)\n plots.append(w)\n w.show()\n return w\n \ndef image(*args, **kargs):\n \"\"\"\n Create and return an :class:`ImageWindow <pyqtgraph.ImageWindow>` \n (this is just a window with :class:`ImageView <pyqtgraph.ImageView>` widget inside), show image data inside.\n Will show 2D or 3D image data.\n Accepts a *title* argument to set the title of the window.\n All other arguments are used to show data. (see :func:`ImageView.setImage() <pyqtgraph.ImageView.setImage>`)\n \"\"\"\n mkQApp()\n w = ImageWindow(*args, **kargs)\n images.append(w)\n w.show()\n return w\nshow = image ## for backward compatibility\n\ndef dbg(*args, **kwds):\n \"\"\"\n Create a console window and begin watching for exceptions.\n \n All arguments are passed to :func:`ConsoleWidget.__init__() <pyqtgraph.console.ConsoleWidget.__init__>`.\n \"\"\"\n mkQApp()\n from . import console\n c = console.ConsoleWidget(*args, **kwds)\n c.catchAllExceptions()\n c.show()\n global consoles\n try:\n consoles.append(c)\n except NameError:\n consoles = [c]\n return c\n\n\ndef stack(*args, **kwds):\n \"\"\"\n Create a console window and show the current stack trace.\n \n All arguments are passed to :func:`ConsoleWidget.__init__() <pyqtgraph.console.ConsoleWidget.__init__>`.\n \"\"\"\n mkQApp()\n from . import console\n c = console.ConsoleWidget(*args, **kwds)\n c.setStack()\n c.show()\n global consoles\n try:\n consoles.append(c)\n except NameError:\n consoles = [c]\n return c\n", "path": "pyqtgraph/__init__.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n\"\"\"\nPyQtGraph - Scientific Graphics and GUI Library for Python\nwww.pyqtgraph.org\n\"\"\"\n\n__version__ = '0.11.0.dev0'\n\n### import all the goodies and add some helper functions for easy CLI use\n\n## 'Qt' is a local module; it is intended mainly to cover up the differences\n## between PyQt4 and PySide.\nfrom .Qt import QtGui, mkQApp\n\n## not really safe--If we accidentally create another QApplication, the process hangs (and it is very difficult to trace the cause)\n#if QtGui.QApplication.instance() is None:\n #app = QtGui.QApplication([])\n\nimport numpy ## pyqtgraph requires numpy\n ## (import here to avoid massive error dump later on if numpy is not available)\n\nimport os, sys\n\n## check python version\n## Allow anything >= 2.7\nif sys.version_info[0] < 2 or (sys.version_info[0] == 2 and sys.version_info[1] < 6):\n raise Exception(\"Pyqtgraph requires Python version 2.6 or greater (this is %d.%d)\" % (sys.version_info[0], sys.version_info[1]))\n\n## helpers for 2/3 compatibility\nfrom . import python2_3\n\n## in general openGL is poorly supported with Qt+GraphicsView.\n## we only enable it where the performance benefit is critical.\n## Note this only applies to 2D graphics; 3D graphics always use OpenGL.\nif 'linux' in sys.platform: ## linux has numerous bugs in opengl implementation\n useOpenGL = False\nelif 'darwin' in sys.platform: ## openGL can have a major impact on mac, but also has serious bugs\n useOpenGL = False\n if QtGui.QApplication.instance() is not None:\n print('Warning: QApplication was created before pyqtgraph was imported; there may be problems (to avoid bugs, call QApplication.setGraphicsSystem(\"raster\") before the QApplication is created).')\n if QtGui.QApplication.setGraphicsSystem:\n QtGui.QApplication.setGraphicsSystem('raster') ## work around a variety of bugs in the native graphics system \nelse:\n useOpenGL = False ## on windows there's a more even performance / bugginess tradeoff. \n \nCONFIG_OPTIONS = {\n 'useOpenGL': useOpenGL, ## by default, this is platform-dependent (see widgets/GraphicsView). 
Set to True or False to explicitly enable/disable opengl.\n 'leftButtonPan': True, ## if false, left button drags a rubber band for zooming in viewbox\n # foreground/background take any arguments to the 'mkColor' in /pyqtgraph/functions.py\n 'foreground': 'd', ## default foreground color for axes, labels, etc.\n 'background': 'k', ## default background for GraphicsWidget\n 'antialias': False,\n 'editorCommand': None, ## command used to invoke code editor from ConsoleWidgets\n 'useWeave': False, ## Use weave to speed up some operations, if it is available\n 'weaveDebug': False, ## Print full error message if weave compile fails\n 'exitCleanup': True, ## Attempt to work around some exit crash bugs in PyQt and PySide\n 'enableExperimental': False, ## Enable experimental features (the curious can search for this key in the code)\n 'crashWarning': False, # If True, print warnings about situations that may result in a crash\n 'imageAxisOrder': 'col-major', # For 'row-major', image data is expected in the standard (row, col) order.\n # For 'col-major', image data is expected in reversed (col, row) order.\n # The default is 'col-major' for backward compatibility, but this may\n # change in the future.\n} \n\n\ndef setConfigOption(opt, value):\n if opt not in CONFIG_OPTIONS:\n raise KeyError('Unknown configuration option \"%s\"' % opt)\n if opt == 'imageAxisOrder' and value not in ('row-major', 'col-major'):\n raise ValueError('imageAxisOrder must be either \"row-major\" or \"col-major\"')\n CONFIG_OPTIONS[opt] = value\n\ndef setConfigOptions(**opts):\n \"\"\"Set global configuration options. \n \n Each keyword argument sets one global option. \n \"\"\"\n for k,v in opts.items():\n setConfigOption(k, v)\n\ndef getConfigOption(opt):\n \"\"\"Return the value of a single global configuration option.\n \"\"\"\n return CONFIG_OPTIONS[opt]\n\n\ndef systemInfo():\n print(\"sys.platform: %s\" % sys.platform)\n print(\"sys.version: %s\" % sys.version)\n from .Qt import VERSION_INFO\n print(\"qt bindings: %s\" % VERSION_INFO)\n \n global __version__\n rev = None\n if __version__ is None: ## this code was probably checked out from bzr; look up the last-revision file\n lastRevFile = os.path.join(os.path.dirname(__file__), '..', '.bzr', 'branch', 'last-revision')\n if os.path.exists(lastRevFile):\n rev = open(lastRevFile, 'r').read().strip()\n \n print(\"pyqtgraph: %s; %s\" % (__version__, rev))\n print(\"config:\")\n import pprint\n pprint.pprint(CONFIG_OPTIONS)\n\n## Rename orphaned .pyc files. This is *probably* safe :)\n## We only do this if __version__ is None, indicating the code was probably pulled\n## from the repository. \ndef renamePyc(startDir):\n ### Used to rename orphaned .pyc files\n ### When a python file changes its location in the repository, usually the .pyc file\n ### is left behind, possibly causing mysterious and difficult to track bugs. \n\n ### Note that this is no longer necessary for python 3.2; from PEP 3147:\n ### \"If the py source file is missing, the pyc file inside __pycache__ will be ignored. 
\n ### This eliminates the problem of accidental stale pyc file imports.\"\n \n printed = False\n startDir = os.path.abspath(startDir)\n for path, dirs, files in os.walk(startDir):\n if '__pycache__' in path:\n continue\n for f in files:\n fileName = os.path.join(path, f)\n base, ext = os.path.splitext(fileName)\n py = base + \".py\"\n if ext == '.pyc' and not os.path.isfile(py):\n if not printed:\n print(\"NOTE: Renaming orphaned .pyc files:\")\n printed = True\n n = 1\n while True:\n name2 = fileName + \".renamed%d\" % n\n if not os.path.exists(name2):\n break\n n += 1\n print(\" \" + fileName + \" ==>\")\n print(\" \" + name2)\n os.rename(fileName, name2)\n \npath = os.path.split(__file__)[0]\nif __version__ is None and not hasattr(sys, 'frozen') and sys.version_info[0] == 2: ## If we are frozen, there's a good chance we don't have the original .py files anymore.\n renamePyc(path)\n\n\n## Import almost everything to make it available from a single namespace\n## don't import the more complex systems--canvas, parametertree, flowchart, dockarea\n## these must be imported separately.\n#from . import frozenSupport\n#def importModules(path, globals, locals, excludes=()):\n #\"\"\"Import all modules residing within *path*, return a dict of name: module pairs.\n \n #Note that *path* MUST be relative to the module doing the import. \n #\"\"\"\n #d = os.path.join(os.path.split(globals['__file__'])[0], path)\n #files = set()\n #for f in frozenSupport.listdir(d):\n #if frozenSupport.isdir(os.path.join(d, f)) and f not in ['__pycache__', 'tests']:\n #files.add(f)\n #elif f[-3:] == '.py' and f != '__init__.py':\n #files.add(f[:-3])\n #elif f[-4:] == '.pyc' and f != '__init__.pyc':\n #files.add(f[:-4])\n \n #mods = {}\n #path = path.replace(os.sep, '.')\n #for modName in files:\n #if modName in excludes:\n #continue\n #try:\n #if len(path) > 0:\n #modName = path + '.' + modName\n #print( \"from .%s import * \" % modName)\n #mod = __import__(modName, globals, locals, ['*'], 1)\n #mods[modName] = mod\n #except:\n #import traceback\n #traceback.print_stack()\n #sys.excepthook(*sys.exc_info())\n #print(\"[Error importing module: %s]\" % modName)\n \n #return mods\n\n#def importAll(path, globals, locals, excludes=()):\n #\"\"\"Given a list of modules, import all names from each module into the global namespace.\"\"\"\n #mods = importModules(path, globals, locals, excludes)\n #for mod in mods.values():\n #if hasattr(mod, '__all__'):\n #names = mod.__all__\n #else:\n #names = [n for n in dir(mod) if n[0] != '_']\n #for k in names:\n #if hasattr(mod, k):\n #globals[k] = getattr(mod, k)\n\n# Dynamic imports are disabled. 
This causes too many problems.\n#importAll('graphicsItems', globals(), locals())\n#importAll('widgets', globals(), locals(),\n #excludes=['MatplotlibWidget', 'RawImageWidget', 'RemoteGraphicsView'])\n\nfrom .graphicsItems.VTickGroup import * \nfrom .graphicsItems.GraphicsWidget import * \nfrom .graphicsItems.ScaleBar import * \nfrom .graphicsItems.PlotDataItem import * \nfrom .graphicsItems.GraphItem import * \nfrom .graphicsItems.TextItem import * \nfrom .graphicsItems.GraphicsLayout import * \nfrom .graphicsItems.UIGraphicsItem import * \nfrom .graphicsItems.GraphicsObject import * \nfrom .graphicsItems.PlotItem import * \nfrom .graphicsItems.ROI import * \nfrom .graphicsItems.InfiniteLine import * \nfrom .graphicsItems.HistogramLUTItem import * \nfrom .graphicsItems.GridItem import * \nfrom .graphicsItems.GradientLegend import * \nfrom .graphicsItems.GraphicsItem import * \nfrom .graphicsItems.BarGraphItem import * \nfrom .graphicsItems.ViewBox import * \nfrom .graphicsItems.ArrowItem import * \nfrom .graphicsItems.ImageItem import * \nfrom .graphicsItems.AxisItem import * \nfrom .graphicsItems.LabelItem import * \nfrom .graphicsItems.CurvePoint import * \nfrom .graphicsItems.GraphicsWidgetAnchor import * \nfrom .graphicsItems.PlotCurveItem import * \nfrom .graphicsItems.ButtonItem import * \nfrom .graphicsItems.GradientEditorItem import * \nfrom .graphicsItems.MultiPlotItem import * \nfrom .graphicsItems.ErrorBarItem import * \nfrom .graphicsItems.IsocurveItem import * \nfrom .graphicsItems.LinearRegionItem import * \nfrom .graphicsItems.FillBetweenItem import * \nfrom .graphicsItems.LegendItem import * \nfrom .graphicsItems.ScatterPlotItem import * \nfrom .graphicsItems.ItemGroup import * \n\nfrom .widgets.MultiPlotWidget import * \nfrom .widgets.ScatterPlotWidget import * \nfrom .widgets.ColorMapWidget import * \nfrom .widgets.FileDialog import * \nfrom .widgets.ValueLabel import * \nfrom .widgets.HistogramLUTWidget import * \nfrom .widgets.CheckTable import * \nfrom .widgets.BusyCursor import * \nfrom .widgets.PlotWidget import * \nfrom .widgets.ComboBox import * \nfrom .widgets.GradientWidget import * \nfrom .widgets.DataFilterWidget import * \nfrom .widgets.SpinBox import * \nfrom .widgets.JoystickButton import * \nfrom .widgets.GraphicsLayoutWidget import * \nfrom .widgets.TreeWidget import * \nfrom .widgets.PathButton import * \nfrom .widgets.VerticalLabel import * \nfrom .widgets.FeedbackButton import * \nfrom .widgets.ColorButton import * \nfrom .widgets.DataTreeWidget import * \nfrom .widgets.DiffTreeWidget import * \nfrom .widgets.GraphicsView import * \nfrom .widgets.LayoutWidget import * \nfrom .widgets.TableWidget import * \nfrom .widgets.ProgressDialog import *\nfrom .widgets.GroupBox import GroupBox\nfrom .widgets.RemoteGraphicsView import RemoteGraphicsView\n\nfrom .imageview import *\nfrom .WidgetGroup import *\nfrom .Point import Point\nfrom .Vector import Vector\nfrom .SRTTransform import SRTTransform\nfrom .Transform3D import Transform3D\nfrom .SRTTransform3D import SRTTransform3D\nfrom .functions import *\nfrom .graphicsWindows import *\nfrom .SignalProxy import *\nfrom .colormap import *\nfrom .ptime import time\nfrom .Qt import isQObjectAlive\n\n\n##############################################################\n## PyQt and PySide both are prone to crashing on exit. \n## There are two general approaches to dealing with this:\n## 1. Install atexit handlers that assist in tearing down to avoid crashes.\n## This helps, but is never perfect.\n## 2. 
Terminate the process before python starts tearing down\n## This is potentially dangerous\n\n## Attempts to work around exit crashes:\nimport atexit\n_cleanupCalled = False\ndef cleanup():\n global _cleanupCalled\n if _cleanupCalled:\n return\n \n if not getConfigOption('exitCleanup'):\n return\n \n ViewBox.quit() ## tell ViewBox that it doesn't need to deregister views anymore.\n \n ## Workaround for Qt exit crash:\n ## ALL QGraphicsItems must have a scene before they are deleted.\n ## This is potentially very expensive, but preferred over crashing.\n ## Note: this appears to be fixed in PySide as of 2012.12, but it should be left in for a while longer..\n app = QtGui.QApplication.instance()\n if app is None or not isinstance(app, QtGui.QApplication):\n # app was never constructed is already deleted or is an\n # QCoreApplication/QGuiApplication and not a full QApplication\n return\n import gc\n s = QtGui.QGraphicsScene()\n for o in gc.get_objects():\n try:\n if isinstance(o, QtGui.QGraphicsItem) and isQObjectAlive(o) and o.scene() is None:\n if getConfigOption('crashWarning'):\n sys.stderr.write('Error: graphics item without scene. '\n 'Make sure ViewBox.close() and GraphicsView.close() '\n 'are properly called before app shutdown (%s)\\n' % (o,))\n \n s.addItem(o)\n except (RuntimeError, ReferenceError): ## occurs if a python wrapper no longer has its underlying C++ object\n continue\n _cleanupCalled = True\n\natexit.register(cleanup)\n\n# Call cleanup when QApplication quits. This is necessary because sometimes\n# the QApplication will quit before the atexit callbacks are invoked.\n# Note: cannot connect this function until QApplication has been created, so\n# instead we have GraphicsView.__init__ call this for us.\n_cleanupConnected = False\ndef _connectCleanup():\n global _cleanupConnected\n if _cleanupConnected:\n return\n QtGui.QApplication.instance().aboutToQuit.connect(cleanup)\n _cleanupConnected = True\n\n\n## Optional function for exiting immediately (with some manual teardown)\ndef exit():\n \"\"\"\n Causes python to exit without garbage-collecting any objects, and thus avoids\n calling object destructor methods. This is a sledgehammer workaround for \n a variety of bugs in PyQt and Pyside that cause crashes on exit.\n \n This function does the following in an attempt to 'safely' terminate\n the process:\n \n * Invoke atexit callbacks\n * Close all open file handles\n * os._exit()\n \n Note: there is some potential for causing damage with this function if you\n are using objects that _require_ their destructors to be called (for example,\n to properly terminate log files, disconnect from devices, etc). 
Situations\n like this are probably quite rare, but use at your own risk.\n \"\"\"\n \n ## first disable our own cleanup function; won't be needing it.\n setConfigOptions(exitCleanup=False)\n \n ## invoke atexit callbacks\n atexit._run_exitfuncs()\n \n ## close file handles\n if sys.platform == 'darwin':\n for fd in range(3, 4096):\n if fd in [7]: # trying to close 7 produces an illegal instruction on the Mac.\n continue\n try:\n os.close(fd)\n except OSError:\n pass\n else:\n os.closerange(3, 4096) ## just guessing on the maximum descriptor count..\n\n os._exit(0)\n \n\n\n## Convenience functions for command-line use\n\nplots = []\nimages = []\nQAPP = None\n\ndef plot(*args, **kargs):\n \"\"\"\n Create and return a :class:`PlotWindow <pyqtgraph.PlotWindow>` \n (this is just a window with :class:`PlotWidget <pyqtgraph.PlotWidget>` inside), plot data in it.\n Accepts a *title* argument to set the title of the window.\n All other arguments are used to plot data. (see :func:`PlotItem.plot() <pyqtgraph.PlotItem.plot>`)\n \"\"\"\n mkQApp()\n #if 'title' in kargs:\n #w = PlotWindow(title=kargs['title'])\n #del kargs['title']\n #else:\n #w = PlotWindow()\n #if len(args)+len(kargs) > 0:\n #w.plot(*args, **kargs)\n \n pwArgList = ['title', 'labels', 'name', 'left', 'right', 'top', 'bottom', 'background']\n pwArgs = {}\n dataArgs = {}\n for k in kargs:\n if k in pwArgList:\n pwArgs[k] = kargs[k]\n else:\n dataArgs[k] = kargs[k]\n \n w = PlotWindow(**pwArgs)\n if len(args) > 0 or len(dataArgs) > 0:\n w.plot(*args, **dataArgs)\n plots.append(w)\n w.show()\n return w\n \ndef image(*args, **kargs):\n \"\"\"\n Create and return an :class:`ImageWindow <pyqtgraph.ImageWindow>` \n (this is just a window with :class:`ImageView <pyqtgraph.ImageView>` widget inside), show image data inside.\n Will show 2D or 3D image data.\n Accepts a *title* argument to set the title of the window.\n All other arguments are used to show data. (see :func:`ImageView.setImage() <pyqtgraph.ImageView.setImage>`)\n \"\"\"\n mkQApp()\n w = ImageWindow(*args, **kargs)\n images.append(w)\n w.show()\n return w\nshow = image ## for backward compatibility\n\ndef dbg(*args, **kwds):\n \"\"\"\n Create a console window and begin watching for exceptions.\n \n All arguments are passed to :func:`ConsoleWidget.__init__() <pyqtgraph.console.ConsoleWidget.__init__>`.\n \"\"\"\n mkQApp()\n from . import console\n c = console.ConsoleWidget(*args, **kwds)\n c.catchAllExceptions()\n c.show()\n global consoles\n try:\n consoles.append(c)\n except NameError:\n consoles = [c]\n return c\n\n\ndef stack(*args, **kwds):\n \"\"\"\n Create a console window and show the current stack trace.\n \n All arguments are passed to :func:`ConsoleWidget.__init__() <pyqtgraph.console.ConsoleWidget.__init__>`.\n \"\"\"\n mkQApp()\n from . import console\n c = console.ConsoleWidget(*args, **kwds)\n c.setStack()\n c.show()\n global consoles\n try:\n consoles.append(c)\n except NameError:\n consoles = [c]\n return c\n", "path": "pyqtgraph/__init__.py" } ]
diff --git a/pyqtgraph/__init__.py b/pyqtgraph/__init__.py index aad5c3c801..d2ba61ee52 100644 --- a/pyqtgraph/__init__.py +++ b/pyqtgraph/__init__.py @@ -260,6 +260,7 @@ def renamePyc(startDir): from .widgets.TableWidget import * from .widgets.ProgressDialog import * from .widgets.GroupBox import GroupBox +from .widgets.RemoteGraphicsView import RemoteGraphicsView from .imageview import * from .WidgetGroup import *
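For context on what the one-line change in this diff enables: once `RemoteGraphicsView` is re-exported from the package root, it can be reached as `pg.RemoteGraphicsView` instead of being imported from `pyqtgraph.widgets.RemoteGraphicsView`. The snippet below is my own minimal usage sketch, not part of the patch, and assumes a working Qt binding is installed:

```python
import pyqtgraph as pg

app = pg.mkQApp()

# With the added re-export, the remote-process view is reachable from the top level:
view = pg.RemoteGraphicsView()   # spawns a separate process that does the rendering
plot = view.pg.PlotItem()        # 'view.pg' proxies the pyqtgraph module in that process
view.setCentralItem(plot)
plot.plot([1, 4, 2, 3, 5], clear=True)

view.show()
app.exec_()
```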
tobymao__sqlglot-2365
Support '' to escape single quote character in a string in Redshift dialect **Fully reproducible code snippet** ```python import sqlglot sql_code = """ CREATE TABLE IF NOT EXISTS myschema.mytable ( mycolumn bigint, ) DISTKEY (mycolumn) SORTKEY (mycolumn) ; COMMENT ON COLUMN myschema.mytable.mycolumn IS 'my example = \\'working\\''; COMMENT ON COLUMN myschema.mytable.mycolumn IS 'my example = ''not working'''; """ expressions = sqlglot.parse(sql_code, read="redshift") ``` Error: ```console Traceback (most recent call last): ... raise error sqlglot.errors.ParseError: Invalid expression / Unexpected token. Line 9, Col: 75. column IS 'my example = \'working\''; COMMENT ON COLUMN myschema.mytable.mycolumn IS 'my example = ''not working'''; ``` **Official Documentation** I couldn't find the right documentation on AWS that explains this, but I ran the query on Redshift and it works perfectly.
[ { "content": "from __future__ import annotations\n\nimport typing as t\n\nfrom sqlglot import exp, transforms\nfrom sqlglot.dialects.dialect import (\n concat_to_dpipe_sql,\n concat_ws_to_dpipe_sql,\n rename_func,\n ts_or_ds_to_date_sql,\n)\nfrom sqlglot.dialects.postgres import Postgres\nfrom sqlglot.helper import seq_get\nfrom sqlglot.tokens import TokenType\n\n\ndef _json_sql(self: Redshift.Generator, expression: exp.JSONExtract | exp.JSONExtractScalar) -> str:\n return f'{self.sql(expression, \"this\")}.\"{expression.expression.name}\"'\n\n\ndef _parse_date_add(args: t.List) -> exp.DateAdd:\n return exp.DateAdd(\n this=exp.TsOrDsToDate(this=seq_get(args, 2)),\n expression=seq_get(args, 1),\n unit=seq_get(args, 0),\n )\n\n\nclass Redshift(Postgres):\n # https://docs.aws.amazon.com/redshift/latest/dg/r_names.html\n RESOLVES_IDENTIFIERS_AS_UPPERCASE = None\n\n SUPPORTS_USER_DEFINED_TYPES = False\n\n TIME_FORMAT = \"'YYYY-MM-DD HH:MI:SS'\"\n TIME_MAPPING = {\n **Postgres.TIME_MAPPING,\n \"MON\": \"%b\",\n \"HH\": \"%H\",\n }\n\n class Parser(Postgres.Parser):\n FUNCTIONS = {\n **Postgres.Parser.FUNCTIONS,\n \"ADD_MONTHS\": lambda args: exp.DateAdd(\n this=exp.TsOrDsToDate(this=seq_get(args, 0)),\n expression=seq_get(args, 1),\n unit=exp.var(\"month\"),\n ),\n \"DATEADD\": _parse_date_add,\n \"DATE_ADD\": _parse_date_add,\n \"DATEDIFF\": lambda args: exp.DateDiff(\n this=exp.TsOrDsToDate(this=seq_get(args, 2)),\n expression=exp.TsOrDsToDate(this=seq_get(args, 1)),\n unit=seq_get(args, 0),\n ),\n \"STRTOL\": exp.FromBase.from_arg_list,\n }\n\n def _parse_types(\n self, check_func: bool = False, schema: bool = False, allow_identifiers: bool = True\n ) -> t.Optional[exp.Expression]:\n this = super()._parse_types(\n check_func=check_func, schema=schema, allow_identifiers=allow_identifiers\n )\n\n if (\n isinstance(this, exp.DataType)\n and this.is_type(\"varchar\")\n and this.expressions\n and this.expressions[0].this == exp.column(\"MAX\")\n ):\n this.set(\"expressions\", [exp.var(\"MAX\")])\n\n return this\n\n def _parse_convert(self, strict: bool) -> t.Optional[exp.Expression]:\n to = self._parse_types()\n self._match(TokenType.COMMA)\n this = self._parse_bitwise()\n return self.expression(exp.TryCast, this=this, to=to)\n\n class Tokenizer(Postgres.Tokenizer):\n BIT_STRINGS = []\n HEX_STRINGS = []\n STRING_ESCAPES = [\"\\\\\"]\n\n KEYWORDS = {\n **Postgres.Tokenizer.KEYWORDS,\n \"HLLSKETCH\": TokenType.HLLSKETCH,\n \"SUPER\": TokenType.SUPER,\n \"SYSDATE\": TokenType.CURRENT_TIMESTAMP,\n \"TOP\": TokenType.TOP,\n \"UNLOAD\": TokenType.COMMAND,\n \"VARBYTE\": TokenType.VARBINARY,\n }\n\n # Redshift allows # to appear as a table identifier prefix\n SINGLE_TOKENS = Postgres.Tokenizer.SINGLE_TOKENS.copy()\n SINGLE_TOKENS.pop(\"#\")\n\n class Generator(Postgres.Generator):\n LOCKING_READS_SUPPORTED = False\n RENAME_TABLE_WITH_DB = False\n QUERY_HINTS = False\n VALUES_AS_TABLE = False\n TZ_TO_WITH_TIME_ZONE = True\n NVL2_SUPPORTED = True\n\n TYPE_MAPPING = {\n **Postgres.Generator.TYPE_MAPPING,\n exp.DataType.Type.BINARY: \"VARBYTE\",\n exp.DataType.Type.INT: \"INTEGER\",\n exp.DataType.Type.TIMETZ: \"TIME\",\n exp.DataType.Type.TIMESTAMPTZ: \"TIMESTAMP\",\n exp.DataType.Type.VARBINARY: \"VARBYTE\",\n }\n\n PROPERTIES_LOCATION = {\n **Postgres.Generator.PROPERTIES_LOCATION,\n exp.LikeProperty: exp.Properties.Location.POST_WITH,\n }\n\n TRANSFORMS = {\n **Postgres.Generator.TRANSFORMS,\n exp.Concat: concat_to_dpipe_sql,\n exp.ConcatWs: concat_ws_to_dpipe_sql,\n exp.CurrentTimestamp: lambda self, 
e: \"SYSDATE\",\n exp.DateAdd: lambda self, e: self.func(\n \"DATEADD\", exp.var(e.text(\"unit\") or \"day\"), e.expression, e.this\n ),\n exp.DateDiff: lambda self, e: self.func(\n \"DATEDIFF\", exp.var(e.text(\"unit\") or \"day\"), e.expression, e.this\n ),\n exp.DistKeyProperty: lambda self, e: f\"DISTKEY({e.name})\",\n exp.DistStyleProperty: lambda self, e: self.naked_property(e),\n exp.FromBase: rename_func(\"STRTOL\"),\n exp.JSONExtract: _json_sql,\n exp.JSONExtractScalar: _json_sql,\n exp.SafeConcat: concat_to_dpipe_sql,\n exp.Select: transforms.preprocess(\n [transforms.eliminate_distinct_on, transforms.eliminate_semi_and_anti_joins]\n ),\n exp.SortKeyProperty: lambda self, e: f\"{'COMPOUND ' if e.args['compound'] else ''}SORTKEY({self.format_args(*e.this)})\",\n exp.TsOrDsToDate: ts_or_ds_to_date_sql(\"redshift\"),\n }\n\n # Postgres maps exp.Pivot to no_pivot_sql, but Redshift support pivots\n TRANSFORMS.pop(exp.Pivot)\n\n # Redshift uses the POW | POWER (expr1, expr2) syntax instead of expr1 ^ expr2 (postgres)\n TRANSFORMS.pop(exp.Pow)\n\n # Redshift supports ANY_VALUE(..)\n TRANSFORMS.pop(exp.AnyValue)\n\n RESERVED_KEYWORDS = {*Postgres.Generator.RESERVED_KEYWORDS, \"snapshot\", \"type\"}\n\n def with_properties(self, properties: exp.Properties) -> str:\n \"\"\"Redshift doesn't have `WITH` as part of their with_properties so we remove it\"\"\"\n return self.properties(properties, prefix=\" \", suffix=\"\")\n\n def datatype_sql(self, expression: exp.DataType) -> str:\n \"\"\"\n Redshift converts the `TEXT` data type to `VARCHAR(255)` by default when people more generally mean\n VARCHAR of max length which is `VARCHAR(max)` in Redshift. Therefore if we get a `TEXT` data type\n without precision we convert it to `VARCHAR(max)` and if it does have precision then we just convert\n `TEXT` to `VARCHAR`.\n \"\"\"\n if expression.is_type(\"text\"):\n expression = expression.copy()\n expression.set(\"this\", exp.DataType.Type.VARCHAR)\n precision = expression.args.get(\"expressions\")\n\n if not precision:\n expression.append(\"expressions\", exp.var(\"MAX\"))\n\n return super().datatype_sql(expression)\n", "path": "sqlglot/dialects/redshift.py" } ]
[ { "content": "from __future__ import annotations\n\nimport typing as t\n\nfrom sqlglot import exp, transforms\nfrom sqlglot.dialects.dialect import (\n concat_to_dpipe_sql,\n concat_ws_to_dpipe_sql,\n rename_func,\n ts_or_ds_to_date_sql,\n)\nfrom sqlglot.dialects.postgres import Postgres\nfrom sqlglot.helper import seq_get\nfrom sqlglot.tokens import TokenType\n\n\ndef _json_sql(self: Redshift.Generator, expression: exp.JSONExtract | exp.JSONExtractScalar) -> str:\n return f'{self.sql(expression, \"this\")}.\"{expression.expression.name}\"'\n\n\ndef _parse_date_add(args: t.List) -> exp.DateAdd:\n return exp.DateAdd(\n this=exp.TsOrDsToDate(this=seq_get(args, 2)),\n expression=seq_get(args, 1),\n unit=seq_get(args, 0),\n )\n\n\nclass Redshift(Postgres):\n # https://docs.aws.amazon.com/redshift/latest/dg/r_names.html\n RESOLVES_IDENTIFIERS_AS_UPPERCASE = None\n\n SUPPORTS_USER_DEFINED_TYPES = False\n\n TIME_FORMAT = \"'YYYY-MM-DD HH:MI:SS'\"\n TIME_MAPPING = {\n **Postgres.TIME_MAPPING,\n \"MON\": \"%b\",\n \"HH\": \"%H\",\n }\n\n class Parser(Postgres.Parser):\n FUNCTIONS = {\n **Postgres.Parser.FUNCTIONS,\n \"ADD_MONTHS\": lambda args: exp.DateAdd(\n this=exp.TsOrDsToDate(this=seq_get(args, 0)),\n expression=seq_get(args, 1),\n unit=exp.var(\"month\"),\n ),\n \"DATEADD\": _parse_date_add,\n \"DATE_ADD\": _parse_date_add,\n \"DATEDIFF\": lambda args: exp.DateDiff(\n this=exp.TsOrDsToDate(this=seq_get(args, 2)),\n expression=exp.TsOrDsToDate(this=seq_get(args, 1)),\n unit=seq_get(args, 0),\n ),\n \"STRTOL\": exp.FromBase.from_arg_list,\n }\n\n def _parse_types(\n self, check_func: bool = False, schema: bool = False, allow_identifiers: bool = True\n ) -> t.Optional[exp.Expression]:\n this = super()._parse_types(\n check_func=check_func, schema=schema, allow_identifiers=allow_identifiers\n )\n\n if (\n isinstance(this, exp.DataType)\n and this.is_type(\"varchar\")\n and this.expressions\n and this.expressions[0].this == exp.column(\"MAX\")\n ):\n this.set(\"expressions\", [exp.var(\"MAX\")])\n\n return this\n\n def _parse_convert(self, strict: bool) -> t.Optional[exp.Expression]:\n to = self._parse_types()\n self._match(TokenType.COMMA)\n this = self._parse_bitwise()\n return self.expression(exp.TryCast, this=this, to=to)\n\n class Tokenizer(Postgres.Tokenizer):\n BIT_STRINGS = []\n HEX_STRINGS = []\n STRING_ESCAPES = [\"\\\\\", \"'\"]\n\n KEYWORDS = {\n **Postgres.Tokenizer.KEYWORDS,\n \"HLLSKETCH\": TokenType.HLLSKETCH,\n \"SUPER\": TokenType.SUPER,\n \"SYSDATE\": TokenType.CURRENT_TIMESTAMP,\n \"TOP\": TokenType.TOP,\n \"UNLOAD\": TokenType.COMMAND,\n \"VARBYTE\": TokenType.VARBINARY,\n }\n\n # Redshift allows # to appear as a table identifier prefix\n SINGLE_TOKENS = Postgres.Tokenizer.SINGLE_TOKENS.copy()\n SINGLE_TOKENS.pop(\"#\")\n\n class Generator(Postgres.Generator):\n LOCKING_READS_SUPPORTED = False\n RENAME_TABLE_WITH_DB = False\n QUERY_HINTS = False\n VALUES_AS_TABLE = False\n TZ_TO_WITH_TIME_ZONE = True\n NVL2_SUPPORTED = True\n\n TYPE_MAPPING = {\n **Postgres.Generator.TYPE_MAPPING,\n exp.DataType.Type.BINARY: \"VARBYTE\",\n exp.DataType.Type.INT: \"INTEGER\",\n exp.DataType.Type.TIMETZ: \"TIME\",\n exp.DataType.Type.TIMESTAMPTZ: \"TIMESTAMP\",\n exp.DataType.Type.VARBINARY: \"VARBYTE\",\n }\n\n PROPERTIES_LOCATION = {\n **Postgres.Generator.PROPERTIES_LOCATION,\n exp.LikeProperty: exp.Properties.Location.POST_WITH,\n }\n\n TRANSFORMS = {\n **Postgres.Generator.TRANSFORMS,\n exp.Concat: concat_to_dpipe_sql,\n exp.ConcatWs: concat_ws_to_dpipe_sql,\n exp.CurrentTimestamp: lambda 
self, e: \"SYSDATE\",\n exp.DateAdd: lambda self, e: self.func(\n \"DATEADD\", exp.var(e.text(\"unit\") or \"day\"), e.expression, e.this\n ),\n exp.DateDiff: lambda self, e: self.func(\n \"DATEDIFF\", exp.var(e.text(\"unit\") or \"day\"), e.expression, e.this\n ),\n exp.DistKeyProperty: lambda self, e: f\"DISTKEY({e.name})\",\n exp.DistStyleProperty: lambda self, e: self.naked_property(e),\n exp.FromBase: rename_func(\"STRTOL\"),\n exp.JSONExtract: _json_sql,\n exp.JSONExtractScalar: _json_sql,\n exp.SafeConcat: concat_to_dpipe_sql,\n exp.Select: transforms.preprocess(\n [transforms.eliminate_distinct_on, transforms.eliminate_semi_and_anti_joins]\n ),\n exp.SortKeyProperty: lambda self, e: f\"{'COMPOUND ' if e.args['compound'] else ''}SORTKEY({self.format_args(*e.this)})\",\n exp.TsOrDsToDate: ts_or_ds_to_date_sql(\"redshift\"),\n }\n\n # Postgres maps exp.Pivot to no_pivot_sql, but Redshift support pivots\n TRANSFORMS.pop(exp.Pivot)\n\n # Redshift uses the POW | POWER (expr1, expr2) syntax instead of expr1 ^ expr2 (postgres)\n TRANSFORMS.pop(exp.Pow)\n\n # Redshift supports ANY_VALUE(..)\n TRANSFORMS.pop(exp.AnyValue)\n\n RESERVED_KEYWORDS = {*Postgres.Generator.RESERVED_KEYWORDS, \"snapshot\", \"type\"}\n\n def with_properties(self, properties: exp.Properties) -> str:\n \"\"\"Redshift doesn't have `WITH` as part of their with_properties so we remove it\"\"\"\n return self.properties(properties, prefix=\" \", suffix=\"\")\n\n def datatype_sql(self, expression: exp.DataType) -> str:\n \"\"\"\n Redshift converts the `TEXT` data type to `VARCHAR(255)` by default when people more generally mean\n VARCHAR of max length which is `VARCHAR(max)` in Redshift. Therefore if we get a `TEXT` data type\n without precision we convert it to `VARCHAR(max)` and if it does have precision then we just convert\n `TEXT` to `VARCHAR`.\n \"\"\"\n if expression.is_type(\"text\"):\n expression = expression.copy()\n expression.set(\"this\", exp.DataType.Type.VARCHAR)\n precision = expression.args.get(\"expressions\")\n\n if not precision:\n expression.append(\"expressions\", exp.var(\"MAX\"))\n\n return super().datatype_sql(expression)\n", "path": "sqlglot/dialects/redshift.py" } ]
diff --git a/sqlglot/dialects/redshift.py b/sqlglot/dialects/redshift.py index 2145844ec7..88e4448c12 100644 --- a/sqlglot/dialects/redshift.py +++ b/sqlglot/dialects/redshift.py @@ -83,7 +83,7 @@ def _parse_convert(self, strict: bool) -> t.Optional[exp.Expression]: class Tokenizer(Postgres.Tokenizer): BIT_STRINGS = [] HEX_STRINGS = [] - STRING_ESCAPES = ["\\"] + STRING_ESCAPES = ["\\", "'"] KEYWORDS = { **Postgres.Tokenizer.KEYWORDS, diff --git a/tests/dialects/test_redshift.py b/tests/dialects/test_redshift.py index 5f337b0524..ae1b987e0c 100644 --- a/tests/dialects/test_redshift.py +++ b/tests/dialects/test_redshift.py @@ -6,6 +6,11 @@ class TestRedshift(Validator): dialect = "redshift" def test_redshift(self): + self.validate_identity( + "SELECT 'a''b'", + "SELECT 'a\\'b'", + ) + self.validate_all( "x ~* 'pat'", write={
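To illustrate the effect of adding `'` to `STRING_ESCAPES` above, here is a small usage sketch (my own example, mirroring the test added in the diff and the repro from the issue): with the Redshift tokenizer treating a quote as an escape character, a doubled single quote inside a string literal now tokenizes as an escaped quote instead of terminating the string.

```python
import sqlglot

# Doubled single quotes inside a Redshift string literal now parse as an escaped quote.
print(sqlglot.parse_one("SELECT 'a''b'", read="redshift").sql(dialect="redshift"))
# -> SELECT 'a\'b'   (the Redshift generator still emits the backslash-escaped form)

# The statement from the issue report parses as well, instead of raising ParseError:
sqlglot.parse(
    "COMMENT ON COLUMN myschema.mytable.mycolumn IS 'my example = ''now working'''",
    read="redshift",
)
```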
wagtail__wagtail-6433
Change code block style in the docs The colours in our existing code blocks fail WCAG AA on contrast: https://webaim.org/resources/contrastchecker/?fcolor=408090&bcolor=EEFFCC See an example here: https://docs.wagtail.io/en/stable/advanced_topics/performance.html#cache It looks like ``sphinx-rtd-theme`` uses a different style for their own docs: https://sphinx-rtd-theme.readthedocs.io/en/latest/demo/demo.html#code-blocks so maybe we should switch to that.
[ { "content": "# -*- coding: utf-8 -*-\n#\n# Wagtail documentation build configuration file, created by\n# sphinx-quickstart on Tue Jan 14 17:38:55 2014.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\n\nfrom datetime import datetime\n\n\n# on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('..'))\n\n# Autodoc may need to import some models modules which require django settings\n# be configured\nos.environ['DJANGO_SETTINGS_MODULE'] = 'wagtail.tests.settings'\nimport django\ndjango.setup()\n\n# Use SQLite3 database engine so it doesn't attempt to use psycopg2 on RTD\nos.environ['DATABASE_ENGINE'] = 'django.db.backends.sqlite3'\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n]\n\nif not on_rtd:\n extensions.append('sphinxcontrib.spelling')\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Wagtail'\ncopyright = u'{year:d}, Torchbox'.format(year=datetime.now().year)\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n\n# Get Wagtail version\nfrom wagtail import __version__, VERSION\n\n# The short X.Y version.\nversion = '{}.{}'.format(VERSION[0], VERSION[1])\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. 
cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# splhinxcontrib.spelling settings\n\nspelling_lang = 'en_GB'\nspelling_word_list_filename='spelling_wordlist.txt'\n\n# sphinx.ext.intersphinx settings\nintersphinx_mapping = {\n 'django': ('https://docs.djangoproject.com/en/stable/', 'https://docs.djangoproject.com/en/stable/_objects/')\n}\n\n\n# -- Options for HTML output ----------------------------------------------\n\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = 'logo.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = 'favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. 
The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Wagtaildoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'Wagtail.tex', u'Wagtail Documentation',\n u'Torchbox', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'wagtail', u'Wagtail Documentation',\n [u'Torchbox'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Wagtail', u'Wagtail Documentation',\n u'Torchbox', 'Wagtail', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\ndef setup(app):\n app.add_css_file('css/custom.css')\n app.add_js_file('js/banner.js')\n", "path": "docs/conf.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n#\n# Wagtail documentation build configuration file, created by\n# sphinx-quickstart on Tue Jan 14 17:38:55 2014.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\n\nfrom datetime import datetime\n\n\n# on_rtd is whether we are on readthedocs.org, this line of code grabbed from docs.readthedocs.org\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('..'))\n\n# Autodoc may need to import some models modules which require django settings\n# be configured\nos.environ['DJANGO_SETTINGS_MODULE'] = 'wagtail.tests.settings'\nimport django\ndjango.setup()\n\n# Use SQLite3 database engine so it doesn't attempt to use psycopg2 on RTD\nos.environ['DATABASE_ENGINE'] = 'django.db.backends.sqlite3'\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n]\n\nif not on_rtd:\n extensions.append('sphinxcontrib.spelling')\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Wagtail'\ncopyright = u'{year:d}, Torchbox'.format(year=datetime.now().year)\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n\n# Get Wagtail version\nfrom wagtail import __version__, VERSION\n\n# The short X.Y version.\nversion = '{}.{}'.format(VERSION[0], VERSION[1])\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. 
cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'default'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n\n# splhinxcontrib.spelling settings\n\nspelling_lang = 'en_GB'\nspelling_word_list_filename='spelling_wordlist.txt'\n\n# sphinx.ext.intersphinx settings\nintersphinx_mapping = {\n 'django': ('https://docs.djangoproject.com/en/stable/', 'https://docs.djangoproject.com/en/stable/_objects/')\n}\n\n\n# -- Options for HTML output ----------------------------------------------\n\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = 'logo.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = 'favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. 
The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Wagtaildoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'Wagtail.tex', u'Wagtail Documentation',\n u'Torchbox', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('index', 'wagtail', u'Wagtail Documentation',\n [u'Torchbox'], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'Wagtail', u'Wagtail Documentation',\n u'Torchbox', 'Wagtail', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\n\ndef setup(app):\n app.add_css_file('css/custom.css')\n app.add_js_file('js/banner.js')\n", "path": "docs/conf.py" } ]
diff --git a/docs/conf.py b/docs/conf.py index fbc26a1f7f60..65bd3c605a42 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -115,7 +115,7 @@ #show_authors = False # The name of the Pygments (syntax highlighting) style to use. -pygments_style = 'sphinx' +pygments_style = 'default' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = []
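The whole fix is the one-line Sphinx setting shown in the diff (`pygments_style = 'default'` in `docs/conf.py`). As a side note of my own, not part of the patch, the styles that ship with Pygments can be listed with its real `get_all_styles` API if you want to compare alternatives locally before committing to one:

```python
# Sketch for previewing available Pygments styles before picking one in docs/conf.py;
# the 'default' style chosen in the diff is bundled with Pygments itself.
from pygments.styles import get_all_styles

for name in sorted(get_all_styles()):
    print(name)

# In docs/conf.py the relevant line is simply:
# pygments_style = 'default'
```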
mitmproxy__mitmproxy-6382
An error occurred when trying to open a punycode domain #### Problem Description When trying to open a punycode domain, for example https://xn--80afnfom.xn--80ahmohdapg.xn--80asehdb/login, an error occurs in mitmproxy mitmproxy log ``` [13:35:19.966][192.168.20.31:53287] client connect [13:35:20.032][192.168.20.31:53287] server connect мойгаз.смородина.онлайн:443 (194.226.55.22:443) [13:35:20.074] Addon error: DNSName values should be passed as an A-label string. This means unicode characters should be encoded via a library like idna. Traceback (most recent call last): File "mitmproxy\certs.py", line 271, in dummy_cert File "ipaddress.py", line 54, in ip_address ValueError: 'мойгаз.смородина.онлайн' does not appear to be an IPv4 or IPv6 address During handling of the above exception, another exception occurred: Traceback (most recent call last): File "cryptography\x509\general_name.py", line 84, in __init__ UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-5: ordinal not in range(128) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "mitmproxy\addons\tlsconfig.py", line 177, in tls_start_client File "mitmproxy\addons\tlsconfig.py", line 516, in get_cert File "mitmproxy\certs.py", line 526, in get_cert File "mitmproxy\certs.py", line 273, in dummy_cert File "cryptography\x509\general_name.py", line 86, in __init__ ValueError: DNSName values should be passed as an A-label string. This means unicode characters should be encoded via a library like idna. [13:35:20.078][192.168.20.31:53287] No TLS context was provided, failing connection. [13:35:20.079][192.168.20.31:53287] client disconnect [13:35:20.079][192.168.20.31:53287] server disconnect мойгаз.смородина.онлайн:443 (194.226.55.22:443) ``` #### Steps to reproduce the behavior: 1. Open in browser https://xn--80afnfom.xn--80ahmohdapg.xn--80asehdb/login 2. Result ![image](https://github.com/mitmproxy/mitmproxy/assets/4923679/9bf298e3-bcb2-44d9-8b22-aefe123a9be8) #### System ``` Mitmproxy: 10.0.0 binary Python: 3.11.4 OpenSSL: OpenSSL 3.0.7 1 Nov 2022 Platform: Windows-10-10.0.19045-SP0 ```
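The traceback above points at `cryptography`'s `x509.DNSName` rejecting a non-ASCII hostname when `dummy_cert` builds the certificate's SubjectAlternativeName. The excerpt of `mitmproxy/certs.py` that follows shows the code before the fix; the actual patch is not reproduced in this section, but a minimal sketch of the conversion the error message asks for (encoding the Unicode U-label to an ASCII A-label before building the SAN entry) looks like this, using only Python's built-in `idna` codec — the helper name is hypothetical, not mitmproxy's API:

```python
# Hypothetical helper sketch: convert a Unicode (U-label) hostname to an
# ASCII A-label before passing it to cryptography's x509.DNSName.
from cryptography import x509


def dns_name(hostname: str) -> x509.DNSName:
    try:
        hostname.encode("ascii")
    except UnicodeEncodeError:
        # e.g. 'мойгаз.смородина.онлайн' -> 'xn--80afnfom.xn--80ahmohdapg.xn--80asehdb'
        hostname = hostname.encode("idna").decode("ascii")
    return x509.DNSName(hostname)
```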
[ { "content": "import contextlib\nimport datetime\nimport ipaddress\nimport os\nimport re\nimport sys\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import cast\nfrom typing import NewType\nfrom typing import Optional\nfrom typing import Union\n\nimport OpenSSL\nfrom cryptography import x509\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives import serialization\nfrom cryptography.hazmat.primitives.asymmetric import dsa\nfrom cryptography.hazmat.primitives.asymmetric import ec\nfrom cryptography.hazmat.primitives.asymmetric import rsa\nfrom cryptography.hazmat.primitives.serialization import pkcs12\nfrom cryptography.x509 import ExtendedKeyUsageOID\nfrom cryptography.x509 import NameOID\n\nfrom mitmproxy.coretypes import serializable\n\n# Default expiry must not be too long: https://github.com/mitmproxy/mitmproxy/issues/815\nCA_EXPIRY = datetime.timedelta(days=10 * 365)\nCERT_EXPIRY = datetime.timedelta(days=365)\n\n# Generated with \"openssl dhparam\". It's too slow to generate this on startup.\nDEFAULT_DHPARAM = b\"\"\"\n-----BEGIN DH PARAMETERS-----\nMIICCAKCAgEAyT6LzpwVFS3gryIo29J5icvgxCnCebcdSe/NHMkD8dKJf8suFCg3\nO2+dguLakSVif/t6dhImxInJk230HmfC8q93hdcg/j8rLGJYDKu3ik6H//BAHKIv\nj5O9yjU3rXCfmVJQic2Nne39sg3CreAepEts2TvYHhVv3TEAzEqCtOuTjgDv0ntJ\nGwpj+BJBRQGG9NvprX1YGJ7WOFBP/hWU7d6tgvE6Xa7T/u9QIKpYHMIkcN/l3ZFB\nchZEqVlyrcngtSXCROTPcDOQ6Q8QzhaBJS+Z6rcsd7X+haiQqvoFcmaJ08Ks6LQC\nZIL2EtYJw8V8z7C0igVEBIADZBI6OTbuuhDwRw//zU1uq52Oc48CIZlGxTYG/Evq\no9EWAXUYVzWkDSTeBH1r4z/qLPE2cnhtMxbFxuvK53jGB0emy2y1Ei6IhKshJ5qX\nIB/aE7SSHyQ3MDHHkCmQJCsOd4Mo26YX61NZ+n501XjqpCBQ2+DfZCBh8Va2wDyv\nA2Ryg9SUz8j0AXViRNMJgJrr446yro/FuJZwnQcO3WQnXeqSBnURqKjmqkeFP+d8\n6mk2tqJaY507lRNqtGlLnj7f5RNoBFJDCLBNurVgfvq9TCVWKDIFD4vZRjCrnl6I\nrD693XKIHUCWOjMh1if6omGXKHH40QuME2gNa50+YPn1iYDl88uDbbMCAQI=\n-----END DH PARAMETERS-----\n\"\"\"\n\n\nclass Cert(serializable.Serializable):\n \"\"\"Representation of a (TLS) certificate.\"\"\"\n\n _cert: x509.Certificate\n\n def __init__(self, cert: x509.Certificate):\n assert isinstance(cert, x509.Certificate)\n self._cert = cert\n\n def __eq__(self, other):\n return self.fingerprint() == other.fingerprint()\n\n def __repr__(self):\n return f\"<Cert(cn={self.cn!r}, altnames={self.altnames!r})>\"\n\n def __hash__(self):\n return self._cert.__hash__()\n\n @classmethod\n def from_state(cls, state):\n return cls.from_pem(state)\n\n def get_state(self):\n return self.to_pem()\n\n def set_state(self, state):\n self._cert = x509.load_pem_x509_certificate(state)\n\n @classmethod\n def from_pem(cls, data: bytes) -> \"Cert\":\n cert = x509.load_pem_x509_certificate(data) # type: ignore\n return cls(cert)\n\n def to_pem(self) -> bytes:\n return self._cert.public_bytes(serialization.Encoding.PEM)\n\n @classmethod\n def from_pyopenssl(self, x509: OpenSSL.crypto.X509) -> \"Cert\":\n return Cert(x509.to_cryptography())\n\n def to_pyopenssl(self) -> OpenSSL.crypto.X509:\n return OpenSSL.crypto.X509.from_cryptography(self._cert)\n\n def fingerprint(self) -> bytes:\n return self._cert.fingerprint(hashes.SHA256())\n\n @property\n def issuer(self) -> list[tuple[str, str]]:\n return _name_to_keyval(self._cert.issuer)\n\n @property\n def notbefore(self) -> datetime.datetime:\n # x509.Certificate.not_valid_before is a naive datetime in UTC\n return self._cert.not_valid_before.replace(tzinfo=datetime.timezone.utc)\n\n @property\n def notafter(self) -> datetime.datetime:\n # x509.Certificate.not_valid_after is a naive datetime in UTC\n return 
self._cert.not_valid_after.replace(tzinfo=datetime.timezone.utc)\n\n def has_expired(self) -> bool:\n return datetime.datetime.utcnow() > self._cert.not_valid_after\n\n @property\n def subject(self) -> list[tuple[str, str]]:\n return _name_to_keyval(self._cert.subject)\n\n @property\n def serial(self) -> int:\n return self._cert.serial_number\n\n @property\n def keyinfo(self) -> tuple[str, int]:\n public_key = self._cert.public_key()\n if isinstance(public_key, rsa.RSAPublicKey):\n return \"RSA\", public_key.key_size\n if isinstance(public_key, dsa.DSAPublicKey):\n return \"DSA\", public_key.key_size\n if isinstance(public_key, ec.EllipticCurvePublicKey):\n return f\"EC ({public_key.curve.name})\", public_key.key_size\n return (\n public_key.__class__.__name__.replace(\"PublicKey\", \"\").replace(\"_\", \"\"),\n getattr(public_key, \"key_size\", -1),\n ) # pragma: no cover\n\n @property\n def cn(self) -> str | None:\n attrs = self._cert.subject.get_attributes_for_oid(x509.NameOID.COMMON_NAME)\n if attrs:\n return cast(str, attrs[0].value)\n return None\n\n @property\n def organization(self) -> str | None:\n attrs = self._cert.subject.get_attributes_for_oid(\n x509.NameOID.ORGANIZATION_NAME\n )\n if attrs:\n return cast(str, attrs[0].value)\n return None\n\n @property\n def altnames(self) -> list[str]:\n \"\"\"\n Get all SubjectAlternativeName DNS altnames.\n \"\"\"\n try:\n ext = self._cert.extensions.get_extension_for_class(\n x509.SubjectAlternativeName\n ).value\n except x509.ExtensionNotFound:\n return []\n else:\n return ext.get_values_for_type(x509.DNSName) + [\n str(x) for x in ext.get_values_for_type(x509.IPAddress)\n ]\n\n\ndef _name_to_keyval(name: x509.Name) -> list[tuple[str, str]]:\n parts = []\n for attr in name:\n k = attr.rfc4514_string().partition(\"=\")[0]\n v = cast(str, attr.value)\n parts.append((k, v))\n return parts\n\n\ndef create_ca(\n organization: str,\n cn: str,\n key_size: int,\n) -> tuple[rsa.RSAPrivateKeyWithSerialization, x509.Certificate]:\n now = datetime.datetime.now()\n\n private_key = rsa.generate_private_key(\n public_exponent=65537,\n key_size=key_size,\n ) # type: ignore\n name = x509.Name(\n [\n x509.NameAttribute(NameOID.COMMON_NAME, cn),\n x509.NameAttribute(NameOID.ORGANIZATION_NAME, organization),\n ]\n )\n builder = x509.CertificateBuilder()\n builder = builder.serial_number(x509.random_serial_number())\n builder = builder.subject_name(name)\n builder = builder.not_valid_before(now - datetime.timedelta(days=2))\n builder = builder.not_valid_after(now + CA_EXPIRY)\n builder = builder.issuer_name(name)\n builder = builder.public_key(private_key.public_key())\n builder = builder.add_extension(\n x509.BasicConstraints(ca=True, path_length=None), critical=True\n )\n builder = builder.add_extension(\n x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]), critical=False\n )\n builder = builder.add_extension(\n x509.KeyUsage(\n digital_signature=False,\n content_commitment=False,\n key_encipherment=False,\n data_encipherment=False,\n key_agreement=False,\n key_cert_sign=True,\n crl_sign=True,\n encipher_only=False,\n decipher_only=False,\n ),\n critical=True,\n )\n builder = builder.add_extension(\n x509.SubjectKeyIdentifier.from_public_key(private_key.public_key()),\n critical=False,\n )\n cert = builder.sign(private_key=private_key, algorithm=hashes.SHA256()) # type: ignore\n return private_key, cert\n\n\ndef dummy_cert(\n privkey: rsa.RSAPrivateKey,\n cacert: x509.Certificate,\n commonname: str | None,\n sans: list[str],\n organization: str | 
None = None,\n) -> Cert:\n \"\"\"\n Generates a dummy certificate.\n\n privkey: CA private key\n cacert: CA certificate\n commonname: Common name for the generated certificate.\n sans: A list of Subject Alternate Names.\n organization: Organization name for the generated certificate.\n\n Returns cert if operation succeeded, None if not.\n \"\"\"\n builder = x509.CertificateBuilder()\n builder = builder.issuer_name(cacert.subject)\n builder = builder.add_extension(\n x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]), critical=False\n )\n builder = builder.public_key(cacert.public_key())\n\n now = datetime.datetime.now()\n builder = builder.not_valid_before(now - datetime.timedelta(days=2))\n builder = builder.not_valid_after(now + CERT_EXPIRY)\n\n subject = []\n is_valid_commonname = commonname is not None and len(commonname) < 64\n if is_valid_commonname:\n assert commonname is not None\n subject.append(x509.NameAttribute(NameOID.COMMON_NAME, commonname))\n if organization is not None:\n assert organization is not None\n subject.append(x509.NameAttribute(NameOID.ORGANIZATION_NAME, organization))\n builder = builder.subject_name(x509.Name(subject))\n builder = builder.serial_number(x509.random_serial_number())\n\n ss: list[x509.GeneralName] = []\n for x in sans:\n try:\n ip = ipaddress.ip_address(x)\n except ValueError:\n ss.append(x509.DNSName(x))\n else:\n ss.append(x509.IPAddress(ip))\n # RFC 5280 §4.2.1.6: subjectAltName is critical if subject is empty.\n builder = builder.add_extension(\n x509.SubjectAlternativeName(ss), critical=not is_valid_commonname\n )\n cert = builder.sign(private_key=privkey, algorithm=hashes.SHA256()) # type: ignore\n return Cert(cert)\n\n\n@dataclass(frozen=True)\nclass CertStoreEntry:\n cert: Cert\n privatekey: rsa.RSAPrivateKey\n chain_file: Path | None\n chain_certs: list[Cert]\n\n\nTCustomCertId = str # manually provided certs (e.g. 
mitmproxy's --certs)\nTGeneratedCertId = tuple[Optional[str], tuple[str, ...]] # (common_name, sans)\nTCertId = Union[TCustomCertId, TGeneratedCertId]\n\nDHParams = NewType(\"DHParams\", bytes)\n\n\nclass CertStore:\n \"\"\"\n Implements an in-memory certificate store.\n \"\"\"\n\n STORE_CAP = 100\n certs: dict[TCertId, CertStoreEntry]\n expire_queue: list[CertStoreEntry]\n\n def __init__(\n self,\n default_privatekey: rsa.RSAPrivateKey,\n default_ca: Cert,\n default_chain_file: Path | None,\n dhparams: DHParams,\n ):\n self.default_privatekey = default_privatekey\n self.default_ca = default_ca\n self.default_chain_file = default_chain_file\n self.default_chain_certs = (\n [\n Cert.from_pem(chunk)\n for chunk in re.split(\n rb\"(?=-----BEGIN( [A-Z]+)+-----)\",\n self.default_chain_file.read_bytes(),\n )\n if chunk.startswith(b\"-----BEGIN CERTIFICATE-----\")\n ]\n if self.default_chain_file\n else [default_ca]\n )\n self.dhparams = dhparams\n self.certs = {}\n self.expire_queue = []\n\n def expire(self, entry: CertStoreEntry) -> None:\n self.expire_queue.append(entry)\n if len(self.expire_queue) > self.STORE_CAP:\n d = self.expire_queue.pop(0)\n self.certs = {k: v for k, v in self.certs.items() if v != d}\n\n @staticmethod\n def load_dhparam(path: Path) -> DHParams:\n # mitmproxy<=0.10 doesn't generate a dhparam file.\n # Create it now if necessary.\n if not path.exists():\n path.write_bytes(DEFAULT_DHPARAM)\n\n # we could use cryptography for this, but it's unclear how to convert cryptography's object to pyOpenSSL's\n # expected format.\n bio = OpenSSL.SSL._lib.BIO_new_file(str(path).encode(sys.getfilesystemencoding()), b\"r\") # type: ignore\n if bio != OpenSSL.SSL._ffi.NULL: # type: ignore\n bio = OpenSSL.SSL._ffi.gc(bio, OpenSSL.SSL._lib.BIO_free) # type: ignore\n dh = OpenSSL.SSL._lib.PEM_read_bio_DHparams( # type: ignore\n bio,\n OpenSSL.SSL._ffi.NULL, # type: ignore\n OpenSSL.SSL._ffi.NULL, # type: ignore\n OpenSSL.SSL._ffi.NULL, # type: ignore\n )\n dh = OpenSSL.SSL._ffi.gc(dh, OpenSSL.SSL._lib.DH_free) # type: ignore\n return dh\n raise RuntimeError(\"Error loading DH Params.\") # pragma: no cover\n\n @classmethod\n def from_store(\n cls,\n path: Path | str,\n basename: str,\n key_size: int,\n passphrase: bytes | None = None,\n ) -> \"CertStore\":\n path = Path(path)\n ca_file = path / f\"{basename}-ca.pem\"\n dhparam_file = path / f\"{basename}-dhparam.pem\"\n if not ca_file.exists():\n cls.create_store(path, basename, key_size)\n return cls.from_files(ca_file, dhparam_file, passphrase)\n\n @classmethod\n def from_files(\n cls, ca_file: Path, dhparam_file: Path, passphrase: bytes | None = None\n ) -> \"CertStore\":\n raw = ca_file.read_bytes()\n key = load_pem_private_key(raw, passphrase)\n dh = cls.load_dhparam(dhparam_file)\n certs = re.split(rb\"(?=-----BEGIN CERTIFICATE-----)\", raw)\n ca = Cert.from_pem(certs[1])\n if len(certs) > 2:\n chain_file: Path | None = ca_file\n else:\n chain_file = None\n return cls(key, ca, chain_file, dh)\n\n @staticmethod\n @contextlib.contextmanager\n def umask_secret():\n \"\"\"\n Context to temporarily set umask to its original value bitor 0o77.\n Useful when writing private keys to disk so that only the owner\n will be able to read them.\n \"\"\"\n original_umask = os.umask(0)\n os.umask(original_umask | 0o77)\n try:\n yield\n finally:\n os.umask(original_umask)\n\n @staticmethod\n def create_store(\n path: Path, basename: str, key_size: int, organization=None, cn=None\n ) -> None:\n path.mkdir(parents=True, exist_ok=True)\n\n organization = 
organization or basename\n cn = cn or basename\n\n key: rsa.RSAPrivateKeyWithSerialization\n ca: x509.Certificate\n key, ca = create_ca(organization=organization, cn=cn, key_size=key_size)\n\n # Dump the CA plus private key.\n with CertStore.umask_secret():\n # PEM format\n (path / f\"{basename}-ca.pem\").write_bytes(\n key.private_bytes(\n encoding=serialization.Encoding.PEM,\n format=serialization.PrivateFormat.TraditionalOpenSSL,\n encryption_algorithm=serialization.NoEncryption(),\n )\n + ca.public_bytes(serialization.Encoding.PEM)\n )\n\n # PKCS12 format for Windows devices\n (path / f\"{basename}-ca.p12\").write_bytes(\n pkcs12.serialize_key_and_certificates( # type: ignore\n name=basename.encode(),\n key=key,\n cert=ca,\n cas=None,\n encryption_algorithm=serialization.NoEncryption(),\n )\n )\n\n # Dump the certificate in PEM format\n pem_cert = ca.public_bytes(serialization.Encoding.PEM)\n (path / f\"{basename}-ca-cert.pem\").write_bytes(pem_cert)\n # Create a .cer file with the same contents for Android\n (path / f\"{basename}-ca-cert.cer\").write_bytes(pem_cert)\n\n # Dump the certificate in PKCS12 format for Windows devices\n (path / f\"{basename}-ca-cert.p12\").write_bytes(\n pkcs12.serialize_key_and_certificates(\n name=basename.encode(),\n key=None, # type: ignore\n cert=ca,\n cas=None,\n encryption_algorithm=serialization.NoEncryption(),\n )\n )\n\n (path / f\"{basename}-dhparam.pem\").write_bytes(DEFAULT_DHPARAM)\n\n def add_cert_file(\n self, spec: str, path: Path, passphrase: bytes | None = None\n ) -> None:\n raw = path.read_bytes()\n cert = Cert.from_pem(raw)\n try:\n key = load_pem_private_key(raw, password=passphrase)\n except ValueError:\n key = self.default_privatekey\n\n self.add_cert(CertStoreEntry(cert, key, path, [cert]), spec)\n\n def add_cert(self, entry: CertStoreEntry, *names: str) -> None:\n \"\"\"\n Adds a cert to the certstore. We register the CN in the cert plus\n any SANs, and also the list of names provided as an argument.\n \"\"\"\n if entry.cert.cn:\n self.certs[entry.cert.cn] = entry\n for i in entry.cert.altnames:\n self.certs[i] = entry\n for i in names:\n self.certs[i] = entry\n\n @staticmethod\n def asterisk_forms(dn: str) -> list[str]:\n \"\"\"\n Return all asterisk forms for a domain. For example, for www.example.com this will return\n [b\"www.example.com\", b\"*.example.com\", b\"*.com\"]. The single wildcard \"*\" is omitted.\n \"\"\"\n parts = dn.split(\".\")\n ret = [dn]\n for i in range(1, len(parts)):\n ret.append(\"*.\" + \".\".join(parts[i:]))\n return ret\n\n def get_cert(\n self,\n commonname: str | None,\n sans: list[str],\n organization: str | None = None,\n ) -> CertStoreEntry:\n \"\"\"\n commonname: Common name for the generated certificate. 
Must be a\n valid, plain-ASCII, IDNA-encoded domain name.\n\n sans: A list of Subject Alternate Names.\n\n organization: Organization name for the generated certificate.\n \"\"\"\n\n potential_keys: list[TCertId] = []\n if commonname:\n potential_keys.extend(self.asterisk_forms(commonname))\n for s in sans:\n potential_keys.extend(self.asterisk_forms(s))\n potential_keys.append(\"*\")\n potential_keys.append((commonname, tuple(sans)))\n\n name = next(filter(lambda key: key in self.certs, potential_keys), None)\n if name:\n entry = self.certs[name]\n else:\n entry = CertStoreEntry(\n cert=dummy_cert(\n self.default_privatekey,\n self.default_ca._cert,\n commonname,\n sans,\n organization,\n ),\n privatekey=self.default_privatekey,\n chain_file=self.default_chain_file,\n chain_certs=self.default_chain_certs,\n )\n self.certs[(commonname, tuple(sans))] = entry\n self.expire(entry)\n\n return entry\n\n\ndef load_pem_private_key(data: bytes, password: bytes | None) -> rsa.RSAPrivateKey:\n \"\"\"\n like cryptography's load_pem_private_key, but silently falls back to not using a password\n if the private key is unencrypted.\n \"\"\"\n try:\n return serialization.load_pem_private_key(data, password) # type: ignore\n except TypeError:\n if password is not None:\n return load_pem_private_key(data, None)\n raise\n", "path": "mitmproxy/certs.py" } ]
[ { "content": "import contextlib\nimport datetime\nimport ipaddress\nimport os\nimport re\nimport sys\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import cast\nfrom typing import NewType\nfrom typing import Optional\nfrom typing import Union\n\nimport OpenSSL\nfrom cryptography import x509\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives import serialization\nfrom cryptography.hazmat.primitives.asymmetric import dsa\nfrom cryptography.hazmat.primitives.asymmetric import ec\nfrom cryptography.hazmat.primitives.asymmetric import rsa\nfrom cryptography.hazmat.primitives.serialization import pkcs12\nfrom cryptography.x509 import ExtendedKeyUsageOID\nfrom cryptography.x509 import NameOID\n\nfrom mitmproxy.coretypes import serializable\n\n# Default expiry must not be too long: https://github.com/mitmproxy/mitmproxy/issues/815\nCA_EXPIRY = datetime.timedelta(days=10 * 365)\nCERT_EXPIRY = datetime.timedelta(days=365)\n\n# Generated with \"openssl dhparam\". It's too slow to generate this on startup.\nDEFAULT_DHPARAM = b\"\"\"\n-----BEGIN DH PARAMETERS-----\nMIICCAKCAgEAyT6LzpwVFS3gryIo29J5icvgxCnCebcdSe/NHMkD8dKJf8suFCg3\nO2+dguLakSVif/t6dhImxInJk230HmfC8q93hdcg/j8rLGJYDKu3ik6H//BAHKIv\nj5O9yjU3rXCfmVJQic2Nne39sg3CreAepEts2TvYHhVv3TEAzEqCtOuTjgDv0ntJ\nGwpj+BJBRQGG9NvprX1YGJ7WOFBP/hWU7d6tgvE6Xa7T/u9QIKpYHMIkcN/l3ZFB\nchZEqVlyrcngtSXCROTPcDOQ6Q8QzhaBJS+Z6rcsd7X+haiQqvoFcmaJ08Ks6LQC\nZIL2EtYJw8V8z7C0igVEBIADZBI6OTbuuhDwRw//zU1uq52Oc48CIZlGxTYG/Evq\no9EWAXUYVzWkDSTeBH1r4z/qLPE2cnhtMxbFxuvK53jGB0emy2y1Ei6IhKshJ5qX\nIB/aE7SSHyQ3MDHHkCmQJCsOd4Mo26YX61NZ+n501XjqpCBQ2+DfZCBh8Va2wDyv\nA2Ryg9SUz8j0AXViRNMJgJrr446yro/FuJZwnQcO3WQnXeqSBnURqKjmqkeFP+d8\n6mk2tqJaY507lRNqtGlLnj7f5RNoBFJDCLBNurVgfvq9TCVWKDIFD4vZRjCrnl6I\nrD693XKIHUCWOjMh1if6omGXKHH40QuME2gNa50+YPn1iYDl88uDbbMCAQI=\n-----END DH PARAMETERS-----\n\"\"\"\n\n\nclass Cert(serializable.Serializable):\n \"\"\"Representation of a (TLS) certificate.\"\"\"\n\n _cert: x509.Certificate\n\n def __init__(self, cert: x509.Certificate):\n assert isinstance(cert, x509.Certificate)\n self._cert = cert\n\n def __eq__(self, other):\n return self.fingerprint() == other.fingerprint()\n\n def __repr__(self):\n return f\"<Cert(cn={self.cn!r}, altnames={self.altnames!r})>\"\n\n def __hash__(self):\n return self._cert.__hash__()\n\n @classmethod\n def from_state(cls, state):\n return cls.from_pem(state)\n\n def get_state(self):\n return self.to_pem()\n\n def set_state(self, state):\n self._cert = x509.load_pem_x509_certificate(state)\n\n @classmethod\n def from_pem(cls, data: bytes) -> \"Cert\":\n cert = x509.load_pem_x509_certificate(data) # type: ignore\n return cls(cert)\n\n def to_pem(self) -> bytes:\n return self._cert.public_bytes(serialization.Encoding.PEM)\n\n @classmethod\n def from_pyopenssl(self, x509: OpenSSL.crypto.X509) -> \"Cert\":\n return Cert(x509.to_cryptography())\n\n def to_pyopenssl(self) -> OpenSSL.crypto.X509:\n return OpenSSL.crypto.X509.from_cryptography(self._cert)\n\n def fingerprint(self) -> bytes:\n return self._cert.fingerprint(hashes.SHA256())\n\n @property\n def issuer(self) -> list[tuple[str, str]]:\n return _name_to_keyval(self._cert.issuer)\n\n @property\n def notbefore(self) -> datetime.datetime:\n # x509.Certificate.not_valid_before is a naive datetime in UTC\n return self._cert.not_valid_before.replace(tzinfo=datetime.timezone.utc)\n\n @property\n def notafter(self) -> datetime.datetime:\n # x509.Certificate.not_valid_after is a naive datetime in UTC\n return 
self._cert.not_valid_after.replace(tzinfo=datetime.timezone.utc)\n\n def has_expired(self) -> bool:\n return datetime.datetime.utcnow() > self._cert.not_valid_after\n\n @property\n def subject(self) -> list[tuple[str, str]]:\n return _name_to_keyval(self._cert.subject)\n\n @property\n def serial(self) -> int:\n return self._cert.serial_number\n\n @property\n def keyinfo(self) -> tuple[str, int]:\n public_key = self._cert.public_key()\n if isinstance(public_key, rsa.RSAPublicKey):\n return \"RSA\", public_key.key_size\n if isinstance(public_key, dsa.DSAPublicKey):\n return \"DSA\", public_key.key_size\n if isinstance(public_key, ec.EllipticCurvePublicKey):\n return f\"EC ({public_key.curve.name})\", public_key.key_size\n return (\n public_key.__class__.__name__.replace(\"PublicKey\", \"\").replace(\"_\", \"\"),\n getattr(public_key, \"key_size\", -1),\n ) # pragma: no cover\n\n @property\n def cn(self) -> str | None:\n attrs = self._cert.subject.get_attributes_for_oid(x509.NameOID.COMMON_NAME)\n if attrs:\n return cast(str, attrs[0].value)\n return None\n\n @property\n def organization(self) -> str | None:\n attrs = self._cert.subject.get_attributes_for_oid(\n x509.NameOID.ORGANIZATION_NAME\n )\n if attrs:\n return cast(str, attrs[0].value)\n return None\n\n @property\n def altnames(self) -> list[str]:\n \"\"\"\n Get all SubjectAlternativeName DNS altnames.\n \"\"\"\n try:\n ext = self._cert.extensions.get_extension_for_class(\n x509.SubjectAlternativeName\n ).value\n except x509.ExtensionNotFound:\n return []\n else:\n return ext.get_values_for_type(x509.DNSName) + [\n str(x) for x in ext.get_values_for_type(x509.IPAddress)\n ]\n\n\ndef _name_to_keyval(name: x509.Name) -> list[tuple[str, str]]:\n parts = []\n for attr in name:\n k = attr.rfc4514_string().partition(\"=\")[0]\n v = cast(str, attr.value)\n parts.append((k, v))\n return parts\n\n\ndef create_ca(\n organization: str,\n cn: str,\n key_size: int,\n) -> tuple[rsa.RSAPrivateKeyWithSerialization, x509.Certificate]:\n now = datetime.datetime.now()\n\n private_key = rsa.generate_private_key(\n public_exponent=65537,\n key_size=key_size,\n ) # type: ignore\n name = x509.Name(\n [\n x509.NameAttribute(NameOID.COMMON_NAME, cn),\n x509.NameAttribute(NameOID.ORGANIZATION_NAME, organization),\n ]\n )\n builder = x509.CertificateBuilder()\n builder = builder.serial_number(x509.random_serial_number())\n builder = builder.subject_name(name)\n builder = builder.not_valid_before(now - datetime.timedelta(days=2))\n builder = builder.not_valid_after(now + CA_EXPIRY)\n builder = builder.issuer_name(name)\n builder = builder.public_key(private_key.public_key())\n builder = builder.add_extension(\n x509.BasicConstraints(ca=True, path_length=None), critical=True\n )\n builder = builder.add_extension(\n x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]), critical=False\n )\n builder = builder.add_extension(\n x509.KeyUsage(\n digital_signature=False,\n content_commitment=False,\n key_encipherment=False,\n data_encipherment=False,\n key_agreement=False,\n key_cert_sign=True,\n crl_sign=True,\n encipher_only=False,\n decipher_only=False,\n ),\n critical=True,\n )\n builder = builder.add_extension(\n x509.SubjectKeyIdentifier.from_public_key(private_key.public_key()),\n critical=False,\n )\n cert = builder.sign(private_key=private_key, algorithm=hashes.SHA256()) # type: ignore\n return private_key, cert\n\n\ndef dummy_cert(\n privkey: rsa.RSAPrivateKey,\n cacert: x509.Certificate,\n commonname: str | None,\n sans: list[str],\n organization: str | 
None = None,\n) -> Cert:\n \"\"\"\n Generates a dummy certificate.\n\n privkey: CA private key\n cacert: CA certificate\n commonname: Common name for the generated certificate.\n sans: A list of Subject Alternate Names.\n organization: Organization name for the generated certificate.\n\n Returns cert if operation succeeded, None if not.\n \"\"\"\n builder = x509.CertificateBuilder()\n builder = builder.issuer_name(cacert.subject)\n builder = builder.add_extension(\n x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH]), critical=False\n )\n builder = builder.public_key(cacert.public_key())\n\n now = datetime.datetime.now()\n builder = builder.not_valid_before(now - datetime.timedelta(days=2))\n builder = builder.not_valid_after(now + CERT_EXPIRY)\n\n subject = []\n is_valid_commonname = commonname is not None and len(commonname) < 64\n if is_valid_commonname:\n assert commonname is not None\n subject.append(x509.NameAttribute(NameOID.COMMON_NAME, commonname))\n if organization is not None:\n assert organization is not None\n subject.append(x509.NameAttribute(NameOID.ORGANIZATION_NAME, organization))\n builder = builder.subject_name(x509.Name(subject))\n builder = builder.serial_number(x509.random_serial_number())\n\n ss: list[x509.GeneralName] = []\n for x in sans:\n try:\n ip = ipaddress.ip_address(x)\n except ValueError:\n x = x.encode(\"idna\").decode()\n ss.append(x509.DNSName(x))\n else:\n ss.append(x509.IPAddress(ip))\n # RFC 5280 §4.2.1.6: subjectAltName is critical if subject is empty.\n builder = builder.add_extension(\n x509.SubjectAlternativeName(ss), critical=not is_valid_commonname\n )\n cert = builder.sign(private_key=privkey, algorithm=hashes.SHA256()) # type: ignore\n return Cert(cert)\n\n\n@dataclass(frozen=True)\nclass CertStoreEntry:\n cert: Cert\n privatekey: rsa.RSAPrivateKey\n chain_file: Path | None\n chain_certs: list[Cert]\n\n\nTCustomCertId = str # manually provided certs (e.g. 
mitmproxy's --certs)\nTGeneratedCertId = tuple[Optional[str], tuple[str, ...]] # (common_name, sans)\nTCertId = Union[TCustomCertId, TGeneratedCertId]\n\nDHParams = NewType(\"DHParams\", bytes)\n\n\nclass CertStore:\n \"\"\"\n Implements an in-memory certificate store.\n \"\"\"\n\n STORE_CAP = 100\n certs: dict[TCertId, CertStoreEntry]\n expire_queue: list[CertStoreEntry]\n\n def __init__(\n self,\n default_privatekey: rsa.RSAPrivateKey,\n default_ca: Cert,\n default_chain_file: Path | None,\n dhparams: DHParams,\n ):\n self.default_privatekey = default_privatekey\n self.default_ca = default_ca\n self.default_chain_file = default_chain_file\n self.default_chain_certs = (\n [\n Cert.from_pem(chunk)\n for chunk in re.split(\n rb\"(?=-----BEGIN( [A-Z]+)+-----)\",\n self.default_chain_file.read_bytes(),\n )\n if chunk.startswith(b\"-----BEGIN CERTIFICATE-----\")\n ]\n if self.default_chain_file\n else [default_ca]\n )\n self.dhparams = dhparams\n self.certs = {}\n self.expire_queue = []\n\n def expire(self, entry: CertStoreEntry) -> None:\n self.expire_queue.append(entry)\n if len(self.expire_queue) > self.STORE_CAP:\n d = self.expire_queue.pop(0)\n self.certs = {k: v for k, v in self.certs.items() if v != d}\n\n @staticmethod\n def load_dhparam(path: Path) -> DHParams:\n # mitmproxy<=0.10 doesn't generate a dhparam file.\n # Create it now if necessary.\n if not path.exists():\n path.write_bytes(DEFAULT_DHPARAM)\n\n # we could use cryptography for this, but it's unclear how to convert cryptography's object to pyOpenSSL's\n # expected format.\n bio = OpenSSL.SSL._lib.BIO_new_file(str(path).encode(sys.getfilesystemencoding()), b\"r\") # type: ignore\n if bio != OpenSSL.SSL._ffi.NULL: # type: ignore\n bio = OpenSSL.SSL._ffi.gc(bio, OpenSSL.SSL._lib.BIO_free) # type: ignore\n dh = OpenSSL.SSL._lib.PEM_read_bio_DHparams( # type: ignore\n bio,\n OpenSSL.SSL._ffi.NULL, # type: ignore\n OpenSSL.SSL._ffi.NULL, # type: ignore\n OpenSSL.SSL._ffi.NULL, # type: ignore\n )\n dh = OpenSSL.SSL._ffi.gc(dh, OpenSSL.SSL._lib.DH_free) # type: ignore\n return dh\n raise RuntimeError(\"Error loading DH Params.\") # pragma: no cover\n\n @classmethod\n def from_store(\n cls,\n path: Path | str,\n basename: str,\n key_size: int,\n passphrase: bytes | None = None,\n ) -> \"CertStore\":\n path = Path(path)\n ca_file = path / f\"{basename}-ca.pem\"\n dhparam_file = path / f\"{basename}-dhparam.pem\"\n if not ca_file.exists():\n cls.create_store(path, basename, key_size)\n return cls.from_files(ca_file, dhparam_file, passphrase)\n\n @classmethod\n def from_files(\n cls, ca_file: Path, dhparam_file: Path, passphrase: bytes | None = None\n ) -> \"CertStore\":\n raw = ca_file.read_bytes()\n key = load_pem_private_key(raw, passphrase)\n dh = cls.load_dhparam(dhparam_file)\n certs = re.split(rb\"(?=-----BEGIN CERTIFICATE-----)\", raw)\n ca = Cert.from_pem(certs[1])\n if len(certs) > 2:\n chain_file: Path | None = ca_file\n else:\n chain_file = None\n return cls(key, ca, chain_file, dh)\n\n @staticmethod\n @contextlib.contextmanager\n def umask_secret():\n \"\"\"\n Context to temporarily set umask to its original value bitor 0o77.\n Useful when writing private keys to disk so that only the owner\n will be able to read them.\n \"\"\"\n original_umask = os.umask(0)\n os.umask(original_umask | 0o77)\n try:\n yield\n finally:\n os.umask(original_umask)\n\n @staticmethod\n def create_store(\n path: Path, basename: str, key_size: int, organization=None, cn=None\n ) -> None:\n path.mkdir(parents=True, exist_ok=True)\n\n organization = 
organization or basename\n cn = cn or basename\n\n key: rsa.RSAPrivateKeyWithSerialization\n ca: x509.Certificate\n key, ca = create_ca(organization=organization, cn=cn, key_size=key_size)\n\n # Dump the CA plus private key.\n with CertStore.umask_secret():\n # PEM format\n (path / f\"{basename}-ca.pem\").write_bytes(\n key.private_bytes(\n encoding=serialization.Encoding.PEM,\n format=serialization.PrivateFormat.TraditionalOpenSSL,\n encryption_algorithm=serialization.NoEncryption(),\n )\n + ca.public_bytes(serialization.Encoding.PEM)\n )\n\n # PKCS12 format for Windows devices\n (path / f\"{basename}-ca.p12\").write_bytes(\n pkcs12.serialize_key_and_certificates( # type: ignore\n name=basename.encode(),\n key=key,\n cert=ca,\n cas=None,\n encryption_algorithm=serialization.NoEncryption(),\n )\n )\n\n # Dump the certificate in PEM format\n pem_cert = ca.public_bytes(serialization.Encoding.PEM)\n (path / f\"{basename}-ca-cert.pem\").write_bytes(pem_cert)\n # Create a .cer file with the same contents for Android\n (path / f\"{basename}-ca-cert.cer\").write_bytes(pem_cert)\n\n # Dump the certificate in PKCS12 format for Windows devices\n (path / f\"{basename}-ca-cert.p12\").write_bytes(\n pkcs12.serialize_key_and_certificates(\n name=basename.encode(),\n key=None, # type: ignore\n cert=ca,\n cas=None,\n encryption_algorithm=serialization.NoEncryption(),\n )\n )\n\n (path / f\"{basename}-dhparam.pem\").write_bytes(DEFAULT_DHPARAM)\n\n def add_cert_file(\n self, spec: str, path: Path, passphrase: bytes | None = None\n ) -> None:\n raw = path.read_bytes()\n cert = Cert.from_pem(raw)\n try:\n key = load_pem_private_key(raw, password=passphrase)\n except ValueError:\n key = self.default_privatekey\n\n self.add_cert(CertStoreEntry(cert, key, path, [cert]), spec)\n\n def add_cert(self, entry: CertStoreEntry, *names: str) -> None:\n \"\"\"\n Adds a cert to the certstore. We register the CN in the cert plus\n any SANs, and also the list of names provided as an argument.\n \"\"\"\n if entry.cert.cn:\n self.certs[entry.cert.cn] = entry\n for i in entry.cert.altnames:\n self.certs[i] = entry\n for i in names:\n self.certs[i] = entry\n\n @staticmethod\n def asterisk_forms(dn: str) -> list[str]:\n \"\"\"\n Return all asterisk forms for a domain. For example, for www.example.com this will return\n [b\"www.example.com\", b\"*.example.com\", b\"*.com\"]. The single wildcard \"*\" is omitted.\n \"\"\"\n parts = dn.split(\".\")\n ret = [dn]\n for i in range(1, len(parts)):\n ret.append(\"*.\" + \".\".join(parts[i:]))\n return ret\n\n def get_cert(\n self,\n commonname: str | None,\n sans: list[str],\n organization: str | None = None,\n ) -> CertStoreEntry:\n \"\"\"\n commonname: Common name for the generated certificate. 
Must be a\n valid, plain-ASCII, IDNA-encoded domain name.\n\n sans: A list of Subject Alternate Names.\n\n organization: Organization name for the generated certificate.\n \"\"\"\n\n potential_keys: list[TCertId] = []\n if commonname:\n potential_keys.extend(self.asterisk_forms(commonname))\n for s in sans:\n potential_keys.extend(self.asterisk_forms(s))\n potential_keys.append(\"*\")\n potential_keys.append((commonname, tuple(sans)))\n\n name = next(filter(lambda key: key in self.certs, potential_keys), None)\n if name:\n entry = self.certs[name]\n else:\n entry = CertStoreEntry(\n cert=dummy_cert(\n self.default_privatekey,\n self.default_ca._cert,\n commonname,\n sans,\n organization,\n ),\n privatekey=self.default_privatekey,\n chain_file=self.default_chain_file,\n chain_certs=self.default_chain_certs,\n )\n self.certs[(commonname, tuple(sans))] = entry\n self.expire(entry)\n\n return entry\n\n\ndef load_pem_private_key(data: bytes, password: bytes | None) -> rsa.RSAPrivateKey:\n \"\"\"\n like cryptography's load_pem_private_key, but silently falls back to not using a password\n if the private key is unencrypted.\n \"\"\"\n try:\n return serialization.load_pem_private_key(data, password) # type: ignore\n except TypeError:\n if password is not None:\n return load_pem_private_key(data, None)\n raise\n", "path": "mitmproxy/certs.py" } ]
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 18b65ab3fd..865390d8ef 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,8 @@
 
 ## Unreleased: mitmproxy next
 
+* Fix certificate generation for punycode domains.
+  ([#6382](https://github.com/mitmproxy/mitmproxy/pull/6382), @mhils)
 
 ## 24 September 2023: mitmproxy 10.1.0
 
diff --git a/mitmproxy/certs.py b/mitmproxy/certs.py
index 7477c61f78..a260cb9811 100644
--- a/mitmproxy/certs.py
+++ b/mitmproxy/certs.py
@@ -270,6 +270,7 @@ def dummy_cert(
         try:
             ip = ipaddress.ip_address(x)
         except ValueError:
+            x = x.encode("idna").decode()
             ss.append(x509.DNSName(x))
         else:
             ss.append(x509.IPAddress(ip))
diff --git a/test/mitmproxy/test_certs.py b/test/mitmproxy/test_certs.py
index 815a84c613..e46245f1b5 100644
--- a/test/mitmproxy/test_certs.py
+++ b/test/mitmproxy/test_certs.py
@@ -141,11 +141,17 @@ def test_with_ca(self, tstore):
             tstore.default_privatekey,
             tstore.default_ca._cert,
             "foo.com",
-            ["one.com", "two.com", "*.three.com", "127.0.0.1"],
+            ["one.com", "two.com", "*.three.com", "127.0.0.1", "bücher.example"],
             "Foo Ltd.",
         )
         assert r.cn == "foo.com"
-        assert r.altnames == ["one.com", "two.com", "*.three.com", "127.0.0.1"]
+        assert r.altnames == [
+            "one.com",
+            "two.com",
+            "*.three.com",
+            "xn--bcher-kva.example",
+            "127.0.0.1",
+        ]
         assert r.organization == "Foo Ltd."
 
         r = certs.dummy_cert(
mitmproxy__mitmproxy-1425
Cannot clear flows

##### Steps to reproduce the problem:

1. Start mitmproxy
2. Press z

##### What is the expected behavior?

No crash

##### What went wrong?

```
Traceback (most recent call last):
  File "/media/sf_git/mitmproxy/mitmproxy/console/master.py", line 515, in run
    self.loop.run()
  File "/usr/local/lib/python3.5/dist-packages/urwid/main_loop.py", line 278, in run
    self._run()
  File "/usr/local/lib/python3.5/dist-packages/urwid/main_loop.py", line 376, in _run
    self.event_loop.run()
  File "/usr/local/lib/python3.5/dist-packages/urwid/main_loop.py", line 682, in run
    self._loop()
  File "/usr/local/lib/python3.5/dist-packages/urwid/main_loop.py", line 719, in _loop
    self._watch_files[fd]()
  File "/usr/local/lib/python3.5/dist-packages/urwid/raw_display.py", line 393, in <lambda>
    event_loop, callback, self.get_available_raw_input())
  File "/usr/local/lib/python3.5/dist-packages/urwid/raw_display.py", line 493, in parse_input
    callback(processed, processed_codes)
  File "/usr/local/lib/python3.5/dist-packages/urwid/main_loop.py", line 403, in _update
    self.process_input(keys)
  File "/usr/local/lib/python3.5/dist-packages/urwid/main_loop.py", line 503, in process_input
    k = self._topmost_widget.keypress(self.screen_size, k)
  File "/media/sf_git/mitmproxy/mitmproxy/console/window.py", line 42, in keypress
    k = super(self.__class__, self).keypress(size, k)
  File "/usr/local/lib/python3.5/dist-packages/urwid/container.py", line 1128, in keypress
    return self.body.keypress( (maxcol, remaining), key )
  File "/media/sf_git/mitmproxy/mitmproxy/console/flowlist.py", line 361, in keypress
    self.master.clear_flows()
  File "/media/sf_git/mitmproxy/mitmproxy/console/master.py", line 686, in clear_flows
    self.state.clear()
  File "/media/sf_git/mitmproxy/mitmproxy/console/master.py", line 185, in clear
    marked_flows = [f for f in self.state.view if f.marked]
AttributeError: 'ConsoleState' object has no attribute 'state'
```

@dufferzafar, can you fix this? :smiley:

Mitmproxy Version: master
Operating System: Ubuntu 14.04 x64
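The last frame of the traceback points at `ConsoleState.clear()` in `mitmproxy/console/master.py`: inside `ConsoleState` there is no `self.state` attribute, so the marked flows have to be read from the state's own `view`. A short sketch of the relevant part of the corrected method, matching the one-line change applied in the patched file further below:

```python
    def clear(self):
        # ConsoleState *is* the state object, so read marked flows from
        # self.view (was: self.state.view, which raised AttributeError).
        marked_flows = [f for f in self.view if f.marked]
        super(ConsoleState, self).clear()

        # Re-add marked flows so they survive clearing.
        for f in marked_flows:
            self.add_flow(f)
            f.marked = True
```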
[ { "content": "from __future__ import absolute_import, print_function, division\n\nimport mailcap\nimport mimetypes\nimport os\nimport os.path\nimport shlex\nimport signal\nimport stat\nimport subprocess\nimport sys\nimport tempfile\nimport traceback\nimport weakref\n\nimport urwid\nfrom typing import Optional # noqa\n\nfrom mitmproxy import builtins\nfrom mitmproxy import contentviews\nfrom mitmproxy import controller\nfrom mitmproxy import exceptions\nfrom mitmproxy import flow\nfrom mitmproxy import script\nfrom mitmproxy import utils\nimport mitmproxy.options\nfrom mitmproxy.console import flowlist\nfrom mitmproxy.console import flowview\nfrom mitmproxy.console import grideditor\nfrom mitmproxy.console import help\nfrom mitmproxy.console import options\nfrom mitmproxy.console import palettepicker\nfrom mitmproxy.console import palettes\nfrom mitmproxy.console import signals\nfrom mitmproxy.console import statusbar\nfrom mitmproxy.console import window\nfrom mitmproxy.filt import FMarked\nfrom netlib import tcp, strutils\n\nEVENTLOG_SIZE = 500\n\n\nclass ConsoleState(flow.State):\n\n def __init__(self):\n flow.State.__init__(self)\n self.focus = None\n self.follow_focus = None\n self.default_body_view = contentviews.get(\"Auto\")\n self.flowsettings = weakref.WeakKeyDictionary()\n self.last_search = None\n self.last_filter = \"\"\n self.mark_filter = False\n\n def __setattr__(self, name, value):\n self.__dict__[name] = value\n signals.update_settings.send(self)\n\n def add_flow_setting(self, flow, key, value):\n d = self.flowsettings.setdefault(flow, {})\n d[key] = value\n\n def get_flow_setting(self, flow, key, default=None):\n d = self.flowsettings.get(flow, {})\n return d.get(key, default)\n\n def add_flow(self, f):\n super(ConsoleState, self).add_flow(f)\n self.update_focus()\n return f\n\n def update_flow(self, f):\n super(ConsoleState, self).update_flow(f)\n self.update_focus()\n return f\n\n def set_limit(self, limit):\n ret = super(ConsoleState, self).set_limit(limit)\n self.set_focus(self.focus)\n return ret\n\n def get_focus(self):\n if not self.view or self.focus is None:\n return None, None\n return self.view[self.focus], self.focus\n\n def set_focus(self, idx):\n if self.view:\n if idx is None or idx < 0:\n idx = 0\n elif idx >= len(self.view):\n idx = len(self.view) - 1\n self.focus = idx\n else:\n self.focus = None\n\n def update_focus(self):\n if self.focus is None:\n self.set_focus(0)\n elif self.follow_focus:\n self.set_focus(len(self.view) - 1)\n\n def set_focus_flow(self, f):\n self.set_focus(self.view.index(f))\n\n def get_from_pos(self, pos):\n if len(self.view) <= pos or pos < 0:\n return None, None\n return self.view[pos], pos\n\n def get_next(self, pos):\n return self.get_from_pos(pos + 1)\n\n def get_prev(self, pos):\n return self.get_from_pos(pos - 1)\n\n def delete_flow(self, f):\n if f in self.view and self.view.index(f) <= self.focus:\n self.focus -= 1\n if self.focus < 0:\n self.focus = None\n ret = super(ConsoleState, self).delete_flow(f)\n self.set_focus(self.focus)\n return ret\n\n def get_nearest_matching_flow(self, flow, filt):\n fidx = self.view.index(flow)\n dist = 1\n\n fprev = fnext = True\n while fprev or fnext:\n fprev, _ = self.get_from_pos(fidx - dist)\n fnext, _ = self.get_from_pos(fidx + dist)\n\n if fprev and fprev.match(filt):\n return fprev\n elif fnext and fnext.match(filt):\n return fnext\n\n dist += 1\n\n return None\n\n def enable_marked_filter(self):\n marked_flows = [f for f in self.flows if f.marked]\n if not marked_flows:\n 
return\n\n marked_filter = \"~%s\" % FMarked.code\n\n # Save Focus\n last_focus, _ = self.get_focus()\n nearest_marked = self.get_nearest_matching_flow(last_focus, marked_filter)\n\n self.last_filter = self.limit_txt\n self.set_limit(marked_filter)\n\n # Restore Focus\n if last_focus.marked:\n self.set_focus_flow(last_focus)\n else:\n self.set_focus_flow(nearest_marked)\n\n self.mark_filter = True\n\n def disable_marked_filter(self):\n marked_filter = \"~%s\" % FMarked.code\n\n # Save Focus\n last_focus, _ = self.get_focus()\n nearest_marked = self.get_nearest_matching_flow(last_focus, marked_filter)\n\n self.set_limit(self.last_filter)\n self.last_filter = \"\"\n\n # Restore Focus\n if last_focus.marked:\n self.set_focus_flow(last_focus)\n else:\n self.set_focus_flow(nearest_marked)\n\n self.mark_filter = False\n\n def clear(self):\n marked_flows = [f for f in self.state.view if f.marked]\n super(ConsoleState, self).clear()\n\n for f in marked_flows:\n self.add_flow(f)\n f.marked = True\n\n if len(self.flows.views) == 0:\n self.focus = None\n else:\n self.focus = 0\n self.set_focus(self.focus)\n\n\nclass Options(mitmproxy.options.Options):\n def __init__(\n self,\n eventlog=False, # type: bool\n follow=False, # type: bool\n intercept=False, # type: bool\n limit=None, # type: Optional[str]\n palette=None, # type: Optional[str]\n palette_transparent=False, # type: bool\n no_mouse=False, # type: bool\n **kwargs\n ):\n self.eventlog = eventlog\n self.follow = follow\n self.intercept = intercept\n self.limit = limit\n self.palette = palette\n self.palette_transparent = palette_transparent\n self.no_mouse = no_mouse\n super(Options, self).__init__(**kwargs)\n\n\nclass ConsoleMaster(flow.FlowMaster):\n palette = []\n\n def __init__(self, server, options):\n flow.FlowMaster.__init__(self, options, server, ConsoleState())\n self.stream_path = None\n # This line is just for type hinting\n self.options = self.options # type: Options\n self.options.errored.connect(self.options_error)\n\n r = self.set_intercept(options.intercept)\n if r:\n print(\"Intercept error: {}\".format(r), file=sys.stderr)\n sys.exit(1)\n\n if options.limit:\n self.set_limit(options.limit)\n\n self.set_stream_large_bodies(options.stream_large_bodies)\n\n self.palette = options.palette\n self.palette_transparent = options.palette_transparent\n\n self.logbuffer = urwid.SimpleListWalker([])\n self.follow = options.follow\n\n if options.client_replay:\n self.client_playback_path(options.client_replay)\n\n if options.server_replay:\n self.server_playback_path(options.server_replay)\n\n self.view_stack = []\n\n if options.app:\n self.start_app(self.options.app_host, self.options.app_port)\n\n signals.call_in.connect(self.sig_call_in)\n signals.pop_view_state.connect(self.sig_pop_view_state)\n signals.push_view_state.connect(self.sig_push_view_state)\n signals.sig_add_log.connect(self.sig_add_log)\n self.addons.add(options, *builtins.default_addons())\n\n def __setattr__(self, name, value):\n self.__dict__[name] = value\n signals.update_settings.send(self)\n\n def options_error(self, opts, exc):\n signals.status_message.send(\n message=str(exc),\n expire=1\n )\n\n def sig_add_log(self, sender, e, level):\n if self.options.verbosity < utils.log_tier(level):\n return\n\n if level == \"error\":\n signals.status_message.send(\n message = \"Error: %s\" % str(e)\n )\n e = urwid.Text((\"error\", str(e)))\n else:\n e = urwid.Text(str(e))\n self.logbuffer.append(e)\n if len(self.logbuffer) > EVENTLOG_SIZE:\n self.logbuffer.pop(0)\n 
self.logbuffer.set_focus(len(self.logbuffer) - 1)\n\n def add_log(self, e, level):\n signals.add_log(e, level)\n\n def sig_call_in(self, sender, seconds, callback, args=()):\n def cb(*_):\n return callback(*args)\n self.loop.set_alarm_in(seconds, cb)\n\n def sig_pop_view_state(self, sender):\n if len(self.view_stack) > 1:\n self.view_stack.pop()\n self.loop.widget = self.view_stack[-1]\n else:\n signals.status_prompt_onekey.send(\n self,\n prompt = \"Quit\",\n keys = (\n (\"yes\", \"y\"),\n (\"no\", \"n\"),\n ),\n callback = self.quit,\n )\n\n def sig_push_view_state(self, sender, window):\n self.view_stack.append(window)\n self.loop.widget = window\n self.loop.draw_screen()\n\n def _run_script_method(self, method, s, f):\n status, val = s.run(method, f)\n if val:\n if status:\n signals.add_log(\"Method %s return: %s\" % (method, val), \"debug\")\n else:\n signals.add_log(\n \"Method %s error: %s\" %\n (method, val[1]), \"error\")\n\n def run_script_once(self, command, f):\n if not command:\n return\n signals.add_log(\"Running script on flow: %s\" % command, \"debug\")\n\n try:\n s = script.Script(command)\n s.load()\n except script.ScriptException as e:\n signals.status_message.send(\n message='Error loading \"{}\".'.format(command)\n )\n signals.add_log('Error loading \"{}\":\\n{}'.format(command, e), \"error\")\n return\n\n if f.request:\n self._run_script_method(\"request\", s, f)\n if f.response:\n self._run_script_method(\"response\", s, f)\n if f.error:\n self._run_script_method(\"error\", s, f)\n s.unload()\n signals.flow_change.send(self, flow = f)\n\n def toggle_eventlog(self):\n self.options.eventlog = not self.options.eventlog\n signals.pop_view_state.send(self)\n self.view_flowlist()\n\n def _readflows(self, path):\n \"\"\"\n Utitility function that reads a list of flows\n or prints an error to the UI if that fails.\n Returns\n - None, if there was an error.\n - a list of flows, otherwise.\n \"\"\"\n try:\n return flow.read_flows_from_paths(path)\n except exceptions.FlowReadException as e:\n signals.status_message.send(message=str(e))\n\n def client_playback_path(self, path):\n if not isinstance(path, list):\n path = [path]\n flows = self._readflows(path)\n if flows:\n self.start_client_playback(flows, False)\n\n def server_playback_path(self, path):\n if not isinstance(path, list):\n path = [path]\n flows = self._readflows(path)\n if flows:\n self.start_server_playback(\n flows,\n self.options.kill, self.options.rheaders,\n False, self.options.nopop,\n self.options.replay_ignore_params,\n self.options.replay_ignore_content,\n self.options.replay_ignore_payload_params,\n self.options.replay_ignore_host\n )\n\n def spawn_editor(self, data):\n fd, name = tempfile.mkstemp('', \"mproxy\")\n os.write(fd, data)\n os.close(fd)\n c = os.environ.get(\"EDITOR\")\n # if no EDITOR is set, assume 'vi'\n if not c:\n c = \"vi\"\n cmd = shlex.split(c)\n cmd.append(name)\n self.ui.stop()\n try:\n subprocess.call(cmd)\n except:\n signals.status_message.send(\n message = \"Can't start editor: %s\" % \" \".join(c)\n )\n else:\n data = open(name, \"rb\").read()\n self.ui.start()\n os.unlink(name)\n return data\n\n def spawn_external_viewer(self, data, contenttype):\n if contenttype:\n contenttype = contenttype.split(\";\")[0]\n ext = mimetypes.guess_extension(contenttype) or \"\"\n else:\n ext = \"\"\n fd, name = tempfile.mkstemp(ext, \"mproxy\")\n os.write(fd, data)\n os.close(fd)\n\n # read-only to remind the user that this is a view function\n os.chmod(name, stat.S_IREAD)\n\n cmd = None\n shell 
= False\n\n if contenttype:\n c = mailcap.getcaps()\n cmd, _ = mailcap.findmatch(c, contenttype, filename=name)\n if cmd:\n shell = True\n if not cmd:\n # hm which one should get priority?\n c = os.environ.get(\"PAGER\") or os.environ.get(\"EDITOR\")\n if not c:\n c = \"less\"\n cmd = shlex.split(c)\n cmd.append(name)\n self.ui.stop()\n try:\n subprocess.call(cmd, shell=shell)\n except:\n signals.status_message.send(\n message=\"Can't start external viewer: %s\" % \" \".join(c)\n )\n self.ui.start()\n os.unlink(name)\n\n def set_palette(self, name):\n self.palette = name\n self.ui.register_palette(\n palettes.palettes[name].palette(self.palette_transparent)\n )\n self.ui.clear()\n\n def ticker(self, *userdata):\n changed = self.tick(timeout=0)\n if changed:\n self.loop.draw_screen()\n signals.update_settings.send()\n self.loop.set_alarm_in(0.01, self.ticker)\n\n def run(self):\n self.ui = urwid.raw_display.Screen()\n self.ui.set_terminal_properties(256)\n self.set_palette(self.palette)\n self.loop = urwid.MainLoop(\n urwid.SolidFill(\"x\"),\n screen = self.ui,\n handle_mouse = not self.options.no_mouse,\n )\n self.ab = statusbar.ActionBar()\n\n if self.options.rfile:\n ret = self.load_flows_path(self.options.rfile)\n if ret and self.state.flow_count():\n signals.add_log(\n \"File truncated or corrupted. \"\n \"Loaded as many flows as possible.\",\n \"error\"\n )\n elif ret and not self.state.flow_count():\n self.shutdown()\n print(\"Could not load file: {}\".format(ret), file=sys.stderr)\n sys.exit(1)\n\n self.loop.set_alarm_in(0.01, self.ticker)\n if self.options.http2 and not tcp.HAS_ALPN: # pragma: no cover\n def http2err(*args, **kwargs):\n signals.status_message.send(\n message = \"HTTP/2 disabled - OpenSSL 1.0.2+ required.\"\n \" Use --no-http2 to silence this warning.\",\n expire=5\n )\n self.loop.set_alarm_in(0.01, http2err)\n\n # It's not clear why we need to handle this explicitly - without this,\n # mitmproxy hangs on keyboard interrupt. 
Remove if we ever figure it\n # out.\n def exit(s, f):\n raise urwid.ExitMainLoop\n signal.signal(signal.SIGINT, exit)\n\n self.loop.set_alarm_in(\n 0.0001,\n lambda *args: self.view_flowlist()\n )\n\n self.start()\n try:\n self.loop.run()\n except Exception:\n self.loop.stop()\n sys.stdout.flush()\n print(traceback.format_exc(), file=sys.stderr)\n print(\"mitmproxy has crashed!\", file=sys.stderr)\n print(\"Please lodge a bug report at:\", file=sys.stderr)\n print(\"\\thttps://github.com/mitmproxy/mitmproxy\", file=sys.stderr)\n print(\"Shutting down...\", file=sys.stderr)\n sys.stderr.flush()\n self.shutdown()\n\n def view_help(self, helpctx):\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n help.HelpView(helpctx),\n None,\n statusbar.StatusBar(self, help.footer),\n None\n )\n )\n\n def view_options(self):\n for i in self.view_stack:\n if isinstance(i[\"body\"], options.Options):\n return\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n options.Options(self),\n None,\n statusbar.StatusBar(self, options.footer),\n options.help_context,\n )\n )\n\n def view_palette_picker(self):\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n palettepicker.PalettePicker(self),\n None,\n statusbar.StatusBar(self, palettepicker.footer),\n palettepicker.help_context,\n )\n )\n\n def view_grideditor(self, ge):\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n ge,\n None,\n statusbar.StatusBar(self, grideditor.FOOTER),\n ge.make_help()\n )\n )\n\n def view_flowlist(self):\n if self.ui.started:\n self.ui.clear()\n if self.state.follow_focus:\n self.state.set_focus(self.state.flow_count())\n\n if self.options.eventlog:\n body = flowlist.BodyPile(self)\n else:\n body = flowlist.FlowListBox(self)\n\n if self.follow:\n self.toggle_follow_flows()\n\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n body,\n None,\n statusbar.StatusBar(self, flowlist.footer),\n flowlist.help_context\n )\n )\n\n def view_flow(self, flow, tab_offset=0):\n self.state.set_focus_flow(flow)\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n flowview.FlowView(self, self.state, flow, tab_offset),\n flowview.FlowViewHeader(self, flow),\n statusbar.StatusBar(self, flowview.footer),\n flowview.help_context\n )\n )\n\n def _write_flows(self, path, flows):\n if not path:\n return\n path = os.path.expanduser(path)\n try:\n f = open(path, \"wb\")\n fw = flow.FlowWriter(f)\n for i in flows:\n fw.add(i)\n f.close()\n except IOError as v:\n signals.status_message.send(message=v.strerror)\n\n def save_one_flow(self, path, flow):\n return self._write_flows(path, [flow])\n\n def save_flows(self, path):\n return self._write_flows(path, self.state.view)\n\n def load_flows_callback(self, path):\n if not path:\n return\n ret = self.load_flows_path(path)\n return ret or \"Flows loaded from %s\" % path\n\n def load_flows_path(self, path):\n reterr = None\n try:\n flow.FlowMaster.load_flows_file(self, path)\n except exceptions.FlowReadException as e:\n reterr = str(e)\n signals.flowlist_change.send(self)\n return reterr\n\n def accept_all(self):\n self.state.accept_all(self)\n\n def set_limit(self, txt):\n v = self.state.set_limit(txt)\n signals.flowlist_change.send(self)\n return v\n\n def set_intercept(self, txt):\n return self.state.set_intercept(txt)\n\n def change_default_display_mode(self, t):\n v = contentviews.get_by_shortcut(t)\n self.state.default_body_view = v\n self.refresh_focus()\n\n def 
edit_scripts(self, scripts):\n self.options.scripts = [x[0] for x in scripts]\n\n def stop_client_playback_prompt(self, a):\n if a != \"n\":\n self.stop_client_playback()\n\n def stop_server_playback_prompt(self, a):\n if a != \"n\":\n self.stop_server_playback()\n\n def quit(self, a):\n if a != \"n\":\n raise urwid.ExitMainLoop\n\n def shutdown(self):\n self.state.killall(self)\n flow.FlowMaster.shutdown(self)\n\n def clear_flows(self):\n self.state.clear()\n signals.flowlist_change.send(self)\n\n def toggle_follow_flows(self):\n # toggle flow follow\n self.state.follow_focus = not self.state.follow_focus\n # jump to most recent flow if follow is now on\n if self.state.follow_focus:\n self.state.set_focus(self.state.flow_count())\n signals.flowlist_change.send(self)\n\n def delete_flow(self, f):\n self.state.delete_flow(f)\n signals.flowlist_change.send(self)\n\n def refresh_focus(self):\n if self.state.view:\n signals.flow_change.send(\n self,\n flow = self.state.view[self.state.focus]\n )\n\n def process_flow(self, f):\n should_intercept = any(\n [\n self.state.intercept and f.match(self.state.intercept) and not f.request.is_replay,\n f.intercepted,\n ]\n )\n if should_intercept:\n f.intercept(self)\n f.reply.take()\n signals.flowlist_change.send(self)\n signals.flow_change.send(self, flow = f)\n\n def clear_events(self):\n self.logbuffer[:] = []\n\n # Handlers\n @controller.handler\n def error(self, f):\n f = flow.FlowMaster.error(self, f)\n if f:\n self.process_flow(f)\n return f\n\n @controller.handler\n def request(self, f):\n f = flow.FlowMaster.request(self, f)\n if f:\n self.process_flow(f)\n return f\n\n @controller.handler\n def response(self, f):\n f = flow.FlowMaster.response(self, f)\n if f:\n self.process_flow(f)\n return f\n\n @controller.handler\n def tcp_message(self, f):\n super(ConsoleMaster, self).tcp_message(f)\n message = f.messages[-1]\n direction = \"->\" if message.from_client else \"<-\"\n self.add_log(\"{client} {direction} tcp {direction} {server}\".format(\n client=repr(f.client_conn.address),\n server=repr(f.server_conn.address),\n direction=direction,\n ), \"info\")\n self.add_log(strutils.bytes_to_escaped_str(message.content), \"debug\")\n", "path": "mitmproxy/console/master.py" } ]
[ { "content": "from __future__ import absolute_import, print_function, division\n\nimport mailcap\nimport mimetypes\nimport os\nimport os.path\nimport shlex\nimport signal\nimport stat\nimport subprocess\nimport sys\nimport tempfile\nimport traceback\nimport weakref\n\nimport urwid\nfrom typing import Optional # noqa\n\nfrom mitmproxy import builtins\nfrom mitmproxy import contentviews\nfrom mitmproxy import controller\nfrom mitmproxy import exceptions\nfrom mitmproxy import flow\nfrom mitmproxy import script\nfrom mitmproxy import utils\nimport mitmproxy.options\nfrom mitmproxy.console import flowlist\nfrom mitmproxy.console import flowview\nfrom mitmproxy.console import grideditor\nfrom mitmproxy.console import help\nfrom mitmproxy.console import options\nfrom mitmproxy.console import palettepicker\nfrom mitmproxy.console import palettes\nfrom mitmproxy.console import signals\nfrom mitmproxy.console import statusbar\nfrom mitmproxy.console import window\nfrom mitmproxy.filt import FMarked\nfrom netlib import tcp, strutils\n\nEVENTLOG_SIZE = 500\n\n\nclass ConsoleState(flow.State):\n\n def __init__(self):\n flow.State.__init__(self)\n self.focus = None\n self.follow_focus = None\n self.default_body_view = contentviews.get(\"Auto\")\n self.flowsettings = weakref.WeakKeyDictionary()\n self.last_search = None\n self.last_filter = \"\"\n self.mark_filter = False\n\n def __setattr__(self, name, value):\n self.__dict__[name] = value\n signals.update_settings.send(self)\n\n def add_flow_setting(self, flow, key, value):\n d = self.flowsettings.setdefault(flow, {})\n d[key] = value\n\n def get_flow_setting(self, flow, key, default=None):\n d = self.flowsettings.get(flow, {})\n return d.get(key, default)\n\n def add_flow(self, f):\n super(ConsoleState, self).add_flow(f)\n self.update_focus()\n return f\n\n def update_flow(self, f):\n super(ConsoleState, self).update_flow(f)\n self.update_focus()\n return f\n\n def set_limit(self, limit):\n ret = super(ConsoleState, self).set_limit(limit)\n self.set_focus(self.focus)\n return ret\n\n def get_focus(self):\n if not self.view or self.focus is None:\n return None, None\n return self.view[self.focus], self.focus\n\n def set_focus(self, idx):\n if self.view:\n if idx is None or idx < 0:\n idx = 0\n elif idx >= len(self.view):\n idx = len(self.view) - 1\n self.focus = idx\n else:\n self.focus = None\n\n def update_focus(self):\n if self.focus is None:\n self.set_focus(0)\n elif self.follow_focus:\n self.set_focus(len(self.view) - 1)\n\n def set_focus_flow(self, f):\n self.set_focus(self.view.index(f))\n\n def get_from_pos(self, pos):\n if len(self.view) <= pos or pos < 0:\n return None, None\n return self.view[pos], pos\n\n def get_next(self, pos):\n return self.get_from_pos(pos + 1)\n\n def get_prev(self, pos):\n return self.get_from_pos(pos - 1)\n\n def delete_flow(self, f):\n if f in self.view and self.view.index(f) <= self.focus:\n self.focus -= 1\n if self.focus < 0:\n self.focus = None\n ret = super(ConsoleState, self).delete_flow(f)\n self.set_focus(self.focus)\n return ret\n\n def get_nearest_matching_flow(self, flow, filt):\n fidx = self.view.index(flow)\n dist = 1\n\n fprev = fnext = True\n while fprev or fnext:\n fprev, _ = self.get_from_pos(fidx - dist)\n fnext, _ = self.get_from_pos(fidx + dist)\n\n if fprev and fprev.match(filt):\n return fprev\n elif fnext and fnext.match(filt):\n return fnext\n\n dist += 1\n\n return None\n\n def enable_marked_filter(self):\n marked_flows = [f for f in self.flows if f.marked]\n if not marked_flows:\n 
return\n\n marked_filter = \"~%s\" % FMarked.code\n\n # Save Focus\n last_focus, _ = self.get_focus()\n nearest_marked = self.get_nearest_matching_flow(last_focus, marked_filter)\n\n self.last_filter = self.limit_txt\n self.set_limit(marked_filter)\n\n # Restore Focus\n if last_focus.marked:\n self.set_focus_flow(last_focus)\n else:\n self.set_focus_flow(nearest_marked)\n\n self.mark_filter = True\n\n def disable_marked_filter(self):\n marked_filter = \"~%s\" % FMarked.code\n\n # Save Focus\n last_focus, _ = self.get_focus()\n nearest_marked = self.get_nearest_matching_flow(last_focus, marked_filter)\n\n self.set_limit(self.last_filter)\n self.last_filter = \"\"\n\n # Restore Focus\n if last_focus.marked:\n self.set_focus_flow(last_focus)\n else:\n self.set_focus_flow(nearest_marked)\n\n self.mark_filter = False\n\n def clear(self):\n marked_flows = [f for f in self.view if f.marked]\n super(ConsoleState, self).clear()\n\n for f in marked_flows:\n self.add_flow(f)\n f.marked = True\n\n if len(self.flows.views) == 0:\n self.focus = None\n else:\n self.focus = 0\n self.set_focus(self.focus)\n\n\nclass Options(mitmproxy.options.Options):\n def __init__(\n self,\n eventlog=False, # type: bool\n follow=False, # type: bool\n intercept=False, # type: bool\n limit=None, # type: Optional[str]\n palette=None, # type: Optional[str]\n palette_transparent=False, # type: bool\n no_mouse=False, # type: bool\n **kwargs\n ):\n self.eventlog = eventlog\n self.follow = follow\n self.intercept = intercept\n self.limit = limit\n self.palette = palette\n self.palette_transparent = palette_transparent\n self.no_mouse = no_mouse\n super(Options, self).__init__(**kwargs)\n\n\nclass ConsoleMaster(flow.FlowMaster):\n palette = []\n\n def __init__(self, server, options):\n flow.FlowMaster.__init__(self, options, server, ConsoleState())\n self.stream_path = None\n # This line is just for type hinting\n self.options = self.options # type: Options\n self.options.errored.connect(self.options_error)\n\n r = self.set_intercept(options.intercept)\n if r:\n print(\"Intercept error: {}\".format(r), file=sys.stderr)\n sys.exit(1)\n\n if options.limit:\n self.set_limit(options.limit)\n\n self.set_stream_large_bodies(options.stream_large_bodies)\n\n self.palette = options.palette\n self.palette_transparent = options.palette_transparent\n\n self.logbuffer = urwid.SimpleListWalker([])\n self.follow = options.follow\n\n if options.client_replay:\n self.client_playback_path(options.client_replay)\n\n if options.server_replay:\n self.server_playback_path(options.server_replay)\n\n self.view_stack = []\n\n if options.app:\n self.start_app(self.options.app_host, self.options.app_port)\n\n signals.call_in.connect(self.sig_call_in)\n signals.pop_view_state.connect(self.sig_pop_view_state)\n signals.push_view_state.connect(self.sig_push_view_state)\n signals.sig_add_log.connect(self.sig_add_log)\n self.addons.add(options, *builtins.default_addons())\n\n def __setattr__(self, name, value):\n self.__dict__[name] = value\n signals.update_settings.send(self)\n\n def options_error(self, opts, exc):\n signals.status_message.send(\n message=str(exc),\n expire=1\n )\n\n def sig_add_log(self, sender, e, level):\n if self.options.verbosity < utils.log_tier(level):\n return\n\n if level == \"error\":\n signals.status_message.send(\n message = \"Error: %s\" % str(e)\n )\n e = urwid.Text((\"error\", str(e)))\n else:\n e = urwid.Text(str(e))\n self.logbuffer.append(e)\n if len(self.logbuffer) > EVENTLOG_SIZE:\n self.logbuffer.pop(0)\n 
self.logbuffer.set_focus(len(self.logbuffer) - 1)\n\n def add_log(self, e, level):\n signals.add_log(e, level)\n\n def sig_call_in(self, sender, seconds, callback, args=()):\n def cb(*_):\n return callback(*args)\n self.loop.set_alarm_in(seconds, cb)\n\n def sig_pop_view_state(self, sender):\n if len(self.view_stack) > 1:\n self.view_stack.pop()\n self.loop.widget = self.view_stack[-1]\n else:\n signals.status_prompt_onekey.send(\n self,\n prompt = \"Quit\",\n keys = (\n (\"yes\", \"y\"),\n (\"no\", \"n\"),\n ),\n callback = self.quit,\n )\n\n def sig_push_view_state(self, sender, window):\n self.view_stack.append(window)\n self.loop.widget = window\n self.loop.draw_screen()\n\n def _run_script_method(self, method, s, f):\n status, val = s.run(method, f)\n if val:\n if status:\n signals.add_log(\"Method %s return: %s\" % (method, val), \"debug\")\n else:\n signals.add_log(\n \"Method %s error: %s\" %\n (method, val[1]), \"error\")\n\n def run_script_once(self, command, f):\n if not command:\n return\n signals.add_log(\"Running script on flow: %s\" % command, \"debug\")\n\n try:\n s = script.Script(command)\n s.load()\n except script.ScriptException as e:\n signals.status_message.send(\n message='Error loading \"{}\".'.format(command)\n )\n signals.add_log('Error loading \"{}\":\\n{}'.format(command, e), \"error\")\n return\n\n if f.request:\n self._run_script_method(\"request\", s, f)\n if f.response:\n self._run_script_method(\"response\", s, f)\n if f.error:\n self._run_script_method(\"error\", s, f)\n s.unload()\n signals.flow_change.send(self, flow = f)\n\n def toggle_eventlog(self):\n self.options.eventlog = not self.options.eventlog\n signals.pop_view_state.send(self)\n self.view_flowlist()\n\n def _readflows(self, path):\n \"\"\"\n Utitility function that reads a list of flows\n or prints an error to the UI if that fails.\n Returns\n - None, if there was an error.\n - a list of flows, otherwise.\n \"\"\"\n try:\n return flow.read_flows_from_paths(path)\n except exceptions.FlowReadException as e:\n signals.status_message.send(message=str(e))\n\n def client_playback_path(self, path):\n if not isinstance(path, list):\n path = [path]\n flows = self._readflows(path)\n if flows:\n self.start_client_playback(flows, False)\n\n def server_playback_path(self, path):\n if not isinstance(path, list):\n path = [path]\n flows = self._readflows(path)\n if flows:\n self.start_server_playback(\n flows,\n self.options.kill, self.options.rheaders,\n False, self.options.nopop,\n self.options.replay_ignore_params,\n self.options.replay_ignore_content,\n self.options.replay_ignore_payload_params,\n self.options.replay_ignore_host\n )\n\n def spawn_editor(self, data):\n fd, name = tempfile.mkstemp('', \"mproxy\")\n os.write(fd, data)\n os.close(fd)\n c = os.environ.get(\"EDITOR\")\n # if no EDITOR is set, assume 'vi'\n if not c:\n c = \"vi\"\n cmd = shlex.split(c)\n cmd.append(name)\n self.ui.stop()\n try:\n subprocess.call(cmd)\n except:\n signals.status_message.send(\n message = \"Can't start editor: %s\" % \" \".join(c)\n )\n else:\n data = open(name, \"rb\").read()\n self.ui.start()\n os.unlink(name)\n return data\n\n def spawn_external_viewer(self, data, contenttype):\n if contenttype:\n contenttype = contenttype.split(\";\")[0]\n ext = mimetypes.guess_extension(contenttype) or \"\"\n else:\n ext = \"\"\n fd, name = tempfile.mkstemp(ext, \"mproxy\")\n os.write(fd, data)\n os.close(fd)\n\n # read-only to remind the user that this is a view function\n os.chmod(name, stat.S_IREAD)\n\n cmd = None\n shell 
= False\n\n if contenttype:\n c = mailcap.getcaps()\n cmd, _ = mailcap.findmatch(c, contenttype, filename=name)\n if cmd:\n shell = True\n if not cmd:\n # hm which one should get priority?\n c = os.environ.get(\"PAGER\") or os.environ.get(\"EDITOR\")\n if not c:\n c = \"less\"\n cmd = shlex.split(c)\n cmd.append(name)\n self.ui.stop()\n try:\n subprocess.call(cmd, shell=shell)\n except:\n signals.status_message.send(\n message=\"Can't start external viewer: %s\" % \" \".join(c)\n )\n self.ui.start()\n os.unlink(name)\n\n def set_palette(self, name):\n self.palette = name\n self.ui.register_palette(\n palettes.palettes[name].palette(self.palette_transparent)\n )\n self.ui.clear()\n\n def ticker(self, *userdata):\n changed = self.tick(timeout=0)\n if changed:\n self.loop.draw_screen()\n signals.update_settings.send()\n self.loop.set_alarm_in(0.01, self.ticker)\n\n def run(self):\n self.ui = urwid.raw_display.Screen()\n self.ui.set_terminal_properties(256)\n self.set_palette(self.palette)\n self.loop = urwid.MainLoop(\n urwid.SolidFill(\"x\"),\n screen = self.ui,\n handle_mouse = not self.options.no_mouse,\n )\n self.ab = statusbar.ActionBar()\n\n if self.options.rfile:\n ret = self.load_flows_path(self.options.rfile)\n if ret and self.state.flow_count():\n signals.add_log(\n \"File truncated or corrupted. \"\n \"Loaded as many flows as possible.\",\n \"error\"\n )\n elif ret and not self.state.flow_count():\n self.shutdown()\n print(\"Could not load file: {}\".format(ret), file=sys.stderr)\n sys.exit(1)\n\n self.loop.set_alarm_in(0.01, self.ticker)\n if self.options.http2 and not tcp.HAS_ALPN: # pragma: no cover\n def http2err(*args, **kwargs):\n signals.status_message.send(\n message = \"HTTP/2 disabled - OpenSSL 1.0.2+ required.\"\n \" Use --no-http2 to silence this warning.\",\n expire=5\n )\n self.loop.set_alarm_in(0.01, http2err)\n\n # It's not clear why we need to handle this explicitly - without this,\n # mitmproxy hangs on keyboard interrupt. 
Remove if we ever figure it\n # out.\n def exit(s, f):\n raise urwid.ExitMainLoop\n signal.signal(signal.SIGINT, exit)\n\n self.loop.set_alarm_in(\n 0.0001,\n lambda *args: self.view_flowlist()\n )\n\n self.start()\n try:\n self.loop.run()\n except Exception:\n self.loop.stop()\n sys.stdout.flush()\n print(traceback.format_exc(), file=sys.stderr)\n print(\"mitmproxy has crashed!\", file=sys.stderr)\n print(\"Please lodge a bug report at:\", file=sys.stderr)\n print(\"\\thttps://github.com/mitmproxy/mitmproxy\", file=sys.stderr)\n print(\"Shutting down...\", file=sys.stderr)\n sys.stderr.flush()\n self.shutdown()\n\n def view_help(self, helpctx):\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n help.HelpView(helpctx),\n None,\n statusbar.StatusBar(self, help.footer),\n None\n )\n )\n\n def view_options(self):\n for i in self.view_stack:\n if isinstance(i[\"body\"], options.Options):\n return\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n options.Options(self),\n None,\n statusbar.StatusBar(self, options.footer),\n options.help_context,\n )\n )\n\n def view_palette_picker(self):\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n palettepicker.PalettePicker(self),\n None,\n statusbar.StatusBar(self, palettepicker.footer),\n palettepicker.help_context,\n )\n )\n\n def view_grideditor(self, ge):\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n ge,\n None,\n statusbar.StatusBar(self, grideditor.FOOTER),\n ge.make_help()\n )\n )\n\n def view_flowlist(self):\n if self.ui.started:\n self.ui.clear()\n if self.state.follow_focus:\n self.state.set_focus(self.state.flow_count())\n\n if self.options.eventlog:\n body = flowlist.BodyPile(self)\n else:\n body = flowlist.FlowListBox(self)\n\n if self.follow:\n self.toggle_follow_flows()\n\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n body,\n None,\n statusbar.StatusBar(self, flowlist.footer),\n flowlist.help_context\n )\n )\n\n def view_flow(self, flow, tab_offset=0):\n self.state.set_focus_flow(flow)\n signals.push_view_state.send(\n self,\n window = window.Window(\n self,\n flowview.FlowView(self, self.state, flow, tab_offset),\n flowview.FlowViewHeader(self, flow),\n statusbar.StatusBar(self, flowview.footer),\n flowview.help_context\n )\n )\n\n def _write_flows(self, path, flows):\n if not path:\n return\n path = os.path.expanduser(path)\n try:\n f = open(path, \"wb\")\n fw = flow.FlowWriter(f)\n for i in flows:\n fw.add(i)\n f.close()\n except IOError as v:\n signals.status_message.send(message=v.strerror)\n\n def save_one_flow(self, path, flow):\n return self._write_flows(path, [flow])\n\n def save_flows(self, path):\n return self._write_flows(path, self.state.view)\n\n def load_flows_callback(self, path):\n if not path:\n return\n ret = self.load_flows_path(path)\n return ret or \"Flows loaded from %s\" % path\n\n def load_flows_path(self, path):\n reterr = None\n try:\n flow.FlowMaster.load_flows_file(self, path)\n except exceptions.FlowReadException as e:\n reterr = str(e)\n signals.flowlist_change.send(self)\n return reterr\n\n def accept_all(self):\n self.state.accept_all(self)\n\n def set_limit(self, txt):\n v = self.state.set_limit(txt)\n signals.flowlist_change.send(self)\n return v\n\n def set_intercept(self, txt):\n return self.state.set_intercept(txt)\n\n def change_default_display_mode(self, t):\n v = contentviews.get_by_shortcut(t)\n self.state.default_body_view = v\n self.refresh_focus()\n\n def 
edit_scripts(self, scripts):\n self.options.scripts = [x[0] for x in scripts]\n\n def stop_client_playback_prompt(self, a):\n if a != \"n\":\n self.stop_client_playback()\n\n def stop_server_playback_prompt(self, a):\n if a != \"n\":\n self.stop_server_playback()\n\n def quit(self, a):\n if a != \"n\":\n raise urwid.ExitMainLoop\n\n def shutdown(self):\n self.state.killall(self)\n flow.FlowMaster.shutdown(self)\n\n def clear_flows(self):\n self.state.clear()\n signals.flowlist_change.send(self)\n\n def toggle_follow_flows(self):\n # toggle flow follow\n self.state.follow_focus = not self.state.follow_focus\n # jump to most recent flow if follow is now on\n if self.state.follow_focus:\n self.state.set_focus(self.state.flow_count())\n signals.flowlist_change.send(self)\n\n def delete_flow(self, f):\n self.state.delete_flow(f)\n signals.flowlist_change.send(self)\n\n def refresh_focus(self):\n if self.state.view:\n signals.flow_change.send(\n self,\n flow = self.state.view[self.state.focus]\n )\n\n def process_flow(self, f):\n should_intercept = any(\n [\n self.state.intercept and f.match(self.state.intercept) and not f.request.is_replay,\n f.intercepted,\n ]\n )\n if should_intercept:\n f.intercept(self)\n f.reply.take()\n signals.flowlist_change.send(self)\n signals.flow_change.send(self, flow = f)\n\n def clear_events(self):\n self.logbuffer[:] = []\n\n # Handlers\n @controller.handler\n def error(self, f):\n f = flow.FlowMaster.error(self, f)\n if f:\n self.process_flow(f)\n return f\n\n @controller.handler\n def request(self, f):\n f = flow.FlowMaster.request(self, f)\n if f:\n self.process_flow(f)\n return f\n\n @controller.handler\n def response(self, f):\n f = flow.FlowMaster.response(self, f)\n if f:\n self.process_flow(f)\n return f\n\n @controller.handler\n def tcp_message(self, f):\n super(ConsoleMaster, self).tcp_message(f)\n message = f.messages[-1]\n direction = \"->\" if message.from_client else \"<-\"\n self.add_log(\"{client} {direction} tcp {direction} {server}\".format(\n client=repr(f.client_conn.address),\n server=repr(f.server_conn.address),\n direction=direction,\n ), \"info\")\n self.add_log(strutils.bytes_to_escaped_str(message.content), \"debug\")\n", "path": "mitmproxy/console/master.py" } ]
diff --git a/mitmproxy/console/master.py b/mitmproxy/console/master.py index f7c99ecb9c..db4141471b 100644 --- a/mitmproxy/console/master.py +++ b/mitmproxy/console/master.py @@ -182,7 +182,7 @@ def disable_marked_filter(self): self.mark_filter = False def clear(self): - marked_flows = [f for f in self.state.view if f.marked] + marked_flows = [f for f in self.view if f.marked] super(ConsoleState, self).clear() for f in marked_flows:
sql-machine-learning__elasticdl-1810
Worker occasionally crashes when it reports an evaluation task result.
The error log:
```
status = StatusCode.UNKNOWN
details = "Exception calling application: 'NoneType' object has no attribute 'complete_task'"
debug_error_string = "{"created":"@1582833503.778925101","description":"Error received from peer ipv4:11.253.195.11:50001","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Exception calling application: 'NoneType' object has no attribute 'complete_task'","grpc_status":2}"
```
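Editorial note on this record: the AttributeError in the log is raised on the master when a worker's task-completion report arrives after the current evaluation job has already been cleared, so `self._eval_job` is `None`. A distilled, hypothetical reproduction of that failure mode is below; the class and attribute names follow the report, this is not ElasticDL code, and the patch further down in this record adds the corresponding guard.

```python
# Minimal illustration only, not ElasticDL code. The real service resets
# self._eval_job to None once an evaluation job finishes, so a late
# complete_task() report from a worker hits the None object.
class EvaluationService:
    def __init__(self):
        self._eval_job = None  # evaluation job already finished and cleared

    def complete_task(self):
        # Without a None check this reproduces the reported error.
        self._eval_job.complete_task()


try:
    EvaluationService().complete_task()
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'complete_task'
```

The guard added in the patch (return early when `self._eval_job is None`) turns such late reports into a no-op instead of a crash.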
[ { "content": "import threading\nimport time\nfrom threading import Thread\n\nfrom elasticdl.proto import elasticdl_pb2\nfrom elasticdl.python.common.evaluation_utils import EvaluationMetrics\nfrom elasticdl.python.common.log_utils import default_logger as logger\nfrom elasticdl.python.common.tensor_utils import pb_to_ndarray\n\n\nclass EvaluationJob(object):\n \"\"\"Representation of an evaluation job\"\"\"\n\n def __init__(self, metrics_dict, model_version, total_tasks=-1):\n \"\"\"\n Args:\n metrics_dict: A python dictionary. If model has only one output,\n `metrics_dict` is a dictionary of `{metric_name: metric}`,\n i.e. `{\"acc\": tf.keras.metrics.Accuracy()}`.\n If model has multiple outputs, `metric_dict` is a dictionary of\n `{output_name: {metric_name: metric}}`,\n i.e. `{\n \"output_a\": {\"acc\": tf.keras.metrics.Accuracy()},\n \"output_b\": {\"auc\": tf.keras.metrics.AUC()},\n }`. Note that for model with multiple outputs, each metric\n only uses one output.\n model_version: The version of the model to be evaluated.\n total_tasks: The number of evaluation tasks.\n \"\"\"\n\n self.model_version = model_version\n self._total_tasks = total_tasks\n self._completed_tasks = 0\n self.evaluation_metrics = EvaluationMetrics(metrics_dict)\n\n def complete_task(self):\n self._completed_tasks += 1\n\n def finished(self):\n return self._completed_tasks >= self._total_tasks\n\n def report_evaluation_metrics(self, model_outputs_pb, labels):\n labels = pb_to_ndarray(labels)\n model_outputs = {}\n for name, tensor_pb in model_outputs_pb.items():\n model_outputs[name] = pb_to_ndarray(tensor_pb)\n self.evaluation_metrics.update_evaluation_metrics(\n model_outputs, labels\n )\n\n\nclass _EvaluationTrigger(Thread):\n \"\"\"A trigger which generates evaluation tasks periodically\"\"\"\n\n def __init__(self, eval_service, start_delay_secs, throttle_secs):\n Thread.__init__(self)\n self._eval_service = eval_service\n self._stopper = threading.Event()\n self._throttle_secs = throttle_secs\n self._eval_min_time = time.time() + start_delay_secs\n\n def stop(self):\n self._stopper.set()\n\n def _wait_enough_time(self, cur_time_secs, previous_round_start_secs):\n if cur_time_secs < self._eval_min_time:\n return False\n if (\n previous_round_start_secs != -1\n and cur_time_secs - previous_round_start_secs < self._throttle_secs\n ):\n return False\n return True\n\n def run(self):\n previous_round_start_secs = -1\n\n while not self._stopper.is_set():\n time_now = time.time()\n if self._wait_enough_time(time_now, previous_round_start_secs):\n # Time is up, add an evaluation task\n self._eval_service.add_evaluation_task(is_time_based_eval=True)\n previous_round_start_secs = time_now\n time.sleep(5)\n\n\nclass EvaluationService(object):\n \"\"\"Evaluation service\"\"\"\n\n def __init__(\n self,\n tensorboard_service,\n task_d,\n start_delay_secs,\n throttle_secs,\n eval_steps,\n eval_only,\n eval_metrics_fn,\n ):\n self._tensorboard_service = tensorboard_service\n self._task_d = task_d\n self._lock = threading.Lock()\n self._eval_job = None\n self.trigger = _EvaluationTrigger(\n self, start_delay_secs, throttle_secs\n )\n self._time_based_eval = throttle_secs > 0\n self._eval_steps = eval_steps\n self._eval_checkpoint_versions = []\n self._last_eval_checkpoint_version = -1\n self._eval_only = eval_only\n self._eval_metrics_fn = eval_metrics_fn\n\n def start(self):\n if self._time_based_eval and not self._eval_only:\n self.trigger.start()\n\n def stop(self):\n if self._time_based_eval and not self._eval_only:\n 
self.trigger.stop()\n\n def set_master_servicer(self, master_servicer):\n self._master_servicer = master_servicer\n\n def init_eval_only_job(self, num_task):\n self._eval_job = EvaluationJob(self._eval_metrics_fn(), -1, num_task)\n\n def add_evaluation_task(\n self, is_time_based_eval, master_locking=True, model_version=None\n ):\n \"\"\"\n Add evaluation task with current model_version.\n \"\"\"\n # Do not create time-based eval after all tasks are done\n if is_time_based_eval and self._task_d.finished():\n return\n if not model_version:\n model_version = self._master_servicer.get_model_version()\n if model_version == self._last_eval_checkpoint_version:\n return\n\n checkpoint_version = model_version\n with self._lock:\n self._eval_checkpoint_versions.append(checkpoint_version)\n self._last_eval_checkpoint_version = checkpoint_version\n self.try_to_create_new_job()\n\n def try_to_create_new_job(self):\n \"\"\"\n Add eval task into task dispatcher if current eval_job is done\n and there are pending eval tasks\n \"\"\"\n with self._lock:\n if self._eval_job is None and self._eval_checkpoint_versions:\n checkpoint_version = self._eval_checkpoint_versions.pop(0)\n self._task_d.create_tasks(\n elasticdl_pb2.EVALUATION, checkpoint_version\n )\n task_count = len(self._task_d._eval_todo)\n if self._eval_job is None:\n self._eval_job = EvaluationJob(\n self._eval_metrics_fn(), checkpoint_version, task_count\n )\n else:\n self._eval_job.model_version = checkpoint_version\n self._eval_job._total_tasks = task_count\n self._eval_job.reset_metric_states()\n return True\n return False\n\n def add_evaluation_task_if_needed(self, master_locking, model_version):\n \"\"\"\n Add step-based evaluation task\n \"\"\"\n if not model_version:\n model_version = self._master_servicer.get_model_version()\n if (\n self._eval_steps\n and model_version % self._eval_steps == 0\n and model_version > self._last_eval_checkpoint_version\n ):\n self.add_evaluation_task(\n is_time_based_eval=False,\n master_locking=master_locking,\n model_version=model_version,\n )\n\n def report_evaluation_metrics(self, model_outputs, labels):\n if self._eval_job is None:\n return False\n with self._lock:\n return self._eval_job.report_evaluation_metrics(\n model_outputs, labels\n )\n\n def complete_task(self):\n self._eval_job.complete_task()\n if self._eval_job.finished():\n evaluation_metrics = (\n self._eval_job.evaluation_metrics.get_evaluation_summary()\n )\n if self._tensorboard_service and evaluation_metrics:\n self._tensorboard_service.write_dict_to_summary(\n evaluation_metrics, version=self._eval_job.model_version\n )\n logger.info(\n \"Evaluation metrics[v=%d]: %s\"\n % (\n self._eval_job.model_version\n if self._eval_job.model_version >= 0\n else self._master_servicer.get_model_version(),\n str(evaluation_metrics),\n )\n )\n if not self._eval_only:\n # delete checkpoint file\n self._eval_job = None\n # create new eval job if possible\n self.try_to_create_new_job()\n return evaluation_metrics\n", "path": "elasticdl/python/master/evaluation_service.py" } ]
[ { "content": "import threading\nimport time\nfrom threading import Thread\n\nfrom elasticdl.proto import elasticdl_pb2\nfrom elasticdl.python.common.evaluation_utils import EvaluationMetrics\nfrom elasticdl.python.common.log_utils import default_logger as logger\nfrom elasticdl.python.common.tensor_utils import pb_to_ndarray\n\n\nclass EvaluationJob(object):\n \"\"\"Representation of an evaluation job\"\"\"\n\n def __init__(self, metrics_dict, model_version, total_tasks=-1):\n \"\"\"\n Args:\n metrics_dict: A python dictionary. If model has only one output,\n `metrics_dict` is a dictionary of `{metric_name: metric}`,\n i.e. `{\"acc\": tf.keras.metrics.Accuracy()}`.\n If model has multiple outputs, `metric_dict` is a dictionary of\n `{output_name: {metric_name: metric}}`,\n i.e. `{\n \"output_a\": {\"acc\": tf.keras.metrics.Accuracy()},\n \"output_b\": {\"auc\": tf.keras.metrics.AUC()},\n }`. Note that for model with multiple outputs, each metric\n only uses one output.\n model_version: The version of the model to be evaluated.\n total_tasks: The number of evaluation tasks.\n \"\"\"\n\n self.model_version = model_version\n self._total_tasks = total_tasks\n self._completed_tasks = 0\n self.evaluation_metrics = EvaluationMetrics(metrics_dict)\n\n def complete_task(self):\n self._completed_tasks += 1\n\n def finished(self):\n return self._completed_tasks >= self._total_tasks\n\n def report_evaluation_metrics(self, model_outputs_pb, labels):\n labels = pb_to_ndarray(labels)\n model_outputs = {}\n for name, tensor_pb in model_outputs_pb.items():\n model_outputs[name] = pb_to_ndarray(tensor_pb)\n self.evaluation_metrics.update_evaluation_metrics(\n model_outputs, labels\n )\n\n\nclass _EvaluationTrigger(Thread):\n \"\"\"A trigger which generates evaluation tasks periodically\"\"\"\n\n def __init__(self, eval_service, start_delay_secs, throttle_secs):\n Thread.__init__(self)\n self._eval_service = eval_service\n self._stopper = threading.Event()\n self._throttle_secs = throttle_secs\n self._eval_min_time = time.time() + start_delay_secs\n\n def stop(self):\n self._stopper.set()\n\n def _wait_enough_time(self, cur_time_secs, previous_round_start_secs):\n if cur_time_secs < self._eval_min_time:\n return False\n if (\n previous_round_start_secs != -1\n and cur_time_secs - previous_round_start_secs < self._throttle_secs\n ):\n return False\n return True\n\n def run(self):\n previous_round_start_secs = -1\n\n while not self._stopper.is_set():\n time_now = time.time()\n if self._wait_enough_time(time_now, previous_round_start_secs):\n # Time is up, add an evaluation task\n self._eval_service.add_evaluation_task(is_time_based_eval=True)\n previous_round_start_secs = time_now\n time.sleep(5)\n\n\nclass EvaluationService(object):\n \"\"\"Evaluation service\"\"\"\n\n def __init__(\n self,\n tensorboard_service,\n task_d,\n start_delay_secs,\n throttle_secs,\n eval_steps,\n eval_only,\n eval_metrics_fn,\n ):\n self._tensorboard_service = tensorboard_service\n self._task_d = task_d\n self._lock = threading.Lock()\n self._eval_job = None\n self.trigger = _EvaluationTrigger(\n self, start_delay_secs, throttle_secs\n )\n self._time_based_eval = throttle_secs > 0\n self._eval_steps = eval_steps\n self._eval_checkpoint_versions = []\n self._last_eval_checkpoint_version = -1\n self._eval_only = eval_only\n self._eval_metrics_fn = eval_metrics_fn\n\n def start(self):\n if self._time_based_eval and not self._eval_only:\n self.trigger.start()\n\n def stop(self):\n if self._time_based_eval and not self._eval_only:\n 
self.trigger.stop()\n\n def set_master_servicer(self, master_servicer):\n self._master_servicer = master_servicer\n\n def init_eval_only_job(self, num_task):\n self._eval_job = EvaluationJob(self._eval_metrics_fn(), -1, num_task)\n\n def add_evaluation_task(\n self, is_time_based_eval, master_locking=True, model_version=None\n ):\n \"\"\"\n Add evaluation task with current model_version.\n \"\"\"\n # Do not create time-based eval after all tasks are done\n if is_time_based_eval and self._task_d.finished():\n return\n if not model_version:\n model_version = self._master_servicer.get_model_version()\n if model_version == self._last_eval_checkpoint_version:\n return\n\n checkpoint_version = model_version\n with self._lock:\n self._eval_checkpoint_versions.append(checkpoint_version)\n self._last_eval_checkpoint_version = checkpoint_version\n self.try_to_create_new_job()\n\n def try_to_create_new_job(self):\n \"\"\"\n Add eval task into task dispatcher if current eval_job is done\n and there are pending eval tasks\n \"\"\"\n with self._lock:\n if self._eval_job is None and self._eval_checkpoint_versions:\n checkpoint_version = self._eval_checkpoint_versions.pop(0)\n self._task_d.create_tasks(\n elasticdl_pb2.EVALUATION, checkpoint_version\n )\n task_count = len(self._task_d._eval_todo)\n if self._eval_job is None:\n self._eval_job = EvaluationJob(\n self._eval_metrics_fn(), checkpoint_version, task_count\n )\n else:\n self._eval_job.model_version = checkpoint_version\n self._eval_job._total_tasks = task_count\n self._eval_job.reset_metric_states()\n return True\n return False\n\n def add_evaluation_task_if_needed(self, master_locking, model_version):\n \"\"\"\n Add step-based evaluation task\n \"\"\"\n if not model_version:\n model_version = self._master_servicer.get_model_version()\n if (\n self._eval_steps\n and model_version % self._eval_steps == 0\n and model_version > self._last_eval_checkpoint_version\n ):\n self.add_evaluation_task(\n is_time_based_eval=False,\n master_locking=master_locking,\n model_version=model_version,\n )\n\n def report_evaluation_metrics(self, model_outputs, labels):\n if self._eval_job is None:\n return False\n with self._lock:\n return self._eval_job.report_evaluation_metrics(\n model_outputs, labels\n )\n\n def complete_task(self):\n if self._eval_job is None:\n return\n self._eval_job.complete_task()\n if self._eval_job.finished():\n evaluation_metrics = (\n self._eval_job.evaluation_metrics.get_evaluation_summary()\n )\n if self._tensorboard_service and evaluation_metrics:\n self._tensorboard_service.write_dict_to_summary(\n evaluation_metrics, version=self._eval_job.model_version\n )\n logger.info(\n \"Evaluation metrics[v=%d]: %s\"\n % (\n self._eval_job.model_version\n if self._eval_job.model_version >= 0\n else self._master_servicer.get_model_version(),\n str(evaluation_metrics),\n )\n )\n if not self._eval_only:\n # delete checkpoint file\n self._eval_job = None\n # create new eval job if possible\n self.try_to_create_new_job()\n return evaluation_metrics\n", "path": "elasticdl/python/master/evaluation_service.py" } ]
diff --git a/elasticdl/python/master/evaluation_service.py b/elasticdl/python/master/evaluation_service.py index 310e4493a..8cd76131d 100644 --- a/elasticdl/python/master/evaluation_service.py +++ b/elasticdl/python/master/evaluation_service.py @@ -194,6 +194,8 @@ def report_evaluation_metrics(self, model_outputs, labels): ) def complete_task(self): + if self._eval_job is None: + return self._eval_job.complete_task() if self._eval_job.finished(): evaluation_metrics = (
googleapis__python-bigquery-974
Python to construct CASE WHEN update SQL statement I try to update 2K rows in BQ ``` def update_bq_ads_status_failed(self, update_ads): affected_rows = 0 for update_ads_chunk in split(update_ads, _UPDATE_CHUNK_SIZE): ad_ids = [item["ad_id"] for item in update_ads_chunk] removal_errors = [item["removal_error"] for item in update_ads_chunk] update_removal_error = "" for ad_id, removal_error in zip(ad_ids, removal_errors): update_removal_error = update_removal_error + \ f''' WHEN ad_id = '{ad_id}' Then '{removal_error}' ''' affected_rows += self.update_bq_ads_status(f""" UPDATE '{table_full_name}' SET status = 'Failed Removing' SET removal_error = CASE {update_removal_error} END WHERE ad_id IN {str(ad_ids)} """) return affected_rows ``` I'm getting this error. I know it's too vague and not possible to debug like this. > timeout=300.0, headers={'X-Server-Timeout': '300.0', > 'Accept-Encoding': 'gzip', 'Content-Type': 'application/json', > 'X-Goog-API-Client': 'gl-python/3.8.10 grpc/1.39.0 gax/2.0.0 > gapic/2.26.0 gccl/2.26.0', 'User-Agent': 'gl-python/3.8.10 grpc/1.39.0 > gax/2.0.0 gapic/2.26.0 gccl/2.26.0'})), last exception: ('Connection > aborted.', RemoteDisconnected('Remote end closed connection without > response')) I'm trying to eliminate errors. Is my BQ update syntactically correct? What's the BQ update timeout?
[ { "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom google.api_core import exceptions\nfrom google.api_core import retry\nfrom google.auth import exceptions as auth_exceptions\nimport requests.exceptions\n\n\n_RETRYABLE_REASONS = frozenset(\n [\"rateLimitExceeded\", \"backendError\", \"internalError\", \"badGateway\"]\n)\n\n_UNSTRUCTURED_RETRYABLE_TYPES = (\n ConnectionError,\n exceptions.TooManyRequests,\n exceptions.InternalServerError,\n exceptions.BadGateway,\n requests.exceptions.ChunkedEncodingError,\n requests.exceptions.ConnectionError,\n requests.exceptions.Timeout,\n auth_exceptions.TransportError,\n)\n\n_DEFAULT_JOB_DEADLINE = 60.0 * 10.0 # seconds\n\n\ndef _should_retry(exc):\n \"\"\"Predicate for determining when to retry.\n\n We retry if and only if the 'reason' is 'backendError'\n or 'rateLimitExceeded'.\n \"\"\"\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n # Check for unstructured error returns, e.g. from GFE\n return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)\n\n reason = exc.errors[0][\"reason\"]\n return reason in _RETRYABLE_REASONS\n\n\nDEFAULT_RETRY = retry.Retry(predicate=_should_retry, deadline=600.0)\n\"\"\"The default retry object.\n\nAny method with a ``retry`` parameter will be retried automatically,\nwith reasonable defaults. To disable retry, pass ``retry=None``.\nTo modify the default retry behavior, call a ``with_XXX`` method\non ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,\npass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.\n\"\"\"\n\nDEFAULT_TIMEOUT = 5.0 * 60.0\n\"\"\"The default API timeout.\n\nThis is the time to wait per request. To adjust the total wait time, set a\ndeadline on the retry object.\n\"\"\"\n\njob_retry_reasons = \"rateLimitExceeded\", \"backendError\"\n\n\ndef _job_should_retry(exc):\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n return False\n\n reason = exc.errors[0][\"reason\"]\n return reason in job_retry_reasons\n\n\nDEFAULT_JOB_RETRY = retry.Retry(\n predicate=_job_should_retry, deadline=_DEFAULT_JOB_DEADLINE\n)\n\"\"\"\nThe default job retry object.\n\"\"\"\n", "path": "google/cloud/bigquery/retry.py" } ]
[ { "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom google.api_core import exceptions\nfrom google.api_core import retry\nfrom google.auth import exceptions as auth_exceptions\nimport requests.exceptions\n\n\n_RETRYABLE_REASONS = frozenset(\n [\"rateLimitExceeded\", \"backendError\", \"internalError\", \"badGateway\"]\n)\n\n_UNSTRUCTURED_RETRYABLE_TYPES = (\n ConnectionError,\n exceptions.TooManyRequests,\n exceptions.InternalServerError,\n exceptions.BadGateway,\n requests.exceptions.ChunkedEncodingError,\n requests.exceptions.ConnectionError,\n requests.exceptions.Timeout,\n auth_exceptions.TransportError,\n)\n\n_DEFAULT_JOB_DEADLINE = 60.0 * 10.0 # seconds\n\n\ndef _should_retry(exc):\n \"\"\"Predicate for determining when to retry.\n\n We retry if and only if the 'reason' is 'backendError'\n or 'rateLimitExceeded'.\n \"\"\"\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n # Check for unstructured error returns, e.g. from GFE\n return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)\n\n reason = exc.errors[0][\"reason\"]\n return reason in _RETRYABLE_REASONS\n\n\nDEFAULT_RETRY = retry.Retry(predicate=_should_retry, deadline=600.0)\n\"\"\"The default retry object.\n\nAny method with a ``retry`` parameter will be retried automatically,\nwith reasonable defaults. To disable retry, pass ``retry=None``.\nTo modify the default retry behavior, call a ``with_XXX`` method\non ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,\npass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.\n\"\"\"\n\nDEFAULT_TIMEOUT = None\n\"\"\"The default API timeout.\n\nThis is the time to wait per request. To adjust the total wait time, set a\ndeadline on the retry object.\n\"\"\"\n\njob_retry_reasons = \"rateLimitExceeded\", \"backendError\"\n\n\ndef _job_should_retry(exc):\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n return False\n\n reason = exc.errors[0][\"reason\"]\n return reason in job_retry_reasons\n\n\nDEFAULT_JOB_RETRY = retry.Retry(\n predicate=_job_should_retry, deadline=_DEFAULT_JOB_DEADLINE\n)\n\"\"\"\nThe default job retry object.\n\"\"\"\n", "path": "google/cloud/bigquery/retry.py" } ]
diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py index 830582322..8a86973cd 100644 --- a/google/cloud/bigquery/retry.py +++ b/google/cloud/bigquery/retry.py @@ -60,7 +60,7 @@ def _should_retry(exc): pass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``. """ -DEFAULT_TIMEOUT = 5.0 * 60.0 +DEFAULT_TIMEOUT = None """The default API timeout. This is the time to wait per request. To adjust the total wait time, set a
docker__docker-py-1709
.dockerignore does not work with patterns beginning with a slash
docker version:
```
docker -v
Docker version 17.03.1-ce, build c6d412e
```
reproduce:
```
mkdir app
cd app
mkdir foo
touch foo/bar
echo '/foo/bar' > .dockerignore
printf 'FROM alpine:3.1\nWORKDIR /app\nCOPY . .\n' > Dockerfile
docker build -t app .
docker run --rm app find foo
```
output:
```
foo
foo/bar
```
It seems the statement from [the official document](https://docs.docker.com/engine/reference/builder/#dockerignore-file) below is not correct:

> For example, the patterns `/foo/bar` and `foo/bar` both exclude a file or directory named `bar` in the `foo` subdirectory of `PATH` or in the root of the git repository located at `URL`.

We should either amend the document or fix the bug.
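The patch below resolves this by normalizing patterns before matching: a leading `/` in a `.dockerignore` entry is relative to the build context root, so it can simply be stripped. A small standalone illustration of that normalization follows; the paths and patterns are made up and this is not docker-py code (the stdlib `fnmatch` stands in for the library's own matcher).

```python
from fnmatch import fnmatch

patterns = ["/foo/bar", "foo/bar"]

# Strip the leading slash, as the patch does, so both spellings behave the same.
normalized = [p.lstrip("/") for p in patterns]

for pattern in normalized:
    print(pattern, fnmatch("foo/bar", pattern))  # both print: foo/bar True
```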
[ { "content": "import os\n\nfrom ..constants import IS_WINDOWS_PLATFORM\nfrom .fnmatch import fnmatch\nfrom .utils import create_archive\n\n\ndef tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):\n root = os.path.abspath(path)\n exclude = exclude or []\n\n return create_archive(\n files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile)),\n root=root, fileobj=fileobj, gzip=gzip\n )\n\n\ndef exclude_paths(root, patterns, dockerfile=None):\n \"\"\"\n Given a root directory path and a list of .dockerignore patterns, return\n an iterator of all paths (both regular files and directories) in the root\n directory that do *not* match any of the patterns.\n\n All paths returned are relative to the root.\n \"\"\"\n if dockerfile is None:\n dockerfile = 'Dockerfile'\n\n exceptions = [p for p in patterns if p.startswith('!')]\n\n include_patterns = [p[1:] for p in exceptions]\n include_patterns += [dockerfile, '.dockerignore']\n\n exclude_patterns = list(set(patterns) - set(exceptions))\n\n paths = get_paths(root, exclude_patterns, include_patterns,\n has_exceptions=len(exceptions) > 0)\n\n return set(paths).union(\n # If the Dockerfile is in a subdirectory that is excluded, get_paths\n # will not descend into it and the file will be skipped. This ensures\n # it doesn't happen.\n set([dockerfile.replace('/', os.path.sep)])\n if os.path.exists(os.path.join(root, dockerfile)) else set()\n )\n\n\ndef should_include(path, exclude_patterns, include_patterns):\n \"\"\"\n Given a path, a list of exclude patterns, and a list of inclusion patterns:\n\n 1. Returns True if the path doesn't match any exclusion pattern\n 2. Returns False if the path matches an exclusion pattern and doesn't match\n an inclusion pattern\n 3. Returns true if the path matches an exclusion pattern and matches an\n inclusion pattern\n \"\"\"\n for pattern in exclude_patterns:\n if match_path(path, pattern):\n for pattern in include_patterns:\n if match_path(path, pattern):\n return True\n return False\n return True\n\n\ndef should_check_directory(directory_path, exclude_patterns, include_patterns):\n \"\"\"\n Given a directory path, a list of exclude patterns, and a list of inclusion\n patterns:\n\n 1. Returns True if the directory path should be included according to\n should_include.\n 2. Returns True if the directory path is the prefix for an inclusion\n pattern\n 3. Returns False otherwise\n \"\"\"\n\n # To account for exception rules, check directories if their path is a\n # a prefix to an inclusion pattern. 
This logic conforms with the current\n # docker logic (2016-10-27):\n # https://github.com/docker/docker/blob/bc52939b0455116ab8e0da67869ec81c1a1c3e2c/pkg/archive/archive.go#L640-L671\n\n def normalize_path(path):\n return path.replace(os.path.sep, '/')\n\n path_with_slash = normalize_path(directory_path) + '/'\n possible_child_patterns = [\n pattern for pattern in map(normalize_path, include_patterns)\n if (pattern + '/').startswith(path_with_slash)\n ]\n directory_included = should_include(\n directory_path, exclude_patterns, include_patterns\n )\n return directory_included or len(possible_child_patterns) > 0\n\n\ndef get_paths(root, exclude_patterns, include_patterns, has_exceptions=False):\n paths = []\n\n for parent, dirs, files in os.walk(root, topdown=True, followlinks=False):\n parent = os.path.relpath(parent, root)\n if parent == '.':\n parent = ''\n\n # Remove excluded patterns from the list of directories to traverse\n # by mutating the dirs we're iterating over.\n # This looks strange, but is considered the correct way to skip\n # traversal. See https://docs.python.org/2/library/os.html#os.walk\n dirs[:] = [\n d for d in dirs if should_check_directory(\n os.path.join(parent, d), exclude_patterns, include_patterns\n )\n ]\n\n for path in dirs:\n if should_include(os.path.join(parent, path),\n exclude_patterns, include_patterns):\n paths.append(os.path.join(parent, path))\n\n for path in files:\n if should_include(os.path.join(parent, path),\n exclude_patterns, include_patterns):\n paths.append(os.path.join(parent, path))\n\n return paths\n\n\ndef match_path(path, pattern):\n pattern = pattern.rstrip('/' + os.path.sep)\n if pattern:\n pattern = os.path.relpath(pattern)\n\n pattern_components = pattern.split(os.path.sep)\n if len(pattern_components) == 1 and IS_WINDOWS_PLATFORM:\n pattern_components = pattern.split('/')\n\n if '**' not in pattern:\n path_components = path.split(os.path.sep)[:len(pattern_components)]\n else:\n path_components = path.split(os.path.sep)\n return fnmatch('/'.join(path_components), '/'.join(pattern_components))\n", "path": "docker/utils/build.py" } ]
[ { "content": "import os\n\nfrom ..constants import IS_WINDOWS_PLATFORM\nfrom .fnmatch import fnmatch\nfrom .utils import create_archive\n\n\ndef tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):\n root = os.path.abspath(path)\n exclude = exclude or []\n\n return create_archive(\n files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile)),\n root=root, fileobj=fileobj, gzip=gzip\n )\n\n\ndef exclude_paths(root, patterns, dockerfile=None):\n \"\"\"\n Given a root directory path and a list of .dockerignore patterns, return\n an iterator of all paths (both regular files and directories) in the root\n directory that do *not* match any of the patterns.\n\n All paths returned are relative to the root.\n \"\"\"\n if dockerfile is None:\n dockerfile = 'Dockerfile'\n\n patterns = [p.lstrip('/') for p in patterns]\n exceptions = [p for p in patterns if p.startswith('!')]\n\n include_patterns = [p[1:] for p in exceptions]\n include_patterns += [dockerfile, '.dockerignore']\n\n exclude_patterns = list(set(patterns) - set(exceptions))\n\n paths = get_paths(root, exclude_patterns, include_patterns,\n has_exceptions=len(exceptions) > 0)\n\n return set(paths).union(\n # If the Dockerfile is in a subdirectory that is excluded, get_paths\n # will not descend into it and the file will be skipped. This ensures\n # it doesn't happen.\n set([dockerfile.replace('/', os.path.sep)])\n if os.path.exists(os.path.join(root, dockerfile)) else set()\n )\n\n\ndef should_include(path, exclude_patterns, include_patterns):\n \"\"\"\n Given a path, a list of exclude patterns, and a list of inclusion patterns:\n\n 1. Returns True if the path doesn't match any exclusion pattern\n 2. Returns False if the path matches an exclusion pattern and doesn't match\n an inclusion pattern\n 3. Returns true if the path matches an exclusion pattern and matches an\n inclusion pattern\n \"\"\"\n for pattern in exclude_patterns:\n if match_path(path, pattern):\n for pattern in include_patterns:\n if match_path(path, pattern):\n return True\n return False\n return True\n\n\ndef should_check_directory(directory_path, exclude_patterns, include_patterns):\n \"\"\"\n Given a directory path, a list of exclude patterns, and a list of inclusion\n patterns:\n\n 1. Returns True if the directory path should be included according to\n should_include.\n 2. Returns True if the directory path is the prefix for an inclusion\n pattern\n 3. Returns False otherwise\n \"\"\"\n\n # To account for exception rules, check directories if their path is a\n # a prefix to an inclusion pattern. 
This logic conforms with the current\n # docker logic (2016-10-27):\n # https://github.com/docker/docker/blob/bc52939b0455116ab8e0da67869ec81c1a1c3e2c/pkg/archive/archive.go#L640-L671\n\n def normalize_path(path):\n return path.replace(os.path.sep, '/')\n\n path_with_slash = normalize_path(directory_path) + '/'\n possible_child_patterns = [\n pattern for pattern in map(normalize_path, include_patterns)\n if (pattern + '/').startswith(path_with_slash)\n ]\n directory_included = should_include(\n directory_path, exclude_patterns, include_patterns\n )\n return directory_included or len(possible_child_patterns) > 0\n\n\ndef get_paths(root, exclude_patterns, include_patterns, has_exceptions=False):\n paths = []\n\n for parent, dirs, files in os.walk(root, topdown=True, followlinks=False):\n parent = os.path.relpath(parent, root)\n if parent == '.':\n parent = ''\n\n # Remove excluded patterns from the list of directories to traverse\n # by mutating the dirs we're iterating over.\n # This looks strange, but is considered the correct way to skip\n # traversal. See https://docs.python.org/2/library/os.html#os.walk\n dirs[:] = [\n d for d in dirs if should_check_directory(\n os.path.join(parent, d), exclude_patterns, include_patterns\n )\n ]\n\n for path in dirs:\n if should_include(os.path.join(parent, path),\n exclude_patterns, include_patterns):\n paths.append(os.path.join(parent, path))\n\n for path in files:\n if should_include(os.path.join(parent, path),\n exclude_patterns, include_patterns):\n paths.append(os.path.join(parent, path))\n\n return paths\n\n\ndef match_path(path, pattern):\n pattern = pattern.rstrip('/' + os.path.sep)\n if pattern:\n pattern = os.path.relpath(pattern)\n\n pattern_components = pattern.split(os.path.sep)\n if len(pattern_components) == 1 and IS_WINDOWS_PLATFORM:\n pattern_components = pattern.split('/')\n\n if '**' not in pattern:\n path_components = path.split(os.path.sep)[:len(pattern_components)]\n else:\n path_components = path.split(os.path.sep)\n return fnmatch('/'.join(path_components), '/'.join(pattern_components))\n", "path": "docker/utils/build.py" } ]
diff --git a/docker/utils/build.py b/docker/utils/build.py index 79b72495d..d4223e749 100644 --- a/docker/utils/build.py +++ b/docker/utils/build.py @@ -26,6 +26,7 @@ def exclude_paths(root, patterns, dockerfile=None): if dockerfile is None: dockerfile = 'Dockerfile' + patterns = [p.lstrip('/') for p in patterns] exceptions = [p for p in patterns if p.startswith('!')] include_patterns = [p[1:] for p in exceptions] diff --git a/tests/unit/utils_test.py b/tests/unit/utils_test.py index 7045d23c2..4a391facb 100644 --- a/tests/unit/utils_test.py +++ b/tests/unit/utils_test.py @@ -768,6 +768,11 @@ def test_single_subdir_single_filename(self): self.all_paths - set(['foo/a.py']) ) + def test_single_subdir_single_filename_leading_slash(self): + assert self.exclude(['/foo/a.py']) == convert_paths( + self.all_paths - set(['foo/a.py']) + ) + def test_single_subdir_with_path_traversal(self): assert self.exclude(['foo/whoops/../a.py']) == convert_paths( self.all_paths - set(['foo/a.py'])
optuna__optuna-5056
Use `__future__.annotations` everywhere in the Optuna code base ### Motivation Optuna drops Python 3.6 from v3.1, so we can use `__future__.annotations`, which simplifies the code base. See [PEP 563](https://peps.python.org/pep-0563/), [PEP584](https://peps.python.org/pep-0584/), [PEP 585](https://peps.python.org/pep-0585/), and [PEP 604](https://peps.python.org/pep-0604/) for more details. This issue suggests to use the module and simplifies the code base. ### Suggestion Use `__future__.annotations` for each file and simplify the type annotations. The list of classes whose type annotations can be simplified is [here](https://peps.python.org/pep-0585/#implementation). The list of files where the `__future__.annotations` can be used is as follows. In order to reduce review costs and to encourage more contributors to work on it, please, as a rule, fix one file per PR. - [x] optuna/_convert_positional_args.py - [x] optuna/visualization/_optimization_history.py - [x] optuna/visualization/_hypervolume_history.py - [x] optuna/visualization/_edf.py - [x] optuna/visualization/_pareto_front.py - [x] optuna/visualization/matplotlib/_optimization_history.py - [x] optuna/visualization/matplotlib/_hypervolume_history.py - [x] optuna/visualization/matplotlib/_edf.py - [x] optuna/visualization/matplotlib/_pareto_front.py - [x] optuna/visualization/matplotlib/_contour.py - [x] optuna/visualization/_utils.py - [x] optuna/logging.py - [ ] optuna/storages/_base.py - [ ] optuna/storages/_cached_storage.py - [ ] optuna/storages/__init__.py - [ ] optuna/storages/_heartbeat.py - [ ] optuna/storages/_in_memory.py - [ ] optuna/storages/_rdb/models.py - [ ] optuna/storages/_rdb/storage.py - [ ] optuna/storages/_rdb/alembic/versions/v3.0.0.c.py - [ ] optuna/storages/_rdb/alembic/versions/v3.0.0.d.py - [ ] optuna/storages/_rdb/alembic/versions/v3.0.0.a.py - [ ] optuna/storages/_journal/file.py - [ ] optuna/storages/_journal/redis.py - [ ] optuna/storages/_journal/storage.py - [ ] optuna/storages/_journal/base.py - [ ] optuna/study/_dataframe.py - [ ] optuna/study/_optimize.py - [ ] optuna/study/_tell.py - [ ] optuna/study/_multi_objective.py - [ ] optuna/study/_frozen.py - [ ] optuna/study/study.py - [ ] optuna/study/_study_summary.py - [ ] optuna/search_space/group_decomposed.py - [ ] optuna/search_space/intersection.py - [ ] optuna/_typing.py - [ ] optuna/_deprecated.py - [ ] optuna/pruners/_hyperband.py - [ ] optuna/pruners/_patient.py - [ ] optuna/pruners/_successive_halving.py - [ ] optuna/pruners/_percentile.py - [ ] optuna/pruners/_threshold.py - [ ] optuna/trial/_base.py - [ ] optuna/trial/_fixed.py - [ ] optuna/trial/_trial.py - [ ] optuna/trial/_frozen.py - [ ] optuna/integration/cma.py - [ ] optuna/integration/shap.py - [ ] optuna/integration/lightgbm.py - [ ] optuna/integration/pytorch_distributed.py - [ ] optuna/integration/_lightgbm_tuner/optimize.py - [ ] optuna/integration/_lightgbm_tuner/alias.py - [ ] optuna/integration/mlflow.py - [ ] optuna/integration/wandb.py - [ ] optuna/integration/catboost.py - [ ] optuna/integration/skopt.py - [ ] optuna/integration/botorch.py - [ ] optuna/integration/dask.py - [x] optuna/integration/sklearn.py - [ ] optuna/integration/tensorboard.py - [ ] optuna/terminator/callback.py - [ ] optuna/terminator/terminator.py - [ ] optuna/terminator/improvement/_preprocessing.py - [ ] optuna/terminator/improvement/gp/botorch.py - [ ] optuna/terminator/improvement/gp/base.py - [ ] optuna/terminator/improvement/evaluator.py - [ ] optuna/importance/_base.py - [ ] 
optuna/importance/_mean_decrease_impurity.py - [ ] optuna/importance/__init__.py - [ ] optuna/importance/_fanova/_fanova.py - [ ] optuna/importance/_fanova/_evaluator.py - [ ] optuna/importance/_fanova/_tree.py - [ ] optuna/_imports.py - [ ] optuna/testing/tempfile_pool.py - [ ] optuna/testing/threading.py - [ ] optuna/testing/distributions.py - [ ] optuna/testing/samplers.py - [ ] optuna/testing/storages.py - [ ] optuna/distributions.py - [ ] optuna/cli.py - [ ] optuna/multi_objective/visualization/_pareto_front.py - [ ] optuna/multi_objective/trial.py - [ ] optuna/multi_objective/samplers/_base.py - [ ] optuna/multi_objective/samplers/_nsga2.py - [ ] optuna/multi_objective/samplers/_adapter.py - [ ] optuna/multi_objective/samplers/_random.py - [ ] optuna/multi_objective/samplers/_motpe.py - [ ] optuna/multi_objective/study.py - [ ] optuna/_experimental.py - [ ] optuna/samplers/_base.py - [ ] optuna/samplers/nsgaii/_crossovers/_undx.py - [ ] optuna/samplers/nsgaii/_crossovers/_spx.py - [ ] optuna/samplers/nsgaii/_crossovers/_sbx.py - [ ] optuna/samplers/nsgaii/_crossovers/_vsbx.py - [ ] optuna/samplers/nsgaii/_sampler.py - [ ] optuna/samplers/nsgaii/_crossover.py - [ ] optuna/samplers/_search_space/intersection.py - [ ] optuna/samplers/_qmc.py - [ ] optuna/samplers/_tpe/probability_distributions.py - [ ] optuna/samplers/_tpe/_truncnorm.py - [ ] optuna/samplers/_tpe/multi_objective_sampler.py - [ ] optuna/samplers/_tpe/parzen_estimator.py - [ ] optuna/samplers/_tpe/sampler.py - [ ] optuna/samplers/_random.py - [ ] optuna/samplers/_cmaes.py - [ ] optuna/samplers/_partial_fixed.py - [ ] optuna/samplers/_brute_force.py - [ ] optuna/samplers/_nsgaiii.py - [ ] optuna/samplers/_grid.py - [ ] optuna/_hypervolume/wfg.py - [ ] optuna/_hypervolume/hssp.py - [ ] optuna/progress_bar.py - [ ] optuna/_transform.py - [ ] optuna/_callbacks.py - [ ] tests/multi_objective_tests/test_study.py - [ ] tests/multi_objective_tests/samplers_tests/test_motpe.py - [ ] tests/multi_objective_tests/samplers_tests/test_nsga2.py - [ ] tests/multi_objective_tests/test_trial.py - [ ] tests/multi_objective_tests/visualization_tests/test_pareto_front.py - [ ] tests/trial_tests/test_frozen.py - [ ] tests/trial_tests/test_trials.py - [ ] tests/trial_tests/test_trial.py - [ ] tests/pruners_tests/test_percentile.py - [ ] tests/pruners_tests/test_median.py - [ ] tests/pruners_tests/test_patient.py - [ ] tests/pruners_tests/test_successive_halving.py - [ ] tests/study_tests/test_optimize.py - [ ] tests/study_tests/test_study.py - [ ] tests/hypervolume_tests/test_hssp.py - [x] tests/integration_tests/test_skopt.py - [x] tests/integration_tests/test_pytorch_lightning.py - [ ] tests/integration_tests/test_shap.py - [ ] tests/integration_tests/test_cma.py - [ ] tests/integration_tests/test_pytorch_distributed.py - [ ] tests/integration_tests/lightgbm_tuner_tests/test_optimize.py - [ ] tests/integration_tests/lightgbm_tuner_tests/test_alias.py - [ ] tests/integration_tests/test_botorch.py - [ ] tests/integration_tests/test_mlflow.py - [ ] tests/integration_tests/test_mxnet.py - [ ] tests/integration_tests/test_wandb.py - [ ] tests/importance_tests/fanova_tests/test_tree.py - [ ] tests/importance_tests/test_mean_decrease_impurity.py - [ ] tests/importance_tests/test_fanova.py - [ ] tests/importance_tests/test_init.py - [ ] tests/test_convert_positional_args.py - [ ] tests/test_deprecated.py - [ ] tests/storages_tests/test_journal.py - [ ] tests/storages_tests/test_heartbeat.py - [ ] tests/storages_tests/test_storages.py - [ ] 
tests/storages_tests/rdb_tests/test_storage.py - [ ] tests/storages_tests/rdb_tests/create_db.py - [ ] tests/storages_tests/test_with_server.py - [ ] tests/samplers_tests/test_grid.py - [ ] tests/samplers_tests/tpe_tests/test_parzen_estimator.py - [ ] tests/samplers_tests/tpe_tests/test_multi_objective_sampler.py - [ ] tests/samplers_tests/tpe_tests/test_sampler.py - [ ] tests/samplers_tests/test_cmaes.py - [ ] tests/samplers_tests/test_samplers.py - [x] tests/samplers_tests/test_nsgaii.py - [x] tests/samplers_tests/test_nsgaiii.py - [ ] tests/samplers_tests/test_qmc.py - [ ] tests/test_distributions.py - [ ] tests/test_multi_objective.py - [ ] tests/test_cli.py - [ ] tests/visualization_tests/test_hypervolume_history.py - [ ] tests/visualization_tests/test_pareto_front.py - [ ] tests/terminator_tests/improvement_tests/test_evaluator.py - [ ] benchmarks/kurobako/problems/wfg/transformation_functions.py - [ ] benchmarks/bayesmark/report_bayesmark.py - [ ] benchmarks/bayesmark/optuna_optimizer.py ### Additional context (optional) The above list is generated by the following script. <details> <summary>script</summary> ```python import os import pathlib PATTERS = [ "from typing import Union", "from typing import Optional", "from typing import Tuple", "from typing import List", "from typing import Dict", "from typing import Set", "from typing import FrozenSet", "from typing import Type", "from typing import FrozenSet", "from typing import Sequence", ] def get_filenames_to_be_simplified(dir_path): ret = [] for f in os.listdir(dir_path): file_path = os.path.join(dir_path, f) if not os.path.isfile(file_path): ret.extend(get_filenames_to_be_simplified(file_path)) else: try: with open(file_path) as fd: contents = fd.read() if any([s in contents for s in PATTERS]): ret.append(str(file_path)) except UnicodeDecodeError as e: pass return ret def main(): dirs = ["optuna", "tests", "benchmarks"] for dir_name in dirs: filenames = get_filenames_to_be_simplified(pathlib.Path(dir_name)) for filename in filenames: print(f"- [ ] {filename}") if __name__ == "__main__": main() ``` </details>
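For contributors picking up one of the files above, the change is mechanical. A minimal sketch of what a converted module looks like; the function is invented purely for the example.

```python
from __future__ import annotations  # must be the first statement of the module

# With postponed evaluation of annotations (PEP 563), builtin generics (PEP 585)
# and the X | Y union syntax (PEP 604) can be used in annotations on Python 3.7+,
# replacing e.g. typing.List[float] and typing.Optional[typing.Dict[str, int]].
def summarize(values: list[float], labels: dict[str, int] | None = None) -> float | None:
    # Invented helper, used only to show the annotation style.
    return sum(values) / len(values) if values else None
```

Imports such as `from typing import Optional` that become unused after the rewrite should be removed in the same PR.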
[ { "content": "from __future__ import annotations\n\nfrom typing import Callable\nfrom typing import Sequence\n\nfrom optuna._experimental import experimental_func\nfrom optuna.logging import get_logger\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\nfrom optuna.visualization._edf import _get_edf_info\nfrom optuna.visualization.matplotlib._matplotlib_imports import _imports\n\n\nif _imports.is_successful():\n from optuna.visualization.matplotlib._matplotlib_imports import Axes\n from optuna.visualization.matplotlib._matplotlib_imports import plt\n\n_logger = get_logger(__name__)\n\n\n@experimental_func(\"2.2.0\")\ndef plot_edf(\n study: Study | Sequence[Study],\n *,\n target: Callable[[FrozenTrial], float] | None = None,\n target_name: str = \"Objective Value\",\n) -> \"Axes\":\n \"\"\"Plot the objective value EDF (empirical distribution function) of a study with Matplotlib.\n\n Note that only the complete trials are considered when plotting the EDF.\n\n .. seealso::\n Please refer to :func:`optuna.visualization.plot_edf` for an example,\n where this function can be replaced with it.\n\n .. note::\n\n Please refer to `matplotlib.pyplot.legend\n <https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.legend.html>`_\n to adjust the style of the generated legend.\n\n Example:\n\n The following code snippet shows how to plot EDF.\n\n .. plot::\n\n import math\n\n import optuna\n\n\n def ackley(x, y):\n a = 20 * math.exp(-0.2 * math.sqrt(0.5 * (x ** 2 + y ** 2)))\n b = math.exp(0.5 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y)))\n return -a - b + math.e + 20\n\n\n def objective(trial, low, high):\n x = trial.suggest_float(\"x\", low, high)\n y = trial.suggest_float(\"y\", low, high)\n return ackley(x, y)\n\n\n sampler = optuna.samplers.RandomSampler(seed=10)\n\n # Widest search space.\n study0 = optuna.create_study(study_name=\"x=[0,5), y=[0,5)\", sampler=sampler)\n study0.optimize(lambda t: objective(t, 0, 5), n_trials=500)\n\n # Narrower search space.\n study1 = optuna.create_study(study_name=\"x=[0,4), y=[0,4)\", sampler=sampler)\n study1.optimize(lambda t: objective(t, 0, 4), n_trials=500)\n\n # Narrowest search space but it doesn't include the global optimum point.\n study2 = optuna.create_study(study_name=\"x=[1,3), y=[1,3)\", sampler=sampler)\n study2.optimize(lambda t: objective(t, 1, 3), n_trials=500)\n\n optuna.visualization.matplotlib.plot_edf([study0, study1, study2])\n\n Args:\n study:\n A target :class:`~optuna.study.Study` object.\n You can pass multiple studies if you want to compare those EDFs.\n target:\n A function to specify the value to display. If it is :obj:`None` and ``study`` is being\n used for single-objective optimization, the objective values are plotted.\n\n .. 
note::\n Specify this argument if ``study`` is being used for multi-objective optimization.\n target_name:\n Target's name to display on the axis label.\n\n Returns:\n A :class:`matplotlib.axes.Axes` object.\n \"\"\"\n\n _imports.check()\n\n # Set up the graph style.\n plt.style.use(\"ggplot\") # Use ggplot style sheet for similar outputs to plotly.\n _, ax = plt.subplots()\n ax.set_title(\"Empirical Distribution Function Plot\")\n ax.set_xlabel(target_name)\n ax.set_ylabel(\"Cumulative Probability\")\n ax.set_ylim(0, 1)\n cmap = plt.get_cmap(\"tab20\") # Use tab20 colormap for multiple line plots.\n\n info = _get_edf_info(study, target, target_name)\n edf_lines = info.lines\n\n if len(edf_lines) == 0:\n return ax\n\n for i, (study_name, y_values) in enumerate(edf_lines):\n ax.plot(info.x_values, y_values, color=cmap(i), alpha=0.7, label=study_name)\n\n if len(edf_lines) >= 2:\n ax.legend()\n\n return ax\n", "path": "optuna/visualization/matplotlib/_edf.py" } ]
[ { "content": "from __future__ import annotations\n\nfrom collections.abc import Callable\nfrom collections.abc import Sequence\n\nfrom optuna._experimental import experimental_func\nfrom optuna.logging import get_logger\nfrom optuna.study import Study\nfrom optuna.trial import FrozenTrial\nfrom optuna.visualization._edf import _get_edf_info\nfrom optuna.visualization.matplotlib._matplotlib_imports import _imports\n\n\nif _imports.is_successful():\n from optuna.visualization.matplotlib._matplotlib_imports import Axes\n from optuna.visualization.matplotlib._matplotlib_imports import plt\n\n_logger = get_logger(__name__)\n\n\n@experimental_func(\"2.2.0\")\ndef plot_edf(\n study: Study | Sequence[Study],\n *,\n target: Callable[[FrozenTrial], float] | None = None,\n target_name: str = \"Objective Value\",\n) -> \"Axes\":\n \"\"\"Plot the objective value EDF (empirical distribution function) of a study with Matplotlib.\n\n Note that only the complete trials are considered when plotting the EDF.\n\n .. seealso::\n Please refer to :func:`optuna.visualization.plot_edf` for an example,\n where this function can be replaced with it.\n\n .. note::\n\n Please refer to `matplotlib.pyplot.legend\n <https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.legend.html>`_\n to adjust the style of the generated legend.\n\n Example:\n\n The following code snippet shows how to plot EDF.\n\n .. plot::\n\n import math\n\n import optuna\n\n\n def ackley(x, y):\n a = 20 * math.exp(-0.2 * math.sqrt(0.5 * (x ** 2 + y ** 2)))\n b = math.exp(0.5 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y)))\n return -a - b + math.e + 20\n\n\n def objective(trial, low, high):\n x = trial.suggest_float(\"x\", low, high)\n y = trial.suggest_float(\"y\", low, high)\n return ackley(x, y)\n\n\n sampler = optuna.samplers.RandomSampler(seed=10)\n\n # Widest search space.\n study0 = optuna.create_study(study_name=\"x=[0,5), y=[0,5)\", sampler=sampler)\n study0.optimize(lambda t: objective(t, 0, 5), n_trials=500)\n\n # Narrower search space.\n study1 = optuna.create_study(study_name=\"x=[0,4), y=[0,4)\", sampler=sampler)\n study1.optimize(lambda t: objective(t, 0, 4), n_trials=500)\n\n # Narrowest search space but it doesn't include the global optimum point.\n study2 = optuna.create_study(study_name=\"x=[1,3), y=[1,3)\", sampler=sampler)\n study2.optimize(lambda t: objective(t, 1, 3), n_trials=500)\n\n optuna.visualization.matplotlib.plot_edf([study0, study1, study2])\n\n Args:\n study:\n A target :class:`~optuna.study.Study` object.\n You can pass multiple studies if you want to compare those EDFs.\n target:\n A function to specify the value to display. If it is :obj:`None` and ``study`` is being\n used for single-objective optimization, the objective values are plotted.\n\n .. 
note::\n Specify this argument if ``study`` is being used for multi-objective optimization.\n target_name:\n Target's name to display on the axis label.\n\n Returns:\n A :class:`matplotlib.axes.Axes` object.\n \"\"\"\n\n _imports.check()\n\n # Set up the graph style.\n plt.style.use(\"ggplot\") # Use ggplot style sheet for similar outputs to plotly.\n _, ax = plt.subplots()\n ax.set_title(\"Empirical Distribution Function Plot\")\n ax.set_xlabel(target_name)\n ax.set_ylabel(\"Cumulative Probability\")\n ax.set_ylim(0, 1)\n cmap = plt.get_cmap(\"tab20\") # Use tab20 colormap for multiple line plots.\n\n info = _get_edf_info(study, target, target_name)\n edf_lines = info.lines\n\n if len(edf_lines) == 0:\n return ax\n\n for i, (study_name, y_values) in enumerate(edf_lines):\n ax.plot(info.x_values, y_values, color=cmap(i), alpha=0.7, label=study_name)\n\n if len(edf_lines) >= 2:\n ax.legend()\n\n return ax\n", "path": "optuna/visualization/matplotlib/_edf.py" } ]
diff --git a/optuna/visualization/matplotlib/_edf.py b/optuna/visualization/matplotlib/_edf.py index ee138b9149..3625dbbe70 100644 --- a/optuna/visualization/matplotlib/_edf.py +++ b/optuna/visualization/matplotlib/_edf.py @@ -1,7 +1,7 @@ from __future__ import annotations -from typing import Callable -from typing import Sequence +from collections.abc import Callable +from collections.abc import Sequence from optuna._experimental import experimental_func from optuna.logging import get_logger
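The checklist above tracks a largely mechanical migration. As a minimal sketch of the pattern being applied across those files (the `average` function below is illustrative only, not taken from optuna): with `from __future__ import annotations`, imports such as `typing.Optional` and `typing.Sequence` can be replaced by `collections.abc` generics and the `X | None` union syntax.

```python
from __future__ import annotations

from collections.abc import Sequence


def average(values: Sequence[float] | None = None) -> float | None:
    # `Sequence[float] | None` replaces `Optional[Sequence[float]]`; with
    # postponed annotation evaluation this also runs on Python < 3.10.
    if not values:
        return None
    return sum(values) / len(values)


print(average([1.0, 2.0, 3.0]))  # 2.0
```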
ivy-llc__ivy-19363
T: add the `T` (transpose) property to the Jax `DeviceArray` frontend, so that `DeviceArray(x).T` mirrors `numpy.transpose(x)`; see the usage sketch after this record's diff.
[ { "content": "# global\n\n# local\nimport ivy\nimport ivy.functional.frontends.jax as jax_frontend\n\n\nclass DeviceArray:\n def __init__(self, array, weak_type=False):\n self._ivy_array = array if isinstance(array, ivy.Array) else ivy.array(array)\n self.weak_type = weak_type\n\n def __repr__(self):\n main = (\n str(self.ivy_array.__repr__())\n .replace(\"ivy.array\", \"ivy.frontends.jax.DeviceArray\")\n .replace(\")\", \"\")\n + \", dtype=\"\n + str(self.ivy_array.dtype)\n )\n if self.weak_type:\n return main + \", weak_type=True)\"\n return main + \")\"\n\n # Properties #\n # ---------- #\n\n @property\n def ivy_array(self):\n return self._ivy_array\n\n @property\n def dtype(self):\n return self.ivy_array.dtype\n\n @property\n def shape(self):\n return self.ivy_array.shape\n\n @property\n def at(self):\n return jax_frontend._src.numpy.lax_numpy._IndexUpdateHelper(self.ivy_array)\n\n # Instance Methods #\n # ---------------- #\n\n def all(self, *, axis=None, out=None, keepdims=False):\n return jax_frontend.numpy.all(\n self._ivy_array, axis=axis, keepdims=keepdims, out=out\n )\n\n def argmax(\n self,\n /,\n *,\n axis=None,\n out=None,\n keepdims=False,\n ):\n return jax_frontend.numpy.argmax(\n self,\n axis=axis,\n out=out,\n keepdims=keepdims,\n )\n\n def conj(self, /):\n return jax_frontend.numpy.conj(self._ivy_array)\n\n def conjugate(self, /):\n return jax_frontend.numpy.conjugate(self._ivy_array)\n\n def mean(self, *, axis=None, dtype=None, out=None, keepdims=False, where=None):\n return jax_frontend.numpy.mean(\n self._ivy_array,\n axis=axis,\n dtype=dtype,\n out=out,\n keepdims=keepdims,\n where=where,\n )\n\n def cumprod(self, axis=None, dtype=None, out=None):\n return jax_frontend.numpy.cumprod(\n self,\n axis=axis,\n dtype=dtype,\n out=out,\n )\n\n def cumsum(self, axis=None, dtype=None, out=None):\n return jax_frontend.numpy.cumsum(\n self,\n axis=axis,\n dtype=dtype,\n out=out,\n )\n\n def nonzero(self, *, size=None, fill_value=None):\n return jax_frontend.numpy.nonzero(\n self,\n size=size,\n fill_value=fill_value,\n )\n\n def ravel(self, order=\"C\"):\n return jax_frontend.numpy.ravel(\n self,\n order=order,\n )\n\n def sort(self, axis=-1, order=None):\n return jax_frontend.numpy.sort(\n self,\n axis=axis,\n order=order,\n )\n\n def __add__(self, other):\n return jax_frontend.numpy.add(self, other)\n\n def __radd__(self, other):\n return jax_frontend.numpy.add(other, self)\n\n def __sub__(self, other):\n return jax_frontend.lax.sub(self, other)\n\n def __rsub__(self, other):\n return jax_frontend.lax.sub(other, self)\n\n def __mul__(self, other):\n return jax_frontend.lax.mul(self, other)\n\n def __rmul__(self, other):\n return jax_frontend.lax.mul(other, self)\n\n def __div__(self, other):\n return jax_frontend.numpy.divide(self, other)\n\n def __rdiv__(self, other):\n return jax_frontend.numpy.divide(other, self)\n\n def __mod__(self, other):\n return jax_frontend.numpy.mod(self, other)\n\n def __rmod__(self, other):\n return jax_frontend.numpy.mod(other, self)\n\n def __truediv__(self, other):\n return jax_frontend.numpy.divide(self, other)\n\n def __rtruediv__(self, other):\n return jax_frontend.numpy.divide(other, self)\n\n def __matmul__(self, other):\n return jax_frontend.numpy.dot(self, other)\n\n def __rmatmul__(self, other):\n return jax_frontend.numpy.dot(other, self)\n\n def __pos__(self):\n return self\n\n def __neg__(self):\n return jax_frontend.lax.neg(self)\n\n def __eq__(self, other):\n return jax_frontend.lax.eq(self, other)\n\n def __ne__(self, 
other):\n return jax_frontend.lax.ne(self, other)\n\n def __lt__(self, other):\n return jax_frontend.lax.lt(self, other)\n\n def __le__(self, other):\n return jax_frontend.lax.le(self, other)\n\n def __gt__(self, other):\n return jax_frontend.lax.gt(self, other)\n\n def __ge__(self, other):\n return jax_frontend.lax.ge(self, other)\n\n def __abs__(self):\n return jax_frontend.numpy.abs(self)\n\n def __pow__(self, other):\n return jax_frontend.lax.pow(self, other)\n\n def __rpow__(self, other):\n other = ivy.asarray(other)\n return jax_frontend.lax.pow(other, self)\n\n def __and__(self, other):\n return jax_frontend.numpy.bitwise_and(self, other)\n\n def __rand__(self, other):\n return jax_frontend.numpy.bitwise_and(other, self)\n\n def __or__(self, other):\n return jax_frontend.numpy.bitwise_or(self, other)\n\n def __ror__(self, other):\n return jax_frontend.numpy.bitwise_or(other, self)\n\n def __xor__(self, other):\n return jax_frontend.lax.bitwise_xor(self, other)\n\n def __rxor__(self, other):\n return jax_frontend.lax.bitwise_xor(other, self)\n\n def __invert__(self):\n return jax_frontend.lax.bitwise_not(self)\n\n def __lshift__(self, other):\n return jax_frontend.lax.shift_left(self, other)\n\n def __rlshift__(self, other):\n return jax_frontend.lax.shift_left(other, self)\n\n def __rshift__(self, other):\n return jax_frontend.lax.shift_right_logical(self, other)\n\n def __rrshift__(self, other):\n return jax_frontend.lax.shift_right_logical(other, self)\n\n def __getitem__(self, idx):\n return self.at[idx].get()\n\n def __setitem__(self, idx, val):\n raise ivy.utils.exceptions.IvyException(\n \"ivy.functional.frontends.jax.DeviceArray object doesn't support assignment\"\n )\n\n def __iter__(self):\n ndim = len(self.shape)\n if ndim == 0:\n raise TypeError(\"iteration over a 0-d devicearray not supported\")\n for i in range(self.shape[0]):\n yield self[i]\n\n def round(self, decimals=0):\n return jax_frontend.numpy.round(self, decimals)\n", "path": "ivy/functional/frontends/jax/devicearray.py" } ]
[ { "content": "# global\n\n# local\nimport ivy\nimport ivy.functional.frontends.jax as jax_frontend\n\n\nclass DeviceArray:\n def __init__(self, array, weak_type=False):\n self._ivy_array = array if isinstance(array, ivy.Array) else ivy.array(array)\n self.weak_type = weak_type\n\n def __repr__(self):\n main = (\n str(self.ivy_array.__repr__())\n .replace(\"ivy.array\", \"ivy.frontends.jax.DeviceArray\")\n .replace(\")\", \"\")\n + \", dtype=\"\n + str(self.ivy_array.dtype)\n )\n if self.weak_type:\n return main + \", weak_type=True)\"\n return main + \")\"\n\n # Properties #\n # ---------- #\n\n @property\n def ivy_array(self):\n return self._ivy_array\n\n @property\n def dtype(self):\n return self.ivy_array.dtype\n\n @property\n def shape(self):\n return self.ivy_array.shape\n\n @property\n def at(self):\n return jax_frontend._src.numpy.lax_numpy._IndexUpdateHelper(self.ivy_array)\n\n @property\n def T(self):\n return self.ivy_array.T\n\n # Instance Methods #\n # ---------------- #\n\n def all(self, *, axis=None, out=None, keepdims=False):\n return jax_frontend.numpy.all(\n self._ivy_array, axis=axis, keepdims=keepdims, out=out\n )\n\n def argmax(\n self,\n /,\n *,\n axis=None,\n out=None,\n keepdims=False,\n ):\n return jax_frontend.numpy.argmax(\n self,\n axis=axis,\n out=out,\n keepdims=keepdims,\n )\n\n def conj(self, /):\n return jax_frontend.numpy.conj(self._ivy_array)\n\n def conjugate(self, /):\n return jax_frontend.numpy.conjugate(self._ivy_array)\n\n def mean(self, *, axis=None, dtype=None, out=None, keepdims=False, where=None):\n return jax_frontend.numpy.mean(\n self._ivy_array,\n axis=axis,\n dtype=dtype,\n out=out,\n keepdims=keepdims,\n where=where,\n )\n\n def cumprod(self, axis=None, dtype=None, out=None):\n return jax_frontend.numpy.cumprod(\n self,\n axis=axis,\n dtype=dtype,\n out=out,\n )\n\n def cumsum(self, axis=None, dtype=None, out=None):\n return jax_frontend.numpy.cumsum(\n self,\n axis=axis,\n dtype=dtype,\n out=out,\n )\n\n def nonzero(self, *, size=None, fill_value=None):\n return jax_frontend.numpy.nonzero(\n self,\n size=size,\n fill_value=fill_value,\n )\n\n def ravel(self, order=\"C\"):\n return jax_frontend.numpy.ravel(\n self,\n order=order,\n )\n\n def sort(self, axis=-1, order=None):\n return jax_frontend.numpy.sort(\n self,\n axis=axis,\n order=order,\n )\n\n def __add__(self, other):\n return jax_frontend.numpy.add(self, other)\n\n def __radd__(self, other):\n return jax_frontend.numpy.add(other, self)\n\n def __sub__(self, other):\n return jax_frontend.lax.sub(self, other)\n\n def __rsub__(self, other):\n return jax_frontend.lax.sub(other, self)\n\n def __mul__(self, other):\n return jax_frontend.lax.mul(self, other)\n\n def __rmul__(self, other):\n return jax_frontend.lax.mul(other, self)\n\n def __div__(self, other):\n return jax_frontend.numpy.divide(self, other)\n\n def __rdiv__(self, other):\n return jax_frontend.numpy.divide(other, self)\n\n def __mod__(self, other):\n return jax_frontend.numpy.mod(self, other)\n\n def __rmod__(self, other):\n return jax_frontend.numpy.mod(other, self)\n\n def __truediv__(self, other):\n return jax_frontend.numpy.divide(self, other)\n\n def __rtruediv__(self, other):\n return jax_frontend.numpy.divide(other, self)\n\n def __matmul__(self, other):\n return jax_frontend.numpy.dot(self, other)\n\n def __rmatmul__(self, other):\n return jax_frontend.numpy.dot(other, self)\n\n def __pos__(self):\n return self\n\n def __neg__(self):\n return jax_frontend.lax.neg(self)\n\n def __eq__(self, other):\n return 
jax_frontend.lax.eq(self, other)\n\n def __ne__(self, other):\n return jax_frontend.lax.ne(self, other)\n\n def __lt__(self, other):\n return jax_frontend.lax.lt(self, other)\n\n def __le__(self, other):\n return jax_frontend.lax.le(self, other)\n\n def __gt__(self, other):\n return jax_frontend.lax.gt(self, other)\n\n def __ge__(self, other):\n return jax_frontend.lax.ge(self, other)\n\n def __abs__(self):\n return jax_frontend.numpy.abs(self)\n\n def __pow__(self, other):\n return jax_frontend.lax.pow(self, other)\n\n def __rpow__(self, other):\n other = ivy.asarray(other)\n return jax_frontend.lax.pow(other, self)\n\n def __and__(self, other):\n return jax_frontend.numpy.bitwise_and(self, other)\n\n def __rand__(self, other):\n return jax_frontend.numpy.bitwise_and(other, self)\n\n def __or__(self, other):\n return jax_frontend.numpy.bitwise_or(self, other)\n\n def __ror__(self, other):\n return jax_frontend.numpy.bitwise_or(other, self)\n\n def __xor__(self, other):\n return jax_frontend.lax.bitwise_xor(self, other)\n\n def __rxor__(self, other):\n return jax_frontend.lax.bitwise_xor(other, self)\n\n def __invert__(self):\n return jax_frontend.lax.bitwise_not(self)\n\n def __lshift__(self, other):\n return jax_frontend.lax.shift_left(self, other)\n\n def __rlshift__(self, other):\n return jax_frontend.lax.shift_left(other, self)\n\n def __rshift__(self, other):\n return jax_frontend.lax.shift_right_logical(self, other)\n\n def __rrshift__(self, other):\n return jax_frontend.lax.shift_right_logical(other, self)\n\n def __getitem__(self, idx):\n return self.at[idx].get()\n\n def __setitem__(self, idx, val):\n raise ivy.utils.exceptions.IvyException(\n \"ivy.functional.frontends.jax.DeviceArray object doesn't support assignment\"\n )\n\n def __iter__(self):\n ndim = len(self.shape)\n if ndim == 0:\n raise TypeError(\"iteration over a 0-d devicearray not supported\")\n for i in range(self.shape[0]):\n yield self[i]\n\n def round(self, decimals=0):\n return jax_frontend.numpy.round(self, decimals)\n", "path": "ivy/functional/frontends/jax/devicearray.py" } ]
diff --git a/ivy/functional/frontends/jax/devicearray.py b/ivy/functional/frontends/jax/devicearray.py index b04cecb8a1095..dabd2464a1f7a 100644 --- a/ivy/functional/frontends/jax/devicearray.py +++ b/ivy/functional/frontends/jax/devicearray.py @@ -41,6 +41,10 @@ def shape(self): def at(self): return jax_frontend._src.numpy.lax_numpy._IndexUpdateHelper(self.ivy_array) + @property + def T(self): + return self.ivy_array.T + # Instance Methods # # ---------------- # diff --git a/ivy_tests/test_ivy/test_frontends/test_jax/test_devicearray.py b/ivy_tests/test_ivy/test_frontends/test_jax/test_devicearray.py index a2401a4c69a30..9089fbb60a565 100644 --- a/ivy_tests/test_ivy/test_frontends/test_jax/test_devicearray.py +++ b/ivy_tests/test_ivy/test_frontends/test_jax/test_devicearray.py @@ -59,6 +59,30 @@ def test_jax_devicearray_property_shape( assert x.shape == shape [email protected] +def _transpose_helper(draw): + dtype_x = draw( + helpers.dtype_and_values( + available_dtypes=helpers.get_dtypes("valid", prune_function=False), + min_num_dims=2, + max_num_dims=2, + min_dim_size=2, + ) + ) + + _, data = dtype_x + x = data[0] + xT = np.transpose(x) + return x, xT + + +@given(x_transpose=_transpose_helper()) +def test_jax_devicearray_property_T(x_transpose): + x, xT = x_transpose + x = DeviceArray(x) + assert np.array_equal(x.T, xT) + + @st.composite def _at_helper(draw): _, data, shape = draw(
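A hedged usage sketch of the property added in this diff. NumPy is used only to show the expected numerics; constructing an actual `DeviceArray` requires ivy and is omitted here. Per the new test, `DeviceArray(x).T` is expected to equal `numpy.transpose(x)`.

```python
import numpy as np

x = np.arange(6).reshape(2, 3)
# DeviceArray(x).T should match np.transpose(x): shape (3, 2), axes swapped.
print(np.transpose(x).shape)                 # (3, 2)
print(np.array_equal(x.T, np.transpose(x)))  # True
```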
OpenMined__PySyft-676
Implement rsqrt Functionality in FloatTensor with CPU/GPU Backend Support ### User Story: As a Data Scientist using PySyft's FloatTensor type, I want to leverage a wide range of methods which use our new Unity backend. For this ticket to be complete, the rsqrt() should be added to our FloatTensor class with the appropriate functionality, returning a new tensor. Furthermore, the function should automatically determine which backend to use (CPU/GPU) based on where the data is located. If the data is located on the CPU, a performant CPU implementation should run but if the data for a given FloatTensor is located on a GPU, it should be run using an HLSL kernel where appropriate. Obviously, if no GPU is available, it should automatically fall back to the CPU implementation. ### Every Reference You Might Need for this Issue: - For a reference on the operation this performs check out [PyTorch](http://pytorch.org/docs/master/tensors.html)'s documentation. - For a reference on how to program in Unity, check out [this basic tutorial](https://unity3d.com/learn/tutorials/projects/roll-ball-tutorial) - For a reference on how to write HLSL code, check out [this basic tutorial](http://kylehalladay.com/blog/tutorial/2014/06/27/Compute-Shaders-Are-Nifty.html) - For a complete tutorial on how to add functions to FloatTensor (step by step guide) see [this Google Document](https://docs.google.com/document/d/1WRd7gGLFN0Awtf86AICYIHtg3gfFWLBa5wYTthsB3i0/edit) - For a reference on how other functions like this have been implemented check out the functions in [this notebook](https://github.com/OpenMined/OpenMined/blob/master/notebooks/Syft%20Tensor%20Example%20Notebook.ipynb) as well as the corresponding files that made it possible ([SyftController](https://github.com/OpenMined/OpenMined/blob/master/Assets/OpenMined/Network/Controllers/SyftController.cs), [FloatTensor.Ops](https://github.com/OpenMined/OpenMined/blob/master/Assets/OpenMined/Syft/Tensor/FloatTensor.Ops.cs), [FloatTensor.ShaderOps](https://github.com/OpenMined/OpenMined/blob/master/Assets/OpenMined/Syft/Tensor/FloatTensor.ShaderOps.cs), [FloatTensorShaders](https://github.com/OpenMined/OpenMined/blob/master/Assets/OpenMined/Syft/Math/Shaders/FloatTensorShaders.compute), and [FloatTensorTest](https://github.com/OpenMined/OpenMined/blob/master/Assets/OpenMined.Tests/Editor/FloatTensorTest.cs)). - And of course, please consider our [Contributor Guidelines](https://github.com/OpenMined/Docs/blob/master/contributing/guidelines.md) for all contributions. ### Acceptance Criteria: - [ ] an integration test in PySyft demonstrating the correct CPU and GPU operation implemented over a FloatTensor while connected to a Unity backend - [ ] a Unit Test in OpenMined/OpenMined demonstrating the correct operation on a FloatTensor - [ ] [inline](http://pytorch.org/docs/master/tensors.html) documentation in the python code. For inspiration on inline documentation, please check out PyTorch's documentation for this operator. - [ ] Link your Pull Request back to this Issue so that it gets closed appropriately when the PR is merged.
[ { "content": "import zmq\nimport uuid\n\n\nclass FloatTensor():\n\n def __init__(self, controller, data, data_is_pointer=False, verbose=False):\n self.verbose = verbose\n self.controller = controller\n if(data is not None and not data_is_pointer):\n data = data.astype('float')\n controller.socket.send_json({\"objectType\": \"tensor\",\n \"functionCall\": \"create\",\n \"data\": list(data.flatten()),\n \"shape\": data.shape})\n self.id = int(controller.socket.recv_string())\n if(verbose):\n print(\"FloatTensor.__init__: \" + str(self.id))\n\n elif(data_is_pointer):\n self.id = int(data)\n\n def __del__(self):\n self.delete_tensor()\n\n def abs(self):\n return self.no_params_func(\"abs\", return_response=True)\n\n def abs_(self):\n return self.no_params_func(\"abs_\")\n\n def acos(self):\n return self.no_params_func(\"acos\", return_response=True)\n\n def acos_(self):\n return self.no_params_func(\"acos_\")\n\n def asin(self):\n return self.no_params_func(\"asin\", return_response=True)\n\n def asin_(self):\n return self.no_params_func(\"asin_\")\n\n def atan(self):\n return self.no_params_func(\"atan\", return_response=True)\n\n def atan_(self):\n return self.no_params_func(\"atan_\")\n\n def addmm_(self, x, y):\n return self.params_func(\"addmm_\", [x.id, y.id])\n\n def addmm(self, x, y):\n copy = self.copy()\n copy.params_func(\"addmm_\", [x.id, y.id])\n return copy\n\n def addmv_(self, x, y):\n return self.params_func(\"addmv_\", [x.id, y.id])\n\n def addmv(self, x, y):\n copy = self.copy()\n copy.params_func(\"addmv_\", [x.id, y.id])\n return copy\n\n def __add__(self, x):\n return self.arithmetic_operation(x, \"add\", False)\n\n def __iadd__(self, x):\n return self.arithmetic_operation(x, \"add\", True)\n\n def copy(self):\n return self.no_params_func(\"copy\", return_response=True)\n\n def cos(self):\n return self.no_params_func(\"cos\", return_response=True)\n\n def cos_(self):\n return self.no_params_func(\"cos_\")\n\n def cosh(self):\n return self.no_params_func(\"cosh\", return_response=True)\n\n def cosh_(self):\n return self.no_params_func(\"cosh_\")\n\n def __truediv__(self, x):\n return self.arithmetic_operation(x, \"div\", False)\n\n def __itruediv__(self, x):\n return self.arithmetic_operation(x, \"div\", True)\n\n def floor_(self):\n return self.no_params_func(\"floor_\")\n\n def __mul__(self, x):\n return self.arithmetic_operation(x, \"mul\", False)\n\n def __imul__(self, x):\n return self.arithmetic_operation(x, \"mul\", True)\n\n def neg(self):\n return self.no_params_func(\"neg\", return_response=True)\n\n def sigmoid_(self):\n return self.no_params_func(\"sigmoid_\")\n\n def sign(self):\n return self.no_params_func(\"sign\", return_response=True)\n\n def sin(self):\n return self.no_params_func(\"sin\", return_response=True)\n\n def sin_(self):\n return self.no_params_func(\"sin_\")\n\n def size(self):\n \"\"\"\n Returns the size of the self tensor as a FloatTensor.\n\n Note:\n The returned value currently is a FloatTensor because it leverages\n the messaging mechanism with Unity.\n \"\"\"\n return self.no_params_func(\"size\", return_response=True)\n\n def sqrt(self):\n return self.no_params_func(\"sqrt\", return_response=True)\n\n def trunc(self):\n return self.no_params_func(\"trunc\", return_response=True)\n\n def __sub__(self, x):\n return self.arithmetic_operation(x, \"sub\", False)\n\n def __isub__(self,x):\n return self.arithmetic_operation(x,\"sub\",True)\n\n def sum(self,dim):\n assert type(dim) == int\n return self.arithmetic_operation(dim, \"sum\", 
False)\n\n def view(self, *args):\n new_dim = list(args)\n assert type(new_dim) == list\n assert type(new_dim[0]) == int\n return self.params_func(\"view\", new_dim, return_response=True)\n\n def view_(self, *args):\n new_dim = list(args)\n assert type(new_dim) == list\n assert type(new_dim[0]) == int\n self.params_func(\"view_\", new_dim, return_response=False)\n return self\n\n def T(self):\n return self.no_params_func(\"transpose\", return_response=True)\n\n def triu(self, k=0):\n return self.params_func(\"triu\", [k], return_response=True)\n\n def triu_(self, k=0):\n return self.params_func(\"triu_\", [k])\n\n # Fills this tensor with zeros.\n def zero_(self):\n return self.no_params_func(\"zero_\")\n\n def __repr__(self):\n return self.no_params_func(\"print\", True, False)\n\n def __str__(self):\n return self.no_params_func(\"print\", True, False)\n\n def cpu(self):\n return self.no_params_func(\"cpu\")\n\n def gpu(self):\n return self.no_params_func(\"gpu\")\n\n def cmd(self, functionCall, tensorIndexParams=[]):\n cmd = {\n 'functionCall': functionCall,\n 'objectType': 'tensor',\n 'objectIndex': self.id,\n 'tensorIndexParams': tensorIndexParams}\n return cmd\n\n def params_func(self, name, params, return_response=False, return_as_tensor=True):\n # send the command\n self.controller.socket.send_json(\n self.cmd(name, tensorIndexParams=params))\n # receive output from command\n res = self.controller.socket.recv_string()\n\n if(self.verbose):\n print(res)\n\n if(return_response):\n if(return_as_tensor):\n if(self.verbose):\n print(\"FloatTensor.__init__: \" + res)\n return FloatTensor(self.controller,int(res),True)\n else:\n return res\n return self\n\n def no_params_func(self, name, return_response=False, return_as_tensor=True):\n return(self.params_func(name, [], return_response, return_as_tensor))\n\n def arithmetic_operation(self, x, name, inline=False):\n\n operation_cmd = name\n\n if(type(x) == FloatTensor):\n operation_cmd += \"_elem\"\n parameter = x.id\n else:\n operation_cmd += \"_scalar\"\n parameter = str(x)\n\n if(inline):\n operation_cmd += \"_\"\n\n self.controller.socket.send_json(\n self.cmd(operation_cmd, [parameter])) # sends the command\n return FloatTensor(self.controller, int(self.controller.socket.recv_string()), True)\n\n def delete_tensor(self):\n if(self.id is not None):\n self.no_params_func(\"delete\")\n self.verbose = None\n self.controller = None\n self.id = None\n\n def T(self):\n return self.no_params_func(\"transpose\", return_response=True)\n\n def is_contiguous(self):\n return self.no_params_func(\"is_contiguous\", return_response=True, return_as_tensor=False)\n\n def sinh(self):\n return self.no_params_func(\"sinh\", return_response=True)\n\n def sinh_(self):\n return self.no_params_func(\"sinh_\")\n\n def tan(self):\n return self.no_params_func(\"tan\", return_response=True)\n\n def tan_(self):\n return self.no_params_func(\"tan_\")\n\n def tanh(self):\n return self.no_params_func(\"tanh\", return_response=True)\n\n\nclass SyftController():\n\n def __init__(self,verbose=True):\n\n self.identity = str(uuid.uuid4())\n\n context = zmq.Context()\n self.socket = context.socket(zmq.DEALER)\n self.socket.setsockopt_string(zmq.IDENTITY, self.identity)\n self.socket.connect(\"tcp://localhost:5555\")\n self.verbose=verbose\n\n def FloatTensor(self, data):\n verbose = self.verbose\n return FloatTensor(self, data,verbose=verbose)\n", "path": "syft/syft.py" } ]
[ { "content": "import zmq\nimport uuid\n\n\nclass FloatTensor():\n\n def __init__(self, controller, data, data_is_pointer=False, verbose=False):\n self.verbose = verbose\n self.controller = controller\n if(data is not None and not data_is_pointer):\n data = data.astype('float')\n controller.socket.send_json({\"objectType\": \"tensor\",\n \"functionCall\": \"create\",\n \"data\": list(data.flatten()),\n \"shape\": data.shape})\n self.id = int(controller.socket.recv_string())\n if(verbose):\n print(\"FloatTensor.__init__: \" + str(self.id))\n\n elif(data_is_pointer):\n self.id = int(data)\n\n def __del__(self):\n self.delete_tensor()\n\n def abs(self):\n return self.no_params_func(\"abs\", return_response=True)\n\n def abs_(self):\n return self.no_params_func(\"abs_\")\n\n def acos(self):\n return self.no_params_func(\"acos\", return_response=True)\n\n def acos_(self):\n return self.no_params_func(\"acos_\")\n\n def asin(self):\n return self.no_params_func(\"asin\", return_response=True)\n\n def asin_(self):\n return self.no_params_func(\"asin_\")\n\n def atan(self):\n return self.no_params_func(\"atan\", return_response=True)\n\n def atan_(self):\n return self.no_params_func(\"atan_\")\n\n def addmm_(self, x, y):\n return self.params_func(\"addmm_\", [x.id, y.id])\n\n def addmm(self, x, y):\n copy = self.copy()\n copy.params_func(\"addmm_\", [x.id, y.id])\n return copy\n\n def addmv_(self, x, y):\n return self.params_func(\"addmv_\", [x.id, y.id])\n\n def addmv(self, x, y):\n copy = self.copy()\n copy.params_func(\"addmv_\", [x.id, y.id])\n return copy\n\n def __add__(self, x):\n return self.arithmetic_operation(x, \"add\", False)\n\n def __iadd__(self, x):\n return self.arithmetic_operation(x, \"add\", True)\n\n def copy(self):\n return self.no_params_func(\"copy\", return_response=True)\n\n def cos(self):\n return self.no_params_func(\"cos\", return_response=True)\n\n def cos_(self):\n return self.no_params_func(\"cos_\")\n\n def cosh(self):\n return self.no_params_func(\"cosh\", return_response=True)\n\n def cosh_(self):\n return self.no_params_func(\"cosh_\")\n\n def __truediv__(self, x):\n return self.arithmetic_operation(x, \"div\", False)\n\n def __itruediv__(self, x):\n return self.arithmetic_operation(x, \"div\", True)\n\n def floor_(self):\n return self.no_params_func(\"floor_\")\n\n def __mul__(self, x):\n return self.arithmetic_operation(x, \"mul\", False)\n\n def __imul__(self, x):\n return self.arithmetic_operation(x, \"mul\", True)\n\n def neg(self):\n return self.no_params_func(\"neg\", return_response=True)\n\n def rsqrt(self):\n return self.no_params_func(\"rsqrt\",return_response=True)\n\n def sigmoid_(self):\n return self.no_params_func(\"sigmoid_\")\n\n def sign(self):\n return self.no_params_func(\"sign\", return_response=True)\n\n def sin(self):\n return self.no_params_func(\"sin\", return_response=True)\n\n def sin_(self):\n return self.no_params_func(\"sin_\")\n\n def size(self):\n \"\"\"\n Returns the size of the self tensor as a FloatTensor.\n\n Note:\n The returned value currently is a FloatTensor because it leverages\n the messaging mechanism with Unity.\n \"\"\"\n return self.no_params_func(\"size\", return_response=True)\n\n def sqrt(self):\n return self.no_params_func(\"sqrt\", return_response=True)\n\n def trunc(self):\n return self.no_params_func(\"trunc\", return_response=True)\n\n def __sub__(self, x):\n return self.arithmetic_operation(x, \"sub\", False)\n\n def __isub__(self,x):\n return self.arithmetic_operation(x,\"sub\",True)\n\n def 
sum(self,dim):\n assert type(dim) == int\n return self.arithmetic_operation(dim, \"sum\", False)\n\n def view(self, *args):\n new_dim = list(args)\n assert type(new_dim) == list\n assert type(new_dim[0]) == int\n return self.params_func(\"view\", new_dim, return_response=True)\n\n def view_(self, *args):\n new_dim = list(args)\n assert type(new_dim) == list\n assert type(new_dim[0]) == int\n self.params_func(\"view_\", new_dim, return_response=False)\n return self\n\n def T(self):\n return self.no_params_func(\"transpose\", return_response=True)\n\n def triu(self, k=0):\n return self.params_func(\"triu\", [k], return_response=True)\n\n def triu_(self, k=0):\n return self.params_func(\"triu_\", [k])\n\n # Fills this tensor with zeros.\n def zero_(self):\n return self.no_params_func(\"zero_\")\n\n def __repr__(self):\n return self.no_params_func(\"print\", True, False)\n\n def __str__(self):\n return self.no_params_func(\"print\", True, False)\n\n def cpu(self):\n return self.no_params_func(\"cpu\")\n\n def gpu(self):\n return self.no_params_func(\"gpu\")\n\n def cmd(self, functionCall, tensorIndexParams=[]):\n cmd = {\n 'functionCall': functionCall,\n 'objectType': 'tensor',\n 'objectIndex': self.id,\n 'tensorIndexParams': tensorIndexParams}\n return cmd\n\n def params_func(self, name, params, return_response=False, return_as_tensor=True):\n # send the command\n self.controller.socket.send_json(\n self.cmd(name, tensorIndexParams=params))\n # receive output from command\n res = self.controller.socket.recv_string()\n\n if(self.verbose):\n print(res)\n\n if(return_response):\n if(return_as_tensor):\n if(self.verbose):\n print(\"FloatTensor.__init__: \" + res)\n return FloatTensor(self.controller,int(res),True)\n else:\n return res\n return self\n\n def no_params_func(self, name, return_response=False, return_as_tensor=True):\n return(self.params_func(name, [], return_response, return_as_tensor))\n\n def arithmetic_operation(self, x, name, inline=False):\n\n operation_cmd = name\n\n if(type(x) == FloatTensor):\n operation_cmd += \"_elem\"\n parameter = x.id\n else:\n operation_cmd += \"_scalar\"\n parameter = str(x)\n\n if(inline):\n operation_cmd += \"_\"\n\n self.controller.socket.send_json(\n self.cmd(operation_cmd, [parameter])) # sends the command\n return FloatTensor(self.controller, int(self.controller.socket.recv_string()), True)\n\n def delete_tensor(self):\n if(self.id is not None):\n self.no_params_func(\"delete\")\n self.verbose = None\n self.controller = None\n self.id = None\n\n def T(self):\n return self.no_params_func(\"transpose\", return_response=True)\n\n def is_contiguous(self):\n return self.no_params_func(\"is_contiguous\", return_response=True, return_as_tensor=False)\n\n def sinh(self):\n return self.no_params_func(\"sinh\", return_response=True)\n\n def sinh_(self):\n return self.no_params_func(\"sinh_\")\n\n def tan(self):\n return self.no_params_func(\"tan\", return_response=True)\n\n def tan_(self):\n return self.no_params_func(\"tan_\")\n\n def tanh(self):\n return self.no_params_func(\"tanh\", return_response=True)\n\n\nclass SyftController():\n\n def __init__(self,verbose=True):\n\n self.identity = str(uuid.uuid4())\n\n context = zmq.Context()\n self.socket = context.socket(zmq.DEALER)\n self.socket.setsockopt_string(zmq.IDENTITY, self.identity)\n self.socket.connect(\"tcp://localhost:5555\")\n self.verbose=verbose\n\n def FloatTensor(self, data):\n verbose = self.verbose\n return FloatTensor(self, data,verbose=verbose)\n", "path": "syft/syft.py" } ]
diff --git a/syft/syft.py b/syft/syft.py index 67414ce731e..1f8b47a80f6 100644 --- a/syft/syft.py +++ b/syft/syft.py @@ -102,6 +102,9 @@ def __imul__(self, x): def neg(self): return self.no_params_func("neg", return_response=True) + def rsqrt(self): + return self.no_params_func("rsqrt",return_response=True) + def sigmoid_(self): return self.no_params_func("sigmoid_")
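For reference, `rsqrt` computes the element-wise reciprocal square root, 1/sqrt(x). A minimal sketch of the expected numerics, with NumPy used purely for illustration; the call added above is `FloatTensor.rsqrt()`, which sends the "rsqrt" command to the Unity backend.

```python
import numpy as np

data = np.array([1.0, 4.0, 16.0, 25.0])
expected = 1.0 / np.sqrt(data)  # what rsqrt is expected to return element-wise
print(expected)                 # values: 1.0, 0.5, 0.25, 0.2
```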
talonhub__community-742
'go <arrow keys>' conflicts with generic_editor 'go left/right/up/down' This is really only a problem if the edit actions are overridden, but one might want to do so for `vim`...
[ { "content": "from typing import Set\n\nfrom talon import Module, Context, actions, app\nimport sys\n\ndefault_alphabet = \"air bat cap drum each fine gust harp sit jury crunch look made near odd pit quench red sun trap urge vest whale plex yank zip\".split(\n \" \"\n)\nletters_string = \"abcdefghijklmnopqrstuvwxyz\"\n\ndefault_digits = \"zero one two three four five six seven eight nine\".split(\" \")\nnumbers = [str(i) for i in range(10)]\ndefault_f_digits = \"one two three four five six seven eight nine ten eleven twelve\".split(\n \" \"\n)\n\nmod = Module()\nmod.list(\"letter\", desc=\"The spoken phonetic alphabet\")\nmod.list(\"symbol_key\", desc=\"All symbols from the keyboard\")\nmod.list(\"arrow_key\", desc=\"All arrow keys\")\nmod.list(\"number_key\", desc=\"All number keys\")\nmod.list(\"modifier_key\", desc=\"All modifier keys\")\nmod.list(\"function_key\", desc=\"All function keys\")\nmod.list(\"special_key\", desc=\"All special keys\")\nmod.list(\"punctuation\", desc=\"words for inserting punctuation into text\")\n\n\[email protected](rule=\"{self.modifier_key}+\")\ndef modifiers(m) -> str:\n \"One or more modifier keys\"\n return \"-\".join(m.modifier_key_list)\n\n\[email protected](rule=\"{self.arrow_key}\")\ndef arrow_key(m) -> str:\n \"One directional arrow key\"\n return m.arrow_key\n\n\[email protected](rule=\"<self.arrow_key>+\")\ndef arrow_keys(m) -> str:\n \"One or more arrow keys separated by a space\"\n return str(m)\n\n\[email protected](rule=\"{self.number_key}\")\ndef number_key(m) -> str:\n \"One number key\"\n return m.number_key\n\n\[email protected](rule=\"{self.letter}\")\ndef letter(m) -> str:\n \"One letter key\"\n return m.letter\n\n\[email protected](rule=\"{self.special_key}\")\ndef special_key(m) -> str:\n \"One special key\"\n return m.special_key\n\n\[email protected](rule=\"{self.symbol_key}\")\ndef symbol_key(m) -> str:\n \"One symbol key\"\n return m.symbol_key\n\n\[email protected](rule=\"{self.function_key}\")\ndef function_key(m) -> str:\n \"One function key\"\n return m.function_key\n\n\[email protected](rule=\"( <self.letter> | <self.number_key> | <self.symbol_key> )\")\ndef any_alphanumeric_key(m) -> str:\n \"any alphanumeric key\"\n return str(m)\n\n\[email protected](\n rule=\"( <self.letter> | <self.number_key> | <self.symbol_key> \"\n \"| <self.arrow_key> | <self.function_key> | <self.special_key> )\"\n)\ndef unmodified_key(m) -> str:\n \"A single key with no modifiers\"\n return str(m)\n\n\[email protected](rule=\"{self.modifier_key}* <self.unmodified_key>\")\ndef key(m) -> str:\n \"A single key with optional modifiers\"\n try:\n mods = m.modifier_key_list\n except AttributeError:\n mods = []\n return \"-\".join(mods + [m.unmodified_key])\n\n\[email protected](rule=\"<self.key>+\")\ndef keys(m) -> str:\n \"A sequence of one or more keys with optional modifiers\"\n return \" \".join(m.key_list)\n\n\[email protected](rule=\"{self.letter}+\")\ndef letters(m) -> str:\n \"Multiple letter keys\"\n return \"\".join(m.letter_list)\n\n\nctx = Context()\nmodifier_keys = {\n # If you find 'alt' is often misrecognized, try using 'alter'.\n \"alt\": \"alt\", #'alter': 'alt',\n \"control\": \"ctrl\", #'troll': 'ctrl',\n \"shift\": \"shift\", #'sky': 'shift',\n \"super\": \"super\",\n}\nif app.platform == \"mac\":\n modifier_keys[\"command\"] = \"cmd\"\n modifier_keys[\"option\"] = \"alt\"\nctx.lists[\"self.modifier_key\"] = modifier_keys\nalphabet = dict(zip(default_alphabet, letters_string))\nctx.lists[\"self.letter\"] = alphabet\n\n# 
`punctuation_words` is for words you want available BOTH in dictation and as key names in command mode.\n# `symbol_key_words` is for key names that should be available in command mode, but NOT during dictation.\npunctuation_words = {\n # TODO: I'm not sure why we need these, I think it has something to do with\n # Dragon. Possibly it has been fixed by later improvements to talon? -rntz\n \"`\": \"`\",\n \",\": \",\", # <== these things\n \"back tick\": \"`\",\n \"grave\": \"`\",\n \"comma\": \",\",\n \"period\": \".\",\n \"full stop\": \".\",\n \"semicolon\": \";\",\n \"colon\": \":\",\n \"forward slash\": \"/\",\n \"question mark\": \"?\",\n \"exclamation mark\": \"!\",\n \"exclamation point\": \"!\",\n \"asterisk\": \"*\",\n \"hash sign\": \"#\",\n \"number sign\": \"#\",\n \"percent sign\": \"%\",\n \"at sign\": \"@\",\n \"and sign\": \"&\",\n \"ampersand\": \"&\",\n\n # Currencies\n \"dollar sign\": \"$\",\n \"pound sign\": \"£\",\n}\nsymbol_key_words = {\n \"dot\": \".\",\n \"point\": \".\",\n \"quote\": \"'\",\n \"apostrophe\": \"'\",\n \"L square\": \"[\",\n \"left square\": \"[\",\n \"square\": \"[\",\n \"R square\": \"]\",\n \"right square\": \"]\",\n \"slash\": \"/\",\n \"backslash\": \"\\\\\",\n \"minus\": \"-\",\n \"dash\": \"-\",\n \"equals\": \"=\",\n \"plus\": \"+\",\n \"tilde\": \"~\",\n \"bang\": \"!\",\n \"down score\": \"_\",\n \"under score\": \"_\",\n \"paren\": \"(\",\n \"L paren\": \"(\",\n \"left paren\": \"(\",\n \"R paren\": \")\",\n \"right paren\": \")\",\n \"brace\": \"{\",\n \"left brace\": \"{\",\n \"R brace\": \"}\",\n \"right brace\": \"}\",\n \"angle\": \"<\",\n \"left angle\": \"<\",\n \"less than\": \"<\",\n \"rangle\": \">\",\n \"R angle\": \">\",\n \"right angle\": \">\",\n \"greater than\": \">\",\n \"star\": \"*\",\n \"hash\": \"#\",\n \"percent\": \"%\",\n \"caret\": \"^\",\n \"amper\": \"&\",\n \"pipe\": \"|\",\n \"dubquote\": '\"',\n \"double quote\": '\"',\n\n # Currencies\n \"dollar\": \"$\",\n \"pound\": \"£\",\n}\n\n# make punctuation words also included in {user.symbol_keys}\nsymbol_key_words.update(punctuation_words)\nctx.lists[\"self.punctuation\"] = punctuation_words\nctx.lists[\"self.symbol_key\"] = symbol_key_words\nctx.lists[\"self.number_key\"] = dict(zip(default_digits, numbers))\nctx.lists[\"self.arrow_key\"] = {\n \"down\": \"down\",\n \"left\": \"left\",\n \"right\": \"right\",\n \"up\": \"up\",\n}\n\nsimple_keys = [\n \"end\",\n \"enter\",\n \"escape\",\n \"home\",\n \"insert\",\n \"pagedown\",\n \"pageup\",\n \"space\",\n \"tab\",\n]\n\nalternate_keys = {\n \"delete\": \"backspace\",\n \"forward delete\": \"delete\",\n #'junk': 'backspace',\n \"page up\": \"pageup\",\n \"page down\": \"pagedown\",\n}\n# mac apparently doesn't have the menu key.\nif app.platform in (\"windows\", \"linux\"):\n alternate_keys[\"menu key\"] = \"menu\"\n alternate_keys[\"print screen\"] = \"printscr\"\n\nspecial_keys = {k: k for k in simple_keys}\nspecial_keys.update(alternate_keys)\nctx.lists[\"self.special_key\"] = special_keys\nctx.lists[\"self.function_key\"] = {\n f\"F {default_f_digits[i]}\": f\"f{i + 1}\" for i in range(12)\n}\n\n\n", "path": "code/keys.py" } ]
[ { "content": "from typing import Set\n\nfrom talon import Module, Context, actions, app\nimport sys\n\ndefault_alphabet = \"air bat cap drum each fine gust harp sit jury crunch look made near odd pit quench red sun trap urge vest whale plex yank zip\".split(\n \" \"\n)\nletters_string = \"abcdefghijklmnopqrstuvwxyz\"\n\ndefault_digits = \"zero one two three four five six seven eight nine\".split(\" \")\nnumbers = [str(i) for i in range(10)]\ndefault_f_digits = \"one two three four five six seven eight nine ten eleven twelve\".split(\n \" \"\n)\n\nmod = Module()\nmod.list(\"letter\", desc=\"The spoken phonetic alphabet\")\nmod.list(\"symbol_key\", desc=\"All symbols from the keyboard\")\nmod.list(\"arrow_key\", desc=\"All arrow keys\")\nmod.list(\"number_key\", desc=\"All number keys\")\nmod.list(\"modifier_key\", desc=\"All modifier keys\")\nmod.list(\"function_key\", desc=\"All function keys\")\nmod.list(\"special_key\", desc=\"All special keys\")\nmod.list(\"punctuation\", desc=\"words for inserting punctuation into text\")\n\n\[email protected](rule=\"{self.modifier_key}+\")\ndef modifiers(m) -> str:\n \"One or more modifier keys\"\n return \"-\".join(m.modifier_key_list)\n\n\[email protected](rule=\"{self.arrow_key}\")\ndef arrow_key(m) -> str:\n \"One directional arrow key\"\n return m.arrow_key\n\n\[email protected](rule=\"<self.arrow_key>+\")\ndef arrow_keys(m) -> str:\n \"One or more arrow keys separated by a space\"\n return str(m)\n\n\[email protected](rule=\"{self.number_key}\")\ndef number_key(m) -> str:\n \"One number key\"\n return m.number_key\n\n\[email protected](rule=\"{self.letter}\")\ndef letter(m) -> str:\n \"One letter key\"\n return m.letter\n\n\[email protected](rule=\"{self.special_key}\")\ndef special_key(m) -> str:\n \"One special key\"\n return m.special_key\n\n\[email protected](rule=\"{self.symbol_key}\")\ndef symbol_key(m) -> str:\n \"One symbol key\"\n return m.symbol_key\n\n\[email protected](rule=\"{self.function_key}\")\ndef function_key(m) -> str:\n \"One function key\"\n return m.function_key\n\n\[email protected](rule=\"( <self.letter> | <self.number_key> | <self.symbol_key> )\")\ndef any_alphanumeric_key(m) -> str:\n \"any alphanumeric key\"\n return str(m)\n\n\[email protected](\n rule=\"( <self.letter> | <self.number_key> | <self.symbol_key> \"\n \"| <self.arrow_key> | <self.function_key> | <self.special_key> )\"\n)\ndef unmodified_key(m) -> str:\n \"A single key with no modifiers\"\n return str(m)\n\n\[email protected](rule=\"{self.modifier_key}* <self.unmodified_key>\")\ndef key(m) -> str:\n \"A single key with optional modifiers\"\n try:\n mods = m.modifier_key_list\n except AttributeError:\n mods = []\n return \"-\".join(mods + [m.unmodified_key])\n\n\[email protected](rule=\"<self.key>+\")\ndef keys(m) -> str:\n \"A sequence of one or more keys with optional modifiers\"\n return \" \".join(m.key_list)\n\n\[email protected](rule=\"{self.letter}+\")\ndef letters(m) -> str:\n \"Multiple letter keys\"\n return \"\".join(m.letter_list)\n\n\nctx = Context()\nmodifier_keys = {\n # If you find 'alt' is often misrecognized, try using 'alter'.\n \"alt\": \"alt\", #'alter': 'alt',\n \"control\": \"ctrl\", #'troll': 'ctrl',\n \"shift\": \"shift\", #'sky': 'shift',\n \"super\": \"super\",\n}\nif app.platform == \"mac\":\n modifier_keys[\"command\"] = \"cmd\"\n modifier_keys[\"option\"] = \"alt\"\nctx.lists[\"self.modifier_key\"] = modifier_keys\nalphabet = dict(zip(default_alphabet, letters_string))\nctx.lists[\"self.letter\"] = alphabet\n\n# 
`punctuation_words` is for words you want available BOTH in dictation and as key names in command mode.\n# `symbol_key_words` is for key names that should be available in command mode, but NOT during dictation.\npunctuation_words = {\n # TODO: I'm not sure why we need these, I think it has something to do with\n # Dragon. Possibly it has been fixed by later improvements to talon? -rntz\n \"`\": \"`\",\n \",\": \",\", # <== these things\n \"back tick\": \"`\",\n \"grave\": \"`\",\n \"comma\": \",\",\n \"period\": \".\",\n \"full stop\": \".\",\n \"semicolon\": \";\",\n \"colon\": \":\",\n \"forward slash\": \"/\",\n \"question mark\": \"?\",\n \"exclamation mark\": \"!\",\n \"exclamation point\": \"!\",\n \"asterisk\": \"*\",\n \"hash sign\": \"#\",\n \"number sign\": \"#\",\n \"percent sign\": \"%\",\n \"at sign\": \"@\",\n \"and sign\": \"&\",\n \"ampersand\": \"&\",\n\n # Currencies\n \"dollar sign\": \"$\",\n \"pound sign\": \"£\",\n}\nsymbol_key_words = {\n \"dot\": \".\",\n \"point\": \".\",\n \"quote\": \"'\",\n \"apostrophe\": \"'\",\n \"L square\": \"[\",\n \"left square\": \"[\",\n \"square\": \"[\",\n \"R square\": \"]\",\n \"right square\": \"]\",\n \"slash\": \"/\",\n \"backslash\": \"\\\\\",\n \"minus\": \"-\",\n \"dash\": \"-\",\n \"equals\": \"=\",\n \"plus\": \"+\",\n \"tilde\": \"~\",\n \"bang\": \"!\",\n \"down score\": \"_\",\n \"under score\": \"_\",\n \"paren\": \"(\",\n \"L paren\": \"(\",\n \"left paren\": \"(\",\n \"R paren\": \")\",\n \"right paren\": \")\",\n \"brace\": \"{\",\n \"left brace\": \"{\",\n \"R brace\": \"}\",\n \"right brace\": \"}\",\n \"angle\": \"<\",\n \"left angle\": \"<\",\n \"less than\": \"<\",\n \"rangle\": \">\",\n \"R angle\": \">\",\n \"right angle\": \">\",\n \"greater than\": \">\",\n \"star\": \"*\",\n \"hash\": \"#\",\n \"percent\": \"%\",\n \"caret\": \"^\",\n \"amper\": \"&\",\n \"pipe\": \"|\",\n \"dubquote\": '\"',\n \"double quote\": '\"',\n\n # Currencies\n \"dollar\": \"$\",\n \"pound\": \"£\",\n}\n\n# make punctuation words also included in {user.symbol_keys}\nsymbol_key_words.update(punctuation_words)\nctx.lists[\"self.punctuation\"] = punctuation_words\nctx.lists[\"self.symbol_key\"] = symbol_key_words\nctx.lists[\"self.number_key\"] = dict(zip(default_digits, numbers))\nctx.lists[\"self.arrow_key\"] = {\n \"down\": \"down\",\n \"left\": \"left\",\n \"right\": \"right\",\n \"up\": \"up\",\n}\n\nsimple_keys = [\n \"end\",\n \"enter\",\n \"escape\",\n \"home\",\n \"insert\",\n \"pagedown\",\n \"pageup\",\n \"space\",\n \"tab\",\n]\n\nalternate_keys = {\n \"delete\": \"backspace\",\n \"forward delete\": \"delete\",\n #'junk': 'backspace',\n \"page up\": \"pageup\",\n \"page down\": \"pagedown\",\n}\n# mac apparently doesn't have the menu key.\nif app.platform in (\"windows\", \"linux\"):\n alternate_keys[\"menu key\"] = \"menu\"\n alternate_keys[\"print screen\"] = \"printscr\"\n\nspecial_keys = {k: k for k in simple_keys}\nspecial_keys.update(alternate_keys)\nctx.lists[\"self.special_key\"] = special_keys\nctx.lists[\"self.function_key\"] = {\n f\"F {default_f_digits[i]}\": f\"f{i + 1}\" for i in range(12)\n}\n\n\[email protected]_class\nclass Actions:\n def move_cursor(s: str):\n \"\"\"Given a sequence of directions, eg. 'left left up', moves the cursor accordingly using edit.{left,right,up,down}.\"\"\"\n for d in s.split():\n if d in ('left','right','up','down'):\n getattr(actions.edit, d)()\n else:\n raise RuntimeError(f'invalid arrow key: {d}')\n", "path": "code/keys.py" } ]
diff --git a/code/keys.py b/code/keys.py index a6764aedd8..10c0dc699a 100644 --- a/code/keys.py +++ b/code/keys.py @@ -249,3 +249,12 @@ def letters(m) -> str: } [email protected]_class +class Actions: + def move_cursor(s: str): + """Given a sequence of directions, eg. 'left left up', moves the cursor accordingly using edit.{left,right,up,down}.""" + for d in s.split(): + if d in ('left','right','up','down'): + getattr(actions.edit, d)() + else: + raise RuntimeError(f'invalid arrow key: {d}') diff --git a/misc/keys.talon b/misc/keys.talon index eeca9257bb..610671f38b 100644 --- a/misc/keys.talon +++ b/misc/keys.talon @@ -1,4 +1,4 @@ -go <user.arrow_keys>: key(arrow_keys) +go <user.arrow_keys>: user.move_cursor(arrow_keys) <user.letter>: key(letter) (ship | uppercase) <user.letters> [(lowercase | sunk)]: user.insert_formatted(letters, "ALL_CAPS") @@ -6,4 +6,7 @@ go <user.arrow_keys>: key(arrow_keys) <user.function_key>: key(function_key) <user.special_key>: key(special_key) <user.modifiers> <user.unmodified_key>: key("{modifiers}-{unmodified_key}") +# for key combos consisting only of modifiers, eg. `press super`. press <user.modifiers>: key(modifiers) +# for consistency with dictation mode and explicit arrow keys if you need them. +press <user.keys>: key(keys) diff --git a/modes/dictation_mode.talon b/modes/dictation_mode.talon index c6c4563e7a..3dc0684df7 100644 --- a/modes/dictation_mode.talon +++ b/modes/dictation_mode.talon @@ -1,6 +1,7 @@ mode: dictation - -^press <user.keys>$: key("{keys}") +^press <user.modifiers>$: key(modifiers) +^press <user.keys>$: key(keys) # Everything here should call `auto_insert()` (instead of `insert()`), to preserve the state to correctly auto-capitalize/auto-space. # (Talonscript string literals implicitly call `auto_insert`, so there's no need to wrap those)
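The fix above routes `go <arrow keys>` through the overridable `edit.left/right/up/down` actions instead of pressing raw keys, so an editor context (e.g. vim) can substitute its own motions. A standalone sketch of that dispatch pattern in plain Python, illustrative only and outside the Talon runtime:

```python
def default_actions():
    # Stand-ins for actions.edit.left/right/up/down; an override swaps these out.
    return {d: (lambda d=d: print(f"edit.{d}()")) for d in ("left", "right", "up", "down")}


def move_cursor(directions: str, actions=None) -> None:
    actions = actions or default_actions()
    for d in directions.split():
        if d not in actions:
            raise RuntimeError(f"invalid arrow key: {d}")
        actions[d]()


move_cursor("left left up")  # edit.left(), edit.left(), edit.up()
```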
SeldonIO__MLServer-192
Support other common names for SKLearn runtime Add support for models named `model.pickle` and `model.pkl`
[ { "content": "import joblib\n\nfrom typing import List\n\nfrom mlserver import types\nfrom mlserver.model import MLModel\nfrom mlserver.errors import InferenceError\nfrom mlserver.utils import get_model_uri, to_ndarray\n\n\nPREDICT_OUTPUT = \"predict\"\nPREDICT_PROBA_OUTPUT = \"predict_proba\"\nVALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n\nWELLKNOWN_MODEL_FILENAMES = [\"model.joblib\"]\n\n\nclass SKLearnModel(MLModel):\n \"\"\"\n Implementation of the MLModel interface to load and serve `scikit-learn`\n models persisted with `joblib`.\n \"\"\"\n\n async def load(self) -> bool:\n # TODO: Log info message\n model_uri = await get_model_uri(\n self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES\n )\n self._model = joblib.load(model_uri)\n\n self.ready = True\n return self.ready\n\n async def predict(self, payload: types.InferenceRequest) -> types.InferenceResponse:\n payload = self._check_request(payload)\n\n return types.InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=self._predict_outputs(payload),\n )\n\n def _check_request(self, payload: types.InferenceRequest) -> types.InferenceRequest:\n if len(payload.inputs) != 1:\n raise InferenceError(\n \"SKLearnModel only supports a single input tensor \"\n f\"({len(payload.inputs)} were received)\"\n )\n\n if not payload.outputs:\n # By default, only return the result of `predict()`\n payload.outputs = [types.RequestOutput(name=PREDICT_OUTPUT)]\n else:\n for request_output in payload.outputs:\n if request_output.name not in VALID_OUTPUTS:\n raise InferenceError(\n f\"SKLearnModel only supports '{PREDICT_OUTPUT}' and \"\n f\"'{PREDICT_PROBA_OUTPUT}' as outputs \"\n f\"({request_output.name} was received)\"\n )\n\n return payload\n\n def _predict_outputs(\n self, payload: types.InferenceRequest\n ) -> List[types.ResponseOutput]:\n model_input = payload.inputs[0]\n input_data = to_ndarray(model_input)\n\n outputs = []\n for request_output in payload.outputs: # type: ignore\n predict_fn = getattr(self._model, request_output.name)\n y = predict_fn(input_data)\n\n # TODO: Set datatype (cast from numpy?)\n outputs.append(\n types.ResponseOutput(\n name=request_output.name,\n shape=y.shape,\n datatype=\"FP32\",\n data=y.tolist(),\n )\n )\n\n return outputs\n", "path": "runtimes/sklearn/mlserver_sklearn/sklearn.py" } ]
[ { "content": "import joblib\n\nfrom typing import List\n\nfrom mlserver import types\nfrom mlserver.model import MLModel\nfrom mlserver.errors import InferenceError\nfrom mlserver.utils import get_model_uri, to_ndarray\n\n\nPREDICT_OUTPUT = \"predict\"\nPREDICT_PROBA_OUTPUT = \"predict_proba\"\nVALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT]\n\nWELLKNOWN_MODEL_FILENAMES = [\"model.joblib\", \"model.pickle\", \"model.pkl\"]\n\n\nclass SKLearnModel(MLModel):\n \"\"\"\n Implementation of the MLModel interface to load and serve `scikit-learn`\n models persisted with `joblib`.\n \"\"\"\n\n async def load(self) -> bool:\n # TODO: Log info message\n model_uri = await get_model_uri(\n self._settings, wellknown_filenames=WELLKNOWN_MODEL_FILENAMES\n )\n self._model = joblib.load(model_uri)\n\n self.ready = True\n return self.ready\n\n async def predict(self, payload: types.InferenceRequest) -> types.InferenceResponse:\n payload = self._check_request(payload)\n\n return types.InferenceResponse(\n model_name=self.name,\n model_version=self.version,\n outputs=self._predict_outputs(payload),\n )\n\n def _check_request(self, payload: types.InferenceRequest) -> types.InferenceRequest:\n if len(payload.inputs) != 1:\n raise InferenceError(\n \"SKLearnModel only supports a single input tensor \"\n f\"({len(payload.inputs)} were received)\"\n )\n\n if not payload.outputs:\n # By default, only return the result of `predict()`\n payload.outputs = [types.RequestOutput(name=PREDICT_OUTPUT)]\n else:\n for request_output in payload.outputs:\n if request_output.name not in VALID_OUTPUTS:\n raise InferenceError(\n f\"SKLearnModel only supports '{PREDICT_OUTPUT}' and \"\n f\"'{PREDICT_PROBA_OUTPUT}' as outputs \"\n f\"({request_output.name} was received)\"\n )\n\n return payload\n\n def _predict_outputs(\n self, payload: types.InferenceRequest\n ) -> List[types.ResponseOutput]:\n model_input = payload.inputs[0]\n input_data = to_ndarray(model_input)\n\n outputs = []\n for request_output in payload.outputs: # type: ignore\n predict_fn = getattr(self._model, request_output.name)\n y = predict_fn(input_data)\n\n # TODO: Set datatype (cast from numpy?)\n outputs.append(\n types.ResponseOutput(\n name=request_output.name,\n shape=y.shape,\n datatype=\"FP32\",\n data=y.tolist(),\n )\n )\n\n return outputs\n", "path": "runtimes/sklearn/mlserver_sklearn/sklearn.py" } ]
diff --git a/runtimes/sklearn/mlserver_sklearn/sklearn.py b/runtimes/sklearn/mlserver_sklearn/sklearn.py index a419b8c89..312ea4067 100644 --- a/runtimes/sklearn/mlserver_sklearn/sklearn.py +++ b/runtimes/sklearn/mlserver_sklearn/sklearn.py @@ -12,7 +12,7 @@ PREDICT_PROBA_OUTPUT = "predict_proba" VALID_OUTPUTS = [PREDICT_OUTPUT, PREDICT_PROBA_OUTPUT] -WELLKNOWN_MODEL_FILENAMES = ["model.joblib"] +WELLKNOWN_MODEL_FILENAMES = ["model.joblib", "model.pickle", "model.pkl"] class SKLearnModel(MLModel):
nonebot__nonebot2-140
Bug: when a window-shake (nudge) message is sent, msg contains no message content, so indexing it with [0] raises an error; in the bot.py module of the cqhttp package, `first_msg_seg = event.message[0]` goes out of range. Beyond the issue above, it is also suggested that chained double indexing supply a default for the first lookup so the second cannot go out of range, as in `state["_prefix"]["command"] in commands`.
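The crash comes from indexing an empty message list. The fix captured in the diff further down guards `_check_at_me` by appending an empty text segment before any indexing; the sketch here mirrors that guard in isolation (`FakeSegment` and `check_first_segment` are illustrative stand-ins, not the nonebot2 API):

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class FakeSegment:
    """Stand-in for a cqhttp MessageSegment: a type plus a data dict."""
    type: str
    data: Dict[str, str] = field(default_factory=dict)


def check_first_segment(message: List[FakeSegment]) -> FakeSegment:
    # Guard mirroring the fix below: never index an empty message list.
    if not message:
        message.append(FakeSegment("text", {"text": ""}))
    return message[0]


# A window-shake/nudge event arrives with no message content:
empty_message: List[FakeSegment] = []
seg = check_first_segment(empty_message)  # no IndexError
print(seg.type, repr(seg.data["text"]))   # text ''
```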
[ { "content": "import re\nimport sys\nimport hmac\nimport json\nimport asyncio\nfrom typing import Any, Dict, Union, Optional, TYPE_CHECKING\n\nimport httpx\nfrom nonebot.log import logger\nfrom nonebot.config import Config\nfrom nonebot.typing import overrides\nfrom nonebot.message import handle_event\nfrom nonebot.adapters import Bot as BaseBot\nfrom nonebot.exception import RequestDenied\n\nfrom .utils import log, escape\nfrom .message import Message, MessageSegment\nfrom .event import Reply, Event, MessageEvent, get_event_model\nfrom .exception import NetworkError, ApiNotAvailable, ActionFailed\n\nif TYPE_CHECKING:\n from nonebot.drivers import Driver, WebSocket\n\n\ndef get_auth_bearer(access_token: Optional[str] = None) -> Optional[str]:\n if not access_token:\n return None\n scheme, _, param = access_token.partition(\" \")\n if scheme.lower() not in [\"bearer\", \"token\"]:\n raise RequestDenied(401, \"Not authenticated\")\n return param\n\n\nasync def _check_reply(bot: \"Bot\", event: \"Event\"):\n \"\"\"\n :说明:\n\n 检查消息中存在的回复,去除并赋值 ``event.reply``, ``event.to_me``\n\n :参数:\n\n * ``bot: Bot``: Bot 对象\n * ``event: Event``: Event 对象\n \"\"\"\n if not isinstance(event, MessageEvent):\n return\n\n try:\n index = list(map(lambda x: x.type == \"reply\",\n event.message)).index(True)\n except ValueError:\n return\n msg_seg = event.message[index]\n event.reply = Reply.parse_obj(await\n bot.get_msg(message_id=msg_seg.data[\"id\"]))\n # ensure string comparation\n if str(event.reply.sender.user_id) == str(event.self_id):\n event.to_me = True\n del event.message[index]\n if len(event.message) > index and event.message[index].type == \"at\":\n del event.message[index]\n if len(event.message) > index and event.message[index].type == \"text\":\n event.message[index].data[\"text\"] = event.message[index].data[\n \"text\"].lstrip()\n if not event.message[index].data[\"text\"]:\n del event.message[index]\n if not event.message:\n event.message.append(MessageSegment.text(\"\"))\n\n\ndef _check_at_me(bot: \"Bot\", event: \"Event\"):\n \"\"\"\n :说明:\n\n 检查消息开头或结尾是否存在 @机器人,去除并赋值 ``event.to_me``\n\n :参数:\n\n * ``bot: Bot``: Bot 对象\n * ``event: Event``: Event 对象\n \"\"\"\n if not isinstance(event, MessageEvent):\n return\n\n if event.message_type == \"private\":\n event.to_me = True\n else:\n at_me_seg = MessageSegment.at(event.self_id)\n\n # check the first segment\n if event.message[0] == at_me_seg:\n event.to_me = True\n del event.message[0]\n if event.message and event.message[0].type == \"text\":\n event.message[0].data[\"text\"] = event.message[0].data[\n \"text\"].lstrip()\n if not event.message[0].data[\"text\"]:\n del event.message[0]\n if event.message and event.message[0] == at_me_seg:\n del event.message[0]\n if event.message and event.message[0].type == \"text\":\n event.message[0].data[\"text\"] = event.message[0].data[\n \"text\"].lstrip()\n if not event.message[0].data[\"text\"]:\n del event.message[0]\n\n if not event.to_me:\n # check the last segment\n i = -1\n last_msg_seg = event.message[i]\n if last_msg_seg.type == \"text\" and \\\n not last_msg_seg.data[\"text\"].strip() and \\\n len(event.message) >= 2:\n i -= 1\n last_msg_seg = event.message[i]\n\n if last_msg_seg == at_me_seg:\n event.to_me = True\n del event.message[i:]\n\n if not event.message:\n event.message.append(MessageSegment.text(\"\"))\n\n\ndef _check_nickname(bot: \"Bot\", event: \"Event\"):\n \"\"\"\n :说明:\n\n 检查消息开头是否存在,去除并赋值 ``event.to_me``\n\n :参数:\n\n * ``bot: Bot``: Bot 对象\n * ``event: Event``: Event 对象\n 
\"\"\"\n if not isinstance(event, MessageEvent):\n return\n\n first_msg_seg = event.message[0]\n if first_msg_seg.type != \"text\":\n return\n\n first_text = first_msg_seg.data[\"text\"]\n\n nicknames = set(filter(lambda n: n, bot.config.nickname))\n if nicknames:\n # check if the user is calling me with my nickname\n nickname_regex = \"|\".join(nicknames)\n m = re.search(rf\"^({nickname_regex})([\\s,,]*|$)\", first_text,\n re.IGNORECASE)\n if m:\n nickname = m.group(1)\n log(\"DEBUG\", f\"User is calling me {nickname}\")\n event.to_me = True\n first_msg_seg.data[\"text\"] = first_text[m.end():]\n\n\ndef _handle_api_result(result: Optional[Dict[str, Any]]) -> Any:\n \"\"\"\n :说明:\n\n 处理 API 请求返回值。\n\n :参数:\n\n * ``result: Optional[Dict[str, Any]]``: API 返回数据\n\n :返回:\n\n - ``Any``: API 调用返回数据\n\n :异常:\n\n - ``ActionFailed``: API 调用失败\n \"\"\"\n if isinstance(result, dict):\n if result.get(\"status\") == \"failed\":\n raise ActionFailed(**result)\n return result.get(\"data\")\n\n\nclass ResultStore:\n _seq = 1\n _futures: Dict[int, asyncio.Future] = {}\n\n @classmethod\n def get_seq(cls) -> int:\n s = cls._seq\n cls._seq = (cls._seq + 1) % sys.maxsize\n return s\n\n @classmethod\n def add_result(cls, result: Dict[str, Any]):\n if isinstance(result.get(\"echo\"), dict) and \\\n isinstance(result[\"echo\"].get(\"seq\"), int):\n future = cls._futures.get(result[\"echo\"][\"seq\"])\n if future:\n future.set_result(result)\n\n @classmethod\n async def fetch(cls, seq: int, timeout: Optional[float]) -> Dict[str, Any]:\n future = asyncio.get_event_loop().create_future()\n cls._futures[seq] = future\n try:\n return await asyncio.wait_for(future, timeout)\n except asyncio.TimeoutError:\n raise NetworkError(\"WebSocket API call timeout\") from None\n finally:\n del cls._futures[seq]\n\n\nclass Bot(BaseBot):\n \"\"\"\n CQHTTP 协议 Bot 适配。继承属性参考 `BaseBot <./#class-basebot>`_ 。\n \"\"\"\n\n def __init__(self,\n driver: \"Driver\",\n connection_type: str,\n config: Config,\n self_id: str,\n *,\n websocket: Optional[\"WebSocket\"] = None):\n\n super().__init__(driver,\n connection_type,\n config,\n self_id,\n websocket=websocket)\n\n @property\n @overrides(BaseBot)\n def type(self) -> str:\n \"\"\"\n - 返回: ``\"cqhttp\"``\n \"\"\"\n return \"cqhttp\"\n\n @classmethod\n @overrides(BaseBot)\n async def check_permission(cls, driver: \"Driver\", connection_type: str,\n headers: dict, body: Optional[dict]) -> str:\n \"\"\"\n :说明:\n\n CQHTTP (OneBot) 协议鉴权。参考 `鉴权 <https://github.com/howmanybots/onebot/blob/master/v11/specs/communication/authorization.md>`_\n \"\"\"\n x_self_id = headers.get(\"x-self-id\")\n x_signature = headers.get(\"x-signature\")\n token = get_auth_bearer(headers.get(\"authorization\"))\n\n # 检查连接方式\n if connection_type not in [\"http\", \"websocket\"]:\n log(\"WARNING\", \"Unsupported connection type\")\n raise RequestDenied(405, \"Unsupported connection type\")\n\n # 检查self_id\n if not x_self_id:\n log(\"WARNING\", \"Missing X-Self-ID Header\")\n raise RequestDenied(400, \"Missing X-Self-ID Header\")\n\n # 检查签名\n secret = driver.config.secret\n if secret and connection_type == \"http\":\n if not x_signature:\n log(\"WARNING\", \"Missing Signature Header\")\n raise RequestDenied(401, \"Missing Signature\")\n sig = hmac.new(secret.encode(\"utf-8\"),\n json.dumps(body).encode(), \"sha1\").hexdigest()\n if x_signature != \"sha1=\" + sig:\n log(\"WARNING\", \"Signature Header is invalid\")\n raise RequestDenied(403, \"Signature is invalid\")\n\n access_token = driver.config.access_token\n if 
access_token and access_token != token:\n log(\n \"WARNING\", \"Authorization Header is invalid\"\n if token else \"Missing Authorization Header\")\n raise RequestDenied(\n 403, \"Authorization Header is invalid\"\n if token else \"Missing Authorization Header\")\n return str(x_self_id)\n\n @overrides(BaseBot)\n async def handle_message(self, message: dict):\n \"\"\"\n :说明:\n\n 调用 `_check_reply <#async-check-reply-bot-event>`_, `_check_at_me <#check-at-me-bot-event>`_, `_check_nickname <#check-nickname-bot-event>`_ 处理事件并转换为 `Event <#class-event>`_\n \"\"\"\n if not message:\n return\n\n if \"post_type\" not in message:\n ResultStore.add_result(message)\n return\n\n try:\n post_type = message['post_type']\n detail_type = message.get(f\"{post_type}_type\")\n detail_type = f\".{detail_type}\" if detail_type else \"\"\n sub_type = message.get(\"sub_type\")\n sub_type = f\".{sub_type}\" if sub_type else \"\"\n models = get_event_model(post_type + detail_type + sub_type)\n for model in models:\n try:\n event = model.parse_obj(message)\n break\n except Exception as e:\n log(\"DEBUG\", \"Event Parser Error\", e)\n else:\n event = Event.parse_obj(message)\n\n # Check whether user is calling me\n await _check_reply(self, event)\n _check_at_me(self, event)\n _check_nickname(self, event)\n\n await handle_event(self, event)\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n f\"<r><bg #f8bbd0>Failed to handle event. Raw: {message}</bg #f8bbd0></r>\"\n )\n\n @overrides(BaseBot)\n async def call_api(self, api: str, **data) -> Any:\n \"\"\"\n :说明:\n\n 调用 CQHTTP 协议 API\n\n :参数:\n\n * ``api: str``: API 名称\n * ``**data: Any``: API 参数\n\n :返回:\n\n - ``Any``: API 调用返回数据\n\n :异常:\n\n - ``NetworkError``: 网络错误\n - ``ActionFailed``: API 调用失败\n \"\"\"\n if \"self_id\" in data:\n self_id = data.pop(\"self_id\")\n if self_id:\n bot = self.driver.bots[str(self_id)]\n return await bot.call_api(api, **data)\n\n log(\"DEBUG\", f\"Calling API <y>{api}</y>\")\n if self.connection_type == \"websocket\":\n seq = ResultStore.get_seq()\n await self.websocket.send({\n \"action\": api,\n \"params\": data,\n \"echo\": {\n \"seq\": seq\n }\n })\n return _handle_api_result(await ResultStore.fetch(\n seq, self.config.api_timeout))\n\n elif self.connection_type == \"http\":\n api_root = self.config.api_root.get(self.self_id)\n if not api_root:\n raise ApiNotAvailable\n elif not api_root.endswith(\"/\"):\n api_root += \"/\"\n\n headers = {}\n if self.config.access_token is not None:\n headers[\"Authorization\"] = \"Bearer \" + self.config.access_token\n\n try:\n async with httpx.AsyncClient(headers=headers) as client:\n response = await client.post(\n api_root + api,\n json=data,\n timeout=self.config.api_timeout)\n\n if 200 <= response.status_code < 300:\n result = response.json()\n return _handle_api_result(result)\n raise NetworkError(f\"HTTP request received unexpected \"\n f\"status code: {response.status_code}\")\n except httpx.InvalidURL:\n raise NetworkError(\"API root url invalid\")\n except httpx.HTTPError:\n raise NetworkError(\"HTTP request failed\")\n\n @overrides(BaseBot)\n async def send(self,\n event: Event,\n message: Union[str, Message, MessageSegment],\n at_sender: bool = False,\n **kwargs) -> Any:\n \"\"\"\n :说明:\n\n 根据 ``event`` 向触发事件的主体发送消息。\n\n :参数:\n\n * ``event: Event``: Event 对象\n * ``message: Union[str, Message, MessageSegment]``: 要发送的消息\n * ``at_sender: bool``: 是否 @ 事件主体\n * ``**kwargs``: 覆盖默认参数\n\n :返回:\n\n - ``Any``: API 调用返回数据\n\n :异常:\n\n - ``ValueError``: 缺少 ``user_id``, 
``group_id``\n - ``NetworkError``: 网络错误\n - ``ActionFailed``: API 调用失败\n \"\"\"\n message = escape(message) if isinstance(message, str) else message\n msg = message if isinstance(message, Message) else Message(message)\n\n at_sender = at_sender and getattr(event, \"user_id\", None)\n\n params = {}\n if getattr(event, \"user_id\", None):\n params[\"user_id\"] = getattr(event, \"user_id\")\n if getattr(event, \"group_id\", None):\n params[\"group_id\"] = getattr(event, \"group_id\")\n params.update(kwargs)\n\n if \"message_type\" not in params:\n if params.get(\"group_id\", None):\n params[\"message_type\"] = \"group\"\n elif params.get(\"user_id\", None):\n params[\"message_type\"] = \"private\"\n else:\n raise ValueError(\"Cannot guess message type to reply!\")\n\n if at_sender and params[\"message_type\"] != \"private\":\n params[\"message\"] = MessageSegment.at(params[\"user_id\"]) + \\\n MessageSegment.text(\" \") + msg\n else:\n params[\"message\"] = msg\n return await self.send_msg(**params)\n", "path": "nonebot/adapters/cqhttp/bot.py" } ]
[ { "content": "import re\nimport sys\nimport hmac\nimport json\nimport asyncio\nfrom typing import Any, Dict, Union, Optional, TYPE_CHECKING\n\nimport httpx\nfrom nonebot.log import logger\nfrom nonebot.config import Config\nfrom nonebot.typing import overrides\nfrom nonebot.message import handle_event\nfrom nonebot.adapters import Bot as BaseBot\nfrom nonebot.exception import RequestDenied\n\nfrom .utils import log, escape\nfrom .message import Message, MessageSegment\nfrom .event import Reply, Event, MessageEvent, get_event_model\nfrom .exception import NetworkError, ApiNotAvailable, ActionFailed\n\nif TYPE_CHECKING:\n from nonebot.drivers import Driver, WebSocket\n\n\ndef get_auth_bearer(access_token: Optional[str] = None) -> Optional[str]:\n if not access_token:\n return None\n scheme, _, param = access_token.partition(\" \")\n if scheme.lower() not in [\"bearer\", \"token\"]:\n raise RequestDenied(401, \"Not authenticated\")\n return param\n\n\nasync def _check_reply(bot: \"Bot\", event: \"Event\"):\n \"\"\"\n :说明:\n\n 检查消息中存在的回复,去除并赋值 ``event.reply``, ``event.to_me``\n\n :参数:\n\n * ``bot: Bot``: Bot 对象\n * ``event: Event``: Event 对象\n \"\"\"\n if not isinstance(event, MessageEvent):\n return\n\n try:\n index = list(map(lambda x: x.type == \"reply\",\n event.message)).index(True)\n except ValueError:\n return\n msg_seg = event.message[index]\n event.reply = Reply.parse_obj(await\n bot.get_msg(message_id=msg_seg.data[\"id\"]))\n # ensure string comparation\n if str(event.reply.sender.user_id) == str(event.self_id):\n event.to_me = True\n del event.message[index]\n if len(event.message) > index and event.message[index].type == \"at\":\n del event.message[index]\n if len(event.message) > index and event.message[index].type == \"text\":\n event.message[index].data[\"text\"] = event.message[index].data[\n \"text\"].lstrip()\n if not event.message[index].data[\"text\"]:\n del event.message[index]\n if not event.message:\n event.message.append(MessageSegment.text(\"\"))\n\n\ndef _check_at_me(bot: \"Bot\", event: \"Event\"):\n \"\"\"\n :说明:\n\n 检查消息开头或结尾是否存在 @机器人,去除并赋值 ``event.to_me``\n\n :参数:\n\n * ``bot: Bot``: Bot 对象\n * ``event: Event``: Event 对象\n \"\"\"\n if not isinstance(event, MessageEvent):\n return\n\n # ensure message not empty\n if not event.message:\n event.message.append(MessageSegment.text(\"\"))\n\n if event.message_type == \"private\":\n event.to_me = True\n else:\n at_me_seg = MessageSegment.at(event.self_id)\n\n # check the first segment\n if event.message[0] == at_me_seg:\n event.to_me = True\n del event.message[0]\n if event.message and event.message[0].type == \"text\":\n event.message[0].data[\"text\"] = event.message[0].data[\n \"text\"].lstrip()\n if not event.message[0].data[\"text\"]:\n del event.message[0]\n if event.message and event.message[0] == at_me_seg:\n del event.message[0]\n if event.message and event.message[0].type == \"text\":\n event.message[0].data[\"text\"] = event.message[0].data[\n \"text\"].lstrip()\n if not event.message[0].data[\"text\"]:\n del event.message[0]\n\n if not event.to_me:\n # check the last segment\n i = -1\n last_msg_seg = event.message[i]\n if last_msg_seg.type == \"text\" and \\\n not last_msg_seg.data[\"text\"].strip() and \\\n len(event.message) >= 2:\n i -= 1\n last_msg_seg = event.message[i]\n\n if last_msg_seg == at_me_seg:\n event.to_me = True\n del event.message[i:]\n\n if not event.message:\n event.message.append(MessageSegment.text(\"\"))\n\n\ndef _check_nickname(bot: \"Bot\", event: \"Event\"):\n \"\"\"\n :说明:\n\n 
检查消息开头是否存在,去除并赋值 ``event.to_me``\n\n :参数:\n\n * ``bot: Bot``: Bot 对象\n * ``event: Event``: Event 对象\n \"\"\"\n if not isinstance(event, MessageEvent):\n return\n\n first_msg_seg = event.message[0]\n if first_msg_seg.type != \"text\":\n return\n\n first_text = first_msg_seg.data[\"text\"]\n\n nicknames = set(filter(lambda n: n, bot.config.nickname))\n if nicknames:\n # check if the user is calling me with my nickname\n nickname_regex = \"|\".join(nicknames)\n m = re.search(rf\"^({nickname_regex})([\\s,,]*|$)\", first_text,\n re.IGNORECASE)\n if m:\n nickname = m.group(1)\n log(\"DEBUG\", f\"User is calling me {nickname}\")\n event.to_me = True\n first_msg_seg.data[\"text\"] = first_text[m.end():]\n\n\ndef _handle_api_result(result: Optional[Dict[str, Any]]) -> Any:\n \"\"\"\n :说明:\n\n 处理 API 请求返回值。\n\n :参数:\n\n * ``result: Optional[Dict[str, Any]]``: API 返回数据\n\n :返回:\n\n - ``Any``: API 调用返回数据\n\n :异常:\n\n - ``ActionFailed``: API 调用失败\n \"\"\"\n if isinstance(result, dict):\n if result.get(\"status\") == \"failed\":\n raise ActionFailed(**result)\n return result.get(\"data\")\n\n\nclass ResultStore:\n _seq = 1\n _futures: Dict[int, asyncio.Future] = {}\n\n @classmethod\n def get_seq(cls) -> int:\n s = cls._seq\n cls._seq = (cls._seq + 1) % sys.maxsize\n return s\n\n @classmethod\n def add_result(cls, result: Dict[str, Any]):\n if isinstance(result.get(\"echo\"), dict) and \\\n isinstance(result[\"echo\"].get(\"seq\"), int):\n future = cls._futures.get(result[\"echo\"][\"seq\"])\n if future:\n future.set_result(result)\n\n @classmethod\n async def fetch(cls, seq: int, timeout: Optional[float]) -> Dict[str, Any]:\n future = asyncio.get_event_loop().create_future()\n cls._futures[seq] = future\n try:\n return await asyncio.wait_for(future, timeout)\n except asyncio.TimeoutError:\n raise NetworkError(\"WebSocket API call timeout\") from None\n finally:\n del cls._futures[seq]\n\n\nclass Bot(BaseBot):\n \"\"\"\n CQHTTP 协议 Bot 适配。继承属性参考 `BaseBot <./#class-basebot>`_ 。\n \"\"\"\n\n def __init__(self,\n driver: \"Driver\",\n connection_type: str,\n config: Config,\n self_id: str,\n *,\n websocket: Optional[\"WebSocket\"] = None):\n\n super().__init__(driver,\n connection_type,\n config,\n self_id,\n websocket=websocket)\n\n @property\n @overrides(BaseBot)\n def type(self) -> str:\n \"\"\"\n - 返回: ``\"cqhttp\"``\n \"\"\"\n return \"cqhttp\"\n\n @classmethod\n @overrides(BaseBot)\n async def check_permission(cls, driver: \"Driver\", connection_type: str,\n headers: dict, body: Optional[dict]) -> str:\n \"\"\"\n :说明:\n\n CQHTTP (OneBot) 协议鉴权。参考 `鉴权 <https://github.com/howmanybots/onebot/blob/master/v11/specs/communication/authorization.md>`_\n \"\"\"\n x_self_id = headers.get(\"x-self-id\")\n x_signature = headers.get(\"x-signature\")\n token = get_auth_bearer(headers.get(\"authorization\"))\n\n # 检查连接方式\n if connection_type not in [\"http\", \"websocket\"]:\n log(\"WARNING\", \"Unsupported connection type\")\n raise RequestDenied(405, \"Unsupported connection type\")\n\n # 检查self_id\n if not x_self_id:\n log(\"WARNING\", \"Missing X-Self-ID Header\")\n raise RequestDenied(400, \"Missing X-Self-ID Header\")\n\n # 检查签名\n secret = driver.config.secret\n if secret and connection_type == \"http\":\n if not x_signature:\n log(\"WARNING\", \"Missing Signature Header\")\n raise RequestDenied(401, \"Missing Signature\")\n sig = hmac.new(secret.encode(\"utf-8\"),\n json.dumps(body).encode(), \"sha1\").hexdigest()\n if x_signature != \"sha1=\" + sig:\n log(\"WARNING\", \"Signature Header is invalid\")\n raise 
RequestDenied(403, \"Signature is invalid\")\n\n access_token = driver.config.access_token\n if access_token and access_token != token:\n log(\n \"WARNING\", \"Authorization Header is invalid\"\n if token else \"Missing Authorization Header\")\n raise RequestDenied(\n 403, \"Authorization Header is invalid\"\n if token else \"Missing Authorization Header\")\n return str(x_self_id)\n\n @overrides(BaseBot)\n async def handle_message(self, message: dict):\n \"\"\"\n :说明:\n\n 调用 `_check_reply <#async-check-reply-bot-event>`_, `_check_at_me <#check-at-me-bot-event>`_, `_check_nickname <#check-nickname-bot-event>`_ 处理事件并转换为 `Event <#class-event>`_\n \"\"\"\n if not message:\n return\n\n if \"post_type\" not in message:\n ResultStore.add_result(message)\n return\n\n try:\n post_type = message['post_type']\n detail_type = message.get(f\"{post_type}_type\")\n detail_type = f\".{detail_type}\" if detail_type else \"\"\n sub_type = message.get(\"sub_type\")\n sub_type = f\".{sub_type}\" if sub_type else \"\"\n models = get_event_model(post_type + detail_type + sub_type)\n for model in models:\n try:\n event = model.parse_obj(message)\n break\n except Exception as e:\n log(\"DEBUG\", \"Event Parser Error\", e)\n else:\n event = Event.parse_obj(message)\n\n # Check whether user is calling me\n await _check_reply(self, event)\n _check_at_me(self, event)\n _check_nickname(self, event)\n\n await handle_event(self, event)\n except Exception as e:\n logger.opt(colors=True, exception=e).error(\n f\"<r><bg #f8bbd0>Failed to handle event. Raw: {message}</bg #f8bbd0></r>\"\n )\n\n @overrides(BaseBot)\n async def call_api(self, api: str, **data) -> Any:\n \"\"\"\n :说明:\n\n 调用 CQHTTP 协议 API\n\n :参数:\n\n * ``api: str``: API 名称\n * ``**data: Any``: API 参数\n\n :返回:\n\n - ``Any``: API 调用返回数据\n\n :异常:\n\n - ``NetworkError``: 网络错误\n - ``ActionFailed``: API 调用失败\n \"\"\"\n if \"self_id\" in data:\n self_id = data.pop(\"self_id\")\n if self_id:\n bot = self.driver.bots[str(self_id)]\n return await bot.call_api(api, **data)\n\n log(\"DEBUG\", f\"Calling API <y>{api}</y>\")\n if self.connection_type == \"websocket\":\n seq = ResultStore.get_seq()\n await self.websocket.send({\n \"action\": api,\n \"params\": data,\n \"echo\": {\n \"seq\": seq\n }\n })\n return _handle_api_result(await ResultStore.fetch(\n seq, self.config.api_timeout))\n\n elif self.connection_type == \"http\":\n api_root = self.config.api_root.get(self.self_id)\n if not api_root:\n raise ApiNotAvailable\n elif not api_root.endswith(\"/\"):\n api_root += \"/\"\n\n headers = {}\n if self.config.access_token is not None:\n headers[\"Authorization\"] = \"Bearer \" + self.config.access_token\n\n try:\n async with httpx.AsyncClient(headers=headers) as client:\n response = await client.post(\n api_root + api,\n json=data,\n timeout=self.config.api_timeout)\n\n if 200 <= response.status_code < 300:\n result = response.json()\n return _handle_api_result(result)\n raise NetworkError(f\"HTTP request received unexpected \"\n f\"status code: {response.status_code}\")\n except httpx.InvalidURL:\n raise NetworkError(\"API root url invalid\")\n except httpx.HTTPError:\n raise NetworkError(\"HTTP request failed\")\n\n @overrides(BaseBot)\n async def send(self,\n event: Event,\n message: Union[str, Message, MessageSegment],\n at_sender: bool = False,\n **kwargs) -> Any:\n \"\"\"\n :说明:\n\n 根据 ``event`` 向触发事件的主体发送消息。\n\n :参数:\n\n * ``event: Event``: Event 对象\n * ``message: Union[str, Message, MessageSegment]``: 要发送的消息\n * ``at_sender: bool``: 是否 @ 事件主体\n * ``**kwargs``: 
覆盖默认参数\n\n :返回:\n\n - ``Any``: API 调用返回数据\n\n :异常:\n\n - ``ValueError``: 缺少 ``user_id``, ``group_id``\n - ``NetworkError``: 网络错误\n - ``ActionFailed``: API 调用失败\n \"\"\"\n message = escape(message) if isinstance(message, str) else message\n msg = message if isinstance(message, Message) else Message(message)\n\n at_sender = at_sender and getattr(event, \"user_id\", None)\n\n params = {}\n if getattr(event, \"user_id\", None):\n params[\"user_id\"] = getattr(event, \"user_id\")\n if getattr(event, \"group_id\", None):\n params[\"group_id\"] = getattr(event, \"group_id\")\n params.update(kwargs)\n\n if \"message_type\" not in params:\n if params.get(\"group_id\", None):\n params[\"message_type\"] = \"group\"\n elif params.get(\"user_id\", None):\n params[\"message_type\"] = \"private\"\n else:\n raise ValueError(\"Cannot guess message type to reply!\")\n\n if at_sender and params[\"message_type\"] != \"private\":\n params[\"message\"] = MessageSegment.at(params[\"user_id\"]) + \\\n MessageSegment.text(\" \") + msg\n else:\n params[\"message\"] = msg\n return await self.send_msg(**params)\n", "path": "nonebot/adapters/cqhttp/bot.py" } ]
diff --git a/nonebot/adapters/cqhttp/bot.py b/nonebot/adapters/cqhttp/bot.py index 1aff6d5d2e97..d62ff8296b26 100644 --- a/nonebot/adapters/cqhttp/bot.py +++ b/nonebot/adapters/cqhttp/bot.py @@ -82,6 +82,10 @@ def _check_at_me(bot: "Bot", event: "Event"): if not isinstance(event, MessageEvent): return + # ensure message not empty + if not event.message: + event.message.append(MessageSegment.text("")) + if event.message_type == "private": event.to_me = True else:
locustio__locust-1359
Add --config parameter
### Is your feature request related to a problem? Please describe.
I would like to be able to have multiple config files stored in version control for different purposes (eg one that uses step-mode and one that doesn't).
### Describe the solution you'd like
`ConfigArgParse` makes it easy to add this capability. I think it could be done with a single line in `parse_options`: `parser.add_argument('--config', is_config_file_arg=True, help='config file path')`
### Describe alternatives you've considered
Current workaround is to keep multiple sets of configurations in `locust.conf`, with the active one uncommented.
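For reference, a minimal standalone sketch of the proposed `is_config_file_arg` wiring in ConfigArgParse; the throwaway config file and the option subset here are illustrative (the real flag lands in `get_empty_argument_parser`, as the diff below shows), and command-line values override values from the file:

```python
import os
import tempfile

import configargparse

# Throwaway config file, mirroring the step_mode.conf example from the docs change below.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("host = localhost\nusers = 100\nhatch-rate = 5\n")
    conf_path = f.name

parser = configargparse.ArgumentParser()
parser.add_argument("--config", is_config_file_arg=True, help="Config file path")
parser.add_argument("--host")
parser.add_argument("-u", "--users", type=int, dest="num_users")
parser.add_argument("-r", "--hatch-rate", type=float)

options = parser.parse_args(["--config", conf_path, "--host", "from_args"])
print(options.host, options.num_users, options.hatch_rate)  # from_args 100 5.0
os.remove(conf_path)
```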
[ { "content": "import argparse\nimport os\nimport sys\nimport textwrap\n\nimport configargparse\n\nimport locust\n\nversion = locust.__version__\n\n\nDEFAULT_CONFIG_FILES = ['~/.locust.conf','locust.conf']\n\n\ndef _is_package(path):\n \"\"\"\n Is the given path a Python package?\n \"\"\"\n return (\n os.path.isdir(path)\n and os.path.exists(os.path.join(path, '__init__.py'))\n )\n\ndef find_locustfile(locustfile):\n \"\"\"\n Attempt to locate a locustfile, either explicitly or by searching parent dirs.\n \"\"\"\n # Obtain env value\n names = [locustfile]\n # Create .py version if necessary\n if not names[0].endswith('.py'):\n names.append(names[0] + '.py')\n # Does the name contain path elements?\n if os.path.dirname(names[0]):\n # If so, expand home-directory markers and test for existence\n for name in names:\n expanded = os.path.expanduser(name)\n if os.path.exists(expanded):\n if name.endswith('.py') or _is_package(expanded):\n return os.path.abspath(expanded)\n else:\n # Otherwise, start in cwd and work downwards towards filesystem root\n path = os.path.abspath('.')\n while True:\n for name in names:\n joined = os.path.join(path, name)\n if os.path.exists(joined):\n if name.endswith('.py') or _is_package(joined):\n return os.path.abspath(joined)\n parent_path = os.path.dirname(path)\n if parent_path == path:\n # we've reached the root path which has been checked this iteration\n break\n path = parent_path\n # Implicit 'return None' if nothing was found\n\n\ndef get_empty_argument_parser(add_help=True, default_config_files=DEFAULT_CONFIG_FILES):\n parser = configargparse.ArgumentParser(\n default_config_files=default_config_files, \n add_env_var_help=False,\n add_config_file_help=False,\n add_help=add_help,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n usage=argparse.SUPPRESS,\n description=textwrap.dedent(\"\"\"\n Usage: locust [OPTIONS] [UserClass ...]\n \n \"\"\"),\n #epilog=\"\",\n )\n parser.add_argument(\n '-f', '--locustfile',\n default='locustfile',\n help=\"Python module file to import, e.g. '../other.py'. Default: locustfile\",\n env_var=\"LOCUST_LOCUSTFILE\",\n )\n return parser\n\n\ndef parse_locustfile_option(args=None):\n \"\"\"\n Construct a command line parser that is only used to parse the -f argument so that we can \n import the test scripts in case any of them adds additional command line arguments to the \n parser\n \"\"\" \n parser = get_empty_argument_parser(add_help=False)\n parser.add_argument(\n '-h', '--help',\n action='store_true',\n default=False,\n )\n parser.add_argument(\n '--version', '-V',\n action='store_true',\n default=False,\n )\n \n options, _ = parser.parse_known_args(args=args)\n \n locustfile = find_locustfile(options.locustfile)\n \n if not locustfile:\n if options.help or options.version:\n # if --help or --version is specified we'll call parse_options which will print the help/version message\n parse_options(args=args)\n sys.stderr.write(\"Could not find any locustfile! Ensure file ends in '.py' and see --help for available options.\\n\")\n sys.exit(1)\n \n if locustfile == \"locust.py\":\n sys.stderr.write(\"The locustfile must not be named `locust.py`. 
Please rename the file and try again.\\n\")\n sys.exit(1)\n \n return locustfile\n\n\ndef setup_parser_arguments(parser):\n \"\"\"\n Setup command-line options\n \n Takes a configargparse.ArgumentParser as argument and calls it's add_argument \n for each of the supported arguments\n \"\"\"\n parser._optionals.title = \"Common options\"\n parser.add_argument(\n '-H', '--host',\n help=\"Host to load test in the following format: http://10.21.32.33\",\n env_var=\"LOCUST_HOST\",\n )\n parser.add_argument(\n '-u', '--users',\n type=int,\n dest='num_users',\n help=\"Number of concurrent Locust users. Only used together with --headless\",\n env_var=\"LOCUST_USERS\",\n )\n parser.add_argument(\n '-r', '--hatch-rate',\n type=float,\n help=\"The rate per second in which users are spawned. Only used together with --headless\",\n env_var=\"LOCUST_HATCH_RATE\",\n )\n parser.add_argument(\n '-t', '--run-time',\n help=\"Stop after the specified amount of time, e.g. (300s, 20m, 3h, 1h30m, etc.). Only used together with --headless\",\n env_var=\"LOCUST_RUN_TIME\",\n )\n parser.add_argument(\n '-l', '--list',\n action='store_true',\n dest='list_commands',\n help=\"Show list of possible User classes and exit\",\n )\n \n web_ui_group = parser.add_argument_group(\"Web UI options\")\n web_ui_group.add_argument(\n '--web-host',\n default=\"\",\n help=\"Host to bind the web interface to. Defaults to '*' (all interfaces)\",\n env_var=\"LOCUST_WEB_HOST\",\n )\n web_ui_group.add_argument(\n '--web-port', '-P',\n type=int,\n default=8089,\n help=\"Port on which to run web host\",\n env_var=\"LOCUST_WEB_PORT\",\n )\n web_ui_group.add_argument(\n '--headless',\n action='store_true',\n help=\"Disable the web interface, and instead start the load test immediately. Requires -u and -t to be specified.\",\n env_var=\"LOCUST_HEADLESS\",\n )\n web_ui_group.add_argument(\n '--web-auth',\n type=str,\n dest='web_auth',\n default=None,\n help='Turn on Basic Auth for the web interface. Should be supplied in the following format: username:password',\n env_var=\"LOCUST_WEB_AUTH\",\n )\n web_ui_group.add_argument(\n '--tls-cert',\n default=\"\",\n help=\"Optional path to TLS certificate to use to serve over HTTPS\",\n env_var=\"LOCUST_TLS_CERT\",\n )\n web_ui_group.add_argument(\n '--tls-key',\n default=\"\",\n help=\"Optional path to TLS private key to use to serve over HTTPS\",\n env_var=\"LOCUST_TLS_KEY\",\n )\n \n master_group = parser.add_argument_group(\n \"Master options\", \n \"Options for running a Locust Master node when running Locust distributed. A Master node need Worker nodes that connect to it before it can run load tests.\",\n )\n # if locust should be run in distributed mode as master\n master_group.add_argument(\n '--master',\n action='store_true',\n help=\"Set locust to run in distributed mode with this process as master\",\n env_var='LOCUST_MODE_MASTER',\n )\n master_group.add_argument(\n '--master-bind-host',\n default=\"*\",\n help=\"Interfaces (hostname, ip) that locust master should bind to. Only used when running with --master. Defaults to * (all available interfaces).\",\n env_var=\"LOCUST_MASTER_BIND_HOST\",\n )\n master_group.add_argument(\n '--master-bind-port',\n type=int,\n default=5557,\n help=\"Port that locust master should bind to. Only used when running with --master. 
Defaults to 5557.\",\n env_var=\"LOCUST_MASTER_BIND_PORT\",\n )\n master_group.add_argument(\n '--expect-workers',\n type=int,\n default=1,\n help=\"How many workers master should expect to connect before starting the test (only when --headless used).\",\n env_var=\"LOCUST_EXPECT_WORKERS\",\n )\n master_group.add_argument(\n '--expect-slaves',\n action='store_true',\n help=configargparse.SUPPRESS,\n )\n \n worker_group = parser.add_argument_group(\n \"Worker options\", \n textwrap.dedent(\"\"\"\n Options for running a Locust Worker node when running Locust distributed. \n Only the LOCUSTFILE (-f option) need to be specified when starting a Worker, since other options such as -u, -r, -t are specified on the Master node.\n \"\"\"),\n )\n # if locust should be run in distributed mode as worker\n worker_group.add_argument(\n '--worker',\n action='store_true',\n help=\"Set locust to run in distributed mode with this process as worker\",\n env_var=\"LOCUST_MODE_WORKER\",\n )\n worker_group.add_argument(\n '--slave',\n action='store_true',\n help=configargparse.SUPPRESS,\n )\n # master host options\n worker_group.add_argument(\n '--master-host',\n default=\"127.0.0.1\",\n help=\"Host or IP address of locust master for distributed load testing. Only used when running with --worker. Defaults to 127.0.0.1.\",\n env_var=\"LOCUST_MASTER_NODE_HOST\",\n )\n worker_group.add_argument(\n '--master-port',\n type=int,\n default=5557,\n help=\"The port to connect to that is used by the locust master for distributed load testing. Only used when running with --worker. Defaults to 5557.\",\n env_var=\"LOCUST_MASTER_NODE_PORT\",\n )\n \n stats_group = parser.add_argument_group(\"Request statistics options\")\n stats_group.add_argument(\n '--csv',\n dest=\"csv_prefix\",\n help=\"Store current request stats to files in CSV format. Setting this option will generate three files: [CSV_PREFIX]_stats.csv, [CSV_PREFIX]_stats_history.csv and [CSV_PREFIX]_failures.csv\",\n env_var=\"LOCUST_CSV\",\n )\n stats_group.add_argument(\n '--csv-full-history',\n action='store_true',\n default=False,\n dest='stats_history_enabled',\n help=\"Store each stats entry in CSV format to _stats_history.csv file\",\n env_var=\"LOCUST_CSV_FULL_HISTORY\",\n ) \n stats_group.add_argument(\n '--print-stats',\n action='store_true',\n help=\"Print stats in the console\",\n env_var=\"LOCUST_PRINT_STATS\",\n )\n stats_group.add_argument(\n '--only-summary',\n action='store_true',\n help='Only print the summary stats',\n env_var=\"LOCUST_ONLY_SUMMARY\",\n )\n stats_group.add_argument(\n '--reset-stats',\n action='store_true',\n help=\"Reset statistics once hatching has been completed. Should be set on both master and workers when running in distributed mode\",\n env_var=\"LOCUST_RESET_STATS\",\n )\n \n log_group = parser.add_argument_group(\"Logging options\")\n log_group.add_argument(\n '--skip-log-setup',\n action='store_true',\n dest='skip_log_setup',\n default=False,\n help=\"Disable Locust's logging setup. Instead, the configuration is provided by the Locust test or Python defaults.\",\n env_var=\"LOCUST_SKIP_LOG_SETUP\",\n )\n log_group.add_argument(\n '--loglevel', '-L',\n default='INFO',\n help=\"Choose between DEBUG/INFO/WARNING/ERROR/CRITICAL. Default is INFO.\",\n env_var=\"LOCUST_LOGLEVEL\",\n )\n log_group.add_argument(\n '--logfile',\n help=\"Path to log file. 
If not set, log will go to stdout/stderr\",\n env_var=\"LOCUST_LOGFILE\",\n )\n \n step_load_group = parser.add_argument_group(\"Step load options\")\n step_load_group.add_argument(\n '--step-load',\n action='store_true',\n help=\"Enable Step Load mode to monitor how performance metrics varies when user load increases. Requires --step-users and --step-time to be specified.\",\n env_var=\"LOCUST_STEP_LOAD\",\n )\n step_load_group.add_argument(\n '--step-users',\n type=int,\n help=\"User count to increase by step in Step Load mode. Only used together with --step-load\",\n env_var=\"LOCUST_STEP_USERS\",\n )\n step_load_group.add_argument(\n '--step-clients',\n action='store_true',\n help=configargparse.SUPPRESS\n )\n step_load_group.add_argument(\n '--step-time',\n help=\"Step duration in Step Load mode, e.g. (300s, 20m, 3h, 1h30m, etc.). Only used together with --step-load\",\n env_var=\"LOCUST_STEP_TIME\",\n )\n \n \n other_group = parser.add_argument_group(\"Other options\")\n other_group.add_argument(\n '--show-task-ratio',\n action='store_true',\n help=\"Print table of the User classes' task execution ratio\"\n )\n other_group.add_argument(\n '--show-task-ratio-json',\n action='store_true',\n help=\"Print json data of the User classes' task execution ratio\"\n )\n # optparse gives you --version but we have to do it ourselves to get -V too\n other_group.add_argument(\n '--version', '-V',\n action='version',\n help=\"Show program's version number and exit\",\n version='%(prog)s {}'.format(version),\n )\n other_group.add_argument(\n '--exit-code-on-error',\n type=int,\n default=1,\n help=\"Sets the process exit code to use when a test result contain any failure or error\",\n env_var=\"LOCUST_EXIT_CODE_ON_ERROR\",\n )\n other_group.add_argument(\n '-s', '--stop-timeout',\n action='store',\n type=int,\n dest='stop_timeout',\n default=None,\n help=\"Number of seconds to wait for a simulated user to complete any executing task before exiting. Default is to terminate immediately. This parameter only needs to be specified for the master process when running Locust distributed.\",\n env_var=\"LOCUST_STOP_TIMEOUT\",\n )\n \n user_classes_group = parser.add_argument_group(\"User classes\")\n user_classes_group.add_argument(\n 'user_classes',\n nargs='*',\n metavar='UserClass',\n help=\"Optionally specify which User classes that should be used (available User classes can be listed with -l or --list)\",\n )\n\ndef get_parser(default_config_files=DEFAULT_CONFIG_FILES):\n # get a parser that is only able to parse the -f argument\n parser = get_empty_argument_parser(add_help=True, default_config_files=default_config_files)\n # add all the other supported arguments\n setup_parser_arguments(parser)\n # fire event to provide a hook for locustscripts and plugins to add command line arguments\n locust.events.init_command_line_parser.fire(parser=parser)\n return parser\n\n\ndef parse_options(args=None):\n return get_parser().parse_args(args=args)", "path": "locust/argument_parser.py" } ]
[ { "content": "import argparse\nimport os\nimport sys\nimport textwrap\n\nimport configargparse\n\nimport locust\n\nversion = locust.__version__\n\n\nDEFAULT_CONFIG_FILES = ['~/.locust.conf','locust.conf']\n\n\ndef _is_package(path):\n \"\"\"\n Is the given path a Python package?\n \"\"\"\n return (\n os.path.isdir(path)\n and os.path.exists(os.path.join(path, '__init__.py'))\n )\n\ndef find_locustfile(locustfile):\n \"\"\"\n Attempt to locate a locustfile, either explicitly or by searching parent dirs.\n \"\"\"\n # Obtain env value\n names = [locustfile]\n # Create .py version if necessary\n if not names[0].endswith('.py'):\n names.append(names[0] + '.py')\n # Does the name contain path elements?\n if os.path.dirname(names[0]):\n # If so, expand home-directory markers and test for existence\n for name in names:\n expanded = os.path.expanduser(name)\n if os.path.exists(expanded):\n if name.endswith('.py') or _is_package(expanded):\n return os.path.abspath(expanded)\n else:\n # Otherwise, start in cwd and work downwards towards filesystem root\n path = os.path.abspath('.')\n while True:\n for name in names:\n joined = os.path.join(path, name)\n if os.path.exists(joined):\n if name.endswith('.py') or _is_package(joined):\n return os.path.abspath(joined)\n parent_path = os.path.dirname(path)\n if parent_path == path:\n # we've reached the root path which has been checked this iteration\n break\n path = parent_path\n # Implicit 'return None' if nothing was found\n\n\ndef get_empty_argument_parser(add_help=True, default_config_files=DEFAULT_CONFIG_FILES):\n parser = configargparse.ArgumentParser(\n default_config_files=default_config_files, \n add_env_var_help=False,\n add_config_file_help=False,\n add_help=add_help,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n usage=argparse.SUPPRESS,\n description=textwrap.dedent(\"\"\"\n Usage: locust [OPTIONS] [UserClass ...]\n \n \"\"\"),\n #epilog=\"\",\n )\n parser.add_argument(\n '-f', '--locustfile',\n default='locustfile',\n help=\"Python module file to import, e.g. '../other.py'. Default: locustfile\",\n env_var=\"LOCUST_LOCUSTFILE\",\n )\n parser.add_argument('--config', is_config_file_arg=True, help='Config file path')\n\n return parser\n\n\ndef parse_locustfile_option(args=None):\n \"\"\"\n Construct a command line parser that is only used to parse the -f argument so that we can \n import the test scripts in case any of them adds additional command line arguments to the \n parser\n \"\"\" \n parser = get_empty_argument_parser(add_help=False)\n parser.add_argument(\n '-h', '--help',\n action='store_true',\n default=False,\n )\n parser.add_argument(\n '--version', '-V',\n action='store_true',\n default=False,\n )\n \n options, _ = parser.parse_known_args(args=args)\n \n locustfile = find_locustfile(options.locustfile)\n \n if not locustfile:\n if options.help or options.version:\n # if --help or --version is specified we'll call parse_options which will print the help/version message\n parse_options(args=args)\n sys.stderr.write(\"Could not find any locustfile! Ensure file ends in '.py' and see --help for available options.\\n\")\n sys.exit(1)\n \n if locustfile == \"locust.py\":\n sys.stderr.write(\"The locustfile must not be named `locust.py`. 
Please rename the file and try again.\\n\")\n sys.exit(1)\n \n return locustfile\n\n\ndef setup_parser_arguments(parser):\n \"\"\"\n Setup command-line options\n \n Takes a configargparse.ArgumentParser as argument and calls it's add_argument \n for each of the supported arguments\n \"\"\"\n parser._optionals.title = \"Common options\"\n parser.add_argument(\n '-H', '--host',\n help=\"Host to load test in the following format: http://10.21.32.33\",\n env_var=\"LOCUST_HOST\",\n )\n parser.add_argument(\n '-u', '--users',\n type=int,\n dest='num_users',\n help=\"Number of concurrent Locust users. Only used together with --headless\",\n env_var=\"LOCUST_USERS\",\n )\n parser.add_argument(\n '-r', '--hatch-rate',\n type=float,\n help=\"The rate per second in which users are spawned. Only used together with --headless\",\n env_var=\"LOCUST_HATCH_RATE\",\n )\n parser.add_argument(\n '-t', '--run-time',\n help=\"Stop after the specified amount of time, e.g. (300s, 20m, 3h, 1h30m, etc.). Only used together with --headless\",\n env_var=\"LOCUST_RUN_TIME\",\n )\n parser.add_argument(\n '-l', '--list',\n action='store_true',\n dest='list_commands',\n help=\"Show list of possible User classes and exit\",\n )\n \n web_ui_group = parser.add_argument_group(\"Web UI options\")\n web_ui_group.add_argument(\n '--web-host',\n default=\"\",\n help=\"Host to bind the web interface to. Defaults to '*' (all interfaces)\",\n env_var=\"LOCUST_WEB_HOST\",\n )\n web_ui_group.add_argument(\n '--web-port', '-P',\n type=int,\n default=8089,\n help=\"Port on which to run web host\",\n env_var=\"LOCUST_WEB_PORT\",\n )\n web_ui_group.add_argument(\n '--headless',\n action='store_true',\n help=\"Disable the web interface, and instead start the load test immediately. Requires -u and -t to be specified.\",\n env_var=\"LOCUST_HEADLESS\",\n )\n web_ui_group.add_argument(\n '--web-auth',\n type=str,\n dest='web_auth',\n default=None,\n help='Turn on Basic Auth for the web interface. Should be supplied in the following format: username:password',\n env_var=\"LOCUST_WEB_AUTH\",\n )\n web_ui_group.add_argument(\n '--tls-cert',\n default=\"\",\n help=\"Optional path to TLS certificate to use to serve over HTTPS\",\n env_var=\"LOCUST_TLS_CERT\",\n )\n web_ui_group.add_argument(\n '--tls-key',\n default=\"\",\n help=\"Optional path to TLS private key to use to serve over HTTPS\",\n env_var=\"LOCUST_TLS_KEY\",\n )\n \n master_group = parser.add_argument_group(\n \"Master options\", \n \"Options for running a Locust Master node when running Locust distributed. A Master node need Worker nodes that connect to it before it can run load tests.\",\n )\n # if locust should be run in distributed mode as master\n master_group.add_argument(\n '--master',\n action='store_true',\n help=\"Set locust to run in distributed mode with this process as master\",\n env_var='LOCUST_MODE_MASTER',\n )\n master_group.add_argument(\n '--master-bind-host',\n default=\"*\",\n help=\"Interfaces (hostname, ip) that locust master should bind to. Only used when running with --master. Defaults to * (all available interfaces).\",\n env_var=\"LOCUST_MASTER_BIND_HOST\",\n )\n master_group.add_argument(\n '--master-bind-port',\n type=int,\n default=5557,\n help=\"Port that locust master should bind to. Only used when running with --master. 
Defaults to 5557.\",\n env_var=\"LOCUST_MASTER_BIND_PORT\",\n )\n master_group.add_argument(\n '--expect-workers',\n type=int,\n default=1,\n help=\"How many workers master should expect to connect before starting the test (only when --headless used).\",\n env_var=\"LOCUST_EXPECT_WORKERS\",\n )\n master_group.add_argument(\n '--expect-slaves',\n action='store_true',\n help=configargparse.SUPPRESS,\n )\n \n worker_group = parser.add_argument_group(\n \"Worker options\", \n textwrap.dedent(\"\"\"\n Options for running a Locust Worker node when running Locust distributed. \n Only the LOCUSTFILE (-f option) need to be specified when starting a Worker, since other options such as -u, -r, -t are specified on the Master node.\n \"\"\"),\n )\n # if locust should be run in distributed mode as worker\n worker_group.add_argument(\n '--worker',\n action='store_true',\n help=\"Set locust to run in distributed mode with this process as worker\",\n env_var=\"LOCUST_MODE_WORKER\",\n )\n worker_group.add_argument(\n '--slave',\n action='store_true',\n help=configargparse.SUPPRESS,\n )\n # master host options\n worker_group.add_argument(\n '--master-host',\n default=\"127.0.0.1\",\n help=\"Host or IP address of locust master for distributed load testing. Only used when running with --worker. Defaults to 127.0.0.1.\",\n env_var=\"LOCUST_MASTER_NODE_HOST\",\n )\n worker_group.add_argument(\n '--master-port',\n type=int,\n default=5557,\n help=\"The port to connect to that is used by the locust master for distributed load testing. Only used when running with --worker. Defaults to 5557.\",\n env_var=\"LOCUST_MASTER_NODE_PORT\",\n )\n \n stats_group = parser.add_argument_group(\"Request statistics options\")\n stats_group.add_argument(\n '--csv',\n dest=\"csv_prefix\",\n help=\"Store current request stats to files in CSV format. Setting this option will generate three files: [CSV_PREFIX]_stats.csv, [CSV_PREFIX]_stats_history.csv and [CSV_PREFIX]_failures.csv\",\n env_var=\"LOCUST_CSV\",\n )\n stats_group.add_argument(\n '--csv-full-history',\n action='store_true',\n default=False,\n dest='stats_history_enabled',\n help=\"Store each stats entry in CSV format to _stats_history.csv file\",\n env_var=\"LOCUST_CSV_FULL_HISTORY\",\n ) \n stats_group.add_argument(\n '--print-stats',\n action='store_true',\n help=\"Print stats in the console\",\n env_var=\"LOCUST_PRINT_STATS\",\n )\n stats_group.add_argument(\n '--only-summary',\n action='store_true',\n help='Only print the summary stats',\n env_var=\"LOCUST_ONLY_SUMMARY\",\n )\n stats_group.add_argument(\n '--reset-stats',\n action='store_true',\n help=\"Reset statistics once hatching has been completed. Should be set on both master and workers when running in distributed mode\",\n env_var=\"LOCUST_RESET_STATS\",\n )\n \n log_group = parser.add_argument_group(\"Logging options\")\n log_group.add_argument(\n '--skip-log-setup',\n action='store_true',\n dest='skip_log_setup',\n default=False,\n help=\"Disable Locust's logging setup. Instead, the configuration is provided by the Locust test or Python defaults.\",\n env_var=\"LOCUST_SKIP_LOG_SETUP\",\n )\n log_group.add_argument(\n '--loglevel', '-L',\n default='INFO',\n help=\"Choose between DEBUG/INFO/WARNING/ERROR/CRITICAL. Default is INFO.\",\n env_var=\"LOCUST_LOGLEVEL\",\n )\n log_group.add_argument(\n '--logfile',\n help=\"Path to log file. 
If not set, log will go to stdout/stderr\",\n env_var=\"LOCUST_LOGFILE\",\n )\n \n step_load_group = parser.add_argument_group(\"Step load options\")\n step_load_group.add_argument(\n '--step-load',\n action='store_true',\n help=\"Enable Step Load mode to monitor how performance metrics varies when user load increases. Requires --step-users and --step-time to be specified.\",\n env_var=\"LOCUST_STEP_LOAD\",\n )\n step_load_group.add_argument(\n '--step-users',\n type=int,\n help=\"User count to increase by step in Step Load mode. Only used together with --step-load\",\n env_var=\"LOCUST_STEP_USERS\",\n )\n step_load_group.add_argument(\n '--step-clients',\n action='store_true',\n help=configargparse.SUPPRESS\n )\n step_load_group.add_argument(\n '--step-time',\n help=\"Step duration in Step Load mode, e.g. (300s, 20m, 3h, 1h30m, etc.). Only used together with --step-load\",\n env_var=\"LOCUST_STEP_TIME\",\n )\n \n \n other_group = parser.add_argument_group(\"Other options\")\n other_group.add_argument(\n '--show-task-ratio',\n action='store_true',\n help=\"Print table of the User classes' task execution ratio\"\n )\n other_group.add_argument(\n '--show-task-ratio-json',\n action='store_true',\n help=\"Print json data of the User classes' task execution ratio\"\n )\n # optparse gives you --version but we have to do it ourselves to get -V too\n other_group.add_argument(\n '--version', '-V',\n action='version',\n help=\"Show program's version number and exit\",\n version='%(prog)s {}'.format(version),\n )\n other_group.add_argument(\n '--exit-code-on-error',\n type=int,\n default=1,\n help=\"Sets the process exit code to use when a test result contain any failure or error\",\n env_var=\"LOCUST_EXIT_CODE_ON_ERROR\",\n )\n other_group.add_argument(\n '-s', '--stop-timeout',\n action='store',\n type=int,\n dest='stop_timeout',\n default=None,\n help=\"Number of seconds to wait for a simulated user to complete any executing task before exiting. Default is to terminate immediately. This parameter only needs to be specified for the master process when running Locust distributed.\",\n env_var=\"LOCUST_STOP_TIMEOUT\",\n )\n \n user_classes_group = parser.add_argument_group(\"User classes\")\n user_classes_group.add_argument(\n 'user_classes',\n nargs='*',\n metavar='UserClass',\n help=\"Optionally specify which User classes that should be used (available User classes can be listed with -l or --list)\",\n )\n\ndef get_parser(default_config_files=DEFAULT_CONFIG_FILES):\n # get a parser that is only able to parse the -f argument\n parser = get_empty_argument_parser(add_help=True, default_config_files=default_config_files)\n # add all the other supported arguments\n setup_parser_arguments(parser)\n # fire event to provide a hook for locustscripts and plugins to add command line arguments\n locust.events.init_command_line_parser.fire(parser=parser)\n return parser\n\n\ndef parse_options(args=None):\n return get_parser().parse_args(args=args)", "path": "locust/argument_parser.py" } ]
diff --git a/docs/configuration.rst b/docs/configuration.rst index a876e34a3b..a7f95a6c9d 100644 --- a/docs/configuration.rst +++ b/docs/configuration.rst @@ -27,3 +27,33 @@ Most of the configuration that can be set through command line arguments can als environment variables. Here's a table of all the available environment variables: .. include:: env-options.rst + + + +.. _configuration-files: + +Configuration files +------------------- + +Any of the configuration that can be set through command line arguments can also be set by a +configuration file in the `config file <https://github.com/bw2/ConfigArgParse#config-file-syntax>`_ +format. +Locust will look for ``locust.conf`` or ``~/.locust.conf`` by default, or a file may be specified +with the ``--config`` flag. Parameters passed as command line arguments will override the settings +from the config file. + + +.. code-block:: + + # step_mode.conf in current directory + locustfile locust_files/my_locust_file.py + host localhost + users 100 + hatch-rate 10 + step-load + step-users 20 + step-time 60 + +.. code-block:: console + + $ locust --config=step_mode.conf diff --git a/docs/quickstart.rst b/docs/quickstart.rst index d79f26775d..2da12ff1fd 100644 --- a/docs/quickstart.rst +++ b/docs/quickstart.rst @@ -144,8 +144,10 @@ host defaults to 127.0.0.1): $ locust -f locust_files/my_locust_file.py --worker --master-host=192.168.0.100 -Parameters can also be set as :ref:`environment variables <environment-variables>`, or in a -`config file <https://github.com/bw2/ConfigArgParse#config-file-syntax>`_ (``locust.conf`` or ``~/.locust.conf``). +Parameters can also be set as :ref:`environment variables <environment-variables>`, or in a +`config file <https://github.com/bw2/ConfigArgParse#config-file-syntax>`_. +Locust will look for ``locust.conf`` or ``~/.locust.conf`` by default, or a file may be specified +with the ``--config`` flag. For example: (this will do the same thing as the previous command) diff --git a/locust/argument_parser.py b/locust/argument_parser.py index 3d0bf758ce..36ddd843fd 100644 --- a/locust/argument_parser.py +++ b/locust/argument_parser.py @@ -76,6 +76,8 @@ def get_empty_argument_parser(add_help=True, default_config_files=DEFAULT_CONFIG help="Python module file to import, e.g. '../other.py'. 
Default: locustfile", env_var="LOCUST_LOCUSTFILE", ) + parser.add_argument('--config', is_config_file_arg=True, help='Config file path') + return parser diff --git a/locust/test/test_main.py b/locust/test/test_main.py index 34ef2bbe17..c0065686a6 100644 --- a/locust/test/test_main.py +++ b/locust/test/test_main.py @@ -77,6 +77,40 @@ def test_create_environment(self): self.assertEqual(None, env.host) self.assertFalse(env.reset_stats) + def test_specify_config_file(self): + with temporary_file(textwrap.dedent(""" + host = localhost # With "=" + u 100 # Short form + hatch-rate 5 # long form + headless # boolean + """), suffix=".conf") as conf_file_path: + options = parse_options(args=[ + "--config", conf_file_path, + ]) + self.assertEqual(conf_file_path, options.config) + self.assertEqual("localhost", options.host) + self.assertEqual(100, options.num_users) + self.assertEqual(5, options.hatch_rate) + self.assertTrue(options.headless) + + def test_command_line_arguments_override_config_file(self): + with temporary_file("host=from_file", suffix=".conf") as conf_file_path: + options = parse_options(args=[ + "--config", conf_file_path, + "--host", "from_args", + ]) + self.assertEqual("from_args", options.host) + + def test_locustfile_can_be_set_in_config_file(self): + with temporary_file( + "locustfile my_locust_file.py", + suffix=".conf", + ) as conf_file_path: + options = parse_options(args=[ + "--config", conf_file_path, + ]) + self.assertEqual("my_locust_file.py", options.locustfile) + class LocustProcessIntegrationTest(TestCase): def setUp(self):
quantumlib__Cirq-2952
Update cirq.google.Bristlecone/Foxtail with accurate duration numbers
[ { "content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import (Any, Collection, Dict, Optional, Iterable, List, Set, Tuple,\n TYPE_CHECKING)\n\nfrom cirq._doc import document\nfrom cirq.devices import GridQubit\nfrom cirq.google import gate_sets, serializable_gate_set\nfrom cirq.google.api import v2\nfrom cirq.google.api.v2 import device_pb2\nfrom cirq.google.devices.serializable_device import SerializableDevice\nfrom cirq.google.devices.xmon_device import XmonDevice\nfrom cirq.ops import MeasurementGate, SingleQubitGate, WaitGate\nfrom cirq.value import Duration\n\nif TYPE_CHECKING:\n import cirq\n\n_2_QUBIT_TARGET_SET = \"2_qubit_targets\"\n_MEAS_TARGET_SET = \"meas_targets\"\n\n\ndef _parse_device(s: str) -> Tuple[List[GridQubit], Dict[str, Set[GridQubit]]]:\n \"\"\"Parse ASCIIart device layout into info about qubits and connectivity.\n\n Args:\n s: String representing the qubit layout. Each line represents a row,\n and each character in the row is a qubit, or a blank site if the\n character is a hyphen '-'. Different letters for the qubit specify\n which measurement line that qubit is connected to, e.g. all 'A'\n qubits share a measurement line. Leading and trailing spaces on\n each line are ignored.\n\n Returns:\n A list of qubits and a dict mapping measurement line name to the qubits\n on that measurement line.\n \"\"\"\n lines = s.strip().split('\\n')\n qubits = [] # type: List[GridQubit]\n measurement_lines = {} # type: Dict[str, Set[GridQubit]]\n for row, line in enumerate(lines):\n for col, c in enumerate(line.strip()):\n if c != '-':\n qubit = GridQubit(row, col)\n qubits.append(qubit)\n measurement_line = measurement_lines.setdefault(c, set())\n measurement_line.add(qubit)\n return qubits, measurement_lines\n\n\ndef create_device_proto_from_diagram(\n ascii_grid: str,\n gate_sets: Optional[Iterable[\n serializable_gate_set.SerializableGateSet]] = None,\n durations_picos: Optional[Dict[str, int]] = None,\n out: Optional[device_pb2.DeviceSpecification] = None,\n) -> device_pb2.DeviceSpecification:\n \"\"\"Parse ASCIIart device layout into DeviceSpecification proto.\n\n This function assumes that all pairs of adjacent qubits are valid targets\n for two-qubit gates.\n\n Args:\n ascii_grid: ASCII version of the grid (see _parse_device for details).\n gate_sets: Gate sets that define the translation between gate ids and\n cirq Gate objects.\n durations_picos: A map from gate ids to gate durations in picoseconds.\n out: If given, populate this proto, otherwise create a new proto.\n \"\"\"\n qubits, _ = _parse_device(ascii_grid)\n\n # Create a list of all adjacent pairs on the grid for two-qubit gates.\n qubit_set = frozenset(qubits)\n pairs: List[Tuple['cirq.Qid', 'cirq.Qid']] = []\n for qubit in qubits:\n for neighbor in sorted(qubit.neighbors()):\n if neighbor > qubit and neighbor in qubit_set:\n pairs.append((qubit, neighbor))\n\n return create_device_proto_for_qubits(qubits, pairs, gate_sets,\n durations_picos, 
out)\n\n\ndef create_device_proto_for_qubits(\n qubits: Collection['cirq.Qid'],\n pairs: Collection[Tuple['cirq.Qid', 'cirq.Qid']],\n gate_sets: Optional[Iterable[\n serializable_gate_set.SerializableGateSet]] = None,\n durations_picos: Optional[Dict[str, int]] = None,\n out: Optional[device_pb2.DeviceSpecification] = None,\n) -> device_pb2.DeviceSpecification:\n \"\"\"Create device spec for the given qubits and coupled pairs.\n\n Args:\n qubits: Qubits that can perform single-qubit gates.\n pairs: Pairs of coupled qubits that can perform two-qubit gates.\n gate_sets: Gate sets that define the translation between gate ids and\n cirq Gate objects.\n durations_picos: A map from gate ids to gate durations in picoseconds.\n out: If given, populate this proto, otherwise create a new proto.\n \"\"\"\n if out is None:\n out = device_pb2.DeviceSpecification()\n\n # Create valid qubit list\n out.valid_qubits.extend(v2.qubit_to_proto_id(q) for q in qubits)\n\n # Set up a target set for measurement (any qubit permutation)\n meas_targets = out.valid_targets.add()\n meas_targets.name = _MEAS_TARGET_SET\n meas_targets.target_ordering = device_pb2.TargetSet.SUBSET_PERMUTATION\n\n # Set up a target set for 2 qubit gates (specified qubit pairs)\n grid_targets = out.valid_targets.add()\n grid_targets.name = _2_QUBIT_TARGET_SET\n grid_targets.target_ordering = device_pb2.TargetSet.SYMMETRIC\n for pair in pairs:\n new_target = grid_targets.targets.add()\n new_target.ids.extend(v2.qubit_to_proto_id(q) for q in pair)\n\n # Create gate sets\n arg_def = device_pb2.ArgDefinition\n for gate_set in gate_sets or []:\n gs_proto = out.valid_gate_sets.add()\n gs_proto.name = gate_set.gate_set_name\n gate_ids: Set[str] = set()\n for gate_type in gate_set.serializers:\n for serializer in gate_set.serializers[gate_type]:\n gate_id = serializer.serialized_gate_id\n if gate_id in gate_ids:\n # Only add each type once\n continue\n\n gate_ids.add(gate_id)\n gate = gs_proto.valid_gates.add()\n gate.id = gate_id\n\n # Choose target set and number of qubits based on gate type.\n\n # Note: if it is not a measurement gate and doesn't inherit\n # from SingleQubitGate, it's assumed to be a two qubit gate.\n if gate_type == MeasurementGate:\n gate.valid_targets.append(_MEAS_TARGET_SET)\n elif gate_type == WaitGate:\n # TODO(#2537): Refactor gate-sets / device to eliminate\n # The need for checking type here.\n gate.number_of_qubits = 1\n elif issubclass(gate_type, SingleQubitGate):\n gate.number_of_qubits = 1\n else:\n # This must be a two-qubit gate\n gate.valid_targets.append(_2_QUBIT_TARGET_SET)\n gate.number_of_qubits = 2\n\n # Add gate duration\n if (durations_picos is not None and gate.id in durations_picos):\n gate.gate_duration_picos = durations_picos[gate.id]\n\n # Add argument names and types for each gate.\n for arg in serializer.args:\n new_arg = gate.valid_args.add()\n if arg.serialized_type == str:\n new_arg.type = arg_def.STRING\n if arg.serialized_type == float:\n new_arg.type = arg_def.FLOAT\n if arg.serialized_type == List[bool]:\n new_arg.type = arg_def.REPEATED_BOOLEAN\n new_arg.name = arg.serialized_name\n # Note: this does not yet support adding allowed_ranges\n\n return out\n\n\n_FOXTAIL_GRID = \"\"\"\nAAAAABBBBBB\nCCCCCCDDDDD\n\"\"\"\n\n\nclass _NamedConstantXmonDevice(XmonDevice):\n\n def __init__(self, constant: str, **kwargs) -> None:\n super().__init__(**kwargs)\n self._repr = constant\n\n def __repr__(self) -> str:\n return self._repr\n\n @classmethod\n def _from_json_dict_(cls, constant: str, **kwargs):\n 
if constant == Foxtail._repr:\n return Foxtail\n if constant == Bristlecone._repr:\n return Bristlecone\n raise ValueError(f'Unrecognized xmon device name: {constant!r}')\n\n def _json_dict_(self) -> Dict[str, Any]:\n return {\n 'cirq_type': self.__class__.__name__,\n 'constant': self._repr,\n }\n\n\nFoxtail = _NamedConstantXmonDevice('cirq.google.Foxtail',\n measurement_duration=Duration(nanos=4000),\n exp_w_duration=Duration(nanos=20),\n exp_11_duration=Duration(nanos=50),\n qubits=_parse_device(_FOXTAIL_GRID)[0])\ndocument(Foxtail, f\"\"\"72 xmon qubit device.\n\n**Qubit grid**:\n```\n{str(Foxtail)}\n```\n\"\"\")\n\n# Duration dict in picoseconds\n_DURATIONS_FOR_XMON = {\n 'cz': 50_000,\n 'xy': 20_000,\n 'z': 0,\n 'meas': 4_000_000, # 1000ns for readout, 3000ns for \"ring down\"\n}\n\nFOXTAIL_PROTO = create_device_proto_from_diagram(_FOXTAIL_GRID,\n [gate_sets.XMON],\n _DURATIONS_FOR_XMON)\n\n_BRISTLECONE_GRID = \"\"\"\n-----AB-----\n----ABCD----\n---ABCDEF---\n--ABCDEFGH--\n-ABCDEFGHIJ-\nABCDEFGHIJKL\n-CDEFGHIJKL-\n--EFGHIJKL--\n---GHIJKL---\n----IJKL----\n-----KL-----\n\"\"\"\n\nBristlecone = _NamedConstantXmonDevice(\n 'cirq.google.Bristlecone',\n measurement_duration=Duration(nanos=4000),\n exp_w_duration=Duration(nanos=20),\n exp_11_duration=Duration(nanos=50),\n qubits=_parse_device(_BRISTLECONE_GRID)[0])\ndocument(\n Bristlecone, f\"\"\"72 xmon qubit device.\n\n**Qubit grid**:\n```\n{str(Bristlecone)}\n```\n\"\"\")\n\nBRISTLECONE_PROTO = create_device_proto_from_diagram(_BRISTLECONE_GRID,\n [gate_sets.XMON],\n _DURATIONS_FOR_XMON)\n\n_SYCAMORE_GRID = \"\"\"\n-----AB---\n----ABCD--\n---ABCDEF-\n--ABCDEFGH\n-ABCDEFGHI\nABCDEFGHI-\n-CDEFGHI--\n--EFGHI---\n---GHI----\n----I-----\n\"\"\"\n\n_SYCAMORE_DURATIONS_PICOS = {\n 'xy': 25_000,\n 'xy_half_pi': 25_000,\n 'xy_pi': 25_000,\n 'xyz': 25_000,\n 'fsim_pi_4': 32_000,\n 'inv_fsim_pi_4': 32_000,\n 'syc': 12_000,\n 'z': 0,\n 'meas': 4_000_000, # 1000 ns for readout, 3000ns for ring_down\n}\n\nSYCAMORE_PROTO = create_device_proto_from_diagram(\n _SYCAMORE_GRID,\n [gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET],\n _SYCAMORE_DURATIONS_PICOS,\n)\n\nSycamore = SerializableDevice.from_proto(\n proto=SYCAMORE_PROTO,\n gate_sets=[gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET])\n\n# Subset of the Sycamore grid with a reduced layout.\n_SYCAMORE23_GRID = \"\"\"\n----------\n----------\n----------\n--A-------\n-ABC------\nABCDE-----\n-CDEFG----\n--EFGHI---\n---GHI----\n----I-----\n\"\"\"\n\nSYCAMORE23_PROTO = create_device_proto_from_diagram(\n _SYCAMORE23_GRID,\n [gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET],\n _SYCAMORE_DURATIONS_PICOS,\n)\n\nSycamore23 = SerializableDevice.from_proto(\n proto=SYCAMORE23_PROTO,\n gate_sets=[gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET])\n", "path": "cirq/google/devices/known_devices.py" } ]
[ { "content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import (Any, Collection, Dict, Optional, Iterable, List, Set, Tuple,\n TYPE_CHECKING)\n\nfrom cirq._doc import document\nfrom cirq.devices import GridQubit\nfrom cirq.google import gate_sets, serializable_gate_set\nfrom cirq.google.api import v2\nfrom cirq.google.api.v2 import device_pb2\nfrom cirq.google.devices.serializable_device import SerializableDevice\nfrom cirq.google.devices.xmon_device import XmonDevice\nfrom cirq.ops import MeasurementGate, SingleQubitGate, WaitGate\nfrom cirq.value import Duration\n\nif TYPE_CHECKING:\n import cirq\n\n_2_QUBIT_TARGET_SET = \"2_qubit_targets\"\n_MEAS_TARGET_SET = \"meas_targets\"\n\n\ndef _parse_device(s: str) -> Tuple[List[GridQubit], Dict[str, Set[GridQubit]]]:\n \"\"\"Parse ASCIIart device layout into info about qubits and connectivity.\n\n Args:\n s: String representing the qubit layout. Each line represents a row,\n and each character in the row is a qubit, or a blank site if the\n character is a hyphen '-'. Different letters for the qubit specify\n which measurement line that qubit is connected to, e.g. all 'A'\n qubits share a measurement line. Leading and trailing spaces on\n each line are ignored.\n\n Returns:\n A list of qubits and a dict mapping measurement line name to the qubits\n on that measurement line.\n \"\"\"\n lines = s.strip().split('\\n')\n qubits = [] # type: List[GridQubit]\n measurement_lines = {} # type: Dict[str, Set[GridQubit]]\n for row, line in enumerate(lines):\n for col, c in enumerate(line.strip()):\n if c != '-':\n qubit = GridQubit(row, col)\n qubits.append(qubit)\n measurement_line = measurement_lines.setdefault(c, set())\n measurement_line.add(qubit)\n return qubits, measurement_lines\n\n\ndef create_device_proto_from_diagram(\n ascii_grid: str,\n gate_sets: Optional[Iterable[\n serializable_gate_set.SerializableGateSet]] = None,\n durations_picos: Optional[Dict[str, int]] = None,\n out: Optional[device_pb2.DeviceSpecification] = None,\n) -> device_pb2.DeviceSpecification:\n \"\"\"Parse ASCIIart device layout into DeviceSpecification proto.\n\n This function assumes that all pairs of adjacent qubits are valid targets\n for two-qubit gates.\n\n Args:\n ascii_grid: ASCII version of the grid (see _parse_device for details).\n gate_sets: Gate sets that define the translation between gate ids and\n cirq Gate objects.\n durations_picos: A map from gate ids to gate durations in picoseconds.\n out: If given, populate this proto, otherwise create a new proto.\n \"\"\"\n qubits, _ = _parse_device(ascii_grid)\n\n # Create a list of all adjacent pairs on the grid for two-qubit gates.\n qubit_set = frozenset(qubits)\n pairs: List[Tuple['cirq.Qid', 'cirq.Qid']] = []\n for qubit in qubits:\n for neighbor in sorted(qubit.neighbors()):\n if neighbor > qubit and neighbor in qubit_set:\n pairs.append((qubit, neighbor))\n\n return create_device_proto_for_qubits(qubits, pairs, gate_sets,\n durations_picos, 
out)\n\n\ndef create_device_proto_for_qubits(\n qubits: Collection['cirq.Qid'],\n pairs: Collection[Tuple['cirq.Qid', 'cirq.Qid']],\n gate_sets: Optional[Iterable[\n serializable_gate_set.SerializableGateSet]] = None,\n durations_picos: Optional[Dict[str, int]] = None,\n out: Optional[device_pb2.DeviceSpecification] = None,\n) -> device_pb2.DeviceSpecification:\n \"\"\"Create device spec for the given qubits and coupled pairs.\n\n Args:\n qubits: Qubits that can perform single-qubit gates.\n pairs: Pairs of coupled qubits that can perform two-qubit gates.\n gate_sets: Gate sets that define the translation between gate ids and\n cirq Gate objects.\n durations_picos: A map from gate ids to gate durations in picoseconds.\n out: If given, populate this proto, otherwise create a new proto.\n \"\"\"\n if out is None:\n out = device_pb2.DeviceSpecification()\n\n # Create valid qubit list\n out.valid_qubits.extend(v2.qubit_to_proto_id(q) for q in qubits)\n\n # Set up a target set for measurement (any qubit permutation)\n meas_targets = out.valid_targets.add()\n meas_targets.name = _MEAS_TARGET_SET\n meas_targets.target_ordering = device_pb2.TargetSet.SUBSET_PERMUTATION\n\n # Set up a target set for 2 qubit gates (specified qubit pairs)\n grid_targets = out.valid_targets.add()\n grid_targets.name = _2_QUBIT_TARGET_SET\n grid_targets.target_ordering = device_pb2.TargetSet.SYMMETRIC\n for pair in pairs:\n new_target = grid_targets.targets.add()\n new_target.ids.extend(v2.qubit_to_proto_id(q) for q in pair)\n\n # Create gate sets\n arg_def = device_pb2.ArgDefinition\n for gate_set in gate_sets or []:\n gs_proto = out.valid_gate_sets.add()\n gs_proto.name = gate_set.gate_set_name\n gate_ids: Set[str] = set()\n for gate_type in gate_set.serializers:\n for serializer in gate_set.serializers[gate_type]:\n gate_id = serializer.serialized_gate_id\n if gate_id in gate_ids:\n # Only add each type once\n continue\n\n gate_ids.add(gate_id)\n gate = gs_proto.valid_gates.add()\n gate.id = gate_id\n\n # Choose target set and number of qubits based on gate type.\n\n # Note: if it is not a measurement gate and doesn't inherit\n # from SingleQubitGate, it's assumed to be a two qubit gate.\n if gate_type == MeasurementGate:\n gate.valid_targets.append(_MEAS_TARGET_SET)\n elif gate_type == WaitGate:\n # TODO(#2537): Refactor gate-sets / device to eliminate\n # The need for checking type here.\n gate.number_of_qubits = 1\n elif issubclass(gate_type, SingleQubitGate):\n gate.number_of_qubits = 1\n else:\n # This must be a two-qubit gate\n gate.valid_targets.append(_2_QUBIT_TARGET_SET)\n gate.number_of_qubits = 2\n\n # Add gate duration\n if (durations_picos is not None and gate.id in durations_picos):\n gate.gate_duration_picos = durations_picos[gate.id]\n\n # Add argument names and types for each gate.\n for arg in serializer.args:\n new_arg = gate.valid_args.add()\n if arg.serialized_type == str:\n new_arg.type = arg_def.STRING\n if arg.serialized_type == float:\n new_arg.type = arg_def.FLOAT\n if arg.serialized_type == List[bool]:\n new_arg.type = arg_def.REPEATED_BOOLEAN\n new_arg.name = arg.serialized_name\n # Note: this does not yet support adding allowed_ranges\n\n return out\n\n\n_FOXTAIL_GRID = \"\"\"\nAAAAABBBBBB\nCCCCCCDDDDD\n\"\"\"\n\n\nclass _NamedConstantXmonDevice(XmonDevice):\n\n def __init__(self, constant: str, **kwargs) -> None:\n super().__init__(**kwargs)\n self._repr = constant\n\n def __repr__(self) -> str:\n return self._repr\n\n @classmethod\n def _from_json_dict_(cls, constant: str, **kwargs):\n 
if constant == Foxtail._repr:\n return Foxtail\n if constant == Bristlecone._repr:\n return Bristlecone\n raise ValueError(f'Unrecognized xmon device name: {constant!r}')\n\n def _json_dict_(self) -> Dict[str, Any]:\n return {\n 'cirq_type': self.__class__.__name__,\n 'constant': self._repr,\n }\n\n\nFoxtail = _NamedConstantXmonDevice('cirq.google.Foxtail',\n measurement_duration=Duration(nanos=4000),\n exp_w_duration=Duration(nanos=20),\n exp_11_duration=Duration(nanos=50),\n qubits=_parse_device(_FOXTAIL_GRID)[0])\ndocument(Foxtail, f\"\"\"72 xmon qubit device.\n\n**Qubit grid**:\n```\n{str(Foxtail)}\n```\n\"\"\")\n\n# Duration dict in picoseconds\n_DURATIONS_FOR_XMON = {\n 'cz': 45_000,\n 'xy': 15_000,\n 'z': 0,\n 'meas': 4_000_000, # 1000ns for readout, 3000ns for \"ring down\"\n}\n\nFOXTAIL_PROTO = create_device_proto_from_diagram(_FOXTAIL_GRID,\n [gate_sets.XMON],\n _DURATIONS_FOR_XMON)\n\n_BRISTLECONE_GRID = \"\"\"\n-----AB-----\n----ABCD----\n---ABCDEF---\n--ABCDEFGH--\n-ABCDEFGHIJ-\nABCDEFGHIJKL\n-CDEFGHIJKL-\n--EFGHIJKL--\n---GHIJKL---\n----IJKL----\n-----KL-----\n\"\"\"\n\nBristlecone = _NamedConstantXmonDevice(\n 'cirq.google.Bristlecone',\n measurement_duration=Duration(nanos=4000),\n exp_w_duration=Duration(nanos=20),\n exp_11_duration=Duration(nanos=50),\n qubits=_parse_device(_BRISTLECONE_GRID)[0])\ndocument(\n Bristlecone, f\"\"\"72 xmon qubit device.\n\n**Qubit grid**:\n```\n{str(Bristlecone)}\n```\n\"\"\")\n\nBRISTLECONE_PROTO = create_device_proto_from_diagram(_BRISTLECONE_GRID,\n [gate_sets.XMON],\n _DURATIONS_FOR_XMON)\n\n_SYCAMORE_GRID = \"\"\"\n-----AB---\n----ABCD--\n---ABCDEF-\n--ABCDEFGH\n-ABCDEFGHI\nABCDEFGHI-\n-CDEFGHI--\n--EFGHI---\n---GHI----\n----I-----\n\"\"\"\n\n_SYCAMORE_DURATIONS_PICOS = {\n 'xy': 25_000,\n 'xy_half_pi': 25_000,\n 'xy_pi': 25_000,\n 'xyz': 25_000,\n 'fsim_pi_4': 32_000,\n 'inv_fsim_pi_4': 32_000,\n 'syc': 12_000,\n 'z': 0,\n 'meas': 4_000_000, # 1000 ns for readout, 3000ns for ring_down\n}\n\nSYCAMORE_PROTO = create_device_proto_from_diagram(\n _SYCAMORE_GRID,\n [gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET],\n _SYCAMORE_DURATIONS_PICOS,\n)\n\nSycamore = SerializableDevice.from_proto(\n proto=SYCAMORE_PROTO,\n gate_sets=[gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET])\n\n# Subset of the Sycamore grid with a reduced layout.\n_SYCAMORE23_GRID = \"\"\"\n----------\n----------\n----------\n--A-------\n-ABC------\nABCDE-----\n-CDEFG----\n--EFGHI---\n---GHI----\n----I-----\n\"\"\"\n\nSYCAMORE23_PROTO = create_device_proto_from_diagram(\n _SYCAMORE23_GRID,\n [gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET],\n _SYCAMORE_DURATIONS_PICOS,\n)\n\nSycamore23 = SerializableDevice.from_proto(\n proto=SYCAMORE23_PROTO,\n gate_sets=[gate_sets.SQRT_ISWAP_GATESET, gate_sets.SYC_GATESET])\n", "path": "cirq/google/devices/known_devices.py" } ]
diff --git a/cirq/google/devices/known_devices.py b/cirq/google/devices/known_devices.py index 36bf4adc0a3..2caacbe41bd 100644 --- a/cirq/google/devices/known_devices.py +++ b/cirq/google/devices/known_devices.py @@ -228,8 +228,8 @@ def _json_dict_(self) -> Dict[str, Any]: # Duration dict in picoseconds _DURATIONS_FOR_XMON = { - 'cz': 50_000, - 'xy': 20_000, + 'cz': 45_000, + 'xy': 15_000, 'z': 0, 'meas': 4_000_000, # 1000ns for readout, 3000ns for "ring down" } diff --git a/cirq/google/devices/known_devices_test.py b/cirq/google/devices/known_devices_test.py index 587af9df721..400d409c25e 100644 --- a/cirq/google/devices/known_devices_test.py +++ b/cirq/google/devices/known_devices_test.py @@ -43,7 +43,7 @@ def test_foxtail_device_proto(): name: "half_turns" type: FLOAT } - gate_duration_picos: 20000 + gate_duration_picos: 15000 } valid_gates { id: "z" @@ -80,7 +80,7 @@ def test_foxtail_device_proto(): name: "half_turns" type: FLOAT } - gate_duration_picos: 50000 + gate_duration_picos: 45000 valid_targets: "2_qubit_targets" } valid_gates { diff --git a/cirq/google/devices/serializable_device_test.py b/cirq/google/devices/serializable_device_test.py index 5ac3a17ae3c..4d445b5e28a 100644 --- a/cirq/google/devices/serializable_device_test.py +++ b/cirq/google/devices/serializable_device_test.py @@ -148,7 +148,7 @@ def test_duration_of(): proto=cg.devices.known_devices.FOXTAIL_PROTO, gate_sets=[cg.gate_sets.XMON]) - assert foxtail.duration_of(cirq.X(valid_qubit1)) == cirq.Duration(nanos=20) + assert foxtail.duration_of(cirq.X(valid_qubit1)) == cirq.Duration(nanos=15) # Unsupported op with pytest.raises(ValueError):
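As a quick sanity check of the new numbers, here is a minimal sketch in the spirit of the updated `serializable_device_test` in the diff above (assuming a cirq build that already contains this change and exposes `SerializableDevice` under `cirq.google`, as the test does): the picosecond entries in `_DURATIONS_FOR_XMON` end up in the device proto and come back from `duration_of` as `Duration` objects.

```python
# Minimal sketch, assuming a cirq version that includes the durations above
# and exposes SerializableDevice under cirq.google (as the updated test does).
import cirq
import cirq.google as cg

# Build the Foxtail device from the proto generated with _DURATIONS_FOR_XMON.
foxtail = cg.SerializableDevice.from_proto(
    proto=cg.devices.known_devices.FOXTAIL_PROTO,
    gate_sets=[cg.gate_sets.XMON])

q = cirq.GridQubit(0, 0)
# 'xy' is now 15_000 ps in _DURATIONS_FOR_XMON, i.e. 15 ns for a single X.
assert foxtail.duration_of(cirq.X(q)) == cirq.Duration(nanos=15)
```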
carpentries__amy-1046
Django Debug Toolbar should have SQL panel ON by default. Turning the panel ON allows spotting bottlenecks right away - just by looking at the number of queries (>8? Investigate). This panel is the most useful and the one I use most often. The next one is template variables, which is ON by default already.
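For reference, a minimal sketch of the settings change this asks for, grounded in the `amy/settings.py` shown below and the diff at the end of this record (not an independent proposal): drop the `DEBUG_TOOLBAR_CONFIG` override so django-debug-toolbar falls back to its defaults, which include the SQL panel.

```python
# amy/settings.py -- fragment only, not the full settings module.

DEBUG_TOOLBAR_PATCH_SETTINGS = False
INTERNAL_IPS = ['127.0.0.1', '::1']

# Before: the SQL panel was explicitly disabled to keep page loads fast,
# which also hides the per-request query count.
# DEBUG_TOOLBAR_CONFIG = {
#     'DISABLE_PANELS': [
#         'debug_toolbar.panels.sql.SQLPanel',
#     ],
# }
#
# After: no DEBUG_TOOLBAR_CONFIG override at all. django-debug-toolbar then
# uses its default panel set, which has the SQL panel enabled, so the number
# of queries (>8? investigate) is visible on every page load.
```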
[ { "content": "\"\"\"\nDjango settings for amy project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.7/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.7/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nimport json\nimport os\nimport sys\n\nfrom django.utils.translation import ugettext_lazy as _\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/\n\n\n# SECURITY WARNING: don't run with DEBUG turned on in production!\nDEBUG = json.loads(os.environ.get('AMY_DEBUG', 'true'))\n# For deployment in production:\n# AMY_DEBUG=false AMY_SECRET_KEY=\"...\" ./manage.py runserver ...\n\nif DEBUG:\n SECRET_KEY = '3l$35+@a%g!(^y^98oi%ei+%+yvtl3y0k^_7-fmx2oj09-ac5@'\nelse:\n SECRET_KEY = None\nSECRET_KEY = os.environ.get('AMY_SECRET_KEY', SECRET_KEY)\n\n# be sure to put these values in your envvars, even for development\nRECAPTCHA_PUBLIC_KEY = os.environ.get('AMY_RECAPTCHA_PUBLIC_KEY', None)\nRECAPTCHA_PRIVATE_KEY = os.environ.get('AMY_RECAPTCHA_PRIVATE_KEY', None)\nRECAPTCHA_USE_SSL = True\nNOCAPTCHA = True # nicer input\n\nif DEBUG:\n # 'PASSED' in the form will always pass the RECAPTCHA test\n NOCAPTCHA = False # uglier input, but possible to manually enter 'PASSED'\n os.environ['RECAPTCHA_TESTING'] = 'True'\nelse:\n # ensure the keys are present on production\n assert RECAPTCHA_PUBLIC_KEY, 'RECAPTCHA site key not present'\n assert RECAPTCHA_PRIVATE_KEY, 'RECAPTCHA secure key not present'\n\n# email settings\nADMINS = (\n ('Sysadmins ML', '[email protected]'),\n)\n# \"From:\" for error messages sent out to ADMINS\nSERVER_EMAIL = os.environ.get('AMY_SERVER_EMAIL', 'root@localhost')\n\n# addresses to receive \"New workshop request\" or \"New profile update request\"\n# notifications\nREQUEST_NOTIFICATIONS_RECIPIENTS = (\n '[email protected]',\n)\nEMAIL_HOST = os.environ.get('AMY_EMAIL_HOST', 'localhost')\nEMAIL_HOST_USER = os.environ.get('AMY_EMAIL_HOST_USER', '')\nEMAIL_HOST_PASSWORD = os.environ.get('AMY_EMAIL_HOST_PASSWORD', '')\nEMAIL_PORT = int(os.environ.get('AMY_EMAIL_PORT', 25))\nEMAIL_TIMEOUT = 10 # timeout for blocking email operations, in seconds\nEMAIL_USE_TLS = json.loads(os.environ.get('AMY_EMAIL_USE_TLS', 'false'))\nEMAIL_USE_SSL = json.loads(os.environ.get('AMY_EMAIL_USE_SSL', 'false'))\n\n# \"From:\" for NOT error messages (ie. 
sent to whoever we want)\nDEFAULT_FROM_EMAIL = os.environ.get('AMY_DEFAULT_FROM_EMAIL',\n 'webmaster@localhost')\n\nif DEBUG:\n # outgoing mails will be stored in `django.core.mail.outbox`\n EMAIL_BACKEND = 'django.core.mail.backends.locmem.EmailBackend'\n\nSITE_URL = 'https://amy.software-carpentry.org'\nif DEBUG:\n SITE_URL = 'http://127.0.0.1:8000'\n\n# New template settings (for Django >= 1.8)\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'OPTIONS': {\n 'loaders': [\n 'app_namespace.Loader',\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n ],\n 'debug': DEBUG,\n 'context_processors': [\n # default processors + request processor\n 'django.contrib.auth.context_processors.auth',\n 'django.template.context_processors.debug',\n 'django.template.context_processors.i18n',\n 'django.template.context_processors.media',\n 'django.template.context_processors.request',\n 'django.template.context_processors.static',\n 'django.template.context_processors.tz',\n 'django.contrib.messages.context_processors.messages',\n # AMY version\n 'workshops.context_processors.version',\n # GitHub auth\n 'social.apps.django_app.context_processors.backends',\n 'social.apps.django_app.context_processors.login_redirect',\n ],\n\n # Warn viewers of invalid template strings\n 'string_if_invalid': 'XXX-unset-variable-XXX',\n }\n }\n]\n\nALLOWED_HOSTS = [\n 'amy.software-carpentry.org',\n]\n\n\n# Application definition\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'workshops',\n # this should be after 'workshops' because templates in\n # 'templates/registration/' clash\n 'django.contrib.admin',\n 'crispy_forms',\n 'selectable',\n 'django_countries',\n 'django_filters',\n 'reversion',\n 'rest_framework',\n 'api',\n 'captcha',\n 'compressor',\n 'social.apps.django_app.default',\n 'debug_toolbar',\n)\n\nCRISPY_TEMPLATE_PACK = 'bootstrap3'\n\nMIDDLEWARE_CLASSES = (\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n 'reversion.middleware.RevisionMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'workshops.github_auth.GithubAuthMiddleware',\n)\n\nROOT_URLCONF = 'amy.urls'\n\nWSGI_APPLICATION = 'amy.wsgi.application'\n\nfrom django.contrib.messages import constants as message_constants\nMESSAGE_TAGS = {\n message_constants.INFO: 'alert-info',\n message_constants.SUCCESS: 'alert-success',\n message_constants.WARNING: 'alert-warning',\n message_constants.ERROR: 'alert-danger',\n}\n\n\n# Database\n# https://docs.djangoproject.com/en/1.7/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),\n 'TEST': {},\n }\n}\nif '--keepdb' in sys.argv:\n # By default, Django uses in-memory sqlite3 database, which is much\n # faster than sqlite3 database in a file. 
However, we may want to keep\n # database between test launches, so that we avoid the overhead of\n # applying migrations on each test launch.\n DATABASES['default']['TEST']['NAME'] = 'test_db.sqlite3'\n\n# Authentication\nAUTH_USER_MODEL = 'workshops.Person'\nVALIDATION = 'django.contrib.auth.password_validation.'\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': VALIDATION + 'UserAttributeSimilarityValidator',\n 'OPTIONS': {\n 'user_attributes': ('username', 'personal', 'middle', 'family',\n 'email')\n }\n },\n {\n 'NAME': VALIDATION + 'MinimumLengthValidator',\n 'OPTIONS': {\n 'min_length': 10,\n }\n },\n {\n 'NAME': VALIDATION + 'CommonPasswordValidator',\n },\n {\n 'NAME': VALIDATION + 'NumericPasswordValidator',\n },\n]\n\n# GitHub Auth\nAUTHENTICATION_BACKENDS = (\n 'social.backends.github.GithubOAuth2',\n 'django.contrib.auth.backends.ModelBackend',\n)\nSOCIAL_AUTH_ADMIN_USER_SEARCH_FIELDS = ['github']\nSOCIAL_AUTH_GITHUB_KEY = os.environ.get('SOCIAL_AUTH_GITHUB_KEY', '').strip()\nSOCIAL_AUTH_GITHUB_SECRET = os.environ.get('SOCIAL_AUTH_GITHUB_SECRET', '').strip()\nif not DEBUG and not (SOCIAL_AUTH_GITHUB_KEY and SOCIAL_AUTH_GITHUB_SECRET):\n print('Logging using github account will *not* work, '\n 'because you didn\\'t set SOCIAL_AUTH_GITHUB_KEY and/or '\n 'SOCIAL_AUTH_GITHUB_SECRET environment variables.',\n file=sys.stderr)\n\n\nSOCIAL_AUTH_PIPELINE = (\n 'social.pipeline.social_auth.social_details',\n 'social.pipeline.social_auth.social_uid',\n 'social.pipeline.social_auth.auth_allowed',\n 'social.pipeline.social_auth.social_user',\n\n # If we can't find Person associated with given github account, abort.\n 'workshops.github_auth.abort_if_no_user_found',\n\n # The default pipeline includes 'social.pipeline.user.create_user' here,\n # but we don't want to register a new Person when somebody logs in\n # using GitHub account that is not associated with any Person.\n\n 'social.pipeline.social_auth.associate_user',\n 'social.pipeline.social_auth.load_extra_data',\n)\n\nSOCIAL_AUTH_USER_MODEL = 'workshops.Person'\n\n# Github API token (optional). 
Setting this token reduces limits and quotes\n# on Github API.\n\nGITHUB_API_TOKEN = os.environ.get('GITHUB_API_TOKEN', None)\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.7/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'EST'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.7/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'bower_components'),\n)\nSTATICFILES_FINDERS = [\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n 'compressor.finders.CompressorFinder',\n]\n\n# if \"next\" (or \"?next\") variable is not set when logging in, redirect to\n# workshops\nLOGIN_REDIRECT_URL = '/workshops/'\n\n# here's where @login_required redirects to:\nLOGIN_URL = '/account/login/'\n\n# explicitely add European Union as a country\nCOUNTRIES_OVERRIDE = {\n 'EU': _('European Union'),\n 'US': _('United States'),\n 'W3': _('Online'),\n}\n\n# settings for REST API\nREST_FRAMEWORK = {\n 'DEFAULT_PARSER_CLASSES': (\n 'rest_framework.parsers.JSONParser',\n 'rest_framework.parsers.FormParser',\n 'rest_framework.parsers.MultiPartParser',\n 'rest_framework_yaml.parsers.YAMLParser',\n ),\n 'DEFAULT_RENDERER_CLASSES': (\n 'rest_framework.renderers.JSONRenderer',\n 'rest_framework.renderers.BrowsableAPIRenderer',\n 'rest_framework_yaml.renderers.YAMLRenderer',\n ),\n\n 'DEFAULT_THROTTLE_CLASSES': (\n 'rest_framework.throttling.AnonRateThrottle',\n 'rest_framework.throttling.UserRateThrottle'\n ),\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '50/hour',\n 'user': '200/hour'\n }\n}\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False, # merge with default configuration\n 'handlers': {\n 'null': {\n 'class': 'logging.NullHandler',\n },\n },\n 'loggers': {\n # disable \"Invalid HTTP_HOST\" notifications\n 'django.security.DisallowedHost': {\n 'handlers': ['null'],\n 'propagate': False,\n },\n },\n}\n\n# weaker hasher brings test speedup according to Django docs:\n# https://docs.djangoproject.com/en/1.9/topics/testing/overview/#speeding-up-the-tests\nif DEBUG and 'test' in sys.argv:\n PASSWORD_HASHERS = (\n 'django.contrib.auth.hashers.MD5PasswordHasher',\n )\n\n# Debug Toolbar\nDEBUG_TOOLBAR_PATCH_SETTINGS = False\nINTERNAL_IPS = ['127.0.0.1', '::1']\nDEBUG_TOOLBAR_CONFIG = {\n # Disable all panels (except for timer) by default in order not to slow\n # down page loading.\n 'DISABLE_PANELS': [\n 'debug_toolbar.panels.sql.SQLPanel',\n ],\n}\n", "path": "amy/settings.py" } ]
[ { "content": "\"\"\"\nDjango settings for amy project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.7/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.7/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nimport json\nimport os\nimport sys\n\nfrom django.utils.translation import ugettext_lazy as _\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/\n\n\n# SECURITY WARNING: don't run with DEBUG turned on in production!\nDEBUG = json.loads(os.environ.get('AMY_DEBUG', 'true'))\n# For deployment in production:\n# AMY_DEBUG=false AMY_SECRET_KEY=\"...\" ./manage.py runserver ...\n\nif DEBUG:\n SECRET_KEY = '3l$35+@a%g!(^y^98oi%ei+%+yvtl3y0k^_7-fmx2oj09-ac5@'\nelse:\n SECRET_KEY = None\nSECRET_KEY = os.environ.get('AMY_SECRET_KEY', SECRET_KEY)\n\n# be sure to put these values in your envvars, even for development\nRECAPTCHA_PUBLIC_KEY = os.environ.get('AMY_RECAPTCHA_PUBLIC_KEY', None)\nRECAPTCHA_PRIVATE_KEY = os.environ.get('AMY_RECAPTCHA_PRIVATE_KEY', None)\nRECAPTCHA_USE_SSL = True\nNOCAPTCHA = True # nicer input\n\nif DEBUG:\n # 'PASSED' in the form will always pass the RECAPTCHA test\n NOCAPTCHA = False # uglier input, but possible to manually enter 'PASSED'\n os.environ['RECAPTCHA_TESTING'] = 'True'\nelse:\n # ensure the keys are present on production\n assert RECAPTCHA_PUBLIC_KEY, 'RECAPTCHA site key not present'\n assert RECAPTCHA_PRIVATE_KEY, 'RECAPTCHA secure key not present'\n\n# email settings\nADMINS = (\n ('Sysadmins ML', '[email protected]'),\n)\n# \"From:\" for error messages sent out to ADMINS\nSERVER_EMAIL = os.environ.get('AMY_SERVER_EMAIL', 'root@localhost')\n\n# addresses to receive \"New workshop request\" or \"New profile update request\"\n# notifications\nREQUEST_NOTIFICATIONS_RECIPIENTS = (\n '[email protected]',\n)\nEMAIL_HOST = os.environ.get('AMY_EMAIL_HOST', 'localhost')\nEMAIL_HOST_USER = os.environ.get('AMY_EMAIL_HOST_USER', '')\nEMAIL_HOST_PASSWORD = os.environ.get('AMY_EMAIL_HOST_PASSWORD', '')\nEMAIL_PORT = int(os.environ.get('AMY_EMAIL_PORT', 25))\nEMAIL_TIMEOUT = 10 # timeout for blocking email operations, in seconds\nEMAIL_USE_TLS = json.loads(os.environ.get('AMY_EMAIL_USE_TLS', 'false'))\nEMAIL_USE_SSL = json.loads(os.environ.get('AMY_EMAIL_USE_SSL', 'false'))\n\n# \"From:\" for NOT error messages (ie. 
sent to whoever we want)\nDEFAULT_FROM_EMAIL = os.environ.get('AMY_DEFAULT_FROM_EMAIL',\n 'webmaster@localhost')\n\nif DEBUG:\n # outgoing mails will be stored in `django.core.mail.outbox`\n EMAIL_BACKEND = 'django.core.mail.backends.locmem.EmailBackend'\n\nSITE_URL = 'https://amy.software-carpentry.org'\nif DEBUG:\n SITE_URL = 'http://127.0.0.1:8000'\n\n# New template settings (for Django >= 1.8)\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'OPTIONS': {\n 'loaders': [\n 'app_namespace.Loader',\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n ],\n 'debug': DEBUG,\n 'context_processors': [\n # default processors + request processor\n 'django.contrib.auth.context_processors.auth',\n 'django.template.context_processors.debug',\n 'django.template.context_processors.i18n',\n 'django.template.context_processors.media',\n 'django.template.context_processors.request',\n 'django.template.context_processors.static',\n 'django.template.context_processors.tz',\n 'django.contrib.messages.context_processors.messages',\n # AMY version\n 'workshops.context_processors.version',\n # GitHub auth\n 'social.apps.django_app.context_processors.backends',\n 'social.apps.django_app.context_processors.login_redirect',\n ],\n\n # Warn viewers of invalid template strings\n 'string_if_invalid': 'XXX-unset-variable-XXX',\n }\n }\n]\n\nALLOWED_HOSTS = [\n 'amy.software-carpentry.org',\n]\n\n\n# Application definition\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'workshops',\n # this should be after 'workshops' because templates in\n # 'templates/registration/' clash\n 'django.contrib.admin',\n 'crispy_forms',\n 'selectable',\n 'django_countries',\n 'django_filters',\n 'reversion',\n 'rest_framework',\n 'api',\n 'captcha',\n 'compressor',\n 'social.apps.django_app.default',\n 'debug_toolbar',\n)\n\nCRISPY_TEMPLATE_PACK = 'bootstrap3'\n\nMIDDLEWARE_CLASSES = (\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n 'reversion.middleware.RevisionMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'workshops.github_auth.GithubAuthMiddleware',\n)\n\nROOT_URLCONF = 'amy.urls'\n\nWSGI_APPLICATION = 'amy.wsgi.application'\n\nfrom django.contrib.messages import constants as message_constants\nMESSAGE_TAGS = {\n message_constants.INFO: 'alert-info',\n message_constants.SUCCESS: 'alert-success',\n message_constants.WARNING: 'alert-warning',\n message_constants.ERROR: 'alert-danger',\n}\n\n\n# Database\n# https://docs.djangoproject.com/en/1.7/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),\n 'TEST': {},\n }\n}\nif '--keepdb' in sys.argv:\n # By default, Django uses in-memory sqlite3 database, which is much\n # faster than sqlite3 database in a file. 
However, we may want to keep\n # database between test launches, so that we avoid the overhead of\n # applying migrations on each test launch.\n DATABASES['default']['TEST']['NAME'] = 'test_db.sqlite3'\n\n# Authentication\nAUTH_USER_MODEL = 'workshops.Person'\nVALIDATION = 'django.contrib.auth.password_validation.'\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': VALIDATION + 'UserAttributeSimilarityValidator',\n 'OPTIONS': {\n 'user_attributes': ('username', 'personal', 'middle', 'family',\n 'email')\n }\n },\n {\n 'NAME': VALIDATION + 'MinimumLengthValidator',\n 'OPTIONS': {\n 'min_length': 10,\n }\n },\n {\n 'NAME': VALIDATION + 'CommonPasswordValidator',\n },\n {\n 'NAME': VALIDATION + 'NumericPasswordValidator',\n },\n]\n\n# GitHub Auth\nAUTHENTICATION_BACKENDS = (\n 'social.backends.github.GithubOAuth2',\n 'django.contrib.auth.backends.ModelBackend',\n)\nSOCIAL_AUTH_ADMIN_USER_SEARCH_FIELDS = ['github']\nSOCIAL_AUTH_GITHUB_KEY = os.environ.get('SOCIAL_AUTH_GITHUB_KEY', '').strip()\nSOCIAL_AUTH_GITHUB_SECRET = os.environ.get('SOCIAL_AUTH_GITHUB_SECRET', '').strip()\nif not DEBUG and not (SOCIAL_AUTH_GITHUB_KEY and SOCIAL_AUTH_GITHUB_SECRET):\n print('Logging using github account will *not* work, '\n 'because you didn\\'t set SOCIAL_AUTH_GITHUB_KEY and/or '\n 'SOCIAL_AUTH_GITHUB_SECRET environment variables.',\n file=sys.stderr)\n\n\nSOCIAL_AUTH_PIPELINE = (\n 'social.pipeline.social_auth.social_details',\n 'social.pipeline.social_auth.social_uid',\n 'social.pipeline.social_auth.auth_allowed',\n 'social.pipeline.social_auth.social_user',\n\n # If we can't find Person associated with given github account, abort.\n 'workshops.github_auth.abort_if_no_user_found',\n\n # The default pipeline includes 'social.pipeline.user.create_user' here,\n # but we don't want to register a new Person when somebody logs in\n # using GitHub account that is not associated with any Person.\n\n 'social.pipeline.social_auth.associate_user',\n 'social.pipeline.social_auth.load_extra_data',\n)\n\nSOCIAL_AUTH_USER_MODEL = 'workshops.Person'\n\n# Github API token (optional). 
Setting this token reduces limits and quotes\n# on Github API.\n\nGITHUB_API_TOKEN = os.environ.get('GITHUB_API_TOKEN', None)\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.7/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'EST'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.7/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'bower_components'),\n)\nSTATICFILES_FINDERS = [\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n 'compressor.finders.CompressorFinder',\n]\n\n# if \"next\" (or \"?next\") variable is not set when logging in, redirect to\n# workshops\nLOGIN_REDIRECT_URL = '/workshops/'\n\n# here's where @login_required redirects to:\nLOGIN_URL = '/account/login/'\n\n# explicitely add European Union as a country\nCOUNTRIES_OVERRIDE = {\n 'EU': _('European Union'),\n 'US': _('United States'),\n 'W3': _('Online'),\n}\n\n# settings for REST API\nREST_FRAMEWORK = {\n 'DEFAULT_PARSER_CLASSES': (\n 'rest_framework.parsers.JSONParser',\n 'rest_framework.parsers.FormParser',\n 'rest_framework.parsers.MultiPartParser',\n 'rest_framework_yaml.parsers.YAMLParser',\n ),\n 'DEFAULT_RENDERER_CLASSES': (\n 'rest_framework.renderers.JSONRenderer',\n 'rest_framework.renderers.BrowsableAPIRenderer',\n 'rest_framework_yaml.renderers.YAMLRenderer',\n ),\n\n 'DEFAULT_THROTTLE_CLASSES': (\n 'rest_framework.throttling.AnonRateThrottle',\n 'rest_framework.throttling.UserRateThrottle'\n ),\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '50/hour',\n 'user': '200/hour'\n }\n}\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False, # merge with default configuration\n 'handlers': {\n 'null': {\n 'class': 'logging.NullHandler',\n },\n },\n 'loggers': {\n # disable \"Invalid HTTP_HOST\" notifications\n 'django.security.DisallowedHost': {\n 'handlers': ['null'],\n 'propagate': False,\n },\n },\n}\n\n# weaker hasher brings test speedup according to Django docs:\n# https://docs.djangoproject.com/en/1.9/topics/testing/overview/#speeding-up-the-tests\nif DEBUG and 'test' in sys.argv:\n PASSWORD_HASHERS = (\n 'django.contrib.auth.hashers.MD5PasswordHasher',\n )\n\n# Debug Toolbar\nDEBUG_TOOLBAR_PATCH_SETTINGS = False\nINTERNAL_IPS = ['127.0.0.1', '::1']\n", "path": "amy/settings.py" } ]
diff --git a/amy/settings.py b/amy/settings.py index fbf86e346..f51f70bb1 100644 --- a/amy/settings.py +++ b/amy/settings.py @@ -346,10 +346,3 @@ # Debug Toolbar DEBUG_TOOLBAR_PATCH_SETTINGS = False INTERNAL_IPS = ['127.0.0.1', '::1'] -DEBUG_TOOLBAR_CONFIG = { - # Disable all panels (except for timer) by default in order not to slow - # down page loading. - 'DISABLE_PANELS': [ - 'debug_toolbar.panels.sql.SQLPanel', - ], -}
secdev__scapy-2471
Getting "Exception ignored in: <bound method SuperSocket.__del__ of ... >" #### Brief description Using `sniff` method with unreal `iface` parameter leads to `Ignored exception ...` #### Environment - Scapy version: `Version 2.4.3` - Python version: `Python 3.6.9` - Operating System: `Linux 5.3.0-28-generic #30~18.04.1-Ubuntu x86_64 GNU/Linux` #### How to reproduce ``` # (module name : sample.py) import scapy.all as scapy def main(): try: pkts = scapy.sniff(iface="mocked") except Exception as e: print("===========================================") print(str(e)) print("===========================================") exit() if __name__ == "__main__": main() ``` Ran from terminal as: `$ sudo python ./sample.py` #### Actual result ``` $ sudo python ./sample.py =========================================== [Errno 19] No such device =========================================== Exception ignored in: <bound method SuperSocket.__del__ of <scapy.arch.linux.L2ListenSocket object at 0x7f7ca13086a0>> Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/scapy/supersocket.py", line 134, in __del__ self.close() File "/usr/local/lib/python3.6/dist-packages/scapy/arch/linux.py", line 514, in close set_promisc(self.ins, self.iface, 0) File "/usr/local/lib/python3.6/dist-packages/scapy/arch/linux.py", line 165, in set_promisc mreq = struct.pack("IHH8s", get_if_index(iff), PACKET_MR_PROMISC, 0, b"") File "/usr/local/lib/python3.6/dist-packages/scapy/arch/linux.py", line 380, in get_if_index return int(struct.unpack("I", get_if(iff, SIOCGIFINDEX)[16:20])[0]) File "/usr/local/lib/python3.6/dist-packages/scapy/arch/common.py", line 59, in get_if ifreq = ioctl(sck, cmd, struct.pack("16s16x", iff.encode("utf8"))) OSError: [Errno 19] No such device ``` #### Expected result Just a successful exit from a program without additional std::err output. #### Related resources <!-- traces / sample pcaps (stripped to the relevant frames), related standards, RFCs or other resources -->
[ { "content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nLinux specific functions.\n\"\"\"\n\nfrom __future__ import absolute_import\n\n\nimport array\nfrom fcntl import ioctl\nimport os\nfrom select import select\nimport socket\nimport struct\nimport time\nimport re\n\nimport subprocess\n\nfrom scapy.compat import raw, plain_str\nfrom scapy.consts import LOOPBACK_NAME, LINUX\nimport scapy.utils\nimport scapy.utils6\nfrom scapy.packet import Packet, Padding\nfrom scapy.config import conf\nfrom scapy.data import MTU, ETH_P_ALL, SOL_PACKET, SO_ATTACH_FILTER, \\\n SO_TIMESTAMPNS\nfrom scapy.supersocket import SuperSocket\nfrom scapy.error import warning, Scapy_Exception, \\\n ScapyInvalidPlatformException\nfrom scapy.arch.common import get_if, compile_filter\nimport scapy.modules.six as six\nfrom scapy.modules.six.moves import range\n\nfrom scapy.arch.common import get_if_raw_hwaddr # noqa: F401\n\n# From bits/ioctls.h\nSIOCGIFHWADDR = 0x8927 # Get hardware address\nSIOCGIFADDR = 0x8915 # get PA address\nSIOCGIFNETMASK = 0x891b # get network PA mask\nSIOCGIFNAME = 0x8910 # get iface name\nSIOCSIFLINK = 0x8911 # set iface channel\nSIOCGIFCONF = 0x8912 # get iface list\nSIOCGIFFLAGS = 0x8913 # get flags\nSIOCSIFFLAGS = 0x8914 # set flags\nSIOCGIFINDEX = 0x8933 # name -> if_index mapping\nSIOCGIFCOUNT = 0x8938 # get number of devices\nSIOCGSTAMP = 0x8906 # get packet timestamp (as a timeval)\n\n# From if.h\nIFF_UP = 0x1 # Interface is up.\nIFF_BROADCAST = 0x2 # Broadcast address valid.\nIFF_DEBUG = 0x4 # Turn on debugging.\nIFF_LOOPBACK = 0x8 # Is a loopback net.\nIFF_POINTOPOINT = 0x10 # Interface is point-to-point link.\nIFF_NOTRAILERS = 0x20 # Avoid use of trailers.\nIFF_RUNNING = 0x40 # Resources allocated.\nIFF_NOARP = 0x80 # No address resolution protocol.\nIFF_PROMISC = 0x100 # Receive all packets.\n\n# From netpacket/packet.h\nPACKET_ADD_MEMBERSHIP = 1\nPACKET_DROP_MEMBERSHIP = 2\nPACKET_RECV_OUTPUT = 3\nPACKET_RX_RING = 5\nPACKET_STATISTICS = 6\nPACKET_MR_MULTICAST = 0\nPACKET_MR_PROMISC = 1\nPACKET_MR_ALLMULTI = 2\n\n# From net/route.h\nRTF_UP = 0x0001 # Route usable\nRTF_REJECT = 0x0200\n\n# From if_packet.h\nPACKET_HOST = 0 # To us\nPACKET_BROADCAST = 1 # To all\nPACKET_MULTICAST = 2 # To group\nPACKET_OTHERHOST = 3 # To someone else\nPACKET_OUTGOING = 4 # Outgoing of any type\nPACKET_LOOPBACK = 5 # MC/BRD frame looped back\nPACKET_USER = 6 # To user space\nPACKET_KERNEL = 7 # To kernel space\nPACKET_AUXDATA = 8\nPACKET_FASTROUTE = 6 # Fastrouted frame\n# Unused, PACKET_FASTROUTE and PACKET_LOOPBACK are invisible to user space\n\n# Utils\n\n\ndef get_if_raw_addr(iff):\n try:\n return get_if(iff, SIOCGIFADDR)[20:24]\n except IOError:\n return b\"\\0\\0\\0\\0\"\n\n\ndef get_if_list():\n try:\n f = open(\"/proc/net/dev\", \"rb\")\n except IOError:\n f.close()\n warning(\"Can't open /proc/net/dev !\")\n return []\n lst = []\n f.readline()\n f.readline()\n for line in f:\n line = plain_str(line)\n lst.append(line.split(\":\")[0].strip())\n f.close()\n return lst\n\n\ndef get_working_if():\n \"\"\"\n Return the name of the first network interfcace that is up.\n \"\"\"\n for i in get_if_list():\n if i == LOOPBACK_NAME:\n continue\n ifflags = struct.unpack(\"16xH14x\", get_if(i, SIOCGIFFLAGS))[0]\n if ifflags & IFF_UP:\n return i\n return LOOPBACK_NAME\n\n\ndef attach_filter(sock, bpf_filter, iface):\n # XXX We generate the 
filter on the interface conf.iface\n # because tcpdump open the \"any\" interface and ppp interfaces\n # in cooked mode. As we use them in raw mode, the filter will not\n # work... one solution could be to use \"any\" interface and translate\n # the filter from cooked mode to raw mode\n # mode\n bp = compile_filter(bpf_filter, iface)\n sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, bp)\n\n\ndef set_promisc(s, iff, val=1):\n mreq = struct.pack(\"IHH8s\", get_if_index(iff), PACKET_MR_PROMISC, 0, b\"\")\n if val:\n cmd = PACKET_ADD_MEMBERSHIP\n else:\n cmd = PACKET_DROP_MEMBERSHIP\n s.setsockopt(SOL_PACKET, cmd, mreq)\n\n\ndef get_alias_address(iface_name, ip_mask, gw_str, metric):\n \"\"\"\n Get the correct source IP address of an interface alias\n \"\"\"\n\n # Detect the architecture\n if scapy.consts.IS_64BITS:\n offset, name_len = 16, 40\n else:\n offset, name_len = 32, 32\n\n # Retrieve interfaces structures\n sck = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n names = array.array('B', b'\\0' * 4096)\n ifreq = ioctl(sck.fileno(), SIOCGIFCONF,\n struct.pack(\"iL\", len(names), names.buffer_info()[0]))\n\n # Extract interfaces names\n out = struct.unpack(\"iL\", ifreq)[0]\n names = names.tobytes() if six.PY3 else names.tostring()\n names = [names[i:i + offset].split(b'\\0', 1)[0] for i in range(0, out, name_len)] # noqa: E501\n\n # Look for the IP address\n for ifname in names:\n # Only look for a matching interface name\n if not ifname.decode(\"utf8\").startswith(iface_name):\n continue\n\n # Retrieve and convert addresses\n ifreq = ioctl(sck, SIOCGIFADDR, struct.pack(\"16s16x\", ifname))\n ifaddr = struct.unpack(\">I\", ifreq[20:24])[0]\n ifreq = ioctl(sck, SIOCGIFNETMASK, struct.pack(\"16s16x\", ifname))\n msk = struct.unpack(\">I\", ifreq[20:24])[0]\n\n # Get the full interface name\n ifname = plain_str(ifname)\n if ':' in ifname:\n ifname = ifname[:ifname.index(':')]\n else:\n continue\n\n # Check if the source address is included in the network\n if (ifaddr & msk) == ip_mask:\n sck.close()\n return (ifaddr & msk, msk, gw_str, ifname,\n scapy.utils.ltoa(ifaddr), metric)\n\n sck.close()\n return\n\n\ndef read_routes():\n try:\n f = open(\"/proc/net/route\", \"rb\")\n except IOError:\n warning(\"Can't open /proc/net/route !\")\n return []\n routes = []\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n try:\n ifreq = ioctl(s, SIOCGIFADDR, struct.pack(\"16s16x\", scapy.consts.LOOPBACK_NAME.encode(\"utf8\"))) # noqa: E501\n addrfamily = struct.unpack(\"h\", ifreq[16:18])[0]\n if addrfamily == socket.AF_INET:\n ifreq2 = ioctl(s, SIOCGIFNETMASK, struct.pack(\"16s16x\", scapy.consts.LOOPBACK_NAME.encode(\"utf8\"))) # noqa: E501\n msk = socket.ntohl(struct.unpack(\"I\", ifreq2[20:24])[0])\n dst = socket.ntohl(struct.unpack(\"I\", ifreq[20:24])[0]) & msk\n ifaddr = scapy.utils.inet_ntoa(ifreq[20:24])\n routes.append((dst, msk, \"0.0.0.0\", scapy.consts.LOOPBACK_NAME, ifaddr, 1)) # noqa: E501\n else:\n warning(\"Interface %s: unknown address family (%i)\" % (scapy.consts.LOOPBACK_NAME, addrfamily)) # noqa: E501\n except IOError as err:\n if err.errno == 99:\n warning(\"Interface %s: no address assigned\" % scapy.consts.LOOPBACK_NAME) # noqa: E501\n else:\n warning(\"Interface %s: failed to get address config (%s)\" % (scapy.consts.LOOPBACK_NAME, str(err))) # noqa: E501\n\n for line in f.readlines()[1:]:\n line = plain_str(line)\n iff, dst, gw, flags, _, _, metric, msk, _, _, _ = line.split()\n flags = int(flags, 16)\n if flags & RTF_UP == 0:\n continue\n if flags & 
RTF_REJECT:\n continue\n try:\n ifreq = ioctl(s, SIOCGIFADDR, struct.pack(\"16s16x\", iff.encode(\"utf8\"))) # noqa: E501\n except IOError: # interface is present in routing tables but does not have any assigned IP # noqa: E501\n ifaddr = \"0.0.0.0\"\n ifaddr_int = 0\n else:\n addrfamily = struct.unpack(\"h\", ifreq[16:18])[0]\n if addrfamily == socket.AF_INET:\n ifaddr = scapy.utils.inet_ntoa(ifreq[20:24])\n ifaddr_int = struct.unpack(\"!I\", ifreq[20:24])[0]\n else:\n warning(\"Interface %s: unknown address family (%i)\", iff, addrfamily) # noqa: E501\n continue\n\n # Attempt to detect an interface alias based on addresses inconsistencies # noqa: E501\n dst_int = socket.htonl(int(dst, 16)) & 0xffffffff\n msk_int = socket.htonl(int(msk, 16)) & 0xffffffff\n gw_str = scapy.utils.inet_ntoa(struct.pack(\"I\", int(gw, 16)))\n metric = int(metric)\n\n if ifaddr_int & msk_int != dst_int:\n tmp_route = get_alias_address(iff, dst_int, gw_str, metric)\n if tmp_route:\n routes.append(tmp_route)\n else:\n routes.append((dst_int, msk_int, gw_str, iff, ifaddr, metric))\n\n else:\n routes.append((dst_int, msk_int, gw_str, iff, ifaddr, metric))\n\n f.close()\n s.close()\n return routes\n\n############\n# IPv6 #\n############\n\n\ndef in6_getifaddr():\n \"\"\"\n Returns a list of 3-tuples of the form (addr, scope, iface) where\n 'addr' is the address of scope 'scope' associated to the interface\n 'iface'.\n\n This is the list of all addresses of all interfaces available on\n the system.\n \"\"\"\n ret = []\n try:\n fdesc = open(\"/proc/net/if_inet6\", \"rb\")\n except IOError:\n return ret\n for line in fdesc:\n # addr, index, plen, scope, flags, ifname\n tmp = plain_str(line).split()\n addr = scapy.utils6.in6_ptop(\n b':'.join(\n struct.unpack('4s4s4s4s4s4s4s4s', tmp[0].encode())\n ).decode()\n )\n # (addr, scope, iface)\n ret.append((addr, int(tmp[3], 16), tmp[5]))\n fdesc.close()\n return ret\n\n\ndef read_routes6():\n try:\n f = open(\"/proc/net/ipv6_route\", \"rb\")\n except IOError:\n return []\n # 1. destination network\n # 2. destination prefix length\n # 3. source network displayed\n # 4. source prefix length\n # 5. next hop\n # 6. metric\n # 7. reference counter (?!?)\n # 8. use counter (?!?)\n # 9. flags\n # 10. 
device name\n routes = []\n\n def proc2r(p):\n ret = struct.unpack('4s4s4s4s4s4s4s4s', p)\n ret = b':'.join(ret).decode()\n return scapy.utils6.in6_ptop(ret)\n\n lifaddr = in6_getifaddr()\n for line in f.readlines():\n d, dp, _, _, nh, metric, rc, us, fl, dev = line.split()\n metric = int(metric, 16)\n fl = int(fl, 16)\n dev = plain_str(dev)\n\n if fl & RTF_UP == 0:\n continue\n if fl & RTF_REJECT:\n continue\n\n d = proc2r(d)\n dp = int(dp, 16)\n nh = proc2r(nh)\n\n cset = [] # candidate set (possible source addresses)\n if dev == LOOPBACK_NAME:\n if d == '::':\n continue\n cset = ['::1']\n else:\n devaddrs = (x for x in lifaddr if x[2] == dev)\n cset = scapy.utils6.construct_source_candidate_set(d, dp, devaddrs)\n\n if len(cset) != 0:\n routes.append((d, dp, nh, dev, cset, metric))\n f.close()\n return routes\n\n\ndef get_if_index(iff):\n return int(struct.unpack(\"I\", get_if(iff, SIOCGIFINDEX)[16:20])[0])\n\n\nif os.uname()[4] in ['x86_64', 'aarch64']:\n def get_last_packet_timestamp(sock):\n ts = ioctl(sock, SIOCGSTAMP, \"1234567890123456\")\n s, us = struct.unpack(\"QQ\", ts)\n return s + us / 1000000.0\nelse:\n def get_last_packet_timestamp(sock):\n ts = ioctl(sock, SIOCGSTAMP, \"12345678\")\n s, us = struct.unpack(\"II\", ts)\n return s + us / 1000000.0\n\n\ndef _flush_fd(fd):\n if hasattr(fd, 'fileno'):\n fd = fd.fileno()\n while True:\n r, w, e = select([fd], [], [], 0)\n if r:\n os.read(fd, MTU)\n else:\n break\n\n\ndef get_iface_mode(iface):\n \"\"\"Return the interface mode.\n params:\n - iface: the iwconfig interface\n \"\"\"\n p = subprocess.Popen([\"iwconfig\", iface], stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT)\n output, err = p.communicate()\n match = re.search(br\"mode:([a-zA-Z]*)\", output.lower())\n if match:\n return plain_str(match.group(1))\n return \"unknown\"\n\n\ndef set_iface_monitor(iface, monitor):\n \"\"\"Sets the monitor mode (or remove it) from an interface.\n params:\n - iface: the iwconfig interface\n - monitor: True if the interface should be set in monitor mode,\n False if it should be in managed mode\n \"\"\"\n mode = get_iface_mode(iface)\n if mode == \"unknown\":\n warning(\"Could not parse iwconfig !\")\n current_monitor = mode == \"monitor\"\n if monitor == current_monitor:\n # Already correct\n return True\n s_mode = \"monitor\" if monitor else \"managed\"\n\n def _check_call(commands):\n p = subprocess.Popen(commands,\n stderr=subprocess.PIPE,\n stdout=subprocess.PIPE)\n stdout, stderr = p.communicate()\n if p.returncode != 0:\n warning(\"%s failed !\" % \" \".join(commands))\n return False\n return True\n if not _check_call([\"ifconfig\", iface, \"down\"]):\n return False\n if not _check_call([\"iwconfig\", iface, \"mode\", s_mode]):\n return False\n if not _check_call([\"ifconfig\", iface, \"up\"]):\n return False\n return True\n\n\nclass L2Socket(SuperSocket):\n desc = \"read/write packets at layer 2 using Linux PF_PACKET sockets\"\n\n def __init__(self, iface=None, type=ETH_P_ALL, promisc=None, filter=None,\n nofilter=0, monitor=None):\n self.iface = conf.iface if iface is None else iface\n self.type = type\n self.promisc = conf.sniff_promisc if promisc is None else promisc\n if monitor is not None:\n warning(\n \"The monitor argument is ineffective on native linux sockets.\"\n \" Use set_iface_monitor instead.\"\n )\n self.ins = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(type)) # noqa: E501\n if not nofilter:\n if conf.except_filter:\n if filter:\n filter = \"(%s) and not (%s)\" % (filter, conf.except_filter)\n 
else:\n filter = \"not (%s)\" % conf.except_filter\n if filter is not None:\n attach_filter(self.ins, filter, iface)\n if self.promisc:\n set_promisc(self.ins, self.iface)\n self.ins.bind((self.iface, type))\n _flush_fd(self.ins)\n self.ins.setsockopt(\n socket.SOL_SOCKET,\n socket.SO_RCVBUF,\n conf.bufsize\n )\n if not six.PY2:\n # Receive Auxiliary Data (VLAN tags)\n self.ins.setsockopt(SOL_PACKET, PACKET_AUXDATA, 1)\n self.ins.setsockopt(\n socket.SOL_SOCKET,\n SO_TIMESTAMPNS,\n 1\n )\n if isinstance(self, L2ListenSocket):\n self.outs = None\n else:\n self.outs = self.ins\n self.outs.setsockopt(\n socket.SOL_SOCKET,\n socket.SO_SNDBUF,\n conf.bufsize\n )\n sa_ll = self.ins.getsockname()\n if sa_ll[3] in conf.l2types:\n self.LL = conf.l2types[sa_ll[3]]\n self.lvl = 2\n elif sa_ll[1] in conf.l3types:\n self.LL = conf.l3types[sa_ll[1]]\n self.lvl = 3\n else:\n self.LL = conf.default_l2\n self.lvl = 2\n warning(\"Unable to guess type (interface=%s protocol=%#x family=%i). Using %s\", sa_ll[0], sa_ll[1], sa_ll[3], self.LL.name) # noqa: E501\n\n def close(self):\n if self.closed:\n return\n try:\n if self.promisc and self.ins:\n set_promisc(self.ins, self.iface, 0)\n except AttributeError:\n pass\n SuperSocket.close(self)\n\n def recv_raw(self, x=MTU):\n \"\"\"Receives a packet, then returns a tuple containing (cls, pkt_data, time)\"\"\" # noqa: E501\n pkt, sa_ll, ts = self._recv_raw(self.ins, x)\n if self.outs and sa_ll[2] == socket.PACKET_OUTGOING:\n return None, None, None\n if ts is None:\n ts = get_last_packet_timestamp(self.ins)\n return self.LL, pkt, ts\n\n def send(self, x):\n try:\n return SuperSocket.send(self, x)\n except socket.error as msg:\n if msg.errno == 22 and len(x) < conf.min_pkt_size:\n padding = b\"\\x00\" * (conf.min_pkt_size - len(x))\n if isinstance(x, Packet):\n return SuperSocket.send(self, x / Padding(load=padding))\n else:\n return SuperSocket.send(self, raw(x) + padding)\n raise\n\n\nclass L2ListenSocket(L2Socket):\n desc = \"read packets at layer 2 using Linux PF_PACKET sockets. 
Also receives the packets going OUT\" # noqa: E501\n\n def send(self, x):\n raise Scapy_Exception(\"Can't send anything with L2ListenSocket\")\n\n\nclass L3PacketSocket(L2Socket):\n desc = \"read/write packets at layer 3 using Linux PF_PACKET sockets\"\n\n def recv(self, x=MTU):\n pkt = SuperSocket.recv(self, x)\n if pkt and self.lvl == 2:\n pkt.payload.time = pkt.time\n return pkt.payload\n return pkt\n\n def send(self, x):\n iff = x.route()[0]\n if iff is None:\n iff = conf.iface\n sdto = (iff, self.type)\n self.outs.bind(sdto)\n sn = self.outs.getsockname()\n ll = lambda x: x\n if type(x) in conf.l3types:\n sdto = (iff, conf.l3types[type(x)])\n if sn[3] in conf.l2types:\n ll = lambda x: conf.l2types[sn[3]]() / x\n sx = raw(ll(x))\n x.sent_time = time.time()\n try:\n self.outs.sendto(sx, sdto)\n except socket.error as msg:\n if msg.errno == 22 and len(sx) < conf.min_pkt_size:\n self.outs.send(sx + b\"\\x00\" * (conf.min_pkt_size - len(sx)))\n elif conf.auto_fragment and msg.errno == 90:\n for p in x.fragment():\n self.outs.sendto(raw(ll(p)), sdto)\n else:\n raise\n\n\nclass VEthPair(object):\n \"\"\"\n encapsulates a virtual Ethernet interface pair\n \"\"\"\n\n def __init__(self, iface_name, peer_name):\n\n if not LINUX:\n # ToDo: do we need a kernel version check here?\n raise ScapyInvalidPlatformException(\n 'Virtual Ethernet interface pair only available on Linux'\n )\n\n self.ifaces = [iface_name, peer_name]\n\n def iface(self):\n return self.ifaces[0]\n\n def peer(self):\n return self.ifaces[1]\n\n def setup(self):\n \"\"\"\n create veth pair links\n :raises subprocess.CalledProcessError if operation fails\n \"\"\"\n subprocess.check_call(['ip', 'link', 'add', self.ifaces[0], 'type', 'veth', 'peer', 'name', self.ifaces[1]]) # noqa: E501\n\n def destroy(self):\n \"\"\"\n remove veth pair links\n :raises subprocess.CalledProcessError if operation fails\n \"\"\"\n subprocess.check_call(['ip', 'link', 'del', self.ifaces[0]])\n\n def up(self):\n \"\"\"\n set veth pair links up\n :raises subprocess.CalledProcessError if operation fails\n \"\"\"\n for idx in [0, 1]:\n subprocess.check_call([\"ip\", \"link\", \"set\", self.ifaces[idx], \"up\"]) # noqa: E501\n\n def down(self):\n \"\"\"\n set veth pair links down\n :raises subprocess.CalledProcessError if operation fails\n \"\"\"\n for idx in [0, 1]:\n subprocess.check_call([\"ip\", \"link\", \"set\", self.ifaces[idx], \"down\"]) # noqa: E501\n\n def __enter__(self):\n self.setup()\n self.up()\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n self.destroy()\n", "path": "scapy/arch/linux.py" } ]
[ { "content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nLinux specific functions.\n\"\"\"\n\nfrom __future__ import absolute_import\n\n\nimport array\nfrom fcntl import ioctl\nimport os\nfrom select import select\nimport socket\nimport struct\nimport time\nimport re\n\nimport subprocess\n\nfrom scapy.compat import raw, plain_str\nfrom scapy.consts import LOOPBACK_NAME, LINUX\nimport scapy.utils\nimport scapy.utils6\nfrom scapy.packet import Packet, Padding\nfrom scapy.config import conf\nfrom scapy.data import MTU, ETH_P_ALL, SOL_PACKET, SO_ATTACH_FILTER, \\\n SO_TIMESTAMPNS\nfrom scapy.supersocket import SuperSocket\nfrom scapy.error import warning, Scapy_Exception, \\\n ScapyInvalidPlatformException\nfrom scapy.arch.common import get_if, compile_filter\nimport scapy.modules.six as six\nfrom scapy.modules.six.moves import range\n\nfrom scapy.arch.common import get_if_raw_hwaddr # noqa: F401\n\n# From bits/ioctls.h\nSIOCGIFHWADDR = 0x8927 # Get hardware address\nSIOCGIFADDR = 0x8915 # get PA address\nSIOCGIFNETMASK = 0x891b # get network PA mask\nSIOCGIFNAME = 0x8910 # get iface name\nSIOCSIFLINK = 0x8911 # set iface channel\nSIOCGIFCONF = 0x8912 # get iface list\nSIOCGIFFLAGS = 0x8913 # get flags\nSIOCSIFFLAGS = 0x8914 # set flags\nSIOCGIFINDEX = 0x8933 # name -> if_index mapping\nSIOCGIFCOUNT = 0x8938 # get number of devices\nSIOCGSTAMP = 0x8906 # get packet timestamp (as a timeval)\n\n# From if.h\nIFF_UP = 0x1 # Interface is up.\nIFF_BROADCAST = 0x2 # Broadcast address valid.\nIFF_DEBUG = 0x4 # Turn on debugging.\nIFF_LOOPBACK = 0x8 # Is a loopback net.\nIFF_POINTOPOINT = 0x10 # Interface is point-to-point link.\nIFF_NOTRAILERS = 0x20 # Avoid use of trailers.\nIFF_RUNNING = 0x40 # Resources allocated.\nIFF_NOARP = 0x80 # No address resolution protocol.\nIFF_PROMISC = 0x100 # Receive all packets.\n\n# From netpacket/packet.h\nPACKET_ADD_MEMBERSHIP = 1\nPACKET_DROP_MEMBERSHIP = 2\nPACKET_RECV_OUTPUT = 3\nPACKET_RX_RING = 5\nPACKET_STATISTICS = 6\nPACKET_MR_MULTICAST = 0\nPACKET_MR_PROMISC = 1\nPACKET_MR_ALLMULTI = 2\n\n# From net/route.h\nRTF_UP = 0x0001 # Route usable\nRTF_REJECT = 0x0200\n\n# From if_packet.h\nPACKET_HOST = 0 # To us\nPACKET_BROADCAST = 1 # To all\nPACKET_MULTICAST = 2 # To group\nPACKET_OTHERHOST = 3 # To someone else\nPACKET_OUTGOING = 4 # Outgoing of any type\nPACKET_LOOPBACK = 5 # MC/BRD frame looped back\nPACKET_USER = 6 # To user space\nPACKET_KERNEL = 7 # To kernel space\nPACKET_AUXDATA = 8\nPACKET_FASTROUTE = 6 # Fastrouted frame\n# Unused, PACKET_FASTROUTE and PACKET_LOOPBACK are invisible to user space\n\n# Utils\n\n\ndef get_if_raw_addr(iff):\n try:\n return get_if(iff, SIOCGIFADDR)[20:24]\n except IOError:\n return b\"\\0\\0\\0\\0\"\n\n\ndef get_if_list():\n try:\n f = open(\"/proc/net/dev\", \"rb\")\n except IOError:\n f.close()\n warning(\"Can't open /proc/net/dev !\")\n return []\n lst = []\n f.readline()\n f.readline()\n for line in f:\n line = plain_str(line)\n lst.append(line.split(\":\")[0].strip())\n f.close()\n return lst\n\n\ndef get_working_if():\n \"\"\"\n Return the name of the first network interfcace that is up.\n \"\"\"\n for i in get_if_list():\n if i == LOOPBACK_NAME:\n continue\n ifflags = struct.unpack(\"16xH14x\", get_if(i, SIOCGIFFLAGS))[0]\n if ifflags & IFF_UP:\n return i\n return LOOPBACK_NAME\n\n\ndef attach_filter(sock, bpf_filter, iface):\n # XXX We generate the 
filter on the interface conf.iface\n # because tcpdump open the \"any\" interface and ppp interfaces\n # in cooked mode. As we use them in raw mode, the filter will not\n # work... one solution could be to use \"any\" interface and translate\n # the filter from cooked mode to raw mode\n # mode\n bp = compile_filter(bpf_filter, iface)\n sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, bp)\n\n\ndef set_promisc(s, iff, val=1):\n mreq = struct.pack(\"IHH8s\", get_if_index(iff), PACKET_MR_PROMISC, 0, b\"\")\n if val:\n cmd = PACKET_ADD_MEMBERSHIP\n else:\n cmd = PACKET_DROP_MEMBERSHIP\n s.setsockopt(SOL_PACKET, cmd, mreq)\n\n\ndef get_alias_address(iface_name, ip_mask, gw_str, metric):\n \"\"\"\n Get the correct source IP address of an interface alias\n \"\"\"\n\n # Detect the architecture\n if scapy.consts.IS_64BITS:\n offset, name_len = 16, 40\n else:\n offset, name_len = 32, 32\n\n # Retrieve interfaces structures\n sck = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n names = array.array('B', b'\\0' * 4096)\n ifreq = ioctl(sck.fileno(), SIOCGIFCONF,\n struct.pack(\"iL\", len(names), names.buffer_info()[0]))\n\n # Extract interfaces names\n out = struct.unpack(\"iL\", ifreq)[0]\n names = names.tobytes() if six.PY3 else names.tostring()\n names = [names[i:i + offset].split(b'\\0', 1)[0] for i in range(0, out, name_len)] # noqa: E501\n\n # Look for the IP address\n for ifname in names:\n # Only look for a matching interface name\n if not ifname.decode(\"utf8\").startswith(iface_name):\n continue\n\n # Retrieve and convert addresses\n ifreq = ioctl(sck, SIOCGIFADDR, struct.pack(\"16s16x\", ifname))\n ifaddr = struct.unpack(\">I\", ifreq[20:24])[0]\n ifreq = ioctl(sck, SIOCGIFNETMASK, struct.pack(\"16s16x\", ifname))\n msk = struct.unpack(\">I\", ifreq[20:24])[0]\n\n # Get the full interface name\n ifname = plain_str(ifname)\n if ':' in ifname:\n ifname = ifname[:ifname.index(':')]\n else:\n continue\n\n # Check if the source address is included in the network\n if (ifaddr & msk) == ip_mask:\n sck.close()\n return (ifaddr & msk, msk, gw_str, ifname,\n scapy.utils.ltoa(ifaddr), metric)\n\n sck.close()\n return\n\n\ndef read_routes():\n try:\n f = open(\"/proc/net/route\", \"rb\")\n except IOError:\n warning(\"Can't open /proc/net/route !\")\n return []\n routes = []\n s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n try:\n ifreq = ioctl(s, SIOCGIFADDR, struct.pack(\"16s16x\", scapy.consts.LOOPBACK_NAME.encode(\"utf8\"))) # noqa: E501\n addrfamily = struct.unpack(\"h\", ifreq[16:18])[0]\n if addrfamily == socket.AF_INET:\n ifreq2 = ioctl(s, SIOCGIFNETMASK, struct.pack(\"16s16x\", scapy.consts.LOOPBACK_NAME.encode(\"utf8\"))) # noqa: E501\n msk = socket.ntohl(struct.unpack(\"I\", ifreq2[20:24])[0])\n dst = socket.ntohl(struct.unpack(\"I\", ifreq[20:24])[0]) & msk\n ifaddr = scapy.utils.inet_ntoa(ifreq[20:24])\n routes.append((dst, msk, \"0.0.0.0\", scapy.consts.LOOPBACK_NAME, ifaddr, 1)) # noqa: E501\n else:\n warning(\"Interface %s: unknown address family (%i)\" % (scapy.consts.LOOPBACK_NAME, addrfamily)) # noqa: E501\n except IOError as err:\n if err.errno == 99:\n warning(\"Interface %s: no address assigned\" % scapy.consts.LOOPBACK_NAME) # noqa: E501\n else:\n warning(\"Interface %s: failed to get address config (%s)\" % (scapy.consts.LOOPBACK_NAME, str(err))) # noqa: E501\n\n for line in f.readlines()[1:]:\n line = plain_str(line)\n iff, dst, gw, flags, _, _, metric, msk, _, _, _ = line.split()\n flags = int(flags, 16)\n if flags & RTF_UP == 0:\n continue\n if flags & 
RTF_REJECT:\n continue\n try:\n ifreq = ioctl(s, SIOCGIFADDR, struct.pack(\"16s16x\", iff.encode(\"utf8\"))) # noqa: E501\n except IOError: # interface is present in routing tables but does not have any assigned IP # noqa: E501\n ifaddr = \"0.0.0.0\"\n ifaddr_int = 0\n else:\n addrfamily = struct.unpack(\"h\", ifreq[16:18])[0]\n if addrfamily == socket.AF_INET:\n ifaddr = scapy.utils.inet_ntoa(ifreq[20:24])\n ifaddr_int = struct.unpack(\"!I\", ifreq[20:24])[0]\n else:\n warning(\"Interface %s: unknown address family (%i)\", iff, addrfamily) # noqa: E501\n continue\n\n # Attempt to detect an interface alias based on addresses inconsistencies # noqa: E501\n dst_int = socket.htonl(int(dst, 16)) & 0xffffffff\n msk_int = socket.htonl(int(msk, 16)) & 0xffffffff\n gw_str = scapy.utils.inet_ntoa(struct.pack(\"I\", int(gw, 16)))\n metric = int(metric)\n\n if ifaddr_int & msk_int != dst_int:\n tmp_route = get_alias_address(iff, dst_int, gw_str, metric)\n if tmp_route:\n routes.append(tmp_route)\n else:\n routes.append((dst_int, msk_int, gw_str, iff, ifaddr, metric))\n\n else:\n routes.append((dst_int, msk_int, gw_str, iff, ifaddr, metric))\n\n f.close()\n s.close()\n return routes\n\n############\n# IPv6 #\n############\n\n\ndef in6_getifaddr():\n \"\"\"\n Returns a list of 3-tuples of the form (addr, scope, iface) where\n 'addr' is the address of scope 'scope' associated to the interface\n 'iface'.\n\n This is the list of all addresses of all interfaces available on\n the system.\n \"\"\"\n ret = []\n try:\n fdesc = open(\"/proc/net/if_inet6\", \"rb\")\n except IOError:\n return ret\n for line in fdesc:\n # addr, index, plen, scope, flags, ifname\n tmp = plain_str(line).split()\n addr = scapy.utils6.in6_ptop(\n b':'.join(\n struct.unpack('4s4s4s4s4s4s4s4s', tmp[0].encode())\n ).decode()\n )\n # (addr, scope, iface)\n ret.append((addr, int(tmp[3], 16), tmp[5]))\n fdesc.close()\n return ret\n\n\ndef read_routes6():\n try:\n f = open(\"/proc/net/ipv6_route\", \"rb\")\n except IOError:\n return []\n # 1. destination network\n # 2. destination prefix length\n # 3. source network displayed\n # 4. source prefix length\n # 5. next hop\n # 6. metric\n # 7. reference counter (?!?)\n # 8. use counter (?!?)\n # 9. flags\n # 10. 
device name\n routes = []\n\n def proc2r(p):\n ret = struct.unpack('4s4s4s4s4s4s4s4s', p)\n ret = b':'.join(ret).decode()\n return scapy.utils6.in6_ptop(ret)\n\n lifaddr = in6_getifaddr()\n for line in f.readlines():\n d, dp, _, _, nh, metric, rc, us, fl, dev = line.split()\n metric = int(metric, 16)\n fl = int(fl, 16)\n dev = plain_str(dev)\n\n if fl & RTF_UP == 0:\n continue\n if fl & RTF_REJECT:\n continue\n\n d = proc2r(d)\n dp = int(dp, 16)\n nh = proc2r(nh)\n\n cset = [] # candidate set (possible source addresses)\n if dev == LOOPBACK_NAME:\n if d == '::':\n continue\n cset = ['::1']\n else:\n devaddrs = (x for x in lifaddr if x[2] == dev)\n cset = scapy.utils6.construct_source_candidate_set(d, dp, devaddrs)\n\n if len(cset) != 0:\n routes.append((d, dp, nh, dev, cset, metric))\n f.close()\n return routes\n\n\ndef get_if_index(iff):\n return int(struct.unpack(\"I\", get_if(iff, SIOCGIFINDEX)[16:20])[0])\n\n\nif os.uname()[4] in ['x86_64', 'aarch64']:\n def get_last_packet_timestamp(sock):\n ts = ioctl(sock, SIOCGSTAMP, \"1234567890123456\")\n s, us = struct.unpack(\"QQ\", ts)\n return s + us / 1000000.0\nelse:\n def get_last_packet_timestamp(sock):\n ts = ioctl(sock, SIOCGSTAMP, \"12345678\")\n s, us = struct.unpack(\"II\", ts)\n return s + us / 1000000.0\n\n\ndef _flush_fd(fd):\n if hasattr(fd, 'fileno'):\n fd = fd.fileno()\n while True:\n r, w, e = select([fd], [], [], 0)\n if r:\n os.read(fd, MTU)\n else:\n break\n\n\ndef get_iface_mode(iface):\n \"\"\"Return the interface mode.\n params:\n - iface: the iwconfig interface\n \"\"\"\n p = subprocess.Popen([\"iwconfig\", iface], stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT)\n output, err = p.communicate()\n match = re.search(br\"mode:([a-zA-Z]*)\", output.lower())\n if match:\n return plain_str(match.group(1))\n return \"unknown\"\n\n\ndef set_iface_monitor(iface, monitor):\n \"\"\"Sets the monitor mode (or remove it) from an interface.\n params:\n - iface: the iwconfig interface\n - monitor: True if the interface should be set in monitor mode,\n False if it should be in managed mode\n \"\"\"\n mode = get_iface_mode(iface)\n if mode == \"unknown\":\n warning(\"Could not parse iwconfig !\")\n current_monitor = mode == \"monitor\"\n if monitor == current_monitor:\n # Already correct\n return True\n s_mode = \"monitor\" if monitor else \"managed\"\n\n def _check_call(commands):\n p = subprocess.Popen(commands,\n stderr=subprocess.PIPE,\n stdout=subprocess.PIPE)\n stdout, stderr = p.communicate()\n if p.returncode != 0:\n warning(\"%s failed !\" % \" \".join(commands))\n return False\n return True\n if not _check_call([\"ifconfig\", iface, \"down\"]):\n return False\n if not _check_call([\"iwconfig\", iface, \"mode\", s_mode]):\n return False\n if not _check_call([\"ifconfig\", iface, \"up\"]):\n return False\n return True\n\n\nclass L2Socket(SuperSocket):\n desc = \"read/write packets at layer 2 using Linux PF_PACKET sockets\"\n\n def __init__(self, iface=None, type=ETH_P_ALL, promisc=None, filter=None,\n nofilter=0, monitor=None):\n self.iface = conf.iface if iface is None else iface\n self.type = type\n self.promisc = conf.sniff_promisc if promisc is None else promisc\n if monitor is not None:\n warning(\n \"The monitor argument is ineffective on native linux sockets.\"\n \" Use set_iface_monitor instead.\"\n )\n self.ins = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(type)) # noqa: E501\n if not nofilter:\n if conf.except_filter:\n if filter:\n filter = \"(%s) and not (%s)\" % (filter, conf.except_filter)\n 
else:\n filter = \"not (%s)\" % conf.except_filter\n if filter is not None:\n attach_filter(self.ins, filter, iface)\n if self.promisc:\n set_promisc(self.ins, self.iface)\n self.ins.bind((self.iface, type))\n _flush_fd(self.ins)\n self.ins.setsockopt(\n socket.SOL_SOCKET,\n socket.SO_RCVBUF,\n conf.bufsize\n )\n if not six.PY2:\n # Receive Auxiliary Data (VLAN tags)\n self.ins.setsockopt(SOL_PACKET, PACKET_AUXDATA, 1)\n self.ins.setsockopt(\n socket.SOL_SOCKET,\n SO_TIMESTAMPNS,\n 1\n )\n if isinstance(self, L2ListenSocket):\n self.outs = None\n else:\n self.outs = self.ins\n self.outs.setsockopt(\n socket.SOL_SOCKET,\n socket.SO_SNDBUF,\n conf.bufsize\n )\n sa_ll = self.ins.getsockname()\n if sa_ll[3] in conf.l2types:\n self.LL = conf.l2types[sa_ll[3]]\n self.lvl = 2\n elif sa_ll[1] in conf.l3types:\n self.LL = conf.l3types[sa_ll[1]]\n self.lvl = 3\n else:\n self.LL = conf.default_l2\n self.lvl = 2\n warning(\"Unable to guess type (interface=%s protocol=%#x family=%i). Using %s\", sa_ll[0], sa_ll[1], sa_ll[3], self.LL.name) # noqa: E501\n\n def close(self):\n if self.closed:\n return\n try:\n if self.promisc and self.ins:\n set_promisc(self.ins, self.iface, 0)\n except (AttributeError, OSError):\n pass\n SuperSocket.close(self)\n\n def recv_raw(self, x=MTU):\n \"\"\"Receives a packet, then returns a tuple containing (cls, pkt_data, time)\"\"\" # noqa: E501\n pkt, sa_ll, ts = self._recv_raw(self.ins, x)\n if self.outs and sa_ll[2] == socket.PACKET_OUTGOING:\n return None, None, None\n if ts is None:\n ts = get_last_packet_timestamp(self.ins)\n return self.LL, pkt, ts\n\n def send(self, x):\n try:\n return SuperSocket.send(self, x)\n except socket.error as msg:\n if msg.errno == 22 and len(x) < conf.min_pkt_size:\n padding = b\"\\x00\" * (conf.min_pkt_size - len(x))\n if isinstance(x, Packet):\n return SuperSocket.send(self, x / Padding(load=padding))\n else:\n return SuperSocket.send(self, raw(x) + padding)\n raise\n\n\nclass L2ListenSocket(L2Socket):\n desc = \"read packets at layer 2 using Linux PF_PACKET sockets. 
Also receives the packets going OUT\" # noqa: E501\n\n def send(self, x):\n raise Scapy_Exception(\"Can't send anything with L2ListenSocket\")\n\n\nclass L3PacketSocket(L2Socket):\n desc = \"read/write packets at layer 3 using Linux PF_PACKET sockets\"\n\n def recv(self, x=MTU):\n pkt = SuperSocket.recv(self, x)\n if pkt and self.lvl == 2:\n pkt.payload.time = pkt.time\n return pkt.payload\n return pkt\n\n def send(self, x):\n iff = x.route()[0]\n if iff is None:\n iff = conf.iface\n sdto = (iff, self.type)\n self.outs.bind(sdto)\n sn = self.outs.getsockname()\n ll = lambda x: x\n if type(x) in conf.l3types:\n sdto = (iff, conf.l3types[type(x)])\n if sn[3] in conf.l2types:\n ll = lambda x: conf.l2types[sn[3]]() / x\n sx = raw(ll(x))\n x.sent_time = time.time()\n try:\n self.outs.sendto(sx, sdto)\n except socket.error as msg:\n if msg.errno == 22 and len(sx) < conf.min_pkt_size:\n self.outs.send(sx + b\"\\x00\" * (conf.min_pkt_size - len(sx)))\n elif conf.auto_fragment and msg.errno == 90:\n for p in x.fragment():\n self.outs.sendto(raw(ll(p)), sdto)\n else:\n raise\n\n\nclass VEthPair(object):\n \"\"\"\n encapsulates a virtual Ethernet interface pair\n \"\"\"\n\n def __init__(self, iface_name, peer_name):\n\n if not LINUX:\n # ToDo: do we need a kernel version check here?\n raise ScapyInvalidPlatformException(\n 'Virtual Ethernet interface pair only available on Linux'\n )\n\n self.ifaces = [iface_name, peer_name]\n\n def iface(self):\n return self.ifaces[0]\n\n def peer(self):\n return self.ifaces[1]\n\n def setup(self):\n \"\"\"\n create veth pair links\n :raises subprocess.CalledProcessError if operation fails\n \"\"\"\n subprocess.check_call(['ip', 'link', 'add', self.ifaces[0], 'type', 'veth', 'peer', 'name', self.ifaces[1]]) # noqa: E501\n\n def destroy(self):\n \"\"\"\n remove veth pair links\n :raises subprocess.CalledProcessError if operation fails\n \"\"\"\n subprocess.check_call(['ip', 'link', 'del', self.ifaces[0]])\n\n def up(self):\n \"\"\"\n set veth pair links up\n :raises subprocess.CalledProcessError if operation fails\n \"\"\"\n for idx in [0, 1]:\n subprocess.check_call([\"ip\", \"link\", \"set\", self.ifaces[idx], \"up\"]) # noqa: E501\n\n def down(self):\n \"\"\"\n set veth pair links down\n :raises subprocess.CalledProcessError if operation fails\n \"\"\"\n for idx in [0, 1]:\n subprocess.check_call([\"ip\", \"link\", \"set\", self.ifaces[idx], \"down\"]) # noqa: E501\n\n def __enter__(self):\n self.setup()\n self.up()\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n self.destroy()\n", "path": "scapy/arch/linux.py" } ]
diff --git a/scapy/arch/linux.py b/scapy/arch/linux.py index a6a70b988c9..6a082176e39 100644 --- a/scapy/arch/linux.py +++ b/scapy/arch/linux.py @@ -494,7 +494,7 @@ def close(self): try: if self.promisc and self.ins: set_promisc(self.ins, self.iface, 0) - except AttributeError: + except (AttributeError, OSError): pass SuperSocket.close(self)
graspologic-org__graspologic-583
Fix bug in `is_unweighted` for sparse - [ ] Does this PR add any new dependencies? - [ ] Does this PR modify any existing APIs? - [ ] Is the change to the API backwards compatible? - [ ] Have you built the documentation (reference and/or tutorial) and verified the generated documentation is appropriate? #### Reference Issues/PRs #### What does this implement/fix? Briefly explain your changes. `is_unweighted` doesn't work properly for a sparse array input #### Any other comments? I think we could instead just do `graph[graph != 0].max() == 1 and graph[graph != 0].min() == 1` for that entire section of the code. [BUG] Bug in joblib transitive dependency causes exception when multi-threading ## Expected Behavior Multi-threading LatentDistributionTest using a "workers" value != 1 should return without error on all platforms. ## Actual Behavior When using any "workers" value > 1 or equal to -1 on a Windows computer, the code throws an exception. ## Example Code ```python test = LatentDistributionTest(input_graph=False, workers=10) result = test.fit_predict(graph1, graph2) ``` ## Full Traceback ```pytb C:\ProgramData\Anaconda3\lib\site-packages\joblib\disk.py:122: UserWarning: Unable to delete folder C:\Users\msrwinadm4\AppData\Local\Temp\5\joblib_memmapping_folder_11132_7308949288 after 5 tentatives. .format(folder_path, RM_SUBDIRS_N_RETRY)) Traceback (most recent call last): File "GraphsByOrg.py", line 79, in <module> logger.info(f'Calculating nonpar for {org1} and {org2}') File "C:\ProgramData\Anaconda3\lib\site-packages\graspologic\inference\latent_distribution_test.py", line 487, in fit_predict self.fit(A1, A2) File "C:\ProgramData\Anaconda3\lib\site-packages\graspologic\inference\latent_distribution_test.py", line 449, in fit X1_hat, X2_hat, reps=self.n_bootstraps, workers=self.workers, auto=False File "C:\ProgramData\Anaconda3\lib\site-packages\hyppo\ksample\ksamp.py", line 166, in test return self.indep_test.test(u, v, reps, workers, auto=auto) File "C:\ProgramData\Anaconda3\lib\site-packages\hyppo\independence\dcorr.py", line 215, in test stat, pvalue = super(Dcorr, self).test(x, y, reps, workers) File "C:\ProgramData\Anaconda3\lib\site-packages\hyppo\independence\base.py", line 67, in test self._statistic, x, y, reps=reps, workers=workers, is_distsim=is_distsim File "C:\ProgramData\Anaconda3\lib\site-packages\hyppo\_utils.py", line 140, in perm_test [delayed(_perm_stat)(calc_stat, x, y, is_distsim) for rep in range(reps)] File "C:\ProgramData\Anaconda3\lib\site-packages\joblib\parallel.py", line 1027, in __call__ self._terminate_backend() File "C:\ProgramData\Anaconda3\lib\site-packages\joblib\parallel.py", line 734, in _terminate_backend self._backend.terminate() File "C:\ProgramData\Anaconda3\lib\site-packages\joblib\_parallel_backends.py", line 571, in terminate delete_folder(self._workers._temp_folder) File "C:\ProgramData\Anaconda3\lib\site-packages\joblib\disk.py", line 115, in delete_folder shutil.rmtree(folder_path, False, None) File "C:\ProgramData\Anaconda3\lib\shutil.py", line 516, in rmtree return _rmtree_unsafe(path, onerror) File "C:\ProgramData\Anaconda3\lib\shutil.py", line 400, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\ProgramData\Anaconda3\lib\shutil.py", line 398, in _rmtree_unsafe os.unlink(fullname) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 
'C:\\Users\\msrwinadm4\\AppData\\Local\\Temp\\5\\joblib_memmapping_folder_11132_7308949288\\11132-1819792920136-683b9c4b033b449dbac251acbe3decfb.pkl' C:\ProgramData\Anaconda3\lib\site-packages\joblib\disk.py:122: UserWarning: Unable to delete folder C:\Users\msrwinadm4\AppData\Local\Temp\5\joblib_memmapping_folder_11132_7308949288 after 5 tentatives. .format(folder_path, RM_SUBDIRS_N_RETRY)) C:\ProgramData\Anaconda3\lib\site-packages\joblib\_memmapping_reducer.py:409: UserWarning: Failed to clean temporary folder: C:\Users\msrwinadm4\AppData\Local\Temp\5\joblib_memmapping_folder_11132_7308949288 .format(pool_folder)) ``` ## Your Environment * Python version: 3.7.6 (Anaconda) * graspologic version: 0.1.0.dev331219603 * Windows 2016 Datacenter (448 GB RAM) x64 ## Additional Details graspologic==0.1.0.dev331219603 joblib==0.14.1 hyppo==0.1.3 scikit-image==0.16.2 scikit-learn==0.22.1 scipy==1.4.1 numpy==1.18.1 ## Underlying problem: Older versions of joblib have a known issue running on Windows. See https://github.com/joblib/joblib/issues/806. This appears to be fixed on May 3rd, 2020 by https://github.com/joblib/joblib/pull/966. Hyppo uses joblib as a transitive dependency of scikit-learn but does not declare it as a dependency. Scikit-learn only requires joblib 0.11 which does not include this fix. See https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/_min_dependencies.py
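The reporter's suggested check works for both dense arrays and SciPy sparse matrices, because boolean indexing keeps only the stored non-zero edge weights; the dependency side of this record (the joblib version floor) is what the diff further below actually changes. A rough sketch of the suggested check, assuming a NumPy/SciPy input — the function name and the empty-graph guard are illustrative, not graspologic's real implementation:

```python
import numpy as np
import scipy.sparse as sp


def looks_unweighted(graph):
    # Boolean indexing returns the non-zero entries: a flat array for an
    # ndarray input, a matrix of the stored edge weights for a sparse input.
    nonzero = graph[graph != 0]
    if nonzero.size == 0:  # assumed convention: an empty graph counts as unweighted
        return True
    return nonzero.max() == 1 and nonzero.min() == 1


dense = np.array([[0, 1, 0], [1, 0, 2], [0, 2, 0]])
print(looks_unweighted(dense))                 # False: a weight of 2 is present
print(looks_unweighted(sp.csr_matrix(dense)))  # False for the sparse form as well
```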
[ { "content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\n\nMINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.6\n\nif sys.version_info < MINIMUM_PYTHON_VERSION:\n sys.exit(\"Python {}.{}+ is required.\".format(*MINIMUM_PYTHON_VERSION))\n\nsys.path.insert(0, os.path.join(\"graspologic\", \"version\"))\nfrom version import version\n\nsys.path.pop(0)\n\nversion_path = os.path.join(\"graspologic\", \"version\", \"version.txt\")\nwith open(version_path, \"w\") as version_file:\n version_file.write(f\"{version}\")\n\nwith open(\"README.md\", \"r\") as f:\n LONG_DESCRIPTION = f.read()\n\nsetup(\n name=\"graspologic\",\n version=version,\n description=\"A set of python modules for graph statistics\",\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n author=\"Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand\",\n author_email=\"[email protected]\",\n maintainer=\"Dwayne Pryce\",\n maintainer_email=\"[email protected]\",\n url=\"https://github.com/microsoft/graspologic\",\n license=\"MIT\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(exclude=[\"tests\", \"tests.*\", \"tests/*\"]),\n include_package_data=True,\n package_data={\"version\": [os.path.join(\"graspologic\", \"version\", \"version.txt\")]},\n install_requires=[\n \"anytree>=2.8.0\",\n \"gensim\",\n \"hyppo>=0.1.3\",\n \"matplotlib>=3.0.0,<=3.3.0\",\n \"networkx>=2.1\",\n \"numpy>=1.8.1\",\n \"POT>=0.7.0\",\n \"seaborn>=0.9.0\",\n \"scikit-learn>=0.19.1\",\n \"scipy>=1.4.0\",\n ],\n extras_require={\n \"dev\": [\n \"black\",\n \"ipykernel>=5.1.0\",\n \"ipython>=7.4.0\",\n \"mypy\",\n \"nbsphinx\",\n \"numpydoc\",\n \"pandoc\",\n \"pytest\",\n \"pytest-cov\",\n \"sphinx\",\n \"sphinxcontrib-rawfiles\",\n \"sphinx-rtd-theme\",\n \"testfixtures\",\n ]\n },\n)\n", "path": "setup.py" } ]
[ { "content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nimport os\nimport sys\nfrom setuptools import setup, find_packages\n\n\nMINIMUM_PYTHON_VERSION = 3, 6 # Minimum of Python 3.6\n\nif sys.version_info < MINIMUM_PYTHON_VERSION:\n sys.exit(\"Python {}.{}+ is required.\".format(*MINIMUM_PYTHON_VERSION))\n\nsys.path.insert(0, os.path.join(\"graspologic\", \"version\"))\nfrom version import version\n\nsys.path.pop(0)\n\nversion_path = os.path.join(\"graspologic\", \"version\", \"version.txt\")\nwith open(version_path, \"w\") as version_file:\n version_file.write(f\"{version}\")\n\nwith open(\"README.md\", \"r\") as f:\n LONG_DESCRIPTION = f.read()\n\nsetup(\n name=\"graspologic\",\n version=version,\n description=\"A set of python modules for graph statistics\",\n long_description=LONG_DESCRIPTION,\n long_description_content_type=\"text/markdown\",\n author=\"Eric Bridgeford, Jaewon Chung, Benjamin Pedigo, Bijan Varjavand\",\n author_email=\"[email protected]\",\n maintainer=\"Dwayne Pryce\",\n maintainer_email=\"[email protected]\",\n url=\"https://github.com/microsoft/graspologic\",\n license=\"MIT\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n packages=find_packages(exclude=[\"tests\", \"tests.*\", \"tests/*\"]),\n include_package_data=True,\n package_data={\"version\": [os.path.join(\"graspologic\", \"version\", \"version.txt\")]},\n install_requires=[\n \"anytree>=2.8.0\",\n \"gensim\",\n \"hyppo>=0.1.3\",\n \"joblib>=0.17.0\", # Older versions of joblib cause issue #806. Transitive dependency of hyppo.\n \"matplotlib>=3.0.0,<=3.3.0\",\n \"networkx>=2.1\",\n \"numpy>=1.8.1\",\n \"POT>=0.7.0\",\n \"seaborn>=0.9.0\",\n \"scikit-learn>=0.19.1\",\n \"scipy>=1.4.0\",\n ],\n extras_require={\n \"dev\": [\n \"black\",\n \"ipykernel>=5.1.0\",\n \"ipython>=7.4.0\",\n \"mypy\",\n \"nbsphinx\",\n \"numpydoc\",\n \"pandoc\",\n \"pytest\",\n \"pytest-cov\",\n \"sphinx\",\n \"sphinxcontrib-rawfiles\",\n \"sphinx-rtd-theme\",\n \"testfixtures\",\n ]\n },\n)\n", "path": "setup.py" } ]
diff --git a/setup.py b/setup.py index b1fa371b2..cc5d59121 100644 --- a/setup.py +++ b/setup.py @@ -51,6 +51,7 @@ "anytree>=2.8.0", "gensim", "hyppo>=0.1.3", + "joblib>=0.17.0", # Older versions of joblib cause issue #806. Transitive dependency of hyppo. "matplotlib>=3.0.0,<=3.3.0", "networkx>=2.1", "numpy>=1.8.1",
kserve__kserve-1137
Installed KFServing SDK 0.4 but getting import error while running the custom built image /kind bug **What steps did you take and what happened:** Run a custom built image with KFServing SDK 0.4. ``` Traceback (most recent call last): File "/python3/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/python3/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/job/blambda-function/image_transformer_v2/__main__.py", line 15, in <module> import kfserving File "/python3/lib/python3.7/site-packages/kfserving/__init__.py", line 18, in <module> from .storage import Storage File "/python3/lib/python3.7/site-packages/kfserving/storage.py", line 23, in <module> from google.cloud import storage File "/python3/lib/python3.7/site-packages/google/cloud/storage/__init__.py", line 39, in <module> from google.cloud.storage.batch import Batch File "/python3/lib/python3.7/site-packages/google/cloud/storage/batch.py", line 31, in <module> from google.cloud.storage._http import Connection File "/python3/lib/python3.7/site-packages/google/cloud/storage/_http.py", line 17, in <module> from google.cloud import _http File "/python3/lib/python3.7/site-packages/google/cloud/_http.py", line 22, in <module> from six.moves import collections_abc ImportError: cannot import name 'collections_abc' from 'six.moves' (unknown location) ``` **What did you expect to happen:** **Anything else you would like to add:** We have fixed this in master branch but looks like we need to patch the setup.py in 0.4 branch and release a new minor version **Environment:** - Istio Version: - Knative Version: - KFServing Version: - Kubeflow version: - Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm] - Minikube version: - Kubernetes version: (use `kubectl version`): - OS (e.g. from `/etc/os-release`):
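The traceback above points at `google/cloud/_http.py` importing `collections_abc` from `six.moves`, an alias that only newer `six` releases provide; the 0.4 fix in the diff further below pins the affected dependencies. A minimal, hypothetical pre-flight check one could run inside a custom-built image before rebuilding it (the upgrade command is a suggestion, not the exact pin used upstream):

```python
# Hypothetical sanity check for a custom KFServing 0.4 image: verify that the
# installed 'six' exposes the alias that google-cloud-storage ends up importing.
import six

print("six version:", six.__version__)
try:
    from six.moves import collections_abc  # noqa: F401
    print("six.moves.collections_abc is importable; the image should be fine")
except ImportError:
    print("six is too old; upgrade it (e.g. `pip install -U six`) or pin a "
          "newer release in the image's requirements before rebuilding")
```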
[ { "content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='alibiexplainer',\n version='0.4.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/python/kfserving/alibiexplainer',\n description='Model Explaination Server. \\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"alibiexplainer\"),\n install_requires=[\n \"kfserving>=0.4.0\",\n \"alibi==0.4.0\",\n \"scikit-learn>=0.20.3\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"dill>=0.3.0\",\n \"spacy>=2.1.4\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/alibiexplainer/setup.py" } ]
[ { "content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='alibiexplainer',\n version='0.4.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/python/kfserving/alibiexplainer',\n description='Model Explaination Server. \\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"alibiexplainer\"),\n install_requires=[\n \"shap==0.35\",\n \"kfserving>=0.4.0\",\n \"alibi==0.4.0\",\n \"scikit-learn>=0.20.3\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"dill>=0.3.0\",\n \"spacy>=2.1.4\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/alibiexplainer/setup.py" } ]
diff --git a/python/alibiexplainer/setup.py b/python/alibiexplainer/setup.py index 6dce65a6c55..88dc38638c0 100644 --- a/python/alibiexplainer/setup.py +++ b/python/alibiexplainer/setup.py @@ -32,6 +32,7 @@ python_requires='>=3.6', packages=find_packages("alibiexplainer"), install_requires=[ + "shap==0.35", "kfserving>=0.4.0", "alibi==0.4.0", "scikit-learn>=0.20.3", diff --git a/python/kfserving/requirements.txt b/python/kfserving/requirements.txt index 455c148da27..027686e558f 100644 --- a/python/kfserving/requirements.txt +++ b/python/kfserving/requirements.txt @@ -1,14 +1,15 @@ certifi>=14.05.14 -six>=1.10 +six==1.15 python_dateutil>=2.5.3 setuptools>=21.0.0 urllib3>=1.15.1 kubernetes==10.0.1 -tornado>=1.4.1 +tornado>=6.0.0 argparse>=1.4.0 minio>=4.0.9 -google-cloud-storage>=1.16.0 +google-cloud-storage>=1.31.0 adal>=1.2.2 table_logger>=0.3.5 numpy>=1.17.3 -azure-storage-blob>=1.3.0,<=2.1.0 \ No newline at end of file +azure-storage-blob>=1.3.0,<=2.1.0 + diff --git a/python/kfserving/test/test_storage.py b/python/kfserving/test/test_storage.py index 528ead5d1f7..65e4a4cd250 100644 --- a/python/kfserving/test/test_storage.py +++ b/python/kfserving/test/test_storage.py @@ -73,7 +73,7 @@ def test_no_permission_buckets(mock_connection, mock_minio): bad_gcs_path = "gs://random/path" # Access private buckets without credentials mock_minio.return_value = Minio("s3.us.cloud-object-storage.appdomain.cloud", secure=True) - mock_connection.side_effect = error.AccessDenied(None) + mock_connection.side_effect = error.AccessDenied() with pytest.raises(error.AccessDenied): kfserving.Storage.download(bad_s3_path) mock_connection.side_effect = exceptions.Forbidden(None) diff --git a/test/scripts/run-e2e-tests.sh b/test/scripts/run-e2e-tests.sh index 4135dd529e8..04fb80c8e3b 100755 --- a/test/scripts/run-e2e-tests.sh +++ b/test/scripts/run-e2e-tests.sh @@ -80,9 +80,9 @@ kubectl create clusterrolebinding cluster-admin-binding \ --user=$(gcloud config get-value core/account) # Install and Initialize Helm -curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 -chmod 700 get_helm.sh -./get_helm.sh +wget https://get.helm.sh/helm-v3.0.2-linux-amd64.tar.gz +tar xvf helm-v3.0.2-linux-amd64.tar.gz +mv linux-amd64/helm /usr/local/bin/ echo "Install istio ..." mkdir istio_tmp
sopel-irc__sopel-1987
reddit: floating point error in upvote ratio https://github.com/sopel-irc/sopel/blob/d850844870ac62e4568b4743bd38f0f90d76af7d/sopel/modules/reddit.py#L183 Occasionally this code results in an output ratio like "57.99999999999999%". Should be super easy for anyone who wants a quick PR.
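The stray digits come from multiplying the float ratio by 100 and stringifying the raw result; the patched module further below switches to percent formatting, which scales and rounds in one step. A standalone illustration of both behaviours (0.58 is just an assumed sample `upvote_ratio`):

```python
ratio = 0.58  # sample upvote ratio; most ratios have no exact binary representation

# what the original code effectively did: scale the float, then stringify it
print(str(ratio * 100) + '%')   # -> 57.99999999999999%

# percent formatting handles the scaling and rounding itself
print('{:.1%}'.format(ratio))   # -> 58.0%
```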
[ { "content": "# coding=utf-8\n\"\"\"\nreddit.py - Sopel Reddit Plugin\nCopyright 2012, Elsie Powell, embolalia.com\nCopyright 2019, dgw, technobabbl.es\nCopyright 2019, deathbybandaid, deathbybandaid.net\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\nimport re\nimport sys\nimport textwrap\n\nimport praw\nimport prawcore\nimport requests\n\nfrom sopel import plugin\nfrom sopel.formatting import bold, color, colors\nfrom sopel.tools import time\nfrom sopel.tools.web import USER_AGENT\n\n# clean up all of this when dropping py2/old py3 versions\nif sys.version_info.major >= 3:\n unicode = str\n if sys.version_info.minor >= 4:\n from html import unescape\n else:\n from html.parser import HTMLParser\n unescape = HTMLParser().unescape\nelse:\n from HTMLParser import HTMLParser\n unescape = HTMLParser().unescape\n\nPLUGIN_OUTPUT_PREFIX = '[reddit] '\n\ndomain = r'https?://(?:www\\.|old\\.|pay\\.|ssl\\.|[a-z]{2}\\.)?reddit\\.com'\nsubreddit_url = r'%s/r/([\\w-]+)/?$' % domain\npost_url = r'%s/r/\\S+?/comments/([\\w-]+)(?:/[\\w%%]+)?/?$' % domain\nshort_post_url = r'https?://redd\\.it/([\\w-]+)'\nuser_url = r'%s/u(?:ser)?/([\\w-]+)' % domain\ncomment_url = r'%s/r/\\S+?/comments/\\S+?/\\S+?/([\\w-]+)' % domain\nimage_url = r'https?://i\\.redd\\.it/\\S+'\nvideo_url = r'https?://v\\.redd\\.it/([\\w-]+)'\ngallery_url = r'https?://(?:www\\.)?reddit\\.com/gallery/([\\w-]+)'\n\n\ndef setup(bot):\n if 'reddit_praw' not in bot.memory:\n # Create a PRAW instance just once, at load time\n bot.memory['reddit_praw'] = praw.Reddit(\n user_agent=USER_AGENT,\n client_id='6EiphT6SSQq7FQ',\n client_secret=None,\n )\n\n\ndef shutdown(bot):\n # Clean up shared PRAW instance\n bot.memory.pop('reddit_praw', None)\n\n\ndef get_time_created(bot, trigger, entrytime):\n tz = time.get_timezone(\n bot.db, bot.config, None, trigger.nick, trigger.sender)\n time_created = dt.datetime.utcfromtimestamp(entrytime)\n created = time.format_time(bot.db,\n bot.config, tz,\n trigger.nick, trigger.sender,\n time_created)\n return created\n\n\ndef get_is_cakeday(entrytime):\n now = dt.datetime.utcnow()\n cakeday_start = dt.datetime.utcfromtimestamp(entrytime)\n cakeday_start = cakeday_start.replace(year=now.year)\n day = dt.timedelta(days=1)\n year_div_by_400 = now.year % 400 == 0\n year_div_by_100 = now.year % 100 == 0\n year_div_by_4 = now.year % 4 == 0\n is_leap = year_div_by_400 or ((not year_div_by_100) and year_div_by_4)\n if (not is_leap) and ((cakeday_start.month, cakeday_start.day) == (2, 29)):\n # If cake day is 2/29 and it's not a leap year, cake day is 3/1.\n # Cake day begins at exact account creation time.\n is_cakeday = cakeday_start + day <= now <= cakeday_start + (2 * day)\n else:\n is_cakeday = cakeday_start <= now <= cakeday_start + day\n return is_cakeday\n\n\[email protected](image_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef image_info(bot, trigger, match):\n url = match.group(0)\n results = list(\n bot.memory['reddit_praw']\n .subreddit('all')\n .search('url:{}'.format(url), sort='new', params={'include_over_18': 'on'})\n )\n try:\n oldest = results[-1]\n except IndexError:\n # Fail silently if the image link can't be mapped to a submission\n return plugin.NOLIMIT\n return say_post_info(bot, trigger, oldest.id, False, True)\n\n\[email protected](video_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef video_info(bot, trigger, match):\n # Get the video URL with a 
cheeky hack\n url = requests.head(\n 'https://www.reddit.com/video/{}'.format(match.group(1)),\n timeout=(10.0, 4.0)).headers['Location']\n try:\n return say_post_info(\n bot, trigger, re.match(post_url, url).group(1), False, True)\n except AttributeError:\n # Fail silently if we can't map the video link to a submission\n return plugin.NOLIMIT\n\n\[email protected](post_url)\[email protected](short_post_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef rpost_info(bot, trigger, match):\n match = match or trigger\n return say_post_info(bot, trigger, match.group(1))\n\n\[email protected](gallery_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef rgallery_info(bot, trigger, match):\n match = match or trigger\n return say_post_info(bot, trigger, match.group(1), False)\n\n\ndef say_post_info(bot, trigger, id_, show_link=True, show_comments_link=False):\n try:\n s = bot.memory['reddit_praw'].submission(id=id_)\n\n message = ('{title} {link}{nsfw} | {points} {points_text} '\n '({percent}) | {comments} comments | Posted by {author} | '\n 'Created at {created}{comments_link}')\n\n subreddit = s.subreddit.display_name\n if not show_link:\n link = 'to r/{}'.format(subreddit)\n elif s.is_self:\n link = '(self.{})'.format(subreddit)\n else:\n link = '({}) to r/{}'.format(s.url, subreddit)\n\n nsfw = ''\n if s.over_18:\n nsfw += ' ' + bold(color('[NSFW]', colors.RED))\n\n sfw = bot.db.get_channel_value(trigger.sender, 'sfw')\n if sfw:\n link = '(link hidden)'\n bot.kick(\n trigger.nick, trigger.sender,\n 'Linking to NSFW content in a SFW channel.'\n )\n if s.spoiler:\n nsfw += ' ' + bold(color('[SPOILER]', colors.GRAY))\n\n spoiler_free = bot.db.get_channel_value(trigger.sender, 'spoiler_free')\n if spoiler_free:\n link = '(link hidden)'\n bot.kick(\n trigger.nick, trigger.sender,\n 'Linking to spoiler content in a spoiler-free channel.'\n )\n\n if s.author:\n author = s.author.name\n else:\n author = '[deleted]'\n\n created = get_time_created(bot, trigger, s.created_utc)\n\n if s.score > 0:\n point_color = colors.GREEN\n else:\n point_color = colors.RED\n\n points_text = 'point' if s.score == 1 else 'points'\n\n percent = color(unicode(s.upvote_ratio * 100) + '%', point_color)\n\n comments_link = ''\n if show_comments_link:\n try:\n comments_link = ' | ' + s.shortlink\n except AttributeError:\n # the value assigned earlier will be used\n pass\n\n title = unescape(s.title)\n message = message.format(\n title=title, link=link, nsfw=nsfw, points=s.score, points_text=points_text,\n percent=percent, comments=s.num_comments, author=author, created=created,\n comments_link=comments_link)\n\n bot.say(message)\n except prawcore.exceptions.NotFound:\n bot.reply('No such post.')\n return plugin.NOLIMIT\n\n\[email protected](comment_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef comment_info(bot, trigger, match):\n \"\"\"Shows information about the linked comment\"\"\"\n try:\n c = bot.memory['reddit_praw'].comment(match.group(1))\n except prawcore.exceptions.NotFound:\n bot.reply('No such comment.')\n return plugin.NOLIMIT\n\n message = ('Comment by {author} | {points} {points_text} | '\n 'Posted at {posted} | {comment}')\n\n if c.author:\n author = c.author.name\n else:\n author = '[deleted]'\n\n points_text = 'point' if c.score == 1 else 'points'\n\n posted = get_time_created(bot, trigger, c.created_utc)\n\n # stolen from the function I (dgw) wrote for our github plugin\n lines = [line for line in c.body.splitlines() if line and line[0] != '>']\n short = textwrap.wrap(lines[0], 250)[0]\n 
if len(lines) > 1 or short != lines[0]:\n short += ' […]'\n\n message = message.format(\n author=author, points=c.score, points_text=points_text,\n posted=posted, comment=short)\n\n bot.say(message)\n\n\ndef subreddit_info(bot, trigger, match, commanded=False):\n \"\"\"Shows information about the given subreddit\"\"\"\n match_lower = match.lower()\n if match_lower in ['all', 'popular']:\n message = ('[REDDIT] {link}{nsfw} | {public_description}')\n nsfw = ' ' + bold(color('[Possible NSFW]', colors.ORANGE))\n link = \"https://reddit.com/r/\" + match_lower\n public_description = ''\n if match_lower == 'all':\n public_description = ('Today\\'s top content from hundreds of '\n 'thousands of Reddit communities.')\n elif match_lower == 'popular':\n public_description = ('The top trending content from some of '\n 'Reddit\\'s most popular communities')\n message = message.format(\n link=link, nsfw=nsfw, public_description=public_description)\n bot.say(message)\n return plugin.NOLIMIT\n\n r = bot.memory['reddit_praw']\n try:\n r.subreddits.search_by_name(match, exact=True)\n except prawcore.exceptions.NotFound:\n if commanded:\n bot.reply('No such subreddit.')\n # Fail silently if it wasn't an explicit command.\n return plugin.NOLIMIT\n\n try:\n s = r.subreddit(match)\n s.subreddit_type\n except prawcore.exceptions.Forbidden:\n bot.reply(\"r/\" + match + \" appears to be a private subreddit!\")\n return plugin.NOLIMIT\n except prawcore.exceptions.NotFound:\n bot.reply(\"r/\" + match + \" appears to be a banned subreddit!\")\n return plugin.NOLIMIT\n\n link = \"https://reddit.com/r/\" + s.display_name\n\n created = get_time_created(bot, trigger, s.created_utc)\n\n message = ('{link}{nsfw} | {subscribers} subscribers | '\n 'Created at {created} | {public_description}')\n\n nsfw = ''\n if s.over18:\n nsfw += ' ' + bold(color('[NSFW]', colors.RED))\n\n sfw = bot.db.get_channel_value(trigger.sender, 'sfw')\n if sfw:\n link = '(link hidden)'\n bot.kick(\n trigger.nick, trigger.sender,\n 'Linking to NSFW content in a SFW channel.'\n )\n\n message = message.format(\n link=link, nsfw=nsfw, subscribers='{:,}'.format(s.subscribers),\n created=created, public_description=s.public_description)\n bot.say(message)\n\n\ndef redditor_info(bot, trigger, match, commanded=False):\n \"\"\"Shows information about the given Redditor\"\"\"\n try:\n u = bot.memory['reddit_praw'].redditor(match)\n u.id # shortcut to check if the user exists or not\n except prawcore.exceptions.NotFound:\n if commanded:\n bot.reply('No such Redditor.')\n # Fail silently if it wasn't an explicit command.\n return plugin.NOLIMIT\n\n message = u.name\n is_cakeday = get_is_cakeday(u.created_utc)\n\n if is_cakeday:\n message = message + ' | ' + bold(color('Cake day', colors.LIGHT_PURPLE))\n if commanded:\n message = message + ' | https://reddit.com/u/' + u.name\n if u.is_gold:\n message = message + ' | ' + bold(color('Gold', colors.YELLOW))\n if u.is_employee:\n message = message + ' | ' + bold(color('Employee', colors.RED))\n if u.is_mod:\n message = message + ' | ' + bold(color('Mod', colors.GREEN))\n message = message + (' | Link: ' + str(u.link_karma) +\n ' | Comment: ' + str(u.comment_karma))\n\n bot.say(message)\n\n\[email protected](user_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef auto_redditor_info(bot, trigger, match):\n return redditor_info(bot, trigger, match.group(1))\n\n\[email protected](subreddit_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef auto_subreddit_info(bot, trigger, match):\n return 
subreddit_info(bot, trigger, match.group(1))\n\n\[email protected]_chanmsg('Setting SFW status is only supported in a channel.')\[email protected]_privilege(plugin.OP)\[email protected]('setsafeforwork', 'setsfw')\[email protected]('.setsfw true')\[email protected]('.setsfw false')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef set_channel_sfw(bot, trigger):\n \"\"\"\n Sets the Safe for Work status (true or false) for the current\n channel. Defaults to false.\n \"\"\"\n param = 'true'\n if trigger.group(2) and trigger.group(3):\n param = trigger.group(3).strip().lower()\n sfw = param == 'true'\n bot.db.set_channel_value(trigger.sender, 'sfw', sfw)\n if sfw:\n bot.say('%s is now flagged as SFW.' % trigger.sender)\n else:\n bot.say('%s is now flagged as NSFW.' % trigger.sender)\n\n\[email protected]('getsafeforwork', 'getsfw')\[email protected]('.getsfw [channel]')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef get_channel_sfw(bot, trigger):\n \"\"\"\n Gets the preferred channel's Safe for Work status, or the current\n channel's status if no channel given.\n \"\"\"\n channel = trigger.group(2)\n if not channel:\n channel = trigger.sender\n if channel.is_nick():\n bot.reply('{}getsfw with no channel param is only permitted in '\n 'channels.'.format(bot.config.core.help_prefix))\n return\n\n channel = channel.strip()\n\n sfw = bot.db.get_channel_value(channel, 'sfw')\n if sfw:\n bot.say('%s is flagged as SFW' % channel)\n else:\n bot.say('%s is flagged as NSFW' % channel)\n\n\[email protected]_chanmsg('Only channels can be marked as spoiler-free.')\[email protected]_privilege(plugin.OP)\[email protected]('setspoilerfree', 'setspoilfree')\[email protected]('.setspoilfree true')\[email protected]('.setspoilfree false')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef set_channel_spoiler_free(bot, trigger):\n \"\"\"\n Sets the Spoiler-Free status (true or false) for the current channel.\n Defaults to false.\n \"\"\"\n param = 'true'\n if trigger.group(2) and trigger.group(3):\n param = trigger.group(3).strip().lower()\n spoiler_free = param == 'true'\n bot.db.set_channel_value(trigger.sender, 'spoiler_free', spoiler_free)\n if spoiler_free:\n bot.say('%s is now flagged as spoiler-free.' % trigger.sender)\n else:\n bot.say('%s is now flagged as spoilers-allowed.' 
% trigger.sender)\n\n\[email protected]('getspoilerfree', 'getspoilfree')\[email protected]('.getspoilfree [channel]')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef get_channel_spoiler_free(bot, trigger):\n \"\"\"\n Gets the channel's Spoiler-Free status, or the current channel's\n status if no channel given.\n \"\"\"\n channel = trigger.group(2)\n if not channel:\n channel = trigger.sender\n if channel.is_nick():\n bot.reply('{}getspoilfree with no channel param is only permitted '\n 'in channels.'.format(bot.config.core.help_prefix))\n return\n\n channel = channel.strip()\n\n spoiler_free = bot.db.get_channel_value(channel, 'spoiler_free')\n if spoiler_free:\n bot.say('%s is flagged as spoiler-free' % channel)\n else:\n bot.say('%s is flagged as spoilers-allowed' % channel)\n\n\[email protected](r'(?<!\\S)/?(?P<prefix>r|u)/(?P<id>[a-zA-Z0-9-_]+)\\b')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef reddit_slash_info(bot, trigger):\n searchtype = trigger.group('prefix').lower()\n match = trigger.group('id')\n if searchtype == \"r\":\n return subreddit_info(bot, trigger, match, commanded=False)\n elif searchtype == \"u\":\n return redditor_info(bot, trigger, match, commanded=False)\n\n\[email protected]('subreddit')\[email protected]('.subreddit plex')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef subreddit_command(bot, trigger):\n # require input\n if not trigger.group(2):\n bot.reply('You must provide a subreddit name.')\n return\n\n # subreddit names do not contain spaces\n match = trigger.group(3)\n return subreddit_info(bot, trigger, match, commanded=True)\n\n\[email protected]('redditor')\[email protected]('.redditor poem_for_your_sprog')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef redditor_command(bot, trigger):\n # require input\n if not trigger.group(2):\n bot.reply('You must provide a Redditor name.')\n return\n\n # Redditor names do not contain spaces\n match = trigger.group(3)\n return redditor_info(bot, trigger, match, commanded=True)\n", "path": "sopel/modules/reddit.py" } ]
[ { "content": "# coding=utf-8\n\"\"\"\nreddit.py - Sopel Reddit Plugin\nCopyright 2012, Elsie Powell, embolalia.com\nCopyright 2019, dgw, technobabbl.es\nCopyright 2019, deathbybandaid, deathbybandaid.net\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\nimport re\nimport sys\nimport textwrap\n\nimport praw\nimport prawcore\nimport requests\n\nfrom sopel import plugin\nfrom sopel.formatting import bold, color, colors\nfrom sopel.tools import time\nfrom sopel.tools.web import USER_AGENT\n\n# clean up all of this when dropping py2/old py3 versions\nif sys.version_info.major >= 3:\n unicode = str\n if sys.version_info.minor >= 4:\n from html import unescape\n else:\n from html.parser import HTMLParser\n unescape = HTMLParser().unescape\nelse:\n from HTMLParser import HTMLParser\n unescape = HTMLParser().unescape\n\nPLUGIN_OUTPUT_PREFIX = '[reddit] '\n\ndomain = r'https?://(?:www\\.|old\\.|pay\\.|ssl\\.|[a-z]{2}\\.)?reddit\\.com'\nsubreddit_url = r'%s/r/([\\w-]+)/?$' % domain\npost_url = r'%s/r/\\S+?/comments/([\\w-]+)(?:/[\\w%%]+)?/?$' % domain\nshort_post_url = r'https?://redd\\.it/([\\w-]+)'\nuser_url = r'%s/u(?:ser)?/([\\w-]+)' % domain\ncomment_url = r'%s/r/\\S+?/comments/\\S+?/\\S+?/([\\w-]+)' % domain\nimage_url = r'https?://i\\.redd\\.it/\\S+'\nvideo_url = r'https?://v\\.redd\\.it/([\\w-]+)'\ngallery_url = r'https?://(?:www\\.)?reddit\\.com/gallery/([\\w-]+)'\n\n\ndef setup(bot):\n if 'reddit_praw' not in bot.memory:\n # Create a PRAW instance just once, at load time\n bot.memory['reddit_praw'] = praw.Reddit(\n user_agent=USER_AGENT,\n client_id='6EiphT6SSQq7FQ',\n client_secret=None,\n )\n\n\ndef shutdown(bot):\n # Clean up shared PRAW instance\n bot.memory.pop('reddit_praw', None)\n\n\ndef get_time_created(bot, trigger, entrytime):\n tz = time.get_timezone(\n bot.db, bot.config, None, trigger.nick, trigger.sender)\n time_created = dt.datetime.utcfromtimestamp(entrytime)\n created = time.format_time(bot.db,\n bot.config, tz,\n trigger.nick, trigger.sender,\n time_created)\n return created\n\n\ndef get_is_cakeday(entrytime):\n now = dt.datetime.utcnow()\n cakeday_start = dt.datetime.utcfromtimestamp(entrytime)\n cakeday_start = cakeday_start.replace(year=now.year)\n day = dt.timedelta(days=1)\n year_div_by_400 = now.year % 400 == 0\n year_div_by_100 = now.year % 100 == 0\n year_div_by_4 = now.year % 4 == 0\n is_leap = year_div_by_400 or ((not year_div_by_100) and year_div_by_4)\n if (not is_leap) and ((cakeday_start.month, cakeday_start.day) == (2, 29)):\n # If cake day is 2/29 and it's not a leap year, cake day is 3/1.\n # Cake day begins at exact account creation time.\n is_cakeday = cakeday_start + day <= now <= cakeday_start + (2 * day)\n else:\n is_cakeday = cakeday_start <= now <= cakeday_start + day\n return is_cakeday\n\n\[email protected](image_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef image_info(bot, trigger, match):\n url = match.group(0)\n results = list(\n bot.memory['reddit_praw']\n .subreddit('all')\n .search('url:{}'.format(url), sort='new', params={'include_over_18': 'on'})\n )\n try:\n oldest = results[-1]\n except IndexError:\n # Fail silently if the image link can't be mapped to a submission\n return plugin.NOLIMIT\n return say_post_info(bot, trigger, oldest.id, False, True)\n\n\[email protected](video_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef video_info(bot, trigger, match):\n # Get the video URL with a 
cheeky hack\n url = requests.head(\n 'https://www.reddit.com/video/{}'.format(match.group(1)),\n timeout=(10.0, 4.0)).headers['Location']\n try:\n return say_post_info(\n bot, trigger, re.match(post_url, url).group(1), False, True)\n except AttributeError:\n # Fail silently if we can't map the video link to a submission\n return plugin.NOLIMIT\n\n\[email protected](post_url)\[email protected](short_post_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef rpost_info(bot, trigger, match):\n match = match or trigger\n return say_post_info(bot, trigger, match.group(1))\n\n\[email protected](gallery_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef rgallery_info(bot, trigger, match):\n match = match or trigger\n return say_post_info(bot, trigger, match.group(1), False)\n\n\ndef say_post_info(bot, trigger, id_, show_link=True, show_comments_link=False):\n try:\n s = bot.memory['reddit_praw'].submission(id=id_)\n\n message = ('{title} {link}{nsfw} | {points} {points_text} '\n '({percent}) | {comments} comments | Posted by {author} | '\n 'Created at {created}{comments_link}')\n\n subreddit = s.subreddit.display_name\n if not show_link:\n link = 'to r/{}'.format(subreddit)\n elif s.is_self:\n link = '(self.{})'.format(subreddit)\n else:\n link = '({}) to r/{}'.format(s.url, subreddit)\n\n nsfw = ''\n if s.over_18:\n nsfw += ' ' + bold(color('[NSFW]', colors.RED))\n\n sfw = bot.db.get_channel_value(trigger.sender, 'sfw')\n if sfw:\n link = '(link hidden)'\n bot.kick(\n trigger.nick, trigger.sender,\n 'Linking to NSFW content in a SFW channel.'\n )\n if s.spoiler:\n nsfw += ' ' + bold(color('[SPOILER]', colors.GRAY))\n\n spoiler_free = bot.db.get_channel_value(trigger.sender, 'spoiler_free')\n if spoiler_free:\n link = '(link hidden)'\n bot.kick(\n trigger.nick, trigger.sender,\n 'Linking to spoiler content in a spoiler-free channel.'\n )\n\n if s.author:\n author = s.author.name\n else:\n author = '[deleted]'\n\n created = get_time_created(bot, trigger, s.created_utc)\n\n if s.score > 0:\n point_color = colors.GREEN\n else:\n point_color = colors.RED\n\n points_text = 'point' if s.score == 1 else 'points'\n\n percent = color('{:.1%}'.format(s.upvote_ratio), point_color)\n\n comments_link = ''\n if show_comments_link:\n try:\n comments_link = ' | ' + s.shortlink\n except AttributeError:\n # the value assigned earlier will be used\n pass\n\n title = unescape(s.title)\n message = message.format(\n title=title, link=link, nsfw=nsfw, points=s.score, points_text=points_text,\n percent=percent, comments=s.num_comments, author=author, created=created,\n comments_link=comments_link)\n\n bot.say(message)\n except prawcore.exceptions.NotFound:\n bot.reply('No such post.')\n return plugin.NOLIMIT\n\n\[email protected](comment_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef comment_info(bot, trigger, match):\n \"\"\"Shows information about the linked comment\"\"\"\n try:\n c = bot.memory['reddit_praw'].comment(match.group(1))\n except prawcore.exceptions.NotFound:\n bot.reply('No such comment.')\n return plugin.NOLIMIT\n\n message = ('Comment by {author} | {points} {points_text} | '\n 'Posted at {posted} | {comment}')\n\n if c.author:\n author = c.author.name\n else:\n author = '[deleted]'\n\n points_text = 'point' if c.score == 1 else 'points'\n\n posted = get_time_created(bot, trigger, c.created_utc)\n\n # stolen from the function I (dgw) wrote for our github plugin\n lines = [line for line in c.body.splitlines() if line and line[0] != '>']\n short = textwrap.wrap(lines[0], 250)[0]\n if 
len(lines) > 1 or short != lines[0]:\n short += ' […]'\n\n message = message.format(\n author=author, points=c.score, points_text=points_text,\n posted=posted, comment=short)\n\n bot.say(message)\n\n\ndef subreddit_info(bot, trigger, match, commanded=False):\n \"\"\"Shows information about the given subreddit\"\"\"\n match_lower = match.lower()\n if match_lower in ['all', 'popular']:\n message = ('[REDDIT] {link}{nsfw} | {public_description}')\n nsfw = ' ' + bold(color('[Possible NSFW]', colors.ORANGE))\n link = \"https://reddit.com/r/\" + match_lower\n public_description = ''\n if match_lower == 'all':\n public_description = ('Today\\'s top content from hundreds of '\n 'thousands of Reddit communities.')\n elif match_lower == 'popular':\n public_description = ('The top trending content from some of '\n 'Reddit\\'s most popular communities')\n message = message.format(\n link=link, nsfw=nsfw, public_description=public_description)\n bot.say(message)\n return plugin.NOLIMIT\n\n r = bot.memory['reddit_praw']\n try:\n r.subreddits.search_by_name(match, exact=True)\n except prawcore.exceptions.NotFound:\n if commanded:\n bot.reply('No such subreddit.')\n # Fail silently if it wasn't an explicit command.\n return plugin.NOLIMIT\n\n try:\n s = r.subreddit(match)\n s.subreddit_type\n except prawcore.exceptions.Forbidden:\n bot.reply(\"r/\" + match + \" appears to be a private subreddit!\")\n return plugin.NOLIMIT\n except prawcore.exceptions.NotFound:\n bot.reply(\"r/\" + match + \" appears to be a banned subreddit!\")\n return plugin.NOLIMIT\n\n link = \"https://reddit.com/r/\" + s.display_name\n\n created = get_time_created(bot, trigger, s.created_utc)\n\n message = ('{link}{nsfw} | {subscribers} subscribers | '\n 'Created at {created} | {public_description}')\n\n nsfw = ''\n if s.over18:\n nsfw += ' ' + bold(color('[NSFW]', colors.RED))\n\n sfw = bot.db.get_channel_value(trigger.sender, 'sfw')\n if sfw:\n link = '(link hidden)'\n bot.kick(\n trigger.nick, trigger.sender,\n 'Linking to NSFW content in a SFW channel.'\n )\n\n message = message.format(\n link=link, nsfw=nsfw, subscribers='{:,}'.format(s.subscribers),\n created=created, public_description=s.public_description)\n bot.say(message)\n\n\ndef redditor_info(bot, trigger, match, commanded=False):\n \"\"\"Shows information about the given Redditor\"\"\"\n try:\n u = bot.memory['reddit_praw'].redditor(match)\n u.id # shortcut to check if the user exists or not\n except prawcore.exceptions.NotFound:\n if commanded:\n bot.reply('No such Redditor.')\n # Fail silently if it wasn't an explicit command.\n return plugin.NOLIMIT\n\n message = u.name\n is_cakeday = get_is_cakeday(u.created_utc)\n\n if is_cakeday:\n message = message + ' | ' + bold(color('Cake day', colors.LIGHT_PURPLE))\n if commanded:\n message = message + ' | https://reddit.com/u/' + u.name\n if u.is_gold:\n message = message + ' | ' + bold(color('Gold', colors.YELLOW))\n if u.is_employee:\n message = message + ' | ' + bold(color('Employee', colors.RED))\n if u.is_mod:\n message = message + ' | ' + bold(color('Mod', colors.GREEN))\n message = message + (' | Link: ' + str(u.link_karma) +\n ' | Comment: ' + str(u.comment_karma))\n\n bot.say(message)\n\n\[email protected](user_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef auto_redditor_info(bot, trigger, match):\n return redditor_info(bot, trigger, match.group(1))\n\n\[email protected](subreddit_url)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef auto_subreddit_info(bot, trigger, match):\n return subreddit_info(bot, 
trigger, match.group(1))\n\n\[email protected]_chanmsg('Setting SFW status is only supported in a channel.')\[email protected]_privilege(plugin.OP)\[email protected]('setsafeforwork', 'setsfw')\[email protected]('.setsfw true')\[email protected]('.setsfw false')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef set_channel_sfw(bot, trigger):\n \"\"\"\n Sets the Safe for Work status (true or false) for the current\n channel. Defaults to false.\n \"\"\"\n param = 'true'\n if trigger.group(2) and trigger.group(3):\n param = trigger.group(3).strip().lower()\n sfw = param == 'true'\n bot.db.set_channel_value(trigger.sender, 'sfw', sfw)\n if sfw:\n bot.say('%s is now flagged as SFW.' % trigger.sender)\n else:\n bot.say('%s is now flagged as NSFW.' % trigger.sender)\n\n\[email protected]('getsafeforwork', 'getsfw')\[email protected]('.getsfw [channel]')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef get_channel_sfw(bot, trigger):\n \"\"\"\n Gets the preferred channel's Safe for Work status, or the current\n channel's status if no channel given.\n \"\"\"\n channel = trigger.group(2)\n if not channel:\n channel = trigger.sender\n if channel.is_nick():\n bot.reply('{}getsfw with no channel param is only permitted in '\n 'channels.'.format(bot.config.core.help_prefix))\n return\n\n channel = channel.strip()\n\n sfw = bot.db.get_channel_value(channel, 'sfw')\n if sfw:\n bot.say('%s is flagged as SFW' % channel)\n else:\n bot.say('%s is flagged as NSFW' % channel)\n\n\[email protected]_chanmsg('Only channels can be marked as spoiler-free.')\[email protected]_privilege(plugin.OP)\[email protected]('setspoilerfree', 'setspoilfree')\[email protected]('.setspoilfree true')\[email protected]('.setspoilfree false')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef set_channel_spoiler_free(bot, trigger):\n \"\"\"\n Sets the Spoiler-Free status (true or false) for the current channel.\n Defaults to false.\n \"\"\"\n param = 'true'\n if trigger.group(2) and trigger.group(3):\n param = trigger.group(3).strip().lower()\n spoiler_free = param == 'true'\n bot.db.set_channel_value(trigger.sender, 'spoiler_free', spoiler_free)\n if spoiler_free:\n bot.say('%s is now flagged as spoiler-free.' % trigger.sender)\n else:\n bot.say('%s is now flagged as spoilers-allowed.' 
% trigger.sender)\n\n\[email protected]('getspoilerfree', 'getspoilfree')\[email protected]('.getspoilfree [channel]')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef get_channel_spoiler_free(bot, trigger):\n \"\"\"\n Gets the channel's Spoiler-Free status, or the current channel's\n status if no channel given.\n \"\"\"\n channel = trigger.group(2)\n if not channel:\n channel = trigger.sender\n if channel.is_nick():\n bot.reply('{}getspoilfree with no channel param is only permitted '\n 'in channels.'.format(bot.config.core.help_prefix))\n return\n\n channel = channel.strip()\n\n spoiler_free = bot.db.get_channel_value(channel, 'spoiler_free')\n if spoiler_free:\n bot.say('%s is flagged as spoiler-free' % channel)\n else:\n bot.say('%s is flagged as spoilers-allowed' % channel)\n\n\[email protected](r'(?<!\\S)/?(?P<prefix>r|u)/(?P<id>[a-zA-Z0-9-_]+)\\b')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef reddit_slash_info(bot, trigger):\n searchtype = trigger.group('prefix').lower()\n match = trigger.group('id')\n if searchtype == \"r\":\n return subreddit_info(bot, trigger, match, commanded=False)\n elif searchtype == \"u\":\n return redditor_info(bot, trigger, match, commanded=False)\n\n\[email protected]('subreddit')\[email protected]('.subreddit plex')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef subreddit_command(bot, trigger):\n # require input\n if not trigger.group(2):\n bot.reply('You must provide a subreddit name.')\n return\n\n # subreddit names do not contain spaces\n match = trigger.group(3)\n return subreddit_info(bot, trigger, match, commanded=True)\n\n\[email protected]('redditor')\[email protected]('.redditor poem_for_your_sprog')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef redditor_command(bot, trigger):\n # require input\n if not trigger.group(2):\n bot.reply('You must provide a Redditor name.')\n return\n\n # Redditor names do not contain spaces\n match = trigger.group(3)\n return redditor_info(bot, trigger, match, commanded=True)\n", "path": "sopel/modules/reddit.py" } ]
diff --git a/sopel/modules/reddit.py b/sopel/modules/reddit.py index ac97563cc8..819af86dd7 100644 --- a/sopel/modules/reddit.py +++ b/sopel/modules/reddit.py @@ -192,7 +192,7 @@ def say_post_info(bot, trigger, id_, show_link=True, show_comments_link=False): points_text = 'point' if s.score == 1 else 'points' - percent = color(unicode(s.upvote_ratio * 100) + '%', point_color) + percent = color('{:.1%}'.format(s.upvote_ratio), point_color) comments_link = '' if show_comments_link:
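This diff swaps manual percent construction for Python's `%` format spec. A tiny standalone illustration of the difference (the ratio value is made up):

```python
# PRAW reports upvote_ratio as a float in [0, 1].
upvote_ratio = 0.876  # hypothetical value

# Old approach: scale by hand and concatenate; the float's repr leaks through,
# which can print noisy trailing digits for some ratios.
old_style = str(upvote_ratio * 100) + '%'

# New approach: the '%' format spec scales by 100, keeps one decimal place,
# and appends the percent sign in a single step.
new_style = '{:.1%}'.format(upvote_ratio)

print(new_style)  # '87.6%'
```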
conan-io__conan-6198
[bug] SystemPackageTool installed() method is missing

According to [conan-io/docs](https://github.com/conan-io/docs/blame/18d6adbf56a55a7d9185a12aa707b5fe161b35e9/reference/conanfile/methods.rst#L642) and [docs.conan.io](https://docs.conan.io/en/latest/reference/conanfile/methods.html#systempackagetool), SystemPackageTool should have a public `installed(package_name)` method. Instead, only a protected `_installed(packages)` is provided (see [here](https://github.com/conan-io/conan/blob/develop/conans/client/tools/system_pm.py#L146)).

### Environment Details (include every applicable attribute)
* Operating System+version: Ubuntu 18.04.3 LTS
* Compiler+version: not applicable
* Conan version: 1.20.5
* Python version: 3.6.9

### Steps to reproduce (Include if Applicable)
Try to use `SystemPackageTool.installed(package_name)`.

### Logs (Executed commands with output) (Include/Attach if Applicable)
`ERROR: while executing system_requirements(): 'SystemPackageTool' object has no attribute 'installed'`

and if `_installed` is used:

```
Linter warnings
WARN: Linter. Line 25: Access to a protected member _installed of a client class
```
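For context, the change that resolves this report (shown in the pr_diff further down this record) is a thin public wrapper that delegates to the per-platform tool object. A minimal sketch of where the method sits, with the rest of the class elided:

```python
class SystemPackageTool(object):
    # ... __init__, update(), install(), _get_package_names(), _installed()
    # remain exactly as in the file below ...

    def installed(self, package_name):
        """Return True if the given system package is already installed.

        Public counterpart to the protected _installed(), so recipes can call
        it without triggering the "access to a protected member" linter warning.
        """
        return self._tool.installed(package_name)
```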
[ { "content": "import os\nimport sys\n\nfrom conans.client.runner import ConanRunner\nfrom conans.client.tools.oss import OSInfo, cross_building, get_cross_building_settings\nfrom conans.client.tools.files import which\nfrom conans.errors import ConanException\nfrom conans.util.env_reader import get_env\nfrom conans.util.fallbacks import default_output\n\n\nclass SystemPackageTool(object):\n\n def __init__(self, runner=None, os_info=None, tool=None, recommends=False, output=None, conanfile=None):\n output = output if output else conanfile.output if conanfile else None\n self._output = default_output(output, 'conans.client.tools.system_pm.SystemPackageTool')\n os_info = os_info or OSInfo()\n self._is_up_to_date = False\n self._tool = tool or self._create_tool(os_info, output=self._output)\n self._tool._sudo_str = self._get_sudo_str()\n self._tool._runner = runner or ConanRunner(output=self._output)\n self._tool._recommends = recommends\n self._conanfile = conanfile\n\n @staticmethod\n def _get_sudo_str():\n if not SystemPackageTool._is_sudo_enabled():\n return \"\"\n\n if hasattr(sys.stdout, \"isatty\") and not sys.stdout.isatty():\n return \"sudo -A \"\n else:\n return \"sudo \"\n\n @staticmethod\n def _is_sudo_enabled():\n if \"CONAN_SYSREQUIRES_SUDO\" not in os.environ:\n if not which(\"sudo\"):\n return False\n if os.name == 'posix' and os.geteuid() == 0:\n return False\n if os.name == 'nt':\n return False\n return get_env(\"CONAN_SYSREQUIRES_SUDO\", True)\n\n @staticmethod\n def _get_sysrequire_mode():\n allowed_modes = (\"enabled\", \"verify\", \"disabled\")\n mode = get_env(\"CONAN_SYSREQUIRES_MODE\", \"enabled\")\n mode_lower = mode.lower()\n if mode_lower not in allowed_modes:\n raise ConanException(\"CONAN_SYSREQUIRES_MODE=%s is not allowed, allowed modes=%r\"\n % (mode, allowed_modes))\n return mode_lower\n\n @staticmethod\n def _create_tool(os_info, output):\n if os_info.with_apt:\n return AptTool(output=output)\n elif os_info.with_dnf:\n return DnfTool(output=output)\n elif os_info.with_yum:\n return YumTool(output=output)\n elif os_info.with_pacman:\n return PacManTool(output=output)\n elif os_info.is_macos:\n return BrewTool(output=output)\n elif os_info.is_freebsd:\n return PkgTool(output=output)\n elif os_info.is_solaris:\n return PkgUtilTool(output=output)\n elif os_info.with_zypper:\n return ZypperTool(output=output)\n else:\n return NullTool(output=output)\n\n def add_repository(self, repository, repo_key=None, update=True):\n self._tool.add_repository(repository, repo_key=repo_key)\n if update:\n self.update()\n\n def update(self):\n \"\"\"\n Get the system package tool update command\n \"\"\"\n mode = self._get_sysrequire_mode()\n if mode in (\"disabled\", \"verify\"):\n self._output.info(\"Not updating system_requirements. CONAN_SYSREQUIRES_MODE=%s\" % mode)\n return\n self._is_up_to_date = True\n self._tool.update()\n\n def install(self, packages, update=True, force=False, arch_names=None):\n \"\"\" Get the system package tool install command.\n\n :param packages: String with all package to be installed e.g. \"libusb-dev libfoobar-dev\"\n :param update: Run update command before to install\n :param force: Force installing all packages\n :param arch_names: Package suffix/prefix name used by installer tool e.g. 
{\"x86_64\": \"amd64\"}\n :return: None\n \"\"\"\n packages = [packages] if isinstance(packages, str) else list(packages)\n packages = self._get_package_names(packages, arch_names)\n\n mode = self._get_sysrequire_mode()\n\n if mode in (\"verify\", \"disabled\"):\n # Report to output packages need to be installed\n if mode == \"disabled\":\n self._output.info(\"The following packages need to be installed:\\n %s\"\n % \"\\n\".join(packages))\n return\n\n if mode == \"verify\" and not self._installed(packages):\n self._output.error(\"The following packages need to be installed:\\n %s\"\n % \"\\n\".join(packages))\n raise ConanException(\"Aborted due to CONAN_SYSREQUIRES_MODE=%s. \"\n \"Some system packages need to be installed\" % mode)\n\n if not force and self._installed(packages):\n return\n\n # From here system packages can be updated/modified\n if update and not self._is_up_to_date:\n self.update()\n self._install_any(packages)\n\n def _get_package_names(self, packages, arch_names):\n \"\"\" Parse package names according it architecture\n\n :param packages: list with all package to be installed e.g. [\"libusb-dev libfoobar-dev\"]\n :param arch_names: Package suffix/prefix name used by installer tool\n :return: list with all parsed names e.g. [\"libusb-dev:armhf libfoobar-dev:armhf\"]\n \"\"\"\n if self._conanfile and self._conanfile.settings and cross_building(self._conanfile.settings):\n _, build_arch, _, host_arch = get_cross_building_settings(self._conanfile.settings)\n arch = host_arch or build_arch\n parsed_packages = []\n for package in packages:\n for package_name in package.split(\" \"):\n parsed_packages.append(self._tool.get_package_name(package_name, arch,\n arch_names))\n return parsed_packages\n return packages\n\n def _installed(self, packages):\n if not packages:\n return True\n\n for pkg in packages:\n if self._tool.installed(pkg):\n self._output.info(\"Package already installed: %s\" % pkg)\n return True\n return False\n\n def _install_any(self, packages):\n if len(packages) == 1:\n return self._tool.install(packages[0])\n for pkg in packages:\n try:\n return self._tool.install(pkg)\n except ConanException:\n pass\n raise ConanException(\"Could not install any of %s\" % packages)\n\n\nclass BaseTool(object):\n def __init__(self, output=None):\n self._output = default_output(output, 'conans.client.tools.system_pm.BaseTool')\n\n def get_package_name(self, package, arch, arch_names):\n \"\"\" Retrieve package name to installed according the target arch.\n\n :param package: Regular package name e.g libusb-dev\n :param arch: Host arch from Conanfile.settings\n :param arch_names: Dictionary with suffix/prefix names e.g {\"x86_64\": \"amd64\"}\n :return: Package name for Tool e.g. 
libusb-dev:i386\n \"\"\"\n return package\n\n\nclass NullTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n pass\n\n def update(self):\n pass\n\n def install(self, package_name):\n self._output.warn(\"Only available for linux with apt-get, yum, or pacman or OSX with brew or\"\n \" FreeBSD with pkg or Solaris with pkgutil\")\n\n def installed(self, package_name):\n return False\n\n\nclass AptTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n _run(self._runner, \"%sapt-add-repository %s\" % (self._sudo_str, repository),\n output=self._output)\n if repo_key:\n _run(self._runner, \"wget -qO - %s | %sapt-key add -\" % (repo_key, self._sudo_str),\n output=self._output)\n\n def update(self):\n _run(self._runner, \"%sapt-get update\" % self._sudo_str, output=self._output)\n\n def install(self, package_name):\n recommends_str = '' if self._recommends else '--no-install-recommends '\n _run(self._runner,\n \"%sapt-get install -y %s%s\" % (self._sudo_str, recommends_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner(\"dpkg-query -W -f='${Status}' %s | grep -q \\\"ok installed\\\"\"\n % package_name, None)\n return exit_code == 0\n\n def get_package_name(self, package, arch, arch_names):\n if arch_names is None:\n arch_names = {\"x86_64\": \"amd64\",\n \"x86\": \"i386\",\n \"ppc32\": \"powerpc\",\n \"ppc64le\": \"ppc64el\",\n \"armv7\": \"arm\",\n \"armv7hf\": \"armhf\",\n \"armv8\": \"arm64\",\n \"s390x\": \"s390x\"}\n if arch in arch_names:\n return \"%s:%s\" % (package, arch_names[arch])\n return package\n\n\nclass YumTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"YumTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%syum check-update -y\" % self._sudo_str, accepted_returns=[0, 100],\n output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%syum install -y %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner(\"rpm -q %s\" % package_name, None)\n return exit_code == 0\n\n def get_package_name(self, package, arch, arch_names):\n if arch_names is None:\n arch_names = {\"x86_64\": \"x86_64\",\n \"x86\": \"i?86\",\n \"ppc32\": \"powerpc\",\n \"ppc64le\": \"ppc64le\",\n \"armv7\": \"armv7\",\n \"armv7hf\": \"armv7hl\",\n \"armv8\": \"aarch64\",\n \"s390x\": \"s390x\"}\n if arch in arch_names:\n return \"%s.%s\" % (package, arch_names[arch])\n return package\n\n\nclass DnfTool(YumTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"DnfTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%sdnf check-update -y\" % self._sudo_str, accepted_returns=[0, 100],\n output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%sdnf install -y %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n\nclass BrewTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"BrewTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"brew update\", output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"brew install %s\" % package_name, output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner('test -n \"$(brew ls --versions %s)\"' % package_name, None)\n return exit_code == 0\n\n\nclass PkgTool(BaseTool):\n def 
add_repository(self, repository, repo_key=None):\n raise ConanException(\"PkgTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%spkg update\" % self._sudo_str, output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%spkg install -y %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner(\"pkg info %s\" % package_name, None)\n return exit_code == 0\n\n\nclass PkgUtilTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"PkgUtilTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%spkgutil --catalog\" % self._sudo_str, output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%spkgutil --install --yes %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner('test -n \"`pkgutil --list %s`\"' % package_name, None)\n return exit_code == 0\n\n\nclass ChocolateyTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"ChocolateyTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"choco outdated\", output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"choco install --yes %s\" % package_name, output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner('choco search --local-only --exact %s | '\n 'findstr /c:\"1 packages installed.\"' % package_name, None)\n return exit_code == 0\n\n\nclass PacManTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"PacManTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%spacman -Syyu --noconfirm\" % self._sudo_str, output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%spacman -S --noconfirm %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner(\"pacman -Qi %s\" % package_name, None)\n return exit_code == 0\n\n def get_package_name(self, package, arch, arch_names):\n if arch_names is None:\n arch_names = {\"x86\": \"lib32\"}\n if arch in arch_names:\n return \"%s-%s\" % (arch_names[arch], package)\n return package\n\n\nclass ZypperTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"ZypperTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%szypper --non-interactive ref\" % self._sudo_str, output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%szypper --non-interactive in %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner(\"rpm -q %s\" % package_name, None)\n return exit_code == 0\n\n def get_package_name(self, package, arch, arch_names):\n if arch_names is None:\n arch_names = {\"x86\": \"i586\"}\n if arch in arch_names:\n return \"%s.%s\" % (arch_names[arch], package)\n return package\n\n\ndef _run(runner, command, output, accepted_returns=None):\n accepted_returns = accepted_returns or [0, ]\n output.info(\"Running: %s\" % command)\n if runner(command, True) not in accepted_returns:\n raise ConanException(\"Command '%s' failed\" % command)\n", "path": "conans/client/tools/system_pm.py" } ]
[ { "content": "import os\nimport sys\n\nfrom conans.client.runner import ConanRunner\nfrom conans.client.tools.oss import OSInfo, cross_building, get_cross_building_settings\nfrom conans.client.tools.files import which\nfrom conans.errors import ConanException\nfrom conans.util.env_reader import get_env\nfrom conans.util.fallbacks import default_output\n\n\nclass SystemPackageTool(object):\n\n def __init__(self, runner=None, os_info=None, tool=None, recommends=False, output=None, conanfile=None):\n output = output if output else conanfile.output if conanfile else None\n self._output = default_output(output, 'conans.client.tools.system_pm.SystemPackageTool')\n os_info = os_info or OSInfo()\n self._is_up_to_date = False\n self._tool = tool or self._create_tool(os_info, output=self._output)\n self._tool._sudo_str = self._get_sudo_str()\n self._tool._runner = runner or ConanRunner(output=self._output)\n self._tool._recommends = recommends\n self._conanfile = conanfile\n\n @staticmethod\n def _get_sudo_str():\n if not SystemPackageTool._is_sudo_enabled():\n return \"\"\n\n if hasattr(sys.stdout, \"isatty\") and not sys.stdout.isatty():\n return \"sudo -A \"\n else:\n return \"sudo \"\n\n @staticmethod\n def _is_sudo_enabled():\n if \"CONAN_SYSREQUIRES_SUDO\" not in os.environ:\n if not which(\"sudo\"):\n return False\n if os.name == 'posix' and os.geteuid() == 0:\n return False\n if os.name == 'nt':\n return False\n return get_env(\"CONAN_SYSREQUIRES_SUDO\", True)\n\n @staticmethod\n def _get_sysrequire_mode():\n allowed_modes = (\"enabled\", \"verify\", \"disabled\")\n mode = get_env(\"CONAN_SYSREQUIRES_MODE\", \"enabled\")\n mode_lower = mode.lower()\n if mode_lower not in allowed_modes:\n raise ConanException(\"CONAN_SYSREQUIRES_MODE=%s is not allowed, allowed modes=%r\"\n % (mode, allowed_modes))\n return mode_lower\n\n @staticmethod\n def _create_tool(os_info, output):\n if os_info.with_apt:\n return AptTool(output=output)\n elif os_info.with_dnf:\n return DnfTool(output=output)\n elif os_info.with_yum:\n return YumTool(output=output)\n elif os_info.with_pacman:\n return PacManTool(output=output)\n elif os_info.is_macos:\n return BrewTool(output=output)\n elif os_info.is_freebsd:\n return PkgTool(output=output)\n elif os_info.is_solaris:\n return PkgUtilTool(output=output)\n elif os_info.with_zypper:\n return ZypperTool(output=output)\n else:\n return NullTool(output=output)\n\n def add_repository(self, repository, repo_key=None, update=True):\n self._tool.add_repository(repository, repo_key=repo_key)\n if update:\n self.update()\n\n def update(self):\n \"\"\"\n Get the system package tool update command\n \"\"\"\n mode = self._get_sysrequire_mode()\n if mode in (\"disabled\", \"verify\"):\n self._output.info(\"Not updating system_requirements. CONAN_SYSREQUIRES_MODE=%s\" % mode)\n return\n self._is_up_to_date = True\n self._tool.update()\n\n def install(self, packages, update=True, force=False, arch_names=None):\n \"\"\" Get the system package tool install command.\n\n :param packages: String with all package to be installed e.g. \"libusb-dev libfoobar-dev\"\n :param update: Run update command before to install\n :param force: Force installing all packages\n :param arch_names: Package suffix/prefix name used by installer tool e.g. 
{\"x86_64\": \"amd64\"}\n :return: None\n \"\"\"\n packages = [packages] if isinstance(packages, str) else list(packages)\n packages = self._get_package_names(packages, arch_names)\n\n mode = self._get_sysrequire_mode()\n\n if mode in (\"verify\", \"disabled\"):\n # Report to output packages need to be installed\n if mode == \"disabled\":\n self._output.info(\"The following packages need to be installed:\\n %s\"\n % \"\\n\".join(packages))\n return\n\n if mode == \"verify\" and not self._installed(packages):\n self._output.error(\"The following packages need to be installed:\\n %s\"\n % \"\\n\".join(packages))\n raise ConanException(\"Aborted due to CONAN_SYSREQUIRES_MODE=%s. \"\n \"Some system packages need to be installed\" % mode)\n\n if not force and self._installed(packages):\n return\n\n # From here system packages can be updated/modified\n if update and not self._is_up_to_date:\n self.update()\n self._install_any(packages)\n\n def _get_package_names(self, packages, arch_names):\n \"\"\" Parse package names according it architecture\n\n :param packages: list with all package to be installed e.g. [\"libusb-dev libfoobar-dev\"]\n :param arch_names: Package suffix/prefix name used by installer tool\n :return: list with all parsed names e.g. [\"libusb-dev:armhf libfoobar-dev:armhf\"]\n \"\"\"\n if self._conanfile and self._conanfile.settings and cross_building(self._conanfile.settings):\n _, build_arch, _, host_arch = get_cross_building_settings(self._conanfile.settings)\n arch = host_arch or build_arch\n parsed_packages = []\n for package in packages:\n for package_name in package.split(\" \"):\n parsed_packages.append(self._tool.get_package_name(package_name, arch,\n arch_names))\n return parsed_packages\n return packages\n\n def installed(self, package_name):\n return self._tool.installed(package_name)\n\n def _installed(self, packages):\n if not packages:\n return True\n\n for pkg in packages:\n if self._tool.installed(pkg):\n self._output.info(\"Package already installed: %s\" % pkg)\n return True\n return False\n\n def _install_any(self, packages):\n if len(packages) == 1:\n return self._tool.install(packages[0])\n for pkg in packages:\n try:\n return self._tool.install(pkg)\n except ConanException:\n pass\n raise ConanException(\"Could not install any of %s\" % packages)\n\n\nclass BaseTool(object):\n def __init__(self, output=None):\n self._output = default_output(output, 'conans.client.tools.system_pm.BaseTool')\n\n def get_package_name(self, package, arch, arch_names):\n \"\"\" Retrieve package name to installed according the target arch.\n\n :param package: Regular package name e.g libusb-dev\n :param arch: Host arch from Conanfile.settings\n :param arch_names: Dictionary with suffix/prefix names e.g {\"x86_64\": \"amd64\"}\n :return: Package name for Tool e.g. 
libusb-dev:i386\n \"\"\"\n return package\n\n\nclass NullTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n pass\n\n def update(self):\n pass\n\n def install(self, package_name):\n self._output.warn(\"Only available for linux with apt-get, yum, or pacman or OSX with brew or\"\n \" FreeBSD with pkg or Solaris with pkgutil\")\n\n def installed(self, package_name):\n return False\n\n\nclass AptTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n _run(self._runner, \"%sapt-add-repository %s\" % (self._sudo_str, repository),\n output=self._output)\n if repo_key:\n _run(self._runner, \"wget -qO - %s | %sapt-key add -\" % (repo_key, self._sudo_str),\n output=self._output)\n\n def update(self):\n _run(self._runner, \"%sapt-get update\" % self._sudo_str, output=self._output)\n\n def install(self, package_name):\n recommends_str = '' if self._recommends else '--no-install-recommends '\n _run(self._runner,\n \"%sapt-get install -y %s%s\" % (self._sudo_str, recommends_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner(\"dpkg-query -W -f='${Status}' %s | grep -q \\\"ok installed\\\"\"\n % package_name, None)\n return exit_code == 0\n\n def get_package_name(self, package, arch, arch_names):\n if arch_names is None:\n arch_names = {\"x86_64\": \"amd64\",\n \"x86\": \"i386\",\n \"ppc32\": \"powerpc\",\n \"ppc64le\": \"ppc64el\",\n \"armv7\": \"arm\",\n \"armv7hf\": \"armhf\",\n \"armv8\": \"arm64\",\n \"s390x\": \"s390x\"}\n if arch in arch_names:\n return \"%s:%s\" % (package, arch_names[arch])\n return package\n\n\nclass YumTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"YumTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%syum check-update -y\" % self._sudo_str, accepted_returns=[0, 100],\n output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%syum install -y %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner(\"rpm -q %s\" % package_name, None)\n return exit_code == 0\n\n def get_package_name(self, package, arch, arch_names):\n if arch_names is None:\n arch_names = {\"x86_64\": \"x86_64\",\n \"x86\": \"i?86\",\n \"ppc32\": \"powerpc\",\n \"ppc64le\": \"ppc64le\",\n \"armv7\": \"armv7\",\n \"armv7hf\": \"armv7hl\",\n \"armv8\": \"aarch64\",\n \"s390x\": \"s390x\"}\n if arch in arch_names:\n return \"%s.%s\" % (package, arch_names[arch])\n return package\n\n\nclass DnfTool(YumTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"DnfTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%sdnf check-update -y\" % self._sudo_str, accepted_returns=[0, 100],\n output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%sdnf install -y %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n\nclass BrewTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"BrewTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"brew update\", output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"brew install %s\" % package_name, output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner('test -n \"$(brew ls --versions %s)\"' % package_name, None)\n return exit_code == 0\n\n\nclass PkgTool(BaseTool):\n def 
add_repository(self, repository, repo_key=None):\n raise ConanException(\"PkgTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%spkg update\" % self._sudo_str, output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%spkg install -y %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner(\"pkg info %s\" % package_name, None)\n return exit_code == 0\n\n\nclass PkgUtilTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"PkgUtilTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%spkgutil --catalog\" % self._sudo_str, output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%spkgutil --install --yes %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner('test -n \"`pkgutil --list %s`\"' % package_name, None)\n return exit_code == 0\n\n\nclass ChocolateyTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"ChocolateyTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"choco outdated\", output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"choco install --yes %s\" % package_name, output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner('choco search --local-only --exact %s | '\n 'findstr /c:\"1 packages installed.\"' % package_name, None)\n return exit_code == 0\n\n\nclass PacManTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"PacManTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%spacman -Syyu --noconfirm\" % self._sudo_str, output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%spacman -S --noconfirm %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner(\"pacman -Qi %s\" % package_name, None)\n return exit_code == 0\n\n def get_package_name(self, package, arch, arch_names):\n if arch_names is None:\n arch_names = {\"x86\": \"lib32\"}\n if arch in arch_names:\n return \"%s-%s\" % (arch_names[arch], package)\n return package\n\n\nclass ZypperTool(BaseTool):\n def add_repository(self, repository, repo_key=None):\n raise ConanException(\"ZypperTool::add_repository not implemented\")\n\n def update(self):\n _run(self._runner, \"%szypper --non-interactive ref\" % self._sudo_str, output=self._output)\n\n def install(self, package_name):\n _run(self._runner, \"%szypper --non-interactive in %s\" % (self._sudo_str, package_name),\n output=self._output)\n\n def installed(self, package_name):\n exit_code = self._runner(\"rpm -q %s\" % package_name, None)\n return exit_code == 0\n\n def get_package_name(self, package, arch, arch_names):\n if arch_names is None:\n arch_names = {\"x86\": \"i586\"}\n if arch in arch_names:\n return \"%s.%s\" % (arch_names[arch], package)\n return package\n\n\ndef _run(runner, command, output, accepted_returns=None):\n accepted_returns = accepted_returns or [0, ]\n output.info(\"Running: %s\" % command)\n if runner(command, True) not in accepted_returns:\n raise ConanException(\"Command '%s' failed\" % command)\n", "path": "conans/client/tools/system_pm.py" } ]
diff --git a/conans/client/tools/system_pm.py b/conans/client/tools/system_pm.py index de3b10e4aac..7b5e8d62ce1 100644 --- a/conans/client/tools/system_pm.py +++ b/conans/client/tools/system_pm.py @@ -143,6 +143,9 @@ def _get_package_names(self, packages, arch_names): return parsed_packages return packages + def installed(self, package_name): + return self._tool.installed(package_name) + def _installed(self, packages): if not packages: return True diff --git a/conans/test/unittests/client/tools/system_pm_test.py b/conans/test/unittests/client/tools/system_pm_test.py index 84a8aefdd92..335850ab3af 100644 --- a/conans/test/unittests/client/tools/system_pm_test.py +++ b/conans/test/unittests/client/tools/system_pm_test.py @@ -443,8 +443,10 @@ def system_package_tool_installed_test(self): expected_package = "chocolatey" # The expected should be installed on development/testing machines self.assertTrue(spt._tool.installed(expected_package)) + self.assertTrue(spt.installed(expected_package)) # This package hopefully doesn't exist self.assertFalse(spt._tool.installed("oidfjgesiouhrgioeurhgielurhgaeiorhgioearhgoaeirhg")) + self.assertFalse(spt.installed("oidfjgesiouhrgioeurhgielurhgaeiorhgioearhgoaeirhg")) def system_package_tool_fail_when_not_0_returned_test(self): def get_linux_error_message():
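With the public method available, a recipe's `system_requirements()` can query and install system packages through the documented API alone. A short usage sketch (the package name is only an illustration, not taken from this issue):

```python
from conans import ConanFile, tools


class DemoConan(ConanFile):
    name = "demo"
    version = "0.1"

    def system_requirements(self):
        installer = tools.SystemPackageTool()
        # Public API: no more "Access to a protected member _installed" warning
        # from the recipe linter.
        if not installer.installed("libusb-1.0-0-dev"):
            installer.install("libusb-1.0-0-dev")
```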
readthedocs__readthedocs.org-7582
Upgrade Elasticsearch to 7.x

https://www.elastic.co/blog/elasticsearch-7-0-0-released

Changelog: https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes-7.0.html
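The ES 7 breaking change that directly affects this code (and that the diff below patches in `total_count()`) is the shape of `hits.total`: it is no longer a bare integer but an object with `value` and `relation` fields. A minimal sketch of the difference, using made-up numbers:

```python
# Raw search response bodies, trimmed to the relevant metadata.

# Elasticsearch 6.x: total is a plain integer.
response_6x = {"hits": {"total": 2048, "hits": []}}
total_6x = response_6x["hits"]["total"]              # 2048

# Elasticsearch 7.x: total becomes an object. "relation" is "eq" or "gte";
# by default counts above 10,000 are not tracked exactly, hence "gte".
response_7x = {"hits": {"total": {"value": 2048, "relation": "eq"}, "hits": []}}
total_7x = response_7x["hits"]["total"]["value"]     # 2048

assert total_6x == total_7x
```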
[ { "content": "import logging\nimport re\n\nfrom django.conf import settings\nfrom elasticsearch import Elasticsearch\nfrom elasticsearch_dsl import FacetedSearch, TermsFacet\nfrom elasticsearch_dsl.faceted_search import NestedFacet\nfrom elasticsearch_dsl.query import (\n Bool,\n FunctionScore,\n MultiMatch,\n Nested,\n SimpleQueryString,\n Term,\n Wildcard,\n)\n\nfrom readthedocs.core.utils.extend import SettingsOverrideObject\nfrom readthedocs.search.documents import PageDocument, ProjectDocument\n\nlog = logging.getLogger(__name__)\n\nALL_FACETS = ['project', 'version', 'role_name', 'language', 'index']\n\n\nclass RTDFacetedSearch(FacetedSearch):\n\n \"\"\"Custom wrapper around FacetedSearch.\"\"\"\n\n operators = []\n\n # Sources to be excluded from results.\n excludes = []\n\n _highlight_options = {\n 'encoder': 'html',\n 'number_of_fragments': 1,\n 'pre_tags': ['<span>'],\n 'post_tags': ['</span>'],\n }\n\n def __init__(\n self,\n query=None,\n filters=None,\n projects=None,\n user=None,\n use_advanced_query=True,\n **kwargs,\n ):\n \"\"\"\n Pass in a user in order to filter search results by privacy.\n\n :param projects: A dictionary of project slugs mapped to a `VersionData` object.\n Results are filter with these values.\n\n :param use_advanced_query: If `True` forces to always use\n `SimpleQueryString` for the text query object.\n\n .. warning::\n\n The `self.user` and `self.filter_by_user` attributes\n aren't currently used on the .org, but are used on the .com.\n \"\"\"\n self.user = user\n self.filter_by_user = kwargs.pop('filter_by_user', True)\n self.use_advanced_query = use_advanced_query\n self.projects = projects or {}\n\n # Hack a fix to our broken connection pooling\n # This creates a new connection on every request,\n # but actually works :)\n log.info('Hacking Elastic to fix search connection pooling')\n self.using = Elasticsearch(**settings.ELASTICSEARCH_DSL['default'])\n\n filters = filters or {}\n\n # We may recieve invalid filters\n valid_filters = {\n k: v\n for k, v in filters.items()\n if k in self.facets\n }\n super().__init__(query=query, filters=valid_filters, **kwargs)\n\n def _get_queries(self, *, query, fields):\n \"\"\"\n Get a list of query objects according to the query.\n\n If the query is a *single term* (a single word)\n we try to match partial words and substrings\n (available only with the DEFAULT_TO_FUZZY_SEARCH feature flag).\n\n If the query is a phrase or contains the syntax from a simple query string,\n we use the SimpleQueryString query.\n \"\"\"\n is_single_term = (\n not self.use_advanced_query and\n query and len(query.split()) <= 1 and\n not self._is_advanced_query(query)\n )\n get_queries_function = (\n self._get_single_term_queries\n if is_single_term\n else self._get_text_queries\n )\n\n return get_queries_function(\n query=query,\n fields=fields,\n )\n\n def _get_text_queries(self, *, query, fields):\n \"\"\"\n Returns a list of query objects according to the query.\n\n SimpleQueryString provides a syntax to let advanced users manipulate\n the results explicitly.\n\n We need to search for both \"and\" and \"or\" operators.\n The score of \"and\" should be higher as it satisfies both \"or\" and \"and\".\n\n For valid options, see:\n\n - https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html # noqa\n \"\"\"\n queries = []\n is_advanced_query = self.use_advanced_query or self._is_advanced_query(query)\n for operator in self.operators:\n if is_advanced_query:\n query_string = 
SimpleQueryString(\n query=query,\n fields=fields,\n default_operator=operator,\n )\n else:\n query_string = self._get_fuzzy_query(\n query=query,\n fields=fields,\n operator=operator,\n )\n queries.append(query_string)\n return queries\n\n def _get_single_term_queries(self, query, fields):\n \"\"\"\n Returns a list of query objects for fuzzy and partial results.\n\n We need to search for both \"and\" and \"or\" operators.\n The score of \"and\" should be higher as it satisfies both \"or\" and \"and\".\n\n We use the Wildcard query with the query surrounded by ``*`` to match substrings.\n\n For valid options, see:\n\n - https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-wildcard-query.html # noqa\n \"\"\"\n queries = []\n for operator in self.operators:\n query_string = self._get_fuzzy_query(\n query=query,\n fields=fields,\n operator=operator,\n )\n queries.append(query_string)\n for field in fields:\n # Remove boosting from the field\n field = re.sub(r'\\^.*$', '', field)\n kwargs = {\n field: {'value': f'*{query}*'},\n }\n queries.append(Wildcard(**kwargs))\n return queries\n\n def _get_fuzzy_query(self, *, query, fields, operator):\n \"\"\"\n Returns a query object used for fuzzy results.\n\n For valid options, see:\n\n - https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html\n \"\"\"\n return MultiMatch(\n query=query,\n fields=fields,\n operator=operator,\n fuzziness=\"AUTO:4,6\",\n prefix_length=1,\n )\n\n def _is_advanced_query(self, query):\n \"\"\"\n Check if query looks like to be using the syntax from a simple query string.\n\n .. note::\n\n We don't check if the syntax is valid.\n The tokens used aren't very common in a normal query, so checking if\n the query contains any of them should be enough to determinate if\n it's an advanced query.\n\n Simple query syntax:\n\n https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html#simple-query-string-syntax\n \"\"\"\n tokens = {'+', '|', '-', '\"', '*', '(', ')', '~'}\n query_tokens = set(query)\n return not tokens.isdisjoint(query_tokens)\n\n def query(self, search, query):\n \"\"\"\n Add query part to ``search`` when needed.\n\n Also:\n\n * Adds SimpleQueryString with `self.operators` instead of default query.\n * Adds HTML encoding of results to avoid XSS issues.\n \"\"\"\n search = search.highlight_options(**self._highlight_options)\n search = search.source(excludes=self.excludes)\n\n queries = self._get_queries(\n query=query,\n fields=self.fields,\n )\n\n # run bool query with should, so it returns result where either of the query matches\n bool_query = Bool(should=queries)\n search = search.query(bool_query)\n return search\n\n\nclass ProjectSearchBase(RTDFacetedSearch):\n facets = {'language': TermsFacet(field='language')}\n doc_types = [ProjectDocument]\n index = ProjectDocument._index._name\n fields = ('name^10', 'slug^5', 'description')\n operators = ['and', 'or']\n excludes = ['users', 'language']\n\n\nclass PageSearchBase(RTDFacetedSearch):\n facets = {\n 'project': TermsFacet(field='project'),\n 'version': TermsFacet(field='version'),\n 'role_name': NestedFacet(\n 'domains',\n TermsFacet(field='domains.role_name')\n ),\n }\n doc_types = [PageDocument]\n index = PageDocument._index._name\n\n # boosting for these fields need to be close enough\n # to be re-boosted by the page rank.\n _outer_fields = ['title^1.5']\n _section_fields = ['sections.title^2', 'sections.content']\n _domain_fields = [\n 
'domains.name^1.5',\n 'domains.docstrings',\n ]\n fields = _outer_fields\n\n # need to search for both 'and' and 'or' operations\n # the score of and should be higher as it satisfies both or and and\n operators = ['and', 'or']\n\n excludes = ['rank', 'sections', 'domains', 'commit', 'build']\n\n def total_count(self):\n \"\"\"Returns the total count of results of the current query.\"\"\"\n s = self.build_search()\n\n # setting size=0 so that no results are returned,\n # we are only interested in the total count\n s = s.extra(size=0)\n s = s.execute()\n return s.hits.total\n\n def query(self, search, query):\n \"\"\"\n Manipulates the query to support nested queries and a custom rank for pages.\n\n If `self.projects` was given, we use it to filter the documents that\n match the same project and version.\n \"\"\"\n search = search.highlight_options(**self._highlight_options)\n search = search.source(excludes=self.excludes)\n\n queries = self._get_queries(\n query=query,\n fields=self.fields,\n )\n\n sections_nested_query = self._get_nested_query(\n query=query,\n path='sections',\n fields=self._section_fields,\n )\n\n domains_nested_query = self._get_nested_query(\n query=query,\n path='domains',\n fields=self._domain_fields,\n )\n\n queries.extend([sections_nested_query, domains_nested_query])\n bool_query = Bool(should=queries)\n\n if self.projects:\n versions_query = [\n Bool(\n must=[\n Term(project={'value': project}),\n Term(version={'value': version}),\n ]\n )\n for project, version in self.projects.items()\n ]\n bool_query = Bool(must=[bool_query, Bool(should=versions_query)])\n\n final_query = FunctionScore(\n query=bool_query,\n script_score=self._get_script_score(),\n )\n search = search.query(final_query)\n return search\n\n def _get_nested_query(self, *, query, path, fields):\n \"\"\"Generate a nested query with passed parameters.\"\"\"\n queries = self._get_queries(\n query=query,\n fields=fields,\n )\n\n raw_fields = (\n # Remove boosting from the field\n re.sub(r'\\^.*$', '', field)\n for field in fields\n )\n\n highlight = dict(\n self._highlight_options,\n fields={\n field: {}\n for field in raw_fields\n },\n )\n\n return Nested(\n path=path,\n inner_hits={'highlight': highlight},\n query=Bool(should=queries),\n )\n\n def _get_script_score(self):\n \"\"\"\n Gets an ES script to map the page rank to a valid score weight.\n\n ES expects the rank to be a number greater than 0,\n but users can set this between [-10, +10].\n We map that range to [0.01, 2] (21 possible values).\n\n The first lower rank (0.8) needs to bring the score from the highest boost (sections.title^2)\n close to the lowest boost (title^1.5), that way exact results take priority:\n\n - 2.0 * 0.8 = 1.6 (score close to 1.5, but not lower than it)\n - 1.5 * 0.8 = 1.2 (score lower than 1.5)\n\n The first higher rank (1.2) needs to bring the score from the lowest boost (title^1.5)\n close to the highest boost (sections.title^2), that way exact results take priority:\n\n - 2.0 * 1.3 = 2.6 (score higher thank 2.0)\n - 1.5 * 1.3 = 1.95 (score close to 2.0, but not higher than it)\n\n The next lower and higher ranks need to decrease/increase both scores.\n\n See https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-script-score-query.html#field-value-factor # noqa\n \"\"\"\n ranking = [\n 0.01,\n 0.05,\n 0.1,\n 0.2,\n 0.3,\n 0.4,\n 0.5,\n 0.6,\n 0.7,\n 0.8,\n 1,\n 1.3,\n 1.4,\n 1.5,\n 1.6,\n 1.7,\n 1.8,\n 1.9,\n 1.93,\n 1.96,\n 2,\n ]\n # Each rank maps to a element in the ranking list.\n # -10 
will map to the first element (-10 + 10 = 0) and so on.\n source = \"\"\"\n int rank = doc['rank'].size() == 0 ? 0 : (int) doc['rank'].value;\n return params.ranking[rank + 10] * _score;\n \"\"\"\n return {\n \"script\": {\n \"source\": source,\n \"params\": {\"ranking\": ranking},\n },\n }\n\n\nclass PageSearch(SettingsOverrideObject):\n\n \"\"\"\n Allow this class to be overridden based on CLASS_OVERRIDES setting.\n\n This is primary used on the .com to adjust how we filter our search queries\n \"\"\"\n\n _default_class = PageSearchBase\n\n\nclass ProjectSearch(SettingsOverrideObject):\n\n \"\"\"\n Allow this class to be overridden based on CLASS_OVERRIDES setting.\n\n This is primary used on the .com to adjust how we filter our search queries\n \"\"\"\n\n _default_class = ProjectSearchBase\n", "path": "readthedocs/search/faceted_search.py" } ]
[ { "content": "import logging\nimport re\n\nfrom django.conf import settings\nfrom elasticsearch import Elasticsearch\nfrom elasticsearch_dsl import FacetedSearch, TermsFacet\nfrom elasticsearch_dsl.faceted_search import NestedFacet\nfrom elasticsearch_dsl.query import (\n Bool,\n FunctionScore,\n MultiMatch,\n Nested,\n SimpleQueryString,\n Term,\n Wildcard,\n)\n\nfrom readthedocs.core.utils.extend import SettingsOverrideObject\nfrom readthedocs.search.documents import PageDocument, ProjectDocument\n\nlog = logging.getLogger(__name__)\n\nALL_FACETS = ['project', 'version', 'role_name', 'language', 'index']\n\n\nclass RTDFacetedSearch(FacetedSearch):\n\n \"\"\"Custom wrapper around FacetedSearch.\"\"\"\n\n operators = []\n\n # Sources to be excluded from results.\n excludes = []\n\n _highlight_options = {\n 'encoder': 'html',\n 'number_of_fragments': 1,\n 'pre_tags': ['<span>'],\n 'post_tags': ['</span>'],\n }\n\n def __init__(\n self,\n query=None,\n filters=None,\n projects=None,\n user=None,\n use_advanced_query=True,\n **kwargs,\n ):\n \"\"\"\n Pass in a user in order to filter search results by privacy.\n\n :param projects: A dictionary of project slugs mapped to a `VersionData` object.\n Results are filter with these values.\n\n :param use_advanced_query: If `True` forces to always use\n `SimpleQueryString` for the text query object.\n\n .. warning::\n\n The `self.user` and `self.filter_by_user` attributes\n aren't currently used on the .org, but are used on the .com.\n \"\"\"\n self.user = user\n self.filter_by_user = kwargs.pop('filter_by_user', True)\n self.use_advanced_query = use_advanced_query\n self.projects = projects or {}\n\n # Hack a fix to our broken connection pooling\n # This creates a new connection on every request,\n # but actually works :)\n log.info('Hacking Elastic to fix search connection pooling')\n self.using = Elasticsearch(**settings.ELASTICSEARCH_DSL['default'])\n\n filters = filters or {}\n\n # We may recieve invalid filters\n valid_filters = {\n k: v\n for k, v in filters.items()\n if k in self.facets\n }\n super().__init__(query=query, filters=valid_filters, **kwargs)\n\n def _get_queries(self, *, query, fields):\n \"\"\"\n Get a list of query objects according to the query.\n\n If the query is a *single term* (a single word)\n we try to match partial words and substrings\n (available only with the DEFAULT_TO_FUZZY_SEARCH feature flag).\n\n If the query is a phrase or contains the syntax from a simple query string,\n we use the SimpleQueryString query.\n \"\"\"\n is_single_term = (\n not self.use_advanced_query and\n query and len(query.split()) <= 1 and\n not self._is_advanced_query(query)\n )\n get_queries_function = (\n self._get_single_term_queries\n if is_single_term\n else self._get_text_queries\n )\n\n return get_queries_function(\n query=query,\n fields=fields,\n )\n\n def _get_text_queries(self, *, query, fields):\n \"\"\"\n Returns a list of query objects according to the query.\n\n SimpleQueryString provides a syntax to let advanced users manipulate\n the results explicitly.\n\n We need to search for both \"and\" and \"or\" operators.\n The score of \"and\" should be higher as it satisfies both \"or\" and \"and\".\n\n For valid options, see:\n\n - https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html # noqa\n \"\"\"\n queries = []\n is_advanced_query = self.use_advanced_query or self._is_advanced_query(query)\n for operator in self.operators:\n if is_advanced_query:\n query_string = 
SimpleQueryString(\n query=query,\n fields=fields,\n default_operator=operator,\n )\n else:\n query_string = self._get_fuzzy_query(\n query=query,\n fields=fields,\n operator=operator,\n )\n queries.append(query_string)\n return queries\n\n def _get_single_term_queries(self, query, fields):\n \"\"\"\n Returns a list of query objects for fuzzy and partial results.\n\n We need to search for both \"and\" and \"or\" operators.\n The score of \"and\" should be higher as it satisfies both \"or\" and \"and\".\n\n We use the Wildcard query with the query surrounded by ``*`` to match substrings.\n\n For valid options, see:\n\n - https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-wildcard-query.html # noqa\n \"\"\"\n queries = []\n for operator in self.operators:\n query_string = self._get_fuzzy_query(\n query=query,\n fields=fields,\n operator=operator,\n )\n queries.append(query_string)\n for field in fields:\n # Remove boosting from the field\n field = re.sub(r'\\^.*$', '', field)\n kwargs = {\n field: {'value': f'*{query}*'},\n }\n queries.append(Wildcard(**kwargs))\n return queries\n\n def _get_fuzzy_query(self, *, query, fields, operator):\n \"\"\"\n Returns a query object used for fuzzy results.\n\n For valid options, see:\n\n - https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html\n \"\"\"\n return MultiMatch(\n query=query,\n fields=fields,\n operator=operator,\n fuzziness=\"AUTO:4,6\",\n prefix_length=1,\n )\n\n def _is_advanced_query(self, query):\n \"\"\"\n Check if query looks like to be using the syntax from a simple query string.\n\n .. note::\n\n We don't check if the syntax is valid.\n The tokens used aren't very common in a normal query, so checking if\n the query contains any of them should be enough to determinate if\n it's an advanced query.\n\n Simple query syntax:\n\n https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-simple-query-string-query.html#simple-query-string-syntax\n \"\"\"\n tokens = {'+', '|', '-', '\"', '*', '(', ')', '~'}\n query_tokens = set(query)\n return not tokens.isdisjoint(query_tokens)\n\n def query(self, search, query):\n \"\"\"\n Add query part to ``search`` when needed.\n\n Also:\n\n * Adds SimpleQueryString with `self.operators` instead of default query.\n * Adds HTML encoding of results to avoid XSS issues.\n \"\"\"\n search = search.highlight_options(**self._highlight_options)\n search = search.source(excludes=self.excludes)\n\n queries = self._get_queries(\n query=query,\n fields=self.fields,\n )\n\n # run bool query with should, so it returns result where either of the query matches\n bool_query = Bool(should=queries)\n search = search.query(bool_query)\n return search\n\n\nclass ProjectSearchBase(RTDFacetedSearch):\n facets = {'language': TermsFacet(field='language')}\n doc_types = [ProjectDocument]\n index = ProjectDocument._index._name\n fields = ('name^10', 'slug^5', 'description')\n operators = ['and', 'or']\n excludes = ['users', 'language']\n\n\nclass PageSearchBase(RTDFacetedSearch):\n facets = {\n 'project': TermsFacet(field='project'),\n 'version': TermsFacet(field='version'),\n 'role_name': NestedFacet(\n 'domains',\n TermsFacet(field='domains.role_name')\n ),\n }\n doc_types = [PageDocument]\n index = PageDocument._index._name\n\n # boosting for these fields need to be close enough\n # to be re-boosted by the page rank.\n _outer_fields = ['title^1.5']\n _section_fields = ['sections.title^2', 'sections.content']\n _domain_fields = [\n 
'domains.name^1.5',\n 'domains.docstrings',\n ]\n fields = _outer_fields\n\n # need to search for both 'and' and 'or' operations\n # the score of and should be higher as it satisfies both or and and\n operators = ['and', 'or']\n\n excludes = ['rank', 'sections', 'domains', 'commit', 'build']\n\n def total_count(self):\n \"\"\"Returns the total count of results of the current query.\"\"\"\n s = self.build_search()\n\n # setting size=0 so that no results are returned,\n # we are only interested in the total count\n s = s.extra(size=0)\n s = s.execute()\n return s.hits.total['value']\n\n def query(self, search, query):\n \"\"\"\n Manipulates the query to support nested queries and a custom rank for pages.\n\n If `self.projects` was given, we use it to filter the documents that\n match the same project and version.\n \"\"\"\n search = search.highlight_options(**self._highlight_options)\n search = search.source(excludes=self.excludes)\n\n queries = self._get_queries(\n query=query,\n fields=self.fields,\n )\n\n sections_nested_query = self._get_nested_query(\n query=query,\n path='sections',\n fields=self._section_fields,\n )\n\n domains_nested_query = self._get_nested_query(\n query=query,\n path='domains',\n fields=self._domain_fields,\n )\n\n queries.extend([sections_nested_query, domains_nested_query])\n bool_query = Bool(should=queries)\n\n if self.projects:\n versions_query = [\n Bool(\n must=[\n Term(project={'value': project}),\n Term(version={'value': version}),\n ]\n )\n for project, version in self.projects.items()\n ]\n bool_query = Bool(must=[bool_query, Bool(should=versions_query)])\n\n final_query = FunctionScore(\n query=bool_query,\n script_score=self._get_script_score(),\n )\n search = search.query(final_query)\n return search\n\n def _get_nested_query(self, *, query, path, fields):\n \"\"\"Generate a nested query with passed parameters.\"\"\"\n queries = self._get_queries(\n query=query,\n fields=fields,\n )\n\n raw_fields = (\n # Remove boosting from the field\n re.sub(r'\\^.*$', '', field)\n for field in fields\n )\n\n highlight = dict(\n self._highlight_options,\n fields={\n field: {}\n for field in raw_fields\n },\n )\n\n return Nested(\n path=path,\n inner_hits={'highlight': highlight},\n query=Bool(should=queries),\n )\n\n def _get_script_score(self):\n \"\"\"\n Gets an ES script to map the page rank to a valid score weight.\n\n ES expects the rank to be a number greater than 0,\n but users can set this between [-10, +10].\n We map that range to [0.01, 2] (21 possible values).\n\n The first lower rank (0.8) needs to bring the score from the highest boost (sections.title^2)\n close to the lowest boost (title^1.5), that way exact results take priority:\n\n - 2.0 * 0.8 = 1.6 (score close to 1.5, but not lower than it)\n - 1.5 * 0.8 = 1.2 (score lower than 1.5)\n\n The first higher rank (1.2) needs to bring the score from the lowest boost (title^1.5)\n close to the highest boost (sections.title^2), that way exact results take priority:\n\n - 2.0 * 1.3 = 2.6 (score higher thank 2.0)\n - 1.5 * 1.3 = 1.95 (score close to 2.0, but not higher than it)\n\n The next lower and higher ranks need to decrease/increase both scores.\n\n See https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-script-score-query.html#field-value-factor # noqa\n \"\"\"\n ranking = [\n 0.01,\n 0.05,\n 0.1,\n 0.2,\n 0.3,\n 0.4,\n 0.5,\n 0.6,\n 0.7,\n 0.8,\n 1,\n 1.3,\n 1.4,\n 1.5,\n 1.6,\n 1.7,\n 1.8,\n 1.9,\n 1.93,\n 1.96,\n 2,\n ]\n # Each rank maps to a element in the ranking list.\n 
# -10 will map to the first element (-10 + 10 = 0) and so on.\n source = \"\"\"\n int rank = doc['rank'].size() == 0 ? 0 : (int) doc['rank'].value;\n return params.ranking[rank + 10] * _score;\n \"\"\"\n return {\n \"script\": {\n \"source\": source,\n \"params\": {\"ranking\": ranking},\n },\n }\n\n\nclass PageSearch(SettingsOverrideObject):\n\n \"\"\"\n Allow this class to be overridden based on CLASS_OVERRIDES setting.\n\n This is primary used on the .com to adjust how we filter our search queries\n \"\"\"\n\n _default_class = PageSearchBase\n\n\nclass ProjectSearch(SettingsOverrideObject):\n\n \"\"\"\n Allow this class to be overridden based on CLASS_OVERRIDES setting.\n\n This is primary used on the .com to adjust how we filter our search queries\n \"\"\"\n\n _default_class = ProjectSearchBase\n", "path": "readthedocs/search/faceted_search.py" } ]
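The `_get_script_score` docstring above boils down to a table lookup: the user-facing page rank in [-10, +10] is shifted by 10 to index the `ranking` list, and the selected weight multiplies Elasticsearch's raw `_score`. A minimal Python sketch of that mapping (the `ranking` values are copied from the snippet above; `weighted_score` is an illustrative name, not part of the codebase):

```python
RANKING = [
    0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8,
    1, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 1.93, 1.96, 2,
]

def weighted_score(raw_score: float, rank: int = 0) -> float:
    # Mirrors the Painless script: shift rank from [-10, 10] into [0, 20],
    # then scale the original relevance score by the mapped weight.
    return RANKING[rank + 10] * raw_score

# rank -1 drops a sections.title^2 hit (2.0) to 1.6, still above a title^1.5 hit,
# while rank +1 lifts a title^1.5 hit (1.5) to 1.95, still below 2.0.
assert round(weighted_score(2.0, -1), 2) == 1.6
assert round(weighted_score(1.5, +1), 2) == 1.95
```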
diff --git a/.circleci/config.yml b/.circleci/config.yml index 89119d19ca4..7516f9c8d3a 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -6,8 +6,10 @@ jobs: - image: 'cimg/python:3.6' environment: TOX_POSARGS: '' - - image: 'docker.elastic.co/elasticsearch/elasticsearch:6.8.12' + - image: 'docker.elastic.co/elasticsearch/elasticsearch:7.9.2' name: search + environment: + discovery.type: single-node steps: - checkout - run: git submodule sync diff --git a/readthedocs/search/faceted_search.py b/readthedocs/search/faceted_search.py index 69ba778a429..33e0064bb11 100644 --- a/readthedocs/search/faceted_search.py +++ b/readthedocs/search/faceted_search.py @@ -275,7 +275,7 @@ def total_count(self): # we are only interested in the total count s = s.extra(size=0) s = s.execute() - return s.hits.total + return s.hits.total['value'] def query(self, search, query): """ diff --git a/requirements/pip.txt b/requirements/pip.txt index 08e5d23f969..fbc9599ea50 100644 --- a/requirements/pip.txt +++ b/requirements/pip.txt @@ -52,9 +52,9 @@ django-allauth==0.42.0 # pyup: ignore GitPython==3.1.11 # Search -elasticsearch==6.8.1 # pyup: <7.0.0 -elasticsearch-dsl==6.4.0 # pyup: <7.0 -django-elasticsearch-dsl==6.4.2 # pyup: <7.0 +elasticsearch==7.9.1 # pyup: <8.0.0 +elasticsearch-dsl==7.3.0 # pyup: <8.0 +django-elasticsearch-dsl==7.1.4 # pyup: <8.0 selectolax==0.2.9 # NOTE: this dep can be removed in python 3.7 in favor of ``date.fromisoformat``
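The diff above tracks an Elasticsearch 6→7 behaviour change: in ES 7 a search response's `hits.total` is no longer a bare integer but an object like `{"value": 42, "relation": "eq"}`, which is why `total_count()` now reads `s.hits.total['value']`. A small illustrative sketch of handling both shapes (plain dicts, no live cluster needed):

```python
def total_hits(hits: dict) -> int:
    # ES 6.x: hits["total"] is an int; ES 7.x: {"value": n, "relation": "eq"|"gte"}.
    total = hits["total"]
    return total["value"] if isinstance(total, dict) else total

assert total_hits({"total": 42, "hits": []}) == 42                               # ES 6 shape
assert total_hits({"total": {"value": 42, "relation": "eq"}, "hits": []}) == 42  # ES 7 shape
```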
hylang__hy-358
Allow macros to return None

```
(defmacro foo [])

(foo)
```

This will break because macros do not handle the NoneType yet.
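Macro expansion wraps whatever the macro returns back into a Hy model before splicing it into the tree (`_wrap_value` in `hy/macros.py` below), and the expanded object immediately gets `.replace(tree)` called on it, which is exactly what fails when an empty macro body yields a bare `None`. A minimal Python sketch of the approach taken in the patch further down, adding a `type(None)` entry to the wrapper table, with `HySymbol` as a stand-in for the real model class:

```python
class HySymbol(str):
    """Stand-in for hy.models.symbol.HySymbol, for illustration only."""

# Simplified wrapper table: without the type(None) entry, _wrap_value(None)
# returns None and the subsequent obj.replace(tree) call raises AttributeError.
_wrappers = {
    type(None): lambda _none: HySymbol("None"),
}

def _wrap_value(x):
    wrapper = _wrappers.get(type(x))
    return x if wrapper is None else wrapper(x)

assert _wrap_value(None) == HySymbol("None")
```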
[ { "content": "# Copyright (c) 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom hy.models.expression import HyExpression\nfrom hy.models.string import HyString\nfrom hy.models.symbol import HySymbol\nfrom hy.models.list import HyList\nfrom hy.models.integer import HyInteger\nfrom hy.models.float import HyFloat\nfrom hy.models.complex import HyComplex\nfrom hy.models.dict import HyDict\nfrom hy._compat import str_type\n\nfrom collections import defaultdict\n\n\nCORE_MACROS = [\n \"hy.core.bootstrap\",\n]\n\nEXTRA_MACROS = [\n \"hy.core.macros\",\n]\n\n_hy_macros = defaultdict(dict)\n\n\ndef macro(name):\n \"\"\"Decorator to define a macro called `name`.\n\n This stores the macro `name` in the namespace for the module where it is\n defined.\n\n If the module where it is defined is in `hy.core`, then the macro is stored\n in the default `None` namespace.\n\n This function is called from the `defmacro` special form in the compiler.\n\n \"\"\"\n def _(fn):\n module_name = fn.__module__\n if module_name.startswith(\"hy.core\"):\n module_name = None\n _hy_macros[module_name][name] = fn\n return fn\n return _\n\n\ndef require(source_module, target_module):\n \"\"\"Load the macros from `source_module` in the namespace of\n `target_module`.\n\n This function is called from the `require` special form in the compiler.\n\n \"\"\"\n macros = _hy_macros[source_module]\n refs = _hy_macros[target_module]\n for name, macro in macros.items():\n refs[name] = macro\n\n\n# type -> wrapping function mapping for _wrap_value\n_wrappers = {\n int: HyInteger,\n bool: lambda x: HySymbol(\"True\") if x else HySymbol(\"False\"),\n float: HyFloat,\n complex: HyComplex,\n str_type: HyString,\n dict: lambda d: HyDict(_wrap_value(x) for x in sum(d.items(), ())),\n list: lambda l: HyList(_wrap_value(x) for x in l)\n}\n\n\ndef _wrap_value(x):\n \"\"\"Wrap `x` into the corresponding Hy type.\n\n This allows a macro to return an unquoted expression transparently.\n\n \"\"\"\n wrapper = _wrappers.get(type(x))\n if wrapper is None:\n return x\n else:\n return wrapper(x)\n\n\ndef load_macros(module_name):\n \"\"\"Load the hy builtin macros for module `module_name`.\n\n Modules from `hy.core` can only use the macros from CORE_MACROS.\n Other modules get the macros from CORE_MACROS and EXTRA_MACROS.\n\n \"\"\"\n\n def _import(module, module_name=module_name):\n \"__import__ a module, avoiding recursions\"\n if module != module_name:\n __import__(module)\n\n for module in 
CORE_MACROS:\n _import(module)\n\n if module_name.startswith(\"hy.core\"):\n return\n\n for module in EXTRA_MACROS:\n _import(module)\n\n\ndef macroexpand(tree, module_name):\n \"\"\"Expand the toplevel macros for the `tree`.\n\n Load the macros from the given `module_name`, then expand the (top-level)\n macros in `tree` until it stops changing.\n\n \"\"\"\n load_macros(module_name)\n old = None\n while old != tree:\n old = tree\n tree = macroexpand_1(tree, module_name)\n return tree\n\n\ndef macroexpand_1(tree, module_name):\n \"\"\"Expand the toplevel macro from `tree` once, in the context of\n `module_name`.\"\"\"\n if isinstance(tree, HyExpression):\n if tree == []:\n return tree\n\n fn = tree[0]\n if fn in (\"quote\", \"quasiquote\"):\n return tree\n ntree = HyExpression(tree[:])\n ntree.replace(tree)\n\n if isinstance(fn, HyString):\n m = _hy_macros[module_name].get(fn)\n if m is None:\n m = _hy_macros[None].get(fn)\n if m is not None:\n obj = _wrap_value(m(*ntree[1:]))\n obj.replace(tree)\n return obj\n\n return ntree\n return tree\n", "path": "hy/macros.py" } ]
[ { "content": "# Copyright (c) 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom hy.models.expression import HyExpression\nfrom hy.models.string import HyString\nfrom hy.models.symbol import HySymbol\nfrom hy.models.list import HyList\nfrom hy.models.integer import HyInteger\nfrom hy.models.float import HyFloat\nfrom hy.models.complex import HyComplex\nfrom hy.models.dict import HyDict\nfrom hy._compat import str_type\n\nfrom collections import defaultdict\n\n\nCORE_MACROS = [\n \"hy.core.bootstrap\",\n]\n\nEXTRA_MACROS = [\n \"hy.core.macros\",\n]\n\n_hy_macros = defaultdict(dict)\n\n\ndef macro(name):\n \"\"\"Decorator to define a macro called `name`.\n\n This stores the macro `name` in the namespace for the module where it is\n defined.\n\n If the module where it is defined is in `hy.core`, then the macro is stored\n in the default `None` namespace.\n\n This function is called from the `defmacro` special form in the compiler.\n\n \"\"\"\n def _(fn):\n module_name = fn.__module__\n if module_name.startswith(\"hy.core\"):\n module_name = None\n _hy_macros[module_name][name] = fn\n return fn\n return _\n\n\ndef require(source_module, target_module):\n \"\"\"Load the macros from `source_module` in the namespace of\n `target_module`.\n\n This function is called from the `require` special form in the compiler.\n\n \"\"\"\n macros = _hy_macros[source_module]\n refs = _hy_macros[target_module]\n for name, macro in macros.items():\n refs[name] = macro\n\n\n# type -> wrapping function mapping for _wrap_value\n_wrappers = {\n int: HyInteger,\n bool: lambda x: HySymbol(\"True\") if x else HySymbol(\"False\"),\n float: HyFloat,\n complex: HyComplex,\n str_type: HyString,\n dict: lambda d: HyDict(_wrap_value(x) for x in sum(d.items(), ())),\n list: lambda l: HyList(_wrap_value(x) for x in l),\n type(None): lambda foo: HySymbol(\"None\"),\n}\n\n\ndef _wrap_value(x):\n \"\"\"Wrap `x` into the corresponding Hy type.\n\n This allows a macro to return an unquoted expression transparently.\n\n \"\"\"\n wrapper = _wrappers.get(type(x))\n if wrapper is None:\n return x\n else:\n return wrapper(x)\n\n\ndef load_macros(module_name):\n \"\"\"Load the hy builtin macros for module `module_name`.\n\n Modules from `hy.core` can only use the macros from CORE_MACROS.\n Other modules get the macros from CORE_MACROS and EXTRA_MACROS.\n\n \"\"\"\n\n def _import(module, module_name=module_name):\n \"__import__ a module, avoiding recursions\"\n if module != 
module_name:\n __import__(module)\n\n for module in CORE_MACROS:\n _import(module)\n\n if module_name.startswith(\"hy.core\"):\n return\n\n for module in EXTRA_MACROS:\n _import(module)\n\n\ndef macroexpand(tree, module_name):\n \"\"\"Expand the toplevel macros for the `tree`.\n\n Load the macros from the given `module_name`, then expand the (top-level)\n macros in `tree` until it stops changing.\n\n \"\"\"\n load_macros(module_name)\n old = None\n while old != tree:\n old = tree\n tree = macroexpand_1(tree, module_name)\n return tree\n\n\ndef macroexpand_1(tree, module_name):\n \"\"\"Expand the toplevel macro from `tree` once, in the context of\n `module_name`.\"\"\"\n if isinstance(tree, HyExpression):\n if tree == []:\n return tree\n\n fn = tree[0]\n if fn in (\"quote\", \"quasiquote\"):\n return tree\n ntree = HyExpression(tree[:])\n ntree.replace(tree)\n\n if isinstance(fn, HyString):\n m = _hy_macros[module_name].get(fn)\n if m is None:\n m = _hy_macros[None].get(fn)\n if m is not None:\n obj = _wrap_value(m(*ntree[1:]))\n obj.replace(tree)\n return obj\n\n return ntree\n return tree\n", "path": "hy/macros.py" } ]
diff --git a/hy/macros.py b/hy/macros.py index 20d0caca7..d3f08d88c 100644 --- a/hy/macros.py +++ b/hy/macros.py @@ -84,7 +84,8 @@ def require(source_module, target_module): complex: HyComplex, str_type: HyString, dict: lambda d: HyDict(_wrap_value(x) for x in sum(d.items(), ())), - list: lambda l: HyList(_wrap_value(x) for x in l) + list: lambda l: HyList(_wrap_value(x) for x in l), + type(None): lambda foo: HySymbol("None"), } diff --git a/tests/native_tests/native_macros.hy b/tests/native_tests/native_macros.hy index f5b1b74e1..b9f1cf07d 100644 --- a/tests/native_tests/native_macros.hy +++ b/tests/native_tests/native_macros.hy @@ -34,6 +34,9 @@ (defmacro a-dict [] {1 2}) (assert (= (a-dict) {1 2})) +(defmacro a-none []) +(assert (= (a-none) None)) + ; A macro calling a previously defined function (eval-when-compile (defn foo [x y]
kserve__kserve-1053
Tabular Explainer e2e test failing /kind bug ``` (base) C02YJ034JGH5:~ dsun20$ kubectl logs isvc-explainer-tabular-explainer-default-7cnkj-deployment-4q4hn -n kfserving-ci-e2e-test kfserving-container [I 200828 13:12:28 font_manager:1423] Generating new fontManager, this may take some time... Traceback (most recent call last): File "/usr/local/lib/python3.7/runpy.py", line 183, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/usr/local/lib/python3.7/runpy.py", line 142, in _get_module_details return _get_module_details(pkg_main_name, error) File "/usr/local/lib/python3.7/runpy.py", line 109, in _get_module_details __import__(pkg_name) File "/alibiexplainer/alibiexplainer/__init__.py", line 15, in <module> from .explainer import AlibiExplainer File "/alibiexplainer/alibiexplainer/explainer.py", line 21, in <module> from alibiexplainer.anchor_images import AnchorImages File "/alibiexplainer/alibiexplainer/anchor_images.py", line 17, in <module> import alibi File "/usr/local/lib/python3.7/site-packages/alibi/__init__.py", line 1, in <module> from . import confidence, datasets, explainers, utils File "/usr/local/lib/python3.7/site-packages/alibi/explainers/__init__.py", line 11, in <module> from .kernel_shap import KernelShap File "/usr/local/lib/python3.7/site-packages/alibi/explainers/kernel_shap.py", line 11, in <module> from shap.common import DenseData, DenseDataWithIndex ModuleNotFoundError: No module named 'shap.common' ``` **What did you expect to happen:** **Anything else you would like to add:** [Miscellaneous information that will assist in solving the issue.] **Environment:** - Istio Version: - Knative Version: - KFServing Version: - Kubeflow version: - Kfdef:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm] - Minikube version: - Kubernetes version: (use `kubectl version`): - OS (e.g. from `/etc/os-release`):
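The `ModuleNotFoundError` comes from `alibi` 0.4.0 doing `from shap.common import DenseData, DenseDataWithIndex` (visible in the traceback), while newer `shap` releases no longer ship a `shap.common` module; the patch below therefore pins `shap` in the explainer image. A minimal sketch of the relevant `setup.py` fragment, mirroring the change shown in the diff further down:

```python
from setuptools import setup, find_packages

setup(
    name="alibiexplainer",
    packages=find_packages("alibiexplainer"),
    install_requires=[
        "shap==0.35",      # pinned: a release that still provides shap.common for alibi 0.4.0
        "kfserving>=0.4.0",
        "alibi==0.4.0",
    ],
)
```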
[ { "content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='alibiexplainer',\n version='0.4.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/python/kfserving/alibiexplainer',\n description='Model Explaination Server. \\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"alibiexplainer\"),\n install_requires=[\n \"kfserving>=0.4.0\",\n \"alibi==0.4.0\",\n \"scikit-learn>=0.20.3\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"dill>=0.3.0\",\n \"spacy>=2.1.4\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/alibiexplainer/setup.py" } ]
[ { "content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='alibiexplainer',\n version='0.4.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/python/kfserving/alibiexplainer',\n description='Model Explaination Server. \\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"alibiexplainer\"),\n install_requires=[\n \"shap==0.35\",\n \"kfserving>=0.4.0\",\n \"alibi==0.4.0\",\n \"scikit-learn>=0.20.3\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"dill>=0.3.0\",\n \"spacy>=2.1.4\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/alibiexplainer/setup.py" } ]
diff --git a/python/alibiexplainer/setup.py b/python/alibiexplainer/setup.py index 6dce65a6c55..88dc38638c0 100644 --- a/python/alibiexplainer/setup.py +++ b/python/alibiexplainer/setup.py @@ -32,6 +32,7 @@ python_requires='>=3.6', packages=find_packages("alibiexplainer"), install_requires=[ + "shap==0.35", "kfserving>=0.4.0", "alibi==0.4.0", "scikit-learn>=0.20.3",
pypi__warehouse-6301
[Project-scoped API tokens] aren't available to maintainers **Describe the bug** <!-- A clear and concise description the bug --> When I use a "bot" account with "Maintainer" level access to projects, there are no projects to select from in the form for the token creation. **Expected behavior** <!-- A clear and concise description of what you expected to happen --> Since this "bot" can upload dists using user/password auth, it should also have similar privileges set when using tokens. **To Reproduce** <!-- Steps to reproduce the bug, or a link to PyPI where the bug is visible --> Go to https://pypi.org/manage/account/token and try selecting a project where you have only "Maintainer"-level access, not "Owner". **My Platform** N/A **Additional context** <!-- Add any other context, links, etc. about the feature here. --> N/A
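In the code below, `ProvisionMacaroonViews.project_names` is built from `user_projects(request)["projects_owned"]`, and `user_projects` only keeps roles where `Role.role_name == "Owner"`, so an account whose only role on a project is Maintainer sees an empty project list in the token form even though it can upload with username/password. A hypothetical sketch of one way to widen that property; it leans on `request.user.projects` (already used by `manage_projects` in the same module, which spans every project the user holds a role on) and is illustrative only, not necessarily the shipped fix:

```python
class ProvisionMacaroonViews:  # sketch: only the property under discussion
    def __init__(self, request):
        self.request = request

    @property
    def project_names(self):
        # Include every project the user has any role on (Owner or Maintainer),
        # instead of filtering down to Owner-only via user_projects().
        return sorted(project.name for project in self.request.user.projects)
```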
[ { "content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\nimport io\n\nfrom collections import defaultdict\n\nimport pyqrcode\n\nfrom paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage\nfrom pyramid.httpexceptions import HTTPBadRequest, HTTPNotFound, HTTPSeeOther\nfrom pyramid.response import Response\nfrom pyramid.view import view_config, view_defaults\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import joinedload\nfrom sqlalchemy.orm.exc import NoResultFound\n\nimport warehouse.utils.otp as otp\n\nfrom warehouse.accounts.interfaces import IPasswordBreachedService, IUserService\nfrom warehouse.accounts.models import Email, User\nfrom warehouse.accounts.views import logout\nfrom warehouse.email import (\n send_account_deletion_email,\n send_added_as_collaborator_email,\n send_collaborator_added_email,\n send_email_verification_email,\n send_password_change_email,\n send_primary_email_change_email,\n)\nfrom warehouse.macaroons.interfaces import IMacaroonService\nfrom warehouse.manage.forms import (\n AddEmailForm,\n ChangePasswordForm,\n ChangeRoleForm,\n CreateMacaroonForm,\n CreateRoleForm,\n DeleteMacaroonForm,\n DeleteTOTPForm,\n DeleteWebAuthnForm,\n ProvisionTOTPForm,\n ProvisionWebAuthnForm,\n SaveAccountForm,\n)\nfrom warehouse.packaging.models import File, JournalEntry, Project, Release, Role\nfrom warehouse.utils.http import is_safe_url\nfrom warehouse.utils.paginate import paginate_url_factory\nfrom warehouse.utils.project import confirm_project, destroy_docs, remove_project\n\n\ndef user_projects(request):\n \"\"\" Return all the projects for which the user is a sole owner \"\"\"\n projects_owned = (\n request.db.query(Project.id)\n .join(Role.project)\n .filter(Role.role_name == \"Owner\", Role.user == request.user)\n .subquery()\n )\n\n with_sole_owner = (\n request.db.query(Role.project_id)\n .join(projects_owned)\n .filter(Role.role_name == \"Owner\")\n .group_by(Role.project_id)\n .having(func.count(Role.project_id) == 1)\n .subquery()\n )\n\n return {\n \"projects_owned\": (\n request.db.query(Project)\n .join(projects_owned, Project.id == projects_owned.c.id)\n .order_by(Project.name)\n .all()\n ),\n \"projects_sole_owned\": (\n request.db.query(Project).join(with_sole_owner).order_by(Project.name).all()\n ),\n }\n\n\n@view_defaults(\n route_name=\"manage.account\",\n renderer=\"manage/account.html\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n permission=\"manage:user\",\n)\nclass ManageAccountViews:\n def __init__(self, request):\n self.request = request\n self.user_service = request.find_service(IUserService, context=None)\n self.breach_service = request.find_service(\n IPasswordBreachedService, context=None\n )\n\n @property\n def active_projects(self):\n return user_projects(request=self.request)[\"projects_sole_owned\"]\n\n @property\n def default_response(self):\n return {\n \"save_account_form\": SaveAccountForm(name=self.request.user.name),\n \"add_email_form\": AddEmailForm(\n 
user_service=self.user_service, user_id=self.request.user.id\n ),\n \"change_password_form\": ChangePasswordForm(\n user_service=self.user_service, breach_service=self.breach_service\n ),\n \"active_projects\": self.active_projects,\n }\n\n @view_config(request_method=\"GET\")\n def manage_account(self):\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=SaveAccountForm.__params__)\n def save_account(self):\n form = SaveAccountForm(self.request.POST)\n\n if form.validate():\n self.user_service.update_user(self.request.user.id, **form.data)\n self.request.session.flash(\"Account details updated\", queue=\"success\")\n\n return {**self.default_response, \"save_account_form\": form}\n\n @view_config(request_method=\"POST\", request_param=AddEmailForm.__params__)\n def add_email(self):\n form = AddEmailForm(\n self.request.POST,\n user_service=self.user_service,\n user_id=self.request.user.id,\n )\n\n if form.validate():\n email = self.user_service.add_email(self.request.user.id, form.email.data)\n\n send_email_verification_email(self.request, (self.request.user, email))\n\n self.request.session.flash(\n f\"Email {email.email} added - check your email for \"\n + \"a verification link\",\n queue=\"success\",\n )\n return self.default_response\n\n return {**self.default_response, \"add_email_form\": form}\n\n @view_config(request_method=\"POST\", request_param=[\"delete_email_id\"])\n def delete_email(self):\n try:\n email = (\n self.request.db.query(Email)\n .filter(\n Email.id == self.request.POST[\"delete_email_id\"],\n Email.user_id == self.request.user.id,\n )\n .one()\n )\n except NoResultFound:\n self.request.session.flash(\"Email address not found\", queue=\"error\")\n return self.default_response\n\n if email.primary:\n self.request.session.flash(\n \"Cannot remove primary email address\", queue=\"error\"\n )\n else:\n self.request.user.emails.remove(email)\n self.request.session.flash(\n f\"Email address {email.email} removed\", queue=\"success\"\n )\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=[\"primary_email_id\"])\n def change_primary_email(self):\n previous_primary_email = self.request.user.primary_email\n try:\n new_primary_email = (\n self.request.db.query(Email)\n .filter(\n Email.user_id == self.request.user.id,\n Email.id == self.request.POST[\"primary_email_id\"],\n Email.verified.is_(True),\n )\n .one()\n )\n except NoResultFound:\n self.request.session.flash(\"Email address not found\", queue=\"error\")\n return self.default_response\n\n self.request.db.query(Email).filter(\n Email.user_id == self.request.user.id, Email.primary.is_(True)\n ).update(values={\"primary\": False})\n\n new_primary_email.primary = True\n\n self.request.session.flash(\n f\"Email address {new_primary_email.email} set as primary\", queue=\"success\"\n )\n\n if previous_primary_email is not None:\n send_primary_email_change_email(\n self.request, (self.request.user, previous_primary_email)\n )\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=[\"reverify_email_id\"])\n def reverify_email(self):\n try:\n email = (\n self.request.db.query(Email)\n .filter(\n Email.id == self.request.POST[\"reverify_email_id\"],\n Email.user_id == self.request.user.id,\n )\n .one()\n )\n except NoResultFound:\n self.request.session.flash(\"Email address not found\", queue=\"error\")\n return self.default_response\n\n if email.verified:\n self.request.session.flash(\"Email is already verified\", 
queue=\"error\")\n else:\n send_email_verification_email(self.request, (self.request.user, email))\n\n self.request.session.flash(\n f\"Verification email for {email.email} resent\", queue=\"success\"\n )\n\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=ChangePasswordForm.__params__)\n def change_password(self):\n form = ChangePasswordForm(\n **self.request.POST,\n username=self.request.user.username,\n full_name=self.request.user.name,\n email=self.request.user.email,\n user_service=self.user_service,\n breach_service=self.breach_service,\n check_password_metrics_tags=[\"method:new_password\"],\n )\n\n if form.validate():\n self.user_service.update_user(\n self.request.user.id, password=form.new_password.data\n )\n send_password_change_email(self.request, self.request.user)\n self.request.session.flash(\"Password updated\", queue=\"success\")\n\n return {**self.default_response, \"change_password_form\": form}\n\n @view_config(request_method=\"POST\", request_param=[\"confirm_username\"])\n def delete_account(self):\n username = self.request.params.get(\"confirm_username\")\n\n if not username:\n self.request.session.flash(\"Confirm the request\", queue=\"error\")\n return self.default_response\n\n if username != self.request.user.username:\n self.request.session.flash(\n f\"Could not delete account - {username!r} is not the same as \"\n f\"{self.request.user.username!r}\",\n queue=\"error\",\n )\n return self.default_response\n\n if self.active_projects:\n self.request.session.flash(\n \"Cannot delete account with active project ownerships\", queue=\"error\"\n )\n return self.default_response\n\n # Update all journals to point to `deleted-user` instead\n deleted_user = (\n self.request.db.query(User).filter(User.username == \"deleted-user\").one()\n )\n\n journals = (\n self.request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.submitted_by == self.request.user)\n .all()\n )\n\n for journal in journals:\n journal.submitted_by = deleted_user\n\n # Send a notification email\n send_account_deletion_email(self.request, self.request.user)\n\n # Actually delete the user\n self.request.db.delete(self.request.user)\n\n return logout(self.request)\n\n\n@view_defaults(\n route_name=\"manage.account.totp-provision\",\n renderer=\"manage/account/totp-provision.html\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n permission=\"manage:user\",\n http_cache=0,\n)\nclass ProvisionTOTPViews:\n def __init__(self, request):\n self.request = request\n self.user_service = request.find_service(IUserService, context=None)\n\n @property\n def default_response(self):\n totp_secret = self.request.session.get_totp_secret()\n return {\n \"provision_totp_secret\": base64.b32encode(totp_secret).decode(),\n \"provision_totp_form\": ProvisionTOTPForm(totp_secret=totp_secret),\n \"provision_totp_uri\": otp.generate_totp_provisioning_uri(\n totp_secret,\n self.request.user.username,\n issuer_name=self.request.registry.settings[\"site.name\"],\n ),\n }\n\n @view_config(route_name=\"manage.account.totp-provision.image\", request_method=\"GET\")\n def generate_totp_qr(self):\n if not self.request.user.has_primary_verified_email:\n self.request.session.flash(\n \"Verify your email to modify two factor authentication\", queue=\"error\"\n )\n return Response(status=403)\n\n totp_secret = self.user_service.get_totp_secret(self.request.user.id)\n if totp_secret:\n return Response(status=403)\n\n totp_qr = 
pyqrcode.create(self.default_response[\"provision_totp_uri\"])\n qr_buffer = io.BytesIO()\n totp_qr.svg(qr_buffer, scale=5)\n\n return Response(content_type=\"image/svg+xml\", body=qr_buffer.getvalue())\n\n @view_config(request_method=\"GET\")\n def totp_provision(self):\n if not self.request.user.has_primary_verified_email:\n self.request.session.flash(\n \"Verify your email to modify two factor authentication\", queue=\"error\"\n )\n return Response(status=403)\n\n totp_secret = self.user_service.get_totp_secret(self.request.user.id)\n if totp_secret:\n self.request.session.flash(\n \"Account cannot be linked to more than one authentication \"\n \"application at a time\",\n queue=\"error\",\n )\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=ProvisionTOTPForm.__params__)\n def validate_totp_provision(self):\n if not self.request.user.has_primary_verified_email:\n self.request.session.flash(\n \"Verify your email to modify two factor authentication\", queue=\"error\"\n )\n return Response(status=403)\n\n totp_secret = self.user_service.get_totp_secret(self.request.user.id)\n if totp_secret:\n self.request.session.flash(\n \"Account cannot be linked to more than one authentication \"\n \"application at a time\",\n queue=\"error\",\n )\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n form = ProvisionTOTPForm(\n **self.request.POST, totp_secret=self.request.session.get_totp_secret()\n )\n\n if form.validate():\n self.user_service.update_user(\n self.request.user.id, totp_secret=self.request.session.get_totp_secret()\n )\n\n self.request.session.clear_totp_secret()\n self.request.session.flash(\n \"Authentication application successfully set up\", queue=\"success\"\n )\n\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n return {**self.default_response, \"provision_totp_form\": form}\n\n @view_config(request_method=\"POST\", request_param=DeleteTOTPForm.__params__)\n def delete_totp(self):\n if not self.request.user.has_primary_verified_email:\n self.request.session.flash(\n \"Verify your email to modify two factor authentication\", queue=\"error\"\n )\n return Response(status=403)\n\n totp_secret = self.user_service.get_totp_secret(self.request.user.id)\n if not totp_secret:\n self.request.session.flash(\n \"There is no authentication application to delete\", queue=\"error\"\n )\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n form = DeleteTOTPForm(\n **self.request.POST,\n username=self.request.user.username,\n user_service=self.user_service,\n )\n\n if form.validate():\n self.user_service.update_user(self.request.user.id, totp_secret=None)\n self.request.session.flash(\n \"Authentication application removed from PyPI. 
\"\n \"Remember to remove PyPI from your application.\",\n queue=\"success\",\n )\n else:\n self.request.session.flash(\"Invalid credentials\", queue=\"error\")\n\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n\n@view_defaults(\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n permission=\"manage:user\",\n http_cache=0,\n)\nclass ProvisionWebAuthnViews:\n def __init__(self, request):\n self.request = request\n self.user_service = request.find_service(IUserService, context=None)\n\n @view_config(\n request_method=\"GET\",\n route_name=\"manage.account.webauthn-provision\",\n renderer=\"manage/account/webauthn-provision.html\",\n )\n def webauthn_provision(self):\n return {}\n\n @view_config(\n request_method=\"GET\",\n route_name=\"manage.account.webauthn-provision.options\",\n renderer=\"json\",\n )\n def webauthn_provision_options(self):\n return self.user_service.get_webauthn_credential_options(\n self.request.user.id,\n challenge=self.request.session.get_webauthn_challenge(),\n rp_name=self.request.registry.settings[\"site.name\"],\n rp_id=self.request.domain,\n icon_url=self.request.registry.settings.get(\n \"warehouse.domain\", self.request.domain\n ),\n )\n\n @view_config(\n request_method=\"POST\",\n request_param=ProvisionWebAuthnForm.__params__,\n route_name=\"manage.account.webauthn-provision.validate\",\n renderer=\"json\",\n )\n def validate_webauthn_provision(self):\n form = ProvisionWebAuthnForm(\n **self.request.POST,\n user_service=self.user_service,\n user_id=self.request.user.id,\n challenge=self.request.session.get_webauthn_challenge(),\n rp_id=self.request.domain,\n origin=self.request.host_url,\n )\n\n self.request.session.clear_webauthn_challenge()\n\n if form.validate():\n self.user_service.add_webauthn(\n self.request.user.id,\n label=form.label.data,\n credential_id=form.validated_credential.credential_id.decode(),\n public_key=form.validated_credential.public_key.decode(),\n sign_count=form.validated_credential.sign_count,\n )\n self.request.session.flash(\n \"Security device successfully set up\", queue=\"success\"\n )\n return {\"success\": \"Security device successfully set up\"}\n\n errors = [\n str(error) for error_list in form.errors.values() for error in error_list\n ]\n return {\"fail\": {\"errors\": errors}}\n\n @view_config(\n request_method=\"POST\",\n request_param=DeleteWebAuthnForm.__params__,\n route_name=\"manage.account.webauthn-provision.delete\",\n )\n def delete_webauthn(self):\n if len(self.request.user.webauthn) == 0:\n self.request.session.flash(\n \"There is no security device to delete\", queue=\"error\"\n )\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n form = DeleteWebAuthnForm(\n **self.request.POST,\n username=self.request.user.username,\n user_service=self.user_service,\n user_id=self.request.user.id,\n )\n\n if form.validate():\n self.request.user.webauthn.remove(form.webauthn)\n self.request.session.flash(\"Security device removed\", queue=\"success\")\n else:\n self.request.session.flash(\"Invalid credentials\", queue=\"error\")\n\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n\n@view_defaults(\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n permission=\"manage:user\",\n renderer=\"manage/token.html\",\n route_name=\"manage.account.token\",\n)\nclass ProvisionMacaroonViews:\n def __init__(self, request):\n self.request = request\n self.user_service = request.find_service(IUserService, context=None)\n self.macaroon_service = 
request.find_service(IMacaroonService, context=None)\n\n @property\n def project_names(self):\n projects = user_projects(self.request)[\"projects_owned\"]\n return [project.name for project in projects]\n\n @property\n def default_response(self):\n return {\n \"project_names\": self.project_names,\n \"create_macaroon_form\": CreateMacaroonForm(\n user_id=self.request.user.id,\n macaroon_service=self.macaroon_service,\n project_names=self.project_names,\n ),\n \"delete_macaroon_form\": DeleteMacaroonForm(\n macaroon_service=self.macaroon_service\n ),\n }\n\n @view_config(request_method=\"GET\")\n def manage_macaroons(self):\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=CreateMacaroonForm.__params__)\n def create_macaroon(self):\n if not self.request.user.has_primary_verified_email:\n self.request.session.flash(\n \"Verify your email to create an API token.\", queue=\"error\"\n )\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n form = CreateMacaroonForm(\n **self.request.POST,\n user_id=self.request.user.id,\n macaroon_service=self.macaroon_service,\n project_names=self.project_names,\n )\n\n response = {**self.default_response}\n if form.validate():\n serialized_macaroon, macaroon = self.macaroon_service.create_macaroon(\n location=self.request.domain,\n user_id=self.request.user.id,\n description=form.description.data,\n caveats={\"permissions\": form.validated_scope, \"version\": 1},\n )\n response.update(serialized_macaroon=serialized_macaroon, macaroon=macaroon)\n\n return {**response, \"create_macaroon_form\": form}\n\n @view_config(request_method=\"POST\", request_param=DeleteMacaroonForm.__params__)\n def delete_macaroon(self):\n form = DeleteMacaroonForm(\n **self.request.POST, macaroon_service=self.macaroon_service\n )\n\n if form.validate():\n description = self.macaroon_service.find_macaroon(\n form.macaroon_id.data\n ).description\n self.macaroon_service.delete_macaroon(form.macaroon_id.data)\n self.request.session.flash(\n f\"Deleted API token '{description}'.\", queue=\"success\"\n )\n\n redirect_to = self.request.referer\n if not is_safe_url(redirect_to, host=self.request.host):\n redirect_to = self.request.route_path(\"manage.account\")\n return HTTPSeeOther(redirect_to)\n\n\n@view_config(\n route_name=\"manage.projects\",\n renderer=\"manage/projects.html\",\n uses_session=True,\n permission=\"manage:user\",\n)\ndef manage_projects(request):\n def _key(project):\n if project.releases:\n return project.releases[0].created\n return project.created\n\n all_user_projects = user_projects(request)\n projects_owned = set(\n project.name for project in all_user_projects[\"projects_owned\"]\n )\n projects_sole_owned = set(\n project.name for project in all_user_projects[\"projects_sole_owned\"]\n )\n\n return {\n \"projects\": sorted(request.user.projects, key=_key, reverse=True),\n \"projects_owned\": projects_owned,\n \"projects_sole_owned\": projects_sole_owned,\n }\n\n\n@view_config(\n route_name=\"manage.project.settings\",\n context=Project,\n renderer=\"manage/settings.html\",\n uses_session=True,\n permission=\"manage:project\",\n)\ndef manage_project_settings(project, request):\n return {\"project\": project}\n\n\n@view_config(\n route_name=\"manage.project.delete_project\",\n context=Project,\n uses_session=True,\n require_methods=[\"POST\"],\n permission=\"manage:project\",\n)\ndef delete_project(project, request):\n confirm_project(project, request, fail_route=\"manage.project.settings\")\n 
remove_project(project, request)\n\n return HTTPSeeOther(request.route_path(\"manage.projects\"))\n\n\n@view_config(\n route_name=\"manage.project.destroy_docs\",\n context=Project,\n uses_session=True,\n require_methods=[\"POST\"],\n permission=\"manage:project\",\n)\ndef destroy_project_docs(project, request):\n confirm_project(project, request, fail_route=\"manage.project.documentation\")\n destroy_docs(project, request)\n\n return HTTPSeeOther(\n request.route_path(\n \"manage.project.documentation\", project_name=project.normalized_name\n )\n )\n\n\n@view_config(\n route_name=\"manage.project.releases\",\n context=Project,\n renderer=\"manage/releases.html\",\n uses_session=True,\n permission=\"manage:project\",\n)\ndef manage_project_releases(project, request):\n return {\"project\": project}\n\n\n@view_defaults(\n route_name=\"manage.project.release\",\n context=Release,\n renderer=\"manage/release.html\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n permission=\"manage:project\",\n)\nclass ManageProjectRelease:\n def __init__(self, release, request):\n self.release = release\n self.request = request\n\n @view_config(request_method=\"GET\")\n def manage_project_release(self):\n return {\n \"project\": self.release.project,\n \"release\": self.release,\n \"files\": self.release.files.all(),\n }\n\n @view_config(request_method=\"POST\", request_param=[\"confirm_version\"])\n def delete_project_release(self):\n version = self.request.POST.get(\"confirm_version\")\n if not version:\n self.request.session.flash(\"Confirm the request\", queue=\"error\")\n return HTTPSeeOther(\n self.request.route_path(\n \"manage.project.release\",\n project_name=self.release.project.name,\n version=self.release.version,\n )\n )\n\n if version != self.release.version:\n self.request.session.flash(\n \"Could not delete release - \"\n + f\"{version!r} is not the same as {self.release.version!r}\",\n queue=\"error\",\n )\n return HTTPSeeOther(\n self.request.route_path(\n \"manage.project.release\",\n project_name=self.release.project.name,\n version=self.release.version,\n )\n )\n\n self.request.db.add(\n JournalEntry(\n name=self.release.project.name,\n action=\"remove release\",\n version=self.release.version,\n submitted_by=self.request.user,\n submitted_from=self.request.remote_addr,\n )\n )\n\n self.request.db.delete(self.release)\n\n self.request.session.flash(\n f\"Deleted release {self.release.version!r}\", queue=\"success\"\n )\n\n return HTTPSeeOther(\n self.request.route_path(\n \"manage.project.releases\", project_name=self.release.project.name\n )\n )\n\n @view_config(\n request_method=\"POST\", request_param=[\"confirm_project_name\", \"file_id\"]\n )\n def delete_project_release_file(self):\n def _error(message):\n self.request.session.flash(message, queue=\"error\")\n return HTTPSeeOther(\n self.request.route_path(\n \"manage.project.release\",\n project_name=self.release.project.name,\n version=self.release.version,\n )\n )\n\n project_name = self.request.POST.get(\"confirm_project_name\")\n\n if not project_name:\n return _error(\"Confirm the request\")\n\n try:\n release_file = (\n self.request.db.query(File)\n .filter(\n File.release == self.release,\n File.id == self.request.POST.get(\"file_id\"),\n )\n .one()\n )\n except NoResultFound:\n return _error(\"Could not find file\")\n\n if project_name != self.release.project.name:\n return _error(\n \"Could not delete file - \" + f\"{project_name!r} is not the same as \"\n f\"{self.release.project.name!r}\"\n )\n\n 
self.request.db.add(\n JournalEntry(\n name=self.release.project.name,\n action=f\"remove file {release_file.filename}\",\n version=self.release.version,\n submitted_by=self.request.user,\n submitted_from=self.request.remote_addr,\n )\n )\n\n self.request.db.delete(release_file)\n\n self.request.session.flash(\n f\"Deleted file {release_file.filename!r}\", queue=\"success\"\n )\n\n return HTTPSeeOther(\n self.request.route_path(\n \"manage.project.release\",\n project_name=self.release.project.name,\n version=self.release.version,\n )\n )\n\n\n@view_config(\n route_name=\"manage.project.roles\",\n context=Project,\n renderer=\"manage/roles.html\",\n uses_session=True,\n require_methods=False,\n permission=\"manage:project\",\n)\ndef manage_project_roles(project, request, _form_class=CreateRoleForm):\n user_service = request.find_service(IUserService, context=None)\n form = _form_class(request.POST, user_service=user_service)\n\n if request.method == \"POST\" and form.validate():\n username = form.username.data\n role_name = form.role_name.data\n userid = user_service.find_userid(username)\n user = user_service.get_user(userid)\n\n if request.db.query(\n request.db.query(Role)\n .filter(\n Role.user == user, Role.project == project, Role.role_name == role_name\n )\n .exists()\n ).scalar():\n request.session.flash(\n f\"User '{username}' already has {role_name} role for project\",\n queue=\"error\",\n )\n elif user.primary_email is None or not user.primary_email.verified:\n request.session.flash(\n f\"User '{username}' does not have a verified primary email \"\n f\"address and cannot be added as a {role_name} for project\",\n queue=\"error\",\n )\n else:\n request.db.add(\n Role(user=user, project=project, role_name=form.role_name.data)\n )\n request.db.add(\n JournalEntry(\n name=project.name,\n action=f\"add {role_name} {username}\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n\n owner_roles = (\n request.db.query(Role)\n .join(Role.user)\n .filter(Role.role_name == \"Owner\", Role.project == project)\n )\n owner_users = {owner.user for owner in owner_roles}\n\n # Don't send to the owner that added the new role\n owner_users.discard(request.user)\n\n # Don't send owners email to new user if they are now an owner\n owner_users.discard(user)\n\n send_collaborator_added_email(\n request,\n owner_users,\n user=user,\n submitter=request.user,\n project_name=project.name,\n role=form.role_name.data,\n )\n\n send_added_as_collaborator_email(\n request,\n user,\n submitter=request.user,\n project_name=project.name,\n role=form.role_name.data,\n )\n\n request.session.flash(\n f\"Added collaborator '{form.username.data}'\", queue=\"success\"\n )\n form = _form_class(user_service=user_service)\n\n roles = request.db.query(Role).join(User).filter(Role.project == project).all()\n\n # TODO: The following lines are a hack to handle multiple roles for a\n # single user and should be removed when fixing GH-2745\n roles_by_user = defaultdict(list)\n for role in roles:\n roles_by_user[role.user.username].append(role)\n\n return {\"project\": project, \"roles_by_user\": roles_by_user, \"form\": form}\n\n\n@view_config(\n route_name=\"manage.project.change_role\",\n context=Project,\n uses_session=True,\n require_methods=[\"POST\"],\n permission=\"manage:project\",\n)\ndef change_project_role(project, request, _form_class=ChangeRoleForm):\n # TODO: This view was modified to handle deleting multiple roles for a\n # single user and should be updated when fixing GH-2745\n\n form = 
_form_class(request.POST)\n\n if form.validate():\n role_ids = request.POST.getall(\"role_id\")\n\n if len(role_ids) > 1:\n # This user has more than one role, so just delete all the ones\n # that aren't what we want.\n #\n # TODO: This branch should be removed when fixing GH-2745.\n roles = (\n request.db.query(Role)\n .join(User)\n .filter(\n Role.id.in_(role_ids),\n Role.project == project,\n Role.role_name != form.role_name.data,\n )\n .all()\n )\n removing_self = any(\n role.role_name == \"Owner\" and role.user == request.user\n for role in roles\n )\n if removing_self:\n request.session.flash(\"Cannot remove yourself as Owner\", queue=\"error\")\n else:\n for role in roles:\n request.db.delete(role)\n request.db.add(\n JournalEntry(\n name=project.name,\n action=f\"remove {role.role_name} {role.user.username}\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n request.session.flash(\"Changed role\", queue=\"success\")\n else:\n # This user only has one role, so get it and change the type.\n try:\n role = (\n request.db.query(Role)\n .join(User)\n .filter(\n Role.id == request.POST.get(\"role_id\"), Role.project == project\n )\n .one()\n )\n if role.role_name == \"Owner\" and role.user == request.user:\n request.session.flash(\n \"Cannot remove yourself as Owner\", queue=\"error\"\n )\n else:\n request.db.add(\n JournalEntry(\n name=project.name,\n action=\"change {} {} to {}\".format(\n role.role_name, role.user.username, form.role_name.data\n ),\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n role.role_name = form.role_name.data\n request.session.flash(\"Changed role\", queue=\"success\")\n except NoResultFound:\n request.session.flash(\"Could not find role\", queue=\"error\")\n\n return HTTPSeeOther(\n request.route_path(\"manage.project.roles\", project_name=project.name)\n )\n\n\n@view_config(\n route_name=\"manage.project.delete_role\",\n context=Project,\n uses_session=True,\n require_methods=[\"POST\"],\n permission=\"manage:project\",\n)\ndef delete_project_role(project, request):\n # TODO: This view was modified to handle deleting multiple roles for a\n # single user and should be updated when fixing GH-2745\n\n roles = (\n request.db.query(Role)\n .join(User)\n .filter(Role.id.in_(request.POST.getall(\"role_id\")), Role.project == project)\n .all()\n )\n removing_self = any(\n role.role_name == \"Owner\" and role.user == request.user for role in roles\n )\n\n if not roles:\n request.session.flash(\"Could not find role\", queue=\"error\")\n elif removing_self:\n request.session.flash(\"Cannot remove yourself as Owner\", queue=\"error\")\n else:\n for role in roles:\n request.db.delete(role)\n request.db.add(\n JournalEntry(\n name=project.name,\n action=f\"remove {role.role_name} {role.user.username}\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n request.session.flash(\"Removed role\", queue=\"success\")\n\n return HTTPSeeOther(\n request.route_path(\"manage.project.roles\", project_name=project.name)\n )\n\n\n@view_config(\n route_name=\"manage.project.history\",\n context=Project,\n renderer=\"manage/history.html\",\n uses_session=True,\n permission=\"manage:project\",\n)\ndef manage_project_history(project, request):\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\")\n\n journals_query = (\n request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.name == 
project.name)\n .order_by(JournalEntry.submitted_date.desc(), JournalEntry.id.desc())\n )\n\n journals = SQLAlchemyORMPage(\n journals_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n if journals.page_count and page_num > journals.page_count:\n raise HTTPNotFound\n\n return {\"project\": project, \"journals\": journals}\n\n\n@view_config(\n route_name=\"manage.project.documentation\",\n context=Project,\n renderer=\"manage/documentation.html\",\n uses_session=True,\n permission=\"manage:project\",\n)\ndef manage_project_documentation(project, request):\n return {\"project\": project}\n", "path": "warehouse/manage/views.py" } ]
[ { "content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\nimport io\n\nfrom collections import defaultdict\n\nimport pyqrcode\n\nfrom paginate_sqlalchemy import SqlalchemyOrmPage as SQLAlchemyORMPage\nfrom pyramid.httpexceptions import HTTPBadRequest, HTTPNotFound, HTTPSeeOther\nfrom pyramid.response import Response\nfrom pyramid.view import view_config, view_defaults\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import joinedload\nfrom sqlalchemy.orm.exc import NoResultFound\n\nimport warehouse.utils.otp as otp\n\nfrom warehouse.accounts.interfaces import IPasswordBreachedService, IUserService\nfrom warehouse.accounts.models import Email, User\nfrom warehouse.accounts.views import logout\nfrom warehouse.email import (\n send_account_deletion_email,\n send_added_as_collaborator_email,\n send_collaborator_added_email,\n send_email_verification_email,\n send_password_change_email,\n send_primary_email_change_email,\n)\nfrom warehouse.macaroons.interfaces import IMacaroonService\nfrom warehouse.manage.forms import (\n AddEmailForm,\n ChangePasswordForm,\n ChangeRoleForm,\n CreateMacaroonForm,\n CreateRoleForm,\n DeleteMacaroonForm,\n DeleteTOTPForm,\n DeleteWebAuthnForm,\n ProvisionTOTPForm,\n ProvisionWebAuthnForm,\n SaveAccountForm,\n)\nfrom warehouse.packaging.models import File, JournalEntry, Project, Release, Role\nfrom warehouse.utils.http import is_safe_url\nfrom warehouse.utils.paginate import paginate_url_factory\nfrom warehouse.utils.project import confirm_project, destroy_docs, remove_project\n\n\ndef user_projects(request):\n \"\"\" Return all the projects for which the user is a sole owner \"\"\"\n projects_owned = (\n request.db.query(Project.id)\n .join(Role.project)\n .filter(Role.role_name == \"Owner\", Role.user == request.user)\n .subquery()\n )\n\n with_sole_owner = (\n request.db.query(Role.project_id)\n .join(projects_owned)\n .filter(Role.role_name == \"Owner\")\n .group_by(Role.project_id)\n .having(func.count(Role.project_id) == 1)\n .subquery()\n )\n\n return {\n \"projects_owned\": (\n request.db.query(Project)\n .join(projects_owned, Project.id == projects_owned.c.id)\n .order_by(Project.name)\n .all()\n ),\n \"projects_sole_owned\": (\n request.db.query(Project).join(with_sole_owner).order_by(Project.name).all()\n ),\n }\n\n\n@view_defaults(\n route_name=\"manage.account\",\n renderer=\"manage/account.html\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n permission=\"manage:user\",\n)\nclass ManageAccountViews:\n def __init__(self, request):\n self.request = request\n self.user_service = request.find_service(IUserService, context=None)\n self.breach_service = request.find_service(\n IPasswordBreachedService, context=None\n )\n\n @property\n def active_projects(self):\n return user_projects(request=self.request)[\"projects_sole_owned\"]\n\n @property\n def default_response(self):\n return {\n \"save_account_form\": SaveAccountForm(name=self.request.user.name),\n \"add_email_form\": AddEmailForm(\n 
user_service=self.user_service, user_id=self.request.user.id\n ),\n \"change_password_form\": ChangePasswordForm(\n user_service=self.user_service, breach_service=self.breach_service\n ),\n \"active_projects\": self.active_projects,\n }\n\n @view_config(request_method=\"GET\")\n def manage_account(self):\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=SaveAccountForm.__params__)\n def save_account(self):\n form = SaveAccountForm(self.request.POST)\n\n if form.validate():\n self.user_service.update_user(self.request.user.id, **form.data)\n self.request.session.flash(\"Account details updated\", queue=\"success\")\n\n return {**self.default_response, \"save_account_form\": form}\n\n @view_config(request_method=\"POST\", request_param=AddEmailForm.__params__)\n def add_email(self):\n form = AddEmailForm(\n self.request.POST,\n user_service=self.user_service,\n user_id=self.request.user.id,\n )\n\n if form.validate():\n email = self.user_service.add_email(self.request.user.id, form.email.data)\n\n send_email_verification_email(self.request, (self.request.user, email))\n\n self.request.session.flash(\n f\"Email {email.email} added - check your email for \"\n + \"a verification link\",\n queue=\"success\",\n )\n return self.default_response\n\n return {**self.default_response, \"add_email_form\": form}\n\n @view_config(request_method=\"POST\", request_param=[\"delete_email_id\"])\n def delete_email(self):\n try:\n email = (\n self.request.db.query(Email)\n .filter(\n Email.id == self.request.POST[\"delete_email_id\"],\n Email.user_id == self.request.user.id,\n )\n .one()\n )\n except NoResultFound:\n self.request.session.flash(\"Email address not found\", queue=\"error\")\n return self.default_response\n\n if email.primary:\n self.request.session.flash(\n \"Cannot remove primary email address\", queue=\"error\"\n )\n else:\n self.request.user.emails.remove(email)\n self.request.session.flash(\n f\"Email address {email.email} removed\", queue=\"success\"\n )\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=[\"primary_email_id\"])\n def change_primary_email(self):\n previous_primary_email = self.request.user.primary_email\n try:\n new_primary_email = (\n self.request.db.query(Email)\n .filter(\n Email.user_id == self.request.user.id,\n Email.id == self.request.POST[\"primary_email_id\"],\n Email.verified.is_(True),\n )\n .one()\n )\n except NoResultFound:\n self.request.session.flash(\"Email address not found\", queue=\"error\")\n return self.default_response\n\n self.request.db.query(Email).filter(\n Email.user_id == self.request.user.id, Email.primary.is_(True)\n ).update(values={\"primary\": False})\n\n new_primary_email.primary = True\n\n self.request.session.flash(\n f\"Email address {new_primary_email.email} set as primary\", queue=\"success\"\n )\n\n if previous_primary_email is not None:\n send_primary_email_change_email(\n self.request, (self.request.user, previous_primary_email)\n )\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=[\"reverify_email_id\"])\n def reverify_email(self):\n try:\n email = (\n self.request.db.query(Email)\n .filter(\n Email.id == self.request.POST[\"reverify_email_id\"],\n Email.user_id == self.request.user.id,\n )\n .one()\n )\n except NoResultFound:\n self.request.session.flash(\"Email address not found\", queue=\"error\")\n return self.default_response\n\n if email.verified:\n self.request.session.flash(\"Email is already verified\", 
queue=\"error\")\n else:\n send_email_verification_email(self.request, (self.request.user, email))\n\n self.request.session.flash(\n f\"Verification email for {email.email} resent\", queue=\"success\"\n )\n\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=ChangePasswordForm.__params__)\n def change_password(self):\n form = ChangePasswordForm(\n **self.request.POST,\n username=self.request.user.username,\n full_name=self.request.user.name,\n email=self.request.user.email,\n user_service=self.user_service,\n breach_service=self.breach_service,\n check_password_metrics_tags=[\"method:new_password\"],\n )\n\n if form.validate():\n self.user_service.update_user(\n self.request.user.id, password=form.new_password.data\n )\n send_password_change_email(self.request, self.request.user)\n self.request.session.flash(\"Password updated\", queue=\"success\")\n\n return {**self.default_response, \"change_password_form\": form}\n\n @view_config(request_method=\"POST\", request_param=[\"confirm_username\"])\n def delete_account(self):\n username = self.request.params.get(\"confirm_username\")\n\n if not username:\n self.request.session.flash(\"Confirm the request\", queue=\"error\")\n return self.default_response\n\n if username != self.request.user.username:\n self.request.session.flash(\n f\"Could not delete account - {username!r} is not the same as \"\n f\"{self.request.user.username!r}\",\n queue=\"error\",\n )\n return self.default_response\n\n if self.active_projects:\n self.request.session.flash(\n \"Cannot delete account with active project ownerships\", queue=\"error\"\n )\n return self.default_response\n\n # Update all journals to point to `deleted-user` instead\n deleted_user = (\n self.request.db.query(User).filter(User.username == \"deleted-user\").one()\n )\n\n journals = (\n self.request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.submitted_by == self.request.user)\n .all()\n )\n\n for journal in journals:\n journal.submitted_by = deleted_user\n\n # Send a notification email\n send_account_deletion_email(self.request, self.request.user)\n\n # Actually delete the user\n self.request.db.delete(self.request.user)\n\n return logout(self.request)\n\n\n@view_defaults(\n route_name=\"manage.account.totp-provision\",\n renderer=\"manage/account/totp-provision.html\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n permission=\"manage:user\",\n http_cache=0,\n)\nclass ProvisionTOTPViews:\n def __init__(self, request):\n self.request = request\n self.user_service = request.find_service(IUserService, context=None)\n\n @property\n def default_response(self):\n totp_secret = self.request.session.get_totp_secret()\n return {\n \"provision_totp_secret\": base64.b32encode(totp_secret).decode(),\n \"provision_totp_form\": ProvisionTOTPForm(totp_secret=totp_secret),\n \"provision_totp_uri\": otp.generate_totp_provisioning_uri(\n totp_secret,\n self.request.user.username,\n issuer_name=self.request.registry.settings[\"site.name\"],\n ),\n }\n\n @view_config(route_name=\"manage.account.totp-provision.image\", request_method=\"GET\")\n def generate_totp_qr(self):\n if not self.request.user.has_primary_verified_email:\n self.request.session.flash(\n \"Verify your email to modify two factor authentication\", queue=\"error\"\n )\n return Response(status=403)\n\n totp_secret = self.user_service.get_totp_secret(self.request.user.id)\n if totp_secret:\n return Response(status=403)\n\n totp_qr = 
pyqrcode.create(self.default_response[\"provision_totp_uri\"])\n qr_buffer = io.BytesIO()\n totp_qr.svg(qr_buffer, scale=5)\n\n return Response(content_type=\"image/svg+xml\", body=qr_buffer.getvalue())\n\n @view_config(request_method=\"GET\")\n def totp_provision(self):\n if not self.request.user.has_primary_verified_email:\n self.request.session.flash(\n \"Verify your email to modify two factor authentication\", queue=\"error\"\n )\n return Response(status=403)\n\n totp_secret = self.user_service.get_totp_secret(self.request.user.id)\n if totp_secret:\n self.request.session.flash(\n \"Account cannot be linked to more than one authentication \"\n \"application at a time\",\n queue=\"error\",\n )\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=ProvisionTOTPForm.__params__)\n def validate_totp_provision(self):\n if not self.request.user.has_primary_verified_email:\n self.request.session.flash(\n \"Verify your email to modify two factor authentication\", queue=\"error\"\n )\n return Response(status=403)\n\n totp_secret = self.user_service.get_totp_secret(self.request.user.id)\n if totp_secret:\n self.request.session.flash(\n \"Account cannot be linked to more than one authentication \"\n \"application at a time\",\n queue=\"error\",\n )\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n form = ProvisionTOTPForm(\n **self.request.POST, totp_secret=self.request.session.get_totp_secret()\n )\n\n if form.validate():\n self.user_service.update_user(\n self.request.user.id, totp_secret=self.request.session.get_totp_secret()\n )\n\n self.request.session.clear_totp_secret()\n self.request.session.flash(\n \"Authentication application successfully set up\", queue=\"success\"\n )\n\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n return {**self.default_response, \"provision_totp_form\": form}\n\n @view_config(request_method=\"POST\", request_param=DeleteTOTPForm.__params__)\n def delete_totp(self):\n if not self.request.user.has_primary_verified_email:\n self.request.session.flash(\n \"Verify your email to modify two factor authentication\", queue=\"error\"\n )\n return Response(status=403)\n\n totp_secret = self.user_service.get_totp_secret(self.request.user.id)\n if not totp_secret:\n self.request.session.flash(\n \"There is no authentication application to delete\", queue=\"error\"\n )\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n form = DeleteTOTPForm(\n **self.request.POST,\n username=self.request.user.username,\n user_service=self.user_service,\n )\n\n if form.validate():\n self.user_service.update_user(self.request.user.id, totp_secret=None)\n self.request.session.flash(\n \"Authentication application removed from PyPI. 
\"\n \"Remember to remove PyPI from your application.\",\n queue=\"success\",\n )\n else:\n self.request.session.flash(\"Invalid credentials\", queue=\"error\")\n\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n\n@view_defaults(\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n permission=\"manage:user\",\n http_cache=0,\n)\nclass ProvisionWebAuthnViews:\n def __init__(self, request):\n self.request = request\n self.user_service = request.find_service(IUserService, context=None)\n\n @view_config(\n request_method=\"GET\",\n route_name=\"manage.account.webauthn-provision\",\n renderer=\"manage/account/webauthn-provision.html\",\n )\n def webauthn_provision(self):\n return {}\n\n @view_config(\n request_method=\"GET\",\n route_name=\"manage.account.webauthn-provision.options\",\n renderer=\"json\",\n )\n def webauthn_provision_options(self):\n return self.user_service.get_webauthn_credential_options(\n self.request.user.id,\n challenge=self.request.session.get_webauthn_challenge(),\n rp_name=self.request.registry.settings[\"site.name\"],\n rp_id=self.request.domain,\n icon_url=self.request.registry.settings.get(\n \"warehouse.domain\", self.request.domain\n ),\n )\n\n @view_config(\n request_method=\"POST\",\n request_param=ProvisionWebAuthnForm.__params__,\n route_name=\"manage.account.webauthn-provision.validate\",\n renderer=\"json\",\n )\n def validate_webauthn_provision(self):\n form = ProvisionWebAuthnForm(\n **self.request.POST,\n user_service=self.user_service,\n user_id=self.request.user.id,\n challenge=self.request.session.get_webauthn_challenge(),\n rp_id=self.request.domain,\n origin=self.request.host_url,\n )\n\n self.request.session.clear_webauthn_challenge()\n\n if form.validate():\n self.user_service.add_webauthn(\n self.request.user.id,\n label=form.label.data,\n credential_id=form.validated_credential.credential_id.decode(),\n public_key=form.validated_credential.public_key.decode(),\n sign_count=form.validated_credential.sign_count,\n )\n self.request.session.flash(\n \"Security device successfully set up\", queue=\"success\"\n )\n return {\"success\": \"Security device successfully set up\"}\n\n errors = [\n str(error) for error_list in form.errors.values() for error in error_list\n ]\n return {\"fail\": {\"errors\": errors}}\n\n @view_config(\n request_method=\"POST\",\n request_param=DeleteWebAuthnForm.__params__,\n route_name=\"manage.account.webauthn-provision.delete\",\n )\n def delete_webauthn(self):\n if len(self.request.user.webauthn) == 0:\n self.request.session.flash(\n \"There is no security device to delete\", queue=\"error\"\n )\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n form = DeleteWebAuthnForm(\n **self.request.POST,\n username=self.request.user.username,\n user_service=self.user_service,\n user_id=self.request.user.id,\n )\n\n if form.validate():\n self.request.user.webauthn.remove(form.webauthn)\n self.request.session.flash(\"Security device removed\", queue=\"success\")\n else:\n self.request.session.flash(\"Invalid credentials\", queue=\"error\")\n\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n\n@view_defaults(\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n permission=\"manage:user\",\n renderer=\"manage/token.html\",\n route_name=\"manage.account.token\",\n)\nclass ProvisionMacaroonViews:\n def __init__(self, request):\n self.request = request\n self.user_service = request.find_service(IUserService, context=None)\n self.macaroon_service = 
request.find_service(IMacaroonService, context=None)\n\n @property\n def project_names(self):\n return sorted(project.name for project in self.request.user.projects)\n\n @property\n def default_response(self):\n return {\n \"project_names\": self.project_names,\n \"create_macaroon_form\": CreateMacaroonForm(\n user_id=self.request.user.id,\n macaroon_service=self.macaroon_service,\n project_names=self.project_names,\n ),\n \"delete_macaroon_form\": DeleteMacaroonForm(\n macaroon_service=self.macaroon_service\n ),\n }\n\n @view_config(request_method=\"GET\")\n def manage_macaroons(self):\n return self.default_response\n\n @view_config(request_method=\"POST\", request_param=CreateMacaroonForm.__params__)\n def create_macaroon(self):\n if not self.request.user.has_primary_verified_email:\n self.request.session.flash(\n \"Verify your email to create an API token.\", queue=\"error\"\n )\n return HTTPSeeOther(self.request.route_path(\"manage.account\"))\n\n form = CreateMacaroonForm(\n **self.request.POST,\n user_id=self.request.user.id,\n macaroon_service=self.macaroon_service,\n project_names=self.project_names,\n )\n\n response = {**self.default_response}\n if form.validate():\n serialized_macaroon, macaroon = self.macaroon_service.create_macaroon(\n location=self.request.domain,\n user_id=self.request.user.id,\n description=form.description.data,\n caveats={\"permissions\": form.validated_scope, \"version\": 1},\n )\n response.update(serialized_macaroon=serialized_macaroon, macaroon=macaroon)\n\n return {**response, \"create_macaroon_form\": form}\n\n @view_config(request_method=\"POST\", request_param=DeleteMacaroonForm.__params__)\n def delete_macaroon(self):\n form = DeleteMacaroonForm(\n **self.request.POST, macaroon_service=self.macaroon_service\n )\n\n if form.validate():\n description = self.macaroon_service.find_macaroon(\n form.macaroon_id.data\n ).description\n self.macaroon_service.delete_macaroon(form.macaroon_id.data)\n self.request.session.flash(\n f\"Deleted API token '{description}'.\", queue=\"success\"\n )\n\n redirect_to = self.request.referer\n if not is_safe_url(redirect_to, host=self.request.host):\n redirect_to = self.request.route_path(\"manage.account\")\n return HTTPSeeOther(redirect_to)\n\n\n@view_config(\n route_name=\"manage.projects\",\n renderer=\"manage/projects.html\",\n uses_session=True,\n permission=\"manage:user\",\n)\ndef manage_projects(request):\n def _key(project):\n if project.releases:\n return project.releases[0].created\n return project.created\n\n all_user_projects = user_projects(request)\n projects_owned = set(\n project.name for project in all_user_projects[\"projects_owned\"]\n )\n projects_sole_owned = set(\n project.name for project in all_user_projects[\"projects_sole_owned\"]\n )\n\n return {\n \"projects\": sorted(request.user.projects, key=_key, reverse=True),\n \"projects_owned\": projects_owned,\n \"projects_sole_owned\": projects_sole_owned,\n }\n\n\n@view_config(\n route_name=\"manage.project.settings\",\n context=Project,\n renderer=\"manage/settings.html\",\n uses_session=True,\n permission=\"manage:project\",\n)\ndef manage_project_settings(project, request):\n return {\"project\": project}\n\n\n@view_config(\n route_name=\"manage.project.delete_project\",\n context=Project,\n uses_session=True,\n require_methods=[\"POST\"],\n permission=\"manage:project\",\n)\ndef delete_project(project, request):\n confirm_project(project, request, fail_route=\"manage.project.settings\")\n remove_project(project, request)\n\n return 
HTTPSeeOther(request.route_path(\"manage.projects\"))\n\n\n@view_config(\n route_name=\"manage.project.destroy_docs\",\n context=Project,\n uses_session=True,\n require_methods=[\"POST\"],\n permission=\"manage:project\",\n)\ndef destroy_project_docs(project, request):\n confirm_project(project, request, fail_route=\"manage.project.documentation\")\n destroy_docs(project, request)\n\n return HTTPSeeOther(\n request.route_path(\n \"manage.project.documentation\", project_name=project.normalized_name\n )\n )\n\n\n@view_config(\n route_name=\"manage.project.releases\",\n context=Project,\n renderer=\"manage/releases.html\",\n uses_session=True,\n permission=\"manage:project\",\n)\ndef manage_project_releases(project, request):\n return {\"project\": project}\n\n\n@view_defaults(\n route_name=\"manage.project.release\",\n context=Release,\n renderer=\"manage/release.html\",\n uses_session=True,\n require_csrf=True,\n require_methods=False,\n permission=\"manage:project\",\n)\nclass ManageProjectRelease:\n def __init__(self, release, request):\n self.release = release\n self.request = request\n\n @view_config(request_method=\"GET\")\n def manage_project_release(self):\n return {\n \"project\": self.release.project,\n \"release\": self.release,\n \"files\": self.release.files.all(),\n }\n\n @view_config(request_method=\"POST\", request_param=[\"confirm_version\"])\n def delete_project_release(self):\n version = self.request.POST.get(\"confirm_version\")\n if not version:\n self.request.session.flash(\"Confirm the request\", queue=\"error\")\n return HTTPSeeOther(\n self.request.route_path(\n \"manage.project.release\",\n project_name=self.release.project.name,\n version=self.release.version,\n )\n )\n\n if version != self.release.version:\n self.request.session.flash(\n \"Could not delete release - \"\n + f\"{version!r} is not the same as {self.release.version!r}\",\n queue=\"error\",\n )\n return HTTPSeeOther(\n self.request.route_path(\n \"manage.project.release\",\n project_name=self.release.project.name,\n version=self.release.version,\n )\n )\n\n self.request.db.add(\n JournalEntry(\n name=self.release.project.name,\n action=\"remove release\",\n version=self.release.version,\n submitted_by=self.request.user,\n submitted_from=self.request.remote_addr,\n )\n )\n\n self.request.db.delete(self.release)\n\n self.request.session.flash(\n f\"Deleted release {self.release.version!r}\", queue=\"success\"\n )\n\n return HTTPSeeOther(\n self.request.route_path(\n \"manage.project.releases\", project_name=self.release.project.name\n )\n )\n\n @view_config(\n request_method=\"POST\", request_param=[\"confirm_project_name\", \"file_id\"]\n )\n def delete_project_release_file(self):\n def _error(message):\n self.request.session.flash(message, queue=\"error\")\n return HTTPSeeOther(\n self.request.route_path(\n \"manage.project.release\",\n project_name=self.release.project.name,\n version=self.release.version,\n )\n )\n\n project_name = self.request.POST.get(\"confirm_project_name\")\n\n if not project_name:\n return _error(\"Confirm the request\")\n\n try:\n release_file = (\n self.request.db.query(File)\n .filter(\n File.release == self.release,\n File.id == self.request.POST.get(\"file_id\"),\n )\n .one()\n )\n except NoResultFound:\n return _error(\"Could not find file\")\n\n if project_name != self.release.project.name:\n return _error(\n \"Could not delete file - \" + f\"{project_name!r} is not the same as \"\n f\"{self.release.project.name!r}\"\n )\n\n self.request.db.add(\n JournalEntry(\n 
name=self.release.project.name,\n action=f\"remove file {release_file.filename}\",\n version=self.release.version,\n submitted_by=self.request.user,\n submitted_from=self.request.remote_addr,\n )\n )\n\n self.request.db.delete(release_file)\n\n self.request.session.flash(\n f\"Deleted file {release_file.filename!r}\", queue=\"success\"\n )\n\n return HTTPSeeOther(\n self.request.route_path(\n \"manage.project.release\",\n project_name=self.release.project.name,\n version=self.release.version,\n )\n )\n\n\n@view_config(\n route_name=\"manage.project.roles\",\n context=Project,\n renderer=\"manage/roles.html\",\n uses_session=True,\n require_methods=False,\n permission=\"manage:project\",\n)\ndef manage_project_roles(project, request, _form_class=CreateRoleForm):\n user_service = request.find_service(IUserService, context=None)\n form = _form_class(request.POST, user_service=user_service)\n\n if request.method == \"POST\" and form.validate():\n username = form.username.data\n role_name = form.role_name.data\n userid = user_service.find_userid(username)\n user = user_service.get_user(userid)\n\n if request.db.query(\n request.db.query(Role)\n .filter(\n Role.user == user, Role.project == project, Role.role_name == role_name\n )\n .exists()\n ).scalar():\n request.session.flash(\n f\"User '{username}' already has {role_name} role for project\",\n queue=\"error\",\n )\n elif user.primary_email is None or not user.primary_email.verified:\n request.session.flash(\n f\"User '{username}' does not have a verified primary email \"\n f\"address and cannot be added as a {role_name} for project\",\n queue=\"error\",\n )\n else:\n request.db.add(\n Role(user=user, project=project, role_name=form.role_name.data)\n )\n request.db.add(\n JournalEntry(\n name=project.name,\n action=f\"add {role_name} {username}\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n\n owner_roles = (\n request.db.query(Role)\n .join(Role.user)\n .filter(Role.role_name == \"Owner\", Role.project == project)\n )\n owner_users = {owner.user for owner in owner_roles}\n\n # Don't send to the owner that added the new role\n owner_users.discard(request.user)\n\n # Don't send owners email to new user if they are now an owner\n owner_users.discard(user)\n\n send_collaborator_added_email(\n request,\n owner_users,\n user=user,\n submitter=request.user,\n project_name=project.name,\n role=form.role_name.data,\n )\n\n send_added_as_collaborator_email(\n request,\n user,\n submitter=request.user,\n project_name=project.name,\n role=form.role_name.data,\n )\n\n request.session.flash(\n f\"Added collaborator '{form.username.data}'\", queue=\"success\"\n )\n form = _form_class(user_service=user_service)\n\n roles = request.db.query(Role).join(User).filter(Role.project == project).all()\n\n # TODO: The following lines are a hack to handle multiple roles for a\n # single user and should be removed when fixing GH-2745\n roles_by_user = defaultdict(list)\n for role in roles:\n roles_by_user[role.user.username].append(role)\n\n return {\"project\": project, \"roles_by_user\": roles_by_user, \"form\": form}\n\n\n@view_config(\n route_name=\"manage.project.change_role\",\n context=Project,\n uses_session=True,\n require_methods=[\"POST\"],\n permission=\"manage:project\",\n)\ndef change_project_role(project, request, _form_class=ChangeRoleForm):\n # TODO: This view was modified to handle deleting multiple roles for a\n # single user and should be updated when fixing GH-2745\n\n form = _form_class(request.POST)\n\n if 
form.validate():\n role_ids = request.POST.getall(\"role_id\")\n\n if len(role_ids) > 1:\n # This user has more than one role, so just delete all the ones\n # that aren't what we want.\n #\n # TODO: This branch should be removed when fixing GH-2745.\n roles = (\n request.db.query(Role)\n .join(User)\n .filter(\n Role.id.in_(role_ids),\n Role.project == project,\n Role.role_name != form.role_name.data,\n )\n .all()\n )\n removing_self = any(\n role.role_name == \"Owner\" and role.user == request.user\n for role in roles\n )\n if removing_self:\n request.session.flash(\"Cannot remove yourself as Owner\", queue=\"error\")\n else:\n for role in roles:\n request.db.delete(role)\n request.db.add(\n JournalEntry(\n name=project.name,\n action=f\"remove {role.role_name} {role.user.username}\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n request.session.flash(\"Changed role\", queue=\"success\")\n else:\n # This user only has one role, so get it and change the type.\n try:\n role = (\n request.db.query(Role)\n .join(User)\n .filter(\n Role.id == request.POST.get(\"role_id\"), Role.project == project\n )\n .one()\n )\n if role.role_name == \"Owner\" and role.user == request.user:\n request.session.flash(\n \"Cannot remove yourself as Owner\", queue=\"error\"\n )\n else:\n request.db.add(\n JournalEntry(\n name=project.name,\n action=\"change {} {} to {}\".format(\n role.role_name, role.user.username, form.role_name.data\n ),\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n role.role_name = form.role_name.data\n request.session.flash(\"Changed role\", queue=\"success\")\n except NoResultFound:\n request.session.flash(\"Could not find role\", queue=\"error\")\n\n return HTTPSeeOther(\n request.route_path(\"manage.project.roles\", project_name=project.name)\n )\n\n\n@view_config(\n route_name=\"manage.project.delete_role\",\n context=Project,\n uses_session=True,\n require_methods=[\"POST\"],\n permission=\"manage:project\",\n)\ndef delete_project_role(project, request):\n # TODO: This view was modified to handle deleting multiple roles for a\n # single user and should be updated when fixing GH-2745\n\n roles = (\n request.db.query(Role)\n .join(User)\n .filter(Role.id.in_(request.POST.getall(\"role_id\")), Role.project == project)\n .all()\n )\n removing_self = any(\n role.role_name == \"Owner\" and role.user == request.user for role in roles\n )\n\n if not roles:\n request.session.flash(\"Could not find role\", queue=\"error\")\n elif removing_self:\n request.session.flash(\"Cannot remove yourself as Owner\", queue=\"error\")\n else:\n for role in roles:\n request.db.delete(role)\n request.db.add(\n JournalEntry(\n name=project.name,\n action=f\"remove {role.role_name} {role.user.username}\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n request.session.flash(\"Removed role\", queue=\"success\")\n\n return HTTPSeeOther(\n request.route_path(\"manage.project.roles\", project_name=project.name)\n )\n\n\n@view_config(\n route_name=\"manage.project.history\",\n context=Project,\n renderer=\"manage/history.html\",\n uses_session=True,\n permission=\"manage:project\",\n)\ndef manage_project_history(project, request):\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\")\n\n journals_query = (\n request.db.query(JournalEntry)\n .options(joinedload(\"submitted_by\"))\n .filter(JournalEntry.name == project.name)\n 
.order_by(JournalEntry.submitted_date.desc(), JournalEntry.id.desc())\n )\n\n journals = SQLAlchemyORMPage(\n journals_query,\n page=page_num,\n items_per_page=25,\n url_maker=paginate_url_factory(request),\n )\n\n if journals.page_count and page_num > journals.page_count:\n raise HTTPNotFound\n\n return {\"project\": project, \"journals\": journals}\n\n\n@view_config(\n route_name=\"manage.project.documentation\",\n context=Project,\n renderer=\"manage/documentation.html\",\n uses_session=True,\n permission=\"manage:project\",\n)\ndef manage_project_documentation(project, request):\n return {\"project\": project}\n", "path": "warehouse/manage/views.py" } ]
diff --git a/tests/unit/manage/test_views.py b/tests/unit/manage/test_views.py index 876b2979ca53..ddcdf1c33be3 100644 --- a/tests/unit/manage/test_views.py +++ b/tests/unit/manage/test_views.py @@ -1363,6 +1363,41 @@ def test_default_response(self, monkeypatch): "delete_macaroon_form": delete_macaroon_obj, } + def test_project_names(self, db_request): + user = UserFactory.create() + another_user = UserFactory.create() + + db_request.user = user + db_request.find_service = lambda *a, **kw: pretend.stub() + + # A project with a sole owner that is the user + with_sole_owner = ProjectFactory.create(name="foo") + RoleFactory.create(user=user, project=with_sole_owner, role_name="Owner") + RoleFactory.create( + user=another_user, project=with_sole_owner, role_name="Maintainer" + ) + + # A project with multiple owners, including the user + with_multiple_owners = ProjectFactory.create(name="bar") + RoleFactory.create(user=user, project=with_multiple_owners, role_name="Owner") + RoleFactory.create( + user=another_user, project=with_multiple_owners, role_name="Owner" + ) + + # A project with a sole owner that is not the user + not_an_owner = ProjectFactory.create(name="baz") + RoleFactory.create(user=user, project=not_an_owner, role_name="Maintainer") + RoleFactory.create(user=another_user, project=not_an_owner, role_name="Owner") + + # A project that the user is neither owner nor maintainer of + neither_owner_nor_maintainer = ProjectFactory.create(name="quux") + RoleFactory.create( + user=another_user, project=neither_owner_nor_maintainer, role_name="Owner" + ) + + view = views.ProvisionMacaroonViews(db_request) + assert set(view.project_names) == {"foo", "bar", "baz"} + def test_manage_macaroons(self, monkeypatch): request = pretend.stub(find_service=lambda *a, **kw: pretend.stub()) @@ -1412,10 +1447,10 @@ def test_create_macaroon_invalid_form(self, monkeypatch): ) monkeypatch.setattr(views, "CreateMacaroonForm", create_macaroon_cls) - user_projects = pretend.call_recorder( - lambda r: {"projects_owned": [pretend.stub(name=pretend.stub())]} + project_names = [pretend.stub()] + monkeypatch.setattr( + views.ProvisionMacaroonViews, "project_names", project_names ) - monkeypatch.setattr(views, "user_projects", user_projects) default_response = {"default": "response"} monkeypatch.setattr( @@ -1458,11 +1493,10 @@ def test_create_macaroon(self, monkeypatch): ) monkeypatch.setattr(views, "CreateMacaroonForm", create_macaroon_cls) - project_name = pretend.stub() - user_projects = pretend.call_recorder( - lambda r: {"projects_owned": [pretend.stub(name=project_name)]} + project_names = [pretend.stub()] + monkeypatch.setattr( + views.ProvisionMacaroonViews, "project_names", project_names ) - monkeypatch.setattr(views, "user_projects", user_projects) default_response = {"default": "response"} monkeypatch.setattr( diff --git a/warehouse/manage/views.py b/warehouse/manage/views.py index 7e1f5e1dd63e..2b523c493abf 100644 --- a/warehouse/manage/views.py +++ b/warehouse/manage/views.py @@ -559,8 +559,7 @@ def __init__(self, request): @property def project_names(self): - projects = user_projects(self.request)["projects_owned"] - return [project.name for project in projects] + return sorted(project.name for project in self.request.user.projects) @property def default_response(self): diff --git a/warehouse/templates/manage/account.html b/warehouse/templates/manage/account.html index a19d0ddfd330..63a17b646429 100644 --- a/warehouse/templates/manage/account.html +++ b/warehouse/templates/manage/account.html @@ -152,7 
+152,7 @@ All projects {% else %} {% for project in macaroon.caveats.get("permissions")['projects'] %} - <a href="{{ request.route_path('manage.project.releases', project_name=project) }}">{{ project }}</a> + <a href="{{ request.route_path('packaging.project', name=project) }}">{{ project }}</a> {% endfor %} {% endif %} </td> diff --git a/warehouse/templates/manage/token.html b/warehouse/templates/manage/token.html index daecf4e56425..4b0cfcb6ca20 100644 --- a/warehouse/templates/manage/token.html +++ b/warehouse/templates/manage/token.html @@ -88,6 +88,7 @@ <h2>Add another token</h2> <label for="token_scope" class="form-group__label">Scope</label> <select name="token_scope" id="token_scope" class="form-group__input" aria-describedby="token_scope-errors"> <option disabled selected value="scope:unspecified">Select scope...</option> + <option value="scope:user">Entire account (all projects)</option> {% for project in project_names %} <option value="scope:project:{{ project }}">Project: {{ project }}</option> {% endfor %}
strawberry-graphql__strawberry-1994
Postponed annotation evaluation causes `Annotated` to break

When using postponed annotation evaluation, annotating resolver arguments no longer works:

```python
from __future__ import annotations

import random
from typing import Annotated

import strawberry


@strawberry.type
class Query:
    @strawberry.field
    def dice_roll(
        self,
        sides: Annotated[
            int,
            strawberry.argument(description="Number of sides the die should have."),
        ] = 6,
    ) -> int:
        return random.randint(1, sides)


strawberry.Schema(query=Query)
```

The example above raises this TypeError:

```
TypeError: Query fields cannot be resolved. Unexpected type 'typing.Annotated[int, <strawberry.arguments.StrawberryArgumentAnnotation object at 0x7fd12e130d00>]'
```

When the first line (`from __future__ import annotations`) is left out, everything works as intended. This will probably also break once Python 3.11 lands, since the behavior will become mandatory then. #1586 refers to a somewhat related issue.
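Editorial note, not part of the original report: the failure comes from how postponed evaluation stores annotations. With `from __future__ import annotations`, every annotation is kept as its source string, so code that reads `__annotations__` directly sees `'Annotated[int, ...]'` rather than a real `Annotated` object and has to resolve it itself, for example via `typing.get_type_hints(..., include_extras=True)`. The sketch below uses a made-up `Marker` class as a stand-in for `strawberry.argument(...)`:

```python
from __future__ import annotations

from typing import Annotated, get_type_hints


class Marker:
    """Stand-in for strawberry.argument(...) metadata (hypothetical)."""


def dice_roll(sides: Annotated[int, Marker()] = 6) -> int:
    return sides


# With postponed evaluation the raw annotation is just the source string:
print(dice_roll.__annotations__["sides"])   # "Annotated[int, Marker()]"

# Resolving it explicitly recovers the Annotated form, metadata included:
hints = get_type_hints(dice_roll, include_extras=True)
print(hints["sides"])                       # typing.Annotated[int, <Marker object ...>]
```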
[ { "content": "from __future__ import annotations\n\nfrom typing import Any, Optional, Union, cast\n\nfrom typing_extensions import Annotated, get_args, get_origin\n\nfrom strawberry.type import StrawberryType\n\nfrom .annotation import StrawberryAnnotation\n\n\nclass StrawberryAutoMeta(type):\n \"\"\"Metaclass for StrawberryAuto.\n\n This is used to make sure StrawberryAuto is a singleton and also to\n override the behavior of `isinstance` so that it consider the following\n cases:\n\n >> isinstance(StrawberryAuto(), StrawberryAuto)\n True\n >> isinstance(StrawberryAnnotation(StrawberryAuto()), StrawberryAuto)\n True\n >> isinstance(Annotated[StrawberryAuto(), object()), StrawberryAuto)\n True\n\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n self._instance: Optional[StrawberryAuto] = None\n super().__init__(*args, **kwargs)\n\n def __call__(cls, *args, **kwargs):\n if cls._instance is None:\n cls._instance = super().__call__(*args, **kwargs)\n\n return cls._instance\n\n def __instancecheck__(\n self,\n instance: Union[StrawberryAuto, StrawberryAnnotation, StrawberryType, type],\n ):\n if isinstance(instance, StrawberryAnnotation):\n resolved = instance.annotation\n if isinstance(resolved, str):\n namespace = instance.namespace\n resolved = namespace and namespace.get(resolved)\n\n if resolved is not None:\n instance = cast(type, resolved)\n\n if instance is auto:\n return True\n\n # Support uses of Annotated[auto, something()]\n if get_origin(instance) is Annotated:\n args = get_args(instance)\n if args[0] is Any:\n return any(isinstance(arg, StrawberryAuto) for arg in args[1:])\n\n return False\n\n\nclass StrawberryAuto(metaclass=StrawberryAutoMeta):\n def __str__(self):\n return \"auto\"\n\n def __repr__(self):\n return \"<auto>\"\n\n\nauto = Annotated[Any, StrawberryAuto()]\n", "path": "strawberry/auto.py" } ]
[ { "content": "from __future__ import annotations\n\nfrom typing import Any, Optional, Union, cast\n\nfrom typing_extensions import Annotated, get_args, get_origin\n\nfrom strawberry.type import StrawberryType\n\nfrom .annotation import StrawberryAnnotation\n\n\nclass StrawberryAutoMeta(type):\n \"\"\"Metaclass for StrawberryAuto.\n\n This is used to make sure StrawberryAuto is a singleton and also to\n override the behavior of `isinstance` so that it consider the following\n cases:\n\n >> isinstance(StrawberryAuto(), StrawberryAuto)\n True\n >> isinstance(StrawberryAnnotation(StrawberryAuto()), StrawberryAuto)\n True\n >> isinstance(Annotated[StrawberryAuto(), object()), StrawberryAuto)\n True\n\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n self._instance: Optional[StrawberryAuto] = None\n super().__init__(*args, **kwargs)\n\n def __call__(cls, *args, **kwargs):\n if cls._instance is None:\n cls._instance = super().__call__(*args, **kwargs)\n\n return cls._instance\n\n def __instancecheck__(\n self,\n instance: Union[StrawberryAuto, StrawberryAnnotation, StrawberryType, type],\n ):\n if isinstance(instance, StrawberryAnnotation):\n resolved = instance.annotation\n if isinstance(resolved, str):\n namespace = instance.namespace\n resolved = namespace and namespace.get(resolved)\n\n if resolved is not None:\n instance = cast(type, resolved)\n\n if instance is auto:\n return True\n\n # Support uses of Annotated[auto, something()]\n if get_origin(instance) is Annotated:\n args = get_args(instance)\n if args[0] is Any:\n return any(isinstance(arg, StrawberryAuto) for arg in args[1:])\n\n return instance == \"strawberry.auto\"\n\n\nclass StrawberryAuto(metaclass=StrawberryAutoMeta):\n def __str__(self):\n return \"auto\"\n\n def __repr__(self):\n return \"<auto>\"\n\n\nauto = Annotated[Any, StrawberryAuto()]\n", "path": "strawberry/auto.py" } ]
diff --git a/RELEASE.md b/RELEASE.md new file mode 100644 index 0000000000..d3956a9cdf --- /dev/null +++ b/RELEASE.md @@ -0,0 +1,4 @@ +Release type: patch + +This release adds an initial fix to make `strawberry.auto` work when using +`from __future__ import annotations`. diff --git a/strawberry/auto.py b/strawberry/auto.py index 66747ebbeb..9232388090 100644 --- a/strawberry/auto.py +++ b/strawberry/auto.py @@ -57,7 +57,7 @@ def __instancecheck__( if args[0] is Any: return any(isinstance(arg, StrawberryAuto) for arg in args[1:]) - return False + return instance == "strawberry.auto" class StrawberryAuto(metaclass=StrawberryAutoMeta): diff --git a/tests/experimental/pydantic/schema/test_forward_reference.py b/tests/experimental/pydantic/schema/test_forward_reference.py new file mode 100644 index 0000000000..ebc94d4b37 --- /dev/null +++ b/tests/experimental/pydantic/schema/test_forward_reference.py @@ -0,0 +1,50 @@ +from __future__ import annotations + +import textwrap +from typing import Optional + +import pydantic + +import strawberry + + +def test_auto_fields(): + global User + + class UserModel(pydantic.BaseModel): + age: int + password: Optional[str] + other: float + + @strawberry.experimental.pydantic.type(UserModel) + class User: + age: strawberry.auto + password: strawberry.auto + + @strawberry.type + class Query: + @strawberry.field + def user(self) -> User: + return User(age=1, password="ABC") + + schema = strawberry.Schema(query=Query) + + expected_schema = """ + type Query { + user: User! + } + + type User { + age: Int! + password: String + } + """ + + assert str(schema) == textwrap.dedent(expected_schema).strip() + + query = "{ user { age } }" + + result = schema.execute_sync(query) + + assert not result.errors + assert result.data["user"]["age"] == 1
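For context (my own condensed sketch, not code from the PR): the one-line change in `__instancecheck__` covers the forward-reference case because, under postponed evaluation, a field declared as `age: strawberry.auto` stores the literal string `"strawberry.auto"` in `__annotations__`, and `isinstance` consults the metaclass:

```python
class StrawberryAutoMeta(type):
    def __instancecheck__(self, instance):
        # The real implementation also handles StrawberryAnnotation and
        # Annotated[...] instances; only the new string branch is shown here.
        return instance == "strawberry.auto"


class StrawberryAuto(metaclass=StrawberryAutoMeta):
    pass


# Under `from __future__ import annotations`, `age: strawberry.auto` leaves
# this plain string in __annotations__:
annotation = "strawberry.auto"
print(isinstance(annotation, StrawberryAuto))  # True with the patch, False before it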
Rapptz__discord.py-1745
Using wait=True in webhook.execute raises AttributeError

As the title says. Here's the code (run through an eval command) and the traceback, just in case.

```py
webhook = (await ctx.channel.webhooks())[0]
msg = await webhook.execute("test", wait=True)
```

```
Traceback (most recent call last):
  File "/home/nguuuquaaa/bot/Belphegor/belphegor/admin.py", line 131, in _eval
    await func()
  File "<string>", line 3, in func
  File "/usr/local/lib/python3.6/dist-packages/discord/webhook.py", line 197, in handle_execution_response
    return Message(data=data, state=self, channel=self.webhook.channel)
  File "/usr/local/lib/python3.6/dist-packages/discord/message.py", line 213, in __init__
    self._update(channel, data)
  File "/usr/local/lib/python3.6/dist-packages/discord/message.py", line 278, in _update
    getattr(self, '_handle_%s' % handler)(data[handler])
  File "/usr/local/lib/python3.6/dist-packages/discord/message.py", line 291, in _handle_author
    self.author = self._state.store_user(author)
AttributeError: 'AsyncWebhookAdapter' object has no attribute 'store_user'
```
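Editorial note, not part of the original report: the traceback boils down to a naming mismatch. The webhook adapter is handed to `Message` as its `state`, `Message` then calls `state.store_user(...)`, but the adapter (in the `webhook.py` shown below) only defined `_store_user`; the patched version renames it to `store_user`. A minimal reproduction with simplified stand-in classes, not the real library types:

```python
class Message:
    def __init__(self, *, data, state):
        # Mirrors message.py: self.author = self._state.store_user(author)
        self.author = state.store_user(data["author"])


class BrokenAdapter:
    def _store_user(self, data):   # what WebhookAdapter defined before the fix
        return data["username"]


class FixedAdapter:
    def store_user(self, data):    # the name Message actually looks up
        return data["username"]


payload = {"author": {"username": "hook"}}
print(Message(data=payload, state=FixedAdapter()).author)  # "hook"
Message(data=payload, state=BrokenAdapter())               # AttributeError: no attribute 'store_user'
```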
[ { "content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2017 Rapptz\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\n\nimport asyncio\nimport json\nimport time\nimport re\n\nimport aiohttp\n\nfrom . import utils\nfrom .errors import InvalidArgument, HTTPException, Forbidden, NotFound\nfrom .user import BaseUser, User\n\n__all__ = ['WebhookAdapter', 'AsyncWebhookAdapter', 'RequestsWebhookAdapter', 'Webhook']\n\nclass WebhookAdapter:\n \"\"\"Base class for all webhook adapters.\n\n Attributes\n ------------\n webhook: :class:`Webhook`\n The webhook that owns this adapter.\n \"\"\"\n\n BASE = 'https://discordapp.com/api/v7'\n\n def _prepare(self, webhook):\n self._webhook_id = webhook.id\n self._webhook_token = webhook.token\n self._request_url = '{0.BASE}/webhooks/{1}/{2}'.format(self, webhook.id, webhook.token)\n self.webhook = webhook\n\n def request(self, verb, url, payload=None, multipart=None):\n \"\"\"Actually does the request.\n\n Subclasses must implement this.\n\n Parameters\n -----------\n verb: str\n The HTTP verb to use for the request.\n url: str\n The URL to send the request to. This will have\n the query parameters already added to it, if any.\n multipart: Optional[dict]\n A dict containing multipart form data to send with\n the request. 
If a filename is being uploaded, then it will\n be under a ``file`` key which will have a 3-element :class:`tuple`\n denoting ``(filename, file, content_type)``.\n payload: Optional[dict]\n The JSON to send with the request, if any.\n \"\"\"\n raise NotImplementedError()\n\n def delete_webhook(self):\n return self.request('DELETE', self._request_url)\n\n def edit_webhook(self, **payload):\n return self.request('PATCH', self._request_url, payload=payload)\n\n def handle_execution_response(self, data, *, wait):\n \"\"\"Transforms the webhook execution response into something\n more meaningful.\n\n This is mainly used to convert the data into a :class:`Message`\n if necessary.\n\n Subclasses must implement this.\n\n Parameters\n ------------\n data\n The data that was returned from the request.\n wait: bool\n Whether the webhook execution was asked to wait or not.\n \"\"\"\n raise NotImplementedError()\n\n def _store_user(self, data):\n # mocks a ConnectionState for appropriate use for Message\n return BaseUser(state=self, data=data)\n\n def execute_webhook(self, *, payload, wait=False, file=None, files=None):\n if file is not None:\n multipart = {\n 'file': file,\n 'payload_json': utils.to_json(payload)\n }\n data = None\n elif files is not None:\n multipart = {\n 'payload_json': utils.to_json(payload)\n }\n for i, file in enumerate(files, start=1):\n multipart['file%i' % i] = file\n data = None\n else:\n data = payload\n multipart = None\n\n url = '%s?wait=%d' % (self._request_url, wait)\n maybe_coro = self.request('POST', url, multipart=multipart, payload=data)\n return self.handle_execution_response(maybe_coro, wait=wait)\n\nclass AsyncWebhookAdapter(WebhookAdapter):\n \"\"\"A webhook adapter suited for use with aiohttp.\n\n .. note::\n\n You are responsible for cleaning up the client session.\n\n Parameters\n -----------\n session: aiohttp.ClientSession\n The session to use to send requests.\n \"\"\"\n\n def __init__(self, session):\n self.session = session\n self.loop = session.loop\n\n async def request(self, verb, url, payload=None, multipart=None):\n headers = {}\n data = None\n if payload:\n headers['Content-Type'] = 'application/json'\n data = utils.to_json(payload)\n\n if multipart:\n data = aiohttp.FormData()\n for key, value in multipart.items():\n if key.startswith('file'):\n data.add_field(key, value[1], filename=value[0], content_type=value[2])\n else:\n data.add_field(key, value)\n\n for tries in range(5):\n async with self.session.request(verb, url, headers=headers, data=data) as r:\n data = await r.text(encoding='utf-8')\n if r.headers['Content-Type'] == 'application/json':\n data = json.loads(data)\n\n # check if we have rate limit header information\n remaining = r.headers.get('X-Ratelimit-Remaining')\n if remaining == '0' and r.status != 429:\n delta = utils._parse_ratelimit_header(r)\n await asyncio.sleep(delta, loop=self.loop)\n\n if 300 > r.status >= 200:\n return data\n\n # we are being rate limited\n if r.status == 429:\n retry_after = data['retry_after'] / 1000.0\n await asyncio.sleep(retry_after, loop=self.loop)\n continue\n\n if r.status in (500, 502):\n await asyncio.sleep(1 + tries * 2, loop=self.loop)\n continue\n\n if r.status == 403:\n raise Forbidden(r, data)\n elif r.status == 404:\n raise NotFound(r, data)\n else:\n raise HTTPException(r, data)\n\n async def handle_execution_response(self, response, *, wait):\n data = await response\n if not wait:\n return data\n\n # transform into Message object\n from .message import Message\n return Message(data=data, 
state=self, channel=self.webhook.channel)\n\nclass RequestsWebhookAdapter(WebhookAdapter):\n \"\"\"A webhook adapter suited for use with ``requests``.\n\n Only versions of requests higher than 2.13.0 are supported.\n\n Parameters\n -----------\n session: Optional[`requests.Session <http://docs.python-requests.org/en/latest/api/#requests.Session>`_]\n The requests session to use for sending requests. If not given then\n each request will create a new session. Note if a session is given,\n the webhook adapter **will not** clean it up for you. You must close\n the session yourself.\n sleep: bool\n Whether to sleep the thread when encountering a 429 or pre-emptive\n rate limit or a 5xx status code. Defaults to ``True``. If set to\n ``False`` then this will raise an :exc:`HTTPException` instead.\n \"\"\"\n\n def __init__(self, session=None, *, sleep=True):\n import requests\n self.session = session or requests\n self.sleep = sleep\n\n def request(self, verb, url, payload=None, multipart=None):\n headers = {}\n data = None\n if payload:\n headers['Content-Type'] = 'application/json'\n data = utils.to_json(payload)\n\n if multipart is not None:\n data = {'payload_json': multipart.pop('payload_json')}\n\n for tries in range(5):\n r = self.session.request(verb, url, headers=headers, data=data, files=multipart)\n r.encoding = 'utf-8'\n data = r.text\n\n # compatibility with aiohttp\n r.status = r.status_code\n\n if r.headers['Content-Type'] == 'application/json':\n data = json.loads(data)\n\n # check if we have rate limit header information\n remaining = r.headers.get('X-Ratelimit-Remaining')\n if remaining == '0' and r.status != 429 and self.sleep:\n delta = utils._parse_ratelimit_header(r)\n time.sleep(delta)\n\n if 300 > r.status >= 200:\n return data\n\n # we are being rate limited\n if r.status == 429:\n if self.sleep:\n retry_after = data['retry_after'] / 1000.0\n time.sleep(retry_after)\n continue\n else:\n raise HTTPException(r, data)\n\n if self.sleep and r.status in (500, 502):\n time.sleep(1 + tries * 2)\n continue\n\n if r.status == 403:\n raise Forbidden(r, data)\n elif r.status == 404:\n raise NotFound(r, data)\n else:\n raise HTTPException(r, data)\n\n def handle_execution_response(self, response, *, wait):\n if not wait:\n return response\n\n # transform into Message object\n from .message import Message\n return Message(data=response, state=self, channel=self.webhook.channel)\n\nclass Webhook:\n \"\"\"Represents a Discord webhook.\n\n Webhooks are a form to send messages to channels in Discord without a\n bot user or authentication.\n\n There are two main ways to use Webhooks. The first is through the ones\n received by the library such as :meth:`.Guild.webhooks` and\n :meth:`.TextChannel.webhooks`. The ones received by the library will\n automatically have an adapter bound using the library's HTTP session.\n Those webhooks will have :meth:`~.Webhook.send`, :meth:`~.Webhook.delete` and\n :meth:`~.Webhook.edit` as coroutines.\n\n The second form involves creating a webhook object manually without having\n it bound to a websocket connection using the :meth:`~.Webhook.from_url` or\n :meth:`~.Webhook.partial` classmethods. This form allows finer grained control\n over how requests are done, allowing you to mix async and sync code using either\n ``aiohttp`` or ``requests``.\n\n For example, creating a webhook from a URL and using ``aiohttp``:\n\n .. 
code-block:: python3\n\n from discord import Webhook, AsyncWebhookAdapter\n import aiohttp\n\n async def foo():\n async with aiohttp.ClientSession() as session:\n webhook = Webhook.from_url('url-here', adapter=AsyncWebhookAdapter(session))\n await webhook.send('Hello World', username='Foo')\n\n Or creating a webhook from an ID and token and using ``requests``:\n\n .. code-block:: python3\n\n import requests\n from discord import Webhook, RequestsWebhookAdapter\n\n webhook = Webhook.partial(123456, 'abcdefg', adapter=RequestsWebhookAdapter())\n webhook.send('Hello World', username='Foo')\n\n Attributes\n ------------\n id: :class:`int`\n The webhook's ID\n token: :class:`str`\n The authentication token of the webhook.\n guild_id: Optional[:class:`int`]\n The guild ID this webhook is for.\n channel_id: Optional[:class:`int`]\n The channel ID this webhook is for.\n user: Optional[:class:`abc.User`]\n The user this webhook was created by. If the webhook was\n received without authentication then this will be ``None``.\n name: Optional[:class:`str`]\n The default name of the webhook.\n avatar: Optional[:class:`str`]\n The default avatar of the webhook.\n \"\"\"\n\n __slots__ = ('id', 'guild_id', 'channel_id', 'user', 'name', 'avatar',\n 'token', '_state', '_adapter')\n\n def __init__(self, data, *, adapter, state=None):\n self.id = int(data['id'])\n self.channel_id = utils._get_as_snowflake(data, 'channel_id')\n self.guild_id = utils._get_as_snowflake(data, 'guild_id')\n self.name = data.get('name')\n self.avatar = data.get('avatar')\n self.token = data['token']\n self._state = state\n self._adapter = adapter\n self._adapter._prepare(self)\n\n user = data.get('user')\n if user is None:\n self.user = None\n elif state is None:\n self.user = BaseUser(state=None, data=user)\n else:\n self.user = User(state=state, data=user)\n\n def __repr__(self):\n return '<Webhook id=%r>' % self.id\n\n @property\n def url(self):\n \"\"\"Returns the webhook's url.\"\"\"\n return 'https://discordapp.com/api/webhooks/{}/{}'.format(self.id, self.token)\n\n @classmethod\n def partial(cls, id, token, *, adapter):\n \"\"\"Creates a partial :class:`Webhook`.\n\n A partial webhook is just a webhook object with an ID and a token.\n\n Parameters\n -----------\n id: int\n The ID of the webhook.\n token: str\n The authentication token of the webhook.\n adapter: :class:`WebhookAdapter`\n The webhook adapter to use when sending requests. This is\n typically :class:`AsyncWebhookAdapter` for ``aiohttp`` or\n :class:`RequestsWebhookAdapter` for ``requests``.\n \"\"\"\n\n if not isinstance(adapter, WebhookAdapter):\n raise TypeError('adapter must be a subclass of WebhookAdapter')\n\n data = {\n 'id': id,\n 'token': token\n }\n\n return cls(data, adapter=adapter)\n\n @classmethod\n def from_url(cls, url, *, adapter):\n \"\"\"Creates a partial :class:`Webhook` from a webhook URL.\n\n Parameters\n ------------\n url: str\n The URL of the webhook.\n adapter: :class:`WebhookAdapter`\n The webhook adapter to use when sending requests. 
This is\n typically :class:`AsyncWebhookAdapter` for ``aiohttp`` or\n :class:`RequestsWebhookAdapter` for ``requests``.\n\n Raises\n -------\n InvalidArgument\n The URL is invalid.\n \"\"\"\n\n m = re.search(r'discordapp.com/api/webhooks/(?P<id>[0-9]{17,21})/(?P<token>[A-Za-z0-9\\.\\-\\_]{60,68})', url)\n if m is None:\n raise InvalidArgument('Invalid webhook URL given.')\n return cls(m.groupdict(), adapter=adapter)\n\n @classmethod\n def from_state(cls, data, state):\n return cls(data, adapter=AsyncWebhookAdapter(session=state.http._session), state=state)\n\n @property\n def guild(self):\n \"\"\"Optional[:class:`Guild`]: The guild this webhook belongs to.\n\n If this is a partial webhook, then this will always return ``None``.\n \"\"\"\n return self._state and self._state._get_guild(self.guild_id)\n\n @property\n def channel(self):\n \"\"\"Optional[:class:`TextChannel`]: The text channel this webhook belongs to.\n\n If this is a partial webhook, then this will always return ``None``.\n \"\"\"\n guild = self.guild\n return guild and guild.get_channel(self.channel_id)\n\n @property\n def created_at(self):\n \"\"\"Returns the webhook's creation time in UTC.\"\"\"\n return utils.snowflake_time(self.id)\n\n @property\n def avatar_url(self):\n \"\"\"Returns a friendly URL version of the avatar the webhook has.\n\n If the webhook does not have a traditional avatar, their default\n avatar URL is returned instead.\n\n This is equivalent to calling :meth:`avatar_url_as` with the\n default parameters.\n \"\"\"\n return self.avatar_url_as()\n\n def avatar_url_as(self, *, format=None, size=1024):\n \"\"\"Returns a friendly URL version of the avatar the webhook has.\n\n If the webhook does not have a traditional avatar, their default\n avatar URL is returned instead.\n\n The format must be one of 'jpeg', 'jpg', or 'png'.\n The size must be a power of 2 between 16 and 1024.\n\n Parameters\n -----------\n format: Optional[str]\n The format to attempt to convert the avatar to.\n If the format is ``None``, then it is equivalent to png.\n size: int\n The size of the image to display.\n\n Returns\n --------\n str\n The resulting CDN URL.\n\n Raises\n ------\n InvalidArgument\n Bad image format passed to ``format`` or invalid ``size``.\n \"\"\"\n if self.avatar is None:\n # Default is always blurple apparently\n return 'https://cdn.discordapp.com/embed/avatars/0.png'\n\n if not utils.valid_icon_size(size):\n raise InvalidArgument(\"size must be a power of 2 between 16 and 1024\")\n\n format = format or 'png'\n\n if format not in ('png', 'jpg', 'jpeg'):\n raise InvalidArgument(\"format must be one of 'png', 'jpg', or 'jpeg'.\")\n\n return 'https://cdn.discordapp.com/avatars/{0.id}/{0.avatar}.{1}?size={2}'.format(self, format, size)\n\n def delete(self):\n \"\"\"|maybecoro|\n\n Deletes this Webhook.\n\n If the webhook is constructed with a :class:`RequestsWebhookAdapter` then this is\n not a coroutine.\n\n Raises\n -------\n HTTPException\n Deleting the webhook failed.\n NotFound\n This webhook does not exist.\n Forbidden\n You do not have permissions to delete this webhook.\n \"\"\"\n return self._adapter.delete_webhook()\n\n def edit(self, **kwargs):\n \"\"\"|maybecoro|\n\n Edits this Webhook.\n\n If the webhook is constructed with a :class:`RequestsWebhookAdapter` then this is\n not a coroutine.\n\n Parameters\n -------------\n name: Optional[str]\n The webhook's new default name.\n avatar: Optional[bytes]\n A :term:`py:bytes-like object` representing the webhook's new default avatar.\n\n Raises\n -------\n 
HTTPException\n Editing the webhook failed.\n NotFound\n This webhook does not exist.\n Forbidden\n You do not have permissions to edit this webhook.\n \"\"\"\n payload = {}\n\n try:\n name = kwargs['name']\n except KeyError:\n pass\n else:\n if name is not None:\n payload['name'] = str(name)\n else:\n payload['name'] = None\n\n try:\n avatar = kwargs['avatar']\n except KeyError:\n pass\n else:\n if avatar is not None:\n payload['avatar'] = utils._bytes_to_base64_data(avatar)\n else:\n payload['avatar'] = None\n\n return self._adapter.edit_webhook(**payload)\n\n def send(self, content=None, *, wait=False, username=None, avatar_url=None, tts=False,\n file=None, files=None, embed=None, embeds=None):\n \"\"\"|maybecoro|\n\n Sends a message using the webhook.\n\n If the webhook is constructed with a :class:`RequestsWebhookAdapter` then this is\n not a coroutine.\n\n The content must be a type that can convert to a string through ``str(content)``.\n\n To upload a single file, the ``file`` parameter should be used with a\n single :class:`File` object.\n\n If the ``embed`` parameter is provided, it must be of type :class:`Embed` and\n it must be a rich embed type. You cannot mix the ``embed`` parameter with the\n ``embeds`` parameter, which must be a :class:`list` of :class:`Embed` objects to send.\n\n Parameters\n ------------\n content\n The content of the message to send.\n wait: bool\n Whether the server should wait before sending a response. This essentially\n means that the return type of this function changes from ``None`` to\n a :class:`Message` if set to ``True``.\n username: str\n The username to send with this message. If no username is provided\n then the default username for the webhook is used.\n avatar_url: str\n The avatar URL to send with this message. If no avatar URL is provided\n then the default avatar for the webhook is used.\n tts: bool\n Indicates if the message should be sent using text-to-speech.\n file: :class:`File`\n The file to upload. This cannot be mixed with ``files`` parameter.\n files: List[:class:`File`]\n A list of files to send with the content. This cannot be mixed with the\n ``file`` parameter.\n embed: :class:`Embed`\n The rich embed for the content to send. This cannot be mixed with\n ``embeds`` parameter.\n embeds: List[:class:`Embed`]\n A list of embeds to send with the content. Maximum of 10. 
This cannot\n be mixed with the ``embed`` parameter.\n\n Raises\n --------\n HTTPException\n Sending the message failed.\n NotFound\n This webhook was not found.\n Forbidden\n The authorization token for the webhook is incorrect.\n InvalidArgument\n You specified both ``embed`` and ``embeds`` or the length of\n ``embeds`` was invalid.\n\n Returns\n ---------\n Optional[:class:`Message`]\n The message that was sent.\n \"\"\"\n\n payload = {}\n\n if files is not None and file is not None:\n raise InvalidArgument('Cannot mix file and files keyword arguments.')\n if embeds is not None and embed is not None:\n raise InvalidArgument('Cannot mix embed and embeds keyword arguments.')\n\n if embeds is not None:\n if len(embeds) > 10:\n raise InvalidArgument('embeds has a maximum of 10 elements.')\n payload['embeds'] = [e.to_dict() for e in embeds]\n\n if embed is not None:\n payload['embeds'] = [embed.to_dict()]\n\n if content is not None:\n payload['content'] = str(content)\n\n payload['tts'] = tts\n if avatar_url:\n payload['avatar_url'] = avatar_url\n if username:\n payload['username'] = username\n\n if file is not None:\n try:\n to_pass = (file.filename, file.open_file(), 'application/octet-stream')\n return self._adapter.execute_webhook(wait=wait, file=to_pass, payload=payload)\n finally:\n file.close()\n elif files is not None:\n try:\n to_pass = [(file.filename, file.open_file(), 'application/octet-stream')\n for file in files]\n return self._adapter.execute_webhook(wait=wait, files=to_pass, payload=payload)\n finally:\n for file in files:\n file.close()\n else:\n return self._adapter.execute_webhook(wait=wait, payload=payload)\n\n def execute(self, *args, **kwargs):\n \"\"\"An alias for :meth:`~.Webhook.send`.\"\"\"\n return self.send(*args, **kwargs)\n", "path": "discord/webhook.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-2017 Rapptz\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\n\nimport asyncio\nimport json\nimport time\nimport re\n\nimport aiohttp\n\nfrom . import utils\nfrom .errors import InvalidArgument, HTTPException, Forbidden, NotFound\nfrom .user import BaseUser, User\n\n__all__ = ['WebhookAdapter', 'AsyncWebhookAdapter', 'RequestsWebhookAdapter', 'Webhook']\n\nclass WebhookAdapter:\n \"\"\"Base class for all webhook adapters.\n\n Attributes\n ------------\n webhook: :class:`Webhook`\n The webhook that owns this adapter.\n \"\"\"\n\n BASE = 'https://discordapp.com/api/v7'\n\n def _prepare(self, webhook):\n self._webhook_id = webhook.id\n self._webhook_token = webhook.token\n self._request_url = '{0.BASE}/webhooks/{1}/{2}'.format(self, webhook.id, webhook.token)\n self.webhook = webhook\n\n def request(self, verb, url, payload=None, multipart=None):\n \"\"\"Actually does the request.\n\n Subclasses must implement this.\n\n Parameters\n -----------\n verb: str\n The HTTP verb to use for the request.\n url: str\n The URL to send the request to. This will have\n the query parameters already added to it, if any.\n multipart: Optional[dict]\n A dict containing multipart form data to send with\n the request. 
If a filename is being uploaded, then it will\n be under a ``file`` key which will have a 3-element :class:`tuple`\n denoting ``(filename, file, content_type)``.\n payload: Optional[dict]\n The JSON to send with the request, if any.\n \"\"\"\n raise NotImplementedError()\n\n def delete_webhook(self):\n return self.request('DELETE', self._request_url)\n\n def edit_webhook(self, **payload):\n return self.request('PATCH', self._request_url, payload=payload)\n\n def handle_execution_response(self, data, *, wait):\n \"\"\"Transforms the webhook execution response into something\n more meaningful.\n\n This is mainly used to convert the data into a :class:`Message`\n if necessary.\n\n Subclasses must implement this.\n\n Parameters\n ------------\n data\n The data that was returned from the request.\n wait: bool\n Whether the webhook execution was asked to wait or not.\n \"\"\"\n raise NotImplementedError()\n\n def store_user(self, data):\n # mocks a ConnectionState for appropriate use for Message\n return BaseUser(state=self, data=data)\n\n def execute_webhook(self, *, payload, wait=False, file=None, files=None):\n if file is not None:\n multipart = {\n 'file': file,\n 'payload_json': utils.to_json(payload)\n }\n data = None\n elif files is not None:\n multipart = {\n 'payload_json': utils.to_json(payload)\n }\n for i, file in enumerate(files, start=1):\n multipart['file%i' % i] = file\n data = None\n else:\n data = payload\n multipart = None\n\n url = '%s?wait=%d' % (self._request_url, wait)\n maybe_coro = self.request('POST', url, multipart=multipart, payload=data)\n return self.handle_execution_response(maybe_coro, wait=wait)\n\nclass AsyncWebhookAdapter(WebhookAdapter):\n \"\"\"A webhook adapter suited for use with aiohttp.\n\n .. note::\n\n You are responsible for cleaning up the client session.\n\n Parameters\n -----------\n session: aiohttp.ClientSession\n The session to use to send requests.\n \"\"\"\n\n def __init__(self, session):\n self.session = session\n self.loop = session.loop\n\n async def request(self, verb, url, payload=None, multipart=None):\n headers = {}\n data = None\n if payload:\n headers['Content-Type'] = 'application/json'\n data = utils.to_json(payload)\n\n if multipart:\n data = aiohttp.FormData()\n for key, value in multipart.items():\n if key.startswith('file'):\n data.add_field(key, value[1], filename=value[0], content_type=value[2])\n else:\n data.add_field(key, value)\n\n for tries in range(5):\n async with self.session.request(verb, url, headers=headers, data=data) as r:\n data = await r.text(encoding='utf-8')\n if r.headers['Content-Type'] == 'application/json':\n data = json.loads(data)\n\n # check if we have rate limit header information\n remaining = r.headers.get('X-Ratelimit-Remaining')\n if remaining == '0' and r.status != 429:\n delta = utils._parse_ratelimit_header(r)\n await asyncio.sleep(delta, loop=self.loop)\n\n if 300 > r.status >= 200:\n return data\n\n # we are being rate limited\n if r.status == 429:\n retry_after = data['retry_after'] / 1000.0\n await asyncio.sleep(retry_after, loop=self.loop)\n continue\n\n if r.status in (500, 502):\n await asyncio.sleep(1 + tries * 2, loop=self.loop)\n continue\n\n if r.status == 403:\n raise Forbidden(r, data)\n elif r.status == 404:\n raise NotFound(r, data)\n else:\n raise HTTPException(r, data)\n\n async def handle_execution_response(self, response, *, wait):\n data = await response\n if not wait:\n return data\n\n # transform into Message object\n from .message import Message\n return Message(data=data, 
state=self, channel=self.webhook.channel)\n\nclass RequestsWebhookAdapter(WebhookAdapter):\n \"\"\"A webhook adapter suited for use with ``requests``.\n\n Only versions of requests higher than 2.13.0 are supported.\n\n Parameters\n -----------\n session: Optional[`requests.Session <http://docs.python-requests.org/en/latest/api/#requests.Session>`_]\n The requests session to use for sending requests. If not given then\n each request will create a new session. Note if a session is given,\n the webhook adapter **will not** clean it up for you. You must close\n the session yourself.\n sleep: bool\n Whether to sleep the thread when encountering a 429 or pre-emptive\n rate limit or a 5xx status code. Defaults to ``True``. If set to\n ``False`` then this will raise an :exc:`HTTPException` instead.\n \"\"\"\n\n def __init__(self, session=None, *, sleep=True):\n import requests\n self.session = session or requests\n self.sleep = sleep\n\n def request(self, verb, url, payload=None, multipart=None):\n headers = {}\n data = None\n if payload:\n headers['Content-Type'] = 'application/json'\n data = utils.to_json(payload)\n\n if multipart is not None:\n data = {'payload_json': multipart.pop('payload_json')}\n\n for tries in range(5):\n r = self.session.request(verb, url, headers=headers, data=data, files=multipart)\n r.encoding = 'utf-8'\n data = r.text\n\n # compatibility with aiohttp\n r.status = r.status_code\n\n if r.headers['Content-Type'] == 'application/json':\n data = json.loads(data)\n\n # check if we have rate limit header information\n remaining = r.headers.get('X-Ratelimit-Remaining')\n if remaining == '0' and r.status != 429 and self.sleep:\n delta = utils._parse_ratelimit_header(r)\n time.sleep(delta)\n\n if 300 > r.status >= 200:\n return data\n\n # we are being rate limited\n if r.status == 429:\n if self.sleep:\n retry_after = data['retry_after'] / 1000.0\n time.sleep(retry_after)\n continue\n else:\n raise HTTPException(r, data)\n\n if self.sleep and r.status in (500, 502):\n time.sleep(1 + tries * 2)\n continue\n\n if r.status == 403:\n raise Forbidden(r, data)\n elif r.status == 404:\n raise NotFound(r, data)\n else:\n raise HTTPException(r, data)\n\n def handle_execution_response(self, response, *, wait):\n if not wait:\n return response\n\n # transform into Message object\n from .message import Message\n return Message(data=response, state=self, channel=self.webhook.channel)\n\nclass Webhook:\n \"\"\"Represents a Discord webhook.\n\n Webhooks are a form to send messages to channels in Discord without a\n bot user or authentication.\n\n There are two main ways to use Webhooks. The first is through the ones\n received by the library such as :meth:`.Guild.webhooks` and\n :meth:`.TextChannel.webhooks`. The ones received by the library will\n automatically have an adapter bound using the library's HTTP session.\n Those webhooks will have :meth:`~.Webhook.send`, :meth:`~.Webhook.delete` and\n :meth:`~.Webhook.edit` as coroutines.\n\n The second form involves creating a webhook object manually without having\n it bound to a websocket connection using the :meth:`~.Webhook.from_url` or\n :meth:`~.Webhook.partial` classmethods. This form allows finer grained control\n over how requests are done, allowing you to mix async and sync code using either\n ``aiohttp`` or ``requests``.\n\n For example, creating a webhook from a URL and using ``aiohttp``:\n\n .. 
code-block:: python3\n\n from discord import Webhook, AsyncWebhookAdapter\n import aiohttp\n\n async def foo():\n async with aiohttp.ClientSession() as session:\n webhook = Webhook.from_url('url-here', adapter=AsyncWebhookAdapter(session))\n await webhook.send('Hello World', username='Foo')\n\n Or creating a webhook from an ID and token and using ``requests``:\n\n .. code-block:: python3\n\n import requests\n from discord import Webhook, RequestsWebhookAdapter\n\n webhook = Webhook.partial(123456, 'abcdefg', adapter=RequestsWebhookAdapter())\n webhook.send('Hello World', username='Foo')\n\n Attributes\n ------------\n id: :class:`int`\n The webhook's ID\n token: :class:`str`\n The authentication token of the webhook.\n guild_id: Optional[:class:`int`]\n The guild ID this webhook is for.\n channel_id: Optional[:class:`int`]\n The channel ID this webhook is for.\n user: Optional[:class:`abc.User`]\n The user this webhook was created by. If the webhook was\n received without authentication then this will be ``None``.\n name: Optional[:class:`str`]\n The default name of the webhook.\n avatar: Optional[:class:`str`]\n The default avatar of the webhook.\n \"\"\"\n\n __slots__ = ('id', 'guild_id', 'channel_id', 'user', 'name', 'avatar',\n 'token', '_state', '_adapter')\n\n def __init__(self, data, *, adapter, state=None):\n self.id = int(data['id'])\n self.channel_id = utils._get_as_snowflake(data, 'channel_id')\n self.guild_id = utils._get_as_snowflake(data, 'guild_id')\n self.name = data.get('name')\n self.avatar = data.get('avatar')\n self.token = data['token']\n self._state = state\n self._adapter = adapter\n self._adapter._prepare(self)\n\n user = data.get('user')\n if user is None:\n self.user = None\n elif state is None:\n self.user = BaseUser(state=None, data=user)\n else:\n self.user = User(state=state, data=user)\n\n def __repr__(self):\n return '<Webhook id=%r>' % self.id\n\n @property\n def url(self):\n \"\"\"Returns the webhook's url.\"\"\"\n return 'https://discordapp.com/api/webhooks/{}/{}'.format(self.id, self.token)\n\n @classmethod\n def partial(cls, id, token, *, adapter):\n \"\"\"Creates a partial :class:`Webhook`.\n\n A partial webhook is just a webhook object with an ID and a token.\n\n Parameters\n -----------\n id: int\n The ID of the webhook.\n token: str\n The authentication token of the webhook.\n adapter: :class:`WebhookAdapter`\n The webhook adapter to use when sending requests. This is\n typically :class:`AsyncWebhookAdapter` for ``aiohttp`` or\n :class:`RequestsWebhookAdapter` for ``requests``.\n \"\"\"\n\n if not isinstance(adapter, WebhookAdapter):\n raise TypeError('adapter must be a subclass of WebhookAdapter')\n\n data = {\n 'id': id,\n 'token': token\n }\n\n return cls(data, adapter=adapter)\n\n @classmethod\n def from_url(cls, url, *, adapter):\n \"\"\"Creates a partial :class:`Webhook` from a webhook URL.\n\n Parameters\n ------------\n url: str\n The URL of the webhook.\n adapter: :class:`WebhookAdapter`\n The webhook adapter to use when sending requests. 
This is\n typically :class:`AsyncWebhookAdapter` for ``aiohttp`` or\n :class:`RequestsWebhookAdapter` for ``requests``.\n\n Raises\n -------\n InvalidArgument\n The URL is invalid.\n \"\"\"\n\n m = re.search(r'discordapp.com/api/webhooks/(?P<id>[0-9]{17,21})/(?P<token>[A-Za-z0-9\\.\\-\\_]{60,68})', url)\n if m is None:\n raise InvalidArgument('Invalid webhook URL given.')\n return cls(m.groupdict(), adapter=adapter)\n\n @classmethod\n def from_state(cls, data, state):\n return cls(data, adapter=AsyncWebhookAdapter(session=state.http._session), state=state)\n\n @property\n def guild(self):\n \"\"\"Optional[:class:`Guild`]: The guild this webhook belongs to.\n\n If this is a partial webhook, then this will always return ``None``.\n \"\"\"\n return self._state and self._state._get_guild(self.guild_id)\n\n @property\n def channel(self):\n \"\"\"Optional[:class:`TextChannel`]: The text channel this webhook belongs to.\n\n If this is a partial webhook, then this will always return ``None``.\n \"\"\"\n guild = self.guild\n return guild and guild.get_channel(self.channel_id)\n\n @property\n def created_at(self):\n \"\"\"Returns the webhook's creation time in UTC.\"\"\"\n return utils.snowflake_time(self.id)\n\n @property\n def avatar_url(self):\n \"\"\"Returns a friendly URL version of the avatar the webhook has.\n\n If the webhook does not have a traditional avatar, their default\n avatar URL is returned instead.\n\n This is equivalent to calling :meth:`avatar_url_as` with the\n default parameters.\n \"\"\"\n return self.avatar_url_as()\n\n def avatar_url_as(self, *, format=None, size=1024):\n \"\"\"Returns a friendly URL version of the avatar the webhook has.\n\n If the webhook does not have a traditional avatar, their default\n avatar URL is returned instead.\n\n The format must be one of 'jpeg', 'jpg', or 'png'.\n The size must be a power of 2 between 16 and 1024.\n\n Parameters\n -----------\n format: Optional[str]\n The format to attempt to convert the avatar to.\n If the format is ``None``, then it is equivalent to png.\n size: int\n The size of the image to display.\n\n Returns\n --------\n str\n The resulting CDN URL.\n\n Raises\n ------\n InvalidArgument\n Bad image format passed to ``format`` or invalid ``size``.\n \"\"\"\n if self.avatar is None:\n # Default is always blurple apparently\n return 'https://cdn.discordapp.com/embed/avatars/0.png'\n\n if not utils.valid_icon_size(size):\n raise InvalidArgument(\"size must be a power of 2 between 16 and 1024\")\n\n format = format or 'png'\n\n if format not in ('png', 'jpg', 'jpeg'):\n raise InvalidArgument(\"format must be one of 'png', 'jpg', or 'jpeg'.\")\n\n return 'https://cdn.discordapp.com/avatars/{0.id}/{0.avatar}.{1}?size={2}'.format(self, format, size)\n\n def delete(self):\n \"\"\"|maybecoro|\n\n Deletes this Webhook.\n\n If the webhook is constructed with a :class:`RequestsWebhookAdapter` then this is\n not a coroutine.\n\n Raises\n -------\n HTTPException\n Deleting the webhook failed.\n NotFound\n This webhook does not exist.\n Forbidden\n You do not have permissions to delete this webhook.\n \"\"\"\n return self._adapter.delete_webhook()\n\n def edit(self, **kwargs):\n \"\"\"|maybecoro|\n\n Edits this Webhook.\n\n If the webhook is constructed with a :class:`RequestsWebhookAdapter` then this is\n not a coroutine.\n\n Parameters\n -------------\n name: Optional[str]\n The webhook's new default name.\n avatar: Optional[bytes]\n A :term:`py:bytes-like object` representing the webhook's new default avatar.\n\n Raises\n -------\n 
HTTPException\n Editing the webhook failed.\n NotFound\n This webhook does not exist.\n Forbidden\n You do not have permissions to edit this webhook.\n \"\"\"\n payload = {}\n\n try:\n name = kwargs['name']\n except KeyError:\n pass\n else:\n if name is not None:\n payload['name'] = str(name)\n else:\n payload['name'] = None\n\n try:\n avatar = kwargs['avatar']\n except KeyError:\n pass\n else:\n if avatar is not None:\n payload['avatar'] = utils._bytes_to_base64_data(avatar)\n else:\n payload['avatar'] = None\n\n return self._adapter.edit_webhook(**payload)\n\n def send(self, content=None, *, wait=False, username=None, avatar_url=None, tts=False,\n file=None, files=None, embed=None, embeds=None):\n \"\"\"|maybecoro|\n\n Sends a message using the webhook.\n\n If the webhook is constructed with a :class:`RequestsWebhookAdapter` then this is\n not a coroutine.\n\n The content must be a type that can convert to a string through ``str(content)``.\n\n To upload a single file, the ``file`` parameter should be used with a\n single :class:`File` object.\n\n If the ``embed`` parameter is provided, it must be of type :class:`Embed` and\n it must be a rich embed type. You cannot mix the ``embed`` parameter with the\n ``embeds`` parameter, which must be a :class:`list` of :class:`Embed` objects to send.\n\n Parameters\n ------------\n content\n The content of the message to send.\n wait: bool\n Whether the server should wait before sending a response. This essentially\n means that the return type of this function changes from ``None`` to\n a :class:`Message` if set to ``True``.\n username: str\n The username to send with this message. If no username is provided\n then the default username for the webhook is used.\n avatar_url: str\n The avatar URL to send with this message. If no avatar URL is provided\n then the default avatar for the webhook is used.\n tts: bool\n Indicates if the message should be sent using text-to-speech.\n file: :class:`File`\n The file to upload. This cannot be mixed with ``files`` parameter.\n files: List[:class:`File`]\n A list of files to send with the content. This cannot be mixed with the\n ``file`` parameter.\n embed: :class:`Embed`\n The rich embed for the content to send. This cannot be mixed with\n ``embeds`` parameter.\n embeds: List[:class:`Embed`]\n A list of embeds to send with the content. Maximum of 10. 
This cannot\n be mixed with the ``embed`` parameter.\n\n Raises\n --------\n HTTPException\n Sending the message failed.\n NotFound\n This webhook was not found.\n Forbidden\n The authorization token for the webhook is incorrect.\n InvalidArgument\n You specified both ``embed`` and ``embeds`` or the length of\n ``embeds`` was invalid.\n\n Returns\n ---------\n Optional[:class:`Message`]\n The message that was sent.\n \"\"\"\n\n payload = {}\n\n if files is not None and file is not None:\n raise InvalidArgument('Cannot mix file and files keyword arguments.')\n if embeds is not None and embed is not None:\n raise InvalidArgument('Cannot mix embed and embeds keyword arguments.')\n\n if embeds is not None:\n if len(embeds) > 10:\n raise InvalidArgument('embeds has a maximum of 10 elements.')\n payload['embeds'] = [e.to_dict() for e in embeds]\n\n if embed is not None:\n payload['embeds'] = [embed.to_dict()]\n\n if content is not None:\n payload['content'] = str(content)\n\n payload['tts'] = tts\n if avatar_url:\n payload['avatar_url'] = avatar_url\n if username:\n payload['username'] = username\n\n if file is not None:\n try:\n to_pass = (file.filename, file.open_file(), 'application/octet-stream')\n return self._adapter.execute_webhook(wait=wait, file=to_pass, payload=payload)\n finally:\n file.close()\n elif files is not None:\n try:\n to_pass = [(file.filename, file.open_file(), 'application/octet-stream')\n for file in files]\n return self._adapter.execute_webhook(wait=wait, files=to_pass, payload=payload)\n finally:\n for file in files:\n file.close()\n else:\n return self._adapter.execute_webhook(wait=wait, payload=payload)\n\n def execute(self, *args, **kwargs):\n \"\"\"An alias for :meth:`~.Webhook.send`.\"\"\"\n return self.send(*args, **kwargs)\n", "path": "discord/webhook.py" } ]
diff --git a/discord/webhook.py b/discord/webhook.py index 0dfe6ba6281a..a4f84632a45c 100644 --- a/discord/webhook.py +++ b/discord/webhook.py @@ -100,7 +100,7 @@ def handle_execution_response(self, data, *, wait): """ raise NotImplementedError() - def _store_user(self, data): + def store_user(self, data): # mocks a ConnectionState for appropriate use for Message return BaseUser(state=self, data=data)
Zeroto521__my-data-toolkit-467
TYP: specific `None` type

<!-- Thanks for contributing a pull request!

Please follow these standard acronyms to start the commit message:

- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->

- [ ] closes #xxxx
- [x] whatsnew entry

`parameter: TheType = None` means `parameter` is `Optional[TheType]`.
`parameter: None` means `None` has a special function.
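A minimal sketch of the annotation convention described above; the function names are illustrative only and not taken from the dtoolkit code base:

```python
from __future__ import annotations

from typing import Optional


# ``other: int = None``: the annotation names the concrete type and the ``None``
# default marks the argument as optional, i.e. it is read as ``Optional[int]``.
def shorthand(other: int = None) -> int | None:
    return other


# The explicit spelling that the shorthand above stands for.
def explicit(other: Optional[int] = None) -> int | None:
    return other
```

This matches the docstring change in this record's diff: `number` defaults to `1`, so it is documented as plain `int`, while `other`, which defaults to `None`, keeps the `int or None` wording.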
[ { "content": "from __future__ import annotations\n\nfrom textwrap import dedent\nfrom typing import TYPE_CHECKING\n\nimport pandas as pd\nfrom pandas.util._decorators import doc\nfrom pandas.util._validators import validate_bool_kwarg\n\nfrom dtoolkit.accessor.register import register_series_method\n\nif TYPE_CHECKING:\n from typing import Any\n\n from dtoolkit._typing import IntOrStr\n from dtoolkit._typing import OneDimArray\n from dtoolkit._typing import Number\n\n\n@register_series_method\n@doc(\n returns=dedent(\n \"\"\"\n Returns\n -------\n str or None\n The name of the Series.\n \"\"\",\n ),\n)\ndef cols(s: pd.Series) -> str | None:\n \"\"\"\n An API to gather :attr:`~pandas.Series.name` and\n :attr:`~pandas.DataFrame.columns` to one.\n {returns}\n See Also\n --------\n pandas.Series.name\n pandas.DataFrame.columns\n dtoolkit.accessor.series.cols\n dtoolkit.accessor.dataframe.cols\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n\n Get :attr:`~pandas.Series.name`.\n\n >>> s = pd.Series(range(10), name=\"item\")\n >>> s.cols()\n 'item'\n\n Get :attr:`~pandas.DataFrame.columns`.\n\n >>> d = pd.DataFrame({{\"a\": [1, 2], \"b\": [3, 4]}})\n >>> d.cols()\n ['a', 'b']\n \"\"\"\n\n return s.name\n\n\n@register_series_method\ndef drop_inf(\n s: pd.Series,\n inf: str = \"all\",\n inplace: bool = False,\n) -> pd.Series | None:\n \"\"\"\n Remove ``inf`` values.\n\n Parameters\n ----------\n inf : {'all', 'pos', 'neg'}, default 'all'\n * 'all' : Remove ``inf`` and ``-inf``.\n * 'pos' : Only remove ``inf``.\n * 'neg' : Only remove ``-inf``.\n\n inplace : bool, default False\n If True, do operation inplace and return None.\n\n Returns\n -------\n Series or None\n Series with ``inf`` entries dropped from it or None if\n ``inplace=True``.\n\n See Also\n --------\n dtoolkit.accessor.dataframe.drop_inf\n :obj:`~pandas.DataFrame` drops rows or columns which contain ``inf``\n values.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> import numpy as np\n >>> s = pd.Series([1., 2., np.inf])\n >>> s\n 0 1.0\n 1 2.0\n 2 inf\n dtype: float64\n\n Drop inf values from a Series.\n\n >>> s.drop_inf()\n 0 1.0\n 1 2.0\n dtype: float64\n\n Keep the Series with valid entries in the same variable.\n\n >>> s.drop_inf(inplace=True)\n >>> s\n 0 1.0\n 1 2.0\n dtype: float64\n \"\"\"\n from dtoolkit.accessor._util import get_inf_range\n\n inplace = validate_bool_kwarg(inplace, \"inplace\")\n inf_range = get_inf_range(inf)\n mask = s.isin(inf_range)\n result = s[~mask]\n\n if not inplace:\n return result\n\n s._update_inplace(result)\n\n\n@register_series_method\ndef bin(\n s: pd.Series,\n bins,\n labels=None,\n right: bool = True,\n retbins: bool = False,\n precision: int = 3,\n include_lowest: bool = False,\n duplicates: str = \"raise\",\n ordered: bool = False,\n inplace: bool = False,\n) -> pd.Series | None:\n \"\"\"\n Bin values into discrete intervals.\n\n See Also\n --------\n pandas.cut: This accessor's prototype method.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n\n Create **score** samples:\n\n >>> s = pd.Series([100, 10, 50, 20, 90, 60])\n\n Bin score to rank level:\n\n - (0, 60] -> E\n - (60, 70] -> D\n - (70, 80] -> C\n - (80, 90] -> B\n - (90, 100] -> A\n\n >>> s.bin([0, 60, 70, 80, 90, 100], ['E', 'D', 'C', 'B', 'A'], right=True)\n 0 A\n 1 E\n 2 E\n 3 E\n 4 B\n 5 E\n dtype: category\n Categories (5, object): ['E', 'D', 'C', 'B', 'A']\n \"\"\"\n inplace = validate_bool_kwarg(inplace, 
\"inplace\")\n\n result = pd.cut(\n s,\n bins=bins,\n right=right,\n labels=labels,\n retbins=retbins,\n precision=precision,\n include_lowest=include_lowest,\n duplicates=duplicates,\n ordered=ordered,\n )\n\n if not inplace:\n return result\n\n s._update_inplace(result)\n\n\n@register_series_method\ndef top_n(\n s: pd.Series,\n n: int,\n largest: bool = True,\n keep: str = \"first\",\n) -> pd.Series:\n \"\"\"\n Return the top `n` values.\n\n This method is the collection of\n :meth:`~pandas.Series.nlargest` and :meth:`~pandas.Series.nsmallest`\n methods.\n\n Parameters\n ----------\n n : int\n Number of top to return.\n\n largest : bool, default True\n - True, the top is the largest.\n - True, the top is the smallest.\n\n keep : {\"first\", \"last\", \"all\"}, default \"first\"\n Where there are duplicate values:\n\n - first : prioritize the first occurrence(s).\n - last : prioritize the last occurrence(s).\n - all : do not drop any duplicates, even it means selecting more than\n n items.\n\n Returns\n -------\n Series\n\n See Also\n --------\n dtoolkit.accessor.series.expand\n Transform each element of a list-like to a column.\n dtoolkit.accessor.dataframe.top_n\n Returns each row's top n.\n \"\"\"\n\n if largest:\n return s.nlargest(n=n, keep=keep)\n\n return s.nsmallest(n=n, keep=keep)\n\n\n@register_series_method\n@doc(\n see_also=dedent(\n \"\"\"\n See Also\n --------\n pandas.Series.explode\n Transform each element of a list-like to a row.\n dtoolkit.accessor.dataframe.expand\n Transform each element of a list-like to a column.\n \"\"\",\n ),\n examples=dedent(\n \"\"\"\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n\n Expand the *list-like* element.\n\n >>> s = pd.Series([[1, 2, 3], 'foo', [], [3, 4]], name=\"item\")\n >>> s.expand()\n item_0 item_1 item_2\n 0 1 2.0 3.0\n 1 foo NaN NaN\n 2 None NaN NaN\n 3 3 4.0 NaN\n\n Expand *sub-element* type is list-like.\n\n >>> s = pd.Series([(\"a\", \"b\"), [1, [2, 3]]], name=\"item\")\n >>> s.expand(flatten=True)\n item_0 item_1 item_2\n 0 a b NaN\n 1 1 2 3.0\n\n Set the columns of name.\n\n >>> s = pd.Series([(\"a\", 1), [\"b\", 2]], name=\"item\")\n >>> s.expand(suffix=[\"index\", \"value\"], delimiter=\"-\")\n item-index item-value\n 0 a 1\n 1 b 2\n\n Also could handle **different lengths** of element and suffix list.\n\n >>> s = pd.Series([(1, 2), [1, 2, 3]], name=\"item\")\n >>> s.expand()\n item_0 item_1 item_2\n 0 1 2 NaN\n 1 1 2 3.0\n >>> s.expand(suffix=[\"a\", \"b\", \"c\", \"d\"])\n item_a item_b item_c\n 0 1 2 NaN\n 1 1 2 3.0\n \"\"\",\n ),\n)\ndef expand(\n s: pd.Series,\n suffix: list[IntOrStr] = None,\n delimiter: str = \"_\",\n flatten: bool = False,\n) -> pd.DataFrame:\n \"\"\"\n Transform each element of a list-like to a **column**.\n\n .. image:: ../../../../_static/expand-vs-explode.svg\n :width: 80%\n :align: center\n\n Parameters\n ----------\n suffix : list of str or int, optional\n New columns of return :class:`~pandas.DataFrame`.\n\n delimiter : str, default \"_\"\n The delimiter between :attr:`~pandas.Series.name` and `suffix`.\n\n flatten : bool, default False\n Flatten all like-list elements or not. 
It would cost more time.\n\n Returns\n -------\n DataFrame\n The structure of new column name is ``{{column name}}{{delimiter}}{{suffix}}``.\n {see_also}\n {examples}\n \"\"\"\n from pandas.api.types import is_list_like\n\n from dtoolkit.accessor._util import collapse\n\n def wrap_collapse(x) -> list[Any]:\n if is_list_like(x):\n if flatten:\n return list(collapse(x))\n return x\n return [x]\n\n s_list = s.apply(wrap_collapse)\n s_len = s_list.len()\n if all(s_len == 1):\n return s\n\n max_len = s_len.max()\n if suffix and len(suffix) < max_len:\n raise ValueError(\n f\"suffix length is less than the max size of {s.name!r} elements.\",\n )\n\n if s.name is None:\n raise ValueError(\"the column name should be specified.\")\n\n columns = suffix or range(max_len)\n return pd.DataFrame(\n s_list.tolist(),\n index=s.index,\n columns=columns[:max_len],\n ).add_prefix(s.name + delimiter)\n\n\n@register_series_method(name=\"len\")\n@register_series_method\ndef lens(s: pd.Series, number: int = 1, other: int = None) -> pd.Series:\n \"\"\"\n Return the length of each element in the series.\n\n Equals to ``s.apply(len)``, but the length of ``number`` type will as ``1``,\n the length of other types will as ``NaN``.\n\n Parameters\n ----------\n number : int or None, default '1'\n The default length of `number` type.\n other : int or None, default None\n The default length of `other` type.\n\n Returns\n -------\n Series\n\n Notes\n -----\n - To keep the Python naming style, so use this accessor via\n ``Series.len`` rather than ``Series.lens``.\n\n - Different to :meth:`pandas.Series.str.len`. It only returns\n :class:`collections.abc.Iterable` type length. Other type will return `NaN`.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> s = pd.Series([0, 1, \"string\", (\"tuple\",), [\"list\"], {}, object])\n >>> s\n 0 0\n 1 1\n 2 string\n 3 (tuple,)\n 4 [list]\n 5 {}\n 6 <class 'object'>\n dtype: object\n >>> s.len()\n 0 1.0\n 1 1.0\n 2 6.0\n 3 1.0\n 4 1.0\n 5 0.0\n 6 NaN\n dtype: float64\n\n Set `number` and `other` default return.\n\n >>> s.len(number=0, other=0)\n 0 0\n 1 0\n 2 6\n 3 1\n 4 1\n 5 0\n 6 0\n dtype: int64\n \"\"\"\n from pandas.api.types import is_number\n\n def wrap_len(x) -> int | None:\n if hasattr(x, \"__len__\"):\n return len(x)\n elif is_number(x):\n return number\n else:\n return other\n\n return s.apply(wrap_len)\n\n\n@register_series_method\ndef error_report(\n s: pd.Series,\n predicted: OneDimArray | list[Number],\n columns: list[IntOrStr] = None,\n) -> pd.DataFrame:\n \"\"\"\n Calculate `absolute error` and `relative error` of two columns.\n\n Parameters\n ----------\n predicted : list of int or float, ndarrray, Series\n A array is compared to ``s``.\n columns : list of str or int, optional\n The columns of returning DataFrame, each represents `true value`,\n `predicted value`, `absolute error`, and `relative error`.\n\n Returns\n -------\n DataFrame\n Return four columns DataFrame and each represents `true value`,\n `predicted value`, `absolute error`, and `relative error`.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> s = pd.Series([1, 2, 3])\n >>> s.error_report([3, 2, 1])\n true value predicted value absolute error relative error\n 0 1 3 2 2.000000\n 1 2 2 0 0.000000\n 2 3 1 2 0.666667\n\n If the name of ``s`` or ``predicted`` is not None, the columns of\n ``error_report`` would use the name of ``s`` and ``predicted``.\n\n >>> s = pd.Series([1, 2, 3], name=\"y\")\n >>> predicted = pd.Series([3, 2, 
1], name=\"y predicted\")\n >>> s.error_report(predicted)\n y y predicted absolute error relative error\n 0 1 3 2 2.000000\n 1 2 2 0 0.000000\n 2 3 1 2 0.666667\n\n If ``columns`` is not None, the columns of ``error_report`` would use it\n firstly.\n\n >>> s.error_report(predicted, columns=[\"a\", \"b\", \"c\", \"d\"])\n a b c d\n 0 1 3 2 2.000000\n 1 2 2 0 0.000000\n 2 3 1 2 0.666667\n \"\"\"\n\n if len(s) != len(predicted):\n raise IndexError(\n \"Length of 'predicted' doesn't match length of 'reference'.\",\n )\n\n if isinstance(predicted, pd.Series):\n if not s.index.equals(predicted.index):\n raise IndexError(\n \"Index values of 'predicted' sequence doesn't \"\n \"match index values of 'reference'.\",\n )\n else:\n predicted = pd.Series(predicted, index=s.index)\n\n if columns is None:\n columns = [\n s.name or \"true value\",\n predicted.name or \"predicted value\",\n \"absolute error\",\n \"relative error\",\n ]\n elif len(columns) != 4:\n raise IndexError(\"The length of 'columns' is not equal to 4.\")\n\n absolute_error = (predicted - s).abs()\n relative_error = absolute_error / s\n\n return pd.concat(\n [\n s,\n predicted,\n absolute_error,\n relative_error,\n ],\n axis=1,\n keys=columns,\n )\n\n\n@register_series_method(name=\"getattr\")\n@register_series_method\ndef get_attr(s: pd.Series, name: str, *args, **kwargs) -> pd.Series:\n \"\"\"\n Return the value of the named attribute of Series element.\n\n The back core logical is :func:`getattr`.\n\n Read more in the `User Guide`_.\n\n .. _User Guide: ../../guide/tips_about_getattr.ipynb\n\n Parameters\n ----------\n name : str\n The name of one of the Series element's attributes. If the named attribute\n does not exist, None is returned.\n args, kwargs\n The arguments of the function type attribute.\n\n Returns\n -------\n Series\n\n See Also\n --------\n getattr\n\n Notes\n -----\n To keep the Python naming style, so use this accessor via\n ``Series.getattr`` rather than ``Series.get_attr``.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> s = pd.Series([\"hello\", \"world\"])\n\n Get a attribute.\n\n >>> s.getattr(\"__doc__\")\n 0 str(object='') -> str\\\\nstr(bytes_or_buffer[, e...\n 1 str(object='') -> str\\\\nstr(bytes_or_buffer[, e...\n dtype: object\n\n Get a don't exist attribute.\n\n >>> s.getattr(\"whatever\")\n 0 None\n 1 None\n dtype: object\n\n Get a method attribute and call it.\n\n >>> s.getattr(\"count\", \"l\")\n 0 2\n 1 1\n dtype: int64\n \"\"\"\n\n def wrap_getattr(x):\n attr = getattr(x, name, None)\n if callable(attr):\n return attr(*args, **kwargs)\n return attr\n\n return s.apply(wrap_getattr)\n", "path": "dtoolkit/accessor/series.py" } ]
[ { "content": "from __future__ import annotations\n\nfrom textwrap import dedent\nfrom typing import TYPE_CHECKING\n\nimport pandas as pd\nfrom pandas.util._decorators import doc\nfrom pandas.util._validators import validate_bool_kwarg\n\nfrom dtoolkit.accessor.register import register_series_method\n\nif TYPE_CHECKING:\n from typing import Any\n\n from dtoolkit._typing import IntOrStr\n from dtoolkit._typing import OneDimArray\n from dtoolkit._typing import Number\n\n\n@register_series_method\n@doc(\n returns=dedent(\n \"\"\"\n Returns\n -------\n str or None\n The name of the Series.\n \"\"\",\n ),\n)\ndef cols(s: pd.Series) -> str | None:\n \"\"\"\n An API to gather :attr:`~pandas.Series.name` and\n :attr:`~pandas.DataFrame.columns` to one.\n {returns}\n See Also\n --------\n pandas.Series.name\n pandas.DataFrame.columns\n dtoolkit.accessor.series.cols\n dtoolkit.accessor.dataframe.cols\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n\n Get :attr:`~pandas.Series.name`.\n\n >>> s = pd.Series(range(10), name=\"item\")\n >>> s.cols()\n 'item'\n\n Get :attr:`~pandas.DataFrame.columns`.\n\n >>> d = pd.DataFrame({{\"a\": [1, 2], \"b\": [3, 4]}})\n >>> d.cols()\n ['a', 'b']\n \"\"\"\n\n return s.name\n\n\n@register_series_method\ndef drop_inf(\n s: pd.Series,\n inf: str = \"all\",\n inplace: bool = False,\n) -> pd.Series | None:\n \"\"\"\n Remove ``inf`` values.\n\n Parameters\n ----------\n inf : {'all', 'pos', 'neg'}, default 'all'\n * 'all' : Remove ``inf`` and ``-inf``.\n * 'pos' : Only remove ``inf``.\n * 'neg' : Only remove ``-inf``.\n\n inplace : bool, default False\n If True, do operation inplace and return None.\n\n Returns\n -------\n Series or None\n Series with ``inf`` entries dropped from it or None if\n ``inplace=True``.\n\n See Also\n --------\n dtoolkit.accessor.dataframe.drop_inf\n :obj:`~pandas.DataFrame` drops rows or columns which contain ``inf``\n values.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> import numpy as np\n >>> s = pd.Series([1., 2., np.inf])\n >>> s\n 0 1.0\n 1 2.0\n 2 inf\n dtype: float64\n\n Drop inf values from a Series.\n\n >>> s.drop_inf()\n 0 1.0\n 1 2.0\n dtype: float64\n\n Keep the Series with valid entries in the same variable.\n\n >>> s.drop_inf(inplace=True)\n >>> s\n 0 1.0\n 1 2.0\n dtype: float64\n \"\"\"\n from dtoolkit.accessor._util import get_inf_range\n\n inplace = validate_bool_kwarg(inplace, \"inplace\")\n inf_range = get_inf_range(inf)\n mask = s.isin(inf_range)\n result = s[~mask]\n\n if not inplace:\n return result\n\n s._update_inplace(result)\n\n\n@register_series_method\ndef bin(\n s: pd.Series,\n bins,\n labels=None,\n right: bool = True,\n retbins: bool = False,\n precision: int = 3,\n include_lowest: bool = False,\n duplicates: str = \"raise\",\n ordered: bool = False,\n inplace: bool = False,\n) -> pd.Series | None:\n \"\"\"\n Bin values into discrete intervals.\n\n See Also\n --------\n pandas.cut: This accessor's prototype method.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n\n Create **score** samples:\n\n >>> s = pd.Series([100, 10, 50, 20, 90, 60])\n\n Bin score to rank level:\n\n - (0, 60] -> E\n - (60, 70] -> D\n - (70, 80] -> C\n - (80, 90] -> B\n - (90, 100] -> A\n\n >>> s.bin([0, 60, 70, 80, 90, 100], ['E', 'D', 'C', 'B', 'A'], right=True)\n 0 A\n 1 E\n 2 E\n 3 E\n 4 B\n 5 E\n dtype: category\n Categories (5, object): ['E', 'D', 'C', 'B', 'A']\n \"\"\"\n inplace = validate_bool_kwarg(inplace, 
\"inplace\")\n\n result = pd.cut(\n s,\n bins=bins,\n right=right,\n labels=labels,\n retbins=retbins,\n precision=precision,\n include_lowest=include_lowest,\n duplicates=duplicates,\n ordered=ordered,\n )\n\n if not inplace:\n return result\n\n s._update_inplace(result)\n\n\n@register_series_method\ndef top_n(\n s: pd.Series,\n n: int,\n largest: bool = True,\n keep: str = \"first\",\n) -> pd.Series:\n \"\"\"\n Return the top `n` values.\n\n This method is the collection of\n :meth:`~pandas.Series.nlargest` and :meth:`~pandas.Series.nsmallest`\n methods.\n\n Parameters\n ----------\n n : int\n Number of top to return.\n\n largest : bool, default True\n - True, the top is the largest.\n - True, the top is the smallest.\n\n keep : {\"first\", \"last\", \"all\"}, default \"first\"\n Where there are duplicate values:\n\n - first : prioritize the first occurrence(s).\n - last : prioritize the last occurrence(s).\n - all : do not drop any duplicates, even it means selecting more than\n n items.\n\n Returns\n -------\n Series\n\n See Also\n --------\n dtoolkit.accessor.series.expand\n Transform each element of a list-like to a column.\n dtoolkit.accessor.dataframe.top_n\n Returns each row's top n.\n \"\"\"\n\n if largest:\n return s.nlargest(n=n, keep=keep)\n\n return s.nsmallest(n=n, keep=keep)\n\n\n@register_series_method\n@doc(\n see_also=dedent(\n \"\"\"\n See Also\n --------\n pandas.Series.explode\n Transform each element of a list-like to a row.\n dtoolkit.accessor.dataframe.expand\n Transform each element of a list-like to a column.\n \"\"\",\n ),\n examples=dedent(\n \"\"\"\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n\n Expand the *list-like* element.\n\n >>> s = pd.Series([[1, 2, 3], 'foo', [], [3, 4]], name=\"item\")\n >>> s.expand()\n item_0 item_1 item_2\n 0 1 2.0 3.0\n 1 foo NaN NaN\n 2 None NaN NaN\n 3 3 4.0 NaN\n\n Expand *sub-element* type is list-like.\n\n >>> s = pd.Series([(\"a\", \"b\"), [1, [2, 3]]], name=\"item\")\n >>> s.expand(flatten=True)\n item_0 item_1 item_2\n 0 a b NaN\n 1 1 2 3.0\n\n Set the columns of name.\n\n >>> s = pd.Series([(\"a\", 1), [\"b\", 2]], name=\"item\")\n >>> s.expand(suffix=[\"index\", \"value\"], delimiter=\"-\")\n item-index item-value\n 0 a 1\n 1 b 2\n\n Also could handle **different lengths** of element and suffix list.\n\n >>> s = pd.Series([(1, 2), [1, 2, 3]], name=\"item\")\n >>> s.expand()\n item_0 item_1 item_2\n 0 1 2 NaN\n 1 1 2 3.0\n >>> s.expand(suffix=[\"a\", \"b\", \"c\", \"d\"])\n item_a item_b item_c\n 0 1 2 NaN\n 1 1 2 3.0\n \"\"\",\n ),\n)\ndef expand(\n s: pd.Series,\n suffix: list[IntOrStr] = None,\n delimiter: str = \"_\",\n flatten: bool = False,\n) -> pd.DataFrame:\n \"\"\"\n Transform each element of a list-like to a **column**.\n\n .. image:: ../../../../_static/expand-vs-explode.svg\n :width: 80%\n :align: center\n\n Parameters\n ----------\n suffix : list of str or int, optional\n New columns of return :class:`~pandas.DataFrame`.\n\n delimiter : str, default \"_\"\n The delimiter between :attr:`~pandas.Series.name` and `suffix`.\n\n flatten : bool, default False\n Flatten all like-list elements or not. 
It would cost more time.\n\n Returns\n -------\n DataFrame\n The structure of new column name is ``{{column name}}{{delimiter}}{{suffix}}``.\n {see_also}\n {examples}\n \"\"\"\n from pandas.api.types import is_list_like\n\n from dtoolkit.accessor._util import collapse\n\n def wrap_collapse(x) -> list[Any]:\n if is_list_like(x):\n if flatten:\n return list(collapse(x))\n return x\n return [x]\n\n s_list = s.apply(wrap_collapse)\n s_len = s_list.len()\n if all(s_len == 1):\n return s\n\n max_len = s_len.max()\n if suffix and len(suffix) < max_len:\n raise ValueError(\n f\"suffix length is less than the max size of {s.name!r} elements.\",\n )\n\n if s.name is None:\n raise ValueError(\"the column name should be specified.\")\n\n columns = suffix or range(max_len)\n return pd.DataFrame(\n s_list.tolist(),\n index=s.index,\n columns=columns[:max_len],\n ).add_prefix(s.name + delimiter)\n\n\n@register_series_method(name=\"len\")\n@register_series_method\ndef lens(s: pd.Series, number: int = 1, other: int = None) -> pd.Series:\n \"\"\"\n Return the length of each element in the series.\n\n Equals to ``s.apply(len)``, but the length of ``number`` type will as ``1``,\n the length of other types will as ``NaN``.\n\n Parameters\n ----------\n number : int, default 1\n The default length of `number` type.\n other : int or None, default None\n The default length of `other` type.\n\n Returns\n -------\n Series\n\n Notes\n -----\n - To keep the Python naming style, so use this accessor via\n ``Series.len`` rather than ``Series.lens``.\n\n - Different to :meth:`pandas.Series.str.len`. It only returns\n :class:`collections.abc.Iterable` type length. Other type will return `NaN`.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> s = pd.Series([0, 1, \"string\", (\"tuple\",), [\"list\"], {}, object])\n >>> s\n 0 0\n 1 1\n 2 string\n 3 (tuple,)\n 4 [list]\n 5 {}\n 6 <class 'object'>\n dtype: object\n >>> s.len()\n 0 1.0\n 1 1.0\n 2 6.0\n 3 1.0\n 4 1.0\n 5 0.0\n 6 NaN\n dtype: float64\n\n Set `number` and `other` default return.\n\n >>> s.len(number=0, other=0)\n 0 0\n 1 0\n 2 6\n 3 1\n 4 1\n 5 0\n 6 0\n dtype: int64\n \"\"\"\n from pandas.api.types import is_number\n\n def wrap_len(x) -> int | None:\n if hasattr(x, \"__len__\"):\n return len(x)\n elif is_number(x):\n return number\n else:\n return other\n\n return s.apply(wrap_len)\n\n\n@register_series_method\ndef error_report(\n s: pd.Series,\n predicted: OneDimArray | list[Number],\n columns: list[IntOrStr] = None,\n) -> pd.DataFrame:\n \"\"\"\n Calculate `absolute error` and `relative error` of two columns.\n\n Parameters\n ----------\n predicted : list of int or float, ndarrray, Series\n A array is compared to ``s``.\n columns : list of str or int, optional\n The columns of returning DataFrame, each represents `true value`,\n `predicted value`, `absolute error`, and `relative error`.\n\n Returns\n -------\n DataFrame\n Return four columns DataFrame and each represents `true value`,\n `predicted value`, `absolute error`, and `relative error`.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> s = pd.Series([1, 2, 3])\n >>> s.error_report([3, 2, 1])\n true value predicted value absolute error relative error\n 0 1 3 2 2.000000\n 1 2 2 0 0.000000\n 2 3 1 2 0.666667\n\n If the name of ``s`` or ``predicted`` is not None, the columns of\n ``error_report`` would use the name of ``s`` and ``predicted``.\n\n >>> s = pd.Series([1, 2, 3], name=\"y\")\n >>> predicted = pd.Series([3, 2, 1], 
name=\"y predicted\")\n >>> s.error_report(predicted)\n y y predicted absolute error relative error\n 0 1 3 2 2.000000\n 1 2 2 0 0.000000\n 2 3 1 2 0.666667\n\n If ``columns`` is not None, the columns of ``error_report`` would use it\n firstly.\n\n >>> s.error_report(predicted, columns=[\"a\", \"b\", \"c\", \"d\"])\n a b c d\n 0 1 3 2 2.000000\n 1 2 2 0 0.000000\n 2 3 1 2 0.666667\n \"\"\"\n\n if len(s) != len(predicted):\n raise IndexError(\n \"Length of 'predicted' doesn't match length of 'reference'.\",\n )\n\n if isinstance(predicted, pd.Series):\n if not s.index.equals(predicted.index):\n raise IndexError(\n \"Index values of 'predicted' sequence doesn't \"\n \"match index values of 'reference'.\",\n )\n else:\n predicted = pd.Series(predicted, index=s.index)\n\n if columns is None:\n columns = [\n s.name or \"true value\",\n predicted.name or \"predicted value\",\n \"absolute error\",\n \"relative error\",\n ]\n elif len(columns) != 4:\n raise IndexError(\"The length of 'columns' is not equal to 4.\")\n\n absolute_error = (predicted - s).abs()\n relative_error = absolute_error / s\n\n return pd.concat(\n [\n s,\n predicted,\n absolute_error,\n relative_error,\n ],\n axis=1,\n keys=columns,\n )\n\n\n@register_series_method(name=\"getattr\")\n@register_series_method\ndef get_attr(s: pd.Series, name: str, *args, **kwargs) -> pd.Series:\n \"\"\"\n Return the value of the named attribute of Series element.\n\n The back core logical is :func:`getattr`.\n\n Read more in the `User Guide`_.\n\n .. _User Guide: ../../guide/tips_about_getattr.ipynb\n\n Parameters\n ----------\n name : str\n The name of one of the Series element's attributes. If the named attribute\n does not exist, None is returned.\n args, kwargs\n The arguments of the function type attribute.\n\n Returns\n -------\n Series\n\n See Also\n --------\n getattr\n\n Notes\n -----\n To keep the Python naming style, so use this accessor via\n ``Series.getattr`` rather than ``Series.get_attr``.\n\n Examples\n --------\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> s = pd.Series([\"hello\", \"world\"])\n\n Get a attribute.\n\n >>> s.getattr(\"__doc__\")\n 0 str(object='') -> str\\\\nstr(bytes_or_buffer[, e...\n 1 str(object='') -> str\\\\nstr(bytes_or_buffer[, e...\n dtype: object\n\n Get a don't exist attribute.\n\n >>> s.getattr(\"whatever\")\n 0 None\n 1 None\n dtype: object\n\n Get a method attribute and call it.\n\n >>> s.getattr(\"count\", \"l\")\n 0 2\n 1 1\n dtype: int64\n \"\"\"\n\n def wrap_getattr(x):\n attr = getattr(x, name, None)\n if callable(attr):\n return attr(*args, **kwargs)\n return attr\n\n return s.apply(wrap_getattr)\n", "path": "dtoolkit/accessor/series.py" } ]
diff --git a/dtoolkit/accessor/series.py b/dtoolkit/accessor/series.py index b8481bd44..3597e58ad 100644 --- a/dtoolkit/accessor/series.py +++ b/dtoolkit/accessor/series.py @@ -382,7 +382,7 @@ def lens(s: pd.Series, number: int = 1, other: int = None) -> pd.Series: Parameters ---------- - number : int or None, default '1' + number : int, default 1 The default length of `number` type. other : int or None, default None The default length of `other` type.
sktime__sktime-6019
[BUG] `BaseRegressor.score` method fails with `sklearn.metrics r2_score got an unexpected keyword argument 'normalize'`

**Describe the bug**
When using ResNetRegressor score (also tested with FCNRegressor, same bug) you get a sklearn.metrics error.
TypeError: got an unexpected keyword argument 'normalize'

**To Reproduce**
```python
import sktime
from sktime.regression.deep_learning.resnet import ResNetRegressor
from sktime.datasets import load_UCR_UEA_dataset

data = 'BIDMC32HR'
train_x, train_y = load_UCR_UEA_dataset(name=data, split="train")
test_x, test_y = load_UCR_UEA_dataset(name=data, split="test")

model = ResNetRegressor(n_epochs=1, batch_size=1)
model.fit(train_x, train_y)
model.score(test_x, test_y)
```

**Expected behavior**
Not an error but a score.

**Additional context**
ResNetRegressor::score
<details>

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[7], line 1
----> 1 model.score(test_x, test_y)

File [~/Development/sktime-dev/sktime/sktime/regression/base.py:302](http://localhost:18888/lab/tree/sktime/sktime/sktime/regression/base.py#line=301), in BaseRegressor.score(self, X, y, multioutput)
    298 from sklearn.metrics import r2_score
    300 self.check_is_fitted()
--> 302 return r2_score(y, self.predict(X), normalize=True, multioutput=multioutput)

File [~/Development/sktime-dev/venv/lib/python3.10/site-packages/sklearn/utils/_param_validation.py:191](http://localhost:18888/lab/tree/sktime/venv/lib/python3.10/site-packages/sklearn/utils/_param_validation.py#line=190), in validate_params.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
    188 func_sig = signature(func)
    190 # Map *args/**kwargs to the function signature
--> 191 params = func_sig.bind(*args, **kwargs)
    192 params.apply_defaults()
    194 # ignore self/cls and positional/keyword markers

File /usr/lib/python3.10/inspect.py:3186, in Signature.bind(self, *args, **kwargs)
   3181 def bind(self, /, *args, **kwargs):
   3182     """Get a BoundArguments object, that maps the passed `args`
   3183     and `kwargs` to the function's signature. Raises `TypeError`
   3184     if the passed arguments can not be bound.
   3185     """
-> 3186     return self._bind(args, kwargs)

File /usr/lib/python3.10/inspect.py:3175, in Signature._bind(self, args, kwargs, partial)
   3173     arguments[kwargs_param.name] = kwargs
   3174 else:
-> 3175     raise TypeError(
   3176         'got an unexpected keyword argument {arg!r}'.format(
   3177             arg=next(iter(kwargs))))
   3179 return self._bound_arguments_cls(self, arguments)

TypeError: got an unexpected keyword argument 'normalize'
```

</details>

FCNRegressor::score
<details>

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[8], line 4
      2 model = FCNRegressor(n_epochs=1, batch_size=1)
      3 model.fit(train_x, train_y)
----> 4 model.score(test_x, test_y)

File [~/Development/sktime-dev/sktime/sktime/regression/base.py:302](http://localhost:18888/lab/tree/sktime/sktime/sktime/regression/base.py#line=301), in BaseRegressor.score(self, X, y, multioutput)
    298 from sklearn.metrics import r2_score
    300 self.check_is_fitted()
--> 302 return r2_score(y, self.predict(X), normalize=True, multioutput=multioutput)

File [~/Development/sktime-dev/venv/lib/python3.10/site-packages/sklearn/utils/_param_validation.py:191](http://localhost:18888/lab/tree/sktime/venv/lib/python3.10/site-packages/sklearn/utils/_param_validation.py#line=190), in validate_params.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
    188 func_sig = signature(func)
    190 # Map *args/**kwargs to the function signature
--> 191 params = func_sig.bind(*args, **kwargs)
    192 params.apply_defaults()
    194 # ignore self/cls and positional/keyword markers

File /usr/lib/python3.10/inspect.py:3186, in Signature.bind(self, *args, **kwargs)
   3181 def bind(self, /, *args, **kwargs):
   3182     """Get a BoundArguments object, that maps the passed `args`
   3183     and `kwargs` to the function's signature. Raises `TypeError`
   3184     if the passed arguments can not be bound.
   3185     """
-> 3186     return self._bind(args, kwargs)

File /usr/lib/python3.10/inspect.py:3175, in Signature._bind(self, args, kwargs, partial)
   3173     arguments[kwargs_param.name] = kwargs
   3174 else:
-> 3175     raise TypeError(
   3176         'got an unexpected keyword argument {arg!r}'.format(
   3177             arg=next(iter(kwargs))))
   3179 return self._bound_arguments_cls(self, arguments)

TypeError: got an unexpected keyword argument 'normalize'
```

</details>

**Versions**
<details>

```
/home/cyril/Development/sktime-dev/venv/lib/python3.10/site-packages/_distutils_hack/__init__.py:26: UserWarning: Setuptools is replacing distutils.
  warnings.warn("Setuptools is replacing distutils.")
/home/cyril/Development/sktime-dev/venv/lib/python3.10/site-packages/statsforecast/core.py:26: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
  from tqdm.autonotebook import tqdm
```

System:
    python: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
executable: /home/cyril/Development/sktime-dev/venv/bin/python
   machine: Linux-6.5.0-14-generic-x86_64-with-glibc2.35

Python dependencies:
          pip: 24.0
       sktime: 0.26.1
      sklearn: 1.4.1.post1
       skbase: 0.7.2
        numpy: 1.26.4
        scipy: 1.12.0
       pandas: 2.1.4
   matplotlib: 3.8.3
       joblib: 1.3.2
        numba: 0.58.1
  statsmodels: 0.14.1
     pmdarima: 2.0.4
statsforecast: 1.7.3
      tsfresh: 0.20.2
      tslearn: 0.6.3
        torch: None
   tensorflow: 2.15.0
tensorflow_probability: None
</details>

<!-- Thanks for contributing! -->
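For quick context on the traceback above: `normalize` is an argument of `sklearn.metrics.accuracy_score`, not of `sklearn.metrics.r2_score`, so scikit-learn's parameter validation rejects the call. A small illustrative sketch, independent of the sktime code base:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

# r2_score only accepts sample_weight and multioutput (plus force_finite in
# recent scikit-learn releases) -- this call works.
print(r2_score(y_true, y_pred, multioutput="uniform_average"))

# Passing normalize=True, as BaseRegressor.score does in the traceback above,
# raises "TypeError: got an unexpected keyword argument 'normalize'".
# r2_score(y_true, y_pred, normalize=True)
```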
[ { "content": "# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Abstract base class for time series regressors.\n\n class name: BaseRegressor\n\nDefining methods:\n fitting - fit(self, X, y)\n predicting - predict(self, X)\n\nInherited inspection methods:\n hyper-parameter inspection - get_params()\n fitted parameter inspection - get_fitted_params()\n\nState:\n fitted model/strategy - by convention, any attributes ending in \"_\"\n fitted state flag - is_fitted (property)\n fitted state inspection - check_is_fitted()\n\"\"\"\n\n__all__ = [\n \"BaseRegressor\",\n]\n__author__ = [\"mloning\", \"fkiraly\"]\n\nimport time\n\nimport numpy as np\n\nfrom sktime.base import BasePanelMixin\nfrom sktime.datatypes import VectorizedDF\nfrom sktime.utils.sklearn import is_sklearn_transformer\nfrom sktime.utils.validation import check_n_jobs\n\n\nclass BaseRegressor(BasePanelMixin):\n \"\"\"Abstract base class for time series regressors.\n\n The base regressor specifies the methods and method signatures that all\n regressors have to implement. Attributes with a underscore suffix are set in the\n method fit.\n\n Parameters\n ----------\n fit_time_ : integer, time (in milliseconds) for fit to run.\n _class_dictionary : dictionary mapping classes_ onto integers 0...n_classes_-1.\n _threads_to_use : number of threads to use in fit as determined by n_jobs.\n \"\"\"\n\n _tags = {\n \"object_type\": \"regressor\", # type of object\n \"X_inner_mtype\": \"numpy3D\", # which type do _fit/_predict, support for X?\n \"y_inner_mtype\": \"numpy1D\", # which type do _fit/_predict, support for y?\n # it should be either \"numpy3D\" or \"nested_univ\" (nested pd.DataFrame)\n \"capability:multioutput\": False, # whether regressor supports multioutput\n \"capability:multivariate\": False,\n \"capability:unequal_length\": False,\n \"capability:missing_values\": False,\n \"capability:train_estimate\": False,\n \"capability:contractable\": False,\n \"capability:multithreading\": False,\n \"authors\": \"sktime developers\", # author(s) of the object\n \"maintainers\": \"sktime developers\", # current maintainer(s) of the object\n }\n\n # convenience constant to control which metadata of input data\n # are regularly retrieved in input checks\n METADATA_REQ_IN_CHECKS = [\n \"n_instances\",\n \"has_nans\",\n \"is_univariate\",\n \"is_equal_length\",\n ]\n\n # attribute name where vectorized estimators are stored\n VECTORIZATION_ATTR = \"regressors_\" # e.g., classifiers_, regressors_\n\n # used in error messages\n TASK = \"regression\" # e.g., classification, regression\n EST_TYPE = \"regressor\" # e.g., classifier, regressor\n EST_TYPE_PLURAL = \"regressors\" # e.g., classifiers, regressors\n\n def __init__(self):\n self.fit_time_ = 0\n self._class_dictionary = {}\n self._threads_to_use = 1\n self._X_metadata = {}\n\n # required for compatibility with some sklearn interfaces\n # i.e. CalibratedRegressorCV\n self._estimator_type = \"regressor\"\n self._is_vectorized = False\n self._is_timed = False\n self._converter_store_y = {}\n\n super().__init__()\n\n def __rmul__(self, other):\n \"\"\"Magic * method, return concatenated RegressorPipeline, transformers on left.\n\n Overloaded multiplication operation for regressors. 
Implemented for `other`\n being a transformer, otherwise returns `NotImplemented`.\n\n Parameters\n ----------\n other: `sktime` transformer, must inherit from BaseTransformer\n otherwise, `NotImplemented` is returned\n\n Returns\n -------\n RegressorPipeline object, concatenation of `other` (first) with `self` (last).\n \"\"\"\n from sktime.regression.compose import RegressorPipeline\n from sktime.transformations.base import BaseTransformer\n from sktime.transformations.compose import TransformerPipeline\n from sktime.transformations.series.adapt import TabularToSeriesAdaptor\n\n # behaviour is implemented only if other inherits from BaseTransformer\n # in that case, distinctions arise from whether self or other is a pipeline\n # todo: this can probably be simplified further with \"zero length\" pipelines\n if isinstance(other, BaseTransformer):\n # RegressorPipeline already has the dunder method defined\n if isinstance(self, RegressorPipeline):\n return other * self\n # if other is a TransformerPipeline but self is not, first unwrap it\n elif isinstance(other, TransformerPipeline):\n return RegressorPipeline(regressor=self, transformers=other.steps)\n # if neither self nor other are a pipeline, construct a RegressorPipeline\n else:\n return RegressorPipeline(regressor=self, transformers=[other])\n elif is_sklearn_transformer(other):\n return TabularToSeriesAdaptor(other) * self\n else:\n return NotImplemented\n\n def fit(self, X, y):\n \"\"\"Fit time series regressor to training data.\n\n State change:\n Changes state to \"fitted\".\n\n Writes to self:\n Sets self.is_fitted to True.\n Sets fitted model attributes ending in \"_\".\n\n Parameters\n ----------\n X : sktime compatible time series panel data container, Panel scitype, e.g.,\n pd-multiindex: pd.DataFrame with columns = variables,\n index = pd.MultiIndex with first level = instance indices,\n second level = time indices\n numpy3D: 3D np.array (any number of dimensions, equal length series)\n of shape [n_instances, n_dimensions, series_length]\n or of any other supported Panel mtype\n for list of mtypes, see datatypes.SCITYPE_REGISTER\n for specifications, see examples/AA_datatypes_and_datasets.ipynb\n y : sktime compatible tabular data container, Table scitype\n 1D iterable, of shape [n_instances]\n or 2D iterable, of shape [n_instances, n_dimensions]\n class labels for fitting\n 0-th indices correspond to instance indices in X\n 1-st indices (if applicable) correspond to multioutput vector indices in X\n supported sktime types: np.ndarray (1D, 2D), pd.Series, pd.DataFrame\n\n Returns\n -------\n self : Reference to self.\n\n Notes\n -----\n Changes state by creating a fitted model that updates attributes\n ending in \"_\" and sets is_fitted flag to True.\n \"\"\"\n self.reset()\n\n # fit timer start\n start = int(round(time.time() * 1000))\n\n # check and convert y for multioutput vectorization\n y, y_metadata, y_inner_mtype = self._check_y(y, return_to_mtype=True)\n self._y_metadata = y_metadata\n self._y_inner_mtype = y_inner_mtype\n self._is_vectorized = isinstance(y, VectorizedDF)\n\n if self._is_vectorized:\n self._vectorize(\"fit\", X=X, y=y)\n # fit timer end\n self.fit_time_ = int(round(time.time() * 1000)) - start\n # this should happen last: fitted state is set to True\n self._is_fitted = True\n return self\n\n # no vectorization needed, proceed with normal fit\n\n # convenience conversions to allow user flexibility:\n # if X is 2D array, convert to 3D, if y is Series, convert to numpy\n X, y = 
self._internal_convert(X, y)\n\n # y float coercion\n if y is not None and isinstance(y, np.ndarray):\n y = y.astype(\"float\")\n\n # input checks\n X_metadata = self._check_input(\n X, y, return_metadata=self.METADATA_REQ_IN_CHECKS\n )\n self._X_metadata = X_metadata\n X_mtype = X_metadata[\"mtype\"]\n\n # Check this regressor can handle characteristics\n self._check_capabilities(X_metadata)\n\n # Convert data as dictated by the regressor tags\n X = self._convert_X(X, X_mtype)\n multithread = self.get_tag(\"capability:multithreading\")\n if multithread:\n try:\n self._threads_to_use = check_n_jobs(self.n_jobs)\n except NameError:\n raise AttributeError(\n \"self.n_jobs must be set if capability:multithreading is True\"\n )\n\n self._fit(X, y)\n self.fit_time_ = int(round(time.time() * 1000)) - start\n\n # this should happen last: fitted state is set to True\n self._is_fitted = True\n return self\n\n def predict(self, X) -> np.ndarray:\n \"\"\"Predicts labels for sequences in X.\n\n Parameters\n ----------\n X : sktime compatible time series panel data container, Panel scitype, e.g.,\n pd-multiindex: pd.DataFrame with columns = variables,\n index = pd.MultiIndex with first level = instance indices,\n second level = time indices\n numpy3D: 3D np.array (any number of dimensions, equal length series)\n of shape [n_instances, n_dimensions, series_length]\n or of any other supported Panel mtype\n for list of mtypes, see datatypes.SCITYPE_REGISTER\n for specifications, see examples/AA_datatypes_and_datasets.ipynb\n\n Returns\n -------\n y_pred : sktime compatible tabular data container, Table scitype\n 1D iterable, of shape [n_instances]\n or 2D iterable, of shape [n_instances, n_dimensions]\n predicted class labels\n 0-th indices correspond to instance indices in X\n 1-st indices (if applicable) correspond to multioutput vector indices in X\n 1D np.npdarray, if y univariate (one dimension)\n otherwise, same type as y passed in fit\n \"\"\"\n self.check_is_fitted()\n\n # enter vectorized mode if needed\n if self._is_vectorized:\n return self._vectorize(\"predict\", X=X)\n\n # boilerplate input checks for predict-like methods\n X = self._check_convert_X_for_predict(X)\n\n # call internal _predict, convert output\n y_pred_inner = self._predict(X)\n y_pred = self._convert_output_y(y_pred_inner)\n return y_pred\n\n def score(self, X, y, multioutput=\"uniform_average\") -> float:\n \"\"\"Scores predicted labels against ground truth labels on X.\n\n Parameters\n ----------\n X : sktime compatible time series panel data container, Panel scitype, e.g.,\n pd-multiindex: pd.DataFrame with columns = variables,\n index = pd.MultiIndex with first level = instance indices,\n second level = time indices\n numpy3D: 3D np.array (any number of dimensions, equal length series)\n of shape [n_instances, n_dimensions, series_length]\n or of any other supported Panel mtype\n for list of mtypes, see datatypes.SCITYPE_REGISTER\n for specifications, see examples/AA_datatypes_and_datasets.ipynb\n y : 2D np.array of int, of shape [n_instances, n_dimensions] - regression labels\n for fitting indices correspond to instance indices in X\n or 1D np.array of int, of shape [n_instances] - regression labels for\n fitting indices correspond to instance indices in X\n multioutput : str, optional (default=\"uniform_average\")\n {\"raw_values\", \"uniform_average\", \"variance_weighted\"}, array-like of shape\n (n_outputs,) or None, default=\"uniform_average\".\n Defines aggregating of multiple output scores. 
Array-like value defines\n weights used to average scores.\n\n Returns\n -------\n float, R-squared score of predict(X) vs y\n \"\"\"\n from sklearn.metrics import r2_score\n\n self.check_is_fitted()\n\n return r2_score(y, self.predict(X), normalize=True, multioutput=multioutput)\n\n def _fit(self, X, y):\n \"\"\"Fit time series regressor to training data.\n\n Abstract method, must be implemented.\n\n Parameters\n ----------\n X : guaranteed to be of a type in self.get_tag(\"X_inner_mtype\")\n if self.get_tag(\"X_inner_mtype\") = \"numpy3D\":\n 3D np.ndarray of shape = [n_instances, n_dimensions, series_length]\n if self.get_tag(\"X_inner_mtype\") = \"pd-multiindex\":\n pd.DataFrame with columns = variables,\n index = pd.MultiIndex with first level = instance indices,\n second level = time indices\n if self.get_tag(\"X_inner_mtype\") = \"nested_univ\":\n pd.DataFrame with each column a dimension, each cell a pd.Series\n for list of other mtypes, see datatypes.SCITYPE_REGISTER\n for specifications, see examples/AA_datatypes_and_datasets.ipynb\n y : guaranteed to be of a type in self.get_tag(\"y_inner_mtype\")\n 1D iterable, of shape [n_instances]\n or 2D iterable, of shape [n_instances, n_dimensions]\n class labels for fitting\n if self.get_tag(\"capaility:multioutput\") = False, guaranteed to be 1D\n if self.get_tag(\"capaility:multioutput\") = True, guaranteed to be 2D\n\n Returns\n -------\n self : Reference to self.\n\n Notes\n -----\n Changes state by creating a fitted model that updates attributes ending in \"_\"\n \"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _predict(self, X) -> np.ndarray:\n \"\"\"Predicts labels for sequences in X.\n\n Abstract method, must be implemented.\n\n Parameters\n ----------\n X : guaranteed to be of a type in self.get_tag(\"X_inner_mtype\")\n if self.get_tag(\"X_inner_mtype\") = \"numpy3D\":\n 3D np.ndarray of shape = [n_instances, n_dimensions, series_length]\n if self.get_tag(\"X_inner_mtype\") = \"nested_univ\":\n pd.DataFrame with each column a dimension, each cell a pd.Series\n for list of other mtypes, see datatypes.SCITYPE_REGISTER\n for specifications, see examples/AA_datatypes_and_datasets.ipynb\n\n Returns\n -------\n y : 1D np.array of float, of shape [n_instances] - predicted regression labels\n indices correspond to instance indices in X\n \"\"\"\n raise NotImplementedError(\"abstract method\")\n", "path": "sktime/regression/base.py" } ]
[ { "content": "# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Abstract base class for time series regressors.\n\n class name: BaseRegressor\n\nDefining methods:\n fitting - fit(self, X, y)\n predicting - predict(self, X)\n\nInherited inspection methods:\n hyper-parameter inspection - get_params()\n fitted parameter inspection - get_fitted_params()\n\nState:\n fitted model/strategy - by convention, any attributes ending in \"_\"\n fitted state flag - is_fitted (property)\n fitted state inspection - check_is_fitted()\n\"\"\"\n\n__all__ = [\n \"BaseRegressor\",\n]\n__author__ = [\"mloning\", \"fkiraly\"]\n\nimport time\n\nimport numpy as np\n\nfrom sktime.base import BasePanelMixin\nfrom sktime.datatypes import VectorizedDF\nfrom sktime.utils.sklearn import is_sklearn_transformer\nfrom sktime.utils.validation import check_n_jobs\n\n\nclass BaseRegressor(BasePanelMixin):\n \"\"\"Abstract base class for time series regressors.\n\n The base regressor specifies the methods and method signatures that all\n regressors have to implement. Attributes with a underscore suffix are set in the\n method fit.\n\n Parameters\n ----------\n fit_time_ : integer, time (in milliseconds) for fit to run.\n _class_dictionary : dictionary mapping classes_ onto integers 0...n_classes_-1.\n _threads_to_use : number of threads to use in fit as determined by n_jobs.\n \"\"\"\n\n _tags = {\n \"object_type\": \"regressor\", # type of object\n \"X_inner_mtype\": \"numpy3D\", # which type do _fit/_predict, support for X?\n \"y_inner_mtype\": \"numpy1D\", # which type do _fit/_predict, support for y?\n # it should be either \"numpy3D\" or \"nested_univ\" (nested pd.DataFrame)\n \"capability:multioutput\": False, # whether regressor supports multioutput\n \"capability:multivariate\": False,\n \"capability:unequal_length\": False,\n \"capability:missing_values\": False,\n \"capability:train_estimate\": False,\n \"capability:contractable\": False,\n \"capability:multithreading\": False,\n \"authors\": \"sktime developers\", # author(s) of the object\n \"maintainers\": \"sktime developers\", # current maintainer(s) of the object\n }\n\n # convenience constant to control which metadata of input data\n # are regularly retrieved in input checks\n METADATA_REQ_IN_CHECKS = [\n \"n_instances\",\n \"has_nans\",\n \"is_univariate\",\n \"is_equal_length\",\n ]\n\n # attribute name where vectorized estimators are stored\n VECTORIZATION_ATTR = \"regressors_\" # e.g., classifiers_, regressors_\n\n # used in error messages\n TASK = \"regression\" # e.g., classification, regression\n EST_TYPE = \"regressor\" # e.g., classifier, regressor\n EST_TYPE_PLURAL = \"regressors\" # e.g., classifiers, regressors\n\n def __init__(self):\n self.fit_time_ = 0\n self._class_dictionary = {}\n self._threads_to_use = 1\n self._X_metadata = {}\n\n # required for compatibility with some sklearn interfaces\n # i.e. CalibratedRegressorCV\n self._estimator_type = \"regressor\"\n self._is_vectorized = False\n self._is_timed = False\n self._converter_store_y = {}\n\n super().__init__()\n\n def __rmul__(self, other):\n \"\"\"Magic * method, return concatenated RegressorPipeline, transformers on left.\n\n Overloaded multiplication operation for regressors. 
Implemented for `other`\n being a transformer, otherwise returns `NotImplemented`.\n\n Parameters\n ----------\n other: `sktime` transformer, must inherit from BaseTransformer\n otherwise, `NotImplemented` is returned\n\n Returns\n -------\n RegressorPipeline object, concatenation of `other` (first) with `self` (last).\n \"\"\"\n from sktime.regression.compose import RegressorPipeline\n from sktime.transformations.base import BaseTransformer\n from sktime.transformations.compose import TransformerPipeline\n from sktime.transformations.series.adapt import TabularToSeriesAdaptor\n\n # behaviour is implemented only if other inherits from BaseTransformer\n # in that case, distinctions arise from whether self or other is a pipeline\n # todo: this can probably be simplified further with \"zero length\" pipelines\n if isinstance(other, BaseTransformer):\n # RegressorPipeline already has the dunder method defined\n if isinstance(self, RegressorPipeline):\n return other * self\n # if other is a TransformerPipeline but self is not, first unwrap it\n elif isinstance(other, TransformerPipeline):\n return RegressorPipeline(regressor=self, transformers=other.steps)\n # if neither self nor other are a pipeline, construct a RegressorPipeline\n else:\n return RegressorPipeline(regressor=self, transformers=[other])\n elif is_sklearn_transformer(other):\n return TabularToSeriesAdaptor(other) * self\n else:\n return NotImplemented\n\n def fit(self, X, y):\n \"\"\"Fit time series regressor to training data.\n\n State change:\n Changes state to \"fitted\".\n\n Writes to self:\n Sets self.is_fitted to True.\n Sets fitted model attributes ending in \"_\".\n\n Parameters\n ----------\n X : sktime compatible time series panel data container, Panel scitype, e.g.,\n pd-multiindex: pd.DataFrame with columns = variables,\n index = pd.MultiIndex with first level = instance indices,\n second level = time indices\n numpy3D: 3D np.array (any number of dimensions, equal length series)\n of shape [n_instances, n_dimensions, series_length]\n or of any other supported Panel mtype\n for list of mtypes, see datatypes.SCITYPE_REGISTER\n for specifications, see examples/AA_datatypes_and_datasets.ipynb\n y : sktime compatible tabular data container, Table scitype\n 1D iterable, of shape [n_instances]\n or 2D iterable, of shape [n_instances, n_dimensions]\n class labels for fitting\n 0-th indices correspond to instance indices in X\n 1-st indices (if applicable) correspond to multioutput vector indices in X\n supported sktime types: np.ndarray (1D, 2D), pd.Series, pd.DataFrame\n\n Returns\n -------\n self : Reference to self.\n\n Notes\n -----\n Changes state by creating a fitted model that updates attributes\n ending in \"_\" and sets is_fitted flag to True.\n \"\"\"\n self.reset()\n\n # fit timer start\n start = int(round(time.time() * 1000))\n\n # check and convert y for multioutput vectorization\n y, y_metadata, y_inner_mtype = self._check_y(y, return_to_mtype=True)\n self._y_metadata = y_metadata\n self._y_inner_mtype = y_inner_mtype\n self._is_vectorized = isinstance(y, VectorizedDF)\n\n if self._is_vectorized:\n self._vectorize(\"fit\", X=X, y=y)\n # fit timer end\n self.fit_time_ = int(round(time.time() * 1000)) - start\n # this should happen last: fitted state is set to True\n self._is_fitted = True\n return self\n\n # no vectorization needed, proceed with normal fit\n\n # convenience conversions to allow user flexibility:\n # if X is 2D array, convert to 3D, if y is Series, convert to numpy\n X, y = 
self._internal_convert(X, y)\n\n # y float coercion\n if y is not None and isinstance(y, np.ndarray):\n y = y.astype(\"float\")\n\n # input checks\n X_metadata = self._check_input(\n X, y, return_metadata=self.METADATA_REQ_IN_CHECKS\n )\n self._X_metadata = X_metadata\n X_mtype = X_metadata[\"mtype\"]\n\n # Check this regressor can handle characteristics\n self._check_capabilities(X_metadata)\n\n # Convert data as dictated by the regressor tags\n X = self._convert_X(X, X_mtype)\n multithread = self.get_tag(\"capability:multithreading\")\n if multithread:\n try:\n self._threads_to_use = check_n_jobs(self.n_jobs)\n except NameError:\n raise AttributeError(\n \"self.n_jobs must be set if capability:multithreading is True\"\n )\n\n self._fit(X, y)\n self.fit_time_ = int(round(time.time() * 1000)) - start\n\n # this should happen last: fitted state is set to True\n self._is_fitted = True\n return self\n\n def predict(self, X) -> np.ndarray:\n \"\"\"Predicts labels for sequences in X.\n\n Parameters\n ----------\n X : sktime compatible time series panel data container, Panel scitype, e.g.,\n pd-multiindex: pd.DataFrame with columns = variables,\n index = pd.MultiIndex with first level = instance indices,\n second level = time indices\n numpy3D: 3D np.array (any number of dimensions, equal length series)\n of shape [n_instances, n_dimensions, series_length]\n or of any other supported Panel mtype\n for list of mtypes, see datatypes.SCITYPE_REGISTER\n for specifications, see examples/AA_datatypes_and_datasets.ipynb\n\n Returns\n -------\n y_pred : sktime compatible tabular data container, Table scitype\n 1D iterable, of shape [n_instances]\n or 2D iterable, of shape [n_instances, n_dimensions]\n predicted class labels\n 0-th indices correspond to instance indices in X\n 1-st indices (if applicable) correspond to multioutput vector indices in X\n 1D np.npdarray, if y univariate (one dimension)\n otherwise, same type as y passed in fit\n \"\"\"\n self.check_is_fitted()\n\n # enter vectorized mode if needed\n if self._is_vectorized:\n return self._vectorize(\"predict\", X=X)\n\n # boilerplate input checks for predict-like methods\n X = self._check_convert_X_for_predict(X)\n\n # call internal _predict, convert output\n y_pred_inner = self._predict(X)\n y_pred = self._convert_output_y(y_pred_inner)\n return y_pred\n\n def score(self, X, y, multioutput=\"uniform_average\") -> float:\n \"\"\"Scores predicted labels against ground truth labels on X.\n\n Parameters\n ----------\n X : sktime compatible time series panel data container, Panel scitype, e.g.,\n pd-multiindex: pd.DataFrame with columns = variables,\n index = pd.MultiIndex with first level = instance indices,\n second level = time indices\n numpy3D: 3D np.array (any number of dimensions, equal length series)\n of shape [n_instances, n_dimensions, series_length]\n or of any other supported Panel mtype\n for list of mtypes, see datatypes.SCITYPE_REGISTER\n for specifications, see examples/AA_datatypes_and_datasets.ipynb\n y : 2D np.array of int, of shape [n_instances, n_dimensions] - regression labels\n for fitting indices correspond to instance indices in X\n or 1D np.array of int, of shape [n_instances] - regression labels for\n fitting indices correspond to instance indices in X\n multioutput : str, optional (default=\"uniform_average\")\n {\"raw_values\", \"uniform_average\", \"variance_weighted\"}, array-like of shape\n (n_outputs,) or None, default=\"uniform_average\".\n Defines aggregating of multiple output scores. 
Array-like value defines\n weights used to average scores.\n\n Returns\n -------\n float, R-squared score of predict(X) vs y\n \"\"\"\n from sklearn.metrics import r2_score\n\n self.check_is_fitted()\n\n return r2_score(y, self.predict(X), multioutput=multioutput)\n\n def _fit(self, X, y):\n \"\"\"Fit time series regressor to training data.\n\n Abstract method, must be implemented.\n\n Parameters\n ----------\n X : guaranteed to be of a type in self.get_tag(\"X_inner_mtype\")\n if self.get_tag(\"X_inner_mtype\") = \"numpy3D\":\n 3D np.ndarray of shape = [n_instances, n_dimensions, series_length]\n if self.get_tag(\"X_inner_mtype\") = \"pd-multiindex\":\n pd.DataFrame with columns = variables,\n index = pd.MultiIndex with first level = instance indices,\n second level = time indices\n if self.get_tag(\"X_inner_mtype\") = \"nested_univ\":\n pd.DataFrame with each column a dimension, each cell a pd.Series\n for list of other mtypes, see datatypes.SCITYPE_REGISTER\n for specifications, see examples/AA_datatypes_and_datasets.ipynb\n y : guaranteed to be of a type in self.get_tag(\"y_inner_mtype\")\n 1D iterable, of shape [n_instances]\n or 2D iterable, of shape [n_instances, n_dimensions]\n class labels for fitting\n if self.get_tag(\"capaility:multioutput\") = False, guaranteed to be 1D\n if self.get_tag(\"capaility:multioutput\") = True, guaranteed to be 2D\n\n Returns\n -------\n self : Reference to self.\n\n Notes\n -----\n Changes state by creating a fitted model that updates attributes ending in \"_\"\n \"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _predict(self, X) -> np.ndarray:\n \"\"\"Predicts labels for sequences in X.\n\n Abstract method, must be implemented.\n\n Parameters\n ----------\n X : guaranteed to be of a type in self.get_tag(\"X_inner_mtype\")\n if self.get_tag(\"X_inner_mtype\") = \"numpy3D\":\n 3D np.ndarray of shape = [n_instances, n_dimensions, series_length]\n if self.get_tag(\"X_inner_mtype\") = \"nested_univ\":\n pd.DataFrame with each column a dimension, each cell a pd.Series\n for list of other mtypes, see datatypes.SCITYPE_REGISTER\n for specifications, see examples/AA_datatypes_and_datasets.ipynb\n\n Returns\n -------\n y : 1D np.array of float, of shape [n_instances] - predicted regression labels\n indices correspond to instance indices in X\n \"\"\"\n raise NotImplementedError(\"abstract method\")\n", "path": "sktime/regression/base.py" } ]
diff --git a/.all-contributorsrc b/.all-contributorsrc index 7f4681438ca..0f7a286d3e4 100644 --- a/.all-contributorsrc +++ b/.all-contributorsrc @@ -2590,7 +2590,9 @@ "avatar_url": "https://avatars.githubusercontent.com/u/69190238?v=4", "profile": "https://cyrilmeyer.eu/", "contributions": [ - "code" + "bug", + "code", + "test" ] }, { diff --git a/sktime/regression/base.py b/sktime/regression/base.py index ef95794ea0e..11a41a35266 100644 --- a/sktime/regression/base.py +++ b/sktime/regression/base.py @@ -299,7 +299,7 @@ def score(self, X, y, multioutput="uniform_average") -> float: self.check_is_fitted() - return r2_score(y, self.predict(X), normalize=True, multioutput=multioutput) + return r2_score(y, self.predict(X), multioutput=multioutput) def _fit(self, X, y): """Fit time series regressor to training data. diff --git a/sktime/regression/tests/test_all_regressors.py b/sktime/regression/tests/test_all_regressors.py index c3e8b1c4ab9..a499fa65ef5 100644 --- a/sktime/regression/tests/test_all_regressors.py +++ b/sktime/regression/tests/test_all_regressors.py @@ -77,6 +77,9 @@ def test_regressor_output(self, estimator_instance, scenario): # run fit and predict y_pred = scenario.run(estimator_instance, method_sequence=["fit", "predict"]) + # check score + score = estimator_instance.score(X_new, y_pred) + assert np.issubdtype(score.dtype, np.floating) # check predict assert isinstance(y_pred, np.ndarray) @@ -95,6 +98,9 @@ def test_multioutput(self, estimator_instance): estimator_instance.fit(X, y_mult) y_pred = estimator_instance.predict(X) + # check score + score = estimator_instance.score(X, y_mult) + assert np.issubdtype(score.dtype, np.floating) assert isinstance(y_pred, pd.DataFrame) assert y_pred.shape == y_mult.shape
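The sktime diff above is a one-keyword fix: scikit-learn's `r2_score` has no `normalize` argument (that keyword belongs to classification metrics such as `accuracy_score`), so passing it makes the `score` call fail with a `TypeError`. A minimal sketch of the corrected call, with made-up arrays purely for illustration:

```python
# Minimal sketch of the corrected scoring call (the arrays are illustrative only).
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])  # hypothetical multioutput targets
y_pred = np.array([[1.1, 2.1], [1.9, 3.9], [3.2, 5.8]])  # hypothetical predictions

# r2_score(..., normalize=True) raises TypeError: unexpected keyword argument.
# Aggregation across outputs is controlled with `multioutput` instead.
print(r2_score(y_true, y_pred, multioutput="uniform_average"))  # single float
print(r2_score(y_true, y_pred, multioutput="raw_values"))       # one R^2 per output column
```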
jupyter__docker-stacks-1859
[BUG] - Health Check fails if you change the port jupyter runs on ### What docker image(s) are you using? minimal-notebook ### OS system and architecture running docker image RHEL7 docker swarm ### What Docker command are you running? Not really relevant, but I need to run it in a docker swarm, with a generalised 'ingress service'. For this, the internal port Jupyter runs on needs to be changed for integration into an 'ingress proxy'. To change the port I made a slight modification to the docker image to set the internal port it runs on (see below). The problem is that the docker container dies unexpectedly after running for 46 seconds. During that time the service is visible within the container, but not external to the container. This is because the built-in healthcheck never succeeds, and eventually kills the container with little logged reporting. (see below) ### How to Reproduce the problem? Dockerfile, to set port ```dockerfile FROM "jupyter/minimal-notebook:latest" # Update Jupyter configuration to set port RUN set -eux; \ sed -i 's/port = 8888/port = 8080/' /etc/jupyter/jupyter_notebook_config.py ;\ sed -i 's/port = 8888/port = 8080/' /etc/jupyter/jupyter_server_config.py ;\ :; ``` You can also change the port in other ways such as... creating a `~jovyan/.jupyter/jupyter_server_config.py` file (which can also set a password) or setting a JUPYTER_PORT environment variable (IF the settings in the `/etc/jupyter` configs are removed) ### Command output When you build and then run the modified docker image, `docker ps` reports `Up 9 seconds (health: starting) 8888/tcp` despite the fact that jupyter is now running on port 8080. 46 seconds after starting, the container dies with an unhelpful (Signal 15). Log output... ``` [I 2022-10-28 05:20:00.393 ServerApp] Jupyter Server 1.21.0 is running at: [I 2022-10-28 05:20:00.393 ServerApp] http://jupyter_service:8080/lab?token=929eaaf2e60f8a947761cb1f2741d745fd46dde62f6fef7c or http://127.0.0.1:8080/lab?token=929eaaf2e60f8a947761cb1f2741d745fd46dde62f6fef7c [I 2022-10-28 05:20:00.393 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). 
There is an existing environment variable "JUPYTER_PORT" that defines the default port. But any such setting is currently overridden by the configuration files in `/etc/jupyter` This may be usable to set healthcheck, especially if the config file default is removed, or allows the env var to override. in Dockerfile.... ``` HEALTHCHECK --interval=15s --timeout=3s --start-period=5s --retries=3 \ CMD wget -O- --no-verbose --tries=1 --no-check-certificate \ http${GEN_CERT:+s}://localhost:${JUPYTER_PORT:-8888}${JUPYTERHUB_SERVICE_PREFIX:-/}api || exit 1 ``` That Environment variable also needs to be documented in the jupyter-stacks documentation, with the health check. [BUG] - Health Check fails if you change the port jupyter runs on ### What docker image(s) are you using? minimal-notebook ### OS system and architecture running docker image RHEL7 docker swarm ### What Docker command are you running? Not really relevant, but I need to run it in a docker swarm, with a generalise 'ingress service'. For this I needed to change internal port jupyter runs on needs to be changes for intergation into a 'ingress proxy' To change the port I made a slight modification the docker image to set the internal port it runs on (see below) The problem is the docker container dies unexpectedly after running for 46 seconds. During that time the service is visible within the conatiner, but not external to the container. This is because the built-in heathcheck never succeeds, and eventually kills the container with little logged reporting. (see below) ### How to Reproduce the problem? Dockerfile, to set port ```dockerfile FROM "jupyter/minimal-notebook:latest" # Update Jupyter configuration to set port RUN set -eux; \ sed -i 's/port = 8888/port = 8080/' /etc/jupyter/jupyter_notebook_config.py ;\ sed -i 's/port = 8888/port = 8080/' /etc/jupyter/jupyter_server_config.py ;\ :; ``` You can also change the port in other ways such as... Creating a `~joyvan/.jupyter/jupyter_server_config.py` file (which can also set a password) or setting a JUPYTER_PORT environment variable (IF the setting in `/etc/jupyter` configs are removed) ### Command output When you build and then run the modified docker image, `docker ps` reports `Up 9 seconds (health: starting) 8888/tcp` despite the fact that jupyter is now running on port 8080 46 seconds after starting the container dies with a unhelpful (Signal 15) Log output... ``` [I 2022-10-28 05:20:00.393 ServerApp] Jupyter Server 1.21.0 is running at: [I 2022-10-28 05:20:00.393 ServerApp] http://jupyter_service:8080/lab?token=929eaaf2e60f8a947761cb1f2741d745fd46dde62f6fef7c or http://127.0.0.1:8080/lab?token=929eaaf2e60f8a947761cb1f2741d745fd46dde62f6fef7c [I 2022-10-28 05:20:00.393 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). 
[C 2022-10-28 05:20:00.397 ServerApp] To access the server, open this file in a browser: file:///home/jovyan/.local/share/jupyter/runtime/jpserver-8-open.html Or copy and paste one of these URLs: http://jupyter_service:8080/lab?token=929eaaf2e60f8a947761cb1f2741d745fd46dde62f6fef7c or http://127.0.0.1:8080/lab?token=929eaaf2e60f8a947761cb1f2741d745fd46dde62f6fef7c Entered start.sh with args: jupyter lab Executing the command: jupyter lab [C 2022-10-28 05:20:46.261 ServerApp] received signal 15, stopping [I 2022-10-28 05:20:46.262 ServerApp] Shutting down 2 extensions [I 2022-10-28 05:20:46.263 ServerApp] Shutting down 0 terminals ``` ### Expected behavior Changing the internal port should not take days of work to track down, it should be straight forward and documented. The healthcheck should also be properly documented in jupyter-stacks documentation. This will make it more 'swarm friendly' as well as allow others to integrate it better when port 8888 is NOT available. Yes you can map the port when doing a 'docker run', but that is NOT always possible. ### Actual behavior Internal Port changing is undocumented in stacks Heathcheck kills the container without notice (signal 15 hardly makes it clear) when port is different. Days of work lost trying to figure out what should be a straight forward and simple task. ### Anything else? There is an existing environment variable "JUPYTER_PORT" that defines the default port. But any such setting is currently overridden by the configuration files in `/etc/jupyter` This may be usable to set healthcheck, especially if the config file default is removed, or allows the env var to override. in Dockerfile.... ``` HEALTHCHECK --interval=15s --timeout=3s --start-period=5s --retries=3 \ CMD wget -O- --no-verbose --tries=1 --no-check-certificate \ http${GEN_CERT:+s}://localhost:${JUPYTER_PORT:-8888}${JUPYTERHUB_SERVICE_PREFIX:-/}api || exit 1 ``` That Environment variable also needs to be documented in the jupyter-stacks documentation, with the health check.
[ { "content": "# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n# mypy: ignore-errors\nimport os\nimport stat\nimport subprocess\n\nfrom jupyter_core.paths import jupyter_data_dir\n\nc = get_config() # noqa: F821\nc.ServerApp.ip = \"0.0.0.0\"\nc.ServerApp.port = 8888\nc.ServerApp.open_browser = False\n\n# to output both image/svg+xml and application/pdf plot formats in the notebook file\nc.InlineBackend.figure_formats = {\"png\", \"jpeg\", \"svg\", \"pdf\"}\n\n# https://github.com/jupyter/notebook/issues/3130\nc.FileContentsManager.delete_to_trash = False\n\n# Generate a self-signed certificate\nOPENSSL_CONFIG = \"\"\"\\\n[req]\ndistinguished_name = req_distinguished_name\n[req_distinguished_name]\n\"\"\"\nif \"GEN_CERT\" in os.environ:\n dir_name = jupyter_data_dir()\n pem_file = os.path.join(dir_name, \"notebook.pem\")\n os.makedirs(dir_name, exist_ok=True)\n\n # Generate an openssl.cnf file to set the distinguished name\n cnf_file = os.path.join(os.getenv(\"CONDA_DIR\", \"/usr/lib\"), \"ssl\", \"openssl.cnf\")\n if not os.path.isfile(cnf_file):\n with open(cnf_file, \"w\") as fh:\n fh.write(OPENSSL_CONFIG)\n\n # Generate a certificate if one doesn't exist on disk\n subprocess.check_call(\n [\n \"openssl\",\n \"req\",\n \"-new\",\n \"-newkey=rsa:2048\",\n \"-days=365\",\n \"-nodes\",\n \"-x509\",\n \"-subj=/C=XX/ST=XX/L=XX/O=generated/CN=generated\",\n f\"-keyout={pem_file}\",\n f\"-out={pem_file}\",\n ]\n )\n # Restrict access to the file\n os.chmod(pem_file, stat.S_IRUSR | stat.S_IWUSR)\n c.ServerApp.certfile = pem_file\n\n# Change default umask for all subprocesses of the notebook server if set in\n# the environment\nif \"NB_UMASK\" in os.environ:\n os.umask(int(os.environ[\"NB_UMASK\"], 8))\n", "path": "base-notebook/jupyter_server_config.py" } ]
[ { "content": "# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n# mypy: ignore-errors\nimport os\nimport stat\nimport subprocess\n\nfrom jupyter_core.paths import jupyter_data_dir\n\nc = get_config() # noqa: F821\nc.ServerApp.ip = \"0.0.0.0\"\nc.ServerApp.open_browser = False\n\n# to output both image/svg+xml and application/pdf plot formats in the notebook file\nc.InlineBackend.figure_formats = {\"png\", \"jpeg\", \"svg\", \"pdf\"}\n\n# https://github.com/jupyter/notebook/issues/3130\nc.FileContentsManager.delete_to_trash = False\n\n# Generate a self-signed certificate\nOPENSSL_CONFIG = \"\"\"\\\n[req]\ndistinguished_name = req_distinguished_name\n[req_distinguished_name]\n\"\"\"\nif \"GEN_CERT\" in os.environ:\n dir_name = jupyter_data_dir()\n pem_file = os.path.join(dir_name, \"notebook.pem\")\n os.makedirs(dir_name, exist_ok=True)\n\n # Generate an openssl.cnf file to set the distinguished name\n cnf_file = os.path.join(os.getenv(\"CONDA_DIR\", \"/usr/lib\"), \"ssl\", \"openssl.cnf\")\n if not os.path.isfile(cnf_file):\n with open(cnf_file, \"w\") as fh:\n fh.write(OPENSSL_CONFIG)\n\n # Generate a certificate if one doesn't exist on disk\n subprocess.check_call(\n [\n \"openssl\",\n \"req\",\n \"-new\",\n \"-newkey=rsa:2048\",\n \"-days=365\",\n \"-nodes\",\n \"-x509\",\n \"-subj=/C=XX/ST=XX/L=XX/O=generated/CN=generated\",\n f\"-keyout={pem_file}\",\n f\"-out={pem_file}\",\n ]\n )\n # Restrict access to the file\n os.chmod(pem_file, stat.S_IRUSR | stat.S_IWUSR)\n c.ServerApp.certfile = pem_file\n\n# Change default umask for all subprocesses of the notebook server if set in\n# the environment\nif \"NB_UMASK\" in os.environ:\n os.umask(int(os.environ[\"NB_UMASK\"], 8))\n", "path": "base-notebook/jupyter_server_config.py" } ]
diff --git a/base-notebook/Dockerfile b/base-notebook/Dockerfile index 9804ddb251..9f3d226df1 100644 --- a/base-notebook/Dockerfile +++ b/base-notebook/Dockerfile @@ -47,7 +47,8 @@ RUN mamba install --quiet --yes \ fix-permissions "${CONDA_DIR}" && \ fix-permissions "/home/${NB_USER}" -EXPOSE 8888 +ENV JUPYTER_PORT=8888 +EXPOSE $JUPYTER_PORT # Configure container startup CMD ["start-notebook.sh"] @@ -70,7 +71,7 @@ RUN sed -re "s/c.ServerApp/c.NotebookApp/g" \ # https://github.com/jupyter/docker-stacks/issues/915#issuecomment-1068528799 HEALTHCHECK --interval=5s --timeout=3s --start-period=5s --retries=3 \ CMD wget -O- --no-verbose --tries=1 --no-check-certificate \ - http${GEN_CERT:+s}://localhost:8888${JUPYTERHUB_SERVICE_PREFIX:-/}api || exit 1 + http${GEN_CERT:+s}://localhost:${JUPYTER_PORT}${JUPYTERHUB_SERVICE_PREFIX:-/}api || exit 1 # Switch back to jovyan to avoid accidental container runs as root USER ${NB_UID} diff --git a/base-notebook/jupyter_server_config.py b/base-notebook/jupyter_server_config.py index c95957cc7b..679f96bee0 100644 --- a/base-notebook/jupyter_server_config.py +++ b/base-notebook/jupyter_server_config.py @@ -9,7 +9,6 @@ c = get_config() # noqa: F821 c.ServerApp.ip = "0.0.0.0" -c.ServerApp.port = 8888 c.ServerApp.open_browser = False # to output both image/svg+xml and application/pdf plot formats in the notebook file diff --git a/docs/using/common.md b/docs/using/common.md index 2444495a3b..bf11753b29 100644 --- a/docs/using/common.md +++ b/docs/using/common.md @@ -22,10 +22,16 @@ You can pass [Jupyter server options](https://jupyter-server.readthedocs.io/en/l 2. To set the [base URL](https://jupyter-server.readthedocs.io/en/latest/operators/public-server.html#running-the-notebook-with-a-customized-url-prefix) of the notebook server, you can run the following: ```bash - docker run -it --rm -p 8888:8888 jupyter/base-notebook \ + docker run -it --rm -p 8888:8888 --no-healthcheck jupyter/base-notebook \ start-notebook.sh --NotebookApp.base_url=/customized/url/prefix/ ``` + Note: We pass the `--no-healthcheck` parameter when setting a custom `base_url` for the Jupyter server, + because our current implementation for doing healthcheck assumes the `base_url` to be `/` (the default). + Without using this parameter, the container may run, but it's state will be "unhealthy". + Alternatively, you can [use your own command for healthcheck](https://docs.docker.com/engine/reference/run/#healthcheck) + using the `--health-cmd` parameter. + ## Docker Options You may instruct the `start-notebook.sh` script to customize the container environment before launching the notebook server. @@ -123,6 +129,8 @@ You do so by passing arguments to the `docker run` command. The variables are unset after the hooks have been executed but before the command provided to the startup script runs. - `-e NOTEBOOK_ARGS="--log-level='DEBUG' --dev-mode"` - Adds custom options to add to `jupyter` commands. This way, the user could use any option supported by `jupyter` subcommand. +- `-e JUPYTER_PORT=8117` - Changes the port in the container that Jupyter is using to the value of the `${JUPYTER_PORT}` environment variable. + This may be useful if you run multiple instances of Jupyter in swarm mode and want to use a different port for each instance. 
## Startup Hooks diff --git a/tests/base-notebook/test_container_options.py b/tests/base-notebook/test_container_options.py index bb3129e663..33a856221f 100644 --- a/tests/base-notebook/test_container_options.py +++ b/tests/base-notebook/test_container_options.py @@ -78,3 +78,39 @@ def test_unsigned_ssl( assert "ERROR" not in logs warnings = TrackedContainer.get_warnings(logs) assert not warnings + + [email protected]( + "env", + [ + {}, + {"JUPYTER_PORT": 1234, "DOCKER_STACKS_JUPYTER_CMD": "lab"}, + {"JUPYTER_PORT": 2345, "DOCKER_STACKS_JUPYTER_CMD": "notebook"}, + {"JUPYTER_PORT": 3456, "DOCKER_STACKS_JUPYTER_CMD": "server"}, + {"JUPYTER_PORT": 4567, "DOCKER_STACKS_JUPYTER_CMD": "nbclassic"}, + {"JUPYTER_PORT": 5678, "RESTARTABLE": "yes"}, + {"JUPYTER_PORT": 6789}, + {"JUPYTER_PORT": 7890, "DOCKER_STACKS_JUPYTER_CMD": "notebook"}, + ], +) +def test_custom_internal_port( + container: TrackedContainer, + http_client: requests.Session, + env: dict[str, str], +) -> None: + """Container should be accessible from the host + when using custom internal port""" + host_port = find_free_port() + internal_port = env.get("JUPYTER_PORT", 8888) + running_container = container.run_detached( + command=["start-notebook.sh", "--NotebookApp.token=''"], + environment=env, + ports={internal_port: host_port}, + ) + resp = http_client.get(f"http://localhost:{host_port}") + resp.raise_for_status() + logs = running_container.logs().decode("utf-8") + LOGGER.debug(logs) + assert "ERROR" not in logs + warnings = TrackedContainer.get_warnings(logs) + assert not warnings diff --git a/tests/base-notebook/test_healthcheck.py b/tests/base-notebook/test_healthcheck.py index e6260fa2f5..954825c7fc 100644 --- a/tests/base-notebook/test_healthcheck.py +++ b/tests/base-notebook/test_healthcheck.py @@ -17,10 +17,12 @@ [ None, ["DOCKER_STACKS_JUPYTER_CMD=lab"], - ["RESTARTABLE=yes"], ["DOCKER_STACKS_JUPYTER_CMD=notebook"], ["DOCKER_STACKS_JUPYTER_CMD=server"], ["DOCKER_STACKS_JUPYTER_CMD=nbclassic"], + ["RESTARTABLE=yes"], + ["JUPYTER_PORT=8171"], + ["JUPYTER_PORT=8117", "DOCKER_STACKS_JUPYTER_CMD=notebook"], ], ) def test_health(container: TrackedContainer, env: Optional[list[str]]) -> None:
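The fix above has two halves: the Dockerfile now declares `ENV JUPYTER_PORT=8888` and points both `EXPOSE` and the healthcheck at `${JUPYTER_PORT}`, and `jupyter_server_config.py` stops hard-coding `c.ServerApp.port = 8888`, so the environment variable (which, as the issue notes, Jupyter already honours when nothing overrides it) can take effect. For anyone who would rather keep the port in a config file than rely on the environment variable alone, a hypothetical local `jupyter_server_config.py` could read the same variable explicitly; this is a sketch, not the config shipped with the image:

```python
# Hypothetical local jupyter_server_config.py: derive the port from JUPYTER_PORT
# so the server and the Dockerfile healthcheck stay in agreement.
import os

c = get_config()  # noqa: F821  # injected by Jupyter when the config file is loaded
c.ServerApp.ip = "0.0.0.0"
c.ServerApp.open_browser = False
# Fall back to the stack's default of 8888 when JUPYTER_PORT is unset.
c.ServerApp.port = int(os.environ.get("JUPYTER_PORT", "8888"))
```

With either approach, running the image with, for example, `-e JUPYTER_PORT=8080 -p 8080:8080` should leave the built-in healthcheck passing, which is what the new `test_custom_internal_port` test exercises.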
spotify__luigi-1809
Terminal Width affects Lock identification under Centos 6.5 The luigi.lock.getpcmd function will return a shorter command line if the terminal is smaller in width. This can result in locks being misidentified as identical and can be a significant problem if your binary/scripts are located in deep paths. Presumably there is some option that can be passed to ps to resolve this. edit: It looks like if the ps command is changed to `ps x -wwo pid,args` that should take care of it.
[ { "content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2012-2015 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\"\"\"\nLocking functionality when launching things from the command line.\nUses a pidfile.\nThis prevents multiple identical workflows to be launched simultaneously.\n\"\"\"\nfrom __future__ import print_function\n\nimport hashlib\nimport os\n\nfrom luigi import six\n\n\ndef getpcmd(pid):\n \"\"\"\n Returns command of process.\n\n :param pid:\n \"\"\"\n if os.name == \"nt\":\n # Use wmic command instead of ps on Windows.\n cmd = 'wmic path win32_process where ProcessID=%s get Commandline' % (pid, )\n with os.popen(cmd, 'r') as p:\n lines = [line for line in p.readlines() if line.strip(\"\\r\\n \") != \"\"]\n if lines:\n _, val = lines\n return val\n else:\n cmd = 'ps -xo pid,args'\n with os.popen(cmd, 'r') as p:\n # Skip the column titles\n p.readline()\n for line in p:\n spid, scmd = line.strip().split(' ', 1)\n if int(spid) == int(pid):\n return scmd\n # Fallback instead of None, for e.g. Cygwin where -o is an \"unknown option\" for the ps command:\n return '[PROCESS_WITH_PID={}]'.format(pid)\n\n\ndef get_info(pid_dir, my_pid=None):\n # Check the name and pid of this process\n if my_pid is None:\n my_pid = os.getpid()\n\n my_cmd = getpcmd(my_pid)\n\n if six.PY3:\n cmd_hash = my_cmd.encode('utf8')\n else:\n cmd_hash = my_cmd\n\n pid_file = os.path.join(pid_dir, hashlib.md5(cmd_hash).hexdigest()) + '.pid'\n\n return my_pid, my_cmd, pid_file\n\n\ndef acquire_for(pid_dir, num_available=1, kill_signal=None):\n \"\"\"\n Makes sure the process is only run once at the same time with the same name.\n\n Notice that we since we check the process name, different parameters to the same\n command can spawn multiple processes at the same time, i.e. 
running\n \"/usr/bin/my_process\" does not prevent anyone from launching\n \"/usr/bin/my_process --foo bar\".\n \"\"\"\n\n my_pid, my_cmd, pid_file = get_info(pid_dir)\n\n # Check if there is a pid file corresponding to this name\n if not os.path.exists(pid_dir):\n os.mkdir(pid_dir)\n os.chmod(pid_dir, 0o777)\n\n pids = set()\n pid_cmds = {}\n if os.path.exists(pid_file):\n # There is such a file - read the pid and look up its process name\n pids.update(filter(None, map(str.strip, open(pid_file))))\n pid_cmds = dict((pid, getpcmd(pid)) for pid in pids)\n matching_pids = list(filter(lambda pid: pid_cmds[pid] == my_cmd, pids))\n\n if kill_signal is not None:\n for pid in map(int, matching_pids):\n os.kill(pid, kill_signal)\n elif len(matching_pids) >= num_available:\n # We are already running under a different pid\n print('Pid(s)', ', '.join(matching_pids), 'already running')\n return False\n else:\n # The pid belongs to something else, we could\n pass\n pid_cmds[str(my_pid)] = my_cmd\n\n # Write pids\n pids.add(str(my_pid))\n with open(pid_file, 'w') as f:\n f.writelines('%s\\n' % (pid, ) for pid in filter(pid_cmds.__getitem__, pids))\n\n # Make the file writable by all\n if os.name == 'nt':\n pass\n else:\n s = os.stat(pid_file)\n if os.getuid() == s.st_uid:\n os.chmod(pid_file, s.st_mode | 0o777)\n\n return True\n", "path": "luigi/lock.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2012-2015 Spotify AB\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\"\"\"\nLocking functionality when launching things from the command line.\nUses a pidfile.\nThis prevents multiple identical workflows to be launched simultaneously.\n\"\"\"\nfrom __future__ import print_function\n\nimport hashlib\nimport os\n\nfrom luigi import six\n\n\ndef getpcmd(pid):\n \"\"\"\n Returns command of process.\n\n :param pid:\n \"\"\"\n if os.name == \"nt\":\n # Use wmic command instead of ps on Windows.\n cmd = 'wmic path win32_process where ProcessID=%s get Commandline' % (pid, )\n with os.popen(cmd, 'r') as p:\n lines = [line for line in p.readlines() if line.strip(\"\\r\\n \") != \"\"]\n if lines:\n _, val = lines\n return val\n else:\n cmd = 'ps x -wwo pid,args'\n with os.popen(cmd, 'r') as p:\n # Skip the column titles\n p.readline()\n for line in p:\n spid, scmd = line.strip().split(' ', 1)\n if int(spid) == int(pid):\n return scmd\n # Fallback instead of None, for e.g. Cygwin where -o is an \"unknown option\" for the ps command:\n return '[PROCESS_WITH_PID={}]'.format(pid)\n\n\ndef get_info(pid_dir, my_pid=None):\n # Check the name and pid of this process\n if my_pid is None:\n my_pid = os.getpid()\n\n my_cmd = getpcmd(my_pid)\n\n if six.PY3:\n cmd_hash = my_cmd.encode('utf8')\n else:\n cmd_hash = my_cmd\n\n pid_file = os.path.join(pid_dir, hashlib.md5(cmd_hash).hexdigest()) + '.pid'\n\n return my_pid, my_cmd, pid_file\n\n\ndef acquire_for(pid_dir, num_available=1, kill_signal=None):\n \"\"\"\n Makes sure the process is only run once at the same time with the same name.\n\n Notice that we since we check the process name, different parameters to the same\n command can spawn multiple processes at the same time, i.e. 
running\n \"/usr/bin/my_process\" does not prevent anyone from launching\n \"/usr/bin/my_process --foo bar\".\n \"\"\"\n\n my_pid, my_cmd, pid_file = get_info(pid_dir)\n\n # Check if there is a pid file corresponding to this name\n if not os.path.exists(pid_dir):\n os.mkdir(pid_dir)\n os.chmod(pid_dir, 0o777)\n\n pids = set()\n pid_cmds = {}\n if os.path.exists(pid_file):\n # There is such a file - read the pid and look up its process name\n pids.update(filter(None, map(str.strip, open(pid_file))))\n pid_cmds = dict((pid, getpcmd(pid)) for pid in pids)\n matching_pids = list(filter(lambda pid: pid_cmds[pid] == my_cmd, pids))\n\n if kill_signal is not None:\n for pid in map(int, matching_pids):\n os.kill(pid, kill_signal)\n elif len(matching_pids) >= num_available:\n # We are already running under a different pid\n print('Pid(s)', ', '.join(matching_pids), 'already running')\n return False\n else:\n # The pid belongs to something else, we could\n pass\n pid_cmds[str(my_pid)] = my_cmd\n\n # Write pids\n pids.add(str(my_pid))\n with open(pid_file, 'w') as f:\n f.writelines('%s\\n' % (pid, ) for pid in filter(pid_cmds.__getitem__, pids))\n\n # Make the file writable by all\n if os.name == 'nt':\n pass\n else:\n s = os.stat(pid_file)\n if os.getuid() == s.st_uid:\n os.chmod(pid_file, s.st_mode | 0o777)\n\n return True\n", "path": "luigi/lock.py" } ]
diff --git a/luigi/lock.py b/luigi/lock.py index cf73f1279b..702629184f 100644 --- a/luigi/lock.py +++ b/luigi/lock.py @@ -42,7 +42,7 @@ def getpcmd(pid): _, val = lines return val else: - cmd = 'ps -xo pid,args' + cmd = 'ps x -wwo pid,args' with os.popen(cmd, 'r') as p: # Skip the column titles p.readline()
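The one-line luigi change matters because, on some systems, `ps` truncates its `args` column to the terminal width unless `-ww` is given; with deep install paths, two different workflows can then read back as the same truncated command and hash to the same pid file, so they wrongly treat each other as duplicates. A self-contained sketch of that failure mode, using hypothetical paths and mirroring the md5-of-command naming used by `get_info` above:

```python
# Illustrative only: how terminal-width truncation of `ps` output collapses two
# distinct commands into one lock file name (the paths are hypothetical).
import hashlib

SHARED_PREFIX = (
    "/usr/local/lib/our-app/current-release/bin/python "
    "/usr/local/lib/our-app/current-release/scripts/run_workflow.py --job "
)
cmd_a = SHARED_PREFIX + "nightly_backup"
cmd_b = SHARED_PREFIX + "hourly_sync"

def pid_file_name(cmd, terminal_width=None):
    # On some systems, without -ww, ps cuts the args column at the terminal width.
    seen = cmd if terminal_width is None else cmd[:terminal_width]
    return hashlib.md5(seen.encode("utf8")).hexdigest() + ".pid"

# On an 80-column terminal both commands truncate to the same prefix and collide:
assert pid_file_name(cmd_a, terminal_width=80) == pid_file_name(cmd_b, terminal_width=80)
# With the full command line (what `ps x -wwo pid,args` reports) they stay distinct:
assert pid_file_name(cmd_a) != pid_file_name(cmd_b)
```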
xorbitsai__inference-1096
ENH: Add the option to use the CPU for inference even when there is a GPU device ### Is your feature request related to a problem? Please describe There is a GPU in my server, but when loading some LLM models I need to load them into main memory because the model size is bigger than the GPU memory. However, when I launch the model from the web page, the N-GPU setting only offers the options auto, 0, and 1; if I select 0, the system complains with the following error: > Server error: 400 - [address=0.0.0.0:19270, pid=2063850] The parameter `n_gpu` must be greater than 0 and not greater than the number of GPUs: 1 on the machine. ### Describe the solution you'd like I think that when the N-GPU setting is set to 0, the CPU should be used as the inference device.
[ { "content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport torch\nfrom typing_extensions import Literal, Union\n\nDeviceType = Literal[\"cuda\", \"mps\", \"xpu\", \"cpu\"]\n\n\ndef is_xpu_available() -> bool:\n return hasattr(torch, \"xpu\") and torch.xpu.is_available()\n\n\ndef get_available_device() -> DeviceType:\n if torch.cuda.is_available():\n return \"cuda\"\n elif torch.backends.mps.is_available():\n return \"mps\"\n elif is_xpu_available():\n return \"xpu\"\n return \"cpu\"\n\n\ndef is_device_available(device: str) -> bool:\n if device == \"cuda\":\n return torch.cuda.is_available()\n elif device == \"mps\":\n return torch.backends.mps.is_available()\n elif device == \"xpu\":\n return is_xpu_available()\n elif device == \"cpu\":\n return True\n\n return False\n\n\ndef move_model_to_available_device(model):\n device = get_available_device()\n\n if device == \"cpu\":\n return model\n\n return model.to(device)\n\n\ndef get_device_preferred_dtype(device: str) -> Union[torch.dtype, None]:\n if device == \"cpu\":\n return torch.float32\n elif device == \"cuda\" or device == \"mps\":\n return torch.float16\n elif device == \"xpu\":\n return torch.bfloat16\n\n return None\n\n\ndef is_hf_accelerate_supported(device: str) -> bool:\n return device == \"cuda\" or device == \"xpu\"\n\n\ndef empty_cache():\n if torch.cuda.is_available():\n torch.cuda.empty_cache()\n if torch.backends.mps.is_available():\n torch.mps.empty_cache()\n if is_xpu_available():\n torch.xpu.empty_cache()\n\n\ndef gpu_count():\n if torch.cuda.is_available():\n cuda_visible_devices_env = os.getenv(\"CUDA_VISIBLE_DEVICES\", None)\n\n if cuda_visible_devices_env is None:\n return torch.cuda.device_count()\n\n cuda_visible_devices = (\n cuda_visible_devices_env.split(\",\") if cuda_visible_devices_env else []\n )\n\n return min(torch.cuda.device_count(), len(cuda_visible_devices))\n elif torch.backends.mps.is_available():\n return 1\n elif is_xpu_available():\n return torch.xpu.device_count()\n else:\n return 0\n", "path": "xinference/device_utils.py" } ]
[ { "content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport torch\nfrom typing_extensions import Literal, Union\n\nDeviceType = Literal[\"cuda\", \"mps\", \"xpu\", \"cpu\"]\n\n\ndef is_xpu_available() -> bool:\n return hasattr(torch, \"xpu\") and torch.xpu.is_available()\n\n\ndef get_available_device() -> DeviceType:\n if torch.cuda.is_available():\n return \"cuda\"\n elif torch.backends.mps.is_available():\n return \"mps\"\n elif is_xpu_available():\n return \"xpu\"\n return \"cpu\"\n\n\ndef is_device_available(device: str) -> bool:\n if device == \"cuda\":\n return torch.cuda.is_available()\n elif device == \"mps\":\n return torch.backends.mps.is_available()\n elif device == \"xpu\":\n return is_xpu_available()\n elif device == \"cpu\":\n return True\n\n return False\n\n\ndef move_model_to_available_device(model):\n device = get_available_device()\n\n if device == \"cpu\":\n return model\n\n return model.to(device)\n\n\ndef get_device_preferred_dtype(device: str) -> Union[torch.dtype, None]:\n if device == \"cpu\":\n return torch.float32\n elif device == \"cuda\" or device == \"mps\":\n return torch.float16\n elif device == \"xpu\":\n return torch.bfloat16\n\n return None\n\n\ndef is_hf_accelerate_supported(device: str) -> bool:\n return device == \"cuda\" or device == \"xpu\"\n\n\ndef empty_cache():\n if torch.cuda.is_available():\n torch.cuda.empty_cache()\n if torch.backends.mps.is_available():\n torch.mps.empty_cache()\n if is_xpu_available():\n torch.xpu.empty_cache()\n\n\ndef gpu_count():\n if torch.cuda.is_available():\n cuda_visible_devices_env = os.getenv(\"CUDA_VISIBLE_DEVICES\", None)\n\n if cuda_visible_devices_env is None:\n return torch.cuda.device_count()\n\n cuda_visible_devices = (\n cuda_visible_devices_env.split(\",\") if cuda_visible_devices_env else []\n )\n\n return min(torch.cuda.device_count(), len(cuda_visible_devices))\n elif is_xpu_available():\n return torch.xpu.device_count()\n else:\n return 0\n", "path": "xinference/device_utils.py" } ]
diff --git a/xinference/device_utils.py b/xinference/device_utils.py index 731c6ae051..035391ccb4 100644 --- a/xinference/device_utils.py +++ b/xinference/device_utils.py @@ -92,8 +92,6 @@ def gpu_count(): ) return min(torch.cuda.device_count(), len(cuda_visible_devices)) - elif torch.backends.mps.is_available(): - return 1 elif is_xpu_available(): return torch.xpu.device_count() else: diff --git a/xinference/web/ui/src/scenes/launch_model/modelCard.js b/xinference/web/ui/src/scenes/launch_model/modelCard.js index 3755257ff6..d854f0ee3c 100644 --- a/xinference/web/ui/src/scenes/launch_model/modelCard.js +++ b/xinference/web/ui/src/scenes/launch_model/modelCard.js @@ -109,6 +109,14 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => { } }, [modelFormat, modelSize, modelData]) + const getNGPURange = () => { + if (gpuAvailable === 0) { + // remain 'auto' for distributed situation + return ['auto', 'CPU'] + } + return ['auto', 'CPU'].concat(range(1, gpuAvailable)) + } + const launchModel = (url) => { if (isCallingApi || isUpdatingModel) { return @@ -124,7 +132,11 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => { model_size_in_billions: convertModelSize(modelSize), quantization: quantization, n_gpu: - nGPU === '0' ? null : nGPU === 'auto' ? 'auto' : parseInt(nGPU, 10), + parseInt(nGPU, 10) === 0 || nGPU === 'CPU' + ? null + : nGPU === 'auto' + ? 'auto' + : parseInt(nGPU, 10), replica: replica, } @@ -512,24 +524,13 @@ const ModelCard = ({ url, modelData, gpuAvailable, is_custom = false }) => { onChange={(e) => setNGPU(e.target.value)} label="N-GPU" > - {['auto'] - .concat( - range( - 0, - modelFormat !== 'pytorch' && - modelFormat !== 'gptq' && - modelFormat !== 'awq' - ? 1 - : gpuAvailable - ) + {getNGPURange().map((v) => { + return ( + <MenuItem key={v} value={v}> + {v} + </MenuItem> ) - .map((v) => { - return ( - <MenuItem key={v} value={v}> - {v} - </MenuItem> - ) - })} + })} </Select> </FormControl> ) : (
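On the xinference side, the web UI now offers an explicit 'CPU' entry and sends `n_gpu: null` for it (instead of 0), and `gpu_count()` no longer reports a phantom GPU for the Apple MPS backend. The intended launch semantics, validating the GPU count only when a GPU placement is actually requested, can be sketched roughly as follows; `resolve_device` is a hypothetical helper written for illustration, while `get_available_device` and `gpu_count` are the real utilities from `xinference/device_utils.py` shown above:

```python
# Rough sketch of the intended launch semantics; `resolve_device` is hypothetical.
from xinference.device_utils import get_available_device, gpu_count

def resolve_device(n_gpu):
    if n_gpu is None:          # the UI's "CPU" (or 0) choice is sent as null
        return "cpu"
    if n_gpu == "auto":
        return get_available_device()
    available = gpu_count()
    if not 0 < n_gpu <= available:
        raise ValueError(
            "The parameter `n_gpu` must be greater than 0 and not greater than "
            f"the number of GPUs: {available} on the machine."
        )
    return "cuda"
```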
freedomofpress__securedrop-5369
doc-linkcheck needs some appeasement ## Description We have a [CI failure because of a link to a private repo](https://app.circleci.com/jobs/github/freedomofpress/securedrop/42146). ## Steps to Reproduce Run `make docs-linkcheck`. ## Expected Behavior That it would complete with no error. ## Actual Behavior The link to the private repo causes a 404. ## Comments That private URL should be added to `linkcheck_ignore` in `docs/conf.py`.
[ { "content": "# -*- coding: utf-8 -*-\n#\n# SecureDrop documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 13 12:08:52 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\n\n# Detect if we're being built by Read the Docs\n# https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n# sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.todo', ]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'SecureDrop'\ncopyright = u'2015-2020, Freedom of the Press Foundation'\nauthor = u'SecureDrop Team and Contributors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '1.4.1'\n# The full version, including alpha/beta/rc tags.\nrelease = '1.4.1'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. 
They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'default'\nelse:\n try:\n # If you want to build the docs locally using the RTD theme,\n # you may need to install it: ``pip install sphinx_rtd_theme``.\n # https://github.com/snide/sphinx_rtd_theme#via-package\n import sphinx_rtd_theme\n html_theme = \"sphinx_rtd_theme\"\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n except ImportError:\n # This theme is included with Sphinx and is quite nice (based\n # on the Pocoo themes), but since we're using the RTD theme\n # for the production docs, it's best to use that to avoid\n # issues due to discrepancies between the themes.\n html_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n# html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = '../securedrop/static/i/favicon.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n# html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'SecureDropdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',\n author, 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'securedrop', u'SecureDrop Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'SecureDrop', u'SecureDrop Documentation',\n author, 'SecureDrop', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n# texinfo_no_detailmenu = False\n\n# -- Options for linkcheck --\n\nlinkcheck_retries = 3\n\nlinkcheck_ignore = [\n r'http://127.0.0.1(:\\d+)?/?',\n r'http://localhost(:\\d+)?/?',\n 'https://forum.securedrop.org/admin/users/list/active',\n 'https://weblate.securedrop.org/projects/securedrop/securedrop/#repository',\n]\n", "path": "docs/conf.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n#\n# SecureDrop documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 13 12:08:52 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\n\n# Detect if we're being built by Read the Docs\n# https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n# sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.todo', ]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'SecureDrop'\ncopyright = u'2015-2020, Freedom of the Press Foundation'\nauthor = u'SecureDrop Team and Contributors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '1.4.1'\n# The full version, including alpha/beta/rc tags.\nrelease = '1.4.1'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. 
They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'default'\nelse:\n try:\n # If you want to build the docs locally using the RTD theme,\n # you may need to install it: ``pip install sphinx_rtd_theme``.\n # https://github.com/snide/sphinx_rtd_theme#via-package\n import sphinx_rtd_theme\n html_theme = \"sphinx_rtd_theme\"\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n except ImportError:\n # This theme is included with Sphinx and is quite nice (based\n # on the Pocoo themes), but since we're using the RTD theme\n # for the production docs, it's best to use that to avoid\n # issues due to discrepancies between the themes.\n html_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n# html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = '../securedrop/static/i/favicon.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n# html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'SecureDropdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',\n author, 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'securedrop', u'SecureDrop Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'SecureDrop', u'SecureDrop Documentation',\n author, 'SecureDrop', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n# texinfo_no_detailmenu = False\n\n# -- Options for linkcheck --\n\nlinkcheck_retries = 3\n\nlinkcheck_ignore = [\n r'http://127.0.0.1(:\\d+)?/?',\n r'http://localhost(:\\d+)?/?',\n 'https://forum.securedrop.org/admin/users/list/active',\n 'https://weblate.securedrop.org/projects/securedrop/securedrop/#repository',\n 'https://github.com/freedomofpress/securedrop-debian-packages-lfs',\n]\n", "path": "docs/conf.py" } ]
diff --git a/docs/conf.py b/docs/conf.py index 46b973917e..c11c77fac9 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -309,4 +309,5 @@ r'http://localhost(:\d+)?/?', 'https://forum.securedrop.org/admin/users/list/active', 'https://weblate.securedrop.org/projects/securedrop/securedrop/#repository', + 'https://github.com/freedomofpress/securedrop-debian-packages-lfs', ] diff --git a/docs/development/i18n.rst b/docs/development/i18n.rst index 1a8fd2e6f7..6d738e7873 100644 --- a/docs/development/i18n.rst +++ b/docs/development/i18n.rst @@ -351,20 +351,20 @@ Two weeks before the release: string freeze When features for a new SecureDrop release are frozen, the localization manager for the release will: * :ref:`merge_develop_to_weblate`. +* Update the `i18n timeline`_ in the translation section of the forum. * Post an announcement `to the translation section of the forum <https://forum.securedrop.org/c/translations>`__ (see `an example <https://forum.securedrop.org/t/4-securedrop-strings-need-work-march-2018-string-freeze/461>`__). -* Add a prominent Weblate whiteboard announcement that reads `The X.Y.Z deadline is Month day, year at midnight US/Pacific time. String freeze is in effect: no source strings will be modified before the release.`. -* Remind all developers about the string freeze, in the `chat room <https://gitter.im/freedomofpress/securedrop>`__. +* Remind all developers about the string freeze in `Gitter <https://gitter.im/freedomofpress/securedrop>`__. +* Add a `Weblate announcement`_ with the translation timeline for the release. * Create a pull request for every source string suggestion coming from translators. -* Update the `i18n timeline`_ and `Weblate whiteboard`_. Release day ^^^^^^^^^^^ * :ref:`merge_weblate_to_develop`. * :ref:`Update the screenshots <updating_screenshots>`. -* Remove the prominent Weblate whiteboard announcement. +* Remove the `Weblate announcement`_ about this release's translation timeline. * Provide translator credits to add to the SecureDrop release announcement. -* Update the `i18n timeline`_ and `Weblate whiteboard`_. +* Update the `i18n timeline`_ in the forum. Translator credits ^^^^^^^^^^^^^^^^^^ @@ -466,7 +466,7 @@ with a release looming, the server can be rebooted. .. _`Weblate translation creation page`: https://weblate.securedrop.org/new-lang/securedrop/securedrop/ .. _`Weblate desktop translation creation page`: https://weblate.securedrop.org/new-lang/securedrop/desktop/ .. _`i18n timeline`: https://forum.securedrop.org/t/about-the-translations-category/16 -.. _`Weblate whiteboard`: https://weblate.securedrop.org/admin/trans/whiteboardmessage/6/change/ +.. _`Weblate announcement`: https://weblate.securedrop.org/admin/trans/announcement .. |Weblate commit Lock| image:: ../images/weblate/admin-lock.png .. |Weblate commit Locked| image:: ../images/weblate/admin-locked.png diff --git a/docs/hardware.rst b/docs/hardware.rst index f6be365fe5..040ba17057 100644 --- a/docs/hardware.rst +++ b/docs/hardware.rst @@ -434,10 +434,6 @@ support is preferable, since you want neither WiFi nor Bluetooth. updating the BIOS according to `these instructions <http://arstechnica.com/gadgets/2014/02/new-intel-nuc-bios-update-fixes-steamos-other-linux-booting-problems/>`__. -.. caution:: Some older NUC BIOS versions will cause the server to `brick itself <https://communities.intel.com/message/359708>`__ if the device - attempts to suspend. This has `since been fixed <https://communities.intel.com/message/432692>`__ - in a BIOS update. 
See these `release notes <https://downloadmirror.intel.com/29454/eng/RY_0384_ReleaseNotes.pdf>`__ (PDF) for more details. - 2014 Mac Minis ~~~~~~~~~~~~~~ diff --git a/docs/includes/update-gui.txt b/docs/includes/update-gui.txt index ee9d8b5cd0..08aa668c46 100644 --- a/docs/includes/update-gui.txt +++ b/docs/includes/update-gui.txt @@ -8,4 +8,4 @@ in the updater to perform the update. do so, you will need to reboot to enable it. .. _`Tails Administrator - password`: https://tails.boum.org/doc/first_steps/welcome_screen/administration_password/index.en.html + password`: https://tails.boum.org/doc/first_steps/welcome_screen/administration_password/ diff --git a/docs/set_up_transfer_and_export_device.rst b/docs/set_up_transfer_and_export_device.rst index d61d4ad232..0eb5298845 100644 --- a/docs/set_up_transfer_and_export_device.rst +++ b/docs/set_up_transfer_and_export_device.rst @@ -232,7 +232,7 @@ mitigate that risk. One option is to restrict write access to the *Export Device* before it is plugged into a device other than the *Secure Viewing Station*. Some USB flash -drives come with a physical write protection switch, and `write blockers <https://www.forensicswiki.org/wiki/Write_Blockers>`__ +drives come with a physical write protection switch, and `write blockers <https://forensicswiki.xyz/wiki/index.php?title=Write_Blockers>`__ are used in forensics to ensure storage media are not modified during examination. diff --git a/docs/tails_printing_guide.rst b/docs/tails_printing_guide.rst index 339c9b00b6..4456d8478a 100644 --- a/docs/tails_printing_guide.rst +++ b/docs/tails_printing_guide.rst @@ -41,7 +41,7 @@ Installing and Printing via the Tails GUI Let's look at the flow in Tails 4 for installing a USB-connected printer. On the Tails welcome screen, unlock your persistent volume, and -`set an admin password <https://tails.boum.org/doc/first_steps/startup_options/administration_password/index.en.html>`__. +`set an admin password <https://tails.boum.org/doc/first_steps/welcome_screen/administration_password/>`__. This ensures that you won't have to reinstall the printer each time you start Tails.
huggingface__accelerate-177
TypeError: Can't apply _send_to_device on object of type <class 'int'>, only of nested list/tuple/dicts of objects that satisfy _has_to_method.

This works with version 0.4.0, but raises the error above with version 0.5.1. The data is a dict containing tensors and List[List[int]].
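A minimal repro sketch of the failure described above; the batch keys and values are illustrative (not taken from the report), and the behaviour follows from the `send_to_device` implementation quoted below:

```python
# Repro sketch, assuming accelerate 0.5.1 and torch are installed.
import torch
from accelerate.utils import send_to_device

batch = {
    "input_ids": torch.tensor([[101, 2023, 102]]),
    "word_ids": [[0, 1, 2]],  # nested plain ints have no .to() method
}

# In 0.5.1, send_to_device calls recursively_apply(..., error_on_other_type=True),
# so reaching the nested ints raises:
#   TypeError: Can't apply _send_to_device on object of type <class 'int'>, ...
# In 0.4.0 (and after the fix below, which drops error_on_other_type),
# non-tensor leaves are simply returned unchanged.
send_to_device(batch, torch.device("cpu"))
```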
[ { "content": "# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport importlib\nimport os\nimport random\nfrom dataclasses import dataclass, field\nfrom enum import Enum\nfrom typing import List, Optional, Union\n\nimport numpy as np\nimport torch\n\nfrom .state import AcceleratorState, DistributedType, is_deepspeed_available, is_tpu_available\n\n\nif is_tpu_available():\n import torch_xla.core.xla_model as xm\n\n\ndef is_boto3_available():\n return importlib.util.find_spec(\"boto3\") is not None\n\n\ndef is_sagemaker_available():\n return importlib.util.find_spec(\"sagemaker\") is not None\n\n\nif is_deepspeed_available():\n from deepspeed import DeepSpeedEngine\n\n\nclass RNGType(Enum):\n TORCH = \"torch\"\n CUDA = \"cuda\"\n XLA = \"xla\"\n GENERATOR = \"generator\"\n\n\n@dataclass\nclass TensorInformation:\n shape: torch.Size\n dtype: torch.dtype\n\n\ndef set_seed(seed: int):\n \"\"\"\n Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch``.\n\n Args:\n seed (:obj:`int`): The seed to set.\n \"\"\"\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n # ^^ safe to call this function even if cuda is not available\n if is_tpu_available():\n xm.set_rng_state(seed)\n\n\ndef synchronize_rng_state(rng_type: Optional[RNGType] = None, generator: Optional[torch.Generator] = None):\n # Get the proper rng state\n if rng_type == RNGType.TORCH:\n rng_state = torch.get_rng_state()\n elif rng_type == RNGType.CUDA:\n rng_state = torch.cuda.get_rng_state()\n elif rng_type == RNGType.XLA:\n assert is_tpu_available(), \"Can't synchronize XLA seeds on an environment without TPUs.\"\n rng_state = torch.tensor(xm.get_rng_state())\n elif rng_type == RNGType.GENERATOR:\n assert generator is not None, \"Need a generator to synchronize its seed.\"\n rng_state = generator.get_state()\n\n # Broadcast the rng state from device 0 to other devices\n state = AcceleratorState()\n if state.distributed_type == DistributedType.TPU:\n rng_state = xm.mesh_reduce(\"random_seed\", rng_state, lambda x: x[0])\n elif state.distributed_type == DistributedType.MULTI_GPU:\n rng_state = rng_state.to(state.device)\n torch.distributed.broadcast(rng_state, 0)\n rng_state = rng_state.cpu()\n elif state.distributed_type == DistributedType.MULTI_CPU:\n torch.distributed.broadcast(rng_state, 0)\n\n # Set the broadcast rng state\n if rng_type == RNGType.TORCH:\n torch.set_rng_state(rng_state)\n elif rng_type == RNGType.CUDA:\n torch.cuda.set_rng_state(rng_state)\n elif rng_type == RNGType.XLA:\n xm.set_rng_state(rng_state.item())\n elif rng_type == RNGType.GENERATOR:\n generator.set_state(rng_state)\n\n\ndef synchronize_rng_states(rng_types: List[Union[str, RNGType]], generator: Optional[torch.Generator] = None):\n for rng_type in rng_types:\n synchronize_rng_state(RNGType(rng_type), generator=generator)\n\n\ndef honor_type(obj, generator):\n \"\"\"\n Cast a generator to the same type as 
obj (list, tuple or namedtuple)\n \"\"\"\n # There is no direct check whether an object if of type namedtuple sadly, this is a workaround.\n if isinstance(obj, tuple) and hasattr(obj, \"_fields\"):\n # Can instantiate a namedtuple from a generator directly, contrary to a tuple/list.\n return type(obj)(*list(generator))\n return type(obj)(generator)\n\n\ndef is_torch_tensor(tensor):\n return isinstance(tensor, torch.Tensor)\n\n\ndef is_tensor_information(tensor_info):\n return isinstance(tensor_info, TensorInformation)\n\n\ndef recursively_apply(func, data, *args, test_type=is_torch_tensor, error_on_other_type=False, **kwargs):\n \"\"\"\n Recursively apply a function on a data structure that is a nested list/tuple/dictionary of a given base type.\n\n Args:\n func (:obj:`callable`):\n The function to recursively apply.\n data (nested list/tuple/dictionary of :obj:`main_type`):\n The data on which to apply :obj:`func`\n *args:\n Positional arguments that will be passed to :obj:`func` when applied on the unpacked data.\n main_type (:obj:`type`, `optional`, defaults to :obj:`torch.Tensor`):\n The base type of the objects to which apply :obj:`func`.\n error_on_other_type (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether to return an error or not if after unpacking :obj:`data`, we get on an object that is not of type\n :obj:`main_type`. If :obj:`False`, the function will leave objects of types different than :obj:`main_type`\n unchanged.\n **kwargs:\n Keyword arguments that will be passed to :obj:`func` when applied on the unpacked data.\n\n Returns:\n The same data structure as :obj:`data` with :obj:`func` applied to every object of type :obj:`main_type`.\n \"\"\"\n if isinstance(data, (tuple, list)):\n return honor_type(\n data,\n (\n recursively_apply(\n func, o, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs\n )\n for o in data\n ),\n )\n elif isinstance(data, dict):\n return type(data)(\n **{\n k: recursively_apply(\n func, v, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs\n )\n for k, v in data.items()\n }\n )\n elif test_type(data):\n return func(data, *args, **kwargs)\n elif error_on_other_type:\n raise TypeError(\n f\"Can't apply {func.__name__} on object of type {type(data)}, only of nested list/tuple/dicts of objects \"\n f\"that satisfy {test_type.__name__}.\"\n )\n return data\n\n\ndef send_to_device(tensor, device):\n \"\"\"\n Recursively sends the elements in a nested list/tuple/dictionary of tensors to a given device.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to send to a given device.\n device (:obj:`torch.device`):\n The device to send the data to\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors sent to the proper device.\n \"\"\"\n\n def _send_to_device(t, device):\n return t.to(device)\n\n def _has_to_method(t):\n return hasattr(t, \"to\")\n\n return recursively_apply(_send_to_device, tensor, device, test_type=_has_to_method, error_on_other_type=True)\n\n\ndef get_data_structure(data):\n \"\"\"\n Recursively gathers the information needed to rebuild a nested list/tuple/dictionary of tensors.\n\n Args:\n data (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to send to analyze.\n\n Returns:\n The same data structure as :obj:`data` with :class:`~accelerate.utils.TensorInformation` instead of tensors.\n \"\"\"\n\n def _get_data_structure(tensor):\n return TensorInformation(shape=tensor.shape, dtype=tensor.dtype)\n\n return 
recursively_apply(_get_data_structure, data)\n\n\ndef initialize_tensors(data_structure):\n \"\"\"\n Recursively initializes tensors from a nested list/tuple/dictionary of\n :class:`~accelerate.utils.TensorInformation`.\n\n Returns:\n The same data structure as :obj:`data` with tensors instead of :class:`~accelerate.utils.TensorInformation`.\n \"\"\"\n\n def _initialize_tensor(tensor_info):\n return torch.empty(*tensor_info.shape, dtype=tensor_info.dtype)\n\n return recursively_apply(_initialize_tensor, data_structure, test_type=is_tensor_information)\n\n\ndef convert_to_fp32(tensor):\n \"\"\"\n Recursively converts the elements nested list/tuple/dictionary of tensors in FP16 precision to FP32.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to convert from FP16 to FP32.\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors that were in FP16 precision converted to FP32.\n \"\"\"\n\n def _convert_to_fp32(tensor):\n return tensor.float()\n\n def _is_fp16_tensor(tensor):\n return hasattr(tensor, \"dtype\") and tensor.dtype == torch.float16\n\n return recursively_apply(_convert_to_fp32, tensor, test_type=_is_fp16_tensor)\n\n\ndef convert_outputs_to_fp32(model_forward):\n \"\"\"\n Decorator to apply to a function outputing tensors (like a model forward pass) that ensures the outputs in FP16\n precision will be convert back to FP32.\n\n Args:\n model_forward (:obj:`Callable`):\n The function which outputs we want to treat.\n\n Returns:\n The same function as :obj:`model_forward` but with converted outputs.\n \"\"\"\n\n def convert_outputs(*args, **kwargs):\n outputs = model_forward(*args, **kwargs)\n return convert_to_fp32(outputs)\n\n return convert_outputs\n\n\ndef extract_model_from_parallel(model):\n \"\"\"\n Extract a model from its distributed containers.\n\n Args:\n model (:obj:`torch.nn.Module`): The model to extract.\n\n Returns:\n :obj:`torch.nn.Module`: The extracted model.\n \"\"\"\n options = (torch.nn.parallel.DistributedDataParallel, torch.nn.DataParallel)\n if is_deepspeed_available():\n options += (DeepSpeedEngine,)\n\n while isinstance(model, options):\n model = model.module\n return model\n\n\ndef _tpu_gather(tensor, name=\"gather tensor\"):\n if isinstance(tensor, (list, tuple)):\n return honor_type(tensor, (_tpu_gather(t, name=f\"{name}_{i}\") for i, t in enumerate(tensor)))\n elif isinstance(tensor, dict):\n return type(tensor)({k: _tpu_gather(v, name=f\"{name}_{k}\") for k, v in tensor.items()})\n elif not isinstance(tensor, torch.Tensor):\n raise TypeError(f\"Can't gather the values of type {type(tensor)}, only of nested list/tuple/dicts of tensors.\")\n if tensor.ndim == 0:\n tensor = tensor.clone()[None]\n return xm.mesh_reduce(name, tensor, torch.cat)\n\n\ndef _gpu_gather(tensor):\n def _gpu_gather_one(tensor):\n if tensor.ndim == 0:\n tensor = tensor.clone()[None]\n output_tensors = [tensor.clone() for _ in range(torch.distributed.get_world_size())]\n torch.distributed.all_gather(output_tensors, tensor)\n return torch.cat(output_tensors, dim=0)\n\n return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True)\n\n\n_cpu_gather = _gpu_gather\n\n\ndef gather(tensor):\n \"\"\"\n Recursively gather tensor in a nested list/tuple/dictionary of tensors from all devices.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to gather.\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors sent to the proper device.\n \"\"\"\n if 
AcceleratorState().distributed_type == DistributedType.TPU:\n return _tpu_gather(tensor, name=\"accelerate.utils.gather\")\n elif AcceleratorState().distributed_type == DistributedType.MULTI_GPU:\n return _gpu_gather(tensor)\n elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:\n return _cpu_gather(tensor)\n else:\n return tensor\n\n\ndef _gpu_broadcast(data, src=0):\n def _gpu_broadcast_one(tensor, src=0):\n torch.distributed.broadcast(tensor, src=src)\n return tensor\n\n return recursively_apply(_gpu_broadcast_one, data, error_on_other_type=True, src=src)\n\n\ndef _tpu_broadcast(tensor, src=0, name=\"broadcast tensor\"):\n if isinstance(tensor, (list, tuple)):\n return honor_type(tensor, (_tpu_broadcast(t, name=f\"{name}_{i}\") for i, t in enumerate(tensor)))\n elif isinstance(tensor, dict):\n return type(tensor)({k: _tpu_broadcast(v, name=f\"{name}_{k}\") for k, v in tensor.items()})\n return xm.mesh_reduce(name, tensor, lambda x: x[src])\n\n\ndef broadcast(tensor, from_process: int = 0):\n \"\"\"\n Recursively broadcast tensor in a nested list/tuple/dictionary of tensors to all devices.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to gather.\n from_process (:obj:`int`, `optional`, defaults to 0):\n The process from which to send the data\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors broadcasted to the proper device.\n \"\"\"\n if AcceleratorState().distributed_type == DistributedType.TPU:\n return _tpu_broadcast(tensor, src=from_process, name=\"accelerate.utils.broadcast\")\n elif AcceleratorState().distributed_type == DistributedType.MULTI_GPU:\n return _gpu_broadcast(tensor, src=from_process)\n elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:\n return _gpu_broadcast(tensor, src=from_process)\n else:\n return tensor\n\n\ndef broadcast_object_list(object_list, from_process: int = 0):\n \"\"\"\n Broadcast a list of picklable objects form one process to the others.\n\n Args:\n object_list (list of picklable objects):\n The list of objects to broadcast. 
This list will be modified inplace.\n from_process (:obj:`int`, `optional`, defaults to 0):\n The process from which to send the data.\n\n Returns:\n The same list containing the objects from process 0.\n \"\"\"\n if AcceleratorState().distributed_type == DistributedType.TPU:\n for i, obj in enumerate(object_list):\n object_list[i] = xm.mesh_reduce(\"accelerate.utils.broadcast_object_list\", obj, lambda x: x[from_process])\n elif AcceleratorState().distributed_type == DistributedType.MULTI_GPU:\n torch.distributed.broadcast_object_list(object_list, src=from_process)\n elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:\n torch.distributed.broadcast_object_list(object_list, src=from_process)\n return object_list\n\n\ndef slice_tensors(data, tensor_slice):\n \"\"\"\n Recursively takes a slice in a nested list/tuple/dictionary of tensors.\n\n Args:\n data (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to slice.\n tensor_slice (:obj:`slice`):\n The slice to take.\n\n Returns:\n The same data structure as :obj:`data` with all the tensors slices.\n \"\"\"\n\n def _slice_tensor(tensor, tensor_slice):\n return tensor[tensor_slice]\n\n return recursively_apply(_slice_tensor, data, tensor_slice)\n\n\ndef find_batch_size(data):\n \"\"\"\n Recursively finds the batch size in a nested list/tuple/dictionary of lists of tensors.\n\n Args:\n data (nested list/tuple/dictionary of :obj:`torch.Tensor`): The data from which to find the batch size.\n\n Returns:\n :obj:`int`: The batch size.\n \"\"\"\n if isinstance(data, (tuple, list)):\n return find_batch_size(data[0])\n elif isinstance(data, dict):\n for k in data.keys():\n return find_batch_size(data[k])\n elif not isinstance(data, torch.Tensor):\n raise TypeError(f\"Can only find the batch size of tensors but got {type(data)}.\")\n return data.shape[0]\n\n\ndef concatenate(data, dim=0):\n \"\"\"\n Recursively concatenate the tensors in a nested list/tuple/dictionary of lists of tensors with the same shape.\n\n Args:\n data (nested list/tuple/dictionary of lists of tensors :obj:`torch.Tensor`):\n The data to concatenate.\n dim (:obj:`int`, `optional`, defaults to 0):\n The dimension on which to concatenate.\n\n Returns:\n The same data structure as :obj:`data` with all the tensors concatenated.\n \"\"\"\n if isinstance(data[0], (tuple, list)):\n return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))\n elif isinstance(data[0], dict):\n return type(data[0])(**{k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})\n elif not isinstance(data[0], torch.Tensor):\n raise TypeError(f\"Can only concatenate tensors but got {type(data[0])}\")\n return torch.cat(data, dim=dim)\n\n\ndef pad_across_processes(tensor, dim=0, pad_index=0, pad_first=False):\n \"\"\"\n Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they\n can safely be gathered.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to gather.\n dim (:obj:`int`, `optional`, defaults to 0):\n The dimension on which to pad.\n pad_index (:obj:`int`, `optional`, defaults to 0):\n The value with which to pad.\n pad_first (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether to pad at the beginning or the end.\n \"\"\"\n\n def _pad_across_processes(tensor, dim=0, pad_index=0, pad_first=False):\n if dim >= len(tensor.shape):\n return tensor\n\n # Gather all sizes\n size = torch.tensor(tensor.shape, 
device=tensor.device)[None]\n sizes = gather(size).cpu()\n # Then pad to the maximum size\n max_size = max(s[dim] for s in sizes)\n if max_size == tensor.shape[dim]:\n return tensor\n\n old_size = tensor.shape\n new_size = list(old_size)\n new_size[dim] = max_size\n new_tensor = tensor.new_zeros(tuple(new_size)) + pad_index\n if pad_first:\n indices = tuple(\n slice(max_size - old_size[dim], max_size) if i == dim else slice(None) for i in range(len(new_size))\n )\n else:\n indices = tuple(slice(0, old_size[dim]) if i == dim else slice(None) for i in range(len(new_size)))\n new_tensor[indices] = tensor\n return new_tensor\n\n return recursively_apply(\n _pad_across_processes, tensor, error_on_other_type=True, dim=dim, pad_index=pad_index, pad_first=pad_first\n )\n\n\ndef wait_for_everyone():\n \"\"\"\n Introduces a blocking point in the script, making sure all processes have reached this point before continuing.\n\n Warning::\n\n Make sure all processes will reach this instruction otherwise one of your processes will hang forever.\n \"\"\"\n if (\n AcceleratorState().distributed_type == DistributedType.MULTI_GPU\n or AcceleratorState().distributed_type == DistributedType.MULTI_CPU\n or AcceleratorState().distributed_type == DistributedType.DEEPSPEED\n ):\n torch.distributed.barrier()\n elif AcceleratorState().distributed_type == DistributedType.TPU:\n xm.rendezvous(\"accelerate.utils.wait_for_everyone\")\n\n\ndef save(obj, f):\n \"\"\"\n Save the data to disk. Use in place of :obj:`torch.save()`.\n\n Args:\n obj: The data to save\n f: The file (or file-like object) to use to save the data\n \"\"\"\n if AcceleratorState().distributed_type == DistributedType.TPU:\n xm.save(obj, f)\n elif AcceleratorState().local_process_index == 0:\n torch.save(obj, f)\n\n\nclass PrepareForLaunch:\n \"\"\"\n Prepare a function that will launched in a distributed setup.\n\n Args:\n launcher (:obj:`Callable`):\n The function to launch.\n distributed_type (:class:`~accelerate.state.DistributedType`):\n The distributed type to prepare for.\n \"\"\"\n\n def __init__(self, launcher, distributed_type=\"NO\"):\n self.launcher = launcher\n self.distributed_type = DistributedType(distributed_type)\n\n def __call__(self, index, *args):\n if self.distributed_type == DistributedType.MULTI_GPU or self.distributed_type == DistributedType.MULTI_CPU:\n # Prepare the environment for torch.distributed\n os.environ[\"LOCAL_RANK\"] = str(index)\n os.environ[\"RANK\"] = str(index)\n\n self.launcher(*args)\n\n\n@dataclass\nclass DeepSpeedPlugin:\n\n gradient_accumulation_steps: int = field(\n default=None, metadata={\"help\": \"Number of steps to accumulate gradients before updating optimizer states\"}\n )\n zero_stage: int = field(\n default=None,\n metadata={\"help\": \"Possible options are 0,1,2,3; Default will be taken from environment variable\"},\n )\n is_train_batch_min: str = field(\n default=True,\n metadata={\"help\": \"If both train & eval dataloaders are specified, this will decide the train_batch_size\"},\n )\n\n auto_opt_mapping: bool = field(\n default=True,\n metadata={\"help\": \"whether to map torch.adam to deepspeed optimizer version of adam based on config\"},\n )\n\n offload_optimizer_device: bool = field(default=None, metadata={\"help\": \"Possible options are none|cpu|nvme\"})\n\n def __post_init__(self):\n\n if self.gradient_accumulation_steps is None:\n self.gradient_accumulation_steps = int(os.environ.get(\"GRADIENT_ACCUMULATION_STEPS\", 1))\n\n if self.zero_stage is None:\n self.zero_stage = 
int(os.environ.get(\"DEEPSPEED_ZERO_STAGE\", 2))\n\n if self.offload_optimizer_device is None:\n self.offload_optimizer_device = os.environ.get(\"DEEPSPEED_OFFLOAD_OPTIMIZER_DEVICE\", \"none\")\n\n self.deepspeed_config = {\n \"train_batch_size\": None,\n \"gradient_accumulation_steps\": self.gradient_accumulation_steps,\n \"zero_optimization\": {\n \"stage\": self.zero_stage,\n \"offload_optimizer\": {\n \"device\": self.offload_optimizer_device,\n },\n },\n \"steps_per_print\": float(\"inf\"), # this will stop deepspeed from logging @ stdout\n \"zero_allow_untested_optimizer\": True,\n }\n", "path": "src/accelerate/utils.py" } ]
[ { "content": "# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport importlib\nimport os\nimport random\nfrom dataclasses import dataclass, field\nfrom enum import Enum\nfrom typing import List, Optional, Union\n\nimport numpy as np\nimport torch\n\nfrom .state import AcceleratorState, DistributedType, is_deepspeed_available, is_tpu_available\n\n\nif is_tpu_available():\n import torch_xla.core.xla_model as xm\n\n\ndef is_boto3_available():\n return importlib.util.find_spec(\"boto3\") is not None\n\n\ndef is_sagemaker_available():\n return importlib.util.find_spec(\"sagemaker\") is not None\n\n\nif is_deepspeed_available():\n from deepspeed import DeepSpeedEngine\n\n\nclass RNGType(Enum):\n TORCH = \"torch\"\n CUDA = \"cuda\"\n XLA = \"xla\"\n GENERATOR = \"generator\"\n\n\n@dataclass\nclass TensorInformation:\n shape: torch.Size\n dtype: torch.dtype\n\n\ndef set_seed(seed: int):\n \"\"\"\n Helper function for reproducible behavior to set the seed in ``random``, ``numpy``, ``torch``.\n\n Args:\n seed (:obj:`int`): The seed to set.\n \"\"\"\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n # ^^ safe to call this function even if cuda is not available\n if is_tpu_available():\n xm.set_rng_state(seed)\n\n\ndef synchronize_rng_state(rng_type: Optional[RNGType] = None, generator: Optional[torch.Generator] = None):\n # Get the proper rng state\n if rng_type == RNGType.TORCH:\n rng_state = torch.get_rng_state()\n elif rng_type == RNGType.CUDA:\n rng_state = torch.cuda.get_rng_state()\n elif rng_type == RNGType.XLA:\n assert is_tpu_available(), \"Can't synchronize XLA seeds on an environment without TPUs.\"\n rng_state = torch.tensor(xm.get_rng_state())\n elif rng_type == RNGType.GENERATOR:\n assert generator is not None, \"Need a generator to synchronize its seed.\"\n rng_state = generator.get_state()\n\n # Broadcast the rng state from device 0 to other devices\n state = AcceleratorState()\n if state.distributed_type == DistributedType.TPU:\n rng_state = xm.mesh_reduce(\"random_seed\", rng_state, lambda x: x[0])\n elif state.distributed_type == DistributedType.MULTI_GPU:\n rng_state = rng_state.to(state.device)\n torch.distributed.broadcast(rng_state, 0)\n rng_state = rng_state.cpu()\n elif state.distributed_type == DistributedType.MULTI_CPU:\n torch.distributed.broadcast(rng_state, 0)\n\n # Set the broadcast rng state\n if rng_type == RNGType.TORCH:\n torch.set_rng_state(rng_state)\n elif rng_type == RNGType.CUDA:\n torch.cuda.set_rng_state(rng_state)\n elif rng_type == RNGType.XLA:\n xm.set_rng_state(rng_state.item())\n elif rng_type == RNGType.GENERATOR:\n generator.set_state(rng_state)\n\n\ndef synchronize_rng_states(rng_types: List[Union[str, RNGType]], generator: Optional[torch.Generator] = None):\n for rng_type in rng_types:\n synchronize_rng_state(RNGType(rng_type), generator=generator)\n\n\ndef honor_type(obj, generator):\n \"\"\"\n Cast a generator to the same type as 
obj (list, tuple or namedtuple)\n \"\"\"\n # There is no direct check whether an object if of type namedtuple sadly, this is a workaround.\n if isinstance(obj, tuple) and hasattr(obj, \"_fields\"):\n # Can instantiate a namedtuple from a generator directly, contrary to a tuple/list.\n return type(obj)(*list(generator))\n return type(obj)(generator)\n\n\ndef is_torch_tensor(tensor):\n return isinstance(tensor, torch.Tensor)\n\n\ndef is_tensor_information(tensor_info):\n return isinstance(tensor_info, TensorInformation)\n\n\ndef recursively_apply(func, data, *args, test_type=is_torch_tensor, error_on_other_type=False, **kwargs):\n \"\"\"\n Recursively apply a function on a data structure that is a nested list/tuple/dictionary of a given base type.\n\n Args:\n func (:obj:`callable`):\n The function to recursively apply.\n data (nested list/tuple/dictionary of :obj:`main_type`):\n The data on which to apply :obj:`func`\n *args:\n Positional arguments that will be passed to :obj:`func` when applied on the unpacked data.\n main_type (:obj:`type`, `optional`, defaults to :obj:`torch.Tensor`):\n The base type of the objects to which apply :obj:`func`.\n error_on_other_type (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether to return an error or not if after unpacking :obj:`data`, we get on an object that is not of type\n :obj:`main_type`. If :obj:`False`, the function will leave objects of types different than :obj:`main_type`\n unchanged.\n **kwargs:\n Keyword arguments that will be passed to :obj:`func` when applied on the unpacked data.\n\n Returns:\n The same data structure as :obj:`data` with :obj:`func` applied to every object of type :obj:`main_type`.\n \"\"\"\n if isinstance(data, (tuple, list)):\n return honor_type(\n data,\n (\n recursively_apply(\n func, o, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs\n )\n for o in data\n ),\n )\n elif isinstance(data, dict):\n return type(data)(\n **{\n k: recursively_apply(\n func, v, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs\n )\n for k, v in data.items()\n }\n )\n elif test_type(data):\n return func(data, *args, **kwargs)\n elif error_on_other_type:\n raise TypeError(\n f\"Can't apply {func.__name__} on object of type {type(data)}, only of nested list/tuple/dicts of objects \"\n f\"that satisfy {test_type.__name__}.\"\n )\n return data\n\n\ndef send_to_device(tensor, device):\n \"\"\"\n Recursively sends the elements in a nested list/tuple/dictionary of tensors to a given device.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to send to a given device.\n device (:obj:`torch.device`):\n The device to send the data to\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors sent to the proper device.\n \"\"\"\n\n def _send_to_device(t, device):\n return t.to(device)\n\n def _has_to_method(t):\n return hasattr(t, \"to\")\n\n return recursively_apply(_send_to_device, tensor, device, test_type=_has_to_method)\n\n\ndef get_data_structure(data):\n \"\"\"\n Recursively gathers the information needed to rebuild a nested list/tuple/dictionary of tensors.\n\n Args:\n data (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to send to analyze.\n\n Returns:\n The same data structure as :obj:`data` with :class:`~accelerate.utils.TensorInformation` instead of tensors.\n \"\"\"\n\n def _get_data_structure(tensor):\n return TensorInformation(shape=tensor.shape, dtype=tensor.dtype)\n\n return 
recursively_apply(_get_data_structure, data)\n\n\ndef initialize_tensors(data_structure):\n \"\"\"\n Recursively initializes tensors from a nested list/tuple/dictionary of\n :class:`~accelerate.utils.TensorInformation`.\n\n Returns:\n The same data structure as :obj:`data` with tensors instead of :class:`~accelerate.utils.TensorInformation`.\n \"\"\"\n\n def _initialize_tensor(tensor_info):\n return torch.empty(*tensor_info.shape, dtype=tensor_info.dtype)\n\n return recursively_apply(_initialize_tensor, data_structure, test_type=is_tensor_information)\n\n\ndef convert_to_fp32(tensor):\n \"\"\"\n Recursively converts the elements nested list/tuple/dictionary of tensors in FP16 precision to FP32.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to convert from FP16 to FP32.\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors that were in FP16 precision converted to FP32.\n \"\"\"\n\n def _convert_to_fp32(tensor):\n return tensor.float()\n\n def _is_fp16_tensor(tensor):\n return hasattr(tensor, \"dtype\") and tensor.dtype == torch.float16\n\n return recursively_apply(_convert_to_fp32, tensor, test_type=_is_fp16_tensor)\n\n\ndef convert_outputs_to_fp32(model_forward):\n \"\"\"\n Decorator to apply to a function outputing tensors (like a model forward pass) that ensures the outputs in FP16\n precision will be convert back to FP32.\n\n Args:\n model_forward (:obj:`Callable`):\n The function which outputs we want to treat.\n\n Returns:\n The same function as :obj:`model_forward` but with converted outputs.\n \"\"\"\n\n def convert_outputs(*args, **kwargs):\n outputs = model_forward(*args, **kwargs)\n return convert_to_fp32(outputs)\n\n return convert_outputs\n\n\ndef extract_model_from_parallel(model):\n \"\"\"\n Extract a model from its distributed containers.\n\n Args:\n model (:obj:`torch.nn.Module`): The model to extract.\n\n Returns:\n :obj:`torch.nn.Module`: The extracted model.\n \"\"\"\n options = (torch.nn.parallel.DistributedDataParallel, torch.nn.DataParallel)\n if is_deepspeed_available():\n options += (DeepSpeedEngine,)\n\n while isinstance(model, options):\n model = model.module\n return model\n\n\ndef _tpu_gather(tensor, name=\"gather tensor\"):\n if isinstance(tensor, (list, tuple)):\n return honor_type(tensor, (_tpu_gather(t, name=f\"{name}_{i}\") for i, t in enumerate(tensor)))\n elif isinstance(tensor, dict):\n return type(tensor)({k: _tpu_gather(v, name=f\"{name}_{k}\") for k, v in tensor.items()})\n elif not isinstance(tensor, torch.Tensor):\n raise TypeError(f\"Can't gather the values of type {type(tensor)}, only of nested list/tuple/dicts of tensors.\")\n if tensor.ndim == 0:\n tensor = tensor.clone()[None]\n return xm.mesh_reduce(name, tensor, torch.cat)\n\n\ndef _gpu_gather(tensor):\n def _gpu_gather_one(tensor):\n if tensor.ndim == 0:\n tensor = tensor.clone()[None]\n output_tensors = [tensor.clone() for _ in range(torch.distributed.get_world_size())]\n torch.distributed.all_gather(output_tensors, tensor)\n return torch.cat(output_tensors, dim=0)\n\n return recursively_apply(_gpu_gather_one, tensor, error_on_other_type=True)\n\n\n_cpu_gather = _gpu_gather\n\n\ndef gather(tensor):\n \"\"\"\n Recursively gather tensor in a nested list/tuple/dictionary of tensors from all devices.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to gather.\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors sent to the proper device.\n \"\"\"\n if 
AcceleratorState().distributed_type == DistributedType.TPU:\n return _tpu_gather(tensor, name=\"accelerate.utils.gather\")\n elif AcceleratorState().distributed_type == DistributedType.MULTI_GPU:\n return _gpu_gather(tensor)\n elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:\n return _cpu_gather(tensor)\n else:\n return tensor\n\n\ndef _gpu_broadcast(data, src=0):\n def _gpu_broadcast_one(tensor, src=0):\n torch.distributed.broadcast(tensor, src=src)\n return tensor\n\n return recursively_apply(_gpu_broadcast_one, data, error_on_other_type=True, src=src)\n\n\ndef _tpu_broadcast(tensor, src=0, name=\"broadcast tensor\"):\n if isinstance(tensor, (list, tuple)):\n return honor_type(tensor, (_tpu_broadcast(t, name=f\"{name}_{i}\") for i, t in enumerate(tensor)))\n elif isinstance(tensor, dict):\n return type(tensor)({k: _tpu_broadcast(v, name=f\"{name}_{k}\") for k, v in tensor.items()})\n return xm.mesh_reduce(name, tensor, lambda x: x[src])\n\n\ndef broadcast(tensor, from_process: int = 0):\n \"\"\"\n Recursively broadcast tensor in a nested list/tuple/dictionary of tensors to all devices.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to gather.\n from_process (:obj:`int`, `optional`, defaults to 0):\n The process from which to send the data\n\n Returns:\n The same data structure as :obj:`tensor` with all tensors broadcasted to the proper device.\n \"\"\"\n if AcceleratorState().distributed_type == DistributedType.TPU:\n return _tpu_broadcast(tensor, src=from_process, name=\"accelerate.utils.broadcast\")\n elif AcceleratorState().distributed_type == DistributedType.MULTI_GPU:\n return _gpu_broadcast(tensor, src=from_process)\n elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:\n return _gpu_broadcast(tensor, src=from_process)\n else:\n return tensor\n\n\ndef broadcast_object_list(object_list, from_process: int = 0):\n \"\"\"\n Broadcast a list of picklable objects form one process to the others.\n\n Args:\n object_list (list of picklable objects):\n The list of objects to broadcast. 
This list will be modified inplace.\n from_process (:obj:`int`, `optional`, defaults to 0):\n The process from which to send the data.\n\n Returns:\n The same list containing the objects from process 0.\n \"\"\"\n if AcceleratorState().distributed_type == DistributedType.TPU:\n for i, obj in enumerate(object_list):\n object_list[i] = xm.mesh_reduce(\"accelerate.utils.broadcast_object_list\", obj, lambda x: x[from_process])\n elif AcceleratorState().distributed_type == DistributedType.MULTI_GPU:\n torch.distributed.broadcast_object_list(object_list, src=from_process)\n elif AcceleratorState().distributed_type == DistributedType.MULTI_CPU:\n torch.distributed.broadcast_object_list(object_list, src=from_process)\n return object_list\n\n\ndef slice_tensors(data, tensor_slice):\n \"\"\"\n Recursively takes a slice in a nested list/tuple/dictionary of tensors.\n\n Args:\n data (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to slice.\n tensor_slice (:obj:`slice`):\n The slice to take.\n\n Returns:\n The same data structure as :obj:`data` with all the tensors slices.\n \"\"\"\n\n def _slice_tensor(tensor, tensor_slice):\n return tensor[tensor_slice]\n\n return recursively_apply(_slice_tensor, data, tensor_slice)\n\n\ndef find_batch_size(data):\n \"\"\"\n Recursively finds the batch size in a nested list/tuple/dictionary of lists of tensors.\n\n Args:\n data (nested list/tuple/dictionary of :obj:`torch.Tensor`): The data from which to find the batch size.\n\n Returns:\n :obj:`int`: The batch size.\n \"\"\"\n if isinstance(data, (tuple, list)):\n return find_batch_size(data[0])\n elif isinstance(data, dict):\n for k in data.keys():\n return find_batch_size(data[k])\n elif not isinstance(data, torch.Tensor):\n raise TypeError(f\"Can only find the batch size of tensors but got {type(data)}.\")\n return data.shape[0]\n\n\ndef concatenate(data, dim=0):\n \"\"\"\n Recursively concatenate the tensors in a nested list/tuple/dictionary of lists of tensors with the same shape.\n\n Args:\n data (nested list/tuple/dictionary of lists of tensors :obj:`torch.Tensor`):\n The data to concatenate.\n dim (:obj:`int`, `optional`, defaults to 0):\n The dimension on which to concatenate.\n\n Returns:\n The same data structure as :obj:`data` with all the tensors concatenated.\n \"\"\"\n if isinstance(data[0], (tuple, list)):\n return honor_type(data[0], (concatenate([d[i] for d in data], dim=dim) for i in range(len(data[0]))))\n elif isinstance(data[0], dict):\n return type(data[0])(**{k: concatenate([d[k] for d in data], dim=dim) for k in data[0].keys()})\n elif not isinstance(data[0], torch.Tensor):\n raise TypeError(f\"Can only concatenate tensors but got {type(data[0])}\")\n return torch.cat(data, dim=dim)\n\n\ndef pad_across_processes(tensor, dim=0, pad_index=0, pad_first=False):\n \"\"\"\n Recursively pad the tensors in a nested list/tuple/dictionary of tensors from all devices to the same size so they\n can safely be gathered.\n\n Args:\n tensor (nested list/tuple/dictionary of :obj:`torch.Tensor`):\n The data to gather.\n dim (:obj:`int`, `optional`, defaults to 0):\n The dimension on which to pad.\n pad_index (:obj:`int`, `optional`, defaults to 0):\n The value with which to pad.\n pad_first (:obj:`bool`, `optional`, defaults to :obj:`False`):\n Whether to pad at the beginning or the end.\n \"\"\"\n\n def _pad_across_processes(tensor, dim=0, pad_index=0, pad_first=False):\n if dim >= len(tensor.shape):\n return tensor\n\n # Gather all sizes\n size = torch.tensor(tensor.shape, 
device=tensor.device)[None]\n sizes = gather(size).cpu()\n # Then pad to the maximum size\n max_size = max(s[dim] for s in sizes)\n if max_size == tensor.shape[dim]:\n return tensor\n\n old_size = tensor.shape\n new_size = list(old_size)\n new_size[dim] = max_size\n new_tensor = tensor.new_zeros(tuple(new_size)) + pad_index\n if pad_first:\n indices = tuple(\n slice(max_size - old_size[dim], max_size) if i == dim else slice(None) for i in range(len(new_size))\n )\n else:\n indices = tuple(slice(0, old_size[dim]) if i == dim else slice(None) for i in range(len(new_size)))\n new_tensor[indices] = tensor\n return new_tensor\n\n return recursively_apply(\n _pad_across_processes, tensor, error_on_other_type=True, dim=dim, pad_index=pad_index, pad_first=pad_first\n )\n\n\ndef wait_for_everyone():\n \"\"\"\n Introduces a blocking point in the script, making sure all processes have reached this point before continuing.\n\n Warning::\n\n Make sure all processes will reach this instruction otherwise one of your processes will hang forever.\n \"\"\"\n if (\n AcceleratorState().distributed_type == DistributedType.MULTI_GPU\n or AcceleratorState().distributed_type == DistributedType.MULTI_CPU\n or AcceleratorState().distributed_type == DistributedType.DEEPSPEED\n ):\n torch.distributed.barrier()\n elif AcceleratorState().distributed_type == DistributedType.TPU:\n xm.rendezvous(\"accelerate.utils.wait_for_everyone\")\n\n\ndef save(obj, f):\n \"\"\"\n Save the data to disk. Use in place of :obj:`torch.save()`.\n\n Args:\n obj: The data to save\n f: The file (or file-like object) to use to save the data\n \"\"\"\n if AcceleratorState().distributed_type == DistributedType.TPU:\n xm.save(obj, f)\n elif AcceleratorState().local_process_index == 0:\n torch.save(obj, f)\n\n\nclass PrepareForLaunch:\n \"\"\"\n Prepare a function that will launched in a distributed setup.\n\n Args:\n launcher (:obj:`Callable`):\n The function to launch.\n distributed_type (:class:`~accelerate.state.DistributedType`):\n The distributed type to prepare for.\n \"\"\"\n\n def __init__(self, launcher, distributed_type=\"NO\"):\n self.launcher = launcher\n self.distributed_type = DistributedType(distributed_type)\n\n def __call__(self, index, *args):\n if self.distributed_type == DistributedType.MULTI_GPU or self.distributed_type == DistributedType.MULTI_CPU:\n # Prepare the environment for torch.distributed\n os.environ[\"LOCAL_RANK\"] = str(index)\n os.environ[\"RANK\"] = str(index)\n\n self.launcher(*args)\n\n\n@dataclass\nclass DeepSpeedPlugin:\n\n gradient_accumulation_steps: int = field(\n default=None, metadata={\"help\": \"Number of steps to accumulate gradients before updating optimizer states\"}\n )\n zero_stage: int = field(\n default=None,\n metadata={\"help\": \"Possible options are 0,1,2,3; Default will be taken from environment variable\"},\n )\n is_train_batch_min: str = field(\n default=True,\n metadata={\"help\": \"If both train & eval dataloaders are specified, this will decide the train_batch_size\"},\n )\n\n auto_opt_mapping: bool = field(\n default=True,\n metadata={\"help\": \"whether to map torch.adam to deepspeed optimizer version of adam based on config\"},\n )\n\n offload_optimizer_device: bool = field(default=None, metadata={\"help\": \"Possible options are none|cpu|nvme\"})\n\n def __post_init__(self):\n\n if self.gradient_accumulation_steps is None:\n self.gradient_accumulation_steps = int(os.environ.get(\"GRADIENT_ACCUMULATION_STEPS\", 1))\n\n if self.zero_stage is None:\n self.zero_stage = 
int(os.environ.get(\"DEEPSPEED_ZERO_STAGE\", 2))\n\n if self.offload_optimizer_device is None:\n self.offload_optimizer_device = os.environ.get(\"DEEPSPEED_OFFLOAD_OPTIMIZER_DEVICE\", \"none\")\n\n self.deepspeed_config = {\n \"train_batch_size\": None,\n \"gradient_accumulation_steps\": self.gradient_accumulation_steps,\n \"zero_optimization\": {\n \"stage\": self.zero_stage,\n \"offload_optimizer\": {\n \"device\": self.offload_optimizer_device,\n },\n },\n \"steps_per_print\": float(\"inf\"), # this will stop deepspeed from logging @ stdout\n \"zero_allow_untested_optimizer\": True,\n }\n", "path": "src/accelerate/utils.py" } ]
diff --git a/src/accelerate/utils.py b/src/accelerate/utils.py index e25b80c4976..28a38539adb 100644 --- a/src/accelerate/utils.py +++ b/src/accelerate/utils.py @@ -201,7 +201,7 @@ def _send_to_device(t, device): def _has_to_method(t): return hasattr(t, "to") - return recursively_apply(_send_to_device, tensor, device, test_type=_has_to_method, error_on_other_type=True) + return recursively_apply(_send_to_device, tensor, device, test_type=_has_to_method) def get_data_structure(data): diff --git a/tests/test_utils.py b/tests/test_utils.py index ca617634dd9..9b16aba7bed 100644 --- a/tests/test_utils.py +++ b/tests/test_utils.py @@ -20,7 +20,7 @@ from accelerate.utils import send_to_device -TestNamedTuple = namedtuple("TestNamedTuple", "a b") +TestNamedTuple = namedtuple("TestNamedTuple", "a b c") class UtilsTester(unittest.TestCase): @@ -31,23 +31,26 @@ def test_send_to_device(self): result1 = send_to_device(tensor, device) self.assertTrue(torch.equal(result1.cpu(), tensor)) - result2 = send_to_device((tensor, [tensor, tensor]), device) + result2 = send_to_device((tensor, [tensor, tensor], 1), device) self.assertIsInstance(result2, tuple) self.assertTrue(torch.equal(result2[0].cpu(), tensor)) self.assertIsInstance(result2[1], list) self.assertTrue(torch.equal(result2[1][0].cpu(), tensor)) self.assertTrue(torch.equal(result2[1][1].cpu(), tensor)) + self.assertEqual(result2[2], 1) - result2 = send_to_device({"a": tensor, "b": [tensor, tensor]}, device) + result2 = send_to_device({"a": tensor, "b": [tensor, tensor], "c": 1}, device) self.assertIsInstance(result2, dict) self.assertTrue(torch.equal(result2["a"].cpu(), tensor)) self.assertIsInstance(result2["b"], list) self.assertTrue(torch.equal(result2["b"][0].cpu(), tensor)) self.assertTrue(torch.equal(result2["b"][1].cpu(), tensor)) + self.assertEqual(result2["c"], 1) - result3 = send_to_device(TestNamedTuple(a=tensor, b=[tensor, tensor]), device) + result3 = send_to_device(TestNamedTuple(a=tensor, b=[tensor, tensor], c=1), device) self.assertIsInstance(result3, TestNamedTuple) self.assertTrue(torch.equal(result3.a.cpu(), tensor)) self.assertIsInstance(result3.b, list) self.assertTrue(torch.equal(result3.b[0].cpu(), tensor)) self.assertTrue(torch.equal(result3.b[1].cpu(), tensor)) + self.assertEqual(result3.c, 1)
readthedocs__readthedocs.org-10947
Teams: project form doesn't allow for null/empty project list

While trying to remove a project from a team, I found it was impossible to remove the project if it was the last project attached to the team. The form expects a non-empty value and throws a validation error if the list is empty.

To reproduce:

- Add a team
- Add a project to the team
- Try to remove the project from the team
- You'll get a validation error on the form

Instead, this should be a valid form submission and the team should end up with 0 projects attached.
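For context: Django form fields are required by default, which is exactly why clearing the last checkbox fails here. Below is a minimal standalone sketch of that behaviour with a hypothetical form and choices; the actual fix is the one-line `required=False` change shown in the diff further down.

```python
# Sketch: why an empty multiple-choice selection fails unless required=False.
import django
from django.conf import settings

settings.configure()  # bare-bones settings, enough for standalone form validation
django.setup()

from django import forms


class TeamProjectsForm(forms.Form):
    # Hypothetical stand-in for the team's "projects" field.
    projects = forms.MultipleChoiceField(
        choices=[("alpha", "alpha"), ("beta", "beta")],
        widget=forms.CheckboxSelectMultiple,
        required=False,  # without this, submitting no projects raises "This field is required."
    )


form = TeamProjectsForm(data={})           # no projects selected at all
assert form.is_valid()                     # valid only because required=False
assert form.cleaned_data["projects"] == []  # team ends up with zero projects
```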
[ { "content": "\"\"\"Organization forms.\"\"\"\nfrom django import forms\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import NON_FIELD_ERRORS, ValidationError\nfrom django.core.validators import EmailValidator\nfrom django.db.models import Q\nfrom django.utils.translation import gettext_lazy as _\n\nfrom readthedocs.core.history import SimpleHistoryModelForm\nfrom readthedocs.core.permissions import AdminPermission\nfrom readthedocs.core.utils import slugify\nfrom readthedocs.core.utils.extend import SettingsOverrideObject\nfrom readthedocs.invitations.models import Invitation\nfrom readthedocs.organizations.constants import ADMIN_ACCESS, READ_ONLY_ACCESS\nfrom readthedocs.organizations.models import (\n Organization,\n OrganizationOwner,\n Team,\n TeamMember,\n)\n\n\nclass OrganizationForm(SimpleHistoryModelForm):\n\n \"\"\"\n Base organization form.\n\n :param user: User instance, responsible for ownership of Organization\n :type user: django.contrib.auth.models.User\n \"\"\"\n\n # We use the organization slug + project name\n # to form the final project slug.\n # A valid project slug is 63 chars long.\n name = forms.CharField(max_length=32)\n\n class Meta:\n model = Organization\n fields = [\"name\", \"email\", \"description\", \"url\"]\n labels = {\n \"name\": _(\"Organization Name\"),\n \"email\": _(\"Billing Email\"),\n }\n\n # Don't use a URLField as a widget, the validation is too strict on FF\n url = forms.URLField(\n widget=forms.TextInput(attrs={\"placeholder\": \"http://\"}),\n label=_(\"Site URL\"),\n required=False,\n )\n\n def __init__(self, *args, **kwargs):\n try:\n self.user = kwargs.pop(\"user\")\n except KeyError:\n raise TypeError(\n \"OrganizationForm expects a `user` keyword argument\",\n )\n super().__init__(*args, **kwargs)\n\n def clean_name(self):\n \"\"\"Raise exception on duplicate organization slug.\"\"\"\n name = self.cleaned_data[\"name\"]\n\n # Skip slug validation on already created organizations.\n if self.instance.pk:\n return name\n\n potential_slug = slugify(name)\n if not potential_slug:\n raise forms.ValidationError(\n _(\"Invalid organization name: no slug generated\")\n )\n if Organization.objects.filter(slug=potential_slug).exists():\n raise forms.ValidationError(\n _(\"Organization %(name)s already exists\"),\n params={\"name\": name},\n )\n return name\n\n\nclass OrganizationSignupFormBase(OrganizationForm):\n\n \"\"\"\n Simple organization creation form.\n\n This trims down the number of inputs required to create a new organization.\n This is used on the initial organization signup, to keep signup terse.\n\n :param user: User instance, responsible for ownership of Organization\n :type user: django.contrib.auth.models.User\n \"\"\"\n\n class Meta:\n model = Organization\n fields = [\"name\", \"email\"]\n labels = {\n \"name\": _(\"Organization Name\"),\n \"email\": _(\"Billing Email\"),\n }\n\n url = None\n\n @staticmethod\n def _create_default_teams(organization):\n organization.teams.create(name=\"Admins\", access=ADMIN_ACCESS)\n organization.teams.create(name=\"Read Only\", access=READ_ONLY_ACCESS)\n\n def save(self, commit=True):\n org = super().save(commit)\n\n # If not committing, we can't save M2M fields\n if not commit:\n return org\n\n # Add default teams\n OrganizationOwner.objects.create(\n owner=self.user,\n organization=org,\n )\n self._create_default_teams(org)\n return org\n\n\nclass OrganizationSignupForm(SettingsOverrideObject):\n _default_class = OrganizationSignupFormBase\n\n\nclass 
OrganizationOwnerForm(forms.Form):\n\n \"\"\"Form to manage owners of the organization.\"\"\"\n\n username_or_email = forms.CharField(label=_(\"Email address or username\"))\n\n def __init__(self, *args, **kwargs):\n self.organization = kwargs.pop(\"organization\", None)\n self.request = kwargs.pop(\"request\", None)\n super().__init__(*args, **kwargs)\n\n def clean_username_or_email(self):\n \"\"\"Lookup owner by username or email, detect collisions with existing owners.\"\"\"\n username = self.cleaned_data[\"username_or_email\"]\n user = User.objects.filter(\n Q(username=username)\n | Q(emailaddress__verified=True, emailaddress__email=username)\n ).first()\n if user is None:\n raise forms.ValidationError(\n _(\"User %(username)s does not exist\"),\n params={\"username\": username},\n )\n if self.organization.owners.filter(pk=user.pk).exists():\n raise forms.ValidationError(\n _(\"User %(username)s is already an owner\"),\n params={\"username\": username},\n )\n return user\n\n def save(self):\n invitation, _ = Invitation.objects.invite(\n from_user=self.request.user,\n to_user=self.cleaned_data[\"username_or_email\"],\n obj=self.organization,\n request=self.request,\n )\n return invitation\n\n\nclass OrganizationTeamBasicFormBase(SimpleHistoryModelForm):\n\n \"\"\"Form to manage teams.\"\"\"\n\n class Meta:\n model = Team\n fields = [\"name\", \"access\", \"organization\"]\n error_messages = {\n NON_FIELD_ERRORS: {\n \"unique_together\": _(\"Team already exists\"),\n },\n }\n\n organization = forms.CharField(widget=forms.HiddenInput(), required=False)\n\n def __init__(self, *args, **kwargs):\n self.organization = kwargs.pop(\"organization\", None)\n super().__init__(*args, **kwargs)\n\n def clean_organization(self):\n \"\"\"Hard code organization return on form.\"\"\"\n return self.organization\n\n\nclass OrganizationTeamBasicForm(SettingsOverrideObject):\n _default_class = OrganizationTeamBasicFormBase\n\n\nclass OrganizationTeamProjectForm(forms.ModelForm):\n\n \"\"\"Form to manage access of teams to projects.\"\"\"\n\n class Meta:\n model = Team\n fields = [\"projects\"]\n\n def __init__(self, *args, **kwargs):\n self.organization = kwargs.pop(\"organization\", None)\n super().__init__(*args, **kwargs)\n self.fields[\"projects\"] = forms.ModelMultipleChoiceField(\n queryset=self.organization.projects,\n widget=forms.CheckboxSelectMultiple,\n )\n\n\nclass OrganizationTeamMemberForm(forms.Form):\n\n \"\"\"Form to manage all members of the organization.\"\"\"\n\n username_or_email = forms.CharField(label=_(\"Email address or username\"))\n\n def __init__(self, *args, **kwargs):\n self.team = kwargs.pop(\"team\", None)\n self.request = kwargs.pop(\"request\", None)\n super().__init__(*args, **kwargs)\n\n def clean_username_or_email(self):\n \"\"\"\n Validate the user to invite to.\n\n We search for an existing user by username or email,\n if none is found, we try to validate if the input is an email,\n in that case we send an invitation to that email.\n \"\"\"\n username = self.cleaned_data[\"username_or_email\"]\n user = User.objects.filter(\n Q(username=username)\n | Q(emailaddress__verified=True, emailaddress__email=username)\n ).first()\n\n if user:\n return self.validate_member_user(user)\n\n # If it's a valid email,\n # then try sending an invitation to it.\n try:\n validator = EmailValidator(code=\"lookup not an email\")\n validator(username)\n return username\n except ValidationError as error:\n if error.code != \"lookup not an email\":\n raise\n\n raise forms.ValidationError(\n 
_(\"User %(username)s does not exist\"), params={\"username\": username}\n )\n\n def validate_member_user(self, member):\n \"\"\"Verify duplicate team member doesn't already exists.\"\"\"\n if TeamMember.objects.filter(team=self.team, member=member).exists():\n raise forms.ValidationError(\n _(\"User is already a team member\"),\n )\n return member\n\n def save(self):\n \"\"\"Create an invitation only if the user isn't already a member.\"\"\"\n user = self.cleaned_data[\"username_or_email\"]\n if isinstance(user, User):\n # If the user is already a member or the organization\n # don't create an invitation.\n if (\n AdminPermission.members(self.team.organization)\n .filter(pk=user.pk)\n .exists()\n ):\n member = self.team.organization.add_member(user, self.team)\n if user != self.request.user:\n member.send_add_notification(self.request)\n return user\n invitation, _ = Invitation.objects.invite(\n from_user=self.request.user,\n to_user=user,\n obj=self.team,\n request=self.request,\n )\n return invitation\n invitation, _ = Invitation.objects.invite(\n from_user=self.request.user,\n to_email=user,\n obj=self.team,\n request=self.request,\n )\n return invitation\n", "path": "readthedocs/organizations/forms.py" } ]
[ { "content": "\"\"\"Organization forms.\"\"\"\nfrom django import forms\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import NON_FIELD_ERRORS, ValidationError\nfrom django.core.validators import EmailValidator\nfrom django.db.models import Q\nfrom django.utils.translation import gettext_lazy as _\n\nfrom readthedocs.core.history import SimpleHistoryModelForm\nfrom readthedocs.core.permissions import AdminPermission\nfrom readthedocs.core.utils import slugify\nfrom readthedocs.core.utils.extend import SettingsOverrideObject\nfrom readthedocs.invitations.models import Invitation\nfrom readthedocs.organizations.constants import ADMIN_ACCESS, READ_ONLY_ACCESS\nfrom readthedocs.organizations.models import (\n Organization,\n OrganizationOwner,\n Team,\n TeamMember,\n)\n\n\nclass OrganizationForm(SimpleHistoryModelForm):\n\n \"\"\"\n Base organization form.\n\n :param user: User instance, responsible for ownership of Organization\n :type user: django.contrib.auth.models.User\n \"\"\"\n\n # We use the organization slug + project name\n # to form the final project slug.\n # A valid project slug is 63 chars long.\n name = forms.CharField(max_length=32)\n\n class Meta:\n model = Organization\n fields = [\"name\", \"email\", \"description\", \"url\"]\n labels = {\n \"name\": _(\"Organization Name\"),\n \"email\": _(\"Billing Email\"),\n }\n\n # Don't use a URLField as a widget, the validation is too strict on FF\n url = forms.URLField(\n widget=forms.TextInput(attrs={\"placeholder\": \"http://\"}),\n label=_(\"Site URL\"),\n required=False,\n )\n\n def __init__(self, *args, **kwargs):\n try:\n self.user = kwargs.pop(\"user\")\n except KeyError:\n raise TypeError(\n \"OrganizationForm expects a `user` keyword argument\",\n )\n super().__init__(*args, **kwargs)\n\n def clean_name(self):\n \"\"\"Raise exception on duplicate organization slug.\"\"\"\n name = self.cleaned_data[\"name\"]\n\n # Skip slug validation on already created organizations.\n if self.instance.pk:\n return name\n\n potential_slug = slugify(name)\n if not potential_slug:\n raise forms.ValidationError(\n _(\"Invalid organization name: no slug generated\")\n )\n if Organization.objects.filter(slug=potential_slug).exists():\n raise forms.ValidationError(\n _(\"Organization %(name)s already exists\"),\n params={\"name\": name},\n )\n return name\n\n\nclass OrganizationSignupFormBase(OrganizationForm):\n\n \"\"\"\n Simple organization creation form.\n\n This trims down the number of inputs required to create a new organization.\n This is used on the initial organization signup, to keep signup terse.\n\n :param user: User instance, responsible for ownership of Organization\n :type user: django.contrib.auth.models.User\n \"\"\"\n\n class Meta:\n model = Organization\n fields = [\"name\", \"email\"]\n labels = {\n \"name\": _(\"Organization Name\"),\n \"email\": _(\"Billing Email\"),\n }\n\n url = None\n\n @staticmethod\n def _create_default_teams(organization):\n organization.teams.create(name=\"Admins\", access=ADMIN_ACCESS)\n organization.teams.create(name=\"Read Only\", access=READ_ONLY_ACCESS)\n\n def save(self, commit=True):\n org = super().save(commit)\n\n # If not committing, we can't save M2M fields\n if not commit:\n return org\n\n # Add default teams\n OrganizationOwner.objects.create(\n owner=self.user,\n organization=org,\n )\n self._create_default_teams(org)\n return org\n\n\nclass OrganizationSignupForm(SettingsOverrideObject):\n _default_class = OrganizationSignupFormBase\n\n\nclass 
OrganizationOwnerForm(forms.Form):\n\n \"\"\"Form to manage owners of the organization.\"\"\"\n\n username_or_email = forms.CharField(label=_(\"Email address or username\"))\n\n def __init__(self, *args, **kwargs):\n self.organization = kwargs.pop(\"organization\", None)\n self.request = kwargs.pop(\"request\", None)\n super().__init__(*args, **kwargs)\n\n def clean_username_or_email(self):\n \"\"\"Lookup owner by username or email, detect collisions with existing owners.\"\"\"\n username = self.cleaned_data[\"username_or_email\"]\n user = User.objects.filter(\n Q(username=username)\n | Q(emailaddress__verified=True, emailaddress__email=username)\n ).first()\n if user is None:\n raise forms.ValidationError(\n _(\"User %(username)s does not exist\"),\n params={\"username\": username},\n )\n if self.organization.owners.filter(pk=user.pk).exists():\n raise forms.ValidationError(\n _(\"User %(username)s is already an owner\"),\n params={\"username\": username},\n )\n return user\n\n def save(self):\n invitation, _ = Invitation.objects.invite(\n from_user=self.request.user,\n to_user=self.cleaned_data[\"username_or_email\"],\n obj=self.organization,\n request=self.request,\n )\n return invitation\n\n\nclass OrganizationTeamBasicFormBase(SimpleHistoryModelForm):\n\n \"\"\"Form to manage teams.\"\"\"\n\n class Meta:\n model = Team\n fields = [\"name\", \"access\", \"organization\"]\n error_messages = {\n NON_FIELD_ERRORS: {\n \"unique_together\": _(\"Team already exists\"),\n },\n }\n\n organization = forms.CharField(widget=forms.HiddenInput(), required=False)\n\n def __init__(self, *args, **kwargs):\n self.organization = kwargs.pop(\"organization\", None)\n super().__init__(*args, **kwargs)\n\n def clean_organization(self):\n \"\"\"Hard code organization return on form.\"\"\"\n return self.organization\n\n\nclass OrganizationTeamBasicForm(SettingsOverrideObject):\n _default_class = OrganizationTeamBasicFormBase\n\n\nclass OrganizationTeamProjectForm(forms.ModelForm):\n\n \"\"\"Form to manage access of teams to projects.\"\"\"\n\n class Meta:\n model = Team\n fields = [\"projects\"]\n\n def __init__(self, *args, **kwargs):\n self.organization = kwargs.pop(\"organization\", None)\n super().__init__(*args, **kwargs)\n self.fields[\"projects\"] = forms.ModelMultipleChoiceField(\n queryset=self.organization.projects,\n widget=forms.CheckboxSelectMultiple,\n required=False,\n )\n\n\nclass OrganizationTeamMemberForm(forms.Form):\n\n \"\"\"Form to manage all members of the organization.\"\"\"\n\n username_or_email = forms.CharField(label=_(\"Email address or username\"))\n\n def __init__(self, *args, **kwargs):\n self.team = kwargs.pop(\"team\", None)\n self.request = kwargs.pop(\"request\", None)\n super().__init__(*args, **kwargs)\n\n def clean_username_or_email(self):\n \"\"\"\n Validate the user to invite to.\n\n We search for an existing user by username or email,\n if none is found, we try to validate if the input is an email,\n in that case we send an invitation to that email.\n \"\"\"\n username = self.cleaned_data[\"username_or_email\"]\n user = User.objects.filter(\n Q(username=username)\n | Q(emailaddress__verified=True, emailaddress__email=username)\n ).first()\n\n if user:\n return self.validate_member_user(user)\n\n # If it's a valid email,\n # then try sending an invitation to it.\n try:\n validator = EmailValidator(code=\"lookup not an email\")\n validator(username)\n return username\n except ValidationError as error:\n if error.code != \"lookup not an email\":\n raise\n\n raise 
forms.ValidationError(\n _(\"User %(username)s does not exist\"), params={\"username\": username}\n )\n\n def validate_member_user(self, member):\n \"\"\"Verify duplicate team member doesn't already exists.\"\"\"\n if TeamMember.objects.filter(team=self.team, member=member).exists():\n raise forms.ValidationError(\n _(\"User is already a team member\"),\n )\n return member\n\n def save(self):\n \"\"\"Create an invitation only if the user isn't already a member.\"\"\"\n user = self.cleaned_data[\"username_or_email\"]\n if isinstance(user, User):\n # If the user is already a member or the organization\n # don't create an invitation.\n if (\n AdminPermission.members(self.team.organization)\n .filter(pk=user.pk)\n .exists()\n ):\n member = self.team.organization.add_member(user, self.team)\n if user != self.request.user:\n member.send_add_notification(self.request)\n return user\n invitation, _ = Invitation.objects.invite(\n from_user=self.request.user,\n to_user=user,\n obj=self.team,\n request=self.request,\n )\n return invitation\n invitation, _ = Invitation.objects.invite(\n from_user=self.request.user,\n to_email=user,\n obj=self.team,\n request=self.request,\n )\n return invitation\n", "path": "readthedocs/organizations/forms.py" } ]
diff --git a/readthedocs/organizations/forms.py b/readthedocs/organizations/forms.py index 0120c05ae5a..85fc7ae3818 100644 --- a/readthedocs/organizations/forms.py +++ b/readthedocs/organizations/forms.py @@ -208,6 +208,7 @@ def __init__(self, *args, **kwargs): self.fields["projects"] = forms.ModelMultipleChoiceField( queryset=self.organization.projects, widget=forms.CheckboxSelectMultiple, + required=False, )
vyperlang__vyper-2905
Missing @view decorator for interface ERC20Detailed.py

### Version Information

* vyper Version (output of `vyper --version`): 0.3.3
* OS: linux
* Python Version (output of `python --version`): Python 3.9.5

### What's your issue about?

**Issue**

Error using `ERC20Detailed.py` as an interface in a Vyper contract. Trying to compile the following snippet produces the error below.

```
# @version 0.3.3
from vyper.interfaces import ERC20Detailed

@view
@external
def getSymbol() -> String[32]:
    return ERC20Detailed(0x5f3b5DfEb7B28CDbD7FAba78963EE202a494e2A2).symbol()
```

**Error**

```
vyper.exceptions.StateAccessViolation: May not call state modifying function 'symbol' within a constant function.
```

**Reason**

This happens because `ERC20Detailed.py` does not declare its interface functions with the `@view` decorator.

### How can it be fixed?

Add the `@view` decorator to the interface in `vyper.builtin_interfaces.ERC20Detailed.py`:

```
@external
@view
def name() -> String[1]:
    pass

@external
@view
def symbol() -> String[1]:
    pass

@external
@view
def decimals() -> uint8:
    pass
```

**Why?**

Running `vyper -f interface examples/tokens/ERC20.vy` generates the following:

```
...
@view
@external
def name() -> String[32]:
    pass

@view
@external
def symbol() -> String[32]:
    pass

@view
@external
def decimals() -> uint8:
    pass
...
```

Adding the `@view` decorator to `vyper.builtin_interfaces.ERC20Detailed.py` would make the interface consistent.
[ { "content": "\"\"\"\nNOTE: interface uses `String[1]` where 1 is the lower bound of the string returned by the function.\n For end-users this means they can't use `implements: ERC20Detailed` unless their implementation\n uses a value n >= 1. Regardless this is fine as one can't do String[0] where n == 0.\n\"\"\"\n\ninterface_code = \"\"\"\n@external\ndef name() -> String[1]:\n pass\n\n@external\ndef symbol() -> String[1]:\n pass\n\n@external\ndef decimals() -> uint8:\n pass\n\"\"\"\n", "path": "vyper/builtin_interfaces/ERC20Detailed.py" } ]
[ { "content": "\"\"\"\nNOTE: interface uses `String[1]` where 1 is the lower bound of the string returned by the function.\n For end-users this means they can't use `implements: ERC20Detailed` unless their implementation\n uses a value n >= 1. Regardless this is fine as one can't do String[0] where n == 0.\n\"\"\"\n\ninterface_code = \"\"\"\n@view\n@external\ndef name() -> String[1]:\n pass\n\n@view\n@external\ndef symbol() -> String[1]:\n pass\n\n@view\n@external\ndef decimals() -> uint8:\n pass\n\"\"\"\n", "path": "vyper/builtin_interfaces/ERC20Detailed.py" } ]
diff --git a/vyper/builtin_interfaces/ERC20Detailed.py b/vyper/builtin_interfaces/ERC20Detailed.py index 23f4a8a844..03dd597e8a 100644 --- a/vyper/builtin_interfaces/ERC20Detailed.py +++ b/vyper/builtin_interfaces/ERC20Detailed.py @@ -5,14 +5,17 @@ """ interface_code = """ +@view @external def name() -> String[1]: pass +@view @external def symbol() -> String[1]: pass +@view @external def decimals() -> uint8: pass
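With the decorators added above, the snippet from the issue report compiles. A quick local check is sketched below, assuming a vyper 0.3.x install that includes this patch; the contract body is the one from the issue, minus the version pragma.

```python
import vyper

SOURCE = """
from vyper.interfaces import ERC20Detailed

@view
@external
def getSymbol() -> String[32]:
    return ERC20Detailed(0x5f3b5DfEb7B28CDbD7FAba78963EE202a494e2A2).symbol()
"""

# Before the patch this raises StateAccessViolation ("May not call state
# modifying function 'symbol' within a constant function"); after it, the
# ABI is produced and symbol() is reported as a view function.
print(vyper.compile_code(SOURCE, ["abi"]))
```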
ivy-llc__ivy-14109
frombuffer
[ { "content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back\n\n\n@to_ivy_arrays_and_back\ndef empty(\n *args,\n size=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n memory_format=None,\n):\n if args and size:\n raise TypeError(\"empty() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list)) else args\n return ivy.empty(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef full(\n size,\n fill_value,\n *,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=None,\n):\n ret = ivy.full(\n shape=size, fill_value=fill_value, dtype=dtype, device=device, out=out\n )\n return ret\n\n\n@to_ivy_arrays_and_back\ndef ones(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):\n if args and size:\n raise TypeError(\"ones() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list)) else args\n return ivy.ones(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef ones_like_v_0p3p0_to_0p3p1(input, out=None):\n return ivy.ones_like(input, out=None)\n\n\n@to_ivy_arrays_and_back\ndef heaviside(input, values, *, out=None):\n return ivy.heaviside(input, values, out=out)\n\n\n@to_ivy_arrays_and_back\ndef ones_like_v_0p4p0_and_above(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.ones_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef zeros(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):\n if args and size:\n raise TypeError(\"zeros() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list)) else args\n return ivy.zeros(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef zeros_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.zeros_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef arange(\n *args,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n if len(args) == 1:\n end = args[0]\n start = 0\n step = 1\n elif len(args) == 3:\n start, end, step = args\n else:\n ivy.utils.assertions.check_true(\n len(args) == 1 or len(args) == 3,\n \"only 1 or 3 positional arguments are supported\",\n )\n return ivy.arange(start, end, step, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef range(\n *args,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n if len(args) == 1:\n end = args[0]\n start = 0\n step = 1\n elif len(args) == 3:\n start, end, step = args\n else:\n ivy.utils.assertions.check_true(\n len(args) == 1 or len(args) == 3,\n \"only 1 or 3 positional arguments are supported\",\n )\n range_vec = []\n elem = start\n while 1:\n range_vec = range_vec + [elem]\n elem += step\n if start == end:\n break\n if start < end:\n if elem > end:\n break\n else:\n if elem < end:\n break\n return ivy.array(range_vec, dtype=dtype, 
device=device)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef linspace(\n start,\n end,\n steps,\n *,\n out=None,\n dtype=None,\n device=None,\n layout=None,\n requires_grad=False,\n):\n ret = ivy.linspace(start, end, num=steps, dtype=dtype, device=device, out=out)\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef logspace(\n start,\n end,\n steps,\n *,\n base=10.0,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n ret = ivy.logspace(\n start, end, num=steps, base=base, dtype=dtype, device=device, out=out\n )\n return ret\n\n\n@to_ivy_arrays_and_back\ndef eye(\n n, m=None, *, out=None, dtype=None, layout=None, device=None, requires_grad=False\n):\n ret = ivy.eye(n_rows=n, n_columns=m, dtype=dtype, device=device, out=out)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef from_dlpack(ext_tensor):\n return ivy.from_dlpack(ext_tensor)\n\n\n@to_ivy_arrays_and_back\ndef empty_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.empty_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef full_like(\n input,\n fill_value,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n return ivy.full_like(input, fill_value, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef as_tensor(\n data,\n *,\n dtype=None,\n device=None,\n):\n return ivy.asarray(data, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef from_numpy(data, /):\n return ivy.asarray(data, dtype=ivy.dtype(data))\n\n\nfrom_numpy.supported_dtypes = (\"ndarray\",)\n\n\n@to_ivy_arrays_and_back\ndef as_strided(input, size, stride, storage_offset=None):\n ind = ivy.array([0], dtype=ivy.int64)\n for i, (size_i, stride_i) in enumerate(zip(size, stride)):\n r_size = [1] * len(stride)\n r_size[i] = -1\n ind = ind + ivy.reshape(ivy.arange(size_i), r_size) * stride_i\n if storage_offset:\n ind = ind + storage_offset\n return ivy.gather(ivy.flatten(input), ind)\n\n\n@to_ivy_arrays_and_back\ndef tensor(\n data,\n *,\n dtype=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n):\n return ivy.array(data, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef asarray(\n obj,\n *,\n dtype=None,\n device=None,\n copy=None,\n):\n return ivy.asarray(obj, copy=copy, dtype=dtype, device=device)\n", "path": "ivy/functional/frontends/torch/creation_ops.py" } ]
[ { "content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back\n\n\n@to_ivy_arrays_and_back\ndef empty(\n *args,\n size=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n memory_format=None,\n):\n if args and size:\n raise TypeError(\"empty() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list)) else args\n return ivy.empty(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef full(\n size,\n fill_value,\n *,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=None,\n):\n ret = ivy.full(\n shape=size, fill_value=fill_value, dtype=dtype, device=device, out=out\n )\n return ret\n\n\n@to_ivy_arrays_and_back\ndef ones(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):\n if args and size:\n raise TypeError(\"ones() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list)) else args\n return ivy.ones(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef ones_like_v_0p3p0_to_0p3p1(input, out=None):\n return ivy.ones_like(input, out=None)\n\n\n@to_ivy_arrays_and_back\ndef heaviside(input, values, *, out=None):\n return ivy.heaviside(input, values, out=out)\n\n\n@to_ivy_arrays_and_back\ndef ones_like_v_0p4p0_and_above(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.ones_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef zeros(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):\n if args and size:\n raise TypeError(\"zeros() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list)) else args\n return ivy.zeros(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef zeros_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.zeros_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef arange(\n *args,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n if len(args) == 1:\n end = args[0]\n start = 0\n step = 1\n elif len(args) == 3:\n start, end, step = args\n else:\n ivy.utils.assertions.check_true(\n len(args) == 1 or len(args) == 3,\n \"only 1 or 3 positional arguments are supported\",\n )\n return ivy.arange(start, end, step, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef range(\n *args,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n if len(args) == 1:\n end = args[0]\n start = 0\n step = 1\n elif len(args) == 3:\n start, end, step = args\n else:\n ivy.utils.assertions.check_true(\n len(args) == 1 or len(args) == 3,\n \"only 1 or 3 positional arguments are supported\",\n )\n range_vec = []\n elem = start\n while 1:\n range_vec = range_vec + [elem]\n elem += step\n if start == end:\n break\n if start < end:\n if elem > end:\n break\n else:\n if elem < end:\n break\n return ivy.array(range_vec, dtype=dtype, 
device=device)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef linspace(\n start,\n end,\n steps,\n *,\n out=None,\n dtype=None,\n device=None,\n layout=None,\n requires_grad=False,\n):\n ret = ivy.linspace(start, end, num=steps, dtype=dtype, device=device, out=out)\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef logspace(\n start,\n end,\n steps,\n *,\n base=10.0,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n ret = ivy.logspace(\n start, end, num=steps, base=base, dtype=dtype, device=device, out=out\n )\n return ret\n\n\n@to_ivy_arrays_and_back\ndef eye(\n n, m=None, *, out=None, dtype=None, layout=None, device=None, requires_grad=False\n):\n ret = ivy.eye(n_rows=n, n_columns=m, dtype=dtype, device=device, out=out)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef from_dlpack(ext_tensor):\n return ivy.from_dlpack(ext_tensor)\n\n\n@to_ivy_arrays_and_back\ndef empty_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.empty_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef full_like(\n input,\n fill_value,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n return ivy.full_like(input, fill_value, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef as_tensor(\n data,\n *,\n dtype=None,\n device=None,\n):\n return ivy.asarray(data, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef from_numpy(data, /):\n return ivy.asarray(data, dtype=ivy.dtype(data))\n\n\nfrom_numpy.supported_dtypes = (\"ndarray\",)\n\n\n@to_ivy_arrays_and_back\ndef as_strided(input, size, stride, storage_offset=None):\n ind = ivy.array([0], dtype=ivy.int64)\n for i, (size_i, stride_i) in enumerate(zip(size, stride)):\n r_size = [1] * len(stride)\n r_size[i] = -1\n ind = ind + ivy.reshape(ivy.arange(size_i), r_size) * stride_i\n if storage_offset:\n ind = ind + storage_offset\n return ivy.gather(ivy.flatten(input), ind)\n\n\n@to_ivy_arrays_and_back\ndef tensor(\n data,\n *,\n dtype=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n):\n return ivy.array(data, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef asarray(\n obj,\n *,\n dtype=None,\n device=None,\n copy=None,\n):\n return ivy.asarray(obj, copy=copy, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef frombuffer(\n buffer, \n *, \n dtype,\n count=-1,\n offset=0,\n requires_grad=False,\n):\n return ivy.frombuffer(buffer, dtype=dtype, count=count, offset=offset)\n", "path": "ivy/functional/frontends/torch/creation_ops.py" } ]
diff --git a/ivy/functional/frontends/torch/creation_ops.py b/ivy/functional/frontends/torch/creation_ops.py index 458d7e00a8de8..8231d6368b7fc 100644 --- a/ivy/functional/frontends/torch/creation_ops.py +++ b/ivy/functional/frontends/torch/creation_ops.py @@ -285,3 +285,15 @@ def asarray( copy=None, ): return ivy.asarray(obj, copy=copy, dtype=dtype, device=device) + + +@to_ivy_arrays_and_back +def frombuffer( + buffer, + *, + dtype, + count=-1, + offset=0, + requires_grad=False, +): + return ivy.frombuffer(buffer, dtype=dtype, count=count, offset=offset) diff --git a/ivy_tests/test_ivy/test_frontends/test_torch/test_creation_ops.py b/ivy_tests/test_ivy/test_frontends/test_torch/test_creation_ops.py index af62da7f03174..a5b373590888c 100644 --- a/ivy_tests/test_ivy/test_frontends/test_torch/test_creation_ops.py +++ b/ivy_tests/test_ivy/test_frontends/test_torch/test_creation_ops.py @@ -2,6 +2,7 @@ import ivy from hypothesis import strategies as st, assume import math +import numpy as np # local import ivy_tests.test_ivy.helpers as helpers @@ -688,3 +689,48 @@ def test_torch_from_dlpack( fn_tree=fn_tree, on_device=on_device, ) + + [email protected] +def _get_dtype_buffer_count_offset(draw): + dtype, value = draw( + helpers.dtype_and_values( + available_dtypes=helpers.get_dtypes("valid"), + ) + ) + value = np.array(value) + length = value.size + value = value.tobytes() + + offset = draw(helpers.ints(min_value=0, max_value=length - 1)) + count = draw(helpers.ints(min_value=-(2**30), max_value=length - offset)) + if count == 0: + count = -1 + offset = offset * np.dtype(dtype[0]).itemsize + + return dtype, value, count, offset + + +@handle_frontend_test( + fn_tree="torch.frombuffer", + dtype_buffer_count_offset=_get_dtype_buffer_count_offset(), +) +def test_torch_frombuffer( + dtype_buffer_count_offset, + test_flags, + frontend, + fn_tree, + on_device, +): + input_dtype, buffer, count, offset = dtype_buffer_count_offset + helpers.test_frontend_function( + input_dtypes=input_dtype, + test_flags=test_flags, + on_device=on_device, + frontend=frontend, + fn_tree=fn_tree, + buffer=buffer, + dtype=input_dtype[0], + count=count, + offset=offset, + )
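For reference, the frontend added in this diff mirrors `torch.frombuffer`, which reinterprets any object exposing the buffer protocol as a 1-D tensor sharing the buffer's memory; `offset` is counted in bytes and `count` in elements. A small usage sketch with arbitrary example values:

```python
import numpy as np
import torch

# 4 int32 values -> 16 raw bytes; bytearray keeps the buffer writable.
raw = bytearray(np.array([1, 2, 3, 4], dtype=np.int32).tobytes())

# Skip the first int32 (offset=4 bytes) and read two elements.
t = torch.frombuffer(raw, dtype=torch.int32, count=2, offset=4)
print(t)  # tensor([2, 3], dtype=torch.int32)
```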
pymedusa__Medusa-1543
[APP SUBMITTED]: ==================================================================== ### INFO **Python Version**: `2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:19:22) [MSC v.1500 32 bit (Intel)]` **Operating System**: `Windows-8.1-6.3.9600` **Locale**: `cp1252` **Branch**: [master](../tree/master) **Commit**: PyMedusa/SickRage@9c6346bedbf9deec3efc4ea60cf0f9bb5ce818b5 **Link to Log**: https://gist.github.com/a98d151bbf9b8345c030445e3fdfe507 ### ERROR <pre> 2016-09-12 22:30:22 ERROR Thread-19 :: [9c6346b] Failed doing web ui callback: Traceback (most recent call last): File "C:\Users\uTorrentVM\SickRage\sickbeard\server\web\core\base.py", line 270, in async_call result = function(**kwargs) File "C:\Users\uTorrentVM\SickRage\sickbeard\server\web\home\post_process.py", line 50, in processEpisode is_priority=argToBool(is_priority), delete_on=argToBool(delete_on), failed=argToBool(failed), proc_type=type, ignore_subs=argToBool(ignore_subs) File "C:\Users\uTorrentVM\SickRage\sickbeard\processTV.py", line 303, in processDir process_media(processPath, videoFiles, nzbName, process_method, force, is_priority, ignore_subs, result) File "C:\Users\uTorrentVM\SickRage\sickbeard\processTV.py", line 571, in process_media if already_postprocessed(processPath, cur_video_file, force, result): File "C:\Users\uTorrentVM\SickRage\sickbeard\processTV.py", line 531, in already_postprocessed parse_result = NameParser(try_indexers=True).parse(dirName) File "C:\Users\uTorrentVM\SickRage\sickbeard\name_parser\parser.py", line 235, in parse result = self._parse_string(name) File "C:\Users\uTorrentVM\SickRage\sickbeard\name_parser\parser.py", line 62, in _parse_string guess = guessit.guessit(name, dict(show_type=self.show_type)) File "C:\Users\uTorrentVM\SickRage\sickbeard\name_parser\guessit_parser.py", line 99, in guessit return default_api.guessit(name, options=final_options) File "C:\Users\uTorrentVM\SickRage\lib\guessit\api.py", line 113, in guessit raise GuessitException(string, options) GuessitException: An internal error has occured in guessit. 
===================== Guessit Exception Report ===================== version=2.1.0.dev0 string=C:\Users\uTorrentVM\Downloads\Complete\sb\[HorribleSubs]_Mob_Psycho_100_-_10_[1080p]_mkv_[5_7]_-_`[HorribleSubs]_Mob_Psycho_100_-_10_[1080p]_vol03+04_par2` options={'allowed_languages': ['ru', 'hu', 'en', 'nl', 'pt', 'de', 'jp', 'sv', 'it', 'fr', 'es', 'uk', 'ro', 'pl', 'he'], 'expected_group': ['re:\\bCDD\\b', 're:\\bF4ST3R\\b', 're:\\bTGNF4ST\\b', 're:\\bNovaRip\\b', 're:\\bRiPRG\\b', 're:\\bPtM\\b', 're:\\bbyEMP\\b', 're:\\b4EVERHD\\b', 're:\\bPOURMOi\\b', 're:\\bPARTiCLE\\b', 're:\\bTV2LAX9\\b', 're:\\bCDP\\b', 're:\\bELITETORRENT\\b', 're:\\bRipPourBox\\b', 're:\\bF4ST\\b', 're:\\bHDD\\b'], 'expected_title': ['re:(?<![^/\\\\])\\w+ it\\b', 're:\\bMob +Psycho +100\\b'], 'allowed_countries': ['gb', 'us'], 'episode_prefer_number': False, 'show_type': None, 'type': u'episode', 'implicit': True} -------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\uTorrentVM\SickRage\lib\guessit\api.py", line 102, in guessit matches = self.rebulk.matches(string, options) File "C:\Users\uTorrentVM\SickRage\lib\rebulk\rebulk.py", line 275, in matches self._execute_rules(matches, context) File "C:\Users\uTorrentVM\SickRage\lib\rebulk\rebulk.py", line 306, in _execute_rules rules.execute_all_rules(matches, context) File "C:\Users\uTorrentVM\SickRage\lib\rebulk\rules.py", line 318, in execute_all_rules when_response = execute_rule(rule, matches, context) File "C:\Users\uTorrentVM\SickRage\lib\rebulk\rules.py", line 339, in execute_rule when_response = rule.when(matches, context) File "C:\Users\uTorrentVM\SickRage\sickbeard\name_parser\rules\rules.py", line 1006, in when predicate=lambda match: (match.name == 'episode' and File "C:\Users\uTorrentVM\SickRage\lib\rebulk\match.py", line 115, in next current = match.start + 1 AttributeError: 'NoneType' object has no attribute 'start' -------------------------------------------------------------------- Please report at https://github.com/guessit-io/guessit/issues. 
==================================================================== Traceback (most recent call last): File "C:\Users\uTorrentVM\SickRage\sickbeard\server\web\core\base.py", line 270, in async_call result = function(**kwargs) File "C:\Users\uTorrentVM\SickRage\sickbeard\server\web\home\post_process.py", line 50, in processEpisode is_priority=argToBool(is_priority), delete_on=argToBool(delete_on), failed=argToBool(failed), proc_type=type, ignore_subs=argToBool(ignore_subs) File "C:\Users\uTorrentVM\SickRage\sickbeard\processTV.py", line 303, in processDir process_media(processPath, videoFiles, nzbName, process_method, force, is_priority, ignore_subs, result) File "C:\Users\uTorrentVM\SickRage\sickbeard\processTV.py", line 571, in process_media if already_postprocessed(processPath, cur_video_file, force, result): File "C:\Users\uTorrentVM\SickRage\sickbeard\processTV.py", line 531, in already_postprocessed parse_result = NameParser(try_indexers=True).parse(dirName) File "C:\Users\uTorrentVM\SickRage\sickbeard\name_parser\parser.py", line 235, in parse result = self._parse_string(name) File "C:\Users\uTorrentVM\SickRage\sickbeard\name_parser\parser.py", line 62, in _parse_string guess = guessit.guessit(name, dict(show_type=self.show_type)) File "C:\Users\uTorrentVM\SickRage\sickbeard\name_parser\guessit_parser.py", line 99, in guessit return default_api.guessit(name, options=final_options) File "C:\Users\uTorrentVM\SickRage\lib\guessit\api.py", line 113, in guessit raise GuessitException(string, options) GuessitException: An internal error has occured in guessit. ===================== Guessit Exception Report ===================== version=2.1.0.dev0 string=C:\Users\uTorrentVM\Downloads\Complete\sb\[HorribleSubs]_Mob_Psycho_100_-_10_[1080p]_mkv_[5_7]_-_`[HorribleSubs]_Mob_Psycho_100_-_10_[1080p]_vol03+04_par2` options={'allowed_languages': ['ru', 'hu', 'en', 'nl', 'pt', 'de', 'jp', 'sv', 'it', 'fr', 'es', 'uk', 'ro', 'pl', 'he'], 'expected_group': ['re:\\bCDD\\b', 're:\\bF4ST3R\\b', 're:\\bTGNF4ST\\b', 're:\\bNovaRip\\b', 're:\\bRiPRG\\b', 're:\\bPtM\\b', 're:\\bbyEMP\\b', 're:\\b4EVERHD\\b', 're:\\bPOURMOi\\b', 're:\\bPARTiCLE\\b', 're:\\bTV2LAX9\\b', 're:\\bCDP\\b', 're:\\bELITETORRENT\\b', 're:\\bRipPourBox\\b', 're:\\bF4ST\\b', 're:\\bHDD\\b'], 'expected_title': ['re:(?<![^/\\\\])\\w+ it\\b', 're:\\bMob +Psycho +100\\b'], 'allowed_countries': ['gb', 'us'], 'episode_prefer_number': False, 'show_type': None, 'type': u'episode', 'implicit': True} -------------------------------------------------------------------- Traceback (most recent call last): File "C:\Users\uTorrentVM\SickRage\lib\guessit\api.py", line 102, in guessit matches = self.rebulk.matches(string, options) File "C:\Users\uTorrentVM\SickRage\lib\rebulk\rebulk.py", line 275, in matches self._execute_rules(matches, context) File "C:\Users\uTorrentVM\SickRage\lib\rebulk\rebulk.py", line 306, in _execute_rules rules.execute_all_rules(matches, context) File "C:\Users\uTorrentVM\SickRage\lib\rebulk\rules.py", line 318, in execute_all_rules when_response = execute_rule(rule, matches, context) File "C:\Users\uTorrentVM\SickRage\lib\rebulk\rules.py", line 339, in execute_rule when_response = rule.when(matches, context) File "C:\Users\uTorrentVM\SickRage\sickbeard\name_parser\rules\rules.py", line 1006, in when predicate=lambda match: (match.name == 'episode' and File "C:\Users\uTorrentVM\SickRage\lib\rebulk\match.py", line 115, in next current = match.start + 1 AttributeError: 'NoneType' object has no attribute 'start' 
-------------------------------------------------------------------- Please report at https://github.com/guessit-io/guessit/issues. ==================================================================== </pre> --- _STAFF NOTIFIED_: @pymedusa/support @pymedusa/moderators
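The `AttributeError` at the bottom of the report comes from a custom rule passing a `None` anchor match into rebulk's `Matches.next()`, which immediately dereferences `match.start`. A defensive pattern for that kind of rule is sketched below; the rule, tag, and property names are made up for illustration and this is not the actual Medusa patch.

```python
# Sketch: guard a rebulk rule against a missing anchor match so Matches.next()
# is never called with None. Uses rebulk's public Rule / Matches API.
from rebulk import Rule, RemoveMatch


class SafeEpisodeRule(Rule):
    consequence = RemoveMatch

    def when(self, matches, context):
        season = matches.named('season', index=0)  # may be None for odd release names
        if not season:
            return  # nothing to anchor on: skip instead of crashing

        return matches.next(
            season,
            predicate=lambda match: match.name == 'episode',
            index=0,
        )
```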
[ { "content": "# coding=utf-8\n# Author: Nic Wolfe <[email protected]>\n#\n# This file is part of Medusa.\n#\n# Medusa is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Medusa is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Medusa. If not, see <http://www.gnu.org/licenses/>.\n\nimport os\nimport shutil\nimport stat\n\nimport medusa as app\nimport shutil_custom\nfrom unrar2 import RarFile\nfrom unrar2.rar_exceptions import (ArchiveHeaderBroken, FileOpenError, IncorrectRARPassword, InvalidRARArchive,\n InvalidRARArchiveUsage)\nfrom . import db, failed_processor, helpers, logger, notifiers, post_processor\nfrom .helper.common import is_sync_file, is_torrent_or_nzb_file, subtitle_extensions\nfrom .helper.encoding import ss\nfrom .helper.exceptions import EpisodePostProcessingFailedException, FailedPostProcessingFailedException, ex\nfrom .name_parser.parser import InvalidNameException, InvalidShowException, NameParser\nfrom .subtitles import accept_any, accept_unknown, get_embedded_subtitles\n\nshutil.copyfile = shutil_custom.copyfile_custom\n\n\nclass ProcessResult(object): # pylint: disable=too-few-public-methods\n def __init__(self):\n self.result = True\n self.output = ''\n self.missedfiles = []\n self.aggresult = True\n\n\ndef delete_folder(folder, check_empty=True):\n \"\"\"\n Remove a folder from the filesystem.\n\n :param folder: Path to folder to remove\n :param check_empty: Boolean, check if the folder is empty before removing it, defaults to True\n :return: True on success, False on failure\n \"\"\"\n # check if it's a folder\n if not os.path.isdir(folder):\n return False\n\n # check if it isn't TV_DOWNLOAD_DIR\n if app.TV_DOWNLOAD_DIR:\n if helpers.real_path(folder) == helpers.real_path(app.TV_DOWNLOAD_DIR):\n return False\n\n # check if it's empty folder when wanted checked\n if check_empty:\n check_files = os.listdir(folder)\n if check_files:\n logger.log(u\"Not deleting folder %s found the following files: %s\" %\n (folder, check_files), logger.INFO)\n return False\n\n try:\n logger.log(u\"Deleting folder (if it's empty): %s\" % folder)\n os.rmdir(folder)\n except (OSError, IOError) as e:\n logger.log(u\"Warning: unable to delete folder: %s: %s\" % (folder, ex(e)), logger.WARNING)\n return False\n else:\n try:\n logger.log(u\"Deleting folder: \" + folder)\n shutil.rmtree(folder)\n except (OSError, IOError) as e:\n logger.log(u\"Warning: unable to delete folder: %s: %s\" % (folder, ex(e)), logger.WARNING)\n return False\n\n return True\n\n\ndef delete_files(processPath, notwantedFiles, result, force=False):\n \"\"\"\n Remove files from filesystem.\n\n :param processPath: path to process\n :param notwantedFiles: files we do not want\n :param result: Processor results\n :param force: Boolean, force deletion, defaults to false\n \"\"\"\n if not result.result and force:\n result.output += logHelper(u\"Forcing deletion of files, even though last result was not successful\", logger.DEBUG)\n elif not result.result:\n return\n\n # Delete all file not needed\n for cur_file in notwantedFiles:\n\n cur_file_path = os.path.join(processPath, 
cur_file)\n\n if not os.path.isfile(cur_file_path):\n continue # Prevent error when a notwantedfiles is an associated files\n\n result.output += logHelper(u\"Deleting file: %s\" % cur_file, logger.DEBUG)\n\n # check first the read-only attribute\n file_attribute = os.stat(cur_file_path)[0]\n if not file_attribute & stat.S_IWRITE:\n # File is read-only, so make it writeable\n result.output += logHelper(u\"Changing ReadOnly Flag for file: %s\" % cur_file, logger.DEBUG)\n try:\n os.chmod(cur_file_path, stat.S_IWRITE)\n except OSError as e:\n result.output += logHelper(u\"Cannot change permissions of %s: %s\" %\n (cur_file_path, ex(e)), logger.DEBUG)\n try:\n os.remove(cur_file_path)\n except OSError as e:\n result.output += logHelper(u\"Unable to delete file %s: %s\" % (cur_file, e.strerror), logger.DEBUG)\n\n\ndef logHelper(logMessage, logLevel=logger.INFO):\n logger.log(logMessage, logLevel)\n return logMessage + u\"\\n\"\n\n\n#def OneRunPP():\n# isRunning = [False]\n#\n# def decorate(func):\n# @wraps(func)\n# def func_wrapper(*args, **kargs):\n# if isRunning[0]:\n# return logHelper(u'Post processor is already running', logger.WARNING)\n\n# isRunning[0] = True\n# ret = func(*args, **kargs)\n# isRunning[0] = False\n# return ret\n# return func_wrapper\n# return decorate\n\n\n# pylint: disable=too-many-arguments,too-many-branches,too-many-statements,too-many-locals\n#@OneRunPP()\ndef processDir(dirName, nzbName=None, process_method=None, force=False, is_priority=None,\n delete_on=False, failed=False, proc_type=\"auto\", ignore_subs=False):\n \"\"\"\n Scan through the files in dirName and process whatever media files are found.\n\n :param dirName: The folder name to look in\n :param nzbName: The NZB name which resulted in this folder being downloaded\n :param process_method: Process methodo: hardlink, move, softlink, etc.\n :param force: True to postprocess already postprocessed files\n :param is_priority: Boolean for whether or not is a priority download\n :param delete_on: Boolean for whether or not it should delete files\n :param failed: Boolean for whether or not the download failed\n :param proc_type: Type of postprocessing auto or manual\n :param ignore_subs: True to ignore setting 'postpone if no subs'\n \"\"\"\n\n result = ProcessResult()\n\n # if they passed us a real dir then assume it's the one we want\n if os.path.isdir(dirName):\n dirName = os.path.realpath(dirName)\n result.output += logHelper(u\"Processing folder %s\" % dirName, logger.DEBUG)\n\n # if the client and the application are not on the same machine translate the directory into a network directory\n elif all([app.TV_DOWNLOAD_DIR,\n os.path.isdir(app.TV_DOWNLOAD_DIR),\n os.path.normpath(dirName) == os.path.normpath(app.TV_DOWNLOAD_DIR)]):\n dirName = os.path.join(app.TV_DOWNLOAD_DIR, os.path.abspath(dirName).split(os.path.sep)[-1])\n result.output += logHelper(u\"Trying to use folder: %s \" % dirName, logger.DEBUG)\n\n # if we didn't find a real dir then quit\n if not os.path.isdir(dirName):\n result.output += logHelper(u\"Unable to figure out what folder to process. 
\"\n u\"If your downloader and Medusa aren't on the same PC \"\n u\"make sure you fill out your TV download dir in the config.\",\n logger.DEBUG)\n return result.output\n\n path, dirs, files = get_path_dir_files(dirName, nzbName, proc_type)\n\n files = [x for x in files if not is_torrent_or_nzb_file(x)]\n SyncFiles = [x for x in files if is_sync_file(x)]\n nzbNameOriginal = nzbName\n\n # Don't post process if files are still being synced and option is activated\n postpone = SyncFiles and app.POSTPONE_IF_SYNC_FILES\n\n # Warn user if 'postpone if no subs' is enabled. Will debug possible user issues with PP\n if app.POSTPONE_IF_NO_SUBS:\n result.output += logHelper(u\"Feature 'postpone post-processing if no subtitle available' is enabled\", logger.INFO)\n\n if not postpone:\n result.output += logHelper(u\"PostProcessing Path: %s\" % path, logger.INFO)\n result.output += logHelper(u\"PostProcessing Dirs: %s\" % str(dirs), logger.DEBUG)\n\n videoFiles = [x for x in files if helpers.isMediaFile(x)]\n rarFiles = [x for x in files if helpers.isRarFile(x)]\n rarContent = \"\"\n if rarFiles and not (app.POSTPONE_IF_NO_SUBS and videoFiles):\n # Unpack only if video file was not already extracted by 'postpone if no subs' feature\n rarContent = unRAR(path, rarFiles, force, result)\n files += rarContent\n videoFiles += [x for x in rarContent if helpers.isMediaFile(x)]\n videoInRar = [x for x in rarContent if helpers.isMediaFile(x)] if rarContent else ''\n\n result.output += logHelper(u\"PostProcessing Files: %s\" % files, logger.DEBUG)\n result.output += logHelper(u\"PostProcessing VideoFiles: %s\" % videoFiles, logger.DEBUG)\n result.output += logHelper(u\"PostProcessing RarContent: %s\" % rarContent, logger.DEBUG)\n result.output += logHelper(u\"PostProcessing VideoInRar: %s\" % videoInRar, logger.DEBUG)\n\n # If nzbName is set and there's more than one videofile in the folder, files will be lost (overwritten).\n nzbName = None if len(videoFiles) >= 2 else nzbName\n\n process_method = process_method if process_method else app.PROCESS_METHOD\n result.result = True\n\n # Don't Link media when the media is extracted from a rar in the same path\n if process_method in (u'hardlink', u'symlink') and videoInRar:\n process_media(path, videoInRar, nzbName, u'move', force, is_priority, ignore_subs, result)\n delete_files(path, rarContent, result)\n for video in set(videoFiles) - set(videoInRar):\n process_media(path, [video], nzbName, process_method, force, is_priority, ignore_subs, result)\n elif app.DELRARCONTENTS and videoInRar:\n process_media(path, videoInRar, nzbName, process_method, force, is_priority, ignore_subs, result)\n delete_files(path, rarContent, result, True)\n for video in set(videoFiles) - set(videoInRar):\n process_media(path, [video], nzbName, process_method, force, is_priority, ignore_subs, result)\n else:\n for video in videoFiles:\n process_media(path, [video], nzbName, process_method, force, is_priority, ignore_subs, result)\n\n else:\n result.output += logHelper(u\"Found temporary sync files: %s in path: %s\" % (SyncFiles, path))\n result.output += logHelper(u\"Skipping post processing for folder: %s\" % path)\n result.missedfiles.append(u\"%s : Syncfiles found\" % path)\n\n # Process Video File in all TV Subdir\n for curDir in [x for x in dirs if validateDir(path, x, nzbNameOriginal, failed, result)]:\n result.result = True\n\n for processPath, _, fileList in os.walk(os.path.join(path, curDir), topdown=False):\n\n if not validateDir(path, processPath, nzbNameOriginal, failed, 
result):\n continue\n\n SyncFiles = [x for x in fileList if is_sync_file(x)]\n\n # Don't post process if files are still being synced and option is activated\n postpone = SyncFiles and app.POSTPONE_IF_SYNC_FILES\n\n if not postpone:\n videoFiles = [x for x in fileList if helpers.isMediaFile(x)]\n rarFiles = [x for x in fileList if helpers.isRarFile(x)]\n rarContent = \"\"\n if rarFiles and not (app.POSTPONE_IF_NO_SUBS and videoFiles):\n # Unpack only if video file was not already extracted by 'postpone if no subs' feature\n rarContent = unRAR(processPath, rarFiles, force, result)\n fileList = set(fileList + rarContent)\n videoFiles += [x for x in rarContent if helpers.isMediaFile(x)]\n\n videoInRar = [x for x in rarContent if helpers.isMediaFile(x)] if rarContent else ''\n notwantedFiles = [x for x in fileList if x not in videoFiles]\n if notwantedFiles:\n result.output += logHelper(u\"Found unwanted files: %s\" % notwantedFiles, logger.DEBUG)\n\n # Don't Link media when the media is extracted from a rar in the same path\n if process_method in (u'hardlink', u'symlink') and videoInRar:\n process_media(processPath, videoInRar, nzbName, u'move', force, is_priority, ignore_subs, result)\n process_media(processPath, set(videoFiles) - set(videoInRar), nzbName, process_method, force,\n is_priority, ignore_subs, result)\n delete_files(processPath, rarContent, result)\n elif app.DELRARCONTENTS and videoInRar:\n process_media(processPath, videoInRar, nzbName, process_method, force, is_priority, ignore_subs, result)\n process_media(processPath, set(videoFiles) - set(videoInRar), nzbName, process_method, force,\n is_priority, ignore_subs, result)\n delete_files(processPath, rarContent, result, True)\n else:\n process_media(processPath, videoFiles, nzbName, process_method, force, is_priority, ignore_subs, result)\n\n # Delete all file not needed and avoid deleting files if Manual PostProcessing\n if not(process_method == u\"move\" and result.result) or (proc_type == u\"manual\" and not delete_on):\n continue\n\n delete_folder(os.path.join(processPath, u'@eaDir'))\n delete_files(processPath, notwantedFiles, result)\n\n if all([not app.NO_DELETE or proc_type == u\"manual\",\n process_method == u\"move\",\n os.path.normpath(processPath) != os.path.normpath(app.TV_DOWNLOAD_DIR)]):\n\n if delete_folder(processPath, check_empty=True):\n result.output += logHelper(u\"Deleted folder: %s\" % processPath, logger.DEBUG)\n\n else:\n result.output += logHelper(u\"Found temporary sync files: %s in path: %s\" % (SyncFiles, processPath))\n result.output += logHelper(u\"Skipping post processing for folder: %s\" % processPath)\n result.missedfiles.append(u\"%s : Syncfiles found\" % path)\n\n if result.aggresult:\n result.output += logHelper(u\"Successfully processed\")\n\n # Clean library from KODI after PP ended\n if app.KODI_LIBRARY_CLEAN_PENDING and notifiers.kodi_notifier.clean_library():\n app.KODI_LIBRARY_CLEAN_PENDING = False\n\n if result.missedfiles:\n result.output += logHelper(u\"I did encounter some unprocessable items: \")\n for missedfile in result.missedfiles:\n result.output += logHelper(u\"[%s]\" % missedfile)\n else:\n result.output += logHelper(u\"Problem(s) during processing, failed the following files/folders: \", logger.WARNING)\n for missedfile in result.missedfiles:\n result.output += logHelper(u\"[%s]\" % missedfile, logger.WARNING)\n\n return result.output\n\n\ndef validateDir(path, dirName, nzbNameOriginal, failed, result):\n \"\"\"\n Check if directory is valid for processing.\n\n :param 
path: Path to use\n :param dirName: Directory to check\n :param nzbNameOriginal: Original NZB name\n :param failed: Previously failed objects\n :param result: Previous results\n :return: True if dir is valid for processing, False if not\n \"\"\"\n dirName = ss(dirName)\n\n IGNORED_FOLDERS = [u'.AppleDouble', u'.@__thumb', u'@eaDir']\n folder_name = os.path.basename(dirName)\n if folder_name in IGNORED_FOLDERS:\n return False\n\n result.output += logHelper(u\"Processing folder \" + dirName, logger.DEBUG)\n\n if folder_name.startswith(u'_FAILED_'):\n result.output += logHelper(u\"The directory name indicates it failed to extract.\", logger.DEBUG)\n failed = True\n elif folder_name.startswith(u'_UNDERSIZED_'):\n result.output += logHelper(u\"The directory name indicates that it was previously rejected for being undersized.\", logger.DEBUG)\n failed = True\n elif folder_name.upper().startswith(u'_UNPACK'):\n result.output += logHelper(u\"The directory name indicates that this release is in the process of being unpacked.\", logger.DEBUG)\n result.missedfiles.append(u\"%s : Being unpacked\" % dirName)\n return False\n\n if failed:\n process_failed(os.path.join(path, dirName), nzbNameOriginal, result)\n result.missedfiles.append(u\"%s : Failed download\" % dirName)\n return False\n\n if helpers.is_hidden_folder(os.path.join(path, dirName)):\n result.output += logHelper(u\"Ignoring hidden folder: %s\" % dirName, logger.DEBUG)\n result.missedfiles.append(u\"%s : Hidden folder\" % dirName)\n return False\n\n # make sure the dir isn't inside a show dir\n main_db_con = db.DBConnection()\n sql_results = main_db_con.select(\"SELECT location FROM tv_shows\")\n\n for sqlShow in sql_results:\n if dirName.lower().startswith(os.path.realpath(sqlShow[\"location\"]).lower() + os.sep) or \\\n dirName.lower() == os.path.realpath(sqlShow[\"location\"]).lower():\n\n result.output += logHelper(\n u\"Cannot process an episode that's already been moved to its show dir, skipping \" + dirName,\n logger.WARNING)\n return False\n\n # Get the videofile list for the next checks\n allFiles = []\n allDirs = []\n for _, processdir, fileList in os.walk(os.path.join(path, dirName), topdown=False):\n allDirs += processdir\n allFiles += fileList\n\n videoFiles = [x for x in allFiles if helpers.isMediaFile(x)]\n allDirs.append(dirName)\n\n # check if the dir have at least one tv video file\n for video in videoFiles:\n try:\n NameParser().parse(video, cache_result=False)\n return True\n except (InvalidNameException, InvalidShowException) as error:\n result.output += logHelper(u\"{}\".format(error), logger.DEBUG)\n\n for proc_dir in allDirs:\n try:\n NameParser().parse(proc_dir, cache_result=False)\n return True\n except (InvalidNameException, InvalidShowException) as error:\n result.output += logHelper(u\"{}\".format(error), logger.DEBUG)\n\n if app.UNPACK:\n # Search for packed release\n packedFiles = [x for x in allFiles if helpers.isRarFile(x)]\n\n for packed in packedFiles:\n try:\n NameParser().parse(packed, cache_result=False)\n return True\n except (InvalidNameException, InvalidShowException) as error:\n result.output += logHelper(u\"{}\".format(error), logger.DEBUG)\n\n result.output += logHelper(u\"%s : No processable items found in the folder\" % dirName, logger.DEBUG)\n return False\n\n\ndef unRAR(path, rarFiles, force, result):\n \"\"\"\n Extract RAR files.\n\n :param path: Path to look for files in\n :param rarFiles: Names of RAR files\n :param force: process currently processing items\n :param result: Previous 
results\n :return: List of unpacked file names\n \"\"\"\n unpacked_files = []\n\n if app.UNPACK and rarFiles:\n\n result.output += logHelper(u\"Packed releases detected: %s\" % rarFiles, logger.DEBUG)\n\n for archive in rarFiles:\n\n result.output += logHelper(u\"Unpacking archive: %s\" % archive, logger.DEBUG)\n\n failure = None\n try:\n rar_handle = RarFile(os.path.join(path, archive))\n\n # Skip extraction if any file in archive has previously been extracted\n skip_file = False\n for file_in_archive in [os.path.basename(x.filename) for x in rar_handle.infolist() if not x.isdir]:\n if already_postprocessed(path, file_in_archive, force, result):\n result.output += logHelper(u\"Archive file already post-processed, extraction skipped: %s\" %\n file_in_archive, logger.DEBUG)\n skip_file = True\n break\n\n if skip_file:\n continue\n\n rar_handle.extract(path=path, withSubpath=False, overwrite=False)\n for x in rar_handle.infolist():\n if not x.isdir:\n basename = os.path.basename(x.filename)\n if basename not in unpacked_files:\n unpacked_files.append(basename)\n del rar_handle\n\n except ArchiveHeaderBroken:\n failure = (u'Archive Header Broken', u'Unpacking failed because the Archive Header is Broken')\n except IncorrectRARPassword:\n failure = (u'Incorrect RAR Password', u'Unpacking failed because of an Incorrect Rar Password')\n except FileOpenError:\n failure = (u'File Open Error, check the parent folder and destination file permissions.',\n u'Unpacking failed with a File Open Error (file permissions?)')\n except InvalidRARArchiveUsage:\n failure = (u'Invalid Rar Archive Usage', u'Unpacking Failed with Invalid Rar Archive Usage')\n except InvalidRARArchive:\n failure = (u'Invalid Rar Archive', u'Unpacking Failed with an Invalid Rar Archive Error')\n except Exception as e:\n failure = (ex(e), u'Unpacking failed for an unknown reason')\n\n if failure is not None:\n result.output += logHelper(u'Failed Unrar archive {}: {}'.format(archive, failure[0]), logger.ERROR)\n result.missedfiles.append(u'{} : Unpacking failed: {}'.format(archive, failure[1]))\n result.result = False\n continue\n\n result.output += logHelper(u\"UnRar content: %s\" % unpacked_files, logger.DEBUG)\n\n return unpacked_files\n\n\ndef already_postprocessed(dir_name, video_file, force, result):\n \"\"\"\n Check if we already post processed a file.\n\n :param dir_name: Directory a file resides in\n :param video_file: File name\n :param force: Force checking when already checking (currently unused)\n :param result: True if file is already postprocessed, False if not\n :return:\n \"\"\"\n if force:\n return False\n\n main_db_con = db.DBConnection()\n history_result = main_db_con.select(\n 'SELECT * FROM history '\n \"WHERE action LIKE '%04' \"\n 'AND resource LIKE ?',\n ['%' + video_file])\n\n if history_result:\n result.output += logHelper(u\"You're trying to post-process a file that has already \"\n u\"been processed, skipping: {0}\".format(video_file), logger.DEBUG)\n return True\n\n return False\n\n\ndef process_media(processPath, videoFiles, nzbName, process_method, force, is_priority, ignore_subs, result):\n \"\"\"\n Postprocess mediafiles.\n\n :param processPath: Path to postprocess in\n :param videoFiles: Filenames to look for and postprocess\n :param nzbName: Name of NZB file related\n :param process_method: auto/manual\n :param force: Postprocess currently postprocessing file\n :param is_priority: Boolean, is this a priority download\n :param result: Previous results\n :param ignore_subs: True to ignore setting 
'postpone if no subs'\n \"\"\"\n processor = None\n for cur_video_file in videoFiles:\n cur_video_file_path = os.path.join(processPath, cur_video_file)\n\n if already_postprocessed(processPath, cur_video_file, force, result):\n result.output += logHelper(u\"Skipping already processed file: %s\" % cur_video_file, logger.DEBUG)\n result.output += logHelper(u\"Skipping already processed dir: %s\" % processPath, logger.DEBUG)\n continue\n\n try:\n processor = post_processor.PostProcessor(cur_video_file_path, nzbName, process_method, is_priority)\n\n # This feature prevents PP for files that do not have subtitle associated with the video file\n if app.POSTPONE_IF_NO_SUBS:\n if not ignore_subs:\n if subtitles_enabled(cur_video_file_path, nzbName):\n embedded_subs = set() if app.IGNORE_EMBEDDED_SUBS else get_embedded_subtitles(cur_video_file_path)\n\n # If user don't want to ignore embedded subtitles and video has at least one, don't post pone PP\n if accept_unknown(embedded_subs):\n result.output += logHelper(u\"Found embedded unknown subtitles and we don't want to ignore them. \"\n u\"Continuing the post-process of this file: %s\" % cur_video_file)\n elif accept_any(embedded_subs):\n result.output += logHelper(u\"Found wanted embedded subtitles. \"\n u\"Continuing the post-process of this file: %s\" % cur_video_file)\n else:\n associated_files = processor.list_associated_files(cur_video_file_path, subtitles_only=True)\n if not [f for f in associated_files if f[-3:] in subtitle_extensions]:\n result.output += logHelper(u\"No subtitles associated. Postponing the post-process of this file:\"\n u\" %s\" % cur_video_file, logger.DEBUG)\n continue\n else:\n result.output += logHelper(u\"Found subtitles associated. \"\n u\"Continuing the post-process of this file: %s\" % cur_video_file)\n else:\n result.output += logHelper(u\"Subtitles disabled for this show. \"\n u\"Continuing the post-process of this file: %s\" % cur_video_file)\n else:\n result.output += logHelper(u\"Subtitles check was disabled for this episode in Manual PP. 
\"\n u\"Continuing the post-process of this file: %s\" % cur_video_file)\n\n result.result = processor.process()\n process_fail_message = u\"\"\n except EpisodePostProcessingFailedException as e:\n result.result = False\n process_fail_message = ex(e)\n\n if processor:\n result.output += processor.log\n\n if result.result:\n result.output += logHelper(u\"Processing succeeded for %s\" % cur_video_file_path)\n else:\n result.output += logHelper(u\"Processing failed for %s: %s\" % (cur_video_file_path, process_fail_message), logger.WARNING)\n result.missedfiles.append(u\"%s : Processing failed: %s\" % (cur_video_file_path, process_fail_message))\n result.aggresult = False\n\n\ndef get_path_dir_files(dirName, nzbName, proc_type):\n \"\"\"\n Get files in a path\n\n :param dirName: Directory to start in\n :param nzbName: NZB file, if present\n :param proc_type: auto/manual\n :return: a tuple of (path,dirs,files)\n \"\"\"\n path = u\"\"\n dirs = []\n files = []\n\n if dirName == app.TV_DOWNLOAD_DIR and not nzbName or proc_type == u\"manual\": # Scheduled Post Processing Active\n # Get at first all the subdir in the dirName\n for path, dirs, files in os.walk(dirName):\n break\n else:\n path, dirs = os.path.split(dirName) # Script Post Processing\n if not (nzbName is None or nzbName.endswith(u'.nzb')) and os.path.isfile(os.path.join(dirName, nzbName)): # For single torrent file without Dir\n dirs = []\n files = [os.path.join(dirName, nzbName)]\n else:\n dirs = [dirs]\n files = []\n\n return path, dirs, files\n\n\ndef process_failed(dirName, nzbName, result):\n \"\"\"Process a download that did not complete correctly.\"\"\"\n if app.USE_FAILED_DOWNLOADS:\n processor = None\n\n try:\n processor = failed_processor.FailedProcessor(dirName, nzbName)\n result.result = processor.process()\n process_fail_message = u\"\"\n except FailedPostProcessingFailedException as e:\n result.result = False\n process_fail_message = ex(e)\n\n if processor:\n result.output += processor.log\n\n if app.DELETE_FAILED and result.result:\n if delete_folder(dirName, check_empty=False):\n result.output += logHelper(u\"Deleted folder: %s\" % dirName, logger.DEBUG)\n\n if result.result:\n result.output += logHelper(u\"Failed Download Processing succeeded: (%s, %s)\" % (nzbName, dirName))\n else:\n result.output += logHelper(u\"Failed Download Processing failed: (%s, %s): %s\" %\n (nzbName, dirName, process_fail_message), logger.WARNING)\n\n\ndef subtitles_enabled(*args):\n \"\"\"Try to parse names to a show and check whether the show has subtitles enabled.\n\n :param args:\n :return:\n :rtype: bool\n \"\"\"\n for name in args:\n try:\n parse_result = NameParser().parse(name, cache_result=True)\n if parse_result.show.indexerid:\n main_db_con = db.DBConnection()\n sql_results = main_db_con.select(\"SELECT subtitles FROM tv_shows WHERE indexer_id = ? LIMIT 1\",\n [parse_result.show.indexerid])\n return bool(sql_results[0][\"subtitles\"]) if sql_results else False\n\n logger.log(u'Empty indexer ID for: {name}'.format(name=name), logger.WARNING)\n except (InvalidNameException, InvalidShowException):\n logger.log(u'Not enough information to parse filename into a valid show. Consider adding scene exceptions '\n u'or improve naming for: {name}'.format(name=name), logger.WARNING)\n return False\n", "path": "medusa/process_tv.py" } ]
[ { "content": "# coding=utf-8\n# Author: Nic Wolfe <[email protected]>\n#\n# This file is part of Medusa.\n#\n# Medusa is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Medusa is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Medusa. If not, see <http://www.gnu.org/licenses/>.\n\nimport os\nimport shutil\nimport stat\n\nimport medusa as app\nimport shutil_custom\nfrom unrar2 import RarFile\nfrom unrar2.rar_exceptions import (ArchiveHeaderBroken, FileOpenError, IncorrectRARPassword, InvalidRARArchive,\n InvalidRARArchiveUsage)\nfrom . import db, failed_processor, helpers, logger, notifiers, post_processor\nfrom .helper.common import is_sync_file, is_torrent_or_nzb_file, subtitle_extensions\nfrom .helper.encoding import ss\nfrom .helper.exceptions import EpisodePostProcessingFailedException, FailedPostProcessingFailedException, ex\nfrom .name_parser.parser import InvalidNameException, InvalidShowException, NameParser\nfrom .subtitles import accept_any, accept_unknown, get_embedded_subtitles\n\nshutil.copyfile = shutil_custom.copyfile_custom\n\n\nclass ProcessResult(object): # pylint: disable=too-few-public-methods\n def __init__(self):\n self.result = True\n self.output = ''\n self.missedfiles = []\n self.aggresult = True\n\n\ndef delete_folder(folder, check_empty=True):\n \"\"\"\n Remove a folder from the filesystem.\n\n :param folder: Path to folder to remove\n :param check_empty: Boolean, check if the folder is empty before removing it, defaults to True\n :return: True on success, False on failure\n \"\"\"\n # check if it's a folder\n if not os.path.isdir(folder):\n return False\n\n # check if it isn't TV_DOWNLOAD_DIR\n if app.TV_DOWNLOAD_DIR:\n if helpers.real_path(folder) == helpers.real_path(app.TV_DOWNLOAD_DIR):\n return False\n\n # check if it's empty folder when wanted checked\n if check_empty:\n check_files = os.listdir(folder)\n if check_files:\n logger.log(u\"Not deleting folder %s found the following files: %s\" %\n (folder, check_files), logger.INFO)\n return False\n\n try:\n logger.log(u\"Deleting folder (if it's empty): %s\" % folder)\n os.rmdir(folder)\n except (OSError, IOError) as e:\n logger.log(u\"Warning: unable to delete folder: %s: %s\" % (folder, ex(e)), logger.WARNING)\n return False\n else:\n try:\n logger.log(u\"Deleting folder: \" + folder)\n shutil.rmtree(folder)\n except (OSError, IOError) as e:\n logger.log(u\"Warning: unable to delete folder: %s: %s\" % (folder, ex(e)), logger.WARNING)\n return False\n\n return True\n\n\ndef delete_files(processPath, notwantedFiles, result, force=False):\n \"\"\"\n Remove files from filesystem.\n\n :param processPath: path to process\n :param notwantedFiles: files we do not want\n :param result: Processor results\n :param force: Boolean, force deletion, defaults to false\n \"\"\"\n if not result.result and force:\n result.output += logHelper(u\"Forcing deletion of files, even though last result was not successful\", logger.DEBUG)\n elif not result.result:\n return\n\n # Delete all file not needed\n for cur_file in notwantedFiles:\n\n cur_file_path = os.path.join(processPath, 
cur_file)\n\n if not os.path.isfile(cur_file_path):\n continue # Prevent error when a notwantedfiles is an associated files\n\n result.output += logHelper(u\"Deleting file: %s\" % cur_file, logger.DEBUG)\n\n # check first the read-only attribute\n file_attribute = os.stat(cur_file_path)[0]\n if not file_attribute & stat.S_IWRITE:\n # File is read-only, so make it writeable\n result.output += logHelper(u\"Changing ReadOnly Flag for file: %s\" % cur_file, logger.DEBUG)\n try:\n os.chmod(cur_file_path, stat.S_IWRITE)\n except OSError as e:\n result.output += logHelper(u\"Cannot change permissions of %s: %s\" %\n (cur_file_path, ex(e)), logger.DEBUG)\n try:\n os.remove(cur_file_path)\n except OSError as e:\n result.output += logHelper(u\"Unable to delete file %s: %s\" % (cur_file, e.strerror), logger.DEBUG)\n\n\ndef logHelper(logMessage, logLevel=logger.INFO):\n logger.log(logMessage, logLevel)\n return logMessage + u\"\\n\"\n\n\n#def OneRunPP():\n# isRunning = [False]\n#\n# def decorate(func):\n# @wraps(func)\n# def func_wrapper(*args, **kargs):\n# if isRunning[0]:\n# return logHelper(u'Post processor is already running', logger.WARNING)\n\n# isRunning[0] = True\n# ret = func(*args, **kargs)\n# isRunning[0] = False\n# return ret\n# return func_wrapper\n# return decorate\n\n\n# pylint: disable=too-many-arguments,too-many-branches,too-many-statements,too-many-locals\n#@OneRunPP()\ndef processDir(dirName, nzbName=None, process_method=None, force=False, is_priority=None,\n delete_on=False, failed=False, proc_type=\"auto\", ignore_subs=False):\n \"\"\"\n Scan through the files in dirName and process whatever media files are found.\n\n :param dirName: The folder name to look in\n :param nzbName: The NZB name which resulted in this folder being downloaded\n :param process_method: Process methodo: hardlink, move, softlink, etc.\n :param force: True to postprocess already postprocessed files\n :param is_priority: Boolean for whether or not is a priority download\n :param delete_on: Boolean for whether or not it should delete files\n :param failed: Boolean for whether or not the download failed\n :param proc_type: Type of postprocessing auto or manual\n :param ignore_subs: True to ignore setting 'postpone if no subs'\n \"\"\"\n\n result = ProcessResult()\n\n # if they passed us a real dir then assume it's the one we want\n if os.path.isdir(dirName):\n dirName = os.path.realpath(dirName)\n result.output += logHelper(u\"Processing folder %s\" % dirName, logger.DEBUG)\n\n # if the client and the application are not on the same machine translate the directory into a network directory\n elif all([app.TV_DOWNLOAD_DIR,\n os.path.isdir(app.TV_DOWNLOAD_DIR),\n os.path.normpath(dirName) == os.path.normpath(app.TV_DOWNLOAD_DIR)]):\n dirName = os.path.join(app.TV_DOWNLOAD_DIR, os.path.abspath(dirName).split(os.path.sep)[-1])\n result.output += logHelper(u\"Trying to use folder: %s \" % dirName, logger.DEBUG)\n\n # if we didn't find a real dir then quit\n if not os.path.isdir(dirName):\n result.output += logHelper(u\"Unable to figure out what folder to process. 
\"\n u\"If your downloader and Medusa aren't on the same PC \"\n u\"make sure you fill out your TV download dir in the config.\",\n logger.DEBUG)\n return result.output\n\n path, dirs, files = get_path_dir_files(dirName, nzbName, proc_type)\n\n files = [x for x in files if not is_torrent_or_nzb_file(x)]\n SyncFiles = [x for x in files if is_sync_file(x)]\n nzbNameOriginal = nzbName\n\n # Don't post process if files are still being synced and option is activated\n postpone = SyncFiles and app.POSTPONE_IF_SYNC_FILES\n\n # Warn user if 'postpone if no subs' is enabled. Will debug possible user issues with PP\n if app.POSTPONE_IF_NO_SUBS:\n result.output += logHelper(u\"Feature 'postpone post-processing if no subtitle available' is enabled\", logger.INFO)\n\n if not postpone:\n result.output += logHelper(u\"PostProcessing Path: %s\" % path, logger.INFO)\n result.output += logHelper(u\"PostProcessing Dirs: %s\" % str(dirs), logger.DEBUG)\n\n videoFiles = [x for x in files if helpers.isMediaFile(x)]\n rarFiles = [x for x in files if helpers.isRarFile(x)]\n rarContent = \"\"\n if rarFiles and not (app.POSTPONE_IF_NO_SUBS and videoFiles):\n # Unpack only if video file was not already extracted by 'postpone if no subs' feature\n rarContent = unRAR(path, rarFiles, force, result)\n files += rarContent\n videoFiles += [x for x in rarContent if helpers.isMediaFile(x)]\n videoInRar = [x for x in rarContent if helpers.isMediaFile(x)] if rarContent else ''\n\n result.output += logHelper(u\"PostProcessing Files: %s\" % files, logger.DEBUG)\n result.output += logHelper(u\"PostProcessing VideoFiles: %s\" % videoFiles, logger.DEBUG)\n result.output += logHelper(u\"PostProcessing RarContent: %s\" % rarContent, logger.DEBUG)\n result.output += logHelper(u\"PostProcessing VideoInRar: %s\" % videoInRar, logger.DEBUG)\n\n # If nzbName is set and there's more than one videofile in the folder, files will be lost (overwritten).\n nzbName = None if len(videoFiles) >= 2 else nzbName\n\n process_method = process_method if process_method else app.PROCESS_METHOD\n result.result = True\n\n # Don't Link media when the media is extracted from a rar in the same path\n if process_method in (u'hardlink', u'symlink') and videoInRar:\n process_media(path, videoInRar, nzbName, u'move', force, is_priority, ignore_subs, result)\n delete_files(path, rarContent, result)\n for video in set(videoFiles) - set(videoInRar):\n process_media(path, [video], nzbName, process_method, force, is_priority, ignore_subs, result)\n elif app.DELRARCONTENTS and videoInRar:\n process_media(path, videoInRar, nzbName, process_method, force, is_priority, ignore_subs, result)\n delete_files(path, rarContent, result, True)\n for video in set(videoFiles) - set(videoInRar):\n process_media(path, [video], nzbName, process_method, force, is_priority, ignore_subs, result)\n else:\n for video in videoFiles:\n process_media(path, [video], nzbName, process_method, force, is_priority, ignore_subs, result)\n\n else:\n result.output += logHelper(u\"Found temporary sync files: %s in path: %s\" % (SyncFiles, path))\n result.output += logHelper(u\"Skipping post processing for folder: %s\" % path)\n result.missedfiles.append(u\"%s : Syncfiles found\" % path)\n\n # Process Video File in all TV Subdir\n for curDir in [x for x in dirs if validateDir(path, x, nzbNameOriginal, failed, result)]:\n result.result = True\n\n for processPath, _, fileList in os.walk(os.path.join(path, curDir), topdown=False):\n\n if not validateDir(path, processPath, nzbNameOriginal, failed, 
result):\n continue\n\n SyncFiles = [x for x in fileList if is_sync_file(x)]\n\n # Don't post process if files are still being synced and option is activated\n postpone = SyncFiles and app.POSTPONE_IF_SYNC_FILES\n\n if not postpone:\n videoFiles = [x for x in fileList if helpers.isMediaFile(x)]\n rarFiles = [x for x in fileList if helpers.isRarFile(x)]\n rarContent = \"\"\n if rarFiles and not (app.POSTPONE_IF_NO_SUBS and videoFiles):\n # Unpack only if video file was not already extracted by 'postpone if no subs' feature\n rarContent = unRAR(processPath, rarFiles, force, result)\n fileList = set(fileList + rarContent)\n videoFiles += [x for x in rarContent if helpers.isMediaFile(x)]\n\n videoInRar = [x for x in rarContent if helpers.isMediaFile(x)] if rarContent else ''\n notwantedFiles = [x for x in fileList if x not in videoFiles]\n if notwantedFiles:\n result.output += logHelper(u\"Found unwanted files: %s\" % notwantedFiles, logger.DEBUG)\n\n # Don't Link media when the media is extracted from a rar in the same path\n if process_method in (u'hardlink', u'symlink') and videoInRar:\n process_media(processPath, videoInRar, nzbName, u'move', force, is_priority, ignore_subs, result)\n process_media(processPath, set(videoFiles) - set(videoInRar), nzbName, process_method, force,\n is_priority, ignore_subs, result)\n delete_files(processPath, rarContent, result)\n elif app.DELRARCONTENTS and videoInRar:\n process_media(processPath, videoInRar, nzbName, process_method, force, is_priority, ignore_subs, result)\n process_media(processPath, set(videoFiles) - set(videoInRar), nzbName, process_method, force,\n is_priority, ignore_subs, result)\n delete_files(processPath, rarContent, result, True)\n else:\n process_media(processPath, videoFiles, nzbName, process_method, force, is_priority, ignore_subs, result)\n\n # Delete all file not needed and avoid deleting files if Manual PostProcessing\n if not(process_method == u\"move\" and result.result) or (proc_type == u\"manual\" and not delete_on):\n continue\n\n delete_folder(os.path.join(processPath, u'@eaDir'))\n delete_files(processPath, notwantedFiles, result)\n\n if all([not app.NO_DELETE or proc_type == u\"manual\",\n process_method == u\"move\",\n os.path.normpath(processPath) != os.path.normpath(app.TV_DOWNLOAD_DIR)]):\n\n if delete_folder(processPath, check_empty=True):\n result.output += logHelper(u\"Deleted folder: %s\" % processPath, logger.DEBUG)\n\n else:\n result.output += logHelper(u\"Found temporary sync files: %s in path: %s\" % (SyncFiles, processPath))\n result.output += logHelper(u\"Skipping post processing for folder: %s\" % processPath)\n result.missedfiles.append(u\"%s : Syncfiles found\" % path)\n\n if result.aggresult:\n result.output += logHelper(u\"Successfully processed\")\n\n # Clean library from KODI after PP ended\n if app.KODI_LIBRARY_CLEAN_PENDING and notifiers.kodi_notifier.clean_library():\n app.KODI_LIBRARY_CLEAN_PENDING = False\n\n if result.missedfiles:\n result.output += logHelper(u\"I did encounter some unprocessable items: \")\n for missedfile in result.missedfiles:\n result.output += logHelper(u\"[%s]\" % missedfile)\n else:\n result.output += logHelper(u\"Problem(s) during processing, failed the following files/folders: \", logger.WARNING)\n for missedfile in result.missedfiles:\n result.output += logHelper(u\"[%s]\" % missedfile, logger.WARNING)\n\n return result.output\n\n\ndef validateDir(path, dirName, nzbNameOriginal, failed, result):\n \"\"\"\n Check if directory is valid for processing.\n\n :param 
path: Path to use\n :param dirName: Directory to check\n :param nzbNameOriginal: Original NZB name\n :param failed: Previously failed objects\n :param result: Previous results\n :return: True if dir is valid for processing, False if not\n \"\"\"\n dirName = ss(dirName)\n\n IGNORED_FOLDERS = [u'.AppleDouble', u'.@__thumb', u'@eaDir']\n folder_name = os.path.basename(dirName)\n if folder_name in IGNORED_FOLDERS:\n return False\n\n result.output += logHelper(u\"Processing folder \" + dirName, logger.DEBUG)\n\n if folder_name.startswith(u'_FAILED_'):\n result.output += logHelper(u\"The directory name indicates it failed to extract.\", logger.DEBUG)\n failed = True\n elif folder_name.startswith(u'_UNDERSIZED_'):\n result.output += logHelper(u\"The directory name indicates that it was previously rejected for being undersized.\", logger.DEBUG)\n failed = True\n elif folder_name.upper().startswith(u'_UNPACK'):\n result.output += logHelper(u\"The directory name indicates that this release is in the process of being unpacked.\", logger.DEBUG)\n result.missedfiles.append(u\"%s : Being unpacked\" % dirName)\n return False\n\n if failed:\n process_failed(os.path.join(path, dirName), nzbNameOriginal, result)\n result.missedfiles.append(u\"%s : Failed download\" % dirName)\n return False\n\n if helpers.is_hidden_folder(os.path.join(path, dirName)):\n result.output += logHelper(u\"Ignoring hidden folder: %s\" % dirName, logger.DEBUG)\n result.missedfiles.append(u\"%s : Hidden folder\" % dirName)\n return False\n\n # make sure the dir isn't inside a show dir\n main_db_con = db.DBConnection()\n sql_results = main_db_con.select(\"SELECT location FROM tv_shows\")\n\n for sqlShow in sql_results:\n if dirName.lower().startswith(os.path.realpath(sqlShow[\"location\"]).lower() + os.sep) or \\\n dirName.lower() == os.path.realpath(sqlShow[\"location\"]).lower():\n\n result.output += logHelper(\n u\"Cannot process an episode that's already been moved to its show dir, skipping \" + dirName,\n logger.WARNING)\n return False\n\n # Get the videofile list for the next checks\n allFiles = []\n allDirs = []\n for _, processdir, fileList in os.walk(os.path.join(path, dirName), topdown=False):\n allDirs += processdir\n allFiles += fileList\n\n videoFiles = [x for x in allFiles if helpers.isMediaFile(x)]\n allDirs.append(dirName)\n\n # check if the dir have at least one tv video file\n for video in videoFiles:\n try:\n NameParser().parse(video, cache_result=False)\n return True\n except (InvalidNameException, InvalidShowException) as error:\n result.output += logHelper(u\"{}\".format(error), logger.DEBUG)\n\n for proc_dir in allDirs:\n try:\n NameParser().parse(proc_dir, cache_result=False)\n return True\n except (InvalidNameException, InvalidShowException) as error:\n result.output += logHelper(u\"{}\".format(error), logger.DEBUG)\n\n if app.UNPACK:\n # Search for packed release\n packedFiles = [x for x in allFiles if helpers.isRarFile(x)]\n\n for packed in packedFiles:\n try:\n NameParser().parse(packed, cache_result=False)\n return True\n except (InvalidNameException, InvalidShowException) as error:\n result.output += logHelper(u\"{}\".format(error), logger.DEBUG)\n\n result.output += logHelper(u\"%s : No processable items found in the folder\" % dirName, logger.DEBUG)\n return False\n\n\ndef unRAR(path, rarFiles, force, result):\n \"\"\"\n Extract RAR files.\n\n :param path: Path to look for files in\n :param rarFiles: Names of RAR files\n :param force: process currently processing items\n :param result: Previous 
results\n :return: List of unpacked file names\n \"\"\"\n unpacked_files = []\n\n if app.UNPACK and rarFiles:\n\n result.output += logHelper(u\"Packed releases detected: %s\" % rarFiles, logger.DEBUG)\n\n for archive in rarFiles:\n\n result.output += logHelper(u\"Unpacking archive: %s\" % archive, logger.DEBUG)\n\n failure = None\n try:\n rar_handle = RarFile(os.path.join(path, archive))\n\n # Skip extraction if any file in archive has previously been extracted\n skip_file = False\n for file_in_archive in [os.path.basename(x.filename) for x in rar_handle.infolist() if not x.isdir]:\n if already_postprocessed(path, file_in_archive, force, result):\n result.output += logHelper(u\"Archive file already post-processed, extraction skipped: %s\" %\n file_in_archive, logger.DEBUG)\n skip_file = True\n break\n\n if skip_file:\n continue\n\n rar_handle.extract(path=path, withSubpath=False, overwrite=False)\n for x in rar_handle.infolist():\n if not x.isdir:\n basename = os.path.basename(x.filename)\n if basename not in unpacked_files:\n unpacked_files.append(basename)\n del rar_handle\n\n except ArchiveHeaderBroken:\n failure = (u'Archive Header Broken', u'Unpacking failed because the Archive Header is Broken')\n except IncorrectRARPassword:\n failure = (u'Incorrect RAR Password', u'Unpacking failed because of an Incorrect Rar Password')\n except FileOpenError:\n failure = (u'File Open Error, check the parent folder and destination file permissions.',\n u'Unpacking failed with a File Open Error (file permissions?)')\n except InvalidRARArchiveUsage:\n failure = (u'Invalid Rar Archive Usage', u'Unpacking Failed with Invalid Rar Archive Usage')\n except InvalidRARArchive:\n failure = (u'Invalid Rar Archive', u'Unpacking Failed with an Invalid Rar Archive Error')\n except Exception as e:\n failure = (ex(e), u'Unpacking failed for an unknown reason')\n\n if failure is not None:\n result.output += logHelper(u'Failed Unrar archive {}: {}'.format(archive, failure[0]), logger.ERROR)\n result.missedfiles.append(u'{} : Unpacking failed: {}'.format(archive, failure[1]))\n result.result = False\n continue\n\n result.output += logHelper(u\"UnRar content: %s\" % unpacked_files, logger.DEBUG)\n\n return unpacked_files\n\n\ndef already_postprocessed(dir_name, video_file, force, result):\n \"\"\"\n Check if we already post processed a file.\n\n :param dir_name: Directory a file resides in\n :param video_file: File name\n :param force: Force checking when already checking (currently unused)\n :param result: True if file is already postprocessed, False if not\n :return:\n \"\"\"\n if force:\n return False\n\n main_db_con = db.DBConnection()\n history_result = main_db_con.select(\n 'SELECT * FROM history '\n \"WHERE action LIKE '%04' \"\n 'AND resource LIKE ?',\n ['%' + video_file])\n\n if history_result:\n result.output += logHelper(u\"You're trying to post-process a file that has already \"\n u\"been processed, skipping: {0}\".format(video_file), logger.DEBUG)\n return True\n\n return False\n\n\ndef process_media(processPath, videoFiles, nzbName, process_method, force, is_priority, ignore_subs, result):\n \"\"\"\n Postprocess mediafiles.\n\n :param processPath: Path to postprocess in\n :param videoFiles: Filenames to look for and postprocess\n :param nzbName: Name of NZB file related\n :param process_method: auto/manual\n :param force: Postprocess currently postprocessing file\n :param is_priority: Boolean, is this a priority download\n :param result: Previous results\n :param ignore_subs: True to ignore setting 
'postpone if no subs'\n \"\"\"\n processor = None\n for cur_video_file in videoFiles:\n cur_video_file_path = os.path.join(processPath, cur_video_file)\n\n if already_postprocessed(processPath, cur_video_file, force, result):\n result.output += logHelper(u\"Skipping already processed file: %s\" % cur_video_file, logger.DEBUG)\n result.output += logHelper(u\"Skipping already processed dir: %s\" % processPath, logger.DEBUG)\n continue\n\n try:\n processor = post_processor.PostProcessor(cur_video_file_path, nzbName, process_method, is_priority)\n\n # This feature prevents PP for files that do not have subtitle associated with the video file\n if app.POSTPONE_IF_NO_SUBS:\n if not ignore_subs:\n if subtitles_enabled(cur_video_file_path, nzbName):\n embedded_subs = set() if app.IGNORE_EMBEDDED_SUBS else get_embedded_subtitles(cur_video_file_path)\n\n # If user don't want to ignore embedded subtitles and video has at least one, don't post pone PP\n if accept_unknown(embedded_subs):\n result.output += logHelper(u\"Found embedded unknown subtitles and we don't want to ignore them. \"\n u\"Continuing the post-process of this file: %s\" % cur_video_file)\n elif accept_any(embedded_subs):\n result.output += logHelper(u\"Found wanted embedded subtitles. \"\n u\"Continuing the post-process of this file: %s\" % cur_video_file)\n else:\n associated_files = processor.list_associated_files(cur_video_file_path, subtitles_only=True)\n if not [f for f in associated_files if f[-3:] in subtitle_extensions]:\n result.output += logHelper(u\"No subtitles associated. Postponing the post-process of this file:\"\n u\" %s\" % cur_video_file, logger.DEBUG)\n continue\n else:\n result.output += logHelper(u\"Found subtitles associated. \"\n u\"Continuing the post-process of this file: %s\" % cur_video_file)\n else:\n result.output += logHelper(u\"Subtitles disabled for this show. \"\n u\"Continuing the post-process of this file: %s\" % cur_video_file)\n else:\n result.output += logHelper(u\"Subtitles check was disabled for this episode in Manual PP. 
\"\n u\"Continuing the post-process of this file: %s\" % cur_video_file)\n\n result.result = processor.process()\n process_fail_message = u\"\"\n except EpisodePostProcessingFailedException as e:\n result.result = False\n process_fail_message = ex(e)\n\n if processor:\n result.output += processor.log\n\n if result.result:\n result.output += logHelper(u\"Processing succeeded for %s\" % cur_video_file_path)\n else:\n result.output += logHelper(u\"Processing failed for %s: %s\" % (cur_video_file_path, process_fail_message), logger.WARNING)\n result.missedfiles.append(u\"%s : Processing failed: %s\" % (cur_video_file_path, process_fail_message))\n result.aggresult = False\n\n\ndef get_path_dir_files(dirName, nzbName, proc_type):\n \"\"\"\n Get files in a path\n\n :param dirName: Directory to start in\n :param nzbName: NZB file, if present\n :param proc_type: auto/manual\n :return: a tuple of (path,dirs,files)\n \"\"\"\n path = u\"\"\n dirs = []\n files = []\n\n if dirName == app.TV_DOWNLOAD_DIR and not nzbName or proc_type == u\"manual\": # Scheduled Post Processing Active\n # Get at first all the subdir in the dirName\n for path, dirs, files in os.walk(dirName):\n break\n else:\n path, dirs = os.path.split(dirName) # Script Post Processing\n if not (nzbName is None or nzbName.endswith(u'.nzb')) and os.path.isfile(os.path.join(dirName, nzbName)): # For single torrent file without Dir\n dirs = []\n files = [os.path.join(dirName, nzbName)]\n else:\n dirs = [dirs]\n files = []\n\n return path, dirs, files\n\n\ndef process_failed(dirName, nzbName, result):\n \"\"\"Process a download that did not complete correctly.\"\"\"\n if app.USE_FAILED_DOWNLOADS:\n processor = None\n\n try:\n processor = failed_processor.FailedProcessor(dirName, nzbName)\n result.result = processor.process()\n process_fail_message = u\"\"\n except FailedPostProcessingFailedException as e:\n result.result = False\n process_fail_message = ex(e)\n\n if processor:\n result.output += processor.log\n\n if app.DELETE_FAILED and result.result:\n if delete_folder(dirName, check_empty=False):\n result.output += logHelper(u\"Deleted folder: %s\" % dirName, logger.DEBUG)\n\n if result.result:\n result.output += logHelper(u\"Failed Download Processing succeeded: (%s, %s)\" % (nzbName, dirName))\n else:\n result.output += logHelper(u\"Failed Download Processing failed: (%s, %s): %s\" %\n (nzbName, dirName, process_fail_message), logger.WARNING)\n\n\ndef subtitles_enabled(*args):\n \"\"\"Try to parse names to a show and check whether the show has subtitles enabled.\n\n :param args:\n :return:\n :rtype: bool\n \"\"\"\n for name in args:\n if not name:\n continue\n\n try:\n parse_result = NameParser().parse(name, cache_result=True)\n if parse_result.show.indexerid:\n main_db_con = db.DBConnection()\n sql_results = main_db_con.select(\"SELECT subtitles FROM tv_shows WHERE indexer_id = ? LIMIT 1\",\n [parse_result.show.indexerid])\n return bool(sql_results[0][\"subtitles\"]) if sql_results else False\n\n logger.log(u'Empty indexer ID for: {name}'.format(name=name), logger.WARNING)\n except (InvalidNameException, InvalidShowException):\n logger.log(u'Not enough information to parse filename into a valid show. Consider adding scene exceptions '\n u'or improve naming for: {name}'.format(name=name), logger.WARNING)\n return False\n", "path": "medusa/process_tv.py" } ]
diff --git a/medusa/process_tv.py b/medusa/process_tv.py index 439becf959..8969357c02 100644 --- a/medusa/process_tv.py +++ b/medusa/process_tv.py @@ -652,6 +652,9 @@ def subtitles_enabled(*args): :rtype: bool """ for name in args: + if not name: + continue + try: parse_result = NameParser().parse(name, cache_result=True) if parse_result.show.indexerid:
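For orientation inside the long file dump above: the only functional change in this PR is the three-line guard added to `subtitles_enabled()`. That function is called from `process_media()` as `subtitles_enabled(cur_video_file_path, nzbName)`, and `nzbName` is `None` in several code paths, so falsy names could previously reach the name parser. Below is a minimal sketch of the patched loop; `fake_parse` is a hypothetical stand-in for `NameParser().parse()` and is not Medusa code.

```python
def subtitles_enabled(*names):
    """Sketch of the patched loop: skip falsy names before handing them to the parser."""
    def fake_parse(name):
        # Hypothetical stand-in for NameParser().parse(name); it only understands
        # strings, so fake_parse(None) would raise a TypeError without the guard.
        if "S01E01" not in name:
            raise ValueError(name)
        return {"indexerid": 1, "subtitles": True}

    for name in names:
        if not name:          # the guard added by the patch
            continue
        try:
            result = fake_parse(name)
        except ValueError:
            continue          # unparsable name, try the next one
        if result["indexerid"]:
            return bool(result["subtitles"])
    return False


# Mirrors the call site in process_media(), where nzbName is often None.
print(subtitles_enabled(None, "Show.Name.S01E01.720p.mkv"))  # True; the None is skipped
```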
mkdocs__mkdocs-694
current_page.ancestors only contains the direct ancestor and not the full path of the page

I'm using the mkdocs theme and tried to enhance it with a breadcrumb trail. The page navigation is created automatically by mkdocs (I don't use the pages configuration since I have almost 300 pages). I copied and adapted the `breadcrumbs.html` file from the readthedocs theme and integrated it into `content.html`:

```
<ol class="breadcrumb">
    <li><a href="{{ homepage_url }}">Docs</a></li>
    {% if current_page %}
    {% for doc in current_page.ancestors %}
        {% if doc.link %}
        <li><a href="{{ doc.link|e }}">{{ doc.title }}</a></li>
        {% else %}
        <li>{{ doc.title }}</li>
        {% endif %}
    {% endfor %}
    {% endif %}
    {% if current_page %}<li>{{ current_page.title }}</li>{% endif %}
</ol>
```

My file path (starting from the `docs_dir`) is:
`beheerteam/diensten/algemeen/ActiveDirectory.md`

The generated breadcrumb trail is:
`Docs/algemeen/ActiveDirectory`

`algemeen` is the only part that originates from the loop `for doc in current_page.ancestors`.

Maybe this is a stupid question, or it is simply not possible, but I couldn't find it in the documentation. I'm just starting with mkdocs and couldn't work out from the source how this is supposed to behave.
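To make the expected behaviour concrete: for a source file at `beheerteam/diensten/algemeen/ActiveDirectory.md`, the template above needs `current_page.ancestors` to hold the whole chain of section headers, not just the immediate parent (which is why only `algemeen` appears in the reported trail). The following self-contained sketch is illustrative only — the `Node` class and the title list are not MkDocs internals — and shows ancestors being accumulated along the full path.

```python
class Node:
    """Illustrative stand-in for a MkDocs header/page entry."""
    def __init__(self, title):
        self.title = title
        self.ancestors = []


# Mirrors the reporter's path: beheerteam/diensten/algemeen/ActiveDirectory.md
titles = ["beheerteam", "diensten", "algemeen", "ActiveDirectory"]

chain = []    # ancestors collected so far, from the top level down
nodes = []
for title in titles:
    node = Node(title)
    node.ancestors = list(chain)   # full chain, not only the direct parent
    chain.append(node)
    nodes.append(node)

page = nodes[-1]
print(" / ".join(["Docs"] + [n.title for n in page.ancestors] + [page.title]))
# Docs / beheerteam / diensten / algemeen / ActiveDirectory
```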
[ { "content": "# coding: utf-8\n\n\"\"\"\nDeals with generating the site-wide navigation.\n\nThis consists of building a set of interlinked page and header objects.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport datetime\nimport logging\nimport os\n\nfrom mkdocs import utils, exceptions\n\nlog = logging.getLogger(__name__)\n\n\ndef filename_to_title(filename):\n \"\"\"\n Automatically generate a default title, given a filename.\n \"\"\"\n if utils.is_homepage(filename):\n return 'Home'\n\n return utils.filename_to_title(filename)\n\n\nclass SiteNavigation(object):\n def __init__(self, pages_config, use_directory_urls=True):\n self.url_context = URLContext()\n self.file_context = FileContext()\n self.nav_items, self.pages = _generate_site_navigation(\n pages_config, self.url_context, use_directory_urls)\n self.homepage = self.pages[0] if self.pages else None\n self.use_directory_urls = use_directory_urls\n\n def __str__(self):\n return ''.join([str(item) for item in self])\n\n def __iter__(self):\n return iter(self.nav_items)\n\n def walk_pages(self):\n \"\"\"\n Returns each page in the site in turn.\n\n Additionally this sets the active status of the pages and headers,\n in the site navigation, so that the rendered navbar can correctly\n highlight the currently active page and/or header item.\n \"\"\"\n page = self.homepage\n page.set_active()\n self.url_context.set_current_url(page.abs_url)\n self.file_context.set_current_path(page.input_path)\n yield page\n while page.next_page:\n page.set_active(False)\n page = page.next_page\n page.set_active()\n self.url_context.set_current_url(page.abs_url)\n self.file_context.set_current_path(page.input_path)\n yield page\n page.set_active(False)\n\n @property\n def source_files(self):\n if not hasattr(self, '_source_files'):\n self._source_files = set([page.input_path for page in self.pages])\n return self._source_files\n\n\nclass URLContext(object):\n \"\"\"\n The URLContext is used to ensure that we can generate the appropriate\n relative URLs to other pages from any given page in the site.\n\n We use relative URLs so that static sites can be deployed to any location\n without having to specify what the path component on the host will be\n if the documentation is not hosted at the root path.\n \"\"\"\n\n def __init__(self):\n self.base_path = '/'\n\n def set_current_url(self, current_url):\n self.base_path = os.path.dirname(current_url)\n\n def make_relative(self, url):\n \"\"\"\n Given a URL path return it as a relative URL,\n given the context of the current page.\n \"\"\"\n suffix = '/' if (url.endswith('/') and len(url) > 1) else ''\n # Workaround for bug on `os.path.relpath()` in Python 2.6\n if self.base_path == '/':\n if url == '/':\n # Workaround for static assets\n return '.'\n return url.lstrip('/')\n # Under Python 2.6, relative_path adds an extra '/' at the end.\n relative_path = os.path.relpath(url, start=self.base_path)\n relative_path = relative_path.rstrip('/') + suffix\n\n return utils.path_to_url(relative_path)\n\n\nclass FileContext(object):\n \"\"\"\n The FileContext is used to ensure that we can generate the appropriate\n full path for other pages given their relative path from a particular page.\n\n This is used when we have relative hyperlinks in the documentation, so that\n we can ensure that they point to markdown documents that actually exist\n in the `pages` config.\n \"\"\"\n def __init__(self):\n self.current_file = None\n self.base_path = ''\n\n def set_current_path(self, current_path):\n 
self.current_file = current_path\n self.base_path = os.path.dirname(current_path)\n\n def make_absolute(self, path):\n \"\"\"\n Given a relative file path return it as a POSIX-style\n absolute filepath, given the context of the current page.\n \"\"\"\n return os.path.normpath(os.path.join(self.base_path, path))\n\n\nclass Page(object):\n def __init__(self, title, url, path, url_context):\n\n self.title = title\n self.abs_url = url\n self.active = False\n self.url_context = url_context\n self.update_date = datetime.datetime.now().strftime(\"%Y-%m-%d\")\n\n # Relative paths to the input markdown file and output html file.\n self.input_path = path\n self.output_path = utils.get_html_path(path)\n\n # Links to related pages\n self.previous_page = None\n self.next_page = None\n self.ancestors = []\n\n @property\n def url(self):\n return self.url_context.make_relative(self.abs_url)\n\n @property\n def is_homepage(self):\n return utils.is_homepage(self.input_path)\n\n @property\n def is_top_level(self):\n return len(self.ancestors) == 0\n\n def __str__(self):\n return self.indent_print()\n\n def indent_print(self, depth=0):\n indent = ' ' * depth\n active_marker = ' [*]' if self.active else ''\n title = self.title if (self.title is not None) else '[blank]'\n return '%s%s - %s%s\\n' % (indent, title, self.abs_url, active_marker)\n\n def set_active(self, active=True):\n self.active = active\n for ancestor in self.ancestors:\n ancestor.set_active(active)\n\n\nclass Header(object):\n def __init__(self, title, children):\n self.title, self.children = title, children\n self.active = False\n self.ancestors = []\n\n def __str__(self):\n return self.indent_print()\n\n @property\n def is_top_level(self):\n return len(self.ancestors) == 0\n\n def indent_print(self, depth=0):\n indent = ' ' * depth\n active_marker = ' [*]' if self.active else ''\n ret = '%s%s%s\\n' % (indent, self.title, active_marker)\n for item in self.children:\n ret += item.indent_print(depth + 1)\n return ret\n\n def set_active(self, active=True):\n self.active = active\n for ancestor in self.ancestors:\n ancestor.set_active(active)\n\n\ndef _path_to_page(path, title, url_context, use_directory_urls):\n if title is None:\n title = filename_to_title(path.split(os.path.sep)[-1])\n url = utils.get_url_path(path, use_directory_urls)\n return Page(title=title, url=url, path=path,\n url_context=url_context)\n\n\ndef _follow(config_line, url_context, use_dir_urls, header=None, title=None):\n\n if isinstance(config_line, utils.string_types):\n path = os.path.normpath(config_line)\n page = _path_to_page(path, title, url_context, use_dir_urls)\n\n if header:\n page.ancestors = [header]\n header.children.append(page)\n\n yield page\n raise StopIteration\n\n elif not isinstance(config_line, dict):\n msg = (\"Line in 'page' config is of type {0}, dict or string \"\n \"expected. Config: {1}\").format(type(config_line), config_line)\n raise exceptions.ConfigurationError(msg)\n\n if len(config_line) > 1:\n raise exceptions.ConfigurationError(\n \"Page configs should be in the format 'name: markdown.md'. 
The \"\n \"config contains an invalid entry: {0}\".format(config_line))\n elif len(config_line) == 0:\n log.warning(\"Ignoring empty line in the pages config.\")\n raise StopIteration\n\n next_cat_or_title, subpages_or_path = next(iter(config_line.items()))\n\n if isinstance(subpages_or_path, utils.string_types):\n path = subpages_or_path\n for sub in _follow(path, url_context, use_dir_urls, header=header, title=next_cat_or_title):\n yield sub\n raise StopIteration\n\n elif not isinstance(subpages_or_path, list):\n msg = (\"Line in 'page' config is of type {0}, list or string \"\n \"expected for sub pages. Config: {1}\"\n ).format(type(config_line), config_line)\n raise exceptions.ConfigurationError(msg)\n\n next_header = Header(title=next_cat_or_title, children=[])\n if header:\n next_header.ancestors = [header]\n header.children.append(next_header)\n yield next_header\n\n subpages = subpages_or_path\n\n for subpage in subpages:\n for sub in _follow(subpage, url_context, use_dir_urls, next_header):\n yield sub\n\n\ndef _generate_site_navigation(pages_config, url_context, use_dir_urls=True):\n \"\"\"\n Returns a list of Page and Header instances that represent the\n top level site navigation.\n \"\"\"\n nav_items = []\n pages = []\n\n previous = None\n\n for config_line in pages_config:\n\n for page_or_header in _follow(\n config_line, url_context, use_dir_urls):\n\n if isinstance(page_or_header, Header):\n\n if page_or_header.is_top_level:\n nav_items.append(page_or_header)\n\n elif isinstance(page_or_header, Page):\n\n if page_or_header.is_top_level:\n nav_items.append(page_or_header)\n\n pages.append(page_or_header)\n\n if previous:\n page_or_header.previous_page = previous\n previous.next_page = page_or_header\n previous = page_or_header\n\n if len(pages) == 0:\n raise exceptions.ConfigurationError(\n \"No pages found in the pages config. \"\n \"Remove it entirely to enable automatic page discovery.\")\n\n return (nav_items, pages)\n", "path": "mkdocs/nav.py" } ]
[ { "content": "# coding: utf-8\n\n\"\"\"\nDeals with generating the site-wide navigation.\n\nThis consists of building a set of interlinked page and header objects.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport datetime\nimport logging\nimport os\n\nfrom mkdocs import utils, exceptions\n\nlog = logging.getLogger(__name__)\n\n\ndef filename_to_title(filename):\n \"\"\"\n Automatically generate a default title, given a filename.\n \"\"\"\n if utils.is_homepage(filename):\n return 'Home'\n\n return utils.filename_to_title(filename)\n\n\nclass SiteNavigation(object):\n def __init__(self, pages_config, use_directory_urls=True):\n self.url_context = URLContext()\n self.file_context = FileContext()\n self.nav_items, self.pages = _generate_site_navigation(\n pages_config, self.url_context, use_directory_urls)\n self.homepage = self.pages[0] if self.pages else None\n self.use_directory_urls = use_directory_urls\n\n def __str__(self):\n return ''.join([str(item) for item in self])\n\n def __iter__(self):\n return iter(self.nav_items)\n\n def walk_pages(self):\n \"\"\"\n Returns each page in the site in turn.\n\n Additionally this sets the active status of the pages and headers,\n in the site navigation, so that the rendered navbar can correctly\n highlight the currently active page and/or header item.\n \"\"\"\n page = self.homepage\n page.set_active()\n self.url_context.set_current_url(page.abs_url)\n self.file_context.set_current_path(page.input_path)\n yield page\n while page.next_page:\n page.set_active(False)\n page = page.next_page\n page.set_active()\n self.url_context.set_current_url(page.abs_url)\n self.file_context.set_current_path(page.input_path)\n yield page\n page.set_active(False)\n\n @property\n def source_files(self):\n if not hasattr(self, '_source_files'):\n self._source_files = set([page.input_path for page in self.pages])\n return self._source_files\n\n\nclass URLContext(object):\n \"\"\"\n The URLContext is used to ensure that we can generate the appropriate\n relative URLs to other pages from any given page in the site.\n\n We use relative URLs so that static sites can be deployed to any location\n without having to specify what the path component on the host will be\n if the documentation is not hosted at the root path.\n \"\"\"\n\n def __init__(self):\n self.base_path = '/'\n\n def set_current_url(self, current_url):\n self.base_path = os.path.dirname(current_url)\n\n def make_relative(self, url):\n \"\"\"\n Given a URL path return it as a relative URL,\n given the context of the current page.\n \"\"\"\n suffix = '/' if (url.endswith('/') and len(url) > 1) else ''\n # Workaround for bug on `os.path.relpath()` in Python 2.6\n if self.base_path == '/':\n if url == '/':\n # Workaround for static assets\n return '.'\n return url.lstrip('/')\n # Under Python 2.6, relative_path adds an extra '/' at the end.\n relative_path = os.path.relpath(url, start=self.base_path)\n relative_path = relative_path.rstrip('/') + suffix\n\n return utils.path_to_url(relative_path)\n\n\nclass FileContext(object):\n \"\"\"\n The FileContext is used to ensure that we can generate the appropriate\n full path for other pages given their relative path from a particular page.\n\n This is used when we have relative hyperlinks in the documentation, so that\n we can ensure that they point to markdown documents that actually exist\n in the `pages` config.\n \"\"\"\n def __init__(self):\n self.current_file = None\n self.base_path = ''\n\n def set_current_path(self, current_path):\n 
self.current_file = current_path\n self.base_path = os.path.dirname(current_path)\n\n def make_absolute(self, path):\n \"\"\"\n Given a relative file path return it as a POSIX-style\n absolute filepath, given the context of the current page.\n \"\"\"\n return os.path.normpath(os.path.join(self.base_path, path))\n\n\nclass Page(object):\n def __init__(self, title, url, path, url_context):\n\n self.title = title\n self.abs_url = url\n self.active = False\n self.url_context = url_context\n self.update_date = datetime.datetime.now().strftime(\"%Y-%m-%d\")\n\n # Relative paths to the input markdown file and output html file.\n self.input_path = path\n self.output_path = utils.get_html_path(path)\n\n # Links to related pages\n self.previous_page = None\n self.next_page = None\n self.ancestors = []\n\n @property\n def url(self):\n return self.url_context.make_relative(self.abs_url)\n\n @property\n def is_homepage(self):\n return utils.is_homepage(self.input_path)\n\n @property\n def is_top_level(self):\n return len(self.ancestors) == 0\n\n def __str__(self):\n return self.indent_print()\n\n def indent_print(self, depth=0):\n indent = ' ' * depth\n active_marker = ' [*]' if self.active else ''\n title = self.title if (self.title is not None) else '[blank]'\n return '%s%s - %s%s\\n' % (indent, title, self.abs_url, active_marker)\n\n def set_active(self, active=True):\n self.active = active\n for ancestor in self.ancestors:\n ancestor.set_active(active)\n\n\nclass Header(object):\n def __init__(self, title, children):\n self.title, self.children = title, children\n self.active = False\n self.ancestors = []\n\n def __str__(self):\n return self.indent_print()\n\n @property\n def is_top_level(self):\n return len(self.ancestors) == 0\n\n def indent_print(self, depth=0):\n indent = ' ' * depth\n active_marker = ' [*]' if self.active else ''\n ret = '%s%s%s\\n' % (indent, self.title, active_marker)\n for item in self.children:\n ret += item.indent_print(depth + 1)\n return ret\n\n def set_active(self, active=True):\n self.active = active\n for ancestor in self.ancestors:\n ancestor.set_active(active)\n\n\ndef _path_to_page(path, title, url_context, use_directory_urls):\n if title is None:\n title = filename_to_title(path.split(os.path.sep)[-1])\n url = utils.get_url_path(path, use_directory_urls)\n return Page(title=title, url=url, path=path,\n url_context=url_context)\n\n\ndef _follow(config_line, url_context, use_dir_urls, header=None, title=None):\n\n if isinstance(config_line, utils.string_types):\n path = os.path.normpath(config_line)\n page = _path_to_page(path, title, url_context, use_dir_urls)\n\n if header:\n page.ancestors = header.ancestors + [header, ]\n header.children.append(page)\n\n yield page\n raise StopIteration\n\n elif not isinstance(config_line, dict):\n msg = (\"Line in 'page' config is of type {0}, dict or string \"\n \"expected. Config: {1}\").format(type(config_line), config_line)\n raise exceptions.ConfigurationError(msg)\n\n if len(config_line) > 1:\n raise exceptions.ConfigurationError(\n \"Page configs should be in the format 'name: markdown.md'. 
The \"\n \"config contains an invalid entry: {0}\".format(config_line))\n elif len(config_line) == 0:\n log.warning(\"Ignoring empty line in the pages config.\")\n raise StopIteration\n\n next_cat_or_title, subpages_or_path = next(iter(config_line.items()))\n\n if isinstance(subpages_or_path, utils.string_types):\n path = subpages_or_path\n for sub in _follow(path, url_context, use_dir_urls, header=header, title=next_cat_or_title):\n yield sub\n raise StopIteration\n\n elif not isinstance(subpages_or_path, list):\n msg = (\"Line in 'page' config is of type {0}, list or string \"\n \"expected for sub pages. Config: {1}\"\n ).format(type(config_line), config_line)\n raise exceptions.ConfigurationError(msg)\n\n next_header = Header(title=next_cat_or_title, children=[])\n if header:\n next_header.ancestors = [header]\n header.children.append(next_header)\n yield next_header\n\n subpages = subpages_or_path\n\n for subpage in subpages:\n for sub in _follow(subpage, url_context, use_dir_urls, next_header):\n yield sub\n\n\ndef _generate_site_navigation(pages_config, url_context, use_dir_urls=True):\n \"\"\"\n Returns a list of Page and Header instances that represent the\n top level site navigation.\n \"\"\"\n nav_items = []\n pages = []\n\n previous = None\n\n for config_line in pages_config:\n\n for page_or_header in _follow(\n config_line, url_context, use_dir_urls):\n\n if isinstance(page_or_header, Header):\n\n if page_or_header.is_top_level:\n nav_items.append(page_or_header)\n\n elif isinstance(page_or_header, Page):\n\n if page_or_header.is_top_level:\n nav_items.append(page_or_header)\n\n pages.append(page_or_header)\n\n if previous:\n page_or_header.previous_page = previous\n previous.next_page = page_or_header\n previous = page_or_header\n\n if len(pages) == 0:\n raise exceptions.ConfigurationError(\n \"No pages found in the pages config. \"\n \"Remove it entirely to enable automatic page discovery.\")\n\n return (nav_items, pages)\n", "path": "mkdocs/nav.py" } ]
diff --git a/docs/about/release-notes.md b/docs/about/release-notes.md index 72c1604487..9418d6e7a3 100644 --- a/docs/about/release-notes.md +++ b/docs/about/release-notes.md @@ -50,6 +50,8 @@ themes theme. (#631) * Bugfix: Ensure consistent ordering of auto-populated pages. (#638) * Bugfix: Scroll the TOC on the MkDocs theme if it is too long for the page. +* Bugfix: Add all ancestors to the page attribute `ancestors` rather than just + the initial one. [site_description]: /user-guide/configuration.md#site_description [site_author]: /user-guide/configuration.md#site_author diff --git a/mkdocs/nav.py b/mkdocs/nav.py index 672ff4fffd..b72e21feb2 100644 --- a/mkdocs/nav.py +++ b/mkdocs/nav.py @@ -217,7 +217,7 @@ def _follow(config_line, url_context, use_dir_urls, header=None, title=None): page = _path_to_page(path, title, url_context, use_dir_urls) if header: - page.ancestors = [header] + page.ancestors = header.ancestors + [header, ] header.children.append(page) yield page diff --git a/mkdocs/tests/nav_tests.py b/mkdocs/tests/nav_tests.py index 3c300887b7..84357d8d96 100644 --- a/mkdocs/tests/nav_tests.py +++ b/mkdocs/tests/nav_tests.py @@ -325,6 +325,9 @@ def test_ancestors(self): {'Running': 'api-guide/running.md'}, {'Testing': 'api-guide/testing.md'}, {'Debugging': 'api-guide/debugging.md'}, + {'Advanced': [ + {'Part 1': 'api-guide/advanced/part-1.md'}, + ]}, ]}, {'About': [ {'Release notes': 'about/release-notes.md'}, @@ -338,12 +341,18 @@ def test_ancestors(self): [site_navigation.nav_items[1]], [site_navigation.nav_items[1]], [site_navigation.nav_items[1]], + [site_navigation.nav_items[1], + site_navigation.pages[4].ancestors[-1]], [site_navigation.nav_items[2]], [site_navigation.nav_items[2]], ) - for page, expected_ancestor in zip(site_navigation.pages, ancestors): - self.assertEqual(page.ancestors, expected_ancestor) + self.assertEqual(len(site_navigation.pages), len(ancestors)) + + for i, (page, expected_ancestor) in enumerate( + zip(site_navigation.pages, ancestors)): + self.assertEqual(page.ancestors, expected_ancestor, + "Failed on ancestor test {0}".format(i)) def test_nesting(self):
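For context on the MkDocs diff above: the one-line change in `_follow` makes a nested page carry its header's full ancestor chain instead of only its nearest header. Below is a standalone sketch with throwaway classes (not MkDocs' real `Page`/`Header`) showing what that changes:

```python
# Throwaway classes illustrating the one-line fix above: a page two levels
# deep must inherit its header's own ancestors, otherwise set_active()
# never reaches the top-level section.

class Node(object):
    def __init__(self, title, ancestors=()):
        self.title = title
        self.ancestors = list(ancestors)


api_guide = Node('API Guide')                                   # top-level header
advanced = Node('Advanced', api_guide.ancestors + [api_guide])  # nested header

# Before the fix: ancestors = [advanced] only.
# After the fix:  ancestors = advanced.ancestors + [advanced].
part_1 = Node('Part 1', advanced.ancestors + [advanced])

assert [n.title for n in part_1.ancestors] == ['API Guide', 'Advanced']
```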
GeotrekCE__Geotrek-admin-1571
Error - SHAPE export Trying to export TRAILS as a shapefile sends several error emails to ADMIN: [Geotrek] ERROR: GDAL_ERROR 1: Field id of width 255 truncated to 254. [Geotrek] ERROR: GDAL_ERROR 1: Field name of width 255 truncated to 254. [Geotrek] ERROR: GDAL_ERROR 1: Field departure of width 255 truncated to 254. [Geotrek] ERROR: GDAL_ERROR 1: Field arrival of width 255 truncated to 254. The same happens when exporting PATH to shapes.
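The GDAL warnings quoted in this report come from a limit of the shapefile format itself: attribute columns are stored in a DBF table whose text fields cannot exceed 254 characters, so any CharField declared wider than that is clipped at export time. Below is a minimal, hypothetical sketch of how an export layer can clamp widths up front; judging from the diff further down, the actual fix was delivered by bumping mapentity to 2.7.0 rather than by code like this inside Geotrek itself.

```python
# Minimal sketch, assuming a Django-style export layer; the helper names are
# hypothetical and not Geotrek's (or mapentity's) actual serializer code.
# Shapefile attributes live in a DBF table whose text fields are capped at
# 254 characters, so a CharField(max_length=255) makes GDAL emit the
# "truncated to 254" warning seen above.

OGR_DBF_MAX_WIDTH = 254


def shapefile_field_width(model_field):
    """Return a width that is safe to declare for a shapefile text column."""
    declared = getattr(model_field, 'max_length', None) or OGR_DBF_MAX_WIDTH
    return min(declared, OGR_DBF_MAX_WIDTH)


def shapefile_value(value, width=OGR_DBF_MAX_WIDTH):
    """Truncate values up front instead of letting GDAL warn and mail ADMINS."""
    if value is None:
        return value
    return value[:width]
```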
[ { "content": "import os\nimport logging\n\nfrom django.conf import settings\nfrom django.contrib.gis.db import models\nfrom django.core.exceptions import ValidationError\nfrom django.core.validators import MinValueValidator\nfrom django.template.defaultfilters import slugify\nfrom django.utils.translation import get_language, ugettext_lazy as _\n\nimport simplekml\nfrom mapentity.models import MapEntityMixin\nfrom mapentity.serializers import plain_text\n\nfrom geotrek.authent.models import StructureRelated\nfrom geotrek.core.models import Path, Topology\nfrom geotrek.common.utils import intersecting, classproperty\nfrom geotrek.common.mixins import (PicturesMixin, PublishableMixin,\n PictogramMixin, OptionalPictogramMixin)\nfrom geotrek.common.models import Theme\nfrom geotrek.maintenance.models import Intervention, Project\nfrom geotrek.tourism import models as tourism_models\n\nfrom .templatetags import trekking_tags\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrekOrderedChildManager(models.Manager):\n use_for_related_fields = True\n\n def get_queryset(self):\n # Select treks foreign keys by default\n qs = super(TrekOrderedChildManager, self).get_queryset().select_related('parent', 'child')\n # Exclude deleted treks\n return qs.exclude(parent__deleted=True).exclude(child__deleted=True)\n\n\nclass OrderedTrekChild(models.Model):\n parent = models.ForeignKey('Trek', related_name='trek_children', on_delete=models.CASCADE)\n child = models.ForeignKey('Trek', related_name='trek_parents', on_delete=models.CASCADE)\n order = models.PositiveIntegerField(default=0)\n\n objects = TrekOrderedChildManager()\n\n class Meta:\n db_table = 'o_r_itineraire_itineraire2'\n ordering = ('parent__id', 'order')\n unique_together = (\n ('parent', 'child'),\n )\n\n\nclass Trek(StructureRelated, PicturesMixin, PublishableMixin, MapEntityMixin, Topology):\n topo_object = models.OneToOneField(Topology, parent_link=True,\n db_column='evenement')\n departure = models.CharField(verbose_name=_(u\"Departure\"), max_length=128, blank=True,\n help_text=_(u\"Departure description\"), db_column='depart')\n arrival = models.CharField(verbose_name=_(u\"Arrival\"), max_length=128, blank=True,\n help_text=_(u\"Arrival description\"), db_column='arrivee')\n description_teaser = models.TextField(verbose_name=_(u\"Description teaser\"), blank=True,\n help_text=_(u\"A brief summary (map pop-ups)\"), db_column='chapeau')\n description = models.TextField(verbose_name=_(u\"Description\"), blank=True, db_column='description',\n help_text=_(u\"Complete description\"))\n ambiance = models.TextField(verbose_name=_(u\"Ambiance\"), blank=True, db_column='ambiance',\n help_text=_(u\"Main attraction and interest\"))\n access = models.TextField(verbose_name=_(u\"Access\"), blank=True, db_column='acces',\n help_text=_(u\"Best way to go\"))\n disabled_infrastructure = models.TextField(verbose_name=_(u\"Disabled infrastructure\"), db_column='handicap',\n blank=True, help_text=_(u\"Any specific infrastructure\"))\n duration = models.FloatField(verbose_name=_(u\"Duration\"), default=0, blank=True, db_column='duree',\n help_text=_(u\"In hours (1.5 = 1 h 30, 24 = 1 day, 48 = 2 days)\"),\n validators=[MinValueValidator(0)])\n is_park_centered = models.BooleanField(verbose_name=_(u\"Is in the midst of the park\"), db_column='coeur',\n help_text=_(u\"Crosses center of park\"))\n advised_parking = models.CharField(verbose_name=_(u\"Advised parking\"), max_length=128, blank=True, db_column='parking',\n help_text=_(u\"Where to park\"))\n 
parking_location = models.PointField(verbose_name=_(u\"Parking location\"), db_column='geom_parking',\n srid=settings.SRID, spatial_index=False, blank=True, null=True)\n public_transport = models.TextField(verbose_name=_(u\"Public transport\"), blank=True, db_column='transport',\n help_text=_(u\"Train, bus (see web links)\"))\n advice = models.TextField(verbose_name=_(u\"Advice\"), blank=True, db_column='recommandation',\n help_text=_(u\"Risks, danger, best period, ...\"))\n themes = models.ManyToManyField(Theme, related_name=\"treks\",\n db_table=\"o_r_itineraire_theme\", blank=True, null=True, verbose_name=_(u\"Themes\"),\n help_text=_(u\"Main theme(s)\"))\n networks = models.ManyToManyField('TrekNetwork', related_name=\"treks\",\n db_table=\"o_r_itineraire_reseau\", blank=True, null=True, verbose_name=_(u\"Networks\"),\n help_text=_(u\"Hiking networks\"))\n practice = models.ForeignKey('Practice', related_name=\"treks\",\n blank=True, null=True, verbose_name=_(u\"Practice\"), db_column='pratique')\n accessibilities = models.ManyToManyField('Accessibility', related_name=\"treks\",\n db_table=\"o_r_itineraire_accessibilite\", blank=True, null=True,\n verbose_name=_(u\"Accessibility\"))\n route = models.ForeignKey('Route', related_name='treks',\n blank=True, null=True, verbose_name=_(u\"Route\"), db_column='parcours')\n difficulty = models.ForeignKey('DifficultyLevel', related_name='treks',\n blank=True, null=True, verbose_name=_(u\"Difficulty\"), db_column='difficulte')\n web_links = models.ManyToManyField('WebLink', related_name=\"treks\",\n db_table=\"o_r_itineraire_web\", blank=True, null=True, verbose_name=_(u\"Web links\"),\n help_text=_(u\"External resources\"))\n related_treks = models.ManyToManyField('self', through='TrekRelationship',\n verbose_name=_(u\"Related treks\"), symmetrical=False,\n help_text=_(u\"Connections between treks\"),\n related_name='related_treks+') # Hide reverse attribute\n information_desks = models.ManyToManyField(tourism_models.InformationDesk, related_name='treks',\n db_table=\"o_r_itineraire_renseignement\", blank=True, null=True,\n verbose_name=_(u\"Information desks\"),\n help_text=_(u\"Where to obtain information\"))\n points_reference = models.MultiPointField(verbose_name=_(u\"Points of reference\"), db_column='geom_points_reference',\n srid=settings.SRID, spatial_index=False, blank=True, null=True)\n source = models.ManyToManyField('common.RecordSource',\n null=True, blank=True, related_name='treks',\n verbose_name=_(\"Source\"), db_table='o_r_itineraire_source')\n eid = models.CharField(verbose_name=_(u\"External id\"), max_length=128, blank=True, null=True, db_column='id_externe')\n eid2 = models.CharField(verbose_name=_(u\"Second external id\"), max_length=128, blank=True, null=True, db_column='id_externe2')\n\n objects = Topology.get_manager_cls(models.GeoManager)()\n\n category_id_prefix = 'T'\n\n class Meta:\n db_table = 'o_t_itineraire'\n verbose_name = _(u\"Trek\")\n verbose_name_plural = _(u\"Treks\")\n ordering = ['name']\n\n def __unicode__(self):\n return self.name\n\n @models.permalink\n def get_document_public_url(self):\n \"\"\" Override ``geotrek.common.mixins.PublishableMixin``\n \"\"\"\n return ('trekking:trek_document_public', [], {'lang': get_language(), 'pk': self.pk, 'slug': self.slug})\n\n @property\n def related(self):\n return self.related_treks.exclude(deleted=True).exclude(pk=self.pk).distinct()\n\n @classproperty\n def related_verbose_name(cls):\n return _(\"Related treks\")\n\n @property\n def relationships(self):\n # 
Does not matter if a or b\n return TrekRelationship.objects.filter(trek_a=self)\n\n @property\n def published_relationships(self):\n return self.relationships.filter(trek_b__published=True)\n\n @property\n def poi_types(self):\n if settings.TREKKING_TOPOLOGY_ENABLED:\n # Can't use values_list and must add 'ordering' because of bug:\n # https://code.djangoproject.com/ticket/14930\n values = self.pois.values('ordering', 'type')\n else:\n values = self.pois.values('type')\n pks = [value['type'] for value in values]\n return POIType.objects.filter(pk__in=set(pks))\n\n @property\n def length_kilometer(self):\n return \"%.1f\" % (self.length / 1000.0)\n\n @property\n def networks_display(self):\n return ', '.join([unicode(n) for n in self.networks.all()])\n\n @property\n def districts_display(self):\n return ', '.join([unicode(d) for d in self.districts])\n\n @property\n def themes_display(self):\n return ', '.join([unicode(n) for n in self.themes.all()])\n\n @property\n def city_departure(self):\n cities = self.cities\n return unicode(cities[0]) if len(cities) > 0 else ''\n\n def kml(self):\n \"\"\" Exports trek into KML format, add geometry as linestring and POI\n as place marks \"\"\"\n kml = simplekml.Kml()\n # Main itinerary\n geom3d = self.geom_3d.transform(4326, clone=True) # KML uses WGS84\n line = kml.newlinestring(name=self.name,\n description=plain_text(self.description),\n coords=geom3d.coords)\n line.style.linestyle.color = simplekml.Color.red # Red\n line.style.linestyle.width = 4 # pixels\n # Place marks\n for poi in self.pois:\n place = poi.geom_3d.transform(settings.API_SRID, clone=True)\n kml.newpoint(name=poi.name,\n description=plain_text(poi.description),\n coords=[place.coords])\n return kml._genkml()\n\n def has_geom_valid(self):\n \"\"\"A trek should be a LineString, even if it's a loop.\n \"\"\"\n return super(Trek, self).has_geom_valid() and self.geom.geom_type.lower() == 'linestring'\n\n @property\n def duration_pretty(self):\n return trekking_tags.duration(self.duration)\n\n @classproperty\n def duration_pretty_verbose_name(cls):\n return _(\"Formated duration\")\n\n @classmethod\n def path_treks(cls, path):\n treks = cls.objects.existing().filter(aggregations__path=path)\n # The following part prevents conflict with default trek ordering\n # ProgrammingError: SELECT DISTINCT ON expressions must match initial ORDER BY expressions\n return treks.order_by('topo_object').distinct('topo_object')\n\n @classmethod\n def topology_treks(cls, topology):\n if settings.TREKKING_TOPOLOGY_ENABLED:\n qs = cls.overlapping(topology)\n else:\n area = topology.geom.buffer(settings.TREK_POI_INTERSECTION_MARGIN)\n qs = cls.objects.existing().filter(geom__intersects=area)\n return qs\n\n @classmethod\n def published_topology_treks(cls, topology):\n return cls.topology_treks(topology).filter(published=True)\n\n # Rando v1 compat\n @property\n def usages(self):\n return [self.practice] if self.practice else []\n\n @classmethod\n def get_create_label(cls):\n return _(u\"Add a new trek\")\n\n @property\n def parents(self):\n return Trek.objects.filter(trek_children__child=self)\n\n @property\n def parents_id(self):\n parents = self.trek_parents.values_list('parent__id', flat=True)\n return list(parents)\n\n @property\n def children(self):\n return Trek.objects.filter(trek_parents__parent=self).order_by('trek_parents__order')\n\n @property\n def children_id(self):\n \"\"\"\n Get children IDs\n \"\"\"\n children = self.trek_children.order_by('order')\\\n .values_list('child__id',\n flat=True)\n 
return list(children)\n\n def previous_id_for(self, parent):\n children_id = parent.children_id\n index = children_id.index(self.id)\n if index == 0:\n return None\n return children_id[index - 1]\n\n def next_id_for(self, parent):\n children_id = parent.children_id\n index = children_id.index(self.id)\n if index == len(children_id) - 1:\n return None\n return children_id[index + 1]\n\n @property\n def previous_id(self):\n \"\"\"\n Dict of parent -> previous child\n \"\"\"\n return {parent.id: self.previous_id_for(parent) for parent in self.parents.filter(published=True)}\n\n @property\n def next_id(self):\n \"\"\"\n Dict of parent -> next child\n \"\"\"\n return {parent.id: self.next_id_for(parent) for parent in self.parents.filter(published=True)}\n\n def clean(self):\n \"\"\"\n Custom model validation\n \"\"\"\n if self.pk in self.trek_children.values_list('child__id', flat=True):\n raise ValidationError(_(u\"Cannot use itself as child trek.\"))\n\n @property\n def prefixed_category_id(self):\n if settings.SPLIT_TREKS_CATEGORIES_BY_PRACTICE and self.practice:\n return '{prefix}{id}'.format(prefix=self.category_id_prefix, id=self.practice.id)\n else:\n return self.category_id_prefix\n\n def distance(self, to_cls):\n if self.practice and self.practice.distance is not None:\n return self.practice.distance\n else:\n return settings.TOURISM_INTERSECTION_MARGIN\n\n def is_public(self):\n for parent in self.parents:\n if parent.any_published:\n return True\n return self.any_published\n\n @property\n def picture_print(self):\n picture = super(Trek, self).picture_print\n if picture:\n return picture\n for poi in self.published_pois:\n picture = poi.picture_print\n if picture:\n return picture\n\n def save(self, *args, **kwargs):\n if self.pk is not None and kwargs.get('update_fields', None) is None:\n field_names = set()\n for field in self._meta.concrete_fields:\n if not field.primary_key and not hasattr(field, 'through'):\n field_names.add(field.attname)\n old_trek = Trek.objects.get(pk=self.pk)\n if self.geom is not None and old_trek.geom.equals_exact(self.geom, tolerance=0.00001):\n field_names.remove('geom')\n if self.geom_3d is not None and old_trek.geom_3d.equals_exact(self.geom_3d, tolerance=0.00001):\n field_names.remove('geom_3d')\n return super(Trek, self).save(update_fields=field_names, *args, **kwargs)\n super(Trek, self).save(*args, **kwargs)\n\nPath.add_property('treks', Trek.path_treks, _(u\"Treks\"))\nTopology.add_property('treks', Trek.topology_treks, _(u\"Treks\"))\nif settings.HIDE_PUBLISHED_TREKS_IN_TOPOLOGIES:\n Topology.add_property('published_treks', lambda self: [], _(u\"Published treks\"))\nelse:\n Topology.add_property('published_treks', lambda self: intersecting(Trek, self).filter(published=True), _(u\"Published treks\"))\nIntervention.add_property('treks', lambda self: self.topology.treks if self.topology else [], _(u\"Treks\"))\nProject.add_property('treks', lambda self: self.edges_by_attr('treks'), _(u\"Treks\"))\ntourism_models.TouristicContent.add_property('treks', lambda self: intersecting(Trek, self), _(u\"Treks\"))\ntourism_models.TouristicContent.add_property('published_treks', lambda self: intersecting(Trek, self).filter(published=True), _(u\"Published treks\"))\ntourism_models.TouristicEvent.add_property('treks', lambda self: intersecting(Trek, self), _(u\"Treks\"))\ntourism_models.TouristicEvent.add_property('published_treks', lambda self: intersecting(Trek, self).filter(published=True), _(u\"Published treks\"))\n\n\nclass 
TrekRelationshipManager(models.Manager):\n use_for_related_fields = True\n\n def get_queryset(self):\n # Select treks foreign keys by default\n qs = super(TrekRelationshipManager, self).get_queryset().select_related('trek_a', 'trek_b')\n # Exclude deleted treks\n return qs.exclude(trek_a__deleted=True).exclude(trek_b__deleted=True)\n\n\nclass TrekRelationship(models.Model):\n \"\"\"\n Relationships between treks : symmetrical aspect is managed by a trigger that\n duplicates all couples (trek_a, trek_b)\n \"\"\"\n has_common_departure = models.BooleanField(verbose_name=_(u\"Common departure\"), db_column='depart_commun', default=False)\n has_common_edge = models.BooleanField(verbose_name=_(u\"Common edge\"), db_column='troncons_communs', default=False)\n is_circuit_step = models.BooleanField(verbose_name=_(u\"Circuit step\"), db_column='etape_circuit', default=False)\n\n trek_a = models.ForeignKey(Trek, related_name=\"trek_relationship_a\", db_column='itineraire_a')\n trek_b = models.ForeignKey(Trek, related_name=\"trek_relationship_b\", db_column='itineraire_b', verbose_name=_(u\"Trek\"))\n\n objects = TrekRelationshipManager()\n\n class Meta:\n db_table = 'o_r_itineraire_itineraire'\n verbose_name = _(u\"Trek relationship\")\n verbose_name_plural = _(u\"Trek relationships\")\n unique_together = ('trek_a', 'trek_b')\n\n def __unicode__(self):\n return u\"%s <--> %s\" % (self.trek_a, self.trek_b)\n\n @property\n def relation(self):\n return u\"%s %s%s%s\" % (\n self.trek_b.name_display,\n _(\"Departure\") if self.has_common_departure else '',\n _(\"Path\") if self.has_common_edge else '',\n _(\"Circuit\") if self.is_circuit_step else ''\n )\n\n @property\n def relation_display(self):\n return self.relation\n\n\nclass TrekNetwork(PictogramMixin):\n\n network = models.CharField(verbose_name=_(u\"Name\"), max_length=128, db_column='reseau')\n\n class Meta:\n db_table = 'o_b_reseau'\n verbose_name = _(u\"Trek network\")\n verbose_name_plural = _(u\"Trek networks\")\n ordering = ['network']\n\n def __unicode__(self):\n return self.network\n\n\nclass Practice(PictogramMixin):\n\n name = models.CharField(verbose_name=_(u\"Name\"), max_length=128, db_column='nom')\n distance = models.IntegerField(verbose_name=_(u\"Distance\"), blank=True, null=True, db_column='distance',\n help_text=_(u\"Touristic contents and events will associate within this distance (meters)\"))\n cirkwi = models.ForeignKey('cirkwi.CirkwiLocomotion', verbose_name=_(u\"Cirkwi locomotion\"), null=True, blank=True)\n order = models.IntegerField(verbose_name=_(u\"Order\"), null=True, blank=True, db_column='tri',\n help_text=_(u\"Alphabetical order if blank\"))\n\n class Meta:\n db_table = 'o_b_pratique'\n verbose_name = _(u\"Practice\")\n verbose_name_plural = _(u\"Practices\")\n ordering = ['order', 'name']\n\n def __unicode__(self):\n return self.name\n\n @property\n def slug(self):\n return slugify(self.name) or str(self.pk)\n\n\nclass Accessibility(OptionalPictogramMixin):\n\n name = models.CharField(verbose_name=_(u\"Name\"), max_length=128, db_column='nom')\n cirkwi = models.ForeignKey('cirkwi.CirkwiTag', verbose_name=_(u\"Cirkwi tag\"), null=True, blank=True)\n\n id_prefix = 'A'\n\n class Meta:\n db_table = 'o_b_accessibilite'\n verbose_name = _(u\"Accessibility\")\n verbose_name_plural = _(u\"Accessibilities\")\n ordering = ['name']\n\n def __unicode__(self):\n return self.name\n\n @property\n def prefixed_id(self):\n return '{prefix}{id}'.format(prefix=self.id_prefix, id=self.id)\n\n @property\n def slug(self):\n return 
slugify(self.name) or str(self.pk)\n\n\nclass Route(OptionalPictogramMixin):\n\n route = models.CharField(verbose_name=_(u\"Name\"), max_length=128, db_column='parcours')\n\n class Meta:\n db_table = 'o_b_parcours'\n verbose_name = _(u\"Route\")\n verbose_name_plural = _(u\"Routes\")\n ordering = ['route']\n\n def __unicode__(self):\n return self.route\n\n\nclass DifficultyLevel(OptionalPictogramMixin):\n\n \"\"\"We use an IntegerField for id, since we want to edit it in Admin.\n This column is used to order difficulty levels, especially in public website\n where treks are filtered by difficulty ids.\n \"\"\"\n id = models.IntegerField(primary_key=True)\n difficulty = models.CharField(verbose_name=_(u\"Difficulty level\"),\n max_length=128, db_column='difficulte')\n cirkwi_level = models.IntegerField(verbose_name=_(u\"Cirkwi level\"), blank=True, null=True,\n db_column='niveau_cirkwi', help_text=_(u\"Between 1 and 8\"))\n cirkwi = models.ForeignKey('cirkwi.CirkwiTag', verbose_name=_(u\"Cirkwi tag\"), null=True, blank=True)\n\n class Meta:\n db_table = 'o_b_difficulte'\n verbose_name = _(u\"Difficulty level\")\n verbose_name_plural = _(u\"Difficulty levels\")\n ordering = ['id']\n\n def __unicode__(self):\n return self.difficulty\n\n def save(self, *args, **kwargs):\n \"\"\"Manually auto-increment ids\"\"\"\n if not self.id:\n try:\n last = self.__class__.objects.all().order_by('-id')[0]\n self.id = last.id + 1\n except IndexError:\n self.id = 1\n super(DifficultyLevel, self).save(*args, **kwargs)\n\n\nclass WebLinkManager(models.Manager):\n def get_queryset(self):\n return super(WebLinkManager, self).get_queryset().select_related('category')\n\n\nclass WebLink(models.Model):\n\n name = models.CharField(verbose_name=_(u\"Name\"), max_length=128, db_column='nom')\n url = models.URLField(verbose_name=_(u\"URL\"), max_length=128, db_column='url')\n category = models.ForeignKey('WebLinkCategory', verbose_name=_(u\"Category\"),\n related_name='links', null=True, blank=True,\n db_column='categorie')\n\n objects = WebLinkManager()\n\n class Meta:\n db_table = 'o_t_web'\n verbose_name = _(u\"Web link\")\n verbose_name_plural = _(u\"Web links\")\n ordering = ['name']\n\n def __unicode__(self):\n category = \"%s - \" % self.category.label if self.category else \"\"\n return u\"%s%s (%s)\" % (category, self.name, self.url)\n\n @classmethod\n @models.permalink\n def get_add_url(cls):\n return ('trekking:weblink_add', )\n\n\nclass WebLinkCategory(PictogramMixin):\n\n label = models.CharField(verbose_name=_(u\"Label\"), max_length=128, db_column='nom')\n\n class Meta:\n db_table = 'o_b_web_category'\n verbose_name = _(u\"Web link category\")\n verbose_name_plural = _(u\"Web link categories\")\n ordering = ['label']\n\n def __unicode__(self):\n return u\"%s\" % self.label\n\n\nclass POIManager(models.GeoManager):\n def get_queryset(self):\n return super(POIManager, self).get_queryset().select_related('type', 'structure')\n\n\nclass POI(StructureRelated, PicturesMixin, PublishableMixin, MapEntityMixin, Topology):\n\n topo_object = models.OneToOneField(Topology, parent_link=True,\n db_column='evenement')\n description = models.TextField(verbose_name=_(u\"Description\"), db_column='description',\n help_text=_(u\"History, details, ...\"))\n type = models.ForeignKey('POIType', related_name='pois', verbose_name=_(u\"Type\"), db_column='type')\n eid = models.CharField(verbose_name=_(u\"External id\"), max_length=128, blank=True, null=True, db_column='id_externe')\n\n class Meta:\n db_table = 'o_t_poi'\n 
verbose_name = _(u\"POI\")\n verbose_name_plural = _(u\"POI\")\n\n # Override default manager\n objects = Topology.get_manager_cls(POIManager)()\n\n def __unicode__(self):\n return u\"%s (%s)\" % (self.name, self.type)\n\n @models.permalink\n def get_document_public_url(self):\n \"\"\" Override ``geotrek.common.mixins.PublishableMixin``\n \"\"\"\n return ('trekking:poi_document_public', [], {'lang': get_language(), 'pk': self.pk, 'slug': self.slug})\n\n def save(self, *args, **kwargs):\n super(POI, self).save(*args, **kwargs)\n # Invalidate treks map\n for trek in self.treks.all():\n try:\n os.remove(trek.get_map_image_path())\n except OSError:\n pass\n\n @property\n def type_display(self):\n return unicode(self.type)\n\n @property\n def serializable_type(self):\n return {'label': self.type.label,\n 'pictogram': self.type.get_pictogram_url()}\n\n @classmethod\n def path_pois(cls, path):\n return cls.objects.existing().filter(aggregations__path=path).distinct('pk')\n\n @classmethod\n def topology_pois(cls, topology):\n if settings.TREKKING_TOPOLOGY_ENABLED:\n qs = cls.overlapping(topology)\n else:\n area = topology.geom.buffer(settings.TREK_POI_INTERSECTION_MARGIN)\n qs = cls.objects.existing().filter(geom__intersects=area)\n return qs\n\n @classmethod\n def published_topology_pois(cls, topology):\n return cls.topology_pois(topology).filter(published=True)\n\n def distance(self, to_cls):\n return settings.TOURISM_INTERSECTION_MARGIN\n\nPath.add_property('pois', POI.path_pois, _(u\"POIs\"))\nTopology.add_property('pois', POI.topology_pois, _(u\"POIs\"))\nTopology.add_property('published_pois', POI.published_topology_pois, _(u\"Published POIs\"))\nIntervention.add_property('pois', lambda self: self.topology.pois if self.topology else [], _(u\"POIs\"))\nProject.add_property('pois', lambda self: self.edges_by_attr('pois'), _(u\"POIs\"))\ntourism_models.TouristicContent.add_property('pois', lambda self: intersecting(POI, self), _(u\"POIs\"))\ntourism_models.TouristicContent.add_property('published_pois', lambda self: intersecting(POI, self).filter(published=True), _(u\"Published POIs\"))\ntourism_models.TouristicEvent.add_property('pois', lambda self: intersecting(POI, self), _(u\"POIs\"))\ntourism_models.TouristicEvent.add_property('published_pois', lambda self: intersecting(POI, self).filter(published=True), _(u\"Published POIs\"))\n\n\nclass POIType(PictogramMixin):\n\n label = models.CharField(verbose_name=_(u\"Label\"), max_length=128, db_column='nom')\n cirkwi = models.ForeignKey('cirkwi.CirkwiPOICategory', verbose_name=_(u\"Cirkwi POI category\"), null=True, blank=True)\n\n class Meta:\n db_table = 'o_b_poi'\n verbose_name = _(u\"POI type\")\n verbose_name_plural = _(u\"POI types\")\n ordering = ['label']\n\n def __unicode__(self):\n return self.label\n\n\nclass ServiceType(PictogramMixin, PublishableMixin):\n\n practices = models.ManyToManyField('Practice', related_name=\"services\",\n db_table=\"o_r_service_pratique\", blank=True, null=True,\n verbose_name=_(u\"Practices\"))\n\n class Meta:\n db_table = 'o_b_service'\n verbose_name = _(u\"Service type\")\n verbose_name_plural = _(u\"Service types\")\n ordering = ['name']\n\n def __unicode__(self):\n return self.name\n\n\nclass ServiceManager(models.GeoManager):\n def get_queryset(self):\n return super(ServiceManager, self).get_queryset().select_related('type', 'structure')\n\n\nclass Service(StructureRelated, MapEntityMixin, Topology):\n\n topo_object = models.OneToOneField(Topology, parent_link=True,\n db_column='evenement')\n type = 
models.ForeignKey('ServiceType', related_name='services', verbose_name=_(u\"Type\"), db_column='type')\n eid = models.CharField(verbose_name=_(u\"External id\"), max_length=128, blank=True, null=True, db_column='id_externe')\n\n class Meta:\n db_table = 'o_t_service'\n verbose_name = _(u\"Service\")\n verbose_name_plural = _(u\"Services\")\n\n # Override default manager\n objects = Topology.get_manager_cls(ServiceManager)()\n\n def __unicode__(self):\n return unicode(self.type)\n\n @property\n def name(self):\n return self.type.name\n\n @property\n def name_display(self):\n s = u'<a data-pk=\"%s\" href=\"%s\" title=\"%s\">%s</a>' % (self.pk,\n self.get_detail_url(),\n self.name,\n self.name)\n if self.type.published:\n s = u'<span class=\"badge badge-success\" title=\"%s\">&#x2606;</span> ' % _(\"Published\") + s\n elif self.type.review:\n s = u'<span class=\"badge badge-warning\" title=\"%s\">&#x2606;</span> ' % _(\"Waiting for publication\") + s\n return s\n\n @classproperty\n def name_verbose_name(cls):\n return _(\"Type\")\n\n @property\n def type_display(self):\n return unicode(self.type)\n\n @property\n def serializable_type(self):\n return {'label': self.type.label,\n 'pictogram': self.type.get_pictogram_url()}\n\n @classmethod\n def path_services(cls, path):\n return cls.objects.existing().filter(aggregations__path=path).distinct('pk')\n\n @classmethod\n def topology_services(cls, topology):\n if settings.TREKKING_TOPOLOGY_ENABLED:\n qs = cls.overlapping(topology)\n else:\n area = topology.geom.buffer(settings.TREK_POI_INTERSECTION_MARGIN)\n qs = cls.objects.existing().filter(geom__intersects=area)\n if isinstance(topology, Trek):\n qs = qs.filter(type__practices=topology.practice)\n return qs\n\n @classmethod\n def published_topology_services(cls, topology):\n return cls.topology_services(topology).filter(type__published=True)\n\n def distance(self, to_cls):\n return settings.TOURISM_INTERSECTION_MARGIN\n\nPath.add_property('services', Service.path_services, _(u\"Services\"))\nTopology.add_property('services', Service.topology_services, _(u\"Services\"))\nTopology.add_property('published_services', Service.published_topology_services, _(u\"Published Services\"))\nIntervention.add_property('services', lambda self: self.topology.services if self.topology else [], _(u\"Services\"))\nProject.add_property('services', lambda self: self.edges_by_attr('services'), _(u\"Services\"))\ntourism_models.TouristicContent.add_property('services', lambda self: intersecting(Service, self), _(u\"Services\"))\ntourism_models.TouristicContent.add_property('published_services', lambda self: intersecting(Service, self).filter(published=True), _(u\"Published Services\"))\ntourism_models.TouristicEvent.add_property('services', lambda self: intersecting(Service, self), _(u\"Services\"))\ntourism_models.TouristicEvent.add_property('published_services', lambda self: intersecting(Service, self).filter(published=True), _(u\"Published Services\"))\n", "path": "geotrek/trekking/models.py" } ]
[ { "content": "import os\nimport logging\n\nfrom django.conf import settings\nfrom django.contrib.gis.db import models\nfrom django.core.exceptions import ValidationError\nfrom django.core.validators import MinValueValidator\nfrom django.template.defaultfilters import slugify\nfrom django.utils.translation import get_language, ugettext_lazy as _\n\nimport simplekml\nfrom mapentity.models import MapEntityMixin\nfrom mapentity.serializers import plain_text\n\nfrom geotrek.authent.models import StructureRelated\nfrom geotrek.core.models import Path, Topology\nfrom geotrek.common.utils import intersecting, classproperty\nfrom geotrek.common.mixins import (PicturesMixin, PublishableMixin,\n PictogramMixin, OptionalPictogramMixin)\nfrom geotrek.common.models import Theme\nfrom geotrek.maintenance.models import Intervention, Project\nfrom geotrek.tourism import models as tourism_models\n\nfrom .templatetags import trekking_tags\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass TrekOrderedChildManager(models.Manager):\n use_for_related_fields = True\n\n def get_queryset(self):\n # Select treks foreign keys by default\n qs = super(TrekOrderedChildManager, self).get_queryset().select_related('parent', 'child')\n # Exclude deleted treks\n return qs.exclude(parent__deleted=True).exclude(child__deleted=True)\n\n\nclass OrderedTrekChild(models.Model):\n parent = models.ForeignKey('Trek', related_name='trek_children', on_delete=models.CASCADE)\n child = models.ForeignKey('Trek', related_name='trek_parents', on_delete=models.CASCADE)\n order = models.PositiveIntegerField(default=0)\n\n objects = TrekOrderedChildManager()\n\n class Meta:\n db_table = 'o_r_itineraire_itineraire2'\n ordering = ('parent__id', 'order')\n unique_together = (\n ('parent', 'child'),\n )\n\n\nclass Trek(StructureRelated, PicturesMixin, PublishableMixin, MapEntityMixin, Topology):\n topo_object = models.OneToOneField(Topology, parent_link=True,\n db_column='evenement')\n departure = models.CharField(verbose_name=_(u\"Departure\"), max_length=128, blank=True,\n help_text=_(u\"Departure description\"), db_column='depart')\n arrival = models.CharField(verbose_name=_(u\"Arrival\"), max_length=128, blank=True,\n help_text=_(u\"Arrival description\"), db_column='arrivee')\n description_teaser = models.TextField(verbose_name=_(u\"Description teaser\"), blank=True,\n help_text=_(u\"A brief summary (map pop-ups)\"), db_column='chapeau')\n description = models.TextField(verbose_name=_(u\"Description\"), blank=True, db_column='description',\n help_text=_(u\"Complete description\"))\n ambiance = models.TextField(verbose_name=_(u\"Ambiance\"), blank=True, db_column='ambiance',\n help_text=_(u\"Main attraction and interest\"))\n access = models.TextField(verbose_name=_(u\"Access\"), blank=True, db_column='acces',\n help_text=_(u\"Best way to go\"))\n disabled_infrastructure = models.TextField(verbose_name=_(u\"Disabled infrastructure\"), db_column='handicap',\n blank=True, help_text=_(u\"Any specific infrastructure\"))\n duration = models.FloatField(verbose_name=_(u\"Duration\"), default=0, blank=True, db_column='duree',\n help_text=_(u\"In hours (1.5 = 1 h 30, 24 = 1 day, 48 = 2 days)\"),\n validators=[MinValueValidator(0)])\n is_park_centered = models.BooleanField(verbose_name=_(u\"Is in the midst of the park\"), db_column='coeur',\n help_text=_(u\"Crosses center of park\"))\n advised_parking = models.CharField(verbose_name=_(u\"Advised parking\"), max_length=128, blank=True, db_column='parking',\n help_text=_(u\"Where to park\"))\n 
parking_location = models.PointField(verbose_name=_(u\"Parking location\"), db_column='geom_parking',\n srid=settings.SRID, spatial_index=False, blank=True, null=True)\n public_transport = models.TextField(verbose_name=_(u\"Public transport\"), blank=True, db_column='transport',\n help_text=_(u\"Train, bus (see web links)\"))\n advice = models.TextField(verbose_name=_(u\"Advice\"), blank=True, db_column='recommandation',\n help_text=_(u\"Risks, danger, best period, ...\"))\n themes = models.ManyToManyField(Theme, related_name=\"treks\",\n db_table=\"o_r_itineraire_theme\", blank=True, null=True, verbose_name=_(u\"Themes\"),\n help_text=_(u\"Main theme(s)\"))\n networks = models.ManyToManyField('TrekNetwork', related_name=\"treks\",\n db_table=\"o_r_itineraire_reseau\", blank=True, null=True, verbose_name=_(u\"Networks\"),\n help_text=_(u\"Hiking networks\"))\n practice = models.ForeignKey('Practice', related_name=\"treks\",\n blank=True, null=True, verbose_name=_(u\"Practice\"), db_column='pratique')\n accessibilities = models.ManyToManyField('Accessibility', related_name=\"treks\",\n db_table=\"o_r_itineraire_accessibilite\", blank=True, null=True,\n verbose_name=_(u\"Accessibility\"))\n route = models.ForeignKey('Route', related_name='treks',\n blank=True, null=True, verbose_name=_(u\"Route\"), db_column='parcours')\n difficulty = models.ForeignKey('DifficultyLevel', related_name='treks',\n blank=True, null=True, verbose_name=_(u\"Difficulty\"), db_column='difficulte')\n web_links = models.ManyToManyField('WebLink', related_name=\"treks\",\n db_table=\"o_r_itineraire_web\", blank=True, null=True, verbose_name=_(u\"Web links\"),\n help_text=_(u\"External resources\"))\n related_treks = models.ManyToManyField('self', through='TrekRelationship',\n verbose_name=_(u\"Related treks\"), symmetrical=False,\n help_text=_(u\"Connections between treks\"),\n related_name='related_treks+') # Hide reverse attribute\n information_desks = models.ManyToManyField(tourism_models.InformationDesk, related_name='treks',\n db_table=\"o_r_itineraire_renseignement\", blank=True, null=True,\n verbose_name=_(u\"Information desks\"),\n help_text=_(u\"Where to obtain information\"))\n points_reference = models.MultiPointField(verbose_name=_(u\"Points of reference\"), db_column='geom_points_reference',\n srid=settings.SRID, spatial_index=False, blank=True, null=True)\n source = models.ManyToManyField('common.RecordSource',\n null=True, blank=True, related_name='treks',\n verbose_name=_(\"Source\"), db_table='o_r_itineraire_source')\n eid = models.CharField(verbose_name=_(u\"External id\"), max_length=128, blank=True, null=True, db_column='id_externe')\n eid2 = models.CharField(verbose_name=_(u\"Second external id\"), max_length=128, blank=True, null=True, db_column='id_externe2')\n\n objects = Topology.get_manager_cls(models.GeoManager)()\n\n category_id_prefix = 'T'\n capture_map_image_waitfor = '.poi_enum_loaded.services_loaded.info_desks_loaded.ref_points_loaded'\n\n class Meta:\n db_table = 'o_t_itineraire'\n verbose_name = _(u\"Trek\")\n verbose_name_plural = _(u\"Treks\")\n ordering = ['name']\n\n def __unicode__(self):\n return self.name\n\n @models.permalink\n def get_document_public_url(self):\n \"\"\" Override ``geotrek.common.mixins.PublishableMixin``\n \"\"\"\n return ('trekking:trek_document_public', [], {'lang': get_language(), 'pk': self.pk, 'slug': self.slug})\n\n @property\n def related(self):\n return self.related_treks.exclude(deleted=True).exclude(pk=self.pk).distinct()\n\n @classproperty\n def 
related_verbose_name(cls):\n return _(\"Related treks\")\n\n @property\n def relationships(self):\n # Does not matter if a or b\n return TrekRelationship.objects.filter(trek_a=self)\n\n @property\n def published_relationships(self):\n return self.relationships.filter(trek_b__published=True)\n\n @property\n def poi_types(self):\n if settings.TREKKING_TOPOLOGY_ENABLED:\n # Can't use values_list and must add 'ordering' because of bug:\n # https://code.djangoproject.com/ticket/14930\n values = self.pois.values('ordering', 'type')\n else:\n values = self.pois.values('type')\n pks = [value['type'] for value in values]\n return POIType.objects.filter(pk__in=set(pks))\n\n @property\n def length_kilometer(self):\n return \"%.1f\" % (self.length / 1000.0)\n\n @property\n def networks_display(self):\n return ', '.join([unicode(n) for n in self.networks.all()])\n\n @property\n def districts_display(self):\n return ', '.join([unicode(d) for d in self.districts])\n\n @property\n def themes_display(self):\n return ', '.join([unicode(n) for n in self.themes.all()])\n\n @property\n def city_departure(self):\n cities = self.cities\n return unicode(cities[0]) if len(cities) > 0 else ''\n\n def kml(self):\n \"\"\" Exports trek into KML format, add geometry as linestring and POI\n as place marks \"\"\"\n kml = simplekml.Kml()\n # Main itinerary\n geom3d = self.geom_3d.transform(4326, clone=True) # KML uses WGS84\n line = kml.newlinestring(name=self.name,\n description=plain_text(self.description),\n coords=geom3d.coords)\n line.style.linestyle.color = simplekml.Color.red # Red\n line.style.linestyle.width = 4 # pixels\n # Place marks\n for poi in self.pois:\n place = poi.geom_3d.transform(settings.API_SRID, clone=True)\n kml.newpoint(name=poi.name,\n description=plain_text(poi.description),\n coords=[place.coords])\n return kml._genkml()\n\n def has_geom_valid(self):\n \"\"\"A trek should be a LineString, even if it's a loop.\n \"\"\"\n return super(Trek, self).has_geom_valid() and self.geom.geom_type.lower() == 'linestring'\n\n @property\n def duration_pretty(self):\n return trekking_tags.duration(self.duration)\n\n @classproperty\n def duration_pretty_verbose_name(cls):\n return _(\"Formated duration\")\n\n @classmethod\n def path_treks(cls, path):\n treks = cls.objects.existing().filter(aggregations__path=path)\n # The following part prevents conflict with default trek ordering\n # ProgrammingError: SELECT DISTINCT ON expressions must match initial ORDER BY expressions\n return treks.order_by('topo_object').distinct('topo_object')\n\n @classmethod\n def topology_treks(cls, topology):\n if settings.TREKKING_TOPOLOGY_ENABLED:\n qs = cls.overlapping(topology)\n else:\n area = topology.geom.buffer(settings.TREK_POI_INTERSECTION_MARGIN)\n qs = cls.objects.existing().filter(geom__intersects=area)\n return qs\n\n @classmethod\n def published_topology_treks(cls, topology):\n return cls.topology_treks(topology).filter(published=True)\n\n # Rando v1 compat\n @property\n def usages(self):\n return [self.practice] if self.practice else []\n\n @classmethod\n def get_create_label(cls):\n return _(u\"Add a new trek\")\n\n @property\n def parents(self):\n return Trek.objects.filter(trek_children__child=self)\n\n @property\n def parents_id(self):\n parents = self.trek_parents.values_list('parent__id', flat=True)\n return list(parents)\n\n @property\n def children(self):\n return Trek.objects.filter(trek_parents__parent=self).order_by('trek_parents__order')\n\n @property\n def children_id(self):\n \"\"\"\n Get children IDs\n 
\"\"\"\n children = self.trek_children.order_by('order')\\\n .values_list('child__id',\n flat=True)\n return list(children)\n\n def previous_id_for(self, parent):\n children_id = parent.children_id\n index = children_id.index(self.id)\n if index == 0:\n return None\n return children_id[index - 1]\n\n def next_id_for(self, parent):\n children_id = parent.children_id\n index = children_id.index(self.id)\n if index == len(children_id) - 1:\n return None\n return children_id[index + 1]\n\n @property\n def previous_id(self):\n \"\"\"\n Dict of parent -> previous child\n \"\"\"\n return {parent.id: self.previous_id_for(parent) for parent in self.parents.filter(published=True)}\n\n @property\n def next_id(self):\n \"\"\"\n Dict of parent -> next child\n \"\"\"\n return {parent.id: self.next_id_for(parent) for parent in self.parents.filter(published=True)}\n\n def clean(self):\n \"\"\"\n Custom model validation\n \"\"\"\n if self.pk in self.trek_children.values_list('child__id', flat=True):\n raise ValidationError(_(u\"Cannot use itself as child trek.\"))\n\n @property\n def prefixed_category_id(self):\n if settings.SPLIT_TREKS_CATEGORIES_BY_PRACTICE and self.practice:\n return '{prefix}{id}'.format(prefix=self.category_id_prefix, id=self.practice.id)\n else:\n return self.category_id_prefix\n\n def distance(self, to_cls):\n if self.practice and self.practice.distance is not None:\n return self.practice.distance\n else:\n return settings.TOURISM_INTERSECTION_MARGIN\n\n def is_public(self):\n for parent in self.parents:\n if parent.any_published:\n return True\n return self.any_published\n\n @property\n def picture_print(self):\n picture = super(Trek, self).picture_print\n if picture:\n return picture\n for poi in self.published_pois:\n picture = poi.picture_print\n if picture:\n return picture\n\n def save(self, *args, **kwargs):\n if self.pk is not None and kwargs.get('update_fields', None) is None:\n field_names = set()\n for field in self._meta.concrete_fields:\n if not field.primary_key and not hasattr(field, 'through'):\n field_names.add(field.attname)\n old_trek = Trek.objects.get(pk=self.pk)\n if self.geom is not None and old_trek.geom.equals_exact(self.geom, tolerance=0.00001):\n field_names.remove('geom')\n if self.geom_3d is not None and old_trek.geom_3d.equals_exact(self.geom_3d, tolerance=0.00001):\n field_names.remove('geom_3d')\n return super(Trek, self).save(update_fields=field_names, *args, **kwargs)\n super(Trek, self).save(*args, **kwargs)\n\nPath.add_property('treks', Trek.path_treks, _(u\"Treks\"))\nTopology.add_property('treks', Trek.topology_treks, _(u\"Treks\"))\nif settings.HIDE_PUBLISHED_TREKS_IN_TOPOLOGIES:\n Topology.add_property('published_treks', lambda self: [], _(u\"Published treks\"))\nelse:\n Topology.add_property('published_treks', lambda self: intersecting(Trek, self).filter(published=True), _(u\"Published treks\"))\nIntervention.add_property('treks', lambda self: self.topology.treks if self.topology else [], _(u\"Treks\"))\nProject.add_property('treks', lambda self: self.edges_by_attr('treks'), _(u\"Treks\"))\ntourism_models.TouristicContent.add_property('treks', lambda self: intersecting(Trek, self), _(u\"Treks\"))\ntourism_models.TouristicContent.add_property('published_treks', lambda self: intersecting(Trek, self).filter(published=True), _(u\"Published treks\"))\ntourism_models.TouristicEvent.add_property('treks', lambda self: intersecting(Trek, self), _(u\"Treks\"))\ntourism_models.TouristicEvent.add_property('published_treks', lambda self: 
intersecting(Trek, self).filter(published=True), _(u\"Published treks\"))\n\n\nclass TrekRelationshipManager(models.Manager):\n use_for_related_fields = True\n\n def get_queryset(self):\n # Select treks foreign keys by default\n qs = super(TrekRelationshipManager, self).get_queryset().select_related('trek_a', 'trek_b')\n # Exclude deleted treks\n return qs.exclude(trek_a__deleted=True).exclude(trek_b__deleted=True)\n\n\nclass TrekRelationship(models.Model):\n \"\"\"\n Relationships between treks : symmetrical aspect is managed by a trigger that\n duplicates all couples (trek_a, trek_b)\n \"\"\"\n has_common_departure = models.BooleanField(verbose_name=_(u\"Common departure\"), db_column='depart_commun', default=False)\n has_common_edge = models.BooleanField(verbose_name=_(u\"Common edge\"), db_column='troncons_communs', default=False)\n is_circuit_step = models.BooleanField(verbose_name=_(u\"Circuit step\"), db_column='etape_circuit', default=False)\n\n trek_a = models.ForeignKey(Trek, related_name=\"trek_relationship_a\", db_column='itineraire_a')\n trek_b = models.ForeignKey(Trek, related_name=\"trek_relationship_b\", db_column='itineraire_b', verbose_name=_(u\"Trek\"))\n\n objects = TrekRelationshipManager()\n\n class Meta:\n db_table = 'o_r_itineraire_itineraire'\n verbose_name = _(u\"Trek relationship\")\n verbose_name_plural = _(u\"Trek relationships\")\n unique_together = ('trek_a', 'trek_b')\n\n def __unicode__(self):\n return u\"%s <--> %s\" % (self.trek_a, self.trek_b)\n\n @property\n def relation(self):\n return u\"%s %s%s%s\" % (\n self.trek_b.name_display,\n _(\"Departure\") if self.has_common_departure else '',\n _(\"Path\") if self.has_common_edge else '',\n _(\"Circuit\") if self.is_circuit_step else ''\n )\n\n @property\n def relation_display(self):\n return self.relation\n\n\nclass TrekNetwork(PictogramMixin):\n\n network = models.CharField(verbose_name=_(u\"Name\"), max_length=128, db_column='reseau')\n\n class Meta:\n db_table = 'o_b_reseau'\n verbose_name = _(u\"Trek network\")\n verbose_name_plural = _(u\"Trek networks\")\n ordering = ['network']\n\n def __unicode__(self):\n return self.network\n\n\nclass Practice(PictogramMixin):\n\n name = models.CharField(verbose_name=_(u\"Name\"), max_length=128, db_column='nom')\n distance = models.IntegerField(verbose_name=_(u\"Distance\"), blank=True, null=True, db_column='distance',\n help_text=_(u\"Touristic contents and events will associate within this distance (meters)\"))\n cirkwi = models.ForeignKey('cirkwi.CirkwiLocomotion', verbose_name=_(u\"Cirkwi locomotion\"), null=True, blank=True)\n order = models.IntegerField(verbose_name=_(u\"Order\"), null=True, blank=True, db_column='tri',\n help_text=_(u\"Alphabetical order if blank\"))\n\n class Meta:\n db_table = 'o_b_pratique'\n verbose_name = _(u\"Practice\")\n verbose_name_plural = _(u\"Practices\")\n ordering = ['order', 'name']\n\n def __unicode__(self):\n return self.name\n\n @property\n def slug(self):\n return slugify(self.name) or str(self.pk)\n\n\nclass Accessibility(OptionalPictogramMixin):\n\n name = models.CharField(verbose_name=_(u\"Name\"), max_length=128, db_column='nom')\n cirkwi = models.ForeignKey('cirkwi.CirkwiTag', verbose_name=_(u\"Cirkwi tag\"), null=True, blank=True)\n\n id_prefix = 'A'\n\n class Meta:\n db_table = 'o_b_accessibilite'\n verbose_name = _(u\"Accessibility\")\n verbose_name_plural = _(u\"Accessibilities\")\n ordering = ['name']\n\n def __unicode__(self):\n return self.name\n\n @property\n def prefixed_id(self):\n return 
'{prefix}{id}'.format(prefix=self.id_prefix, id=self.id)\n\n @property\n def slug(self):\n return slugify(self.name) or str(self.pk)\n\n\nclass Route(OptionalPictogramMixin):\n\n route = models.CharField(verbose_name=_(u\"Name\"), max_length=128, db_column='parcours')\n\n class Meta:\n db_table = 'o_b_parcours'\n verbose_name = _(u\"Route\")\n verbose_name_plural = _(u\"Routes\")\n ordering = ['route']\n\n def __unicode__(self):\n return self.route\n\n\nclass DifficultyLevel(OptionalPictogramMixin):\n\n \"\"\"We use an IntegerField for id, since we want to edit it in Admin.\n This column is used to order difficulty levels, especially in public website\n where treks are filtered by difficulty ids.\n \"\"\"\n id = models.IntegerField(primary_key=True)\n difficulty = models.CharField(verbose_name=_(u\"Difficulty level\"),\n max_length=128, db_column='difficulte')\n cirkwi_level = models.IntegerField(verbose_name=_(u\"Cirkwi level\"), blank=True, null=True,\n db_column='niveau_cirkwi', help_text=_(u\"Between 1 and 8\"))\n cirkwi = models.ForeignKey('cirkwi.CirkwiTag', verbose_name=_(u\"Cirkwi tag\"), null=True, blank=True)\n\n class Meta:\n db_table = 'o_b_difficulte'\n verbose_name = _(u\"Difficulty level\")\n verbose_name_plural = _(u\"Difficulty levels\")\n ordering = ['id']\n\n def __unicode__(self):\n return self.difficulty\n\n def save(self, *args, **kwargs):\n \"\"\"Manually auto-increment ids\"\"\"\n if not self.id:\n try:\n last = self.__class__.objects.all().order_by('-id')[0]\n self.id = last.id + 1\n except IndexError:\n self.id = 1\n super(DifficultyLevel, self).save(*args, **kwargs)\n\n\nclass WebLinkManager(models.Manager):\n def get_queryset(self):\n return super(WebLinkManager, self).get_queryset().select_related('category')\n\n\nclass WebLink(models.Model):\n\n name = models.CharField(verbose_name=_(u\"Name\"), max_length=128, db_column='nom')\n url = models.URLField(verbose_name=_(u\"URL\"), max_length=128, db_column='url')\n category = models.ForeignKey('WebLinkCategory', verbose_name=_(u\"Category\"),\n related_name='links', null=True, blank=True,\n db_column='categorie')\n\n objects = WebLinkManager()\n\n class Meta:\n db_table = 'o_t_web'\n verbose_name = _(u\"Web link\")\n verbose_name_plural = _(u\"Web links\")\n ordering = ['name']\n\n def __unicode__(self):\n category = \"%s - \" % self.category.label if self.category else \"\"\n return u\"%s%s (%s)\" % (category, self.name, self.url)\n\n @classmethod\n @models.permalink\n def get_add_url(cls):\n return ('trekking:weblink_add', )\n\n\nclass WebLinkCategory(PictogramMixin):\n\n label = models.CharField(verbose_name=_(u\"Label\"), max_length=128, db_column='nom')\n\n class Meta:\n db_table = 'o_b_web_category'\n verbose_name = _(u\"Web link category\")\n verbose_name_plural = _(u\"Web link categories\")\n ordering = ['label']\n\n def __unicode__(self):\n return u\"%s\" % self.label\n\n\nclass POIManager(models.GeoManager):\n def get_queryset(self):\n return super(POIManager, self).get_queryset().select_related('type', 'structure')\n\n\nclass POI(StructureRelated, PicturesMixin, PublishableMixin, MapEntityMixin, Topology):\n\n topo_object = models.OneToOneField(Topology, parent_link=True,\n db_column='evenement')\n description = models.TextField(verbose_name=_(u\"Description\"), db_column='description',\n help_text=_(u\"History, details, ...\"))\n type = models.ForeignKey('POIType', related_name='pois', verbose_name=_(u\"Type\"), db_column='type')\n eid = models.CharField(verbose_name=_(u\"External id\"), 
max_length=128, blank=True, null=True, db_column='id_externe')\n\n class Meta:\n db_table = 'o_t_poi'\n verbose_name = _(u\"POI\")\n verbose_name_plural = _(u\"POI\")\n\n # Override default manager\n objects = Topology.get_manager_cls(POIManager)()\n\n def __unicode__(self):\n return u\"%s (%s)\" % (self.name, self.type)\n\n @models.permalink\n def get_document_public_url(self):\n \"\"\" Override ``geotrek.common.mixins.PublishableMixin``\n \"\"\"\n return ('trekking:poi_document_public', [], {'lang': get_language(), 'pk': self.pk, 'slug': self.slug})\n\n def save(self, *args, **kwargs):\n super(POI, self).save(*args, **kwargs)\n # Invalidate treks map\n for trek in self.treks.all():\n try:\n os.remove(trek.get_map_image_path())\n except OSError:\n pass\n\n @property\n def type_display(self):\n return unicode(self.type)\n\n @property\n def serializable_type(self):\n return {'label': self.type.label,\n 'pictogram': self.type.get_pictogram_url()}\n\n @classmethod\n def path_pois(cls, path):\n return cls.objects.existing().filter(aggregations__path=path).distinct('pk')\n\n @classmethod\n def topology_pois(cls, topology):\n if settings.TREKKING_TOPOLOGY_ENABLED:\n qs = cls.overlapping(topology)\n else:\n area = topology.geom.buffer(settings.TREK_POI_INTERSECTION_MARGIN)\n qs = cls.objects.existing().filter(geom__intersects=area)\n return qs\n\n @classmethod\n def published_topology_pois(cls, topology):\n return cls.topology_pois(topology).filter(published=True)\n\n def distance(self, to_cls):\n return settings.TOURISM_INTERSECTION_MARGIN\n\nPath.add_property('pois', POI.path_pois, _(u\"POIs\"))\nTopology.add_property('pois', POI.topology_pois, _(u\"POIs\"))\nTopology.add_property('published_pois', POI.published_topology_pois, _(u\"Published POIs\"))\nIntervention.add_property('pois', lambda self: self.topology.pois if self.topology else [], _(u\"POIs\"))\nProject.add_property('pois', lambda self: self.edges_by_attr('pois'), _(u\"POIs\"))\ntourism_models.TouristicContent.add_property('pois', lambda self: intersecting(POI, self), _(u\"POIs\"))\ntourism_models.TouristicContent.add_property('published_pois', lambda self: intersecting(POI, self).filter(published=True), _(u\"Published POIs\"))\ntourism_models.TouristicEvent.add_property('pois', lambda self: intersecting(POI, self), _(u\"POIs\"))\ntourism_models.TouristicEvent.add_property('published_pois', lambda self: intersecting(POI, self).filter(published=True), _(u\"Published POIs\"))\n\n\nclass POIType(PictogramMixin):\n\n label = models.CharField(verbose_name=_(u\"Label\"), max_length=128, db_column='nom')\n cirkwi = models.ForeignKey('cirkwi.CirkwiPOICategory', verbose_name=_(u\"Cirkwi POI category\"), null=True, blank=True)\n\n class Meta:\n db_table = 'o_b_poi'\n verbose_name = _(u\"POI type\")\n verbose_name_plural = _(u\"POI types\")\n ordering = ['label']\n\n def __unicode__(self):\n return self.label\n\n\nclass ServiceType(PictogramMixin, PublishableMixin):\n\n practices = models.ManyToManyField('Practice', related_name=\"services\",\n db_table=\"o_r_service_pratique\", blank=True, null=True,\n verbose_name=_(u\"Practices\"))\n\n class Meta:\n db_table = 'o_b_service'\n verbose_name = _(u\"Service type\")\n verbose_name_plural = _(u\"Service types\")\n ordering = ['name']\n\n def __unicode__(self):\n return self.name\n\n\nclass ServiceManager(models.GeoManager):\n def get_queryset(self):\n return super(ServiceManager, self).get_queryset().select_related('type', 'structure')\n\n\nclass Service(StructureRelated, MapEntityMixin, 
Topology):\n\n topo_object = models.OneToOneField(Topology, parent_link=True,\n db_column='evenement')\n type = models.ForeignKey('ServiceType', related_name='services', verbose_name=_(u\"Type\"), db_column='type')\n eid = models.CharField(verbose_name=_(u\"External id\"), max_length=128, blank=True, null=True, db_column='id_externe')\n\n class Meta:\n db_table = 'o_t_service'\n verbose_name = _(u\"Service\")\n verbose_name_plural = _(u\"Services\")\n\n # Override default manager\n objects = Topology.get_manager_cls(ServiceManager)()\n\n def __unicode__(self):\n return unicode(self.type)\n\n @property\n def name(self):\n return self.type.name\n\n @property\n def name_display(self):\n s = u'<a data-pk=\"%s\" href=\"%s\" title=\"%s\">%s</a>' % (self.pk,\n self.get_detail_url(),\n self.name,\n self.name)\n if self.type.published:\n s = u'<span class=\"badge badge-success\" title=\"%s\">&#x2606;</span> ' % _(\"Published\") + s\n elif self.type.review:\n s = u'<span class=\"badge badge-warning\" title=\"%s\">&#x2606;</span> ' % _(\"Waiting for publication\") + s\n return s\n\n @classproperty\n def name_verbose_name(cls):\n return _(\"Type\")\n\n @property\n def type_display(self):\n return unicode(self.type)\n\n @property\n def serializable_type(self):\n return {'label': self.type.label,\n 'pictogram': self.type.get_pictogram_url()}\n\n @classmethod\n def path_services(cls, path):\n return cls.objects.existing().filter(aggregations__path=path).distinct('pk')\n\n @classmethod\n def topology_services(cls, topology):\n if settings.TREKKING_TOPOLOGY_ENABLED:\n qs = cls.overlapping(topology)\n else:\n area = topology.geom.buffer(settings.TREK_POI_INTERSECTION_MARGIN)\n qs = cls.objects.existing().filter(geom__intersects=area)\n if isinstance(topology, Trek):\n qs = qs.filter(type__practices=topology.practice)\n return qs\n\n @classmethod\n def published_topology_services(cls, topology):\n return cls.topology_services(topology).filter(type__published=True)\n\n def distance(self, to_cls):\n return settings.TOURISM_INTERSECTION_MARGIN\n\nPath.add_property('services', Service.path_services, _(u\"Services\"))\nTopology.add_property('services', Service.topology_services, _(u\"Services\"))\nTopology.add_property('published_services', Service.published_topology_services, _(u\"Published Services\"))\nIntervention.add_property('services', lambda self: self.topology.services if self.topology else [], _(u\"Services\"))\nProject.add_property('services', lambda self: self.edges_by_attr('services'), _(u\"Services\"))\ntourism_models.TouristicContent.add_property('services', lambda self: intersecting(Service, self), _(u\"Services\"))\ntourism_models.TouristicContent.add_property('published_services', lambda self: intersecting(Service, self).filter(published=True), _(u\"Published Services\"))\ntourism_models.TouristicEvent.add_property('services', lambda self: intersecting(Service, self), _(u\"Services\"))\ntourism_models.TouristicEvent.add_property('published_services', lambda self: intersecting(Service, self).filter(published=True), _(u\"Published Services\"))\n", "path": "geotrek/trekking/models.py" } ]
diff --git a/conf/buildout.cfg b/conf/buildout.cfg index e6542ec3fd..fb7898606a 100644 --- a/conf/buildout.cfg +++ b/conf/buildout.cfg @@ -69,7 +69,7 @@ paths = ${django:staticroot} [omelette] recipe = collective.recipe.omelette # We need mapentity templates and static dirs -eggs = +eggs = mapentity django-celery celery @@ -89,7 +89,7 @@ zc.buildout = 1.7.1 # From Geotrek # Django = 1.6.5 -mapentity = 2.5.2 +mapentity = 2.7.0 GDAL=1.10.0 tif2geojson=0.1.3 django-extended-choices = 0.3.0 diff --git a/docs/changelog.rst b/docs/changelog.rst index 7489fd31c4..461451e9e3 100644 --- a/docs/changelog.rst +++ b/docs/changelog.rst @@ -15,6 +15,10 @@ CHANGELOG **Bug fixes** * Allow NULL values for id_externe fields in database +* Fix missing elements (eg. POI enumeration) on trek map capture +* Prevent overlaping controls at bottom of list view +* Translation of column names in shapefiles export +* UTF-8 and truncated alerts in shapefile export 2.7.2 (2016-01-26) diff --git a/geotrek/maintenance/tests/test_views.py b/geotrek/maintenance/tests/test_views.py index 529f2d3f6c..164e8075fd 100644 --- a/geotrek/maintenance/tests/test_views.py +++ b/geotrek/maintenance/tests/test_views.py @@ -366,8 +366,8 @@ def test_shape_mixed(self): u'id', u'name', u'period', u'type', u'domain', u'constraint', u'global_cos', u'interventi', u'interven_1', u'comments', u'contractor', u'project_ow', u'project_ma', u'founders', - u'structure', u'date_inser', u'date_updat', - u'cities', u'districts', u'areas' + u'related_st', u'insertion_', u'update_dat', + u'cities', u'districts', u'restricted' ]) self.assertEquals(len(layer_point), 1) diff --git a/geotrek/trekking/models.py b/geotrek/trekking/models.py index d1777d8453..fa05c2ff0f 100755 --- a/geotrek/trekking/models.py +++ b/geotrek/trekking/models.py @@ -119,6 +119,7 @@ class Trek(StructureRelated, PicturesMixin, PublishableMixin, MapEntityMixin, To objects = Topology.get_manager_cls(models.GeoManager)() category_id_prefix = 'T' + capture_map_image_waitfor = '.poi_enum_loaded.services_loaded.info_desks_loaded.ref_points_loaded' class Meta: db_table = 'o_t_itineraire' diff --git a/geotrek/trekking/templates/trekking/trek_detail.html b/geotrek/trekking/templates/trekking/trek_detail.html index b3d43b0846..51edc0c893 100644 --- a/geotrek/trekking/templates/trekking/trek_detail.html +++ b/geotrek/trekking/templates/trekking/trek_detail.html @@ -73,6 +73,7 @@ map.addLayer(pois); pois.showEnumeration(); + $('.map-panel').addClass('poi_enum_loaded'); }); // @@ -90,6 +91,7 @@ }); map.layerscontrol.addOverlay(services, tr('services'), tr('Objects')); map.addLayer(services); + $('.map-panel').addClass('services_loaded'); }); // @@ -105,6 +107,7 @@ return L.marker(latlng, {icon: infoDeskIcon}); } }).addTo(map); + $('.map-panel').addClass('info_desks_loaded'); }); // @@ -125,6 +128,7 @@ }; })() }).addTo(map); + $('.map-panel').addClass('ref_points_loaded'); })(map); });
google__jax-5751
jax.devices() behaves unintuitively
Hi! I noticed some unexpected (?) behaviour in the following code:
```python
import jax
from jax.config import config
config.update("jax_backend_target", 'grpc://some endpoint:8470')
config.update("jax_xla_backend", 'tpu_driver')
print(jax.devices('cpu')) #prints tpu devices instead of cpu
```
I can get the expected behaviour if I do
```python
import jax
print(jax.devices('cpu')) #prints cpu devices
from jax.config import config
config.update("jax_backend_target", 'grpc://11.111.11.1118470')
config.update("jax_xla_backend", 'tpu_driver')
print(jax.devices('cpu')) #now prints cpu devices
print(jax.devices('tpu')) #now prints tpu devices
```
I think this may be related to the caching of `jax.lib.xla_bridge.get_backend()`. Not sure if this is expected behaviour or a bug.
I noticed this because I was trying to `jit` a few smaller functions on the host vm cpu during a larger TPU computation. I tried using the `backend=cpu` argument and `device_put`, but was unable to obtain the desired behaviour. In the end the only thing that seemed to work was to clear the cache of `get_backend()` and reconfigure `jax.config` to cpu.
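A minimal sketch of the workaround alluded to in the last sentence, assuming a placeholder TPU endpoint; it relies on `get_backend` being wrapped in `functools.lru_cache` (visible in `jax/lib/xla_bridge.py` below), which provides `cache_clear()`:

```python
import jax
from jax.config import config
from jax.lib import xla_bridge

# Reproduce the report: select the remote TPU driver backend first.
config.update("jax_backend_target", "grpc://<tpu-host>:8470")  # placeholder endpoint
config.update("jax_xla_backend", "tpu_driver")
print(jax.devices("cpu"))  # unexpectedly lists TPU devices

# Workaround: point backend selection back at the local XLA client and drop
# the memoized get_backend() result so the new setting is actually consulted.
config.update("jax_xla_backend", "xla")
xla_bridge.get_backend.cache_clear()  # available because of @lru_cache
print(jax.devices("cpu"))  # now lists CPU devices
```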
[ { "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Interface and utility functions to XLA.\n\nThis module wraps the XLA client(s) and builders to standardize their interfaces\nand provide some automatic type mapping logic for converting between Numpy and\nXLA. There are also a handful of related casting utilities.\n\"\"\"\n\n\nfrom functools import partial, lru_cache\nimport os\nfrom typing import Callable, Dict, List, Optional, Tuple, Union\n\nfrom absl import logging\n# Disable \"WARNING: Logging before flag parsing goes to stderr.\" message\nlogging._warn_preinit_stderr = 0\n\nfrom ..config import flags\nfrom jax._src import util\nfrom .. import dtypes\nimport numpy as np\nimport threading\n\ntry:\n from . import tpu_client\nexcept ImportError:\n tpu_client = None\nfrom . import xla_client\n\nxops = xla_client.ops\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n 'jax_xla_backend', 'xla',\n 'Default is \"xla\" for the XLA service directly, '\n 'or \"tpu_driver\" for using high-performance access to Cloud TPU hardware.')\nflags.DEFINE_string(\n 'jax_backend_target', 'local',\n 'Either \"local\" or \"rpc:address\" to connect to a remote service target.')\nflags.DEFINE_string(\n 'jax_platform_name',\n os.getenv('JAX_PLATFORM_NAME', ''),\n 'Platform name for XLA. The default is to attempt to use a GPU if '\n 'available, but fall back to CPU otherwise. To set the platform manually, '\n 'pass \"cpu\" for CPU or \"gpu\" for GPU.')\nflags.DEFINE_bool(\n 'jax_disable_most_optimizations', False,\n 'Try not to do much optimization work. This can be useful if the cost of '\n 'optimization is greater than that of running a less-optimized program.')\n\n\ndef get_compile_options(\n num_replicas: int,\n num_partitions: int,\n device_assignment=None,\n use_spmd_partitioning: bool = True,\n) -> xla_client.CompileOptions:\n \"\"\"Returns the compile options to use, as derived from flag values.\n\n Args:\n num_replicas: Number of replicas for which to compile.\n num_partitions: Number of partitions for which to compile.\n device_assignment: Optional tuple of integers indicating the assignment of\n logical replicas to physical devices (default inherited from\n xla_client.CompileOptions). 
Must be consistent with `num_replicas` and\n `num_partitions`.\n use_spmd_partitioning: boolean indicating whether to enable SPMD or MPMD\n partitioning in XLA.\n \"\"\"\n compile_options = xla_client.CompileOptions()\n compile_options.num_replicas = num_replicas\n compile_options.num_partitions = num_partitions\n build_options = compile_options.executable_build_options\n build_options.use_spmd_partitioning = use_spmd_partitioning\n if device_assignment is not None:\n logging.vlog(\n 2,\n 'get_compile_options: num_replicas=%s num_partitions=%s device_assignment=%s',\n num_replicas, num_partitions, device_assignment)\n device_assignment = np.array(device_assignment)\n\n # Allow 1D device assignment if num_partitions is 1.\n if (device_assignment.ndim == 1) and (num_partitions == 1):\n device_assignment = device_assignment[:, None]\n\n if num_replicas != device_assignment.shape[0]:\n msg = 'device_assignment does not match num_replicas: {} vs {}.'\n raise ValueError(msg.format(device_assignment, num_replicas))\n\n if num_partitions != device_assignment.shape[1]:\n msg = 'device_assignment does not match num_partitions: {} vs {}.'\n raise ValueError(msg.format(device_assignment, num_partitions))\n\n device_assignment = xla_client.DeviceAssignment.create(device_assignment)\n assert device_assignment.replica_count() == num_replicas\n assert device_assignment.computation_count() == num_partitions\n compile_options.device_assignment = device_assignment\n\n if FLAGS.jax_disable_most_optimizations:\n debug_options = compile_options.executable_build_options.debug_options\n debug_options.xla_backend_optimization_level = 0\n debug_options.xla_llvm_disable_expensive_passes = True\n debug_options.xla_test_all_input_layouts = False\n\n return compile_options\n\n_backends = {}\n\ndef register_backend(name, factory):\n _backends[name] = factory\n\ndef _get_local_backend(platform=None):\n if not platform:\n platform = FLAGS.jax_platform_name or None\n\n backend = xla_client.get_local_backend(platform)\n if backend is None:\n raise RuntimeError(\"No local XLA backends found.\")\n\n if backend.platform == 'cpu' and platform != 'cpu':\n logging.warning('No GPU/TPU found, falling back to CPU. 
'\n '(Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)')\n\n return backend\n\n\nregister_backend('xla', _get_local_backend)\n\n# memoize the TPU driver to be consistent with xla_client behavior\n_tpu_backend = None\n\ndef _get_tpu_driver_backend(platform):\n del platform\n global _tpu_backend\n if _tpu_backend is None:\n backend_target = FLAGS.jax_backend_target\n if backend_target is None:\n raise ValueError('When using TPU Driver as the backend, you must specify '\n '--jax_backend_target=<hostname>:8470.')\n _tpu_backend = tpu_client.TpuBackend.create(worker=backend_target)\n return _tpu_backend\n\n\nif tpu_client:\n register_backend('tpu_driver', _get_tpu_driver_backend)\n\n\n_backend_lock = threading.Lock()\n\n@lru_cache(maxsize=None) # don't use util.memoize because there is no X64 dependence.\ndef get_backend(platform=None):\n # TODO(mattjj,skyewm): remove this input polymorphism after we clean up how\n # 'backend' values are handled\n if not isinstance(platform, (type(None), str)):\n return platform\n\n with _backend_lock:\n backend = _backends.get(FLAGS.jax_xla_backend)\n if backend is None:\n msg = 'Unknown jax_xla_backend value \"{}\".'\n raise ValueError(msg.format(FLAGS.jax_xla_backend))\n return backend(platform)\n\n\ndef get_device_backend(device=None):\n \"\"\"Returns the Backend associated with `device`, or the default Backend.\"\"\"\n platform = device.platform if device else None\n return get_backend(platform)\n\n\ndef device_count(backend: Optional[str] = None) -> int:\n \"\"\"Returns the total number of devices.\n\n On most platforms, this is the same as :py:func:`jax.local_device_count`.\n However, on multi-host platforms, this will return the total number of devices\n across all hosts.\n\n Args:\n backend: This is an experimental feature and the API is likely to change.\n Optional, a string representing the xla backend: ``'cpu'``, ``'gpu'``, or\n ``'tpu'``.\n\n Returns:\n Number of devices.\n \"\"\"\n return int(get_backend(backend).device_count())\n\n\ndef local_device_count(backend: Optional[str] = None) -> int:\n \"\"\"Returns the number of devices on this host.\"\"\"\n return int(get_backend(backend).local_device_count())\n\n\ndef devices(backend: Optional[str] = None) -> List[xla_client.Device]:\n \"\"\"Returns a list of all devices for a given backend.\n\n Each device is represented by a subclass of :class:`Device` (e.g.\n :class:`CpuDevice`, :class:`GpuDevice`). The length of the returned list is\n equal to ``device_count(backend)``. Local devices can be identified by comparing\n :meth:`Device.host_id` to the value returned by :py:func:`jax.host_id`.\n\n If ``backend`` is ``None``, returns all the devices from the default backend.\n The default backend is generally ``'gpu'`` or ``'tpu'`` if available,\n otherwise ``'cpu'``.\n\n Args:\n backend: This is an experimental feature and the API is likely to change.\n Optional, a string representing the xla backend: ``'cpu'``, ``'gpu'``, or\n ``'tpu'``.\n\n Returns:\n List of Device subclasses.\n \"\"\"\n return get_backend(backend).devices()\n\n\ndef default_backend() -> str:\n \"\"\"Returns the platform name of the default XLA backend.\"\"\"\n return get_backend(None).platform\n\n\ndef local_devices(host_id: Optional[int] = None,\n backend: Optional[str] = None) -> List[xla_client.Device]:\n \"\"\"Like :py:func:`jax.devices`, but only returns devices local to a given host.\n\n If ``host_id`` is ``None``, returns devices local to this host.\n\n Args:\n host_id: the integer ID of the host. 
Host IDs can be retrieved via\n :py:func:`jax.host_ids`.\n backend: This is an experimental feature and the API is likely to change.\n Optional, a string representing the xla backend: ``'cpu'``, ``'gpu'``, or\n ``'tpu'``.\n\n Returns:\n List of Device subclasses.\n \"\"\"\n if host_id is None:\n host_id = get_backend(backend).host_id()\n if host_id not in host_ids():\n raise ValueError(f\"Unknown host_id {host_id}\")\n return [d for d in devices(backend) if d.host_id == host_id]\n\n\ndef host_id(backend: Optional[str] = None) -> int:\n \"\"\"Returns the integer host ID of this host.\n\n On most platforms, this will always be 0. This will vary on multi-host\n platforms though.\n\n Args:\n backend: This is an experimental feature and the API is likely to change.\n Optional, a string representing the xla backend: ``'cpu'``, ``'gpu'``, or\n ``'tpu'``.\n\n Returns:\n Integer host ID.\n \"\"\"\n return get_backend(backend).host_id()\n\n\ndef host_ids(backend: Optional[str] = None) -> List[int]:\n \"\"\"Returns a sorted list of all host IDs.\"\"\"\n return sorted({d.host_id for d in devices(backend)})\n\n\ndef host_count(backend: Optional[str] = None) -> int:\n \"\"\"Returns the number of hosts.\"\"\"\n return len(host_ids(backend))\n\n\n### utility functions\n\[email protected]\ndef dtype_to_etype(dtype):\n \"\"\"Convert from dtype to canonical etype (reading config.x64_enabled).\"\"\"\n return xla_client.dtype_to_etype(dtypes.canonicalize_dtype(dtype))\n\n\[email protected]\ndef supported_numpy_dtypes():\n return {dtypes.canonicalize_dtype(dtype)\n for dtype in xla_client.XLA_ELEMENT_TYPE_TO_DTYPE.values()}\n\n\n# TODO(mattjj,frostig): try to remove this function\ndef normalize_to_xla_dtypes(val):\n \"\"\"Normalize dtypes in a value.\"\"\"\n if hasattr(val, '__array__') or np.isscalar(val):\n return np.asarray(val, dtype=dtypes.canonicalize_dtype(dtypes.result_type(val)))\n elif isinstance(val, (tuple, list)):\n return tuple(normalize_to_xla_dtypes(x) for x in val)\n raise TypeError('Can\\'t convert to XLA: {}'.format(val))\n\ndef _numpy_array_constant(builder, value, canonicalize_types=True):\n if canonicalize_types:\n value = normalize_to_xla_dtypes(value)\n return xops.ConstantLiteral(builder, value)\n\ndef parameter(builder, num, shape, name=None, replicated=None):\n if name is None:\n name = ''\n if replicated is None:\n replicated = []\n elif isinstance(replicated, bool):\n replicated = [replicated] * shape.leaf_count()\n\n return xops.Parameter(builder, num,\n shape.with_major_to_minor_layout_if_absent(), name,\n replicated)\n\n\ndef constant(builder, py_val, canonicalize_types=True):\n \"\"\"Translate constant `py_val` to a constant, canonicalizing its dtype.\n\n Args:\n py_val: a Python value to be translated to a constant.\n\n Returns:\n A representation of the constant, either a ComputationDataHandle or None\n \"\"\"\n py_type = type(py_val)\n if py_type in _constant_handlers:\n return _constant_handlers[py_type](builder, py_val, canonicalize_types)\n else:\n raise TypeError(\"No constant handler for type: {}\".format(py_type))\n\n# HLO instructions optionally can be annotated to say how the output should be\n# spatially partitioned (represented in XLA as OpSharding protos, see\n# _sharding_to_proto). For array outputs, the annotation is either an int per\n# dimension specifying the number of ways that dimension divided (i.e. the total\n# number of shards is the product), or None to indicate the array should be\n# replicated. Tuple outputs are represented as tuples thereof. 
XLA supports\n# arbitrary tuple nesting, but JAX only uses one level of tupling (and our type\n# checkers don't support recursive types), so we only represent one level of\n# nesting in this type definition.\nSpatialSharding = Union[Tuple[int, ...],\n None,\n Tuple[Union[Tuple[int, ...], None], ...]]\n\ndef _sharding_to_proto(sharding: SpatialSharding):\n \"\"\"Converts a SpatialSharding to an OpSharding.\n\n See\n https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/xla_data.proto#L601\n for details on the OpSharding proto.\n \"\"\"\n proto = xla_client.OpSharding()\n if isinstance(sharding, tuple) and not isinstance(sharding[0], int):\n assert all(s is None or isinstance(s, tuple) for s in sharding)\n return tuple_sharding_proto(list(map(_sharding_to_proto, sharding))) # type: ignore\n\n if sharding is None:\n proto.type = xla_client.OpSharding.Type.REPLICATED\n else:\n proto.type = xla_client.OpSharding.Type.OTHER\n proto.tile_assignment_dimensions = list(sharding)\n proto.tile_assignment_devices = list(range(np.product(sharding)))\n return proto\n\ndef tuple_sharding_proto(elems):\n proto = xla_client.OpSharding()\n assert all(isinstance(e, type(proto)) for e in elems)\n proto.type = xla_client.OpSharding.Type.TUPLE\n proto.tuple_shardings = elems\n return proto\n\ndef set_sharding_proto(builder, op, sharding_proto):\n \"\"\"Uses CustomCall to annotate a value as sharded.\"\"\"\n # \"Sharding\" is a built-in custom call target that acts like an identity\n # function, and is used to attach an OpSharding to.\n return with_sharding_proto(builder, sharding_proto, xops.CustomCall,\n builder, b\"Sharding\", [op], builder.get_shape(op))\n\ndef with_sharding_proto(builder, sharding_proto, op_fn, *args, **kwargs):\n \"\"\"Builds op_fn(*args, **kwargs) with sharding annotation.\"\"\"\n builder.set_sharding(sharding_proto)\n try:\n return op_fn(*args, **kwargs)\n finally:\n builder.clear_sharding()\n\ndef set_sharding(builder, op, sharding: SpatialSharding):\n \"\"\"Uses CustomCall to annotate a value as sharded.\"\"\"\n return set_sharding_proto(builder, op, _sharding_to_proto(sharding))\n\ndef with_sharding(builder, sharding: SpatialSharding, op_fn, *args, **kwargs):\n \"\"\"Builds op_fn(*args, **kwargs) with sharding annotation.\"\"\"\n return with_sharding_proto(builder, _sharding_to_proto(sharding), op_fn, *args, **kwargs)\n\ndef make_computation_builder(name):\n return xla_client.XlaBuilder(name)\n\n\ndef register_constant_handler(type_, handler_fun):\n _constant_handlers[type_] = handler_fun\n_constant_handlers: Dict[type, Callable] = {}\n\n\ndef _ndarray_constant_handler(c, val, canonicalize_types=True):\n \"\"\"Constant handler for ndarray literals, handling zero-size strides.\n\n This function essentially calls _numpy_array_constant(val) except it has\n special handling of arrays with any strides of size zero: for those, it\n generates appropriate calls to NumpyArrayConstant, Broadcast, and Transpose\n to avoid staging in large literals that might arise from np.zeros or np.ones\n or the output of lax.broadcast (which uses np.broadcast_to which in turn\n uses size-zero strides).\n\n Args:\n c: an XlaBuilder\n val: an ndarray.\n\n Returns:\n An XLA ComputationDataHandle / XlaOp representing the constant ndarray\n staged into the XLA Computation.\n \"\"\"\n # TODO(mattjj): revise this to use xops.BroadcastInDim rather than Transpose\n if dtypes.result_type(val) == dtypes.float0:\n return _numpy_array_constant(c, np.zeros(val.shape, dtype=np.bool_))\n elif 
np.any(np.equal(0, val.strides)) and val.size > 0:\n zero_stride_axes, = np.where(np.equal(0, val.strides))\n other_axes, = np.where(np.not_equal(0, val.strides))\n collapsed_val = val[tuple(0 if ax in zero_stride_axes else slice(None)\n for ax in range(val.ndim))]\n xla_val = xops.Broadcast(\n _numpy_array_constant(c, collapsed_val, canonicalize_types),\n np.take(val.shape, zero_stride_axes))\n permutation = np.argsort(tuple(zero_stride_axes) + tuple(other_axes))\n return xops.Transpose(xla_val, permutation)\n else:\n return _numpy_array_constant(c, val, canonicalize_types)\nregister_constant_handler(np.ndarray, _ndarray_constant_handler)\n\n\ndef _scalar_constant_handler(c, val, canonicalize_types=True):\n return _numpy_array_constant(c, val, canonicalize_types)\n\nfor scalar_type in [np.int8, np.int16, np.int32, np.int64,\n np.uint8, np.uint16, np.uint32, np.uint64,\n np.float16, np.float32, np.float64,\n np.bool_, np.longlong,\n xla_client.bfloat16]:\n register_constant_handler(scalar_type, _scalar_constant_handler)\n\n# https://github.com/winpython/winpython/issues/613#issuecomment-380121523\nif hasattr(np, \"float128\"):\n register_constant_handler(np.float128, _scalar_constant_handler)\n\ndef _python_scalar_handler(dtype, c, val, canonicalize_dtypes=True):\n return _numpy_array_constant(c, dtype.type(val))\n\nfor ptype, dtype in dtypes.python_scalar_dtypes.items():\n register_constant_handler(ptype, partial(_python_scalar_handler, dtype))\n", "path": "jax/lib/xla_bridge.py" } ]
[ { "content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Interface and utility functions to XLA.\n\nThis module wraps the XLA client(s) and builders to standardize their interfaces\nand provide some automatic type mapping logic for converting between Numpy and\nXLA. There are also a handful of related casting utilities.\n\"\"\"\n\n\nfrom functools import partial, lru_cache\nimport os\nfrom typing import Callable, Dict, List, Optional, Tuple, Union\n\nfrom absl import logging\n# Disable \"WARNING: Logging before flag parsing goes to stderr.\" message\nlogging._warn_preinit_stderr = 0\n\nfrom ..config import flags\nfrom jax._src import util\nfrom .. import dtypes\nimport numpy as np\nimport threading\n\ntry:\n from . import tpu_client\nexcept ImportError:\n tpu_client = None\nfrom . import xla_client\n\nxops = xla_client.ops\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n 'jax_xla_backend', 'xla',\n 'Default is \"xla\" for the XLA service directly, '\n 'or \"tpu_driver\" for using high-performance access to Cloud TPU hardware.')\nflags.DEFINE_string(\n 'jax_backend_target', 'local',\n 'Either \"local\" or \"rpc:address\" to connect to a remote service target.')\nflags.DEFINE_string(\n 'jax_platform_name',\n os.getenv('JAX_PLATFORM_NAME', ''),\n 'Platform name for XLA. The default is to attempt to use a GPU if '\n 'available, but fall back to CPU otherwise. To set the platform manually, '\n 'pass \"cpu\" for CPU or \"gpu\" for GPU.')\nflags.DEFINE_bool(\n 'jax_disable_most_optimizations', False,\n 'Try not to do much optimization work. This can be useful if the cost of '\n 'optimization is greater than that of running a less-optimized program.')\n\n\ndef get_compile_options(\n num_replicas: int,\n num_partitions: int,\n device_assignment=None,\n use_spmd_partitioning: bool = True,\n) -> xla_client.CompileOptions:\n \"\"\"Returns the compile options to use, as derived from flag values.\n\n Args:\n num_replicas: Number of replicas for which to compile.\n num_partitions: Number of partitions for which to compile.\n device_assignment: Optional tuple of integers indicating the assignment of\n logical replicas to physical devices (default inherited from\n xla_client.CompileOptions). 
Must be consistent with `num_replicas` and\n `num_partitions`.\n use_spmd_partitioning: boolean indicating whether to enable SPMD or MPMD\n partitioning in XLA.\n \"\"\"\n compile_options = xla_client.CompileOptions()\n compile_options.num_replicas = num_replicas\n compile_options.num_partitions = num_partitions\n build_options = compile_options.executable_build_options\n build_options.use_spmd_partitioning = use_spmd_partitioning\n if device_assignment is not None:\n logging.vlog(\n 2,\n 'get_compile_options: num_replicas=%s num_partitions=%s device_assignment=%s',\n num_replicas, num_partitions, device_assignment)\n device_assignment = np.array(device_assignment)\n\n # Allow 1D device assignment if num_partitions is 1.\n if (device_assignment.ndim == 1) and (num_partitions == 1):\n device_assignment = device_assignment[:, None]\n\n if num_replicas != device_assignment.shape[0]:\n msg = 'device_assignment does not match num_replicas: {} vs {}.'\n raise ValueError(msg.format(device_assignment, num_replicas))\n\n if num_partitions != device_assignment.shape[1]:\n msg = 'device_assignment does not match num_partitions: {} vs {}.'\n raise ValueError(msg.format(device_assignment, num_partitions))\n\n device_assignment = xla_client.DeviceAssignment.create(device_assignment)\n assert device_assignment.replica_count() == num_replicas\n assert device_assignment.computation_count() == num_partitions\n compile_options.device_assignment = device_assignment\n\n if FLAGS.jax_disable_most_optimizations:\n debug_options = compile_options.executable_build_options.debug_options\n debug_options.xla_backend_optimization_level = 0\n debug_options.xla_llvm_disable_expensive_passes = True\n debug_options.xla_test_all_input_layouts = False\n\n return compile_options\n\n_backends = {}\n\ndef register_backend(name, factory):\n _backends[name] = factory\n\ndef _get_local_backend(platform=None):\n if not platform:\n platform = FLAGS.jax_platform_name or None\n\n backend = xla_client.get_local_backend(platform)\n if backend is None:\n raise RuntimeError(\"No local XLA backends found.\")\n\n if backend.platform == 'cpu' and platform != 'cpu':\n logging.warning('No GPU/TPU found, falling back to CPU. 
'\n '(Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)')\n\n return backend\n\n\nregister_backend('xla', _get_local_backend)\n\n# memoize the TPU driver to be consistent with xla_client behavior\n_tpu_backend = None\n\ndef _get_tpu_driver_backend(platform):\n if platform == \"cpu\":\n return _get_local_backend(\"cpu\")\n\n global _tpu_backend\n if _tpu_backend is None:\n backend_target = FLAGS.jax_backend_target\n if backend_target is None:\n raise ValueError('When using TPU Driver as the backend, you must specify '\n '--jax_backend_target=<hostname>:8470.')\n _tpu_backend = tpu_client.TpuBackend.create(worker=backend_target)\n return _tpu_backend\n\n\nif tpu_client:\n register_backend('tpu_driver', _get_tpu_driver_backend)\n\n\n_backend_lock = threading.Lock()\n\n@lru_cache(maxsize=None) # don't use util.memoize because there is no X64 dependence.\ndef get_backend(platform=None):\n # TODO(mattjj,skyewm): remove this input polymorphism after we clean up how\n # 'backend' values are handled\n if not isinstance(platform, (type(None), str)):\n return platform\n\n with _backend_lock:\n backend = _backends.get(FLAGS.jax_xla_backend)\n if backend is None:\n msg = 'Unknown jax_xla_backend value \"{}\".'\n raise ValueError(msg.format(FLAGS.jax_xla_backend))\n return backend(platform)\n\n\ndef get_device_backend(device=None):\n \"\"\"Returns the Backend associated with `device`, or the default Backend.\"\"\"\n platform = device.platform if device else None\n return get_backend(platform)\n\n\ndef device_count(backend: Optional[str] = None) -> int:\n \"\"\"Returns the total number of devices.\n\n On most platforms, this is the same as :py:func:`jax.local_device_count`.\n However, on multi-host platforms, this will return the total number of devices\n across all hosts.\n\n Args:\n backend: This is an experimental feature and the API is likely to change.\n Optional, a string representing the xla backend: ``'cpu'``, ``'gpu'``, or\n ``'tpu'``.\n\n Returns:\n Number of devices.\n \"\"\"\n return int(get_backend(backend).device_count())\n\n\ndef local_device_count(backend: Optional[str] = None) -> int:\n \"\"\"Returns the number of devices on this host.\"\"\"\n return int(get_backend(backend).local_device_count())\n\n\ndef devices(backend: Optional[str] = None) -> List[xla_client.Device]:\n \"\"\"Returns a list of all devices for a given backend.\n\n Each device is represented by a subclass of :class:`Device` (e.g.\n :class:`CpuDevice`, :class:`GpuDevice`). The length of the returned list is\n equal to ``device_count(backend)``. 
Local devices can be identified by comparing\n :meth:`Device.host_id` to the value returned by :py:func:`jax.host_id`.\n\n If ``backend`` is ``None``, returns all the devices from the default backend.\n The default backend is generally ``'gpu'`` or ``'tpu'`` if available,\n otherwise ``'cpu'``.\n\n Args:\n backend: This is an experimental feature and the API is likely to change.\n Optional, a string representing the xla backend: ``'cpu'``, ``'gpu'``, or\n ``'tpu'``.\n\n Returns:\n List of Device subclasses.\n \"\"\"\n return get_backend(backend).devices()\n\n\ndef default_backend() -> str:\n \"\"\"Returns the platform name of the default XLA backend.\"\"\"\n return get_backend(None).platform\n\n\ndef local_devices(host_id: Optional[int] = None,\n backend: Optional[str] = None) -> List[xla_client.Device]:\n \"\"\"Like :py:func:`jax.devices`, but only returns devices local to a given host.\n\n If ``host_id`` is ``None``, returns devices local to this host.\n\n Args:\n host_id: the integer ID of the host. Host IDs can be retrieved via\n :py:func:`jax.host_ids`.\n backend: This is an experimental feature and the API is likely to change.\n Optional, a string representing the xla backend: ``'cpu'``, ``'gpu'``, or\n ``'tpu'``.\n\n Returns:\n List of Device subclasses.\n \"\"\"\n if host_id is None:\n host_id = get_backend(backend).host_id()\n if host_id not in host_ids():\n raise ValueError(f\"Unknown host_id {host_id}\")\n return [d for d in devices(backend) if d.host_id == host_id]\n\n\ndef host_id(backend: Optional[str] = None) -> int:\n \"\"\"Returns the integer host ID of this host.\n\n On most platforms, this will always be 0. This will vary on multi-host\n platforms though.\n\n Args:\n backend: This is an experimental feature and the API is likely to change.\n Optional, a string representing the xla backend: ``'cpu'``, ``'gpu'``, or\n ``'tpu'``.\n\n Returns:\n Integer host ID.\n \"\"\"\n return get_backend(backend).host_id()\n\n\ndef host_ids(backend: Optional[str] = None) -> List[int]:\n \"\"\"Returns a sorted list of all host IDs.\"\"\"\n return sorted({d.host_id for d in devices(backend)})\n\n\ndef host_count(backend: Optional[str] = None) -> int:\n \"\"\"Returns the number of hosts.\"\"\"\n return len(host_ids(backend))\n\n\n### utility functions\n\[email protected]\ndef dtype_to_etype(dtype):\n \"\"\"Convert from dtype to canonical etype (reading config.x64_enabled).\"\"\"\n return xla_client.dtype_to_etype(dtypes.canonicalize_dtype(dtype))\n\n\[email protected]\ndef supported_numpy_dtypes():\n return {dtypes.canonicalize_dtype(dtype)\n for dtype in xla_client.XLA_ELEMENT_TYPE_TO_DTYPE.values()}\n\n\n# TODO(mattjj,frostig): try to remove this function\ndef normalize_to_xla_dtypes(val):\n \"\"\"Normalize dtypes in a value.\"\"\"\n if hasattr(val, '__array__') or np.isscalar(val):\n return np.asarray(val, dtype=dtypes.canonicalize_dtype(dtypes.result_type(val)))\n elif isinstance(val, (tuple, list)):\n return tuple(normalize_to_xla_dtypes(x) for x in val)\n raise TypeError('Can\\'t convert to XLA: {}'.format(val))\n\ndef _numpy_array_constant(builder, value, canonicalize_types=True):\n if canonicalize_types:\n value = normalize_to_xla_dtypes(value)\n return xops.ConstantLiteral(builder, value)\n\ndef parameter(builder, num, shape, name=None, replicated=None):\n if name is None:\n name = ''\n if replicated is None:\n replicated = []\n elif isinstance(replicated, bool):\n replicated = [replicated] * shape.leaf_count()\n\n return xops.Parameter(builder, num,\n 
shape.with_major_to_minor_layout_if_absent(), name,\n replicated)\n\n\ndef constant(builder, py_val, canonicalize_types=True):\n \"\"\"Translate constant `py_val` to a constant, canonicalizing its dtype.\n\n Args:\n py_val: a Python value to be translated to a constant.\n\n Returns:\n A representation of the constant, either a ComputationDataHandle or None\n \"\"\"\n py_type = type(py_val)\n if py_type in _constant_handlers:\n return _constant_handlers[py_type](builder, py_val, canonicalize_types)\n else:\n raise TypeError(\"No constant handler for type: {}\".format(py_type))\n\n# HLO instructions optionally can be annotated to say how the output should be\n# spatially partitioned (represented in XLA as OpSharding protos, see\n# _sharding_to_proto). For array outputs, the annotation is either an int per\n# dimension specifying the number of ways that dimension divided (i.e. the total\n# number of shards is the product), or None to indicate the array should be\n# replicated. Tuple outputs are represented as tuples thereof. XLA supports\n# arbitrary tuple nesting, but JAX only uses one level of tupling (and our type\n# checkers don't support recursive types), so we only represent one level of\n# nesting in this type definition.\nSpatialSharding = Union[Tuple[int, ...],\n None,\n Tuple[Union[Tuple[int, ...], None], ...]]\n\ndef _sharding_to_proto(sharding: SpatialSharding):\n \"\"\"Converts a SpatialSharding to an OpSharding.\n\n See\n https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/xla_data.proto#L601\n for details on the OpSharding proto.\n \"\"\"\n proto = xla_client.OpSharding()\n if isinstance(sharding, tuple) and not isinstance(sharding[0], int):\n assert all(s is None or isinstance(s, tuple) for s in sharding)\n return tuple_sharding_proto(list(map(_sharding_to_proto, sharding))) # type: ignore\n\n if sharding is None:\n proto.type = xla_client.OpSharding.Type.REPLICATED\n else:\n proto.type = xla_client.OpSharding.Type.OTHER\n proto.tile_assignment_dimensions = list(sharding)\n proto.tile_assignment_devices = list(range(np.product(sharding)))\n return proto\n\ndef tuple_sharding_proto(elems):\n proto = xla_client.OpSharding()\n assert all(isinstance(e, type(proto)) for e in elems)\n proto.type = xla_client.OpSharding.Type.TUPLE\n proto.tuple_shardings = elems\n return proto\n\ndef set_sharding_proto(builder, op, sharding_proto):\n \"\"\"Uses CustomCall to annotate a value as sharded.\"\"\"\n # \"Sharding\" is a built-in custom call target that acts like an identity\n # function, and is used to attach an OpSharding to.\n return with_sharding_proto(builder, sharding_proto, xops.CustomCall,\n builder, b\"Sharding\", [op], builder.get_shape(op))\n\ndef with_sharding_proto(builder, sharding_proto, op_fn, *args, **kwargs):\n \"\"\"Builds op_fn(*args, **kwargs) with sharding annotation.\"\"\"\n builder.set_sharding(sharding_proto)\n try:\n return op_fn(*args, **kwargs)\n finally:\n builder.clear_sharding()\n\ndef set_sharding(builder, op, sharding: SpatialSharding):\n \"\"\"Uses CustomCall to annotate a value as sharded.\"\"\"\n return set_sharding_proto(builder, op, _sharding_to_proto(sharding))\n\ndef with_sharding(builder, sharding: SpatialSharding, op_fn, *args, **kwargs):\n \"\"\"Builds op_fn(*args, **kwargs) with sharding annotation.\"\"\"\n return with_sharding_proto(builder, _sharding_to_proto(sharding), op_fn, *args, **kwargs)\n\ndef make_computation_builder(name):\n return xla_client.XlaBuilder(name)\n\n\ndef register_constant_handler(type_, 
handler_fun):\n _constant_handlers[type_] = handler_fun\n_constant_handlers: Dict[type, Callable] = {}\n\n\ndef _ndarray_constant_handler(c, val, canonicalize_types=True):\n \"\"\"Constant handler for ndarray literals, handling zero-size strides.\n\n This function essentially calls _numpy_array_constant(val) except it has\n special handling of arrays with any strides of size zero: for those, it\n generates appropriate calls to NumpyArrayConstant, Broadcast, and Transpose\n to avoid staging in large literals that might arise from np.zeros or np.ones\n or the output of lax.broadcast (which uses np.broadcast_to which in turn\n uses size-zero strides).\n\n Args:\n c: an XlaBuilder\n val: an ndarray.\n\n Returns:\n An XLA ComputationDataHandle / XlaOp representing the constant ndarray\n staged into the XLA Computation.\n \"\"\"\n # TODO(mattjj): revise this to use xops.BroadcastInDim rather than Transpose\n if dtypes.result_type(val) == dtypes.float0:\n return _numpy_array_constant(c, np.zeros(val.shape, dtype=np.bool_))\n elif np.any(np.equal(0, val.strides)) and val.size > 0:\n zero_stride_axes, = np.where(np.equal(0, val.strides))\n other_axes, = np.where(np.not_equal(0, val.strides))\n collapsed_val = val[tuple(0 if ax in zero_stride_axes else slice(None)\n for ax in range(val.ndim))]\n xla_val = xops.Broadcast(\n _numpy_array_constant(c, collapsed_val, canonicalize_types),\n np.take(val.shape, zero_stride_axes))\n permutation = np.argsort(tuple(zero_stride_axes) + tuple(other_axes))\n return xops.Transpose(xla_val, permutation)\n else:\n return _numpy_array_constant(c, val, canonicalize_types)\nregister_constant_handler(np.ndarray, _ndarray_constant_handler)\n\n\ndef _scalar_constant_handler(c, val, canonicalize_types=True):\n return _numpy_array_constant(c, val, canonicalize_types)\n\nfor scalar_type in [np.int8, np.int16, np.int32, np.int64,\n np.uint8, np.uint16, np.uint32, np.uint64,\n np.float16, np.float32, np.float64,\n np.bool_, np.longlong,\n xla_client.bfloat16]:\n register_constant_handler(scalar_type, _scalar_constant_handler)\n\n# https://github.com/winpython/winpython/issues/613#issuecomment-380121523\nif hasattr(np, \"float128\"):\n register_constant_handler(np.float128, _scalar_constant_handler)\n\ndef _python_scalar_handler(dtype, c, val, canonicalize_dtypes=True):\n return _numpy_array_constant(c, dtype.type(val))\n\nfor ptype, dtype in dtypes.python_scalar_dtypes.items():\n register_constant_handler(ptype, partial(_python_scalar_handler, dtype))\n", "path": "jax/lib/xla_bridge.py" } ]
diff --git a/jax/lib/xla_bridge.py b/jax/lib/xla_bridge.py index aba3c6615edc..a292730c3db2 100644 --- a/jax/lib/xla_bridge.py +++ b/jax/lib/xla_bridge.py @@ -144,7 +144,9 @@ def _get_local_backend(platform=None): _tpu_backend = None def _get_tpu_driver_backend(platform): - del platform + if platform == "cpu": + return _get_local_backend("cpu") + global _tpu_backend if _tpu_backend is None: backend_target = FLAGS.jax_backend_target
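With the patch above, asking for the CPU platform while `tpu_driver` is selected falls through to the local XLA client, so a session like the one in the report would be expected to behave as follows (the endpoint is still a placeholder):

```python
import jax
from jax.config import config

config.update("jax_backend_target", "grpc://<tpu-host>:8470")  # placeholder endpoint
config.update("jax_xla_backend", "tpu_driver")

print(jax.devices())       # TPU devices from the remote driver backend
print(jax.devices("cpu"))  # local CPU devices, no cache clearing needed
```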
scikit-hep__pyhf-1524
jaxlib v0.1.68 breaks CI with segfault on macOS # Description On 2021-06-22 the scheduled nightly CI for `v0.6.2` [was passing](https://github.com/scikit-hep/pyhf/actions/runs/962645978) and had installed libraries [pass-pip-list.txt](https://github.com/scikit-hep/pyhf/files/6713928/pass-pip-list.txt). Then on 2021-06-23 the CI [fails](https://github.com/scikit-hep/pyhf/actions/runs/966295835) with a segfault and had and had installed libraries [fail-pip-list.txt](https://github.com/scikit-hep/pyhf/files/6713929/fail-pip-list.txt), where the difference between them is the versions of `jax` and `jaxlib`. ``` $ diff pass-pip-list.txt fail-pip-list.txt 5a6 > appnope 0.1.2 41,42c42,43 < jax 0.2.14 < jaxlib 0.1.67 --- > jax 0.2.16 > jaxlib 0.1.68 97c98 < pyhf 0.6.2 /home/runner/work/pyhf/pyhf/src --- > pyhf 0.6.2 /Users/runner/work/pyhf/pyhf/src ``` The relevant section of the logs for the failure is the following: ```pytb src/pyhf/infer/utils.py .. [ 3%] Fatal Python error: Segmentation fault Thread 0x000070000dda9000 (most recent call first): File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/threading.py", line 306 in wait File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/threading.py", line 558 in wait File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/tqdm/_monitor.py", line 60 in run File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/threading.py", line 932 in _bootstrap_inner File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/threading.py", line 890 in _bootstrap Current thread 0x00000001050cfdc0 (most recent call first): File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/jaxlib/xla_client.py", line 67 in make_cpu_client File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/jax/lib/xla_bridge.py", line 206 in backends File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/jax/lib/xla_bridge.py", line 242 in get_backend File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/jax/lib/xla_bridge.py", line 263 in get_device_backend File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/jax/interpreters/xla.py", line 138 in _device_put_array File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/jax/interpreters/xla.py", line 133 in device_put File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/jax/_src/lax/lax.py", line 1596 in _device_put_raw File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 3025 in array File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 3064 in asarray File "/Users/runner/work/pyhf/pyhf/src/pyhf/tensor/jax_backend.py", line 230 in astensor File "/Users/runner/work/pyhf/pyhf/src/pyhf/tensor/common.py", line 30 in _precompute File "/Users/runner/work/pyhf/pyhf/src/pyhf/events.py", line 36 in __call__ File "/Users/runner/work/pyhf/pyhf/src/pyhf/__init__.py", line 147 in set_backend File "/Users/runner/work/pyhf/pyhf/src/pyhf/events.py", line 93 in register_wrapper File "<doctest pyhf.tensor.jax_backend.jax_backend.astensor[1]>", line 1 in <module> File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/doctest.py", line 1336 in __run File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/doctest.py", line 1483 in run File 
"/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/doctest.py", line 1844 in run File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/doctest.py", line 287 in runtest File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/runner.py", line 162 in pytest_runtest_call File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda> File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__ File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/runner.py", line 255 in <lambda> File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/runner.py", line 311 in from_call File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/runner.py", line 254 in call_runtest_hook File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/runner.py", line 215 in call_and_report File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/runner.py", line 126 in runtestprotocol File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/runner.py", line 109 in pytest_runtest_protocol File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda> File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__ File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/main.py", line 348 in pytest_runtestloop File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda> File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__ File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/main.py", line 323 in _main File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/main.py", line 269 in wrap_session File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/main.py", line 316 in pytest_cmdline_main File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda> File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec File 
"/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__ File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162 in main File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185 in console_main File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pytest/__main__.py", line 5 in <module> File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/runpy.py", line 87 in _run_code File "/Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/runpy.py", line 194 in _run_module_as_main /Users/runner/work/_temp/b65896af-bc5b-4842-94da-e0fd5882e8d5.sh: line 1: 1785 Segmentation fault: 11 python -m pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py /Users/runner/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown warnings.warn('resource_tracker: There appear to be %d ' src/pyhf/tensor/jax_backend.py Error: Process completed with exit code 139. ``` Both `jax` and `jaxlib` had releases on 2021-06-23: - [`jax` `v0.2.16`](https://pypi.org/project/jax/0.2.16/#history) - [`jaxlib` `v0.1.68`](https://pypi.org/project/jaxlib/0.1.68/#history) @lukasheinrich @kratsg we'll need to follow up with the JAX team.
[ { "content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.1', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.1',\n ],\n 'torch': ['torch~=1.8'],\n 'jax': ['jax~=0.2.8', 'jaxlib~=0.1.58,<0.1.68'],\n 'xmlio': [\n 'uproot3>=3.14.1',\n 'uproot~=4.0',\n ], # uproot3 required until writing to ROOT supported in uproot4\n 'minuit': ['iminuit>=2.4'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'sphinx>=4.0.0',\n 'sphinxcontrib-bibtex~=2.1',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py" } ]
[ { "content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow~=2.2.1', # TensorFlow minor releases are as volatile as major\n 'tensorflow-probability~=0.10.1',\n ],\n 'torch': ['torch~=1.8'],\n 'jax': ['jax~=0.2.8', 'jaxlib~=0.1.58,!=0.1.68'], # c.f. Issue 1501\n 'xmlio': [\n 'uproot3>=3.14.1',\n 'uproot~=4.0',\n ], # uproot3 required until writing to ROOT supported in uproot4\n 'minuit': ['iminuit>=2.4'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'pytest~=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'sphinx>=4.0.0',\n 'sphinxcontrib-bibtex~=2.1',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py" } ]
diff --git a/setup.py b/setup.py index 55a4de07c7..036df19c41 100644 --- a/setup.py +++ b/setup.py @@ -7,7 +7,7 @@ 'tensorflow-probability~=0.10.1', ], 'torch': ['torch~=1.8'], - 'jax': ['jax~=0.2.8', 'jaxlib~=0.1.58,<0.1.68'], + 'jax': ['jax~=0.2.8', 'jaxlib~=0.1.58,!=0.1.68'], # c.f. Issue 1501 'xmlio': [ 'uproot3>=3.14.1', 'uproot~=4.0',
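Besides the version pin above, a CI job could also guard against the affected wheel at runtime; a small sketch (hard-failing with this particular message is a choice of the example, not something from the report):

```python
# Sketch: fail fast if the jaxlib build known to segfault on macOS is installed.
from importlib.metadata import version  # stdlib on Python 3.8+, matching the CI runners above

if version("jaxlib") == "0.1.68":
    raise RuntimeError(
        "jaxlib 0.1.68 segfaults on macOS during the pyhf test suite; "
        "install a different 0.1.x release (see the setup.py pin above)"
    )
```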
pwndbg__pwndbg-958
xinfo command doesn't like anonymous page names The naming of anonymous memory pages introduced in 462eb53 is cool but it doesn't play nicely with the `xinfo` command. The following figure shows `xinfo` behaving correctly when displaying info on a regular mapping, followed by an error when used with an anonymous mapping. ![xinfo](https://user-images.githubusercontent.com/16000770/133327381-c4f9d13b-5d20-4fd3-80b5-01aad51a5387.png) `xinfo` uses `page.is_memory_mapped_file()` to determine whether a page is file backed, this in turn is based on the object name: https://github.com/pwndbg/pwndbg/blob/648c7f014e25a2944ee40891000fb43031182e51/pwndbg/memory.py#L409-L411 Because 462eb53 names anonymous pages that previously had no name, the above function reports them as memory mapped files which `xinfo` tries to open. A possible solution could be to enclose the anonymous page names in square brackets, which `is_memory_mapped_file()` ignores.
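A minimal sketch of the bracketing idea proposed above, applied to the anonymous-page fallback in `proc_pid_maps()` (the parser is reproduced further below); the exact `[anon_...]` spelling is an assumption, not necessarily what upstream adopted:

```python
# pwndbg/vmmap.py, inside proc_pid_maps() -- sketch only
try:
    inode, objfile = inode_objfile.split(None, 1)
except ValueError:
    # Anonymous mapping: no inode/path column in /proc/<pid>/maps.
    # Wrapping the generated name in square brackets makes
    # Page.is_memory_mapped_file() treat it like [heap]/[stack]/[vdso],
    # so `xinfo` no longer tries to open it as a file.
    objfile = '[anon_' + start[:-3] + ']'
```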
[ { "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nRoutines to enumerate mapped memory, and attempt to associate\naddress ranges with various ELF files and permissions.\n\nThe reason that we need robustness is that not every operating\nsystem has /proc/$$/maps, which backs 'info proc mapping'.\n\"\"\"\nimport bisect\nimport os\nimport sys\n\nimport gdb\n\nimport pwndbg.abi\nimport pwndbg.elf\nimport pwndbg.events\nimport pwndbg.file\nimport pwndbg.memoize\nimport pwndbg.memory\nimport pwndbg.proc\nimport pwndbg.qemu\nimport pwndbg.regs\nimport pwndbg.remote\nimport pwndbg.stack\nimport pwndbg.typeinfo\n\n# List of manually-explored pages which were discovered\n# by analyzing the stack or register context.\nexplored_pages = []\n\n# List of custom pages that can be managed manually by vmmap_* commands family\ncustom_pages = []\n\[email protected]_on_start\[email protected]_on_stop\ndef get():\n if not pwndbg.proc.alive:\n return tuple()\n pages = []\n pages.extend(proc_pid_maps())\n\n if not pages and pwndbg.arch.current in ('i386', 'x86-64') and pwndbg.qemu.is_qemu():\n pages.extend(monitor_info_mem())\n\n if not pages:\n # If debugee is launched from a symlink the debugee memory maps will be\n # labeled with symlink path while in normal scenario the /proc/pid/maps\n # labels debugee memory maps with real path (after symlinks).\n # This is because the exe path in AUXV (and so `info auxv`) is before\n # following links.\n pages.extend(info_auxv())\n\n if pages:\n pages.extend(info_sharedlibrary())\n else:\n if pwndbg.qemu.is_usermode():\n return (\n pwndbg.memory.Page(0, pwndbg.arch.ptrmask, 7, 0, '[qemu-user]'),\n )\n pages.extend(info_files())\n\n pages.extend(pwndbg.stack.stacks.values())\n\n pages.extend(explored_pages)\n pages.extend(custom_pages)\n pages.sort()\n return tuple(pages)\n\[email protected]_on_stop\ndef find(address):\n if address is None:\n return None\n\n address = int(address)\n\n for page in get():\n if address in page:\n return page\n\n return explore(address)\n\[email protected]()\ndef explore(address_maybe):\n \"\"\"\n Given a potential address, check to see what permissions it has.\n\n Returns:\n Page object\n\n Note:\n Adds the Page object to a persistent list of pages which are\n only reset when the process dies. 
This means pages which are\n added this way will not be removed when unmapped.\n\n Also assumes the entire contiguous section has the same permission.\n \"\"\"\n if proc_pid_maps():\n return None\n\n address_maybe = pwndbg.memory.page_align(address_maybe)\n\n flags = 4 if pwndbg.memory.peek(address_maybe) else 0\n\n if not flags:\n return None\n\n flags |= 2 if pwndbg.memory.poke(address_maybe) else 0\n flags |= 1 if not pwndbg.stack.nx else 0\n\n page = find_boundaries(address_maybe)\n page.objfile = '<explored>'\n page.flags = flags\n\n explored_pages.append(page)\n\n return page\n\n# Automatically ensure that all registers are explored on each stop\n#@pwndbg.events.stop\ndef explore_registers():\n for regname in pwndbg.regs.common:\n find(pwndbg.regs[regname])\n\n\n#@pwndbg.events.exit\ndef clear_explored_pages():\n while explored_pages:\n explored_pages.pop()\n\n\ndef add_custom_page(page):\n bisect.insort(custom_pages, page)\n\n # Reset all the cache\n # We can not reset get() only, since the result may be used by others.\n # TODO: avoid flush all caches\n pwndbg.memoize.reset()\n\n\ndef clear_custom_page():\n while custom_pages:\n custom_pages.pop()\n\n # Reset all the cache\n # We can not reset get() only, since the result may be used by others.\n # TODO: avoid flush all caches\n pwndbg.memoize.reset()\n\n\[email protected]_on_start\[email protected]_on_stop\ndef proc_pid_maps():\n \"\"\"\n Parse the contents of /proc/$PID/maps on the server.\n\n Returns:\n A list of pwndbg.memory.Page objects.\n \"\"\"\n\n # If we debug remotely a qemu-user or qemu-system target,\n # there is no point of hitting things further\n if pwndbg.qemu.is_qemu():\n return tuple()\n\n example_proc_pid_maps = \"\"\"\n 7f95266fa000-7f95268b5000 r-xp 00000000 08:01 418404 /lib/x86_64-linux-gnu/libc-2.19.so\n 7f95268b5000-7f9526ab5000 ---p 001bb000 08:01 418404 /lib/x86_64-linux-gnu/libc-2.19.so\n 7f9526ab5000-7f9526ab9000 r--p 001bb000 08:01 418404 /lib/x86_64-linux-gnu/libc-2.19.so\n 7f9526ab9000-7f9526abb000 rw-p 001bf000 08:01 418404 /lib/x86_64-linux-gnu/libc-2.19.so\n 7f9526abb000-7f9526ac0000 rw-p 00000000 00:00 0\n 7f9526ac0000-7f9526ae3000 r-xp 00000000 08:01 418153 /lib/x86_64-linux-gnu/ld-2.19.so\n 7f9526cbe000-7f9526cc1000 rw-p 00000000 00:00 0\n 7f9526ce0000-7f9526ce2000 rw-p 00000000 00:00 0\n 7f9526ce2000-7f9526ce3000 r--p 00022000 08:01 418153 /lib/x86_64-linux-gnu/ld-2.19.so\n 7f9526ce3000-7f9526ce4000 rw-p 00023000 08:01 418153 /lib/x86_64-linux-gnu/ld-2.19.so\n 7f9526ce4000-7f9526ce5000 rw-p 00000000 00:00 0\n 7f9526ce5000-7f9526d01000 r-xp 00000000 08:01 786466 /bin/dash\n 7f9526f00000-7f9526f02000 r--p 0001b000 08:01 786466 /bin/dash\n 7f9526f02000-7f9526f03000 rw-p 0001d000 08:01 786466 /bin/dash\n 7f9526f03000-7f9526f05000 rw-p 00000000 00:00 0\n 7f95279fe000-7f9527a1f000 rw-p 00000000 00:00 0 [heap]\n 7fff3c177000-7fff3c199000 rw-p 00000000 00:00 0 [stack]\n 7fff3c1e8000-7fff3c1ea000 r-xp 00000000 00:00 0 [vdso]\n ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]\n \"\"\"\n\n locations = [\n '/proc/%s/maps' % pwndbg.proc.pid,\n '/proc/%s/map' % pwndbg.proc.pid,\n '/usr/compat/linux/proc/%s/maps' % pwndbg.proc.pid,\n ]\n\n for location in locations:\n try:\n data = pwndbg.file.get(location)\n break\n except (OSError, gdb.error):\n continue\n else:\n return tuple()\n\n data = data.decode()\n\n pages = []\n for line in data.splitlines():\n maps, perm, offset, dev, inode_objfile = line.split(None, 4)\n\n start, stop = maps.split('-')\n \n try:\n inode, objfile = 
inode_objfile.split(None, 1)\n except:\n objfile = 'anon_' + start[:-3]\n\n start = int(start, 16)\n stop = int(stop, 16)\n offset = int(offset, 16)\n size = stop-start\n\n flags = 0\n if 'r' in perm: flags |= 4\n if 'w' in perm: flags |= 2\n if 'x' in perm: flags |= 1\n\n page = pwndbg.memory.Page(start, size, flags, offset, objfile)\n pages.append(page)\n\n return tuple(pages)\n\[email protected]_on_stop\ndef monitor_info_mem():\n # NOTE: This works only on X86/X64/RISC-V\n # See: https://github.com/pwndbg/pwndbg/pull/685\n # (TODO: revisit with future QEMU versions)\n #\n # pwndbg> monitor info mem\n # ffff903580000000-ffff903580099000 0000000000099000 -rw\n # ffff903580099000-ffff90358009b000 0000000000002000 -r-\n # ffff90358009b000-ffff903582200000 0000000002165000 -rw\n # ffff903582200000-ffff903582803000 0000000000603000 -r-\n try:\n lines = gdb.execute('monitor info mem', to_string=True).splitlines()\n except gdb.error:\n # Likely a `gdb.error: \"monitor\" command not supported by this target.`\n # TODO: add debug logging\n return tuple()\n\n # Handle disabled PG\n # This will prevent a crash on abstract architectures\n if len(lines) == 1 and lines[0] == 'PG disabled':\n return tuple()\n\n pages = []\n for line in lines:\n dash_idx = line.index('-')\n space_idx = line.index(' ')\n rspace_idx = line.rindex(' ')\n\n start = int(line[:dash_idx], 16)\n end = int(line[dash_idx+1:space_idx], 16)\n size = int(line[space_idx+1:rspace_idx], 16)\n assert end-start == size, \"monitor info mem output didn't pass a sanity check\"\n perm = line[rspace_idx+1:]\n\n flags = 0\n if 'r' in perm: flags |= 4\n if 'w' in perm: flags |= 2\n # QEMU does not expose X/NX bit, see #685\n #if 'x' in perm: flags |= 1\n flags |= 1\n\n pages.append(pwndbg.memory.Page(start, size, flags, 0, '<qemu>'))\n\n return tuple(pages)\n\n\[email protected]_on_stop\ndef info_sharedlibrary():\n \"\"\"\n Parses the output of `info sharedlibrary`.\n\n Specifically, all we really want is any valid pointer into each library,\n and the path to the library on disk.\n\n With this information, we can use the ELF parser to get all of the\n page permissions for every mapped page in the ELF.\n\n Returns:\n A list of pwndbg.memory.Page objects.\n \"\"\"\n\n exmaple_info_sharedlibrary_freebsd = \"\"\"\n From To Syms Read Shared Object Library\n 0x280fbea0 0x2810e570 Yes (*) /libexec/ld-elf.so.1\n 0x281260a0 0x281495c0 Yes (*) /lib/libncurses.so.8\n 0x28158390 0x2815dcf0 Yes (*) /usr/local/lib/libintl.so.9\n 0x28188b00 0x2828e060 Yes (*) /lib/libc.so.7\n (*): Shared library is missing debugging information.\n \"\"\"\n\n exmaple_info_sharedlibrary_linux = \"\"\"\n From To Syms Read Shared Object Library\n 0x00007ffff7ddaae0 0x00007ffff7df54e0 Yes /lib64/ld-linux-x86-64.so.2\n 0x00007ffff7bbd3d0 0x00007ffff7bc9028 Yes (*) /lib/x86_64-linux-gnu/libtinfo.so.5\n 0x00007ffff79aded0 0x00007ffff79ae9ce Yes /lib/x86_64-linux-gnu/libdl.so.2\n 0x00007ffff76064a0 0x00007ffff774c113 Yes /lib/x86_64-linux-gnu/libc.so.6\n (*): Shared library is missing debugging information.\n \"\"\"\n pages = []\n\n for line in gdb.execute('info sharedlibrary', to_string=True).splitlines():\n if not line.startswith('0x'):\n continue\n\n tokens = line.split()\n text = int(tokens[0], 16)\n obj = tokens[-1]\n\n pages.extend(pwndbg.elf.map(text, obj))\n\n return tuple(sorted(pages))\n\[email protected]_on_stop\ndef info_files():\n\n example_info_files_linues = \"\"\"\n Symbols from \"/bin/bash\".\n Unix child process:\n Using the running image of child process 5903.\n 
While running this, GDB does not access memory from...\n Local exec file:\n `/bin/bash', file type elf64-x86-64.\n Entry point: 0x42020b\n 0x0000000000400238 - 0x0000000000400254 is .interp\n 0x0000000000400254 - 0x0000000000400274 is .note.ABI-tag\n ...\n 0x00000000006f06c0 - 0x00000000006f8ca8 is .data\n 0x00000000006f8cc0 - 0x00000000006fe898 is .bss\n 0x00007ffff7dda1c8 - 0x00007ffff7dda1ec is .note.gnu.build-id in /lib64/ld-linux-x86-64.so.2\n 0x00007ffff7dda1f0 - 0x00007ffff7dda2ac is .hash in /lib64/ld-linux-x86-64.so.2\n 0x00007ffff7dda2b0 - 0x00007ffff7dda38c is .gnu.hash in /lib64/ld-linux-x86-64.so.2\n \"\"\"\n\n seen_files = set()\n pages = list()\n main_exe = ''\n\n for line in gdb.execute('info files', to_string=True).splitlines():\n line = line.strip()\n\n # The name of the main executable\n if line.startswith('`'):\n exename, filetype = line.split(None, 1)\n main_exe = exename.strip(\"`,'\")\n continue\n\n # Everything else should be addresses\n if not line.startswith('0x'):\n continue\n\n # start, stop, _, segment, _, filename = line.split(None,6)\n fields = line.split(None,6)\n vaddr = int(fields[0], 16)\n\n if len(fields) == 5: objfile = main_exe\n elif len(fields) == 7: objfile = fields[6]\n else:\n print(\"Bad data: %r\" % line)\n continue\n\n if objfile in seen_files:\n continue\n else:\n seen_files.add(objfile)\n\n pages.extend(pwndbg.elf.map(vaddr, objfile))\n\n return tuple(pages)\n\n\n\[email protected]_on_exit\ndef info_auxv(skip_exe=False):\n \"\"\"\n Extracts the name of the executable from the output of the command\n \"info auxv\". Note that if the executable path is a symlink,\n it is not dereferenced by `info auxv` and we also don't dereference it.\n\n Arguments:\n skip_exe(bool): Do not return any mappings that belong to the exe.\n\n Returns:\n A list of pwndbg.memory.Page objects.\n \"\"\"\n auxv = pwndbg.auxv.get()\n\n if not auxv:\n return tuple()\n\n pages = []\n exe_name = auxv.AT_EXECFN or 'main.exe'\n entry = auxv.AT_ENTRY\n base = auxv.AT_BASE\n vdso = auxv.AT_SYSINFO_EHDR or auxv.AT_SYSINFO\n phdr = auxv.AT_PHDR\n\n if not skip_exe and (entry or phdr):\n pages.extend(pwndbg.elf.map(entry or phdr, exe_name))\n\n if base:\n pages.extend(pwndbg.elf.map(base, '[linker]'))\n\n if vdso:\n pages.extend(pwndbg.elf.map(vdso, '[vdso]'))\n\n return tuple(sorted(pages))\n\n\ndef find_boundaries(addr, name='', min=0):\n \"\"\"\n Given a single address, find all contiguous pages\n which are mapped.\n \"\"\"\n start = pwndbg.memory.find_lower_boundary(addr)\n end = pwndbg.memory.find_upper_boundary(addr)\n\n if start < min:\n start = min\n\n return pwndbg.memory.Page(start, end-start, 4, 0, name)\n\ndef check_aslr():\n \"\"\"\n Detects the ASLR status. 
Returns True, False or None.\n\n None is returned when we can't detect ASLR.\n \"\"\"\n # QEMU does not support this concept.\n if pwndbg.qemu.is_qemu():\n return None, 'Could not detect ASLR on QEMU targets'\n\n # Systemwide ASLR is disabled\n try:\n data = pwndbg.file.get('/proc/sys/kernel/randomize_va_space')\n if b'0' in data:\n return False, 'kernel.randomize_va_space == 0'\n except Exception as e:\n print(\"Could not check ASLR: can't read randomize_va_space\")\n pass\n\n # Check the personality of the process\n if pwndbg.proc.alive:\n try:\n data = pwndbg.file.get('/proc/%i/personality' % pwndbg.proc.pid)\n personality = int(data, 16)\n return (personality & 0x40000 == 0), 'read status from process\\' personality'\n except:\n print(\"Could not check ASLR: can't read process' personality\")\n pass\n\n # Just go with whatever GDB says it did.\n #\n # This should usually be identical to the above, but we may not have\n # access to procfs.\n output = gdb.execute('show disable-randomization', to_string=True)\n return (\"is off.\" in output), 'show disable-randomization'\n\[email protected]\ndef mark_pc_as_executable():\n mapping = find(pwndbg.regs.pc)\n if mapping and not mapping.execute:\n mapping.flags |= os.X_OK\n", "path": "pwndbg/vmmap.py" } ]
[ { "content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nRoutines to enumerate mapped memory, and attempt to associate\naddress ranges with various ELF files and permissions.\n\nThe reason that we need robustness is that not every operating\nsystem has /proc/$$/maps, which backs 'info proc mapping'.\n\"\"\"\nimport bisect\nimport os\nimport sys\n\nimport gdb\n\nimport pwndbg.abi\nimport pwndbg.elf\nimport pwndbg.events\nimport pwndbg.file\nimport pwndbg.memoize\nimport pwndbg.memory\nimport pwndbg.proc\nimport pwndbg.qemu\nimport pwndbg.regs\nimport pwndbg.remote\nimport pwndbg.stack\nimport pwndbg.typeinfo\n\n# List of manually-explored pages which were discovered\n# by analyzing the stack or register context.\nexplored_pages = []\n\n# List of custom pages that can be managed manually by vmmap_* commands family\ncustom_pages = []\n\[email protected]_on_start\[email protected]_on_stop\ndef get():\n if not pwndbg.proc.alive:\n return tuple()\n pages = []\n pages.extend(proc_pid_maps())\n\n if not pages and pwndbg.arch.current in ('i386', 'x86-64') and pwndbg.qemu.is_qemu():\n pages.extend(monitor_info_mem())\n\n if not pages:\n # If debugee is launched from a symlink the debugee memory maps will be\n # labeled with symlink path while in normal scenario the /proc/pid/maps\n # labels debugee memory maps with real path (after symlinks).\n # This is because the exe path in AUXV (and so `info auxv`) is before\n # following links.\n pages.extend(info_auxv())\n\n if pages:\n pages.extend(info_sharedlibrary())\n else:\n if pwndbg.qemu.is_usermode():\n return (\n pwndbg.memory.Page(0, pwndbg.arch.ptrmask, 7, 0, '[qemu-user]'),\n )\n pages.extend(info_files())\n\n pages.extend(pwndbg.stack.stacks.values())\n\n pages.extend(explored_pages)\n pages.extend(custom_pages)\n pages.sort()\n return tuple(pages)\n\[email protected]_on_stop\ndef find(address):\n if address is None:\n return None\n\n address = int(address)\n\n for page in get():\n if address in page:\n return page\n\n return explore(address)\n\[email protected]()\ndef explore(address_maybe):\n \"\"\"\n Given a potential address, check to see what permissions it has.\n\n Returns:\n Page object\n\n Note:\n Adds the Page object to a persistent list of pages which are\n only reset when the process dies. 
This means pages which are\n added this way will not be removed when unmapped.\n\n Also assumes the entire contiguous section has the same permission.\n \"\"\"\n if proc_pid_maps():\n return None\n\n address_maybe = pwndbg.memory.page_align(address_maybe)\n\n flags = 4 if pwndbg.memory.peek(address_maybe) else 0\n\n if not flags:\n return None\n\n flags |= 2 if pwndbg.memory.poke(address_maybe) else 0\n flags |= 1 if not pwndbg.stack.nx else 0\n\n page = find_boundaries(address_maybe)\n page.objfile = '<explored>'\n page.flags = flags\n\n explored_pages.append(page)\n\n return page\n\n# Automatically ensure that all registers are explored on each stop\n#@pwndbg.events.stop\ndef explore_registers():\n for regname in pwndbg.regs.common:\n find(pwndbg.regs[regname])\n\n\n#@pwndbg.events.exit\ndef clear_explored_pages():\n while explored_pages:\n explored_pages.pop()\n\n\ndef add_custom_page(page):\n bisect.insort(custom_pages, page)\n\n # Reset all the cache\n # We can not reset get() only, since the result may be used by others.\n # TODO: avoid flush all caches\n pwndbg.memoize.reset()\n\n\ndef clear_custom_page():\n while custom_pages:\n custom_pages.pop()\n\n # Reset all the cache\n # We can not reset get() only, since the result may be used by others.\n # TODO: avoid flush all caches\n pwndbg.memoize.reset()\n\n\[email protected]_on_start\[email protected]_on_stop\ndef proc_pid_maps():\n \"\"\"\n Parse the contents of /proc/$PID/maps on the server.\n\n Returns:\n A list of pwndbg.memory.Page objects.\n \"\"\"\n\n # If we debug remotely a qemu-user or qemu-system target,\n # there is no point of hitting things further\n if pwndbg.qemu.is_qemu():\n return tuple()\n\n example_proc_pid_maps = \"\"\"\n 7f95266fa000-7f95268b5000 r-xp 00000000 08:01 418404 /lib/x86_64-linux-gnu/libc-2.19.so\n 7f95268b5000-7f9526ab5000 ---p 001bb000 08:01 418404 /lib/x86_64-linux-gnu/libc-2.19.so\n 7f9526ab5000-7f9526ab9000 r--p 001bb000 08:01 418404 /lib/x86_64-linux-gnu/libc-2.19.so\n 7f9526ab9000-7f9526abb000 rw-p 001bf000 08:01 418404 /lib/x86_64-linux-gnu/libc-2.19.so\n 7f9526abb000-7f9526ac0000 rw-p 00000000 00:00 0\n 7f9526ac0000-7f9526ae3000 r-xp 00000000 08:01 418153 /lib/x86_64-linux-gnu/ld-2.19.so\n 7f9526cbe000-7f9526cc1000 rw-p 00000000 00:00 0\n 7f9526ce0000-7f9526ce2000 rw-p 00000000 00:00 0\n 7f9526ce2000-7f9526ce3000 r--p 00022000 08:01 418153 /lib/x86_64-linux-gnu/ld-2.19.so\n 7f9526ce3000-7f9526ce4000 rw-p 00023000 08:01 418153 /lib/x86_64-linux-gnu/ld-2.19.so\n 7f9526ce4000-7f9526ce5000 rw-p 00000000 00:00 0\n 7f9526ce5000-7f9526d01000 r-xp 00000000 08:01 786466 /bin/dash\n 7f9526f00000-7f9526f02000 r--p 0001b000 08:01 786466 /bin/dash\n 7f9526f02000-7f9526f03000 rw-p 0001d000 08:01 786466 /bin/dash\n 7f9526f03000-7f9526f05000 rw-p 00000000 00:00 0\n 7f95279fe000-7f9527a1f000 rw-p 00000000 00:00 0 [heap]\n 7fff3c177000-7fff3c199000 rw-p 00000000 00:00 0 [stack]\n 7fff3c1e8000-7fff3c1ea000 r-xp 00000000 00:00 0 [vdso]\n ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]\n \"\"\"\n\n locations = [\n '/proc/%s/maps' % pwndbg.proc.pid,\n '/proc/%s/map' % pwndbg.proc.pid,\n '/usr/compat/linux/proc/%s/maps' % pwndbg.proc.pid,\n ]\n\n for location in locations:\n try:\n data = pwndbg.file.get(location)\n break\n except (OSError, gdb.error):\n continue\n else:\n return tuple()\n\n data = data.decode()\n\n pages = []\n for line in data.splitlines():\n maps, perm, offset, dev, inode_objfile = line.split(None, 4)\n\n start, stop = maps.split('-')\n \n try:\n inode, objfile = 
inode_objfile.split(None, 1)\n except:\n objfile = '[anon_' + start[:-3] + ']'\n\n start = int(start, 16)\n stop = int(stop, 16)\n offset = int(offset, 16)\n size = stop-start\n\n flags = 0\n if 'r' in perm: flags |= 4\n if 'w' in perm: flags |= 2\n if 'x' in perm: flags |= 1\n\n page = pwndbg.memory.Page(start, size, flags, offset, objfile)\n pages.append(page)\n\n return tuple(pages)\n\[email protected]_on_stop\ndef monitor_info_mem():\n # NOTE: This works only on X86/X64/RISC-V\n # See: https://github.com/pwndbg/pwndbg/pull/685\n # (TODO: revisit with future QEMU versions)\n #\n # pwndbg> monitor info mem\n # ffff903580000000-ffff903580099000 0000000000099000 -rw\n # ffff903580099000-ffff90358009b000 0000000000002000 -r-\n # ffff90358009b000-ffff903582200000 0000000002165000 -rw\n # ffff903582200000-ffff903582803000 0000000000603000 -r-\n try:\n lines = gdb.execute('monitor info mem', to_string=True).splitlines()\n except gdb.error:\n # Likely a `gdb.error: \"monitor\" command not supported by this target.`\n # TODO: add debug logging\n return tuple()\n\n # Handle disabled PG\n # This will prevent a crash on abstract architectures\n if len(lines) == 1 and lines[0] == 'PG disabled':\n return tuple()\n\n pages = []\n for line in lines:\n dash_idx = line.index('-')\n space_idx = line.index(' ')\n rspace_idx = line.rindex(' ')\n\n start = int(line[:dash_idx], 16)\n end = int(line[dash_idx+1:space_idx], 16)\n size = int(line[space_idx+1:rspace_idx], 16)\n assert end-start == size, \"monitor info mem output didn't pass a sanity check\"\n perm = line[rspace_idx+1:]\n\n flags = 0\n if 'r' in perm: flags |= 4\n if 'w' in perm: flags |= 2\n # QEMU does not expose X/NX bit, see #685\n #if 'x' in perm: flags |= 1\n flags |= 1\n\n pages.append(pwndbg.memory.Page(start, size, flags, 0, '<qemu>'))\n\n return tuple(pages)\n\n\[email protected]_on_stop\ndef info_sharedlibrary():\n \"\"\"\n Parses the output of `info sharedlibrary`.\n\n Specifically, all we really want is any valid pointer into each library,\n and the path to the library on disk.\n\n With this information, we can use the ELF parser to get all of the\n page permissions for every mapped page in the ELF.\n\n Returns:\n A list of pwndbg.memory.Page objects.\n \"\"\"\n\n exmaple_info_sharedlibrary_freebsd = \"\"\"\n From To Syms Read Shared Object Library\n 0x280fbea0 0x2810e570 Yes (*) /libexec/ld-elf.so.1\n 0x281260a0 0x281495c0 Yes (*) /lib/libncurses.so.8\n 0x28158390 0x2815dcf0 Yes (*) /usr/local/lib/libintl.so.9\n 0x28188b00 0x2828e060 Yes (*) /lib/libc.so.7\n (*): Shared library is missing debugging information.\n \"\"\"\n\n exmaple_info_sharedlibrary_linux = \"\"\"\n From To Syms Read Shared Object Library\n 0x00007ffff7ddaae0 0x00007ffff7df54e0 Yes /lib64/ld-linux-x86-64.so.2\n 0x00007ffff7bbd3d0 0x00007ffff7bc9028 Yes (*) /lib/x86_64-linux-gnu/libtinfo.so.5\n 0x00007ffff79aded0 0x00007ffff79ae9ce Yes /lib/x86_64-linux-gnu/libdl.so.2\n 0x00007ffff76064a0 0x00007ffff774c113 Yes /lib/x86_64-linux-gnu/libc.so.6\n (*): Shared library is missing debugging information.\n \"\"\"\n pages = []\n\n for line in gdb.execute('info sharedlibrary', to_string=True).splitlines():\n if not line.startswith('0x'):\n continue\n\n tokens = line.split()\n text = int(tokens[0], 16)\n obj = tokens[-1]\n\n pages.extend(pwndbg.elf.map(text, obj))\n\n return tuple(sorted(pages))\n\[email protected]_on_stop\ndef info_files():\n\n example_info_files_linues = \"\"\"\n Symbols from \"/bin/bash\".\n Unix child process:\n Using the running image of child process 
5903.\n While running this, GDB does not access memory from...\n Local exec file:\n `/bin/bash', file type elf64-x86-64.\n Entry point: 0x42020b\n 0x0000000000400238 - 0x0000000000400254 is .interp\n 0x0000000000400254 - 0x0000000000400274 is .note.ABI-tag\n ...\n 0x00000000006f06c0 - 0x00000000006f8ca8 is .data\n 0x00000000006f8cc0 - 0x00000000006fe898 is .bss\n 0x00007ffff7dda1c8 - 0x00007ffff7dda1ec is .note.gnu.build-id in /lib64/ld-linux-x86-64.so.2\n 0x00007ffff7dda1f0 - 0x00007ffff7dda2ac is .hash in /lib64/ld-linux-x86-64.so.2\n 0x00007ffff7dda2b0 - 0x00007ffff7dda38c is .gnu.hash in /lib64/ld-linux-x86-64.so.2\n \"\"\"\n\n seen_files = set()\n pages = list()\n main_exe = ''\n\n for line in gdb.execute('info files', to_string=True).splitlines():\n line = line.strip()\n\n # The name of the main executable\n if line.startswith('`'):\n exename, filetype = line.split(None, 1)\n main_exe = exename.strip(\"`,'\")\n continue\n\n # Everything else should be addresses\n if not line.startswith('0x'):\n continue\n\n # start, stop, _, segment, _, filename = line.split(None,6)\n fields = line.split(None,6)\n vaddr = int(fields[0], 16)\n\n if len(fields) == 5: objfile = main_exe\n elif len(fields) == 7: objfile = fields[6]\n else:\n print(\"Bad data: %r\" % line)\n continue\n\n if objfile in seen_files:\n continue\n else:\n seen_files.add(objfile)\n\n pages.extend(pwndbg.elf.map(vaddr, objfile))\n\n return tuple(pages)\n\n\n\[email protected]_on_exit\ndef info_auxv(skip_exe=False):\n \"\"\"\n Extracts the name of the executable from the output of the command\n \"info auxv\". Note that if the executable path is a symlink,\n it is not dereferenced by `info auxv` and we also don't dereference it.\n\n Arguments:\n skip_exe(bool): Do not return any mappings that belong to the exe.\n\n Returns:\n A list of pwndbg.memory.Page objects.\n \"\"\"\n auxv = pwndbg.auxv.get()\n\n if not auxv:\n return tuple()\n\n pages = []\n exe_name = auxv.AT_EXECFN or 'main.exe'\n entry = auxv.AT_ENTRY\n base = auxv.AT_BASE\n vdso = auxv.AT_SYSINFO_EHDR or auxv.AT_SYSINFO\n phdr = auxv.AT_PHDR\n\n if not skip_exe and (entry or phdr):\n pages.extend(pwndbg.elf.map(entry or phdr, exe_name))\n\n if base:\n pages.extend(pwndbg.elf.map(base, '[linker]'))\n\n if vdso:\n pages.extend(pwndbg.elf.map(vdso, '[vdso]'))\n\n return tuple(sorted(pages))\n\n\ndef find_boundaries(addr, name='', min=0):\n \"\"\"\n Given a single address, find all contiguous pages\n which are mapped.\n \"\"\"\n start = pwndbg.memory.find_lower_boundary(addr)\n end = pwndbg.memory.find_upper_boundary(addr)\n\n if start < min:\n start = min\n\n return pwndbg.memory.Page(start, end-start, 4, 0, name)\n\ndef check_aslr():\n \"\"\"\n Detects the ASLR status. 
Returns True, False or None.\n\n None is returned when we can't detect ASLR.\n \"\"\"\n # QEMU does not support this concept.\n if pwndbg.qemu.is_qemu():\n return None, 'Could not detect ASLR on QEMU targets'\n\n # Systemwide ASLR is disabled\n try:\n data = pwndbg.file.get('/proc/sys/kernel/randomize_va_space')\n if b'0' in data:\n return False, 'kernel.randomize_va_space == 0'\n except Exception as e:\n print(\"Could not check ASLR: can't read randomize_va_space\")\n pass\n\n # Check the personality of the process\n if pwndbg.proc.alive:\n try:\n data = pwndbg.file.get('/proc/%i/personality' % pwndbg.proc.pid)\n personality = int(data, 16)\n return (personality & 0x40000 == 0), 'read status from process\\' personality'\n except:\n print(\"Could not check ASLR: can't read process' personality\")\n pass\n\n # Just go with whatever GDB says it did.\n #\n # This should usually be identical to the above, but we may not have\n # access to procfs.\n output = gdb.execute('show disable-randomization', to_string=True)\n return (\"is off.\" in output), 'show disable-randomization'\n\[email protected]\ndef mark_pc_as_executable():\n mapping = find(pwndbg.regs.pc)\n if mapping and not mapping.execute:\n mapping.flags |= os.X_OK\n", "path": "pwndbg/vmmap.py" } ]
diff --git a/pwndbg/vmmap.py b/pwndbg/vmmap.py index 9f7604c3bd1..456603b0ecc 100644 --- a/pwndbg/vmmap.py +++ b/pwndbg/vmmap.py @@ -212,7 +212,7 @@ def proc_pid_maps(): try: inode, objfile = inode_objfile.split(None, 1) except: - objfile = 'anon_' + start[:-3] + objfile = '[anon_' + start[:-3] + ']' start = int(start, 16) stop = int(stop, 16)
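A note on the label the patched branch builds (the address is just the example from the maps excerpt in the file above): `start[:-3]` drops the last three hex digits of the page's start address before the name is wrapped in brackets.

```python
# Illustration of the objfile name produced by the patched branch (not pwndbg itself).
start = '7f9526abb000'                    # start address as parsed from /proc/<pid>/maps
objfile = '[anon_' + start[:-3] + ']'
print(objfile)                            # [anon_7f9526abb]
```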
ivy-llc__ivy-22625
remainder
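The issue text is just the function name, so the reading here is an assumption: it appears to request a non-in-place `remainder` method on the paddle frontend `Tensor`, alongside the existing `remainder_`. A minimal sketch of such a method (it belongs inside the `Tensor` class in `ivy/functional/frontends/paddle/tensor/tensor.py` and relies on that file's imports), matching the resolved file shown below:

```python
# Sketch of the requested method; lives inside the paddle frontend Tensor class
# and delegates to ivy.remainder, like the in-place remainder_ above it.
@with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
def remainder(self, y, name=None):
    return ivy.remainder(self._ivy_array, y)
```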
[ { "content": "# local\nimport ivy\nimport ivy.functional.frontends.paddle as paddle_frontend\nfrom ivy.func_wrapper import (\n with_supported_dtypes,\n with_unsupported_dtypes,\n)\nfrom ivy.functional.frontends.paddle.func_wrapper import _to_ivy_array\n\n\nclass Tensor:\n def __init__(self, array, dtype=None, place=\"cpu\", stop_gradient=True):\n self._ivy_array = (\n ivy.array(array, dtype=dtype, device=place)\n if not isinstance(array, ivy.Array)\n else array\n )\n self._dtype = dtype\n self._place = place\n self._stop_gradient = stop_gradient\n\n def __repr__(self):\n return (\n str(self._ivy_array.__repr__())\n .replace(\"ivy.array\", \"ivy.frontends.paddle.Tensor\")\n .replace(\"dev\", \"place\")\n )\n\n # Properties #\n # ---------- #\n\n @property\n def ivy_array(self):\n return self._ivy_array\n\n @property\n def place(self):\n return self.ivy_array.device\n\n @property\n def dtype(self):\n return self._ivy_array.dtype\n\n @property\n def shape(self):\n return self._ivy_array.shape\n\n @property\n def ndim(self):\n return self.dim()\n\n # Setters #\n # --------#\n\n @ivy_array.setter\n def ivy_array(self, array):\n self._ivy_array = (\n ivy.array(array) if not isinstance(array, ivy.Array) else array\n )\n\n # Special Methods #\n # -------------------#\n\n def __getitem__(self, item):\n ivy_args = ivy.nested_map([self, item], _to_ivy_array)\n ret = ivy.get_item(*ivy_args)\n return paddle_frontend.Tensor(ret)\n\n def __setitem__(self, item, value):\n raise ivy.utils.exceptions.IvyException(\n \"ivy.functional.frontends.paddle.Tensor object doesn't support assignment\"\n )\n\n def __iter__(self):\n if self.ndim == 0:\n raise TypeError(\"iteration over a 0-d tensor not supported\")\n for i in range(self.shape[0]):\n yield self[i]\n\n # Instance Methods #\n # ---------------- #\n\n def reshape(self, *args, shape=None):\n if args and shape:\n raise TypeError(\"reshape() got multiple values for argument 'shape'\")\n if shape is not None:\n return paddle_frontend.reshape(self._ivy_array, shape)\n if args:\n if isinstance(args[0], (tuple, list)):\n shape = args[0]\n return paddle_frontend.reshape(self._ivy_array, shape)\n else:\n return paddle_frontend.reshape(self._ivy_array, args)\n return paddle_frontend.reshape(self._ivy_array)\n\n def dim(self):\n return self.ivy_array.ndim\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def abs(self):\n return paddle_frontend.abs(self)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def acosh(self, name=None):\n return paddle_frontend.Tensor(ivy.acosh(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def ceil(self):\n return paddle_frontend.ceil(self)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"complex\", \"int8\")}, \"paddle\")\n def numel(self):\n return paddle_frontend.numel(self)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\",)}, \"paddle\")\n def asinh(self, name=None):\n return paddle_frontend.Tensor(ivy.asinh(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def asin(self, name=None):\n return paddle_frontend.Tensor(ivy.asin(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def cosh(self, name=None):\n return paddle_frontend.Tensor(ivy.cosh(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, 
\"paddle\")\n def log(self, name=None):\n return paddle_frontend.Tensor(ivy.log(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def sin(self, name=None):\n return paddle_frontend.Tensor(ivy.sin(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def sinh(self, name=None):\n return paddle_frontend.Tensor(ivy.sinh(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def argmax(self, axis=None, keepdim=False, dtype=None, name=None):\n return paddle_frontend.Tensor(\n ivy.argmax(self._ivy_array, axis=axis, keepdims=keepdim, dtype=dtype)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"uint16\")}, \"paddle\")\n def unsqueeze(self, axis=None, name=None):\n return paddle_frontend.Tensor(ivy.expand_dims(self._ivy_array, axis=axis))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def sqrt(self, name=None):\n return paddle_frontend.Tensor(ivy.sqrt(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def sqrt_(self, name=None):\n self.ivy_array = self.sqrt().ivy_array\n return self\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"bfloat16\", \"uint16\")}, \"paddle\")\n def zero_(self):\n self.ivy_array = paddle_frontend.Tensor(\n ivy.zeros_like(self._ivy_array)\n ).ivy_array\n return self\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def cos(self, name=None):\n return paddle_frontend.Tensor(ivy.cos(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def exp(self, name=None):\n return paddle_frontend.Tensor(ivy.exp(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def exp_(self, name=None):\n self.ivy_array = self.exp().ivy_array\n return self\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def erf(self, name=None):\n return paddle_frontend.Tensor(ivy.erf(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def subtract(self, y, name=None):\n return paddle_frontend.Tensor(ivy.subtract(self._ivy_array, _to_ivy_array(y)))\n\n @with_unsupported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"uint8\", \"int8\", \"bool\")}, \"paddle\"\n )\n def subtract_(self, y, name=None):\n self.ivy_array = self.subtract(y).ivy_array\n return self\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def log10(self, name=None):\n return paddle_frontend.Tensor(ivy.log10(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def argsort(self, axis=-1, descending=False, name=None):\n return paddle_frontend.Tensor(\n ivy.argsort(self._ivy_array, axis=axis, descending=descending)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def floor(self, name=None):\n return paddle_frontend.Tensor(ivy.floor(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def floor_(self):\n self.ivy_array = self.floor().ivy_array\n return self\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n )\n def 
clip(self, min=None, max=None, name=None):\n ivy.utils.assertions.check_all_or_any_fn(\n min,\n max,\n fn=ivy.exists,\n type=\"any\",\n limit=[1, 2],\n message=\"at most one of min or max can be None\",\n )\n if min is None:\n ret = ivy.minimum(self._ivy_array, max)\n elif max is None:\n ret = ivy.maximum(self._ivy_array, min)\n else:\n ret = ivy.clip(self._ivy_array, min, max)\n return paddle_frontend.Tensor(ret)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def tanh(self, name=None):\n return paddle_frontend.Tensor(ivy.tanh(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def add_(self, name=None):\n return paddle_frontend.Tensor(ivy.add(self._ivy_array))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def isinf(self, name=None):\n return paddle_frontend.Tensor(ivy.isinf(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def square(self, name=None):\n return paddle_frontend.Tensor(ivy.square(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def remainder_(self, y, name=None):\n self.ivy_array = paddle_frontend.Tensor(\n ivy.remainder(self._ivy_array, _to_ivy_array(y))\n ).ivy_array\n return self\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def cholesky(self, upper=False, name=None):\n return paddle_frontend.Tensor(ivy.cholesky(self._ivy_array, upper=upper))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def multiply(self, y, name=None):\n return paddle_frontend.multiply(self, y)\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def isfinite(self, name=None):\n return paddle_frontend.Tensor(ivy.isfinite(self._ivy_array))\n\n @with_supported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def all(self, axis=None, keepdim=False, dtype=None, name=None):\n return paddle_frontend.Tensor(\n ivy.all(self.ivy_array, axis=axis, keepdims=keepdim, dtype=dtype)\n )\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def allclose(self, other, rtol=1e-05, atol=1e-08, equal_nan=False, name=None):\n return paddle_frontend.Tensor(\n ivy.allclose(\n self._ivy_array, other, rtol=rtol, atol=atol, equal_nan=equal_nan\n )\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def sort(self, axis=-1, descending=False, name=None):\n return paddle_frontend.Tensor(\n ivy.sort(self._ivy_array, axis=axis, descending=descending)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def log1p(self, name=None):\n return ivy.log1p(self._ivy_array)\n\n @with_supported_dtypes(\n {\n \"2.4.2 and below\": (\n \"bool\",\n \"uint8\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n )\n },\n \"paddle\",\n )\n def bitwise_and(self, y, out=None, name=None):\n return paddle_frontend.bitwise_and(self, y)\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n )\n def logical_or(self, y, out=None, name=None):\n return paddle_frontend.logical_or(self, y, out=out)\n\n 
@with_supported_dtypes(\n {\"2.5.1 and below\": (\"bool\", \"uint8\", \"int8\", \"int16\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def bitwise_xor(self, y, out=None, name=None):\n return paddle_frontend.bitwise_xor(self, y)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def any(self, axis=None, keepdim=False, name=None):\n return paddle_frontend.Tensor(\n ivy.any(self._ivy_array, axis=axis, keepdims=keepdim)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": \"bfloat16\"}, \"paddle\")\n def astype(self, dtype):\n return paddle_frontend.Tensor(ivy.astype(self._ivy_array, dtype))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"bool\", \"uint8\", \"int8\", \"int16\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def bitwise_not(self, out=None, name=None):\n return paddle_frontend.Tensor(ivy.bitwise_invert(self._ivy_array, out=out))\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n )\n },\n \"paddle\",\n )\n def bitwise_or(self, y, out=None, name=None):\n return paddle_frontend.bitwise_or(self, y, out=out)\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n )\n def logical_xor(self, y, out=None, name=None):\n return paddle_frontend.logical_xor(self, y, out=out)\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def isnan(self, name=None):\n return paddle_frontend.isnan(self)\n\n @with_unsupported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"uint8\",\n \"int8\",\n \"int16\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n )\n def greater_than(self, y, name=None):\n return paddle_frontend.greater_than(self, y)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def rsqrt(self, name=None):\n return paddle_frontend.Tensor(ivy.reciprocal(ivy.sqrt(self._ivy_array)))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def reciprocal(self, name=None):\n return paddle_frontend.reciprocal(self)\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n )\n def logical_and(self, y, out=None, name=None):\n return paddle_frontend.logical_and(self, y, out=out)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def divide(self, y, name=None):\n return paddle_frontend.divide(self, y)\n\n @with_unsupported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"uint8\",\n \"int8\",\n \"int16\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n )\n def less_than(self, y, name=None):\n return paddle_frontend.less_than(self, y)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def cumprod(self, dim=None, dtype=None, name=None):\n return paddle_frontend.Tensor(\n ivy.cumprod(self._ivy_array, axis=dim, dtype=dtype)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def cumsum(self, axis=None, dtype=None, name=None):\n return paddle_frontend.Tensor(\n ivy.cumsum(self._ivy_array, axis=axis, dtype=dtype)\n )\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\", \"float32\", \"float64\")},\n \"paddle\",\n 
)\n def angle(self, name=None):\n return paddle_frontend.Tensor(ivy.angle(self._ivy_array))\n\n @with_unsupported_dtypes(\n {\n \"2.5.1 and below\": (\n \"uint8\",\n \"int8\",\n \"int16\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n )\n def equal(self, y, name=None):\n return paddle_frontend.equal(self, y)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def rad2deg(self, name=None):\n return paddle_frontend.Tensor(ivy.rad2deg(self._ivy_array))\n\n @with_unsupported_dtypes(\n {\n \"2.5.1 and below\": (\n \"uint8\",\n \"int8\",\n \"int16\",\n \"float16\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n )\n def equal_all(self, y, name=None):\n return paddle_frontend.Tensor(\n ivy.array_equal(self._ivy_array, _to_ivy_array(y))\n )\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def maximum(self, other, name=None):\n return ivy.maximum(self._ivy_array, other)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": \"bfloat16\"}, \"paddle\")\n def fmax(self, y, name=None):\n return paddle_frontend.Tensor(ivy.fmax(self._ivy_array, _to_ivy_array(y)))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": \"bfloat16\"}, \"paddle\")\n def fmin(self, y, name=None):\n return paddle_frontend.Tensor(ivy.fmin(self._ivy_array, _to_ivy_array(y)))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n )\n def minimum(self, y, name=None):\n return paddle_frontend.Tensor(ivy.minimum(self._ivy_array, _to_ivy_array(y)))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n )\n def max(self, axis=None, keepdim=False, name=None):\n return paddle_frontend.Tensor(\n ivy.max(self._ivy_array, axis=axis, keepdims=keepdim)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def deg2rad(self, name=None):\n return paddle_frontend.Tensor(ivy.deg2rad(self._ivy_array))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\", \"bool\")}, \"paddle\"\n )\n def rot90(self, k=1, axes=(0, 1), name=None):\n return paddle_frontend.Tensor(ivy.rot90(self._ivy_array, k=k, axes=axes))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n )\n def imag(self, name=None):\n return paddle_frontend.imag(self)\n\n def is_tensor(self):\n return paddle_frontend.is_tensor(self._ivy_array)\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n )\n def isclose(self, y, rtol=1e-05, atol=1e-08, equal_nan=False, name=None):\n return paddle_frontend.isclose(\n self, y, rtol=rtol, atol=atol, equal_nan=equal_nan\n )\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"int32\", \"int64\")}, \"paddle\")\n def floor_divide(self, y, name=None):\n return paddle_frontend.Tensor(\n ivy.floor_divide(self._ivy_array, _to_ivy_array(y))\n )\n\n # cond\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def cond(self, p=None, name=None):\n return paddle_frontend.cond(self, p=p, name=name)\n\n @with_unsupported_dtypes({\"2.4.2 and below\": (\"int16\", \"float16\")}, \"paddle\")\n def conj(self, name=None):\n return paddle_frontend.Tensor(ivy.conj(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def log2(self, name=None):\n return 
paddle_frontend.Tensor(ivy.log2(self._ivy_array))\n\n @with_unsupported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n )\n def neg(self, name=None):\n return paddle_frontend.neg(self)\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n )\n def logical_not(self, out=None, name=None):\n return paddle_frontend.Tensor(ivy.logical_not(self.ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def sign(self, name=None):\n return ivy.sign(self._ivy_array)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def var(self, axis=None, unbiased=True, keepdim=False, name=None):\n return paddle_frontend.Tensor(\n ivy.var(\n self._ivy_array, axis=axis, correction=int(unbiased), keepdims=keepdim\n )\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def sgn(self, name=None):\n return paddle_frontend.Tensor(ivy.sign(self._ivy_array, np_variant=True))\n\n def tolist(self):\n return paddle_frontend.Tensor(ivy.to_list(self._ivy_array))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def min(self, axis=None, keepdim=False, name=None):\n return ivy.min(self._ivy_array, axis=axis, keepdims=keepdim)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def atan(self, name=None):\n return ivy.atan(self._ivy_array)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def atanh(self, name=None):\n return ivy.atanh(self._ivy_array)\n\n @with_unsupported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def std(self, axis=None, unbiased=True, keepdim=False, name=None):\n return paddle_frontend.Tensor(\n ivy.std(self._ivy_array, axis=axis, keepdims=keepdim)\n )\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"int32\", \"int64\", \"float32\", \"float64\")}, \"paddle\"\n )\n def trunc(self, name=None):\n return paddle_frontend.Tensor(ivy.trunc(self._ivy_array))\n", "path": "ivy/functional/frontends/paddle/tensor/tensor.py" } ]
[ { "content": "# local\nimport ivy\nimport ivy.functional.frontends.paddle as paddle_frontend\nfrom ivy.func_wrapper import (\n with_supported_dtypes,\n with_unsupported_dtypes,\n)\nfrom ivy.functional.frontends.paddle.func_wrapper import _to_ivy_array\n\n\nclass Tensor:\n def __init__(self, array, dtype=None, place=\"cpu\", stop_gradient=True):\n self._ivy_array = (\n ivy.array(array, dtype=dtype, device=place)\n if not isinstance(array, ivy.Array)\n else array\n )\n self._dtype = dtype\n self._place = place\n self._stop_gradient = stop_gradient\n\n def __repr__(self):\n return (\n str(self._ivy_array.__repr__())\n .replace(\"ivy.array\", \"ivy.frontends.paddle.Tensor\")\n .replace(\"dev\", \"place\")\n )\n\n # Properties #\n # ---------- #\n\n @property\n def ivy_array(self):\n return self._ivy_array\n\n @property\n def place(self):\n return self.ivy_array.device\n\n @property\n def dtype(self):\n return self._ivy_array.dtype\n\n @property\n def shape(self):\n return self._ivy_array.shape\n\n @property\n def ndim(self):\n return self.dim()\n\n # Setters #\n # --------#\n\n @ivy_array.setter\n def ivy_array(self, array):\n self._ivy_array = (\n ivy.array(array) if not isinstance(array, ivy.Array) else array\n )\n\n # Special Methods #\n # -------------------#\n\n def __getitem__(self, item):\n ivy_args = ivy.nested_map([self, item], _to_ivy_array)\n ret = ivy.get_item(*ivy_args)\n return paddle_frontend.Tensor(ret)\n\n def __setitem__(self, item, value):\n raise ivy.utils.exceptions.IvyException(\n \"ivy.functional.frontends.paddle.Tensor object doesn't support assignment\"\n )\n\n def __iter__(self):\n if self.ndim == 0:\n raise TypeError(\"iteration over a 0-d tensor not supported\")\n for i in range(self.shape[0]):\n yield self[i]\n\n # Instance Methods #\n # ---------------- #\n\n def reshape(self, *args, shape=None):\n if args and shape:\n raise TypeError(\"reshape() got multiple values for argument 'shape'\")\n if shape is not None:\n return paddle_frontend.reshape(self._ivy_array, shape)\n if args:\n if isinstance(args[0], (tuple, list)):\n shape = args[0]\n return paddle_frontend.reshape(self._ivy_array, shape)\n else:\n return paddle_frontend.reshape(self._ivy_array, args)\n return paddle_frontend.reshape(self._ivy_array)\n\n def dim(self):\n return self.ivy_array.ndim\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def abs(self):\n return paddle_frontend.abs(self)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def acosh(self, name=None):\n return paddle_frontend.Tensor(ivy.acosh(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def ceil(self):\n return paddle_frontend.ceil(self)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"complex\", \"int8\")}, \"paddle\")\n def numel(self):\n return paddle_frontend.numel(self)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\",)}, \"paddle\")\n def asinh(self, name=None):\n return paddle_frontend.Tensor(ivy.asinh(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def asin(self, name=None):\n return paddle_frontend.Tensor(ivy.asin(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def cosh(self, name=None):\n return paddle_frontend.Tensor(ivy.cosh(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, 
\"paddle\")\n def log(self, name=None):\n return paddle_frontend.Tensor(ivy.log(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def sin(self, name=None):\n return paddle_frontend.Tensor(ivy.sin(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def sinh(self, name=None):\n return paddle_frontend.Tensor(ivy.sinh(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def argmax(self, axis=None, keepdim=False, dtype=None, name=None):\n return paddle_frontend.Tensor(\n ivy.argmax(self._ivy_array, axis=axis, keepdims=keepdim, dtype=dtype)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"uint16\")}, \"paddle\")\n def unsqueeze(self, axis=None, name=None):\n return paddle_frontend.Tensor(ivy.expand_dims(self._ivy_array, axis=axis))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def sqrt(self, name=None):\n return paddle_frontend.Tensor(ivy.sqrt(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def sqrt_(self, name=None):\n self.ivy_array = self.sqrt().ivy_array\n return self\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"bfloat16\", \"uint16\")}, \"paddle\")\n def zero_(self):\n self.ivy_array = paddle_frontend.Tensor(\n ivy.zeros_like(self._ivy_array)\n ).ivy_array\n return self\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def cos(self, name=None):\n return paddle_frontend.Tensor(ivy.cos(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def exp(self, name=None):\n return paddle_frontend.Tensor(ivy.exp(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def exp_(self, name=None):\n self.ivy_array = self.exp().ivy_array\n return self\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def erf(self, name=None):\n return paddle_frontend.Tensor(ivy.erf(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def subtract(self, y, name=None):\n return paddle_frontend.Tensor(ivy.subtract(self._ivy_array, _to_ivy_array(y)))\n\n @with_unsupported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"uint8\", \"int8\", \"bool\")}, \"paddle\"\n )\n def subtract_(self, y, name=None):\n self.ivy_array = self.subtract(y).ivy_array\n return self\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def log10(self, name=None):\n return paddle_frontend.Tensor(ivy.log10(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def argsort(self, axis=-1, descending=False, name=None):\n return paddle_frontend.Tensor(\n ivy.argsort(self._ivy_array, axis=axis, descending=descending)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def floor(self, name=None):\n return paddle_frontend.Tensor(ivy.floor(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def floor_(self):\n self.ivy_array = self.floor().ivy_array\n return self\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n )\n def 
clip(self, min=None, max=None, name=None):\n ivy.utils.assertions.check_all_or_any_fn(\n min,\n max,\n fn=ivy.exists,\n type=\"any\",\n limit=[1, 2],\n message=\"at most one of min or max can be None\",\n )\n if min is None:\n ret = ivy.minimum(self._ivy_array, max)\n elif max is None:\n ret = ivy.maximum(self._ivy_array, min)\n else:\n ret = ivy.clip(self._ivy_array, min, max)\n return paddle_frontend.Tensor(ret)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def tanh(self, name=None):\n return paddle_frontend.Tensor(ivy.tanh(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def add_(self, name=None):\n return paddle_frontend.Tensor(ivy.add(self._ivy_array))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def isinf(self, name=None):\n return paddle_frontend.Tensor(ivy.isinf(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def square(self, name=None):\n return paddle_frontend.Tensor(ivy.square(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def remainder_(self, y, name=None):\n self.ivy_array = paddle_frontend.Tensor(\n ivy.remainder(self._ivy_array, _to_ivy_array(y))\n ).ivy_array\n return self\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def cholesky(self, upper=False, name=None):\n return paddle_frontend.Tensor(ivy.cholesky(self._ivy_array, upper=upper))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def multiply(self, y, name=None):\n return paddle_frontend.multiply(self, y)\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def isfinite(self, name=None):\n return paddle_frontend.Tensor(ivy.isfinite(self._ivy_array))\n\n @with_supported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def all(self, axis=None, keepdim=False, dtype=None, name=None):\n return paddle_frontend.Tensor(\n ivy.all(self.ivy_array, axis=axis, keepdims=keepdim, dtype=dtype)\n )\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def allclose(self, other, rtol=1e-05, atol=1e-08, equal_nan=False, name=None):\n return paddle_frontend.Tensor(\n ivy.allclose(\n self._ivy_array, other, rtol=rtol, atol=atol, equal_nan=equal_nan\n )\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def sort(self, axis=-1, descending=False, name=None):\n return paddle_frontend.Tensor(\n ivy.sort(self._ivy_array, axis=axis, descending=descending)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def log1p(self, name=None):\n return ivy.log1p(self._ivy_array)\n\n @with_supported_dtypes(\n {\n \"2.4.2 and below\": (\n \"bool\",\n \"uint8\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n )\n },\n \"paddle\",\n )\n def bitwise_and(self, y, out=None, name=None):\n return paddle_frontend.bitwise_and(self, y)\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n )\n def logical_or(self, y, out=None, name=None):\n return paddle_frontend.logical_or(self, y, out=out)\n\n 
@with_supported_dtypes(\n {\"2.5.1 and below\": (\"bool\", \"uint8\", \"int8\", \"int16\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def bitwise_xor(self, y, out=None, name=None):\n return paddle_frontend.bitwise_xor(self, y)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def any(self, axis=None, keepdim=False, name=None):\n return paddle_frontend.Tensor(\n ivy.any(self._ivy_array, axis=axis, keepdims=keepdim)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": \"bfloat16\"}, \"paddle\")\n def astype(self, dtype):\n return paddle_frontend.Tensor(ivy.astype(self._ivy_array, dtype))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"bool\", \"uint8\", \"int8\", \"int16\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def bitwise_not(self, out=None, name=None):\n return paddle_frontend.Tensor(ivy.bitwise_invert(self._ivy_array, out=out))\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n )\n },\n \"paddle\",\n )\n def bitwise_or(self, y, out=None, name=None):\n return paddle_frontend.bitwise_or(self, y, out=out)\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n )\n def logical_xor(self, y, out=None, name=None):\n return paddle_frontend.logical_xor(self, y, out=out)\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def isnan(self, name=None):\n return paddle_frontend.isnan(self)\n\n @with_unsupported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"uint8\",\n \"int8\",\n \"int16\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n )\n def greater_than(self, y, name=None):\n return paddle_frontend.greater_than(self, y)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def rsqrt(self, name=None):\n return paddle_frontend.Tensor(ivy.reciprocal(ivy.sqrt(self._ivy_array)))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def reciprocal(self, name=None):\n return paddle_frontend.reciprocal(self)\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n )\n def logical_and(self, y, out=None, name=None):\n return paddle_frontend.logical_and(self, y, out=out)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def divide(self, y, name=None):\n return paddle_frontend.divide(self, y)\n\n @with_unsupported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"uint8\",\n \"int8\",\n \"int16\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n )\n def less_than(self, y, name=None):\n return paddle_frontend.less_than(self, y)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def cumprod(self, dim=None, dtype=None, name=None):\n return paddle_frontend.Tensor(\n ivy.cumprod(self._ivy_array, axis=dim, dtype=dtype)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def cumsum(self, axis=None, dtype=None, name=None):\n return paddle_frontend.Tensor(\n ivy.cumsum(self._ivy_array, axis=axis, dtype=dtype)\n )\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\", \"float32\", \"float64\")},\n \"paddle\",\n 
)\n def angle(self, name=None):\n return paddle_frontend.Tensor(ivy.angle(self._ivy_array))\n\n @with_unsupported_dtypes(\n {\n \"2.5.1 and below\": (\n \"uint8\",\n \"int8\",\n \"int16\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n )\n def equal(self, y, name=None):\n return paddle_frontend.equal(self, y)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def rad2deg(self, name=None):\n return paddle_frontend.Tensor(ivy.rad2deg(self._ivy_array))\n\n @with_unsupported_dtypes(\n {\n \"2.5.1 and below\": (\n \"uint8\",\n \"int8\",\n \"int16\",\n \"float16\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n )\n def equal_all(self, y, name=None):\n return paddle_frontend.Tensor(\n ivy.array_equal(self._ivy_array, _to_ivy_array(y))\n )\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def maximum(self, other, name=None):\n return ivy.maximum(self._ivy_array, other)\n\n @with_unsupported_dtypes({\"2.5.1 and below\": \"bfloat16\"}, \"paddle\")\n def fmax(self, y, name=None):\n return paddle_frontend.Tensor(ivy.fmax(self._ivy_array, _to_ivy_array(y)))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": \"bfloat16\"}, \"paddle\")\n def fmin(self, y, name=None):\n return paddle_frontend.Tensor(ivy.fmin(self._ivy_array, _to_ivy_array(y)))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n )\n def minimum(self, y, name=None):\n return paddle_frontend.Tensor(ivy.minimum(self._ivy_array, _to_ivy_array(y)))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n )\n def max(self, axis=None, keepdim=False, name=None):\n return paddle_frontend.Tensor(\n ivy.max(self._ivy_array, axis=axis, keepdims=keepdim)\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def deg2rad(self, name=None):\n return paddle_frontend.Tensor(ivy.deg2rad(self._ivy_array))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\", \"bool\")}, \"paddle\"\n )\n def rot90(self, k=1, axes=(0, 1), name=None):\n return paddle_frontend.Tensor(ivy.rot90(self._ivy_array, k=k, axes=axes))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n )\n def imag(self, name=None):\n return paddle_frontend.imag(self)\n\n def is_tensor(self):\n return paddle_frontend.is_tensor(self._ivy_array)\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n )\n def isclose(self, y, rtol=1e-05, atol=1e-08, equal_nan=False, name=None):\n return paddle_frontend.isclose(\n self, y, rtol=rtol, atol=atol, equal_nan=equal_nan\n )\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"int32\", \"int64\")}, \"paddle\")\n def floor_divide(self, y, name=None):\n return paddle_frontend.Tensor(\n ivy.floor_divide(self._ivy_array, _to_ivy_array(y))\n )\n\n # cond\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def cond(self, p=None, name=None):\n return paddle_frontend.cond(self, p=p, name=name)\n\n @with_unsupported_dtypes({\"2.4.2 and below\": (\"int16\", \"float16\")}, \"paddle\")\n def conj(self, name=None):\n return paddle_frontend.Tensor(ivy.conj(self._ivy_array))\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def log2(self, name=None):\n return 
paddle_frontend.Tensor(ivy.log2(self._ivy_array))\n\n @with_unsupported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n )\n def neg(self, name=None):\n return paddle_frontend.neg(self)\n\n @with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"bool\",\n \"int8\",\n \"int16\",\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n )\n def logical_not(self, out=None, name=None):\n return paddle_frontend.Tensor(ivy.logical_not(self.ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def sign(self, name=None):\n return ivy.sign(self._ivy_array)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def var(self, axis=None, unbiased=True, keepdim=False, name=None):\n return paddle_frontend.Tensor(\n ivy.var(\n self._ivy_array, axis=axis, correction=int(unbiased), keepdims=keepdim\n )\n )\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def sgn(self, name=None):\n return paddle_frontend.Tensor(ivy.sign(self._ivy_array, np_variant=True))\n\n def tolist(self):\n return paddle_frontend.Tensor(ivy.to_list(self._ivy_array))\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n )\n def min(self, axis=None, keepdim=False, name=None):\n return ivy.min(self._ivy_array, axis=axis, keepdims=keepdim)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def atan(self, name=None):\n return ivy.atan(self._ivy_array)\n\n @with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def atanh(self, name=None):\n return ivy.atanh(self._ivy_array)\n\n @with_unsupported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n def std(self, axis=None, unbiased=True, keepdim=False, name=None):\n return paddle_frontend.Tensor(\n ivy.std(self._ivy_array, axis=axis, keepdims=keepdim)\n )\n\n @with_supported_dtypes(\n {\"2.5.1 and below\": (\"int32\", \"int64\", \"float32\", \"float64\")}, \"paddle\"\n )\n def trunc(self, name=None):\n return paddle_frontend.Tensor(ivy.trunc(self._ivy_array))\n\n @with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def remainder(self, y, name=None):\n return ivy.remainder(self._ivy_array, y)\n", "path": "ivy/functional/frontends/paddle/tensor/tensor.py" } ]
diff --git a/ivy/functional/frontends/paddle/tensor/tensor.py b/ivy/functional/frontends/paddle/tensor/tensor.py index b621b83084113..fda3fc4d9ff1a 100644 --- a/ivy/functional/frontends/paddle/tensor/tensor.py +++ b/ivy/functional/frontends/paddle/tensor/tensor.py @@ -654,3 +654,7 @@ def std(self, axis=None, unbiased=True, keepdim=False, name=None): ) def trunc(self, name=None): return paddle_frontend.Tensor(ivy.trunc(self._ivy_array)) + + @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle") + def remainder(self, y, name=None): + return ivy.remainder(self._ivy_array, y) diff --git a/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_tensor.py b/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_tensor.py index 5a8ddd8609dfc..112e2c73a6007 100644 --- a/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_tensor.py +++ b/ivy_tests/test_ivy/test_frontends/test_paddle/test_tensor/test_tensor.py @@ -3020,3 +3020,42 @@ def test_torch_tensor_sqrt_( frontend=frontend, on_device=on_device, ) + + +# remainder +@handle_frontend_method( + class_tree=CLASS_TREE, + init_tree="paddle.to_tensor", + method_name="remainder", + dtype_and_x=helpers.dtype_and_values( + available_dtypes=helpers.get_dtypes("float"), + num_arrays=2, + allow_inf=False, + large_abs_safety_factor=2, + small_abs_safety_factor=2, + safety_factor_scale="log", + shared_dtype=True, + ), +) +def test_paddle_tensor_remainder( + dtype_and_x, + frontend_method_data, + init_flags, + method_flags, + frontend, + on_device, + backend_fw, +): + input_dtype, x = dtype_and_x + helpers.test_frontend_method( + init_input_dtypes=input_dtype, + backend_to_test=backend_fw, + init_all_as_kwargs_np={"data": x[0]}, + method_input_dtypes=input_dtype, + method_all_as_kwargs_np={"y": x[1]}, + frontend_method_data=frontend_method_data, + init_flags=init_flags, + method_flags=method_flags, + frontend=frontend, + on_device=on_device, + )
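For orientation, here is a minimal usage sketch for the `remainder` method added in the diff above. The backend choice and the exact frontend import path are assumptions on my part; only the `.remainder(...)` call mirrors the diff and its test (`init_tree="paddle.to_tensor"`).

```python
# Hedged usage sketch, not part of the PR. Assumes ivy is installed and that
# the paddle frontend exposes to_tensor, consistent with the test above.
import ivy
import ivy.functional.frontends.paddle as paddle_frontend

ivy.set_backend("numpy")  # assumed backend; any supported backend should do

x = paddle_frontend.to_tensor([5.0, 7.0, 9.0])
y = ivy.array([2.0, 4.0, 6.0])

# Mirrors the new method body: ivy.remainder(self._ivy_array, y)
print(x.remainder(y))  # element-wise remainders: [1., 3., 3.]
```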
akvo__akvo-rsr-2137
Bug in project document category API

## Test plan

The `project_document_category` endpoint should not give an error. E.g. `http://rsr.localdev.akvo.org/rest/v1/project_document_category/` should load.

## Issue description

The project document category API gives an error. See http://sentry.support.akvo-ops.org/rsr/test/group/879/, or on the Test server: http://rsr.test.akvo.org/rest/v1/project_document_category/.
[ { "content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import ProjectDocument, ProjectDocumentCategory\n\nfrom ..serializers import ProjectDocumentSerializer, ProjectDocumentCategorySerializer\nfrom ..viewsets import PublicProjectViewSet\n\n\nclass ProjectDocumentViewSet(PublicProjectViewSet):\n \"\"\"\n \"\"\"\n queryset = ProjectDocument.objects.all()\n serializer_class = ProjectDocumentSerializer\n\n\nclass ProjectDocumentCategoryViewSet(PublicProjectViewSet):\n \"\"\"\n \"\"\"\n queryset = ProjectDocumentCategory.objects.all()\n serializer_class = ProjectDocumentCategorySerializer\n filter_fields = ('document__project', 'document', 'category', )\n", "path": "akvo/rest/views/project_document.py" } ]
[ { "content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom akvo.rsr.models import ProjectDocument, ProjectDocumentCategory\n\nfrom ..serializers import ProjectDocumentSerializer, ProjectDocumentCategorySerializer\nfrom ..viewsets import PublicProjectViewSet\n\n\nclass ProjectDocumentViewSet(PublicProjectViewSet):\n \"\"\"\n \"\"\"\n queryset = ProjectDocument.objects.all()\n serializer_class = ProjectDocumentSerializer\n\n\nclass ProjectDocumentCategoryViewSet(PublicProjectViewSet):\n \"\"\"\n \"\"\"\n queryset = ProjectDocumentCategory.objects.all()\n serializer_class = ProjectDocumentCategorySerializer\n filter_fields = ('document__project', 'document', 'category', )\n project_relation = 'document__project__'\n", "path": "akvo/rest/views/project_document.py" } ]
diff --git a/akvo/rest/views/project_document.py b/akvo/rest/views/project_document.py index a91a48968e..705e1df3ba 100644 --- a/akvo/rest/views/project_document.py +++ b/akvo/rest/views/project_document.py @@ -24,3 +24,4 @@ class ProjectDocumentCategoryViewSet(PublicProjectViewSet): queryset = ProjectDocumentCategory.objects.all() serializer_class = ProjectDocumentCategorySerializer filter_fields = ('document__project', 'document', 'category', ) + project_relation = 'document__project__'
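The one-line fix above sets `project_relation` on the category viewset. The `PublicProjectViewSet` base class is not shown in this record, so the following is only a plausible sketch of how such an attribute could be used to restrict the queryset to public projects; the `is_public` field name and the base-class internals are assumptions, not Akvo RSR code.

```python
# Illustrative sketch only -- not the real akvo/rest/viewsets.py implementation.
from rest_framework import viewsets


class PublicProjectViewSetSketch(viewsets.ModelViewSet):
    # Path from the serialized model to its Project; overridden per viewset,
    # e.g. 'document__project__' for ProjectDocumentCategoryViewSet.
    project_relation = 'project__'

    def get_queryset(self):
        queryset = super().get_queryset()
        # Restrict results to public projects by following project_relation.
        return queryset.filter(**{self.project_relation + 'is_public': True})
```

Under that reading, the 500 error is consistent with the base class needing a relation path to the project that `ProjectDocumentCategory` (which hangs off `document`, not `project`) did not provide until this change.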
pypi__warehouse-3056
Disable 'delete confirm' button until confirmation word is correct

We currently have a modal on `warehouse/templates/manage/settings.html` that allows the user to confirm that they want to delete their project:

![screenshot from 2018-02-03 14-43-29](https://user-images.githubusercontent.com/3323703/35768242-9dcfc21a-08f0-11e8-834d-fdcc3e6cd998.png)

The user is required to enter the project name as an extra security measure. If they get it wrong, we show them this error:

![screenshot from 2018-02-03 14-44-19](https://user-images.githubusercontent.com/3323703/35768249-bba976d2-08f0-11e8-97ba-99c37bfc7479.png)

## Proposal

It would be really nice if we could `disable` the delete button until the correct project name is given, e.g.

![screenshot from 2018-02-03 14-46-02](https://user-images.githubusercontent.com/3323703/35768271-fa2cdc64-08f0-11e8-848f-58433e60ae6b.png)

![screenshot from 2018-02-03 14-46-25](https://user-images.githubusercontent.com/3323703/35768274-0692bca8-08f1-11e8-9149-3aa7a5faad65.png)

## Notes

We will have several other delete confirmation modals on other pages, sometimes with multiple modals on a single page (e.g. delete release, delete file), so the code will need to be written to take this into account.
[ { "content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom packaging.utils import canonicalize_name\nfrom pyramid.httpexceptions import HTTPSeeOther\n\nfrom warehouse.packaging.models import (\n Release, Dependency, File, Role, JournalEntry, release_classifiers\n)\n\n\ndef confirm_project(project, request, fail_route):\n confirm = request.POST.get(\"confirm\")\n project_name = project.normalized_name\n if not confirm:\n request.session.flash(\n \"Must confirm the request.\",\n queue=\"error\",\n )\n raise HTTPSeeOther(\n request.route_path(fail_route, project_name=project_name)\n )\n if canonicalize_name(confirm) != project.normalized_name:\n request.session.flash(\n \"Could not delete project - \" +\n f\"{confirm!r} is not the same as {project.normalized_name!r}\",\n queue=\"error\",\n )\n raise HTTPSeeOther(\n request.route_path(fail_route, project_name=project_name)\n )\n\n\ndef remove_project(project, request, flash=True):\n # TODO: We don't actually delete files from the data store. We should add\n # some kind of garbage collection at some point.\n\n request.db.add(\n JournalEntry(\n name=project.name,\n action=\"remove\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n request.db.query(Role).filter(Role.project == project).delete()\n request.db.query(File).filter(File.name == project.name).delete()\n (request.db.query(Dependency).filter(Dependency.name == project.name)\n .delete())\n (request.db.execute(release_classifiers.delete()\n .where(release_classifiers.c.name ==\n project.name)))\n\n # Load the following objects into the session and individually delete them\n # so they are included in `session.deleted` and their cache keys are purged\n\n # Delete releases first, otherwise they will get cascade-deleted by the\n # project deletion and won't be purged\n for release in (\n request.db.query(Release)\n .filter(Release.project == project)\n .all()):\n request.db.delete(release)\n\n # Finally, delete the project\n request.db.delete(project)\n\n # Flush so we can repeat this multiple times if necessary\n request.db.flush()\n\n if flash:\n request.session.flash(\n f\"Successfully deleted the project {project.name!r}.\",\n queue=\"success\",\n )\n", "path": "warehouse/utils/project.py" } ]
[ { "content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom packaging.utils import canonicalize_name\nfrom pyramid.httpexceptions import HTTPSeeOther\n\nfrom warehouse.packaging.models import (\n Release, Dependency, File, Role, JournalEntry, release_classifiers\n)\n\n\ndef confirm_project(project, request, fail_route):\n confirm = request.POST.get(\"confirm_project_name\")\n project_name = project.normalized_name\n if not confirm:\n request.session.flash(\n \"Must confirm the request.\",\n queue=\"error\",\n )\n raise HTTPSeeOther(\n request.route_path(fail_route, project_name=project_name)\n )\n if canonicalize_name(confirm) != project.normalized_name:\n request.session.flash(\n \"Could not delete project - \" +\n f\"{confirm!r} is not the same as {project.normalized_name!r}\",\n queue=\"error\",\n )\n raise HTTPSeeOther(\n request.route_path(fail_route, project_name=project_name)\n )\n\n\ndef remove_project(project, request, flash=True):\n # TODO: We don't actually delete files from the data store. We should add\n # some kind of garbage collection at some point.\n\n request.db.add(\n JournalEntry(\n name=project.name,\n action=\"remove\",\n submitted_by=request.user,\n submitted_from=request.remote_addr,\n )\n )\n request.db.query(Role).filter(Role.project == project).delete()\n request.db.query(File).filter(File.name == project.name).delete()\n (request.db.query(Dependency).filter(Dependency.name == project.name)\n .delete())\n (request.db.execute(release_classifiers.delete()\n .where(release_classifiers.c.name ==\n project.name)))\n\n # Load the following objects into the session and individually delete them\n # so they are included in `session.deleted` and their cache keys are purged\n\n # Delete releases first, otherwise they will get cascade-deleted by the\n # project deletion and won't be purged\n for release in (\n request.db.query(Release)\n .filter(Release.project == project)\n .all()):\n request.db.delete(release)\n\n # Finally, delete the project\n request.db.delete(project)\n\n # Flush so we can repeat this multiple times if necessary\n request.db.flush()\n\n if flash:\n request.session.flash(\n f\"Successfully deleted the project {project.name!r}.\",\n queue=\"success\",\n )\n", "path": "warehouse/utils/project.py" } ]
diff --git a/.babelrc b/.babelrc index 002b4aa0d58e..10966c1f5a0c 100644 --- a/.babelrc +++ b/.babelrc @@ -1,3 +1,4 @@ { - "presets": ["env"] + "presets": ["env"], + "plugins": ["transform-class-properties"] } diff --git a/Gulpfile.babel.js b/Gulpfile.babel.js index 238b8a446aa1..dfdc96153cb8 100644 --- a/Gulpfile.babel.js +++ b/Gulpfile.babel.js @@ -44,6 +44,7 @@ let webpackConfig = { loader: "babel-loader", options: { presets: ["env"], + plugins: ["transform-class-properties"], }, }, }, diff --git a/package-lock.json b/package-lock.json index 8456afee7e7a..0f921dbd9d39 100644 --- a/package-lock.json +++ b/package-lock.json @@ -2,6 +2,183 @@ "requires": true, "lockfileVersion": 1, "dependencies": { + "@babel/code-frame": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.0.0-beta.40.tgz", + "integrity": "sha512-eVXQSbu/RimU6OKcK2/gDJVTFcxXJI4sHbIqw2mhwMZeQ2as/8AhS9DGkEDoHMBBNJZ5B0US63lF56x+KDcxiA==", + "dev": true, + "requires": { + "@babel/highlight": "7.0.0-beta.40" + } + }, + "@babel/generator": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.0.0-beta.40.tgz", + "integrity": "sha512-c91BQcXyTq/5aFV4afgOionxZS1dxWt8OghEx5Q52SKssdGRFSiMKnk9tGkev1pYULPJBqjSDZU2Pcuc58ffZw==", + "dev": true, + "requires": { + "@babel/types": "7.0.0-beta.40", + "jsesc": "2.5.1", + "lodash": "4.17.4", + "source-map": "0.5.7", + "trim-right": "1.0.1" + }, + "dependencies": { + "jsesc": { + "version": "2.5.1", + "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-2.5.1.tgz", + "integrity": "sha1-5CGiqOINawgZ3yiQj3glJrlt0f4=", + "dev": true + } + } + }, + "@babel/helper-function-name": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.0.0-beta.40.tgz", + "integrity": "sha512-cK9BVLtOfisSISTTHXKGvBc2OBh65tjEk4PgXhsSnnH0i8RP2v+5RCxoSlh2y/i+l2fxQqKqv++Qo5RMiwmRCA==", + "dev": true, + "requires": { + "@babel/helper-get-function-arity": "7.0.0-beta.40", + "@babel/template": "7.0.0-beta.40", + "@babel/types": "7.0.0-beta.40" + } + }, + "@babel/helper-get-function-arity": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/@babel/helper-get-function-arity/-/helper-get-function-arity-7.0.0-beta.40.tgz", + "integrity": "sha512-MwquaPznI4cUoZEgHC/XGkddOXtqKqD4DvZDOyJK2LR9Qi6TbMbAhc6IaFoRX7CRTFCmtGeu8gdXW2dBotBBTA==", + "dev": true, + "requires": { + "@babel/types": "7.0.0-beta.40" + } + }, + "@babel/highlight": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.0.0-beta.40.tgz", + "integrity": "sha512-mOhhTrzieV6VO7odgzFGFapiwRK0ei8RZRhfzHhb6cpX3QM8XXuCLXWjN8qBB7JReDdUR80V3LFfFrGUYevhNg==", + "dev": true, + "requires": { + "chalk": "2.3.1", + "esutils": "2.0.2", + "js-tokens": "3.0.2" + }, + "dependencies": { + "ansi-styles": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.0.tgz", + "integrity": "sha512-NnSOmMEYtVR2JVMIGTzynRkkaxtiq1xnFBcdQD/DnNCYPoEPsVJhM98BDyaoNOQIi7p4okdi3E27eN7GQbsUug==", + "dev": true, + "requires": { + "color-convert": "1.9.1" + } + }, + "chalk": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.3.1.tgz", + "integrity": "sha512-QUU4ofkDoMIVO7hcx1iPTISs88wsO8jA92RQIm4JAwZvFGGAV2hSAA1NX7oVj2Ej2Q6NDTcRDjPTFrMCRZoJ6g==", + "dev": true, + "requires": { + "ansi-styles": "3.2.0", + "escape-string-regexp": "1.0.5", + "supports-color": "5.2.0" + } + }, + "has-flag": 
{ + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", + "integrity": "sha1-tdRU3CGZriJWmfNGfloH87lVuv0=", + "dev": true + }, + "supports-color": { + "version": "5.2.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.2.0.tgz", + "integrity": "sha512-F39vS48la4YvTZUPVeTqsjsFNrvcMwrV3RLZINsmHo+7djCvuUzSIeXOnZ5hmjef4bajL1dNccN+tg5XAliO5Q==", + "dev": true, + "requires": { + "has-flag": "3.0.0" + } + } + } + }, + "@babel/template": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.0.0-beta.40.tgz", + "integrity": "sha512-RlQiVB7eL7fxsKN6JvnCCwEwEL28CBYalXSgWWULuFlEHjtMoXBqQanSie3bNyhrANJx67sb+Sd/vuGivoMwLQ==", + "dev": true, + "requires": { + "@babel/code-frame": "7.0.0-beta.40", + "@babel/types": "7.0.0-beta.40", + "babylon": "7.0.0-beta.40", + "lodash": "4.17.4" + }, + "dependencies": { + "babylon": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/babylon/-/babylon-7.0.0-beta.40.tgz", + "integrity": "sha512-AVxF2EcxvGD5hhOuLTOLAXBb0VhwWpEX0HyHdAI2zU+AAP4qEwtQj8voz1JR3uclGai0rfcE+dCTHnNMOnimFg==", + "dev": true + } + } + }, + "@babel/traverse": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.0.0-beta.40.tgz", + "integrity": "sha512-h96SQorjvdSuxQ6hHFIuAa3oxnad1TA5bU1Zz88+XqzwmM5QM0/k2D+heXGGy/76gT5ajl7xYLKGiPA/KTyVhQ==", + "dev": true, + "requires": { + "@babel/code-frame": "7.0.0-beta.40", + "@babel/generator": "7.0.0-beta.40", + "@babel/helper-function-name": "7.0.0-beta.40", + "@babel/types": "7.0.0-beta.40", + "babylon": "7.0.0-beta.40", + "debug": "3.1.0", + "globals": "11.3.0", + "invariant": "2.2.2", + "lodash": "4.17.4" + }, + "dependencies": { + "babylon": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/babylon/-/babylon-7.0.0-beta.40.tgz", + "integrity": "sha512-AVxF2EcxvGD5hhOuLTOLAXBb0VhwWpEX0HyHdAI2zU+AAP4qEwtQj8voz1JR3uclGai0rfcE+dCTHnNMOnimFg==", + "dev": true + }, + "debug": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/debug/-/debug-3.1.0.tgz", + "integrity": "sha512-OX8XqP7/1a9cqkxYw2yXss15f26NKWBpDXQd0/uK/KPqdQhxbPa994hnzjcE2VqQpDslf55723cKPUOGSmMY3g==", + "dev": true, + "requires": { + "ms": "2.0.0" + } + }, + "globals": { + "version": "11.3.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-11.3.0.tgz", + "integrity": "sha512-kkpcKNlmQan9Z5ZmgqKH/SMbSmjxQ7QjyNqfXVc8VJcoBV2UEg+sxQD15GQofGRh2hfpwUb70VC31DR7Rq5Hdw==", + "dev": true + } + } + }, + "@babel/types": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.0.0-beta.40.tgz", + "integrity": "sha512-uXCGCzTgMZxcSUzutCPtZmXbVC+cvENgS2e0tRuhn+Y1hZnMb8IHP0Trq7Q2MB/eFmG5pKrAeTIUfQIe5kA4Tg==", + "dev": true, + "requires": { + "esutils": "2.0.2", + "lodash": "4.17.4", + "to-fast-properties": "2.0.0" + }, + "dependencies": { + "to-fast-properties": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/to-fast-properties/-/to-fast-properties-2.0.0.tgz", + "integrity": "sha1-3F5pjL0HkmW8c+A3doGk5Og/YW4=", + "dev": true + } + } + }, "@gulp-sourcemaps/identity-map": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/@gulp-sourcemaps/identity-map/-/identity-map-1.0.1.tgz", @@ -30,6 +207,32 @@ "through2": "2.0.3" } }, + "@stimulus/core": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/@stimulus/core/-/core-1.0.1.tgz", + "integrity": 
"sha512-tQGBJyhkr+/6JZLb7WqrIgZiY46Aa0FbXH9nzNZjSMDut11LAfNPDqc7U5N9mvPoG7kCE2YJa/YQdGGlB6LgFg==", + "requires": { + "@stimulus/mutation-observers": "1.0.0" + } + }, + "@stimulus/multimap": { + "version": "0.9.0", + "resolved": "https://registry.npmjs.org/@stimulus/multimap/-/multimap-0.9.0.tgz", + "integrity": "sha512-cH38w+peiR6dMPN0bRJttNYatOs2aSussM4A8T27VNX9oVn5ZMVfPi7POnmDtPXzd8x3F6jpWoP8HkbzkqaPFw==" + }, + "@stimulus/mutation-observers": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/@stimulus/mutation-observers/-/mutation-observers-1.0.0.tgz", + "integrity": "sha512-lxXzttbMKjAML4+7JfxbETCwf9WFaIZwjTO9mbAIUq2Lykpajkc43DfdEsn6tVv5ayCU+3ODZRRllMkmLiENXg==", + "requires": { + "@stimulus/multimap": "0.9.0" + } + }, + "@stimulus/webpack-helpers": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/@stimulus/webpack-helpers/-/webpack-helpers-1.0.0.tgz", + "integrity": "sha512-p525nE67Nj9Q7lzo6XMXjFGWHyCwPyEevypKJoMmtNp8CfTGEBJe+zyqhDfuqtPg1RvRSZPgrWJgDXAC844pQQ==" + }, "abbrev": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/abbrev/-/abbrev-1.1.1.tgz", @@ -511,6 +714,28 @@ "source-map": "0.5.7" } }, + "babel-eslint": { + "version": "8.2.2", + "resolved": "https://registry.npmjs.org/babel-eslint/-/babel-eslint-8.2.2.tgz", + "integrity": "sha512-Qt2lz2egBxNYWqN9JIO2z4NOOf8i4b5JS6CFoYrOZZTDssueiV1jH/jsefyg+86SeNY3rB361/mi3kE1WK2WYQ==", + "dev": true, + "requires": { + "@babel/code-frame": "7.0.0-beta.40", + "@babel/traverse": "7.0.0-beta.40", + "@babel/types": "7.0.0-beta.40", + "babylon": "7.0.0-beta.40", + "eslint-scope": "3.7.1", + "eslint-visitor-keys": "1.0.0" + }, + "dependencies": { + "babylon": { + "version": "7.0.0-beta.40", + "resolved": "https://registry.npmjs.org/babylon/-/babylon-7.0.0-beta.40.tgz", + "integrity": "sha512-AVxF2EcxvGD5hhOuLTOLAXBb0VhwWpEX0HyHdAI2zU+AAP4qEwtQj8voz1JR3uclGai0rfcE+dCTHnNMOnimFg==", + "dev": true + } + } + }, "babel-generator": { "version": "6.26.0", "resolved": "https://registry.npmjs.org/babel-generator/-/babel-generator-6.26.0.tgz", @@ -682,6 +907,11 @@ "resolved": "https://registry.npmjs.org/babel-plugin-syntax-async-functions/-/babel-plugin-syntax-async-functions-6.13.0.tgz", "integrity": "sha1-ytnK0RkbWtY0vzCuCHI5HgZHvpU=" }, + "babel-plugin-syntax-class-properties": { + "version": "6.13.0", + "resolved": "https://registry.npmjs.org/babel-plugin-syntax-class-properties/-/babel-plugin-syntax-class-properties-6.13.0.tgz", + "integrity": "sha1-1+sjt5oxf4VDlixQW4J8fWysJ94=" + }, "babel-plugin-syntax-exponentiation-operator": { "version": "6.13.0", "resolved": "https://registry.npmjs.org/babel-plugin-syntax-exponentiation-operator/-/babel-plugin-syntax-exponentiation-operator-6.13.0.tgz", @@ -702,6 +932,17 @@ "babel-runtime": "6.26.0" } }, + "babel-plugin-transform-class-properties": { + "version": "6.24.1", + "resolved": "https://registry.npmjs.org/babel-plugin-transform-class-properties/-/babel-plugin-transform-class-properties-6.24.1.tgz", + "integrity": "sha1-anl2PqYdM9NvN7YRqp3vgagbRqw=", + "requires": { + "babel-helper-function-name": "6.24.1", + "babel-plugin-syntax-class-properties": "6.13.0", + "babel-runtime": "6.26.0", + "babel-template": "6.26.0" + } + }, "babel-plugin-transform-es2015-arrow-functions": { "version": "6.22.0", "resolved": "https://registry.npmjs.org/babel-plugin-transform-es2015-arrow-functions/-/babel-plugin-transform-es2015-arrow-functions-6.22.0.tgz", @@ -3593,6 +3834,12 @@ "estraverse": "4.2.0" } }, + "eslint-visitor-keys": { + "version": "1.0.0", + "resolved": 
"https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-1.0.0.tgz", + "integrity": "sha512-qzm/XxIbxm/FHyH341ZrbnMUpe+5Bocte9xkmFMzPMjRaZMcXww+MpBptFvtU+79L362nqiLhekCxCxDPaUMBQ==", + "dev": true + }, "espree": { "version": "3.5.2", "resolved": "https://registry.npmjs.org/espree/-/espree-3.5.2.tgz", @@ -10538,6 +10785,15 @@ } } }, + "stimulus": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/stimulus/-/stimulus-1.0.1.tgz", + "integrity": "sha512-3cxlrUkI5KLUno41W4IFbkBK6uubt2D29x5CS+KPPTag693ZlBnD7d/2UgJEOg1mNBgTgukeiTyHh9UC+nRkSg==", + "requires": { + "@stimulus/core": "1.0.1", + "@stimulus/webpack-helpers": "1.0.0" + } + }, "stream-array": { "version": "1.1.2", "resolved": "https://registry.npmjs.org/stream-array/-/stream-array-1.1.2.tgz", diff --git a/package.json b/package.json index 195a4829ae48..c871bef8c9f6 100644 --- a/package.json +++ b/package.json @@ -7,6 +7,7 @@ "dependencies": { "babel-core": "6.26.0", "babel-loader": "7.1.2", + "babel-plugin-transform-class-properties": "6.24.1", "babel-polyfill": "6.26.0", "babel-preset-env": "1.6.1", "babel-register": "6.26.0", @@ -31,6 +32,7 @@ "gulp-watch": "4.3.11", "imports-loader": "0.7.1", "jquery": "3.2.1", + "stimulus": "1.0.1", "uglify-js": "3.2.1", "vinyl-named": "1.1.0", "webpack": "3.10.0", @@ -40,15 +42,18 @@ "zopflipng-bin": "4.0.0" }, "devDependencies": { + "babel-eslint": "8.2.2", "eslint": "4.12.1", "sass-lint": "1.12.1" }, "eslintConfig": { "env": { "browser": true, - "es6": true + "es6": true, + "amd": true }, "extends": "eslint:recommended", + "parser": "babel-eslint", "parserOptions": { "sourceType": "module" }, diff --git a/tests/unit/admin/views/test_projects.py b/tests/unit/admin/views/test_projects.py index 01f9d2b005cd..9f5895b171ca 100644 --- a/tests/unit/admin/views/test_projects.py +++ b/tests/unit/admin/views/test_projects.py @@ -434,7 +434,7 @@ def test_no_confirm(self): def test_wrong_confirm(self): project = pretend.stub(normalized_name='foo') request = pretend.stub( - POST={"confirm": "bar"}, + POST={"confirm_project_name": "bar"}, session=pretend.stub( flash=pretend.call_recorder(lambda *a, **kw: None), ), @@ -461,7 +461,7 @@ def test_deletes_project(self, db_request): db_request.session = pretend.stub( flash=pretend.call_recorder(lambda *a, **kw: None), ) - db_request.POST["confirm"] = project.normalized_name + db_request.POST["confirm_project_name"] = project.normalized_name db_request.user = UserFactory.create() db_request.remote_addr = "192.168.1.1" diff --git a/tests/unit/manage/test_views.py b/tests/unit/manage/test_views.py index 8a4b6e0242cc..1fed8166a861 100644 --- a/tests/unit/manage/test_views.py +++ b/tests/unit/manage/test_views.py @@ -773,7 +773,7 @@ def test_delete_project_no_confirm(self): def test_delete_project_wrong_confirm(self): project = pretend.stub(normalized_name='foo') request = pretend.stub( - POST={"confirm": "bar"}, + POST={"confirm_project_name": "bar"}, session=pretend.stub( flash=pretend.call_recorder(lambda *a, **kw: None), ), @@ -801,7 +801,7 @@ def test_delete_project(self, db_request): db_request.session = pretend.stub( flash=pretend.call_recorder(lambda *a, **kw: None), ) - db_request.POST["confirm"] = project.normalized_name + db_request.POST["confirm_project_name"] = project.normalized_name db_request.user = UserFactory.create() db_request.remote_addr = "192.168.1.1" diff --git a/tests/unit/utils/test_project.py b/tests/unit/utils/test_project.py index 021273ab9979..ba36d0524c2f 100644 --- a/tests/unit/utils/test_project.py +++ 
b/tests/unit/utils/test_project.py @@ -29,7 +29,7 @@ def test_confirm(): project = stub(normalized_name='foobar') request = stub( - POST={'confirm': 'foobar'}, + POST={'confirm_project_name': 'foobar'}, route_path=call_recorder(lambda *a, **kw: stub()), session=stub(flash=call_recorder(lambda *a, **kw: stub())), ) @@ -43,7 +43,7 @@ def test_confirm(): def test_confirm_no_input(): project = stub(normalized_name='foobar') request = stub( - POST={'confirm': ''}, + POST={'confirm_project_name': ''}, route_path=call_recorder(lambda *a, **kw: '/the-redirect'), session=stub(flash=call_recorder(lambda *a, **kw: stub())), ) @@ -63,7 +63,7 @@ def test_confirm_no_input(): def test_confirm_incorrect_input(): project = stub(normalized_name='foobar') request = stub( - POST={'confirm': 'bizbaz'}, + POST={'confirm_project_name': 'bizbaz'}, route_path=call_recorder(lambda *a, **kw: '/the-redirect'), session=stub(flash=call_recorder(lambda *a, **kw: stub())), ) diff --git a/warehouse/admin/templates/admin/projects/delete.html b/warehouse/admin/templates/admin/projects/delete.html index e8e635e7e42a..17ade1e46d9e 100644 --- a/warehouse/admin/templates/admin/projects/delete.html +++ b/warehouse/admin/templates/admin/projects/delete.html @@ -32,10 +32,10 @@ <h3 class="box-title">Delete Project</h3> </p> <div class="form-group col-sm-12"> - <label for="confirm"> + <label for="confirm_project_name"> Are you sure you want to delete <strong>{{ project_name }}</strong>? </label> - <input name="confirm" class="form-control" type="text" placeholder="Enter project name to confirm" autocomplete="off" autocorrect="off" autocapitalize="off"> + <input name="confirm_project_name" class="form-control" type="text" placeholder="Enter project name to confirm" autocomplete="off" autocorrect="off" autocapitalize="off"> </div> </div> diff --git a/warehouse/static/js/warehouse/controllers/confirm_controller.js b/warehouse/static/js/warehouse/controllers/confirm_controller.js new file mode 100644 index 000000000000..4177c8b432ff --- /dev/null +++ b/warehouse/static/js/warehouse/controllers/confirm_controller.js @@ -0,0 +1,17 @@ +import { Controller } from "stimulus"; + +export default class extends Controller { + static targets = [ "input", "button" ] + + connect() { + this.buttonTarget.disabled = true; + } + + check() { + if (this.inputTarget.value == this.buttonTarget.dataset.expected) { + this.buttonTarget.disabled = false; + } else { + this.buttonTarget.disabled = true; + } + } +} diff --git a/warehouse/static/js/warehouse/index.js b/warehouse/static/js/warehouse/index.js index 2a44301ce0fc..2a005db96ac9 100644 --- a/warehouse/static/js/warehouse/index.js +++ b/warehouse/static/js/warehouse/index.js @@ -16,6 +16,10 @@ // ensure we have an ES6 like environment. import "babel-polyfill"; +// Import stimulus +import { Application } from "stimulus"; +import { definitionsFromContext } from "stimulus/webpack-helpers"; + // We'll use docReady as a modern replacement for $(document).ready() which // does not require all of jQuery to use. This will let us use it without // having to load all of jQuery, which will make things faster. 
@@ -164,3 +168,7 @@ docReady(() => { } } }); + +const application = Application.start(); +const context = require.context("./controllers", true, /\.js$/); +application.load(definitionsFromContext(context)); diff --git a/warehouse/static/sass/blocks/_button.scss b/warehouse/static/sass/blocks/_button.scss index 9424d0fb1282..020f2fa6cdc7 100644 --- a/warehouse/static/sass/blocks/_button.scss +++ b/warehouse/static/sass/blocks/_button.scss @@ -51,7 +51,7 @@ border-color: $brand-color; color: darken($brand-color, 10); text-decoration: none; - z-index: index($z-index-scale, "active-button"); + z-index: index($z-index-scale, "active-button"); // Needed for button groups outline: none; } @@ -65,7 +65,7 @@ border-color: $brand-color; background-color: $brand-color; color: $white; - z-index: index($z-index-scale, "primary-button"); + z-index: index($z-index-scale, "primary-button"); // Needed for button groups &:focus, &:hover, diff --git a/warehouse/static/sass/blocks/_modal.scss b/warehouse/static/sass/blocks/_modal.scss index edb8c6c81f65..6ab881fad930 100644 --- a/warehouse/static/sass/blocks/_modal.scss +++ b/warehouse/static/sass/blocks/_modal.scss @@ -41,6 +41,7 @@ align-items: center; justify-content: center; flex-grow: 1; + text-align: left; &:target { opacity: 1; diff --git a/warehouse/static/sass/settings/_z-index.scss b/warehouse/static/sass/settings/_z-index.scss index 0f2f74703bf8..949c01a28754 100644 --- a/warehouse/static/sass/settings/_z-index.scss +++ b/warehouse/static/sass/settings/_z-index.scss @@ -25,14 +25,13 @@ // sass-lint:disable indentation -$z-index-scale: "tabs-border", +$z-index-scale: "active-button", + "primary-button", + "tabs-border", "history-line", "history-node", "callout-block", "callout-block-border", - "button", - "active-button", - "primary-button", "dropdown", "sticky-top", "dark-overlay", diff --git a/warehouse/templates/manage/account.html b/warehouse/templates/manage/account.html index d86b4b61a838..bbdb79dc899c 100644 --- a/warehouse/templates/manage/account.html +++ b/warehouse/templates/manage/account.html @@ -277,37 +277,7 @@ <h3>Cannot Delete Account</h3> {% else %} <h3>Proceed with caution!</h3> <p>You will not be able to recover your account after you delete it.</p> - <form> - <a href="#delete-account-modal" class="button button--primary"> - Delete Account - </a> - </form> + {{ confirm_button("Delete your PyPI Account", "Username", user.username) }} {% endif %} </div> - - <div id="delete-account-modal" class="modal"> - <div class="modal__content" role="dialog"> - <form method="POST" action="{{ request.current_route_path() }}" class="modal__form"> - <a href="#modal-close" title="Close" class="modal__close"> - <i class="fa fa-times" aria-hidden="true"></i> - <span class="sr-only">close</span> - </a> - <div class="modal__body"> - <h3 class="modal__title">Delete your PyPI account?</h3> - <div class="callout-block callout-block--danger callout-block--bottom-margin no-top-margin"> - <p>Warning: This action cannot be undone!</p> - </div> - <p>Confirm your username to continue.</p> - <input name="csrf_token" type="hidden" value="{{ request.session.get_csrf_token() }}"> - <label for="confirm_username">Username</label> - <input name="confirm_username" type="text" placeholder="Confirm your username" autocomplete="off" autocorrect="off" autocapitalize="off"> - </div> - <div class="modal__footer"> - <a href="#modal-close" class="button modal__action">Cancel</a> - <button class="button button--primary modal__action" type="submit">Delete Account</button> - 
</div> - </form> - </div> - </div> - {% endblock %} diff --git a/warehouse/templates/manage/manage_base.html b/warehouse/templates/manage/manage_base.html index 02b583233998..1cbf1f0783d4 100644 --- a/warehouse/templates/manage/manage_base.html +++ b/warehouse/templates/manage/manage_base.html @@ -54,3 +54,54 @@ <h3 class="sidebar-section__title">Your Account</h3> </div> </div> {% endblock %} + +{% macro modal_slug(title, index) %} +{% endmacro %} + +{% macro confirm_modal(title, confirm_name, confirm_string, slug, index=None, extra_fields=None, action=None) %} + <div id="{{ slug }}" class="modal" data-controller="confirm"> + <div class="modal__content" role="dialog"> + <form method="POST" class="modal__form" action="{{ action or request.current_route_path() }}"> + <input name="csrf_token" type="hidden" value="{{ request.session.get_csrf_token() }}"> + {{ extra_fields if extra_fields else '' }} + <a href="#modal-close" title="Close" class="modal__close"> + <i class="fa fa-times" aria-hidden="true"></i> + <span class="sr-only">close</span> + </a> + <div class="modal__body"> + <h3 class="modal__title">{{ title }} {{ confirm_string }}?</h3> + <div class="callout-block callout-block--danger callout-block--bottom-margin no-top-margin"> + <p>Warning: This action cannot be undone!</p> + </div> + <p>Confirm the {{ confirm_name|lower }} to continue.</p> + {% set name = "confirm_" + confirm_name.lower().replace(' ', '_') %} + <label for="{{ name }}">{{ confirm_name }}</label> + <input name="{{ name }}" data-action="input->confirm#check" data-target="confirm.input" type="text" placeholder="{{ confirm_string }}" autocomplete="off" autocorrect="off" autocapitalize="off"> + </div> + <div class="modal__footer"> + <a href="#modal-close" class="button modal__action">Cancel</a> + <button class="button button--primary modal__action" data-target="confirm.button" data-expected="{{ confirm_string }}" type="submit"> + {{ title }} + </button> + </div> + </form> + </div> + </div> +{% endmacro %} + +{% macro confirm_button(title, confirm_name, confirm_string, index=None, extra_fields=None, action=None) %} + {% set slug = title.lower().replace(' ', '-') + '-modal' + ('-{}'.format(index) if index else '') %} + <a href="#{{ slug }}" class="button button--primary"> + {{ title }} + </a> + {{ confirm_modal(title, confirm_name, confirm_string, slug, index=None, extra_fields=extra_fields, action=action) }} +{% endmacro %} + +{% macro confirm_dropdown(title, confirm_name, confirm_string, index=None, extra_fields=None, action=None) %} + {% set slug = title.lower().replace(' ', '-') + '-modal' + ('-{}'.format(index) if index else '') %} + <a href="#{{ slug }}" class="dropdown__link"> + <i class="fa fa-trash" aria-hidden="true"></i> + Delete + </a> + {{ confirm_modal(title, confirm_name, confirm_string, slug, index=index, extra_fields=extra_fields, action=action) }} +{% endmacro %} diff --git a/warehouse/templates/manage/manage_project_base.html b/warehouse/templates/manage/manage_project_base.html index 280920205079..1247618b320b 100644 --- a/warehouse/templates/manage/manage_project_base.html +++ b/warehouse/templates/manage/manage_project_base.html @@ -11,7 +11,7 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-#} -{% extends "base.html" %} +{% extends "manage_base.html" %} {% set user = request.user %} {% set projects = user.projects %} diff --git a/warehouse/templates/manage/release.html b/warehouse/templates/manage/release.html index 0be8f077d183..777681d72f33 100644 --- a/warehouse/templates/manage/release.html +++ b/warehouse/templates/manage/release.html @@ -74,10 +74,10 @@ <h2 class="heading-wsubtitle__heading">Release Version {{ release.version }}</h2 <i class="fa fa-hashtag" aria-hidden="true"></i> View Hashes </a> - <a href="#delete-file-modal-{{ loop.index }}" class="dropdown__link"> - <i class="fa fa-trash" aria-hidden="true"></i> - Delete - </a> + {% set extra_fields %} + <input name="file_id" type="hidden" value="{{ file.id }}"> + {% endset %} + {{ confirm_dropdown("Delete File", "Filename", file.filename, index=loop.index, extra_fields=extra_fields) }} </div> </div> </td> @@ -112,63 +112,11 @@ <h3>Delete Release</h3> Deleting will irreversibly delete this release. {% endif %} </p> - <a href="#delete-release-modal" class="button button--primary">Delete</a> - </div> - - <div id="delete-release-modal" class="modal"> - <div class="modal__content" role="dialog"> - <form method="POST" class="modal__form" action="{{ request.current_route_path() }}"> - <input name="csrf_token" type="hidden" value="{{ request.session.get_csrf_token() }}"> - <a href="#modal-close" title="Close" class="modal__close"> - <i class="fa fa-times" aria-hidden="true"></i> - <span class="sr-only">close</span> - </a> - <div class="modal__body"> - <h3 class="modal__title">Delete Release {{ release.version }}?</h3> - <div class="callout-block callout-block--danger callout-block--bottom-margin no-top-margin"> - <p>Warning: This action cannot be undone!</p> - </div> - <p>Confirm the release version to continue.</p> - <label for="confirm_version">Release version</label> - <input name="confirm_version" type="text" placeholder="Confirm version" autocomplete="off" autocorrect="off" autocapitalize="off"> - </div> - <div class="modal__footer"> - <a href="#modal-close" class="button modal__action">Cancel</a> - <button class="button button--primary modal__action" type="submit">Delete Release</button> - </div> - </form> - </div> + {{ confirm_button("Delete Release", "Version", release.version) }} </div> {% if files %} {% for file in files %} - <div id="delete-file-modal-{{ loop.index }}" class="modal"> - {% set project_name = project.normalized_name %} - <div class="modal__content" role="dialog"> - <form method="POST" class="modal__form" action="{{ request.current_route_path() }}"> - <input name="csrf_token" type="hidden" value="{{ request.session.get_csrf_token() }}"> - <input name="file_id" type="hidden" value="{{ file.id }}"> - <a href="#modal-close" title="Close" class="modal__close"> - <i class="fa fa-times" aria-hidden="true"></i> - <span class="sr-only">close</span> - </a> - <div class="modal__body"> - <h3 class="modal__title">Delete {{ file.filename }}?</h3> - <div class="callout-block callout-block--danger callout-block--bottom-margin no-top-margin"> - <p>Warning: This action cannot be undone!</p> - </div> - <p>Confirm the file name to continue.</p> - <label for="confirm_filename">File name</label> - <input name="confirm_filename" type="text" placeholder="Confirm file name" autocomplete="off" autocorrect="off" autocapitalize="off"> - </div> - <div class="modal__footer"> - <a href="#modal-close" class="button modal__action">Cancel</a> - <button class="button button--primary modal__action" type="submit">Delete 
File</button> - </div> - </form> - </div> - </div> - <div id="copy-hash-modal-{{ loop.index }}" class="modal modal--wide"> <div class="modal__content" role="dialog"> <a href="#modal-close" title="Close" class="modal__close"> diff --git a/warehouse/templates/manage/releases.html b/warehouse/templates/manage/releases.html index 5284746b089d..05036a6151ac 100644 --- a/warehouse/templates/manage/releases.html +++ b/warehouse/templates/manage/releases.html @@ -61,12 +61,8 @@ <h2>Releases ({{ project.releases|length }})</h2> <i class="fa fa-eye" aria-hidden="true"></i> View </a> - {# TODO: https://github.com/pypa/warehouse/issues/2808 - <a href="#delete-release-modal-{{ loop.index }}" class="dropdown__link"> - <i class="fa fa-trash" aria-hidden="true"></i> - Delete - </a> - #} + {% set action = request.route_path('manage.project.release', project_name=project.name, version=release.version) %} + {{ confirm_dropdown("Delete Release", "Version", release.version, index=loop.index, action=action) }} </div> </div> </td> @@ -85,38 +81,4 @@ <h3>No Releases Found</h3> {% endif %} <p>Learn how to create a new release on the <a href="https://packaging.python.org/tutorials/distributing-packages/">Python Packaging User Guide</a></p> </div> - - {# TODO: https://github.com/pypa/warehouse/issues/2808 - {% for release in project.releases %} - <div id="delete-release-modal-{{ loop.index }}" class="modal"> - <div class="modal__content" role="dialog"> - <a href="#modal-close" title="Close" class="modal__close"> - <i class="fa fa-times" aria-hidden="true"></i> - <span class="sr-only">close</span> - </a> - <div class="modal__body"> - <h3 class="modal__title">Delete {{ project.name }} - release {{ release.version }}?</h3> - <div class="callout-block callout-block--danger callout-block--bottom-margin no-top-margin"> - <p>Warning: This action cannot be undone!</p> - </div> - <p>Enter your password to continue.</p> - <form class="modal__form"> - <div class="split-layout"> - <label for="password">Password</label> - <label for="show-password" class="show-password"> - <input id="show-password" type="checkbox">&nbsp;Show password - </label> - </div> - <input type="password" id="password" placeholder="Your password"> - </form> - </div> - <div class="modal__footer"> - <a href="#modal-close" class="button modal__action">Cancel</a> - <button class="button button--primary modal__action">Delete Release</button> - </div> - </div> - <p>Enter your password to continue.</p> - </div> - {% endfor %} - #} {% endblock %} diff --git a/warehouse/templates/manage/settings.html b/warehouse/templates/manage/settings.html index d20e19790253..0cee4a1c2ccc 100644 --- a/warehouse/templates/manage/settings.html +++ b/warehouse/templates/manage/settings.html @@ -37,32 +37,7 @@ <h3>Delete Project</h3> Deleting will irreversibly delete this project. 
{% endif %} </p> - <a href="#delete-project-modal" class="button button--primary">Delete</a> - </div> - - <div id="delete-project-modal" class="modal"> - {% set project_name = project.normalized_name %} - <div class="modal__content" role="dialog"> - <form method="POST" action="{{ request.route_path('manage.project.delete_project', project_name=project_name) }}" class="modal__form"> - <a href="#modal-close" title="Close" class="modal__close"> - <i class="fa fa-times" aria-hidden="true"></i> - <span class="sr-only">close</span> - </a> - <div class="modal__body"> - <h3 class="modal__title">Delete {{ project.name }}?</h3> - <div class="callout-block callout-block--danger callout-block--bottom-margin no-top-margin"> - <p>Warning: This action cannot be undone!</p> - </div> - <p>Confirm the project name to continue.</p> - <input name="csrf_token" type="hidden" value="{{ request.session.get_csrf_token() }}"> - <label for="project-name">Project Name</label> - <input name="confirm" type="text" placeholder="Confirm project name" autocomplete="off" autocorrect="off" autocapitalize="off"> - </div> - <div class="modal__footer"> - <a href="#modal-close" class="button modal__action">Cancel</a> - <button class="button button--primary modal__action" type="submit">Delete Project</button> - </div> - </form> - </div> + {% set action = request.route_path('manage.project.delete_project', project_name=project.normalized_name) %} + {{ confirm_button("Delete project", "Project name", project.normalized_name, action=action) }} </div> {% endblock %} diff --git a/warehouse/utils/project.py b/warehouse/utils/project.py index 5ae484f0d19d..2b1b59e61946 100644 --- a/warehouse/utils/project.py +++ b/warehouse/utils/project.py @@ -19,7 +19,7 @@ def confirm_project(project, request, fail_route): - confirm = request.POST.get("confirm") + confirm = request.POST.get("confirm_project_name") project_name = project.normalized_name if not confirm: request.session.flash(
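Even with the button disabled client side, the server-side check in `warehouse/utils/project.py` stays the source of truth. A minimal sketch of the comparison it performs on the renamed `confirm_project_name` field follows; the helper function and sample data are mine, only the field name and the `canonicalize_name` comparison come from the diff above.

```python
# Sketch of the server-side confirmation check after the field rename.
from packaging.utils import canonicalize_name


def confirmation_matches(post_data: dict, normalized_name: str) -> bool:
    confirm = post_data.get("confirm_project_name", "")
    # Empty input and mismatches are both rejected, as in confirm_project().
    return bool(confirm) and canonicalize_name(confirm) == normalized_name


# 'Foo.Bar' and 'foo-bar' canonicalize to the same name, so this passes:
assert confirmation_matches({"confirm_project_name": "Foo.Bar"}, "foo-bar")
# A wrong name (or an empty string) fails:
assert not confirmation_matches({"confirm_project_name": "other"}, "foo-bar")
```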
netket__netket-897
[input validation] nk.graph.Chain([L]) does not fail

nk.graph.Chain([L]) should fail, but does not, and leads to a failure later on that took me a while to track down. We should validate that the input is an integer.

```python
>>> graph = nk.graph.Chain([L])
>>> sgb = graph.space_group_builder()
>>> sgb.little_group([0])
```
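A minimal sketch of the validation the issue asks for is given below. Where exactly the check should live (e.g. in `Chain`/`Hypercube` in `netket/graph/common_lattices.py`) and the wording of the error are my assumptions.

```python
# Hedged sketch: reject non-integer lengths up front so Chain([L]) fails
# immediately instead of breaking later in space_group_builder().
import numbers


def _validate_length(length) -> int:
    if isinstance(length, bool) or not isinstance(length, numbers.Integral):
        raise TypeError(
            f"`length` must be an integer, got {type(length).__name__}: {length!r}"
        )
    if length < 1:
        raise ValueError(f"`length` must be >= 1, got {length}")
    return int(length)


# With this in place, nk.graph.Chain([10]) would raise TypeError right away,
# while nk.graph.Chain(10) behaves as before.
```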
[ { "content": "# Copyright 2021 The NetKet Authors - All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom itertools import permutations\nfrom typing import Sequence, Union, Tuple\nimport numpy as np\nimport warnings\n\nfrom .lattice import Lattice\n\nfrom netket.utils.group import PointGroup, PGSymmetry, planar, cubic, Identity\n\n\ndef _perm_symm(perm: Tuple) -> PGSymmetry:\n n = len(perm)\n M = np.zeros((n, n))\n M[range(n), perm] = 1\n return PGSymmetry(M)\n\n\ndef _axis_reflection(axis: int, ndim: int) -> PGSymmetry:\n M = np.eye(ndim)\n M[axis, axis] = -1\n return PGSymmetry(M)\n\n\ndef _grid_point_group(extent: Sequence[int], pbc: Sequence[bool]) -> PointGroup:\n # axis permutations\n # can exchange two axes iff they have the same kind of BC and length\n # represent open BC by setting kind[i] = -extent[i], so just have to\n # match these\n axis_perm = []\n axes = np.arange(len(extent), dtype=int)\n extent = np.asarray(extent, dtype=int)\n kind = np.where(pbc, extent, -extent)\n ndim = len(extent)\n for perm in permutations(axes):\n if np.all(kind == kind[list(perm)]):\n axis_perm.append(_perm_symm(perm))\n result = PointGroup(axis_perm, ndim=ndim)\n # reflections across axes and setting the origin\n # OBC axes are only symmetric w.r.t. their midpoint, (extent[i]-1)/2\n origin = []\n for i in axes:\n result = result @ PointGroup([Identity(), _axis_reflection(i, ndim)], ndim=ndim)\n origin.append(0 if pbc[i] else (extent[i] - 1) / 2)\n result = result.elems\n result[0] = Identity() # it would otherwise be an equivalent PGSymmetry\n return PointGroup(result, ndim=ndim).change_origin(origin)\n\n\ndef Grid(\n extent: Sequence[int] = None,\n *,\n length: Sequence[int] = None,\n pbc: Union[bool, Sequence[bool]] = True,\n) -> Lattice:\n \"\"\"\n Constructs a hypercubic lattice given its extent in all dimensions.\n\n Args:\n extent: Size of the lattice along each dimension. 
It must be a list with\n integer components >= 1.\n pbc: If `True`, the grid will have periodic boundary conditions (PBC);\n if `False`, the grid will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Examples:\n Construct a 5x10 square lattice with periodic boundary conditions:\n\n >>> import netket\n >>> g=netket.graph.Grid(extent=[5, 10], pbc=True)\n >>> print(g.n_nodes)\n 50\n\n Construct a 2x2x3 cubic lattice with open boundary conditions:\n\n >>> g=netket.graph.Grid(extent=[2,2,3], pbc=False)\n >>> print(g.n_nodes)\n 12\n \"\"\"\n if extent is None:\n if length is None:\n raise TypeError(\"Required argument 'extent' missing\")\n else:\n warnings.warn(\n \"'length' is deprecated and may be removed in future versions, \"\n \"use 'extent' instead\",\n FutureWarning,\n )\n extent = np.asarray(length, dtype=int)\n else:\n if length is not None:\n raise TypeError(\n \"'length' is a deprecated alias of 'extent', do not supply both\"\n )\n else:\n extent = np.asarray(extent, dtype=int)\n\n ndim = len(extent)\n if isinstance(pbc, bool):\n pbc = [pbc] * ndim\n return Lattice(\n basis_vectors=np.eye(ndim),\n extent=extent,\n pbc=pbc,\n point_group=lambda: _grid_point_group(extent, pbc),\n )\n\n\ndef Hypercube(length: int, n_dim: int = 1, *, pbc: bool = True) -> Lattice:\n r\"\"\"Constructs a hypercubic lattice with equal side length in all dimensions.\n Periodic boundary conditions can also be imposed.\n\n Args:\n length: Side length of the hypercube; must always be >=1\n n_dim: Dimension of the hypercube; must be at least 1.\n pbc: Whether the hypercube should have periodic boundary conditions\n (in all directions)\n\n Examples:\n A 10x10x10 cubic lattice with periodic boundary conditions can be\n constructed as follows:\n\n >>> import netket\n >>> g = netket.graph.Hypercube(10, n_dim=3, pbc=True)\n >>> print(g.n_nodes)\n 1000\n \"\"\"\n length_vector = [length] * n_dim\n return Grid(length_vector, pbc=pbc)\n\n\ndef Cube(length: int, *, pbc: bool = True) -> Lattice:\n \"\"\"Constructs a cubic lattice of side `length`\n Periodic boundary conditions can also be imposed\n\n Args:\n length: Side length of the cube; must always be >=1\n pbc: Whether the cube should have periodic boundary conditions\n (in all directions)\n\n Examples:\n A 10×10×10 cubic lattice with periodic boundary conditions can be\n constructed as follows:\n\n >>> import netket\n >>> g=netket.graph.Cube(10, pbc=True)\n >>> print(g.n_nodes)\n 1000\n \"\"\"\n return Hypercube(length, pbc=pbc, n_dim=3)\n\n\ndef Square(length: int, *, pbc: bool = True) -> Lattice:\n \"\"\"Constructs a square lattice of side `length`\n Periodic boundary conditions can also be imposed\n\n Args:\n length: Side length of the square; must always be >=1\n pbc: Whether the square should have periodic boundary\n conditions (in both directions)\n\n Examples:\n A 10x10 square lattice with periodic boundary conditions can be\n constructed as follows:\n\n >>> import netket\n >>> g=netket.graph.Square(10, pbc=True)\n >>> print(g.n_nodes)\n 100\n \"\"\"\n return Hypercube(length, pbc=pbc, n_dim=2)\n\n\ndef Chain(length: int, *, pbc: bool = True) -> Lattice:\n r\"\"\"Constructs a chain of `length` sites.\n Periodic boundary conditions can also be imposed\n\n Args:\n length: Length of the chain. 
It must always be >=1\n pbc: Whether the chain should have periodic boundary conditions\n\n Examples:\n A 10 site chain with periodic boundary conditions can be\n constructed as follows:\n\n >>> import netket\n >>> g = netket.graph.Chain(10, pbc=True)\n >>> print(g.n_nodes)\n 10\n \"\"\"\n return Hypercube(length, pbc=pbc, n_dim=1)\n\n\ndef BCC(extent: Sequence[int], *, pbc: Union[bool, Sequence[bool]] = True) -> Lattice:\n \"\"\"Constructs a BCC lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed\n Sites are returned at the Bravais lattice points.\n\n Arguments:\n extent: Number of primitive unit cells along each direction, needs to be\n an array of length 3\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a BCC lattice with 3×3×3 primitive unit cells:\n\n >>> from netket.graph import BCC\n >>> g = BCC(extent=[3,3,3])\n >>> print(g.n_nodes)\n 27\n \"\"\"\n basis = [[-0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0.5, 0.5, -0.5]]\n # determine if full point group is realised by the simulation box\n point_group = cubic.Oh() if np.all(pbc) and len(set(extent)) == 1 else None\n return Lattice(basis_vectors=basis, extent=extent, pbc=pbc, point_group=point_group)\n\n\ndef FCC(extent: Sequence[int], *, pbc: Union[bool, Sequence[bool]] = True) -> Lattice:\n \"\"\"Constructs an FCC lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed\n Sites are returned at the Bravais lattice points.\n\n Arguments:\n extent: Number of primitive unit cells along each direction, needs\n to be an array of length 3\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct an FCC lattice with 3×3×3 primitive unit cells:\n\n >>> from netket.graph import FCC\n >>> g = FCC(extent=[3,3,3])\n >>> print(g.n_nodes)\n 27\n \"\"\"\n basis = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]\n # determine if full point group is realised by the simulation box\n point_group = cubic.Oh() if np.all(pbc) and len(set(extent)) == 1 else None\n return Lattice(basis_vectors=basis, extent=extent, pbc=pbc, point_group=point_group)\n\n\ndef Diamond(\n extent: Sequence[int], *, pbc: Union[bool, Sequence[bool]] = True\n) -> Lattice:\n \"\"\"Constructs a diamond lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed.\n\n Sites are returned at the 8a Wyckoff positions of the FCC lattice\n ([000], [1/4,1/4,1/4], and translations thereof).\n\n Arguments:\n extent: Number of primitive unit cells along each direction, needs to\n be an array of length 3\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a diamond lattice with 3×3×3 primitive unit cells:\n\n >>> from 
netket.graph import Diamond\n >>> g = Diamond(extent=[3,3,3])\n >>> print(g.n_nodes)\n 54\n \"\"\"\n basis = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]\n sites = [[0, 0, 0], [0.25, 0.25, 0.25]]\n # determine if full point group is realised by the simulation box\n point_group = cubic.Fd3m() if np.all(pbc) and len(set(extent)) == 1 else None\n return Lattice(\n basis_vectors=basis,\n site_offsets=sites,\n extent=extent,\n pbc=pbc,\n point_group=point_group,\n )\n\n\ndef Pyrochlore(\n extent: Sequence[int], *, pbc: Union[bool, Sequence[bool]] = True\n) -> Lattice:\n \"\"\"Constructs a pyrochlore lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed.\n\n Sites are returned at the 16c Wyckoff positions of the FCC lattice\n ([111]/8, [1 -1 -1]/8, [-1 1 -1]/8, [-1 -1 1]/8, and translations thereof).\n\n Arguments:\n extent: Number of primitive unit cells along each direction, needs to be\n an array of length 3\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a pyrochlore lattice with 3×3×3 primitive unit cells:\n\n >>> from netket.graph import Pyrochlore\n >>> g = Pyrochlore(extent=[3,3,3])\n >>> print(g.n_nodes)\n 108\n \"\"\"\n basis = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]\n sites = np.array([[1, 1, 1], [1, 3, 3], [3, 1, 3], [3, 3, 1]]) / 8\n # determine if full point group is realised by the simulation box\n point_group = cubic.Fd3m() if np.all(pbc) and len(set(extent)) == 1 else None\n return Lattice(\n basis_vectors=basis,\n site_offsets=sites,\n extent=extent,\n pbc=pbc,\n point_group=point_group,\n )\n\n\ndef _hexagonal_general(\n extent, *, site_offsets=None, pbc: Union[bool, Sequence[bool]] = True\n) -> Lattice:\n basis = [[1, 0], [0.5, 0.75 ** 0.5]]\n # determine if full point group is realised by the simulation box\n point_group = planar.D(6) if np.all(pbc) and extent[0] == extent[1] else None\n return Lattice(\n basis_vectors=basis,\n extent=extent,\n site_offsets=site_offsets,\n pbc=pbc,\n point_group=point_group,\n )\n\n\ndef Triangular(extent, *, pbc: Union[bool, Sequence[bool]] = True) -> Lattice:\n r\"\"\"Constructs a triangular lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed\n Sites are returned at the Bravais lattice points.\n\n Arguments:\n extent: Number of unit cells along each direction, needs to be an array\n of length 2\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a triangular lattice with 3 × 3 unit cells:\n\n >>> from netket.graph import Triangular\n >>> g = Triangular(extent=[3, 3])\n >>> print(g.n_nodes)\n 9\n \"\"\"\n return _hexagonal_general(extent, site_offsets=None, pbc=pbc)\n\n\ndef Honeycomb(extent, *, pbc: Union[bool, Sequence[bool]] = True) -> Lattice:\n r\"\"\"Constructs a honeycomb lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed.\n Sites are returned at the 2b Wyckoff positions.\n\n Arguments:\n extent: Number of unit cells 
along each direction, needs to be an array\n of length 2\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a honeycomb lattice with 3 × 3 unit cells:\n\n >>> from netket.graph import Honeycomb\n >>> g = Honeycomb(extent=[3, 3])\n >>> print(g.n_nodes)\n 18\n \"\"\"\n return _hexagonal_general(\n extent, site_offsets=[[0.5, 0.5 / 3 ** 0.5], [1, 1 / 3 ** 0.5]], pbc=pbc\n )\n\n\ndef Kagome(extent, *, pbc: Union[bool, Sequence[bool]] = True) -> Lattice:\n r\"\"\"Constructs a kagome lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed.\n Sites are returned at the 3c Wyckoff positions.\n\n Arguments:\n extent: Number of unit cells along each direction, needs to be an array\n of length 2\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a kagome lattice with 3 × 3 unit cells:\n\n >>> from netket.graph import Kagome\n >>> g = Kagome(extent=[3, 3])\n >>> print(g.n_nodes)\n 27\n \"\"\"\n return _hexagonal_general(\n extent,\n site_offsets=[[0.5, 0], [0.25, 0.75 ** 0.5 / 2], [0.75, 0.75 ** 0.5 / 2]],\n pbc=pbc,\n )\n", "path": "netket/graph/common_lattices.py" } ]
[ { "content": "# Copyright 2021 The NetKet Authors - All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom itertools import permutations\nfrom typing import Sequence, Union, Tuple\nimport numpy as np\nimport warnings\n\nfrom .lattice import Lattice\n\nfrom netket.utils.group import PointGroup, PGSymmetry, planar, cubic, Identity\n\n\ndef _perm_symm(perm: Tuple) -> PGSymmetry:\n n = len(perm)\n M = np.zeros((n, n))\n M[range(n), perm] = 1\n return PGSymmetry(M)\n\n\ndef _axis_reflection(axis: int, ndim: int) -> PGSymmetry:\n M = np.eye(ndim)\n M[axis, axis] = -1\n return PGSymmetry(M)\n\n\ndef _grid_point_group(extent: Sequence[int], pbc: Sequence[bool]) -> PointGroup:\n # axis permutations\n # can exchange two axes iff they have the same kind of BC and length\n # represent open BC by setting kind[i] = -extent[i], so just have to\n # match these\n axis_perm = []\n axes = np.arange(len(extent), dtype=int)\n extent = np.asarray(extent, dtype=int)\n kind = np.where(pbc, extent, -extent)\n ndim = len(extent)\n for perm in permutations(axes):\n if np.all(kind == kind[list(perm)]):\n axis_perm.append(_perm_symm(perm))\n result = PointGroup(axis_perm, ndim=ndim)\n # reflections across axes and setting the origin\n # OBC axes are only symmetric w.r.t. their midpoint, (extent[i]-1)/2\n origin = []\n for i in axes:\n result = result @ PointGroup([Identity(), _axis_reflection(i, ndim)], ndim=ndim)\n origin.append(0 if pbc[i] else (extent[i] - 1) / 2)\n result = result.elems\n result[0] = Identity() # it would otherwise be an equivalent PGSymmetry\n return PointGroup(result, ndim=ndim).change_origin(origin)\n\n\ndef Grid(\n extent: Sequence[int] = None,\n *,\n length: Sequence[int] = None,\n pbc: Union[bool, Sequence[bool]] = True,\n) -> Lattice:\n \"\"\"\n Constructs a hypercubic lattice given its extent in all dimensions.\n\n Args:\n extent: Size of the lattice along each dimension. 
It must be a list with\n integer components >= 1.\n pbc: If `True`, the grid will have periodic boundary conditions (PBC);\n if `False`, the grid will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Examples:\n Construct a 5x10 square lattice with periodic boundary conditions:\n\n >>> import netket\n >>> g=netket.graph.Grid(extent=[5, 10], pbc=True)\n >>> print(g.n_nodes)\n 50\n\n Construct a 2x2x3 cubic lattice with open boundary conditions:\n\n >>> g=netket.graph.Grid(extent=[2,2,3], pbc=False)\n >>> print(g.n_nodes)\n 12\n \"\"\"\n if extent is None:\n if length is None:\n raise TypeError(\"Required argument 'extent' missing\")\n else:\n warnings.warn(\n \"'length' is deprecated and may be removed in future versions, \"\n \"use 'extent' instead\",\n FutureWarning,\n )\n extent = np.asarray(length, dtype=int)\n else:\n if length is not None:\n raise TypeError(\n \"'length' is a deprecated alias of 'extent', do not supply both\"\n )\n else:\n extent = np.asarray(extent, dtype=int)\n\n ndim = len(extent)\n if isinstance(pbc, bool):\n pbc = [pbc] * ndim\n return Lattice(\n basis_vectors=np.eye(ndim),\n extent=extent,\n pbc=pbc,\n point_group=lambda: _grid_point_group(extent, pbc),\n )\n\n\ndef Hypercube(length: int, n_dim: int = 1, *, pbc: bool = True) -> Lattice:\n r\"\"\"Constructs a hypercubic lattice with equal side length in all dimensions.\n Periodic boundary conditions can also be imposed.\n\n Args:\n length: Side length of the hypercube; must always be >=1\n n_dim: Dimension of the hypercube; must be at least 1.\n pbc: Whether the hypercube should have periodic boundary conditions\n (in all directions)\n\n Examples:\n A 10x10x10 cubic lattice with periodic boundary conditions can be\n constructed as follows:\n\n >>> import netket\n >>> g = netket.graph.Hypercube(10, n_dim=3, pbc=True)\n >>> print(g.n_nodes)\n 1000\n \"\"\"\n if not isinstance(length, int) or length <= 0:\n raise TypeError(\"Argument `length` must be a positive integer\")\n length_vector = [length] * n_dim\n return Grid(length_vector, pbc=pbc)\n\n\ndef Cube(length: int, *, pbc: bool = True) -> Lattice:\n \"\"\"Constructs a cubic lattice of side `length`\n Periodic boundary conditions can also be imposed\n\n Args:\n length: Side length of the cube; must always be >=1\n pbc: Whether the cube should have periodic boundary conditions\n (in all directions)\n\n Examples:\n A 10×10×10 cubic lattice with periodic boundary conditions can be\n constructed as follows:\n\n >>> import netket\n >>> g=netket.graph.Cube(10, pbc=True)\n >>> print(g.n_nodes)\n 1000\n \"\"\"\n return Hypercube(length, pbc=pbc, n_dim=3)\n\n\ndef Square(length: int, *, pbc: bool = True) -> Lattice:\n \"\"\"Constructs a square lattice of side `length`\n Periodic boundary conditions can also be imposed\n\n Args:\n length: Side length of the square; must always be >=1\n pbc: Whether the square should have periodic boundary\n conditions (in both directions)\n\n Examples:\n A 10x10 square lattice with periodic boundary conditions can be\n constructed as follows:\n\n >>> import netket\n >>> g=netket.graph.Square(10, pbc=True)\n >>> print(g.n_nodes)\n 100\n \"\"\"\n return Hypercube(length, pbc=pbc, n_dim=2)\n\n\ndef Chain(length: int, *, pbc: bool = True) -> Lattice:\n r\"\"\"Constructs a chain of `length` sites.\n Periodic boundary conditions can also be imposed\n\n Args:\n 
length: Length of the chain. It must always be >=1\n pbc: Whether the chain should have periodic boundary conditions\n\n Examples:\n A 10 site chain with periodic boundary conditions can be\n constructed as follows:\n\n >>> import netket\n >>> g = netket.graph.Chain(10, pbc=True)\n >>> print(g.n_nodes)\n 10\n \"\"\"\n return Hypercube(length, pbc=pbc, n_dim=1)\n\n\ndef BCC(extent: Sequence[int], *, pbc: Union[bool, Sequence[bool]] = True) -> Lattice:\n \"\"\"Constructs a BCC lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed\n Sites are returned at the Bravais lattice points.\n\n Arguments:\n extent: Number of primitive unit cells along each direction, needs to be\n an array of length 3\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a BCC lattice with 3×3×3 primitive unit cells:\n\n >>> from netket.graph import BCC\n >>> g = BCC(extent=[3,3,3])\n >>> print(g.n_nodes)\n 27\n \"\"\"\n basis = [[-0.5, 0.5, 0.5], [0.5, -0.5, 0.5], [0.5, 0.5, -0.5]]\n # determine if full point group is realised by the simulation box\n point_group = cubic.Oh() if np.all(pbc) and len(set(extent)) == 1 else None\n return Lattice(basis_vectors=basis, extent=extent, pbc=pbc, point_group=point_group)\n\n\ndef FCC(extent: Sequence[int], *, pbc: Union[bool, Sequence[bool]] = True) -> Lattice:\n \"\"\"Constructs an FCC lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed\n Sites are returned at the Bravais lattice points.\n\n Arguments:\n extent: Number of primitive unit cells along each direction, needs\n to be an array of length 3\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct an FCC lattice with 3×3×3 primitive unit cells:\n\n >>> from netket.graph import FCC\n >>> g = FCC(extent=[3,3,3])\n >>> print(g.n_nodes)\n 27\n \"\"\"\n basis = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]\n # determine if full point group is realised by the simulation box\n point_group = cubic.Oh() if np.all(pbc) and len(set(extent)) == 1 else None\n return Lattice(basis_vectors=basis, extent=extent, pbc=pbc, point_group=point_group)\n\n\ndef Diamond(\n extent: Sequence[int], *, pbc: Union[bool, Sequence[bool]] = True\n) -> Lattice:\n \"\"\"Constructs a diamond lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed.\n\n Sites are returned at the 8a Wyckoff positions of the FCC lattice\n ([000], [1/4,1/4,1/4], and translations thereof).\n\n Arguments:\n extent: Number of primitive unit cells along each direction, needs to\n be an array of length 3\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a diamond lattice with 3×3×3 
primitive unit cells:\n\n >>> from netket.graph import Diamond\n >>> g = Diamond(extent=[3,3,3])\n >>> print(g.n_nodes)\n 54\n \"\"\"\n basis = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]\n sites = [[0, 0, 0], [0.25, 0.25, 0.25]]\n # determine if full point group is realised by the simulation box\n point_group = cubic.Fd3m() if np.all(pbc) and len(set(extent)) == 1 else None\n return Lattice(\n basis_vectors=basis,\n site_offsets=sites,\n extent=extent,\n pbc=pbc,\n point_group=point_group,\n )\n\n\ndef Pyrochlore(\n extent: Sequence[int], *, pbc: Union[bool, Sequence[bool]] = True\n) -> Lattice:\n \"\"\"Constructs a pyrochlore lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed.\n\n Sites are returned at the 16c Wyckoff positions of the FCC lattice\n ([111]/8, [1 -1 -1]/8, [-1 1 -1]/8, [-1 -1 1]/8, and translations thereof).\n\n Arguments:\n extent: Number of primitive unit cells along each direction, needs to be\n an array of length 3\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a pyrochlore lattice with 3×3×3 primitive unit cells:\n\n >>> from netket.graph import Pyrochlore\n >>> g = Pyrochlore(extent=[3,3,3])\n >>> print(g.n_nodes)\n 108\n \"\"\"\n basis = [[0, 0.5, 0.5], [0.5, 0, 0.5], [0.5, 0.5, 0]]\n sites = np.array([[1, 1, 1], [1, 3, 3], [3, 1, 3], [3, 3, 1]]) / 8\n # determine if full point group is realised by the simulation box\n point_group = cubic.Fd3m() if np.all(pbc) and len(set(extent)) == 1 else None\n return Lattice(\n basis_vectors=basis,\n site_offsets=sites,\n extent=extent,\n pbc=pbc,\n point_group=point_group,\n )\n\n\ndef _hexagonal_general(\n extent, *, site_offsets=None, pbc: Union[bool, Sequence[bool]] = True\n) -> Lattice:\n basis = [[1, 0], [0.5, 0.75 ** 0.5]]\n # determine if full point group is realised by the simulation box\n point_group = planar.D(6) if np.all(pbc) and extent[0] == extent[1] else None\n return Lattice(\n basis_vectors=basis,\n extent=extent,\n site_offsets=site_offsets,\n pbc=pbc,\n point_group=point_group,\n )\n\n\ndef Triangular(extent, *, pbc: Union[bool, Sequence[bool]] = True) -> Lattice:\n r\"\"\"Constructs a triangular lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed\n Sites are returned at the Bravais lattice points.\n\n Arguments:\n extent: Number of unit cells along each direction, needs to be an array\n of length 2\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a triangular lattice with 3 × 3 unit cells:\n\n >>> from netket.graph import Triangular\n >>> g = Triangular(extent=[3, 3])\n >>> print(g.n_nodes)\n 9\n \"\"\"\n return _hexagonal_general(extent, site_offsets=None, pbc=pbc)\n\n\ndef Honeycomb(extent, *, pbc: Union[bool, Sequence[bool]] = True) -> Lattice:\n r\"\"\"Constructs a honeycomb lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed.\n Sites are returned at the 2b Wyckoff positions.\n\n 
Arguments:\n extent: Number of unit cells along each direction, needs to be an array\n of length 2\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a honeycomb lattice with 3 × 3 unit cells:\n\n >>> from netket.graph import Honeycomb\n >>> g = Honeycomb(extent=[3, 3])\n >>> print(g.n_nodes)\n 18\n \"\"\"\n return _hexagonal_general(\n extent, site_offsets=[[0.5, 0.5 / 3 ** 0.5], [1, 1 / 3 ** 0.5]], pbc=pbc\n )\n\n\ndef Kagome(extent, *, pbc: Union[bool, Sequence[bool]] = True) -> Lattice:\n r\"\"\"Constructs a kagome lattice of a given spatial extent.\n Periodic boundary conditions can also be imposed.\n Sites are returned at the 3c Wyckoff positions.\n\n Arguments:\n extent: Number of unit cells along each direction, needs to be an array\n of length 2\n pbc: If `True`, the lattice will have periodic boundary conditions (PBC);\n if `False`, the lattice will have open boundary conditions (OBC).\n This parameter can also be a list of booleans with same length as\n the parameter `length`, in which case each dimension will have\n PBC/OBC depending on the corresponding entry of `pbc`.\n\n Example:\n Construct a kagome lattice with 3 × 3 unit cells:\n\n >>> from netket.graph import Kagome\n >>> g = Kagome(extent=[3, 3])\n >>> print(g.n_nodes)\n 27\n \"\"\"\n return _hexagonal_general(\n extent,\n site_offsets=[[0.5, 0], [0.25, 0.75 ** 0.5 / 2], [0.75, 0.75 ** 0.5 / 2]],\n pbc=pbc,\n )\n", "path": "netket/graph/common_lattices.py" } ]
diff --git a/netket/graph/common_lattices.py b/netket/graph/common_lattices.py index 831e9037ab..a3c74eb76b 100644 --- a/netket/graph/common_lattices.py +++ b/netket/graph/common_lattices.py @@ -140,6 +140,8 @@ def Hypercube(length: int, n_dim: int = 1, *, pbc: bool = True) -> Lattice: >>> print(g.n_nodes) 1000 """ + if not isinstance(length, int) or length <= 0: + raise TypeError("Argument `length` must be a positive integer") length_vector = [length] * n_dim return Grid(length_vector, pbc=pbc) diff --git a/test/graph/test_graph.py b/test/graph/test_graph.py index 878e94a62e..f9d99d91de 100644 --- a/test/graph/test_graph.py +++ b/test/graph/test_graph.py @@ -305,6 +305,18 @@ def test_graph_wrong(): with pytest.raises(ValueError): nk.graph.Graph([1, 2, 3], [1, 2, 3]) + with pytest.raises(TypeError): + nk.graph.Hypercube([5]) + + with pytest.raises(TypeError): + nk.graph.Cube([5]) + + with pytest.raises(TypeError): + nk.graph.Square([5]) + + with pytest.raises(TypeError): + nk.graph.Chain([5]) + def test_edges_are_correct(): def check_edges(length, n_dim, pbc):
conda__conda-3538
Invalid JSON output

When installing `Jupyter` I sometimes see the following error:

```
dbus post-link :: /etc/machine-id not found ..
dbus post-link :: .. using /proc/sys/kernel/random/boot_id
```

When installing with the `--json` flag the error output causes the JSON to be invalid. Example:

```
root@head:~# conda create -n test_env2 python jupyter -y --json -q
dbus post-link :: /etc/machine-id not found ..
dbus post-link :: .. using /proc/sys/kernel/random/boot_id
{
  "actions": {
    "LINK": [
      "expat-2.1.0-0 1",
      ...
    ],
    "PREFIX": "/opt/a/b/c/muunitnoc/anaconda/envs/test_env2",
    "SYMLINK_CONDA": [
      "/opt/a/b/c/muunitnoc/anaconda"
    ],
    "op_order": [
      "RM_FETCHED",
      "FETCH",
      "RM_EXTRACTED",
      "EXTRACT",
      "UNLINK",
      "LINK",
      "SYMLINK_CONDA"
    ]
  },
  "success": true
}
```

In my opinion this is fairly critical -- I need to depend on valid JSON output.

cc @kalefranz @koverholt @mingwandroid
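For context on the failure mode above: whatever a package's post-link script prints to stdout gets interleaved with the `--json` report, which is what corrupts the output. The snippet below is a minimal, hypothetical sketch (not the actual conda patch) of how a script runner could capture the child process's output and forward it to stderr so that stdout stays machine-readable; the helper name `run_script_quietly` and its arguments are illustrative assumptions, not conda API.

```python
# Hypothetical sketch, not conda's actual fix: run a post-link/pre-unlink
# script while keeping everything it prints off stdout, so that stdout can
# remain a single valid JSON document under --json.
import subprocess
import sys


def run_script_quietly(args, env=None):
    """Run the script given by `args`; forward its output to stderr.

    Returns True on success, False if the script exits non-zero.
    """
    try:
        # Capture both stdout and stderr of the child process.
        output = subprocess.check_output(args, env=env, stderr=subprocess.STDOUT)
    except subprocess.CalledProcessError as e:
        sys.stderr.write(e.output.decode("utf-8", errors="replace"))
        return False
    if output:
        # Send the script's chatter to stderr; JSON on stdout stays parseable.
        sys.stderr.write(output.decode("utf-8", errors="replace"))
    return True
```

Whether the captured text should be re-emitted on stderr, logged, or suppressed entirely under `--json` is a design choice this sketch leaves open.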
[ { "content": "# (c) 2012-2014 Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\"\"\" This module contains:\n * all low-level code for extracting, linking and unlinking packages\n * a very simple CLI\n\nThese API functions have argument names referring to:\n\n dist: canonical package name (e.g. 'numpy-1.6.2-py26_0')\n\n pkgs_dir: the \"packages directory\" (e.g. '/opt/anaconda/pkgs' or\n '/home/joe/envs/.pkgs')\n\n prefix: the prefix of a particular environment, which may also\n be the \"default\" environment (i.e. sys.prefix),\n but is otherwise something like '/opt/anaconda/envs/foo',\n or even any prefix, e.g. '/home/joe/myenv'\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport errno\nimport functools\nimport json\nimport logging\nimport os\nimport re\nimport shlex\nimport shutil\nimport stat\nimport struct\nimport subprocess\nimport sys\nimport tarfile\nimport traceback\nfrom collections import namedtuple\nfrom enum import Enum\nfrom itertools import chain\nfrom os.path import (abspath, basename, dirname, exists, isdir, isfile, islink, join, normcase,\n normpath)\n\nfrom . import CondaError\nfrom .base.constants import UTF8\nfrom .base.context import context\nfrom .common.disk import exp_backoff_fn, rm_rf\nfrom .common.url import path_to_url\nfrom .exceptions import CondaOSError, LinkError, PaddingError\nfrom .lock import DirectoryLock, FileLock\nfrom .models.channel import Channel\nfrom .utils import on_win\n\n\n# conda-build compatibility\nfrom .common.disk import delete_trash, move_to_trash, move_path_to_trash # NOQA\n\n\nif on_win:\n import ctypes\n from ctypes import wintypes\n\n CreateHardLink = ctypes.windll.kernel32.CreateHardLinkW\n CreateHardLink.restype = wintypes.BOOL\n CreateHardLink.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR,\n wintypes.LPVOID]\n try:\n CreateSymbolicLink = ctypes.windll.kernel32.CreateSymbolicLinkW\n CreateSymbolicLink.restype = wintypes.BOOL\n CreateSymbolicLink.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR,\n wintypes.DWORD]\n except AttributeError:\n CreateSymbolicLink = None\n\n def win_hard_link(src, dst):\n \"Equivalent to os.link, using the win32 CreateHardLink call.\"\n if not CreateHardLink(dst, src, None):\n raise CondaOSError('win32 hard link failed')\n\n def win_soft_link(src, dst):\n \"Equivalent to os.symlink, using the win32 CreateSymbolicLink call.\"\n if CreateSymbolicLink is None:\n raise CondaOSError('win32 soft link not supported')\n if not CreateSymbolicLink(dst, src, isdir(src)):\n raise CondaOSError('win32 soft link failed')\n\n def win_conda_bat_redirect(src, dst, shell):\n \"\"\"Special function for Windows XP where the `CreateSymbolicLink`\n function is not available.\n\n Simply creates a `.bat` file at `dst` which calls `src` together with\n all command line arguments.\n\n Works of course only with callable files, e.g. 
`.bat` or `.exe` files.\n \"\"\"\n from conda.utils import shells\n try:\n os.makedirs(os.path.dirname(dst))\n except OSError as exc: # Python >2.5\n if exc.errno == errno.EEXIST and os.path.isdir(os.path.dirname(dst)):\n pass\n else:\n raise\n\n # bat file redirect\n if not os.path.isfile(dst + '.bat'):\n with open(dst + '.bat', 'w') as f:\n f.write('@echo off\\ncall \"%s\" %%*\\n' % src)\n\n # TODO: probably need one here for powershell at some point\n\n # This one is for bash/cygwin/msys\n # set default shell to bash.exe when not provided, as that's most common\n if not shell:\n shell = \"bash.exe\"\n\n # technically these are \"links\" - but islink doesn't work on win\n if not os.path.isfile(dst):\n with open(dst, \"w\") as f:\n f.write(\"#!/usr/bin/env bash \\n\")\n if src.endswith(\"conda\"):\n f.write('%s \"$@\"' % shells[shell]['path_to'](src+\".exe\"))\n else:\n f.write('source %s \"$@\"' % shells[shell]['path_to'](src))\n # Make the new file executable\n # http://stackoverflow.com/a/30463972/1170370\n mode = os.stat(dst).st_mode\n mode |= (mode & 292) >> 2 # copy R bits to X\n os.chmod(dst, mode)\n\nlog = logging.getLogger(__name__)\nstdoutlog = logging.getLogger('stdoutlog')\n\n\nSHEBANG_REGEX = re.compile(br'^(#!((?:\\\\ |[^ \\n\\r])+)(.*))')\n\n\nclass FileMode(Enum):\n text = 'text'\n binary = 'binary'\n\n def __str__(self):\n return \"%s\" % self.value\n\n\nLINK_HARD = 1\nLINK_SOFT = 2\nLINK_COPY = 3\nlink_name_map = {\n LINK_HARD: 'hard-link',\n LINK_SOFT: 'soft-link',\n LINK_COPY: 'copy',\n}\n\ndef _link(src, dst, linktype=LINK_HARD):\n if linktype == LINK_HARD:\n if on_win:\n win_hard_link(src, dst)\n else:\n os.link(src, dst)\n elif linktype == LINK_SOFT:\n if on_win:\n win_soft_link(src, dst)\n else:\n os.symlink(src, dst)\n elif linktype == LINK_COPY:\n # copy relative symlinks as symlinks\n if not on_win and islink(src) and not os.readlink(src).startswith('/'):\n os.symlink(os.readlink(src), dst)\n else:\n shutil.copy2(src, dst)\n else:\n raise CondaError(\"Did not expect linktype=%r\" % linktype)\n\n\ndef _remove_readonly(func, path, excinfo):\n os.chmod(path, stat.S_IWRITE)\n func(path)\n\n\ndef warn_failed_remove(function, path, exc_info):\n if exc_info[1].errno == errno.EACCES:\n log.warn(\"Cannot remove, permission denied: {0}\".format(path))\n elif exc_info[1].errno == errno.ENOTEMPTY:\n log.warn(\"Cannot remove, not empty: {0}\".format(path))\n else:\n log.warn(\"Cannot remove, unknown reason: {0}\".format(path))\n\n\ndef yield_lines(path):\n \"\"\"Generator function for lines in file. 
Empty generator if path does not exist.\n\n Args:\n path (str): path to file\n\n Returns:\n iterator: each line in file, not starting with '#'\n\n \"\"\"\n try:\n with open(path) as fh:\n for line in fh:\n line = line.strip()\n if not line or line.startswith('#'):\n continue\n yield line\n except (IOError, OSError) as e:\n if e.errno == errno.ENOENT:\n raise StopIteration\n else:\n raise\n\n\nPREFIX_PLACEHOLDER = ('/opt/anaconda1anaconda2'\n # this is intentionally split into parts,\n # such that running this program on itself\n # will leave it unchanged\n 'anaconda3')\n\n# backwards compatibility for conda-build\nprefix_placeholder = PREFIX_PLACEHOLDER\n\n\ndef read_has_prefix(path):\n \"\"\"\n reads `has_prefix` file and return dict mapping filepaths to tuples(placeholder, FileMode)\n\n A line in `has_prefix` contains one of\n * filepath\n * placeholder mode filepath\n\n mode values are one of\n * text\n * binary\n \"\"\"\n ParseResult = namedtuple('ParseResult', ('placeholder', 'filemode', 'filepath'))\n\n def parse_line(line):\n # placeholder, filemode, filepath\n parts = tuple(x.strip('\"\\'') for x in shlex.split(line, posix=False))\n if len(parts) == 1:\n return ParseResult(PREFIX_PLACEHOLDER, FileMode.text, parts[0])\n elif len(parts) == 3:\n return ParseResult(parts[0], FileMode(parts[1]), parts[2])\n else:\n raise RuntimeError(\"Invalid has_prefix file at path: %s\" % path)\n parsed_lines = (parse_line(line) for line in yield_lines(path))\n return {pr.filepath: (pr.placeholder, pr.filemode) for pr in parsed_lines}\n\n\nclass _PaddingError(Exception):\n pass\n\n\ndef binary_replace(data, a, b):\n \"\"\"\n Perform a binary replacement of `data`, where the placeholder `a` is\n replaced with `b` and the remaining string is padded with null characters.\n All input arguments are expected to be bytes objects.\n \"\"\"\n if on_win and has_pyzzer_entry_point(data):\n return replace_pyzzer_entry_point_shebang(data, a, b)\n\n def replace(match):\n occurances = match.group().count(a)\n padding = (len(a) - len(b))*occurances\n if padding < 0:\n raise _PaddingError\n return match.group().replace(a, b) + b'\\0' * padding\n\n original_data_len = len(data)\n pat = re.compile(re.escape(a) + b'([^\\0]*?)\\0')\n data = pat.sub(replace, data)\n assert len(data) == original_data_len\n\n return data\n\n\ndef replace_long_shebang(mode, data):\n if mode is FileMode.text:\n shebang_match = SHEBANG_REGEX.match(data)\n if shebang_match:\n whole_shebang, executable, options = shebang_match.groups()\n if len(whole_shebang) > 127:\n executable_name = executable.decode(UTF8).split('/')[-1]\n new_shebang = '#!/usr/bin/env %s%s' % (executable_name, options.decode(UTF8))\n data = data.replace(whole_shebang, new_shebang.encode(UTF8))\n else:\n # TODO: binary shebangs exist; figure this out in the future if text works well\n log.debug(\"TODO: binary shebangs exist; figure this out in the future if text works well\")\n return data\n\n\ndef has_pyzzer_entry_point(data):\n pos = data.rfind(b'PK\\x05\\x06')\n return pos >= 0\n\n\ndef replace_pyzzer_entry_point_shebang(all_data, placeholder, new_prefix):\n \"\"\"Code adapted from pyzzer. 
This is meant to deal with entry point exe's created by distlib,\n which consist of a launcher, then a shebang, then a zip archive of the entry point code to run.\n We need to change the shebang.\n https://bitbucket.org/vinay.sajip/pyzzer/src/5d5740cb04308f067d5844a56fbe91e7a27efccc/pyzzer/__init__.py?at=default&fileviewer=file-view-default#__init__.py-112 # NOQA\n \"\"\"\n # Copyright (c) 2013 Vinay Sajip.\n #\n # Permission is hereby granted, free of charge, to any person obtaining a copy\n # of this software and associated documentation files (the \"Software\"), to deal\n # in the Software without restriction, including without limitation the rights\n # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n # copies of the Software, and to permit persons to whom the Software is\n # furnished to do so, subject to the following conditions:\n #\n # The above copyright notice and this permission notice shall be included in\n # all copies or substantial portions of the Software.\n #\n # THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n # THE SOFTWARE.\n launcher = shebang = None\n pos = all_data.rfind(b'PK\\x05\\x06')\n if pos >= 0:\n end_cdr = all_data[pos + 12:pos + 20]\n cdr_size, cdr_offset = struct.unpack('<LL', end_cdr)\n arc_pos = pos - cdr_size - cdr_offset\n data = all_data[arc_pos:]\n if arc_pos > 0:\n pos = all_data.rfind(b'#!', 0, arc_pos)\n if pos >= 0:\n shebang = all_data[pos:arc_pos]\n if pos > 0:\n launcher = all_data[:pos]\n\n if data and shebang and launcher:\n if hasattr(placeholder, 'encode'):\n placeholder = placeholder.encode('utf-8')\n if hasattr(new_prefix, 'encode'):\n new_prefix = new_prefix.encode('utf-8')\n shebang = shebang.replace(placeholder, new_prefix)\n all_data = b\"\".join([launcher, shebang, data])\n return all_data\n\n\ndef replace_prefix(mode, data, placeholder, new_prefix):\n if mode is FileMode.text:\n data = data.replace(placeholder.encode(UTF8), new_prefix.encode(UTF8))\n elif mode == FileMode.binary:\n data = binary_replace(data, placeholder.encode(UTF8), new_prefix.encode(UTF8))\n else:\n raise RuntimeError(\"Invalid mode: %r\" % mode)\n return data\n\n\ndef update_prefix(path, new_prefix, placeholder=PREFIX_PLACEHOLDER, mode=FileMode.text):\n if on_win and mode is FileMode.text:\n # force all prefix replacements to forward slashes to simplify need to escape backslashes\n # replace with unix-style path separators\n new_prefix = new_prefix.replace('\\\\', '/')\n\n path = os.path.realpath(path)\n with open(path, 'rb') as fi:\n original_data = data = fi.read()\n\n data = replace_prefix(mode, data, placeholder, new_prefix)\n if not on_win:\n data = replace_long_shebang(mode, data)\n\n if data == original_data:\n return\n st = os.lstat(path)\n with exp_backoff_fn(open, path, 'wb') as fo:\n fo.write(data)\n os.chmod(path, stat.S_IMODE(st.st_mode))\n\n\ndef dist2pair(dist):\n dist = str(dist)\n if dist.endswith(']'):\n dist = dist.split('[', 1)[0]\n if dist.endswith('.tar.bz2'):\n dist = dist[:-8]\n parts = dist.split('::', 1)\n return 'defaults' if len(parts) < 2 else parts[0], parts[-1]\n\n\ndef dist2quad(dist):\n 
channel, dist = dist2pair(dist)\n parts = dist.rsplit('-', 2) + ['', '']\n return (str(parts[0]), str(parts[1]), str(parts[2]), str(channel))\n\n\ndef dist2name(dist):\n return dist2quad(dist)[0]\n\n\ndef name_dist(dist):\n return dist2name(dist)\n\n\ndef dist2filename(dist, suffix='.tar.bz2'):\n return dist2pair(dist)[1] + suffix\n\n\ndef dist2dirname(dist):\n return dist2filename(dist, '')\n\n\ndef create_meta(prefix, dist, info_dir, extra_info):\n \"\"\"\n Create the conda metadata, in a given prefix, for a given package.\n \"\"\"\n # read info/index.json first\n with open(join(info_dir, 'index.json')) as fi:\n meta = json.load(fi)\n # add extra info, add to our intenral cache\n meta.update(extra_info)\n if not meta.get('url'):\n meta['url'] = read_url(dist)\n # write into <env>/conda-meta/<dist>.json\n meta_dir = join(prefix, 'conda-meta')\n if not isdir(meta_dir):\n os.makedirs(meta_dir)\n with open(join(meta_dir, dist2filename(dist, '.json')), 'w') as fo:\n json.dump(meta, fo, indent=2, sort_keys=True)\n if prefix in linked_data_:\n load_linked_data(prefix, dist, meta)\n\n\ndef mk_menus(prefix, files, remove=False):\n \"\"\"\n Create cross-platform menu items (e.g. Windows Start Menu)\n\n Passes all menu config files %PREFIX%/Menu/*.json to ``menuinst.install``.\n ``remove=True`` will remove the menu items.\n \"\"\"\n menu_files = [f for f in files\n if (f.lower().startswith('menu/') and\n f.lower().endswith('.json'))]\n if not menu_files:\n return\n elif basename(abspath(prefix)).startswith('_'):\n logging.warn(\"Environment name starts with underscore '_'. \"\n \"Skipping menu installation.\")\n return\n\n try:\n import menuinst\n except:\n logging.warn(\"Menuinst could not be imported:\")\n logging.warn(traceback.format_exc())\n return\n\n for f in menu_files:\n try:\n menuinst.install(join(prefix, f), remove, prefix)\n except:\n stdoutlog.error(\"menuinst Exception:\")\n stdoutlog.error(traceback.format_exc())\n\n\ndef run_script(prefix, dist, action='post-link', env_prefix=None):\n \"\"\"\n call the post-link (or pre-unlink) script, and return True on success,\n False on failure\n \"\"\"\n path = join(prefix, 'Scripts' if on_win else 'bin', '.%s-%s.%s' % (\n name_dist(dist),\n action,\n 'bat' if on_win else 'sh'))\n if not isfile(path):\n return True\n if on_win:\n try:\n args = [os.environ['COMSPEC'], '/c', path]\n except KeyError:\n return False\n else:\n shell_path = '/bin/sh' if 'bsd' in sys.platform else '/bin/bash'\n args = [shell_path, path]\n env = os.environ.copy()\n env[str('ROOT_PREFIX')] = sys.prefix\n env[str('PREFIX')] = str(env_prefix or prefix)\n env[str('PKG_NAME')], env[str('PKG_VERSION')], env[str('PKG_BUILDNUM')], _ = dist2quad(dist)\n if action == 'pre-link':\n env[str('SOURCE_DIR')] = str(prefix)\n try:\n subprocess.check_call(args, env=env)\n except subprocess.CalledProcessError:\n return False\n return True\n\n\ndef read_url(dist):\n res = package_cache().get(dist, {}).get('urls', (None,))\n return res[0] if res else None\n\n\ndef read_icondata(source_dir):\n import base64\n\n try:\n data = open(join(source_dir, 'info', 'icon.png'), 'rb').read()\n return base64.b64encode(data).decode(UTF8)\n except IOError:\n pass\n return None\n\n\ndef read_no_link(info_dir):\n return set(chain(yield_lines(join(info_dir, 'no_link')),\n yield_lines(join(info_dir, 'no_softlink'))))\n\n\n# Should this be an API function?\ndef symlink_conda(prefix, root_dir, shell=None):\n # do not symlink root env - this clobbers activate incorrectly.\n # prefix should always be longer 
than, or outside the root dir.\n if normcase(normpath(prefix)) in normcase(normpath(root_dir)):\n return\n if on_win:\n where = 'Scripts'\n symlink_fn = functools.partial(win_conda_bat_redirect, shell=shell)\n else:\n where = 'bin'\n symlink_fn = os.symlink\n if not isdir(join(prefix, where)):\n os.makedirs(join(prefix, where))\n symlink_conda_hlp(prefix, root_dir, where, symlink_fn)\n\n\ndef symlink_conda_hlp(prefix, root_dir, where, symlink_fn):\n scripts = [\"conda\", \"activate\", \"deactivate\"]\n prefix_where = join(prefix, where)\n if not isdir(prefix_where):\n os.makedirs(prefix_where)\n for f in scripts:\n root_file = join(root_dir, where, f)\n prefix_file = join(prefix_where, f)\n try:\n # try to kill stale links if they exist\n if os.path.lexists(prefix_file):\n rm_rf(prefix_file)\n # if they're in use, they won't be killed. Skip making new symlink.\n if not os.path.lexists(prefix_file):\n symlink_fn(root_file, prefix_file)\n except (IOError, OSError) as e:\n if (os.path.lexists(prefix_file) and\n (e.errno in (errno.EPERM, errno.EACCES, errno.EROFS, errno.EEXIST))):\n log.debug(\"Cannot symlink {0} to {1}. Ignoring since link already exists.\"\n .format(root_file, prefix_file))\n else:\n raise\n\n\n# ========================== begin API functions =========================\n\ndef try_hard_link(pkgs_dir, prefix, dist):\n dist = dist2filename(dist, '')\n src = join(pkgs_dir, dist, 'info', 'index.json')\n dst = join(prefix, '.tmp-%s' % dist)\n assert isfile(src), src\n assert not isfile(dst), dst\n try:\n if not isdir(prefix):\n os.makedirs(prefix)\n _link(src, dst, LINK_HARD)\n # Some file systems (at least BeeGFS) do not support hard-links\n # between files in different directories. Depending on the\n # file system configuration, a symbolic link may be created\n # instead. If a symbolic link is created instead of a hard link,\n # return False.\n return not os.path.islink(dst)\n except OSError:\n return False\n finally:\n rm_rf(dst)\n\n\n# ------- package cache ----- construction\n\n# The current package cache does not support the ability to store multiple packages\n# with the same filename from different channels. Furthermore, the filename itself\n# cannot be used to disambiguate; we must read the URL from urls.txt to determine\n# the source channel. For this reason, we now fully parse the directory and its\n# accompanying urls.txt file so we can make arbitrary queries without having to\n# read this data multiple times.\n\npackage_cache_ = {}\nfname_table_ = {}\n\n\ndef add_cached_package(pdir, url, overwrite=False, urlstxt=False):\n \"\"\"\n Adds a new package to the cache. The URL is used to determine the\n package filename and channel, and the directory pdir is scanned for\n both a compressed and an extracted version of that package. 
If\n urlstxt=True, this URL will be appended to the urls.txt file in the\n cache, so that subsequent runs will correctly identify the package.\n \"\"\"\n package_cache()\n if '/' in url:\n dist = url.rsplit('/', 1)[-1]\n else:\n dist = url\n url = None\n if dist.endswith('.tar.bz2'):\n fname = dist\n dist = dist[:-8]\n else:\n fname = dist + '.tar.bz2'\n xpkg = join(pdir, fname)\n if not overwrite and xpkg in fname_table_:\n return\n if not isfile(xpkg):\n xpkg = None\n xdir = join(pdir, dist)\n if not (isdir(xdir) and\n isfile(join(xdir, 'info', 'files')) and\n isfile(join(xdir, 'info', 'index.json'))):\n xdir = None\n if not (xpkg or xdir):\n return\n if url:\n url = url\n schannel = Channel(url).canonical_name\n prefix = '' if schannel == 'defaults' else schannel + '::'\n xkey = xpkg or (xdir + '.tar.bz2')\n fname_table_[xkey] = fname_table_[path_to_url(xkey)] = prefix\n fkey = prefix + dist\n rec = package_cache_.get(fkey)\n if rec is None:\n rec = package_cache_[fkey] = dict(files=[], dirs=[], urls=[])\n if url and url not in rec['urls']:\n rec['urls'].append(url)\n if xpkg and xpkg not in rec['files']:\n rec['files'].append(xpkg)\n if xdir and xdir not in rec['dirs']:\n rec['dirs'].append(xdir)\n if urlstxt:\n try:\n with open(join(pdir, 'urls.txt'), 'a') as fa:\n fa.write('%s\\n' % url)\n except IOError:\n pass\n\n\ndef package_cache():\n \"\"\"\n Initializes the package cache. Each entry in the package cache\n dictionary contains three lists:\n - urls: the URLs used to refer to that package\n - files: the full pathnames to fetched copies of that package\n - dirs: the full pathnames to extracted copies of that package\n Nominally there should be no more than one entry in each list, but\n in theory this can handle the presence of multiple copies.\n \"\"\"\n if package_cache_:\n return package_cache_\n # Stops recursion\n package_cache_['@'] = None\n # import pdb; pdb.set_trace()\n for pdir in context.pkgs_dirs:\n try:\n data = open(join(pdir, 'urls.txt')).read()\n for url in data.split()[::-1]:\n if '/' in url:\n add_cached_package(pdir, url)\n except IOError:\n pass\n if isdir(pdir):\n for fn in os.listdir(pdir):\n add_cached_package(pdir, fn)\n del package_cache_['@']\n return package_cache_\n\n\ndef cached_url(url):\n package_cache()\n return fname_table_.get(url)\n\n\ndef find_new_location(dist):\n \"\"\"\n Determines the download location for the given package, and the name\n of a package, if any, that must be removed to make room. If the\n given package is already in the cache, it returns its current location,\n under the assumption that it will be overwritten. 
If the conflict\n value is None, that means there is no other package with that same\n name present in the cache (e.g., no collision).\n \"\"\"\n rec = package_cache().get(dist)\n if rec:\n return dirname((rec['files'] or rec['dirs'])[0]), None\n fname = dist2filename(dist)\n dname = fname[:-8]\n # Look for a location with no conflicts\n # On the second pass, just pick the first location\n for p in range(2):\n for pkg_dir in context.pkgs_dirs:\n pkg_path = join(pkg_dir, fname)\n prefix = fname_table_.get(pkg_path)\n if p or prefix is None:\n return pkg_dir, prefix + dname if p else None\n\n\n# ------- package cache ----- fetched\n\ndef fetched():\n \"\"\"\n Returns the (set of canonical names) of all fetched packages\n \"\"\"\n return set(dist for dist, rec in package_cache().items() if rec['files'])\n\n\ndef is_fetched(dist):\n \"\"\"\n Returns the full path of the fetched package, or None if it is not in the cache.\n \"\"\"\n for fn in package_cache().get(dist, {}).get('files', ()):\n return fn\n\n\ndef rm_fetched(dist):\n \"\"\"\n Checks to see if the requested package is in the cache; and if so, it removes both\n the package itself and its extracted contents.\n \"\"\"\n rec = package_cache().get(dist)\n if rec is None:\n return\n for fname in rec['files']:\n del fname_table_[fname]\n del fname_table_[path_to_url(fname)]\n with FileLock(fname):\n rm_rf(fname)\n if exists(fname):\n log.warn(\"File not removed during RM_FETCHED instruction: %s\", fname)\n for fname in rec['dirs']:\n with FileLock(fname):\n rm_rf(fname)\n if exists(fname):\n log.warn(\"Directory not removed during RM_FETCHED instruction: %s\", fname)\n del package_cache_[dist]\n\n\n# ------- package cache ----- extracted\n\ndef extracted():\n \"\"\"\n return the (set of canonical names) of all extracted packages\n \"\"\"\n return set(dist for dist, rec in package_cache().items() if rec['dirs'])\n\n\ndef is_extracted(dist):\n \"\"\"\n returns the full path of the extracted data for the requested package,\n or None if that package is not extracted.\n \"\"\"\n for fn in package_cache().get(dist, {}).get('dirs', ()):\n return fn\n\n\ndef rm_extracted(dist):\n \"\"\"\n Removes any extracted versions of the given package found in the cache.\n \"\"\"\n rec = package_cache().get(dist)\n if rec is None:\n return\n for fname in rec['dirs']:\n with FileLock(fname):\n rm_rf(fname)\n if exists(fname):\n log.warn(\"Directory not removed during RM_EXTRACTED instruction: %s\", fname)\n if rec['files']:\n rec['dirs'] = []\n else:\n del package_cache_[dist]\n\n\ndef extract(dist):\n \"\"\"\n Extract a package, i.e. make a package available for linkage. We assume\n that the compressed package is located in the packages directory.\n \"\"\"\n rec = package_cache()[dist]\n url = rec['urls'][0]\n fname = rec['files'][0]\n assert url and fname\n pkgs_dir = dirname(fname)\n path = fname[:-8]\n with FileLock(path):\n temp_path = path + '.tmp'\n rm_rf(temp_path)\n with tarfile.open(fname) as t:\n t.extractall(path=temp_path)\n rm_rf(path)\n exp_backoff_fn(os.rename, temp_path, path)\n if sys.platform.startswith('linux') and os.getuid() == 0:\n # When extracting as root, tarfile will by restore ownership\n # of extracted files. 
However, we want root to be the owner\n # (our implementation of --no-same-owner).\n for root, dirs, files in os.walk(path):\n for fn in files:\n p = join(root, fn)\n os.lchown(p, 0, 0)\n add_cached_package(pkgs_dir, url, overwrite=True)\n\n# Because the conda-meta .json files do not include channel names in\n# their filenames, we have to pull that information from the .json\n# files themselves. This has made it necessary in virtually all\n# circumstances to load the full set of files from this directory.\n# Therefore, we have implemented a full internal cache of this\n# data to eliminate redundant file reads.\nlinked_data_ = {}\n\n\ndef load_linked_data(prefix, dist, rec=None, ignore_channels=False):\n schannel, dname = dist2pair(dist)\n meta_file = join(prefix, 'conda-meta', dname + '.json')\n if rec is None:\n try:\n with open(meta_file) as fi:\n rec = json.load(fi)\n except IOError:\n return None\n else:\n linked_data(prefix)\n url = rec.get('url')\n fn = rec.get('fn')\n if not fn:\n fn = rec['fn'] = url.rsplit('/', 1)[-1] if url else dname + '.tar.bz2'\n if fn[:-8] != dname:\n log.debug('Ignoring invalid package metadata file: %s' % meta_file)\n return None\n channel = rec.get('channel')\n if channel:\n channel = channel.rstrip('/')\n if not url or (url.startswith('file:') and channel[0] != '<unknown>'):\n url = rec['url'] = channel + '/' + fn\n channel, schannel = Channel(url).url_channel_wtf\n rec['url'] = url\n rec['channel'] = channel\n rec['schannel'] = schannel\n rec['link'] = rec.get('link') or True\n if ignore_channels:\n linked_data_[prefix][dname] = rec\n else:\n cprefix = '' if schannel == 'defaults' else schannel + '::'\n linked_data_[prefix][str(cprefix + dname)] = rec\n return rec\n\n\ndef delete_linked_data(prefix, dist, delete=True):\n recs = linked_data_.get(prefix)\n if recs and dist in recs:\n del recs[dist]\n if delete:\n meta_path = join(prefix, 'conda-meta', dist2filename(dist, '.json'))\n if isfile(meta_path):\n rm_rf(meta_path)\n\n\ndef delete_linked_data_any(path):\n '''Here, path may be a complete prefix or a dist inside a prefix'''\n dist = ''\n while True:\n if path in linked_data_:\n if dist:\n delete_linked_data(path, dist)\n return True\n else:\n del linked_data_[path]\n return True\n path, dist = os.path.split(path)\n if not dist:\n return False\n\n\ndef load_meta(prefix, dist):\n \"\"\"\n Return the install meta-data for a linked package in a prefix, or None\n if the package is not linked in the prefix.\n \"\"\"\n return linked_data(prefix).get(dist)\n\n\ndef linked_data(prefix, ignore_channels=False):\n \"\"\"\n Return a dictionary of the linked packages in prefix.\n \"\"\"\n # Manually memoized so it can be updated\n recs = linked_data_.get(prefix)\n if recs is None:\n recs = linked_data_[prefix] = {}\n meta_dir = join(prefix, 'conda-meta')\n if isdir(meta_dir):\n for fn in os.listdir(meta_dir):\n if fn.endswith('.json'):\n load_linked_data(prefix, fn[:-5], ignore_channels=ignore_channels)\n return recs\n\n\ndef linked(prefix, ignore_channels=False):\n \"\"\"\n Return the set of canonical names of linked packages in prefix.\n \"\"\"\n return set(linked_data(prefix, ignore_channels=ignore_channels).keys())\n\n\ndef is_linked(prefix, dist):\n \"\"\"\n Return the install metadata for a linked package in a prefix, or None\n if the package is not linked in the prefix.\n \"\"\"\n # FIXME Functions that begin with `is_` should return True/False\n return load_meta(prefix, dist)\n\n\ndef link(prefix, dist, linktype=LINK_HARD, index=None):\n \"\"\"\n Set up a 
package in a specified (environment) prefix. We assume that\n the package has been extracted (using extract() above).\n \"\"\"\n log.debug(\"linking package %s with link type %s\", dist, linktype)\n index = index or {}\n source_dir = is_extracted(dist)\n assert source_dir is not None\n pkgs_dir = dirname(source_dir)\n log.debug('pkgs_dir=%r, prefix=%r, dist=%r, linktype=%r' %\n (pkgs_dir, prefix, dist, linktype))\n\n if not run_script(source_dir, dist, 'pre-link', prefix):\n raise LinkError('Error: pre-link failed: %s' % dist)\n\n info_dir = join(source_dir, 'info')\n files = list(yield_lines(join(info_dir, 'files')))\n has_prefix_files = read_has_prefix(join(info_dir, 'has_prefix'))\n no_link = read_no_link(info_dir)\n\n # for the lock issue\n # may run into lock if prefix not exist\n if not isdir(prefix):\n os.makedirs(prefix)\n\n with DirectoryLock(prefix), FileLock(source_dir):\n for filepath in files:\n src = join(source_dir, filepath)\n dst = join(prefix, filepath)\n dst_dir = dirname(dst)\n if not isdir(dst_dir):\n os.makedirs(dst_dir)\n if os.path.exists(dst):\n log.info(\"file exists, but clobbering: %r\" % dst)\n rm_rf(dst)\n lt = linktype\n if filepath in has_prefix_files or filepath in no_link or islink(src):\n lt = LINK_COPY\n\n try:\n _link(src, dst, lt)\n except OSError as e:\n raise CondaOSError('failed to link (src=%r, dst=%r, type=%r, error=%r)' %\n (src, dst, lt, e))\n\n for filepath in sorted(has_prefix_files):\n placeholder, mode = has_prefix_files[filepath]\n try:\n update_prefix(join(prefix, filepath), prefix, placeholder, mode)\n except _PaddingError:\n raise PaddingError(dist, placeholder, len(placeholder))\n\n # make sure that the child environment behaves like the parent,\n # wrt user/system install on win\n # This is critical for doing shortcuts correctly\n if on_win:\n nonadmin = join(sys.prefix, \".nonadmin\")\n if isfile(nonadmin):\n open(join(prefix, \".nonadmin\"), 'w').close()\n\n if context.shortcuts:\n mk_menus(prefix, files, remove=False)\n\n if not run_script(prefix, dist, 'post-link'):\n raise LinkError(\"Error: post-link failed for: %s\" % dist)\n\n meta_dict = index.get(dist + '.tar.bz2', {})\n meta_dict['url'] = read_url(dist)\n alt_files_path = join(prefix, 'conda-meta', dist2filename(dist, '.files'))\n if isfile(alt_files_path):\n # alt_files_path is a hack for noarch\n meta_dict['files'] = list(yield_lines(alt_files_path))\n else:\n meta_dict['files'] = files\n meta_dict['link'] = {'source': source_dir,\n 'type': link_name_map.get(linktype)}\n if 'icon' in meta_dict:\n meta_dict['icondata'] = read_icondata(source_dir)\n\n create_meta(prefix, dist, info_dir, meta_dict)\n\n\ndef unlink(prefix, dist):\n \"\"\"\n Remove a package from the specified environment, it is an error if the\n package does not exist in the prefix.\n \"\"\"\n with DirectoryLock(prefix):\n log.debug(\"unlinking package %s\", dist)\n run_script(prefix, dist, 'pre-unlink')\n\n meta = load_meta(prefix, dist)\n # Always try to run this - it should not throw errors where menus do not exist\n mk_menus(prefix, meta['files'], remove=True)\n dst_dirs1 = set()\n\n for f in meta['files']:\n dst = join(prefix, f)\n dst_dirs1.add(dirname(dst))\n rm_rf(dst)\n\n # remove the meta-file last\n delete_linked_data(prefix, dist, delete=True)\n\n dst_dirs2 = set()\n for path in dst_dirs1:\n while len(path) > len(prefix):\n dst_dirs2.add(path)\n path = dirname(path)\n # in case there is nothing left\n dst_dirs2.add(join(prefix, 'conda-meta'))\n dst_dirs2.add(prefix)\n\n # remove empty directories\n 
for path in sorted(dst_dirs2, key=len, reverse=True):\n if isdir(path) and not os.listdir(path):\n rm_rf(path)\n\n\ndef messages(prefix):\n path = join(prefix, '.messages.txt')\n try:\n with open(path) as fi:\n sys.stdout.write(fi.read())\n except IOError:\n pass\n finally:\n rm_rf(path)\n", "path": "conda/install.py" } ]
[ { "content": "# (c) 2012-2014 Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\"\"\" This module contains:\n * all low-level code for extracting, linking and unlinking packages\n * a very simple CLI\n\nThese API functions have argument names referring to:\n\n dist: canonical package name (e.g. 'numpy-1.6.2-py26_0')\n\n pkgs_dir: the \"packages directory\" (e.g. '/opt/anaconda/pkgs' or\n '/home/joe/envs/.pkgs')\n\n prefix: the prefix of a particular environment, which may also\n be the \"default\" environment (i.e. sys.prefix),\n but is otherwise something like '/opt/anaconda/envs/foo',\n or even any prefix, e.g. '/home/joe/myenv'\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport errno\nimport functools\nimport json\nimport logging\nimport os\nimport re\nimport shlex\nimport shutil\nimport stat\nimport struct\nimport subprocess\nimport sys\nimport tarfile\nimport traceback\nfrom collections import namedtuple\nfrom enum import Enum\nfrom itertools import chain\nfrom os.path import (abspath, basename, dirname, exists, isdir, isfile, islink, join, normcase,\n normpath)\n\nfrom . import CondaError\nfrom .base.constants import UTF8\nfrom .base.context import context\nfrom .common.disk import exp_backoff_fn, rm_rf\nfrom .common.url import path_to_url\nfrom .exceptions import CondaOSError, LinkError, PaddingError\nfrom .lock import DirectoryLock, FileLock\nfrom .models.channel import Channel\nfrom .utils import on_win\n\n\n# conda-build compatibility\nfrom .common.disk import delete_trash, move_to_trash, move_path_to_trash # NOQA\n\n\nif on_win:\n import ctypes\n from ctypes import wintypes\n\n CreateHardLink = ctypes.windll.kernel32.CreateHardLinkW\n CreateHardLink.restype = wintypes.BOOL\n CreateHardLink.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR,\n wintypes.LPVOID]\n try:\n CreateSymbolicLink = ctypes.windll.kernel32.CreateSymbolicLinkW\n CreateSymbolicLink.restype = wintypes.BOOL\n CreateSymbolicLink.argtypes = [wintypes.LPCWSTR, wintypes.LPCWSTR,\n wintypes.DWORD]\n except AttributeError:\n CreateSymbolicLink = None\n\n def win_hard_link(src, dst):\n \"Equivalent to os.link, using the win32 CreateHardLink call.\"\n if not CreateHardLink(dst, src, None):\n raise CondaOSError('win32 hard link failed')\n\n def win_soft_link(src, dst):\n \"Equivalent to os.symlink, using the win32 CreateSymbolicLink call.\"\n if CreateSymbolicLink is None:\n raise CondaOSError('win32 soft link not supported')\n if not CreateSymbolicLink(dst, src, isdir(src)):\n raise CondaOSError('win32 soft link failed')\n\n def win_conda_bat_redirect(src, dst, shell):\n \"\"\"Special function for Windows XP where the `CreateSymbolicLink`\n function is not available.\n\n Simply creates a `.bat` file at `dst` which calls `src` together with\n all command line arguments.\n\n Works of course only with callable files, e.g. 
`.bat` or `.exe` files.\n \"\"\"\n from conda.utils import shells\n try:\n os.makedirs(os.path.dirname(dst))\n except OSError as exc: # Python >2.5\n if exc.errno == errno.EEXIST and os.path.isdir(os.path.dirname(dst)):\n pass\n else:\n raise\n\n # bat file redirect\n if not os.path.isfile(dst + '.bat'):\n with open(dst + '.bat', 'w') as f:\n f.write('@echo off\\ncall \"%s\" %%*\\n' % src)\n\n # TODO: probably need one here for powershell at some point\n\n # This one is for bash/cygwin/msys\n # set default shell to bash.exe when not provided, as that's most common\n if not shell:\n shell = \"bash.exe\"\n\n # technically these are \"links\" - but islink doesn't work on win\n if not os.path.isfile(dst):\n with open(dst, \"w\") as f:\n f.write(\"#!/usr/bin/env bash \\n\")\n if src.endswith(\"conda\"):\n f.write('%s \"$@\"' % shells[shell]['path_to'](src+\".exe\"))\n else:\n f.write('source %s \"$@\"' % shells[shell]['path_to'](src))\n # Make the new file executable\n # http://stackoverflow.com/a/30463972/1170370\n mode = os.stat(dst).st_mode\n mode |= (mode & 292) >> 2 # copy R bits to X\n os.chmod(dst, mode)\n\nlog = logging.getLogger(__name__)\nstdoutlog = logging.getLogger('stdoutlog')\n\n\nSHEBANG_REGEX = re.compile(br'^(#!((?:\\\\ |[^ \\n\\r])+)(.*))')\n\n\nclass FileMode(Enum):\n text = 'text'\n binary = 'binary'\n\n def __str__(self):\n return \"%s\" % self.value\n\n\nLINK_HARD = 1\nLINK_SOFT = 2\nLINK_COPY = 3\nlink_name_map = {\n LINK_HARD: 'hard-link',\n LINK_SOFT: 'soft-link',\n LINK_COPY: 'copy',\n}\n\ndef _link(src, dst, linktype=LINK_HARD):\n if linktype == LINK_HARD:\n if on_win:\n win_hard_link(src, dst)\n else:\n os.link(src, dst)\n elif linktype == LINK_SOFT:\n if on_win:\n win_soft_link(src, dst)\n else:\n os.symlink(src, dst)\n elif linktype == LINK_COPY:\n # copy relative symlinks as symlinks\n if not on_win and islink(src) and not os.readlink(src).startswith('/'):\n os.symlink(os.readlink(src), dst)\n else:\n shutil.copy2(src, dst)\n else:\n raise CondaError(\"Did not expect linktype=%r\" % linktype)\n\n\ndef _remove_readonly(func, path, excinfo):\n os.chmod(path, stat.S_IWRITE)\n func(path)\n\n\ndef warn_failed_remove(function, path, exc_info):\n if exc_info[1].errno == errno.EACCES:\n log.warn(\"Cannot remove, permission denied: {0}\".format(path))\n elif exc_info[1].errno == errno.ENOTEMPTY:\n log.warn(\"Cannot remove, not empty: {0}\".format(path))\n else:\n log.warn(\"Cannot remove, unknown reason: {0}\".format(path))\n\n\ndef yield_lines(path):\n \"\"\"Generator function for lines in file. 
Empty generator if path does not exist.\n\n Args:\n path (str): path to file\n\n Returns:\n iterator: each line in file, not starting with '#'\n\n \"\"\"\n try:\n with open(path) as fh:\n for line in fh:\n line = line.strip()\n if not line or line.startswith('#'):\n continue\n yield line\n except (IOError, OSError) as e:\n if e.errno == errno.ENOENT:\n raise StopIteration\n else:\n raise\n\n\nPREFIX_PLACEHOLDER = ('/opt/anaconda1anaconda2'\n # this is intentionally split into parts,\n # such that running this program on itself\n # will leave it unchanged\n 'anaconda3')\n\n# backwards compatibility for conda-build\nprefix_placeholder = PREFIX_PLACEHOLDER\n\n\ndef read_has_prefix(path):\n \"\"\"\n reads `has_prefix` file and return dict mapping filepaths to tuples(placeholder, FileMode)\n\n A line in `has_prefix` contains one of\n * filepath\n * placeholder mode filepath\n\n mode values are one of\n * text\n * binary\n \"\"\"\n ParseResult = namedtuple('ParseResult', ('placeholder', 'filemode', 'filepath'))\n\n def parse_line(line):\n # placeholder, filemode, filepath\n parts = tuple(x.strip('\"\\'') for x in shlex.split(line, posix=False))\n if len(parts) == 1:\n return ParseResult(PREFIX_PLACEHOLDER, FileMode.text, parts[0])\n elif len(parts) == 3:\n return ParseResult(parts[0], FileMode(parts[1]), parts[2])\n else:\n raise RuntimeError(\"Invalid has_prefix file at path: %s\" % path)\n parsed_lines = (parse_line(line) for line in yield_lines(path))\n return {pr.filepath: (pr.placeholder, pr.filemode) for pr in parsed_lines}\n\n\nclass _PaddingError(Exception):\n pass\n\n\ndef binary_replace(data, a, b):\n \"\"\"\n Perform a binary replacement of `data`, where the placeholder `a` is\n replaced with `b` and the remaining string is padded with null characters.\n All input arguments are expected to be bytes objects.\n \"\"\"\n if on_win and has_pyzzer_entry_point(data):\n return replace_pyzzer_entry_point_shebang(data, a, b)\n\n def replace(match):\n occurances = match.group().count(a)\n padding = (len(a) - len(b))*occurances\n if padding < 0:\n raise _PaddingError\n return match.group().replace(a, b) + b'\\0' * padding\n\n original_data_len = len(data)\n pat = re.compile(re.escape(a) + b'([^\\0]*?)\\0')\n data = pat.sub(replace, data)\n assert len(data) == original_data_len\n\n return data\n\n\ndef replace_long_shebang(mode, data):\n if mode is FileMode.text:\n shebang_match = SHEBANG_REGEX.match(data)\n if shebang_match:\n whole_shebang, executable, options = shebang_match.groups()\n if len(whole_shebang) > 127:\n executable_name = executable.decode(UTF8).split('/')[-1]\n new_shebang = '#!/usr/bin/env %s%s' % (executable_name, options.decode(UTF8))\n data = data.replace(whole_shebang, new_shebang.encode(UTF8))\n else:\n # TODO: binary shebangs exist; figure this out in the future if text works well\n log.debug(\"TODO: binary shebangs exist; figure this out in the future if text works well\")\n return data\n\n\ndef has_pyzzer_entry_point(data):\n pos = data.rfind(b'PK\\x05\\x06')\n return pos >= 0\n\n\ndef replace_pyzzer_entry_point_shebang(all_data, placeholder, new_prefix):\n \"\"\"Code adapted from pyzzer. 
This is meant to deal with entry point exe's created by distlib,\n which consist of a launcher, then a shebang, then a zip archive of the entry point code to run.\n We need to change the shebang.\n https://bitbucket.org/vinay.sajip/pyzzer/src/5d5740cb04308f067d5844a56fbe91e7a27efccc/pyzzer/__init__.py?at=default&fileviewer=file-view-default#__init__.py-112 # NOQA\n \"\"\"\n # Copyright (c) 2013 Vinay Sajip.\n #\n # Permission is hereby granted, free of charge, to any person obtaining a copy\n # of this software and associated documentation files (the \"Software\"), to deal\n # in the Software without restriction, including without limitation the rights\n # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n # copies of the Software, and to permit persons to whom the Software is\n # furnished to do so, subject to the following conditions:\n #\n # The above copyright notice and this permission notice shall be included in\n # all copies or substantial portions of the Software.\n #\n # THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n # THE SOFTWARE.\n launcher = shebang = None\n pos = all_data.rfind(b'PK\\x05\\x06')\n if pos >= 0:\n end_cdr = all_data[pos + 12:pos + 20]\n cdr_size, cdr_offset = struct.unpack('<LL', end_cdr)\n arc_pos = pos - cdr_size - cdr_offset\n data = all_data[arc_pos:]\n if arc_pos > 0:\n pos = all_data.rfind(b'#!', 0, arc_pos)\n if pos >= 0:\n shebang = all_data[pos:arc_pos]\n if pos > 0:\n launcher = all_data[:pos]\n\n if data and shebang and launcher:\n if hasattr(placeholder, 'encode'):\n placeholder = placeholder.encode('utf-8')\n if hasattr(new_prefix, 'encode'):\n new_prefix = new_prefix.encode('utf-8')\n shebang = shebang.replace(placeholder, new_prefix)\n all_data = b\"\".join([launcher, shebang, data])\n return all_data\n\n\ndef replace_prefix(mode, data, placeholder, new_prefix):\n if mode is FileMode.text:\n data = data.replace(placeholder.encode(UTF8), new_prefix.encode(UTF8))\n elif mode == FileMode.binary:\n data = binary_replace(data, placeholder.encode(UTF8), new_prefix.encode(UTF8))\n else:\n raise RuntimeError(\"Invalid mode: %r\" % mode)\n return data\n\n\ndef update_prefix(path, new_prefix, placeholder=PREFIX_PLACEHOLDER, mode=FileMode.text):\n if on_win and mode is FileMode.text:\n # force all prefix replacements to forward slashes to simplify need to escape backslashes\n # replace with unix-style path separators\n new_prefix = new_prefix.replace('\\\\', '/')\n\n path = os.path.realpath(path)\n with open(path, 'rb') as fi:\n original_data = data = fi.read()\n\n data = replace_prefix(mode, data, placeholder, new_prefix)\n if not on_win:\n data = replace_long_shebang(mode, data)\n\n if data == original_data:\n return\n st = os.lstat(path)\n with exp_backoff_fn(open, path, 'wb') as fo:\n fo.write(data)\n os.chmod(path, stat.S_IMODE(st.st_mode))\n\n\ndef dist2pair(dist):\n dist = str(dist)\n if dist.endswith(']'):\n dist = dist.split('[', 1)[0]\n if dist.endswith('.tar.bz2'):\n dist = dist[:-8]\n parts = dist.split('::', 1)\n return 'defaults' if len(parts) < 2 else parts[0], parts[-1]\n\n\ndef dist2quad(dist):\n 
channel, dist = dist2pair(dist)\n parts = dist.rsplit('-', 2) + ['', '']\n return (str(parts[0]), str(parts[1]), str(parts[2]), str(channel))\n\n\ndef dist2name(dist):\n return dist2quad(dist)[0]\n\n\ndef name_dist(dist):\n return dist2name(dist)\n\n\ndef dist2filename(dist, suffix='.tar.bz2'):\n return dist2pair(dist)[1] + suffix\n\n\ndef dist2dirname(dist):\n return dist2filename(dist, '')\n\n\ndef create_meta(prefix, dist, info_dir, extra_info):\n \"\"\"\n Create the conda metadata, in a given prefix, for a given package.\n \"\"\"\n # read info/index.json first\n with open(join(info_dir, 'index.json')) as fi:\n meta = json.load(fi)\n # add extra info, add to our intenral cache\n meta.update(extra_info)\n if not meta.get('url'):\n meta['url'] = read_url(dist)\n # write into <env>/conda-meta/<dist>.json\n meta_dir = join(prefix, 'conda-meta')\n if not isdir(meta_dir):\n os.makedirs(meta_dir)\n with open(join(meta_dir, dist2filename(dist, '.json')), 'w') as fo:\n json.dump(meta, fo, indent=2, sort_keys=True)\n if prefix in linked_data_:\n load_linked_data(prefix, dist, meta)\n\n\ndef mk_menus(prefix, files, remove=False):\n \"\"\"\n Create cross-platform menu items (e.g. Windows Start Menu)\n\n Passes all menu config files %PREFIX%/Menu/*.json to ``menuinst.install``.\n ``remove=True`` will remove the menu items.\n \"\"\"\n menu_files = [f for f in files\n if (f.lower().startswith('menu/') and\n f.lower().endswith('.json'))]\n if not menu_files:\n return\n elif basename(abspath(prefix)).startswith('_'):\n logging.warn(\"Environment name starts with underscore '_'. \"\n \"Skipping menu installation.\")\n return\n\n try:\n import menuinst\n except:\n logging.warn(\"Menuinst could not be imported:\")\n logging.warn(traceback.format_exc())\n return\n\n for f in menu_files:\n try:\n menuinst.install(join(prefix, f), remove, prefix)\n except:\n stdoutlog.error(\"menuinst Exception:\")\n stdoutlog.error(traceback.format_exc())\n\n\ndef run_script(prefix, dist, action='post-link', env_prefix=None):\n \"\"\"\n call the post-link (or pre-unlink) script, and return True on success,\n False on failure\n \"\"\"\n path = join(prefix, 'Scripts' if on_win else 'bin', '.%s-%s.%s' % (\n name_dist(dist),\n action,\n 'bat' if on_win else 'sh'))\n if not isfile(path):\n return True\n if on_win:\n try:\n args = [os.environ['COMSPEC'], '/c', path]\n except KeyError:\n return False\n else:\n shell_path = '/bin/sh' if 'bsd' in sys.platform else '/bin/bash'\n args = [shell_path, path]\n env = os.environ.copy()\n env[str('ROOT_PREFIX')] = sys.prefix\n env[str('PREFIX')] = str(env_prefix or prefix)\n env[str('PKG_NAME')], env[str('PKG_VERSION')], env[str('PKG_BUILDNUM')], _ = dist2quad(dist)\n if action == 'pre-link':\n env[str('SOURCE_DIR')] = str(prefix)\n try:\n subprocess.check_call(args, env=env)\n except subprocess.CalledProcessError:\n return False\n return True\n\n\ndef read_url(dist):\n res = package_cache().get(dist, {}).get('urls', (None,))\n return res[0] if res else None\n\n\ndef read_icondata(source_dir):\n import base64\n\n try:\n data = open(join(source_dir, 'info', 'icon.png'), 'rb').read()\n return base64.b64encode(data).decode(UTF8)\n except IOError:\n pass\n return None\n\n\ndef read_no_link(info_dir):\n return set(chain(yield_lines(join(info_dir, 'no_link')),\n yield_lines(join(info_dir, 'no_softlink'))))\n\n\n# Should this be an API function?\ndef symlink_conda(prefix, root_dir, shell=None):\n # do not symlink root env - this clobbers activate incorrectly.\n # prefix should always be longer 
than, or outside the root dir.\n if normcase(normpath(prefix)) in normcase(normpath(root_dir)):\n return\n if on_win:\n where = 'Scripts'\n symlink_fn = functools.partial(win_conda_bat_redirect, shell=shell)\n else:\n where = 'bin'\n symlink_fn = os.symlink\n if not isdir(join(prefix, where)):\n os.makedirs(join(prefix, where))\n symlink_conda_hlp(prefix, root_dir, where, symlink_fn)\n\n\ndef symlink_conda_hlp(prefix, root_dir, where, symlink_fn):\n scripts = [\"conda\", \"activate\", \"deactivate\"]\n prefix_where = join(prefix, where)\n if not isdir(prefix_where):\n os.makedirs(prefix_where)\n for f in scripts:\n root_file = join(root_dir, where, f)\n prefix_file = join(prefix_where, f)\n try:\n # try to kill stale links if they exist\n if os.path.lexists(prefix_file):\n rm_rf(prefix_file)\n # if they're in use, they won't be killed. Skip making new symlink.\n if not os.path.lexists(prefix_file):\n symlink_fn(root_file, prefix_file)\n except (IOError, OSError) as e:\n if (os.path.lexists(prefix_file) and\n (e.errno in (errno.EPERM, errno.EACCES, errno.EROFS, errno.EEXIST))):\n log.debug(\"Cannot symlink {0} to {1}. Ignoring since link already exists.\"\n .format(root_file, prefix_file))\n else:\n raise\n\n\n# ========================== begin API functions =========================\n\ndef try_hard_link(pkgs_dir, prefix, dist):\n dist = dist2filename(dist, '')\n src = join(pkgs_dir, dist, 'info', 'index.json')\n dst = join(prefix, '.tmp-%s' % dist)\n assert isfile(src), src\n assert not isfile(dst), dst\n try:\n if not isdir(prefix):\n os.makedirs(prefix)\n _link(src, dst, LINK_HARD)\n # Some file systems (at least BeeGFS) do not support hard-links\n # between files in different directories. Depending on the\n # file system configuration, a symbolic link may be created\n # instead. If a symbolic link is created instead of a hard link,\n # return False.\n return not os.path.islink(dst)\n except OSError:\n return False\n finally:\n rm_rf(dst)\n\n\n# ------- package cache ----- construction\n\n# The current package cache does not support the ability to store multiple packages\n# with the same filename from different channels. Furthermore, the filename itself\n# cannot be used to disambiguate; we must read the URL from urls.txt to determine\n# the source channel. For this reason, we now fully parse the directory and its\n# accompanying urls.txt file so we can make arbitrary queries without having to\n# read this data multiple times.\n\npackage_cache_ = {}\nfname_table_ = {}\n\n\ndef add_cached_package(pdir, url, overwrite=False, urlstxt=False):\n \"\"\"\n Adds a new package to the cache. The URL is used to determine the\n package filename and channel, and the directory pdir is scanned for\n both a compressed and an extracted version of that package. 
If\n urlstxt=True, this URL will be appended to the urls.txt file in the\n cache, so that subsequent runs will correctly identify the package.\n \"\"\"\n package_cache()\n if '/' in url:\n dist = url.rsplit('/', 1)[-1]\n else:\n dist = url\n url = None\n if dist.endswith('.tar.bz2'):\n fname = dist\n dist = dist[:-8]\n else:\n fname = dist + '.tar.bz2'\n xpkg = join(pdir, fname)\n if not overwrite and xpkg in fname_table_:\n return\n if not isfile(xpkg):\n xpkg = None\n xdir = join(pdir, dist)\n if not (isdir(xdir) and\n isfile(join(xdir, 'info', 'files')) and\n isfile(join(xdir, 'info', 'index.json'))):\n xdir = None\n if not (xpkg or xdir):\n return\n if url:\n url = url\n schannel = Channel(url).canonical_name\n prefix = '' if schannel == 'defaults' else schannel + '::'\n xkey = xpkg or (xdir + '.tar.bz2')\n fname_table_[xkey] = fname_table_[path_to_url(xkey)] = prefix\n fkey = prefix + dist\n rec = package_cache_.get(fkey)\n if rec is None:\n rec = package_cache_[fkey] = dict(files=[], dirs=[], urls=[])\n if url and url not in rec['urls']:\n rec['urls'].append(url)\n if xpkg and xpkg not in rec['files']:\n rec['files'].append(xpkg)\n if xdir and xdir not in rec['dirs']:\n rec['dirs'].append(xdir)\n if urlstxt:\n try:\n with open(join(pdir, 'urls.txt'), 'a') as fa:\n fa.write('%s\\n' % url)\n except IOError:\n pass\n\n\ndef package_cache():\n \"\"\"\n Initializes the package cache. Each entry in the package cache\n dictionary contains three lists:\n - urls: the URLs used to refer to that package\n - files: the full pathnames to fetched copies of that package\n - dirs: the full pathnames to extracted copies of that package\n Nominally there should be no more than one entry in each list, but\n in theory this can handle the presence of multiple copies.\n \"\"\"\n if package_cache_:\n return package_cache_\n # Stops recursion\n package_cache_['@'] = None\n # import pdb; pdb.set_trace()\n for pdir in context.pkgs_dirs:\n try:\n data = open(join(pdir, 'urls.txt')).read()\n for url in data.split()[::-1]:\n if '/' in url:\n add_cached_package(pdir, url)\n except IOError:\n pass\n if isdir(pdir):\n for fn in os.listdir(pdir):\n add_cached_package(pdir, fn)\n del package_cache_['@']\n return package_cache_\n\n\ndef cached_url(url):\n package_cache()\n return fname_table_.get(url)\n\n\ndef find_new_location(dist):\n \"\"\"\n Determines the download location for the given package, and the name\n of a package, if any, that must be removed to make room. If the\n given package is already in the cache, it returns its current location,\n under the assumption that it will be overwritten. 
If the conflict\n value is None, that means there is no other package with that same\n name present in the cache (e.g., no collision).\n \"\"\"\n rec = package_cache().get(dist)\n if rec:\n return dirname((rec['files'] or rec['dirs'])[0]), None\n fname = dist2filename(dist)\n dname = fname[:-8]\n # Look for a location with no conflicts\n # On the second pass, just pick the first location\n for p in range(2):\n for pkg_dir in context.pkgs_dirs:\n pkg_path = join(pkg_dir, fname)\n prefix = fname_table_.get(pkg_path)\n if p or prefix is None:\n return pkg_dir, prefix + dname if p else None\n\n\n# ------- package cache ----- fetched\n\ndef fetched():\n \"\"\"\n Returns the (set of canonical names) of all fetched packages\n \"\"\"\n return set(dist for dist, rec in package_cache().items() if rec['files'])\n\n\ndef is_fetched(dist):\n \"\"\"\n Returns the full path of the fetched package, or None if it is not in the cache.\n \"\"\"\n for fn in package_cache().get(dist, {}).get('files', ()):\n return fn\n\n\ndef rm_fetched(dist):\n \"\"\"\n Checks to see if the requested package is in the cache; and if so, it removes both\n the package itself and its extracted contents.\n \"\"\"\n rec = package_cache().get(dist)\n if rec is None:\n return\n for fname in rec['files']:\n del fname_table_[fname]\n del fname_table_[path_to_url(fname)]\n with FileLock(fname):\n rm_rf(fname)\n if exists(fname):\n log.warn(\"File not removed during RM_FETCHED instruction: %s\", fname)\n for fname in rec['dirs']:\n with FileLock(fname):\n rm_rf(fname)\n if exists(fname):\n log.warn(\"Directory not removed during RM_FETCHED instruction: %s\", fname)\n del package_cache_[dist]\n\n\n# ------- package cache ----- extracted\n\ndef extracted():\n \"\"\"\n return the (set of canonical names) of all extracted packages\n \"\"\"\n return set(dist for dist, rec in package_cache().items() if rec['dirs'])\n\n\ndef is_extracted(dist):\n \"\"\"\n returns the full path of the extracted data for the requested package,\n or None if that package is not extracted.\n \"\"\"\n for fn in package_cache().get(dist, {}).get('dirs', ()):\n return fn\n\n\ndef rm_extracted(dist):\n \"\"\"\n Removes any extracted versions of the given package found in the cache.\n \"\"\"\n rec = package_cache().get(dist)\n if rec is None:\n return\n for fname in rec['dirs']:\n with FileLock(fname):\n rm_rf(fname)\n if exists(fname):\n log.warn(\"Directory not removed during RM_EXTRACTED instruction: %s\", fname)\n if rec['files']:\n rec['dirs'] = []\n else:\n del package_cache_[dist]\n\n\ndef extract(dist):\n \"\"\"\n Extract a package, i.e. make a package available for linkage. We assume\n that the compressed package is located in the packages directory.\n \"\"\"\n rec = package_cache()[dist]\n url = rec['urls'][0]\n fname = rec['files'][0]\n assert url and fname\n pkgs_dir = dirname(fname)\n path = fname[:-8]\n with FileLock(path):\n temp_path = path + '.tmp'\n rm_rf(temp_path)\n with tarfile.open(fname) as t:\n t.extractall(path=temp_path)\n rm_rf(path)\n exp_backoff_fn(os.rename, temp_path, path)\n if sys.platform.startswith('linux') and os.getuid() == 0:\n # When extracting as root, tarfile will by restore ownership\n # of extracted files. 
However, we want root to be the owner\n # (our implementation of --no-same-owner).\n for root, dirs, files in os.walk(path):\n for fn in files:\n p = join(root, fn)\n os.lchown(p, 0, 0)\n add_cached_package(pkgs_dir, url, overwrite=True)\n\n# Because the conda-meta .json files do not include channel names in\n# their filenames, we have to pull that information from the .json\n# files themselves. This has made it necessary in virtually all\n# circumstances to load the full set of files from this directory.\n# Therefore, we have implemented a full internal cache of this\n# data to eliminate redundant file reads.\nlinked_data_ = {}\n\n\ndef load_linked_data(prefix, dist, rec=None, ignore_channels=False):\n schannel, dname = dist2pair(dist)\n meta_file = join(prefix, 'conda-meta', dname + '.json')\n if rec is None:\n try:\n with open(meta_file) as fi:\n rec = json.load(fi)\n except IOError:\n return None\n else:\n linked_data(prefix)\n url = rec.get('url')\n fn = rec.get('fn')\n if not fn:\n fn = rec['fn'] = url.rsplit('/', 1)[-1] if url else dname + '.tar.bz2'\n if fn[:-8] != dname:\n log.debug('Ignoring invalid package metadata file: %s' % meta_file)\n return None\n channel = rec.get('channel')\n if channel:\n channel = channel.rstrip('/')\n if not url or (url.startswith('file:') and channel[0] != '<unknown>'):\n url = rec['url'] = channel + '/' + fn\n channel, schannel = Channel(url).url_channel_wtf\n rec['url'] = url\n rec['channel'] = channel\n rec['schannel'] = schannel\n rec['link'] = rec.get('link') or True\n if ignore_channels:\n linked_data_[prefix][dname] = rec\n else:\n cprefix = '' if schannel == 'defaults' else schannel + '::'\n linked_data_[prefix][str(cprefix + dname)] = rec\n return rec\n\n\ndef delete_linked_data(prefix, dist, delete=True):\n recs = linked_data_.get(prefix)\n if recs and dist in recs:\n del recs[dist]\n if delete:\n meta_path = join(prefix, 'conda-meta', dist2filename(dist, '.json'))\n if isfile(meta_path):\n rm_rf(meta_path)\n\n\ndef delete_linked_data_any(path):\n '''Here, path may be a complete prefix or a dist inside a prefix'''\n dist = ''\n while True:\n if path in linked_data_:\n if dist:\n delete_linked_data(path, dist)\n return True\n else:\n del linked_data_[path]\n return True\n path, dist = os.path.split(path)\n if not dist:\n return False\n\n\ndef load_meta(prefix, dist):\n \"\"\"\n Return the install meta-data for a linked package in a prefix, or None\n if the package is not linked in the prefix.\n \"\"\"\n return linked_data(prefix).get(dist)\n\n\ndef linked_data(prefix, ignore_channels=False):\n \"\"\"\n Return a dictionary of the linked packages in prefix.\n \"\"\"\n # Manually memoized so it can be updated\n recs = linked_data_.get(prefix)\n if recs is None:\n recs = linked_data_[prefix] = {}\n meta_dir = join(prefix, 'conda-meta')\n if isdir(meta_dir):\n for fn in os.listdir(meta_dir):\n if fn.endswith('.json'):\n load_linked_data(prefix, fn[:-5], ignore_channels=ignore_channels)\n return recs\n\n\ndef linked(prefix, ignore_channels=False):\n \"\"\"\n Return the set of canonical names of linked packages in prefix.\n \"\"\"\n return set(linked_data(prefix, ignore_channels=ignore_channels).keys())\n\n\ndef is_linked(prefix, dist):\n \"\"\"\n Return the install metadata for a linked package in a prefix, or None\n if the package is not linked in the prefix.\n \"\"\"\n # FIXME Functions that begin with `is_` should return True/False\n return load_meta(prefix, dist)\n\n\ndef link(prefix, dist, linktype=LINK_HARD, index=None):\n \"\"\"\n Set up a 
package in a specified (environment) prefix. We assume that\n the package has been extracted (using extract() above).\n \"\"\"\n log.debug(\"linking package %s with link type %s\", dist, linktype)\n index = index or {}\n source_dir = is_extracted(dist)\n assert source_dir is not None\n pkgs_dir = dirname(source_dir)\n log.debug('pkgs_dir=%r, prefix=%r, dist=%r, linktype=%r' %\n (pkgs_dir, prefix, dist, linktype))\n\n if not run_script(source_dir, dist, 'pre-link', prefix):\n raise LinkError('Error: pre-link failed: %s' % dist)\n\n info_dir = join(source_dir, 'info')\n files = list(yield_lines(join(info_dir, 'files')))\n has_prefix_files = read_has_prefix(join(info_dir, 'has_prefix'))\n no_link = read_no_link(info_dir)\n\n # for the lock issue\n # may run into lock if prefix not exist\n if not isdir(prefix):\n os.makedirs(prefix)\n\n with DirectoryLock(prefix), FileLock(source_dir):\n for filepath in files:\n src = join(source_dir, filepath)\n dst = join(prefix, filepath)\n dst_dir = dirname(dst)\n if not isdir(dst_dir):\n os.makedirs(dst_dir)\n if os.path.exists(dst):\n log.info(\"file exists, but clobbering: %r\" % dst)\n rm_rf(dst)\n lt = linktype\n if filepath in has_prefix_files or filepath in no_link or islink(src):\n lt = LINK_COPY\n\n try:\n _link(src, dst, lt)\n except OSError as e:\n raise CondaOSError('failed to link (src=%r, dst=%r, type=%r, error=%r)' %\n (src, dst, lt, e))\n\n for filepath in sorted(has_prefix_files):\n placeholder, mode = has_prefix_files[filepath]\n try:\n update_prefix(join(prefix, filepath), prefix, placeholder, mode)\n except _PaddingError:\n raise PaddingError(dist, placeholder, len(placeholder))\n\n # make sure that the child environment behaves like the parent,\n # wrt user/system install on win\n # This is critical for doing shortcuts correctly\n if on_win:\n nonadmin = join(sys.prefix, \".nonadmin\")\n if isfile(nonadmin):\n open(join(prefix, \".nonadmin\"), 'w').close()\n\n if context.shortcuts:\n mk_menus(prefix, files, remove=False)\n\n if not run_script(prefix, dist, 'post-link'):\n raise LinkError(\"Error: post-link failed for: %s\" % dist)\n\n meta_dict = index.get(dist + '.tar.bz2', {})\n meta_dict['url'] = read_url(dist)\n alt_files_path = join(prefix, 'conda-meta', dist2filename(dist, '.files'))\n if isfile(alt_files_path):\n # alt_files_path is a hack for noarch\n meta_dict['files'] = list(yield_lines(alt_files_path))\n else:\n meta_dict['files'] = files\n meta_dict['link'] = {'source': source_dir,\n 'type': link_name_map.get(linktype)}\n if 'icon' in meta_dict:\n meta_dict['icondata'] = read_icondata(source_dir)\n\n create_meta(prefix, dist, info_dir, meta_dict)\n\n\ndef unlink(prefix, dist):\n \"\"\"\n Remove a package from the specified environment, it is an error if the\n package does not exist in the prefix.\n \"\"\"\n with DirectoryLock(prefix):\n log.debug(\"unlinking package %s\", dist)\n run_script(prefix, dist, 'pre-unlink')\n\n meta = load_meta(prefix, dist)\n # Always try to run this - it should not throw errors where menus do not exist\n mk_menus(prefix, meta['files'], remove=True)\n dst_dirs1 = set()\n\n for f in meta['files']:\n dst = join(prefix, f)\n dst_dirs1.add(dirname(dst))\n rm_rf(dst)\n\n # remove the meta-file last\n delete_linked_data(prefix, dist, delete=True)\n\n dst_dirs2 = set()\n for path in dst_dirs1:\n while len(path) > len(prefix):\n dst_dirs2.add(path)\n path = dirname(path)\n # in case there is nothing left\n dst_dirs2.add(join(prefix, 'conda-meta'))\n dst_dirs2.add(prefix)\n\n # remove empty directories\n 
for path in sorted(dst_dirs2, key=len, reverse=True):\n if isdir(path) and not os.listdir(path):\n rm_rf(path)\n\n\ndef messages(prefix):\n path = join(prefix, '.messages.txt')\n try:\n with open(path) as fi:\n fh = sys.stderr if context.json else sys.stdout\n fh.write(fi.read())\n except IOError:\n pass\n finally:\n rm_rf(path)\n", "path": "conda/install.py" } ]
diff --git a/conda/install.py b/conda/install.py index 128194407c5..f73050c2e32 100644 --- a/conda/install.py +++ b/conda/install.py @@ -1046,7 +1046,8 @@ def messages(prefix): path = join(prefix, '.messages.txt') try: with open(path) as fi: - sys.stdout.write(fi.read()) + fh = sys.stderr if context.json else sys.stdout + fh.write(fi.read()) except IOError: pass finally:
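The one-line change in this diff is about keeping machine-readable output intact: when `context.json` is set, conda's stdout carries JSON, so the human-oriented text collected in `.messages.txt` is routed to stderr instead. A minimal, self-contained sketch of that pattern follows; the function name and the `json_mode` flag are illustrative placeholders, not conda APIs.

```python
import sys


def emit_user_message(text, json_mode):
    # Human-oriented text goes to stderr when stdout is reserved for JSON,
    # so tools parsing the JSON stream never see stray prose mixed in.
    stream = sys.stderr if json_mode else sys.stdout
    stream.write(text)
    stream.flush()


if __name__ == "__main__":
    # In a --json style run, only structured data would be written to stdout.
    emit_user_message("post-link message intended for the user\n", json_mode=True)
```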
frappe__hrms-1526
Organizational Chart: Total connections include employees who have left

### Information about bug

<img width="329" alt="Screenshot 2024-03-08 at 11 20 37 AM" src="https://github.com/frappe/hrms/assets/20027965/b88248f8-502e-41fa-ba1a-87c0cd43165a">

The current system displays a total count of connections for each employee, including employees who are no longer with the company. However, when expanding an employee's connections, only active employees are shown.

**Expected Output:** The count should reflect only active employees, keeping the number displayed consistent with the individuals visible when any employee is selected.

### Module

HR

### Version

ERPNext: v14.x.x-develop () (develop)
Frappe Framework: v15.x.x-develop () (develop)
Frappe HR: v16.0.0-dev (develop)

### Installation method

manual install

### Relevant log output / Stack trace / Full Error Message

_No response_

### Code of Conduct

- [x] I agree to follow this project's Code of Conduct
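To make the mismatch concrete before the patched files below: `get_children` lists nodes filtered on `status = "Active"`, while `get_connections` counts every descendant in the employee's nested-set interval regardless of status. The toy reproduction below mimics that arithmetic with an in-memory list and made-up `lft`/`rgt` values instead of Frappe's query builder.

```python
# Illustrative only: a tiny nested-set tree with one employee who has left.
employees = [
    {"name": "EMP-001", "lft": 2, "rgt": 7, "status": "Active"},
    {"name": "EMP-002", "lft": 3, "rgt": 4, "status": "Left"},
    {"name": "EMP-003", "lft": 5, "rgt": 6, "status": "Active"},
]


def count_reports(lft, rgt, only_active):
    # Nested-set containment: a descendant's interval sits strictly inside
    # the ancestor's (lft, rgt) interval.
    return sum(
        1
        for e in employees
        if e["lft"] > lft
        and e["rgt"] < rgt
        and (not only_active or e["status"] == "Active")
    )


# The root of this toy tree spans lft=1, rgt=8.
print(count_reports(1, 8, only_active=False))  # 3 -> badge shown on the chart
print(count_reports(1, 8, only_active=True))   # 2 -> nodes actually rendered
```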
[ { "content": "import frappe\nfrom frappe.query_builder.functions import Count\n\n\[email protected]()\ndef get_children(parent=None, company=None, exclude_node=None):\n\tfilters = [[\"status\", \"=\", \"Active\"]]\n\tif company and company != \"All Companies\":\n\t\tfilters.append([\"company\", \"=\", company])\n\n\tif parent and company and parent != company:\n\t\tfilters.append([\"reports_to\", \"=\", parent])\n\telse:\n\t\tfilters.append([\"reports_to\", \"=\", \"\"])\n\n\tif exclude_node:\n\t\tfilters.append([\"name\", \"!=\", exclude_node])\n\n\temployees = frappe.get_all(\n\t\t\"Employee\",\n\t\tfields=[\n\t\t\t\"employee_name as name\",\n\t\t\t\"name as id\",\n\t\t\t\"lft\",\n\t\t\t\"rgt\",\n\t\t\t\"reports_to\",\n\t\t\t\"image\",\n\t\t\t\"designation as title\",\n\t\t],\n\t\tfilters=filters,\n\t\torder_by=\"name\",\n\t)\n\n\tfor employee in employees:\n\t\temployee.connections = get_connections(employee.id, employee.lft, employee.rgt)\n\t\temployee.expandable = bool(employee.connections)\n\n\treturn employees\n\n\ndef get_connections(employee: str, lft: int, rgt: int) -> int:\n\tEmployee = frappe.qb.DocType(\"Employee\")\n\tquery = (\n\t\tfrappe.qb.from_(Employee)\n\t\t.select(Count(Employee.name))\n\t\t.where((Employee.lft > lft) & (Employee.rgt < rgt))\n\t).run()\n\n\treturn query[0][0]\n", "path": "hrms/hr/page/organizational_chart/organizational_chart.py" } ]
[ { "content": "import frappe\nfrom frappe.query_builder.functions import Count\n\n\[email protected]()\ndef get_children(parent=None, company=None, exclude_node=None):\n\tfilters = [[\"status\", \"=\", \"Active\"]]\n\tif company and company != \"All Companies\":\n\t\tfilters.append([\"company\", \"=\", company])\n\n\tif parent and company and parent != company:\n\t\tfilters.append([\"reports_to\", \"=\", parent])\n\telse:\n\t\tfilters.append([\"reports_to\", \"=\", \"\"])\n\n\tif exclude_node:\n\t\tfilters.append([\"name\", \"!=\", exclude_node])\n\n\temployees = frappe.get_all(\n\t\t\"Employee\",\n\t\tfields=[\n\t\t\t\"employee_name as name\",\n\t\t\t\"name as id\",\n\t\t\t\"lft\",\n\t\t\t\"rgt\",\n\t\t\t\"reports_to\",\n\t\t\t\"image\",\n\t\t\t\"designation as title\",\n\t\t],\n\t\tfilters=filters,\n\t\torder_by=\"name\",\n\t)\n\n\tfor employee in employees:\n\t\temployee.connections = get_connections(employee.id, employee.lft, employee.rgt)\n\t\temployee.expandable = bool(employee.connections)\n\n\treturn employees\n\n\ndef get_connections(employee: str, lft: int, rgt: int) -> int:\n\tEmployee = frappe.qb.DocType(\"Employee\")\n\tquery = (\n\t\tfrappe.qb.from_(Employee)\n\t\t.select(Count(Employee.name))\n\t\t.where((Employee.lft > lft) & (Employee.rgt < rgt) & (Employee.status == \"Active\"))\n\t).run()\n\n\treturn query[0][0]\n", "path": "hrms/hr/page/organizational_chart/organizational_chart.py" } ]
diff --git a/hrms/hr/page/organizational_chart/organizational_chart.py b/hrms/hr/page/organizational_chart/organizational_chart.py index b8bfce5dc4..8be4802104 100644 --- a/hrms/hr/page/organizational_chart/organizational_chart.py +++ b/hrms/hr/page/organizational_chart/organizational_chart.py @@ -43,7 +43,7 @@ def get_connections(employee: str, lft: int, rgt: int) -> int: query = ( frappe.qb.from_(Employee) .select(Count(Employee.name)) - .where((Employee.lft > lft) & (Employee.rgt < rgt)) + .where((Employee.lft > lft) & (Employee.rgt < rgt) & (Employee.status == "Active")) ).run() return query[0][0]
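A rough way to confirm the fix is to compare each node's badge with the number of active descendants directly. This is a sketch for a bench console rather than a shipped test; the company name is a placeholder, and it relies only on `frappe.get_all` plus the patched `get_children` shown above.

```python
import frappe

from hrms.hr.page.organizational_chart.organizational_chart import get_children

# Placeholder company; substitute a real one from the site being checked.
for node in get_children(company="Example Co"):
    active_reports = frappe.get_all(
        "Employee",
        filters={"lft": [">", node.lft], "rgt": ["<", node.rgt], "status": "Active"},
    )
    # With the status filter added to get_connections, the two numbers should match.
    assert node.connections == len(active_reports), node.id
```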