def get_item_attribute(self, item, name):
    if name in self.__item_attributes:
        return self.__item_attributes[name](item)
    elif self.section:
        return self.section.get_item_attribute(item, name)
    else:
        raise AttributeError(name)
Method called by item when an attribute is not found.
def dispatch_event(self, event_, **kwargs):
    if self.settings.hooks_enabled:
        result = self.hooks.dispatch_event(event_, **kwargs)
        if result is not None:
            return result

        # Must also dispatch the event in parent section
        if self.section:
            return self.section.dispatch_event(event_, **kwargs)

    elif self.section:
        # Settings only apply to one section, so must still
        # dispatch the event in parent sections recursively.
        self.section.dispatch_event(event_, **kwargs)
Dispatch section event.

Notes:
    You MUST NOT call event.trigger() directly because it would circumvent
    the section settings as well as ignore the section tree.

    If hooks are disabled somewhere up in the tree, but enabled further down,
    events will still be dispatched down below because that is where they originate.
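The note above can be sketched with a minimal parent/child structure (a hypothetical `Section` class, not the library's real one): hooks disabled at the root do not stop a child section with hooks enabled from running its own hooks.

```python
class Section:
    def __init__(self, parent=None, hooks_enabled=True):
        self.parent = parent
        self.hooks_enabled = hooks_enabled
        self.seen = []  # events this section's hooks observed

    def dispatch_event(self, event, **kwargs):
        if self.hooks_enabled:
            self.seen.append(event)  # local hooks run first
            if self.parent:
                return self.parent.dispatch_event(event, **kwargs)
        elif self.parent:
            # Hooks disabled here, but parent sections may still want the event.
            self.parent.dispatch_event(event, **kwargs)

root = Section(hooks_enabled=False)
child = Section(parent=root, hooks_enabled=True)
child.dispatch_event('item_value_changed')
```

After the call, the child has observed the event even though the root has hooks disabled.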
def configparser(self):
    if self._configparser_adapter is None:
        self._configparser_adapter = ConfigPersistenceAdapter(
            config=self,
            reader_writer=ConfigParserReaderWriter(
                config_parser_factory=self.settings.configparser_factory,
            ),
        )
    return self._configparser_adapter
Adapter to dump/load INI format strings and files using the standard library's ``ConfigParser`` (or the backported ``configparser`` module in Python 2). Returns: ConfigPersistenceAdapter
def json(self):
    if self._json_adapter is None:
        self._json_adapter = ConfigPersistenceAdapter(
            config=self,
            reader_writer=JsonReaderWriter(),
        )
    return self._json_adapter
Adapter to dump/load JSON format strings and files. Returns: ConfigPersistenceAdapter
def yaml(self):
    if self._yaml_adapter is None:
        self._yaml_adapter = ConfigPersistenceAdapter(
            config=self,
            reader_writer=YamlReaderWriter(),
        )
    return self._yaml_adapter
Adapter to dump/load YAML format strings and files. Returns: ConfigPersistenceAdapter
def click(self):
    if self._click_extension is None:
        from .click_ext import ClickExtension
        self._click_extension = ClickExtension(
            config=self
        )
    return self._click_extension
Click extension. Returns: ClickExtension
def load(self):
    # Must reverse because we want the sources assigned to higher-up Config instances
    # to override sources assigned to lower Config instances.
    for section in reversed(list(self.iter_sections(recursive=True, key=None))):
        if section.is_config:
            section.load()

    for source in self.settings.load_sources:
        adapter = getattr(self, _get_persistence_adapter_for(source))
        if adapter.store_exists(source):
            adapter.load(source)
Load user configuration based on settings.
def option(self, *args, **kwargs):
    args, kwargs = _config_parameter(args, kwargs)
    return self._click.option(*args, **kwargs)
Registers a click.option which falls back to a configmanager Item if the user hasn't provided a value on the command line.

Item must be the last of ``args``.

Examples::

    config = Config({'greeting': 'Hello'})

    @click.command()
    @config.click.option('--greeting', config.greeting)
    def say_hello(greeting):
        click.echo(greeting)
def argument(self, *args, **kwargs):
    if kwargs.get('required', True):
        raise TypeError(
            'In click framework, arguments are mandatory, unless marked required=False. '
            'Attempt to use configmanager as a fallback provider suggests that this is an optional option, '
            'not a mandatory argument.'
        )
    args, kwargs = _config_parameter(args, kwargs)
    return self._click.argument(*args, **kwargs)
Registers a click.argument which falls back to a configmanager Item if the user hasn't provided a value on the command line. Item must be the last of ``args``.
def _get_kwarg(self, name, kwargs):
    at_name = '@{}'.format(name)

    if name in kwargs:
        if at_name in kwargs:
            raise ValueError('Both {!r} and {!r} specified in kwargs'.format(name, at_name))
        return kwargs[name]

    if at_name in kwargs:
        return kwargs[at_name]

    return not_set
Helper to get value of a named attribute irrespective of whether it is passed with or without "@" prefix.
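As a sketch, the same lookup can be written as a standalone function; the ``not_set`` sentinel here is a stand-in for the library's own sentinel object.

```python
not_set = object()  # stand-in sentinel for "no value supplied"

def get_kwarg(name, kwargs):
    """Fetch kwargs[name], accepting an optional '@' prefix on the key."""
    at_name = '@{}'.format(name)
    if name in kwargs:
        if at_name in kwargs:
            raise ValueError('Both {!r} and {!r} specified in kwargs'.format(name, at_name))
        return kwargs[name]
    if at_name in kwargs:
        return kwargs[at_name]
    return not_set
```

For example, ``get_kwarg('default', {'@default': 5})`` returns ``5``, while supplying both ``default`` and ``@default`` raises ``ValueError``.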
def _get_envvar_value(self):
    envvar_name = None

    if self.envvar is True:
        envvar_name = self.envvar_name
        if envvar_name is None:
            envvar_name = '_'.join(self.get_path()).upper()
    elif self.envvar:
        envvar_name = self.envvar

    if envvar_name and envvar_name in os.environ:
        return self.type.deserialize(os.environ[envvar_name])
    else:
        return not_set
Internal helper to get item value from an environment variable if item is controlled by one, and if the variable is set. Returns not_set otherwise.
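When ``envvar`` is simply ``True`` and no explicit name is configured, the variable name is derived from the item's path, as in the code above; a minimal sketch of that derivation:

```python
def derive_envvar_name(path):
    """Join a config item's path with underscores and upper-case it."""
    return '_'.join(path).upper()

# derive_envvar_name(('uploads', 'db', 'user')) -> 'UPLOADS_DB_USER'
```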
def get(self, fallback=not_set):
    envvar_value = self._get_envvar_value()
    if envvar_value is not not_set:
        return envvar_value

    if self.has_value:
        if self._value is not not_set:
            return self._value
        else:
            return copy.deepcopy(self.default)
    elif fallback is not not_set:
        return fallback
    elif self.required:
        raise RequiredValueMissing(name=self.name, item=self)
    return fallback
Returns config value. See Also: :meth:`.set` and :attr:`.value`
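The lookup order in ``get`` is: environment variable, then explicitly set value, then default, then the caller's fallback. A self-contained sketch of that precedence (``not_set`` is a stand-in sentinel, not the library's):

```python
not_set = object()  # stand-in sentinel

def resolve(envvar_value=not_set, value=not_set, default=not_set, fallback=not_set):
    """Return the first available value in precedence order."""
    for candidate in (envvar_value, value, default, fallback):
        if candidate is not not_set:
            return candidate
    return not_set
```

Note the identity check against the sentinel: falsy values like ``0`` or ``''`` still win over a default.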
def set(self, value):
    old_value = self._value
    old_raw_str_value = self.raw_str_value

    self.type.set_item_value(self, value)

    new_value = self._value

    if old_value is not_set and new_value is not_set:
        # Nothing to report
        return

    if self.section:
        self.section.dispatch_event(
            self.section.hooks.item_value_changed,
            item=self,
            old_value=old_value,
            new_value=new_value,
            old_raw_str_value=old_raw_str_value,
            new_raw_str_value=self.raw_str_value,
        )
Sets config value.
def reset(self):
    old_value = self._value
    old_raw_str_value = self.raw_str_value

    self._value = not_set
    self.raw_str_value = not_set

    new_value = self._value

    if old_value is not_set:
        # Nothing to report
        return

    if self.section:
        self.section.dispatch_event(
            self.section.hooks.item_value_changed,
            item=self,
            old_value=old_value,
            new_value=new_value,
            old_raw_str_value=old_raw_str_value,
            new_raw_str_value=self.raw_str_value,
        )
Resets the value of config item to its default value.
def is_default(self):
    envvar_value = self._get_envvar_value()
    if envvar_value is not not_set:
        return envvar_value == self.default
    else:
        return self._value is not_set or self._value == self.default
``True`` if the item's value is its default value, or if no value and no default value are set. If the item is backed by an environment variable, this will be ``True`` only if the environment variable is set and its value equals the item's default value.
def has_value(self):
    if self._get_envvar_value() is not not_set:
        return True
    else:
        return self.default is not not_set or self._value is not not_set
``True`` if item has a default value or custom value set.
def get_path(self):
    if self.section:
        return self.section.get_path() + (self.name,)
    else:
        return (self.name,)
Calculate item's path in configuration tree. Use this sparingly -- path is calculated by going up the configuration tree. For a large number of items, it is more efficient to use iterators that return paths as keys. Path value is stable only once the configuration tree is completely initialised.
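A minimal sketch of the same upward walk, using a hypothetical `Node` class (not the library's real classes):

```python
class Node:
    def __init__(self, name, section=None):
        self.name = name
        self.section = section  # parent node, or None at the root

    def get_path(self):
        """Walk parent links upward and return the path as a tuple."""
        if self.section:
            return self.section.get_path() + (self.name,)
        return (self.name,)

root = Node('uploads')
item = Node('enabled', section=Node('db', section=root))
# item.get_path() -> ('uploads', 'db', 'enabled')
```

Because each call recurses all the way to the root, the cost is proportional to the item's depth, which is why the docs above recommend path-keyed iterators for bulk access.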
def validate(self):
    if self.required and not self.has_value:
        raise RequiredValueMissing(name=self.name, item=self)
Validate item.
def translate(self, type_):
    if isinstance(type_, six.string_types):
        for t in self.all_types:
            if type_ in t.aliases:
                return t
        raise ValueError('Failed to recognise type by name {!r}'.format(type_))

    for t in self.all_types:
        if type_ in t.builtin_types:
            return t

    return type_
Given a built-in type, an otherwise known type, or the name of a known type, return its corresponding wrapper type::

    >>> Types.translate(int)
    <_IntType ('int', 'integer')>

    >>> Types.translate('string')
    <_StrType ('str', 'string', 'unicode')>
def filebrowser(request, file_type):
    template = 'filebrowser.html'
    upload_form = FileUploadForm()
    uploaded_file = None
    upload_tab_active = False

    is_images_dialog = (file_type == 'img')
    is_documents_dialog = (file_type == 'doc')

    files = FileBrowserFile.objects.filter(file_type=file_type)

    if request.POST:
        upload_form = FileUploadForm(request.POST, request.FILES)
        upload_tab_active = True
        if upload_form.is_valid():
            uploaded_file = upload_form.save(commit=False)
            uploaded_file.file_type = file_type
            uploaded_file.save()

    data = {
        'upload_form': upload_form,
        'uploaded_file': uploaded_file,
        'upload_tab_active': upload_tab_active,
        'is_images_dialog': is_images_dialog,
        'is_documents_dialog': is_documents_dialog,
    }
    per_page = getattr(settings, 'FILEBROWSER_PER_PAGE', 20)
    return render_paginate(request, template, files, per_page, data)
Trigger view for filebrowser
def filebrowser_remove_file(request, item_id, file_type):
    fobj = get_object_or_404(FileBrowserFile, file_type=file_type, id=item_id)
    fobj.delete()

    if file_type == 'doc':
        return HttpResponseRedirect(reverse('mce-filebrowser-documents'))
    return HttpResponseRedirect(reverse('mce-filebrowser-images'))
Remove file
def available_domains(self):
    if not hasattr(self, '_available_domains'):
        url = 'http://{0}/request/domains/format/json/'.format(
            self.api_domain)
        req = requests.get(url)
        domains = req.json()
        setattr(self, '_available_domains', domains)
    return self._available_domains
Return list of available domains for use in email address.
def generate_login(self, min_length=6, max_length=10, digits=True):
    chars = string.ascii_lowercase
    if digits:
        chars += string.digits
    length = random.randint(min_length, max_length)
    return ''.join(random.choice(chars) for x in range(length))
Generate a string for the email address login with the defined length and alphabet.

:param min_length: (optional) min login length. Default value is ``6``.
:param max_length: (optional) max login length. Default value is ``10``.
:param digits: (optional) use digits in login generation. Default value is ``True``.
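A self-contained version of that generator, for illustration (same logic, without the class):

```python
import random
import string

def generate_login(min_length=6, max_length=10, digits=True):
    """Build a random login from lowercase letters, optionally with digits."""
    chars = string.ascii_lowercase
    if digits:
        chars += string.digits
    length = random.randint(min_length, max_length)  # bounds are inclusive
    return ''.join(random.choice(chars) for _ in range(length))
```

Note that ``random.randint`` includes both endpoints, so the result is always between ``min_length`` and ``max_length`` characters long.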
def get_email_address(self):
    if self.login is None:
        self.login = self.generate_login()

    available_domains = self.available_domains
    if self.domain is None:
        self.domain = random.choice(available_domains)
    elif self.domain not in available_domains:
        raise ValueError('Domain not found in available domains!')
    return u'{0}{1}'.format(self.login, self.domain)
Return the full email address built from the login and domain given at class initialization, or generate new ones.
def get_mailbox(self, email=None, email_hash=None):
    if email is None:
        email = self.get_email_address()
    if email_hash is None:
        email_hash = self.get_hash(email)

    url = 'http://{0}/request/mail/id/{1}/format/json/'.format(
        self.api_domain, email_hash)
    req = requests.get(url)
    return req.json()
Return a list of emails in the given email address, or a dict with an `error` key if the mailbox is empty.

:param email: (optional) email address.
:param email_hash: (optional) md5 hash from email address.
def _connect(self):
    '''
    Connect

    Setup a socket connection to the specified telegram-cli socket

    --
    @return None
    '''
    if self.connection_type.lower() == 'tcp':
        self.connection = sockets.setup_tcp_socket(self.location, self.port)
    elif self.connection_type.lower() == 'unix':
        self.connection = sockets.setup_domain_socket(self.location)

    return
Connect Setup a socket connection to the specified telegram-cli socket -- @return None
def _send(self, payload):
    '''
    Send

    Send a payload to a telegram-cli socket.

    --
    @param payload:str The Payload to send over a socket connection.
    @return bool
    '''
    if not self.connection:
        self._connect()

    # Send the payload, adding a newline to the end
    self.connection.send(payload + '\n')

    # Read 256 bytes off the socket and check the
    # status that returned.
    try:
        data = self.connection.recv(256)
    except socket.timeout, e:
        print 'Failed to read response in a timely manner to determine the status.'
        return False

    if data.split('\n')[1] == 'FAIL':
        print 'Failed to send payload: {payload}'.format(payload = payload)
        return False

    return True
Send Send a payload to a telegram-cli socket. -- @param payload:str The Payload to send over a socket connection. @return bool
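The status check above only inspects the second line of the reply; a standalone sketch of that parsing (the reply format is inferred from the code above, not from telegram-cli documentation, and this version guards against single-line replies):

```python
def send_succeeded(data):
    """Return False if the second line of a telegram-cli reply is 'FAIL'."""
    lines = data.split('\n')
    if len(lines) > 1 and lines[1] == 'FAIL':
        return False
    return True
```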
def setup_domain_socket(location):
    '''
    Setup Domain Socket

    Setup a connection to a Unix Domain Socket

    --
    @param location:str The path to the Unix Domain Socket to connect to.
    @return <class 'socket._socketobject'>
    '''
    clientsocket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    clientsocket.settimeout(timeout)
    clientsocket.connect(location)

    return clientsocket
Setup Domain Socket Setup a connection to a Unix Domain Socket -- @param location:str The path to the Unix Domain Socket to connect to. @return <class 'socket._socketobject'>
def setup_tcp_socket(location, port):
    '''
    Setup TCP Socket

    Setup a connection to a TCP Socket

    --
    @param location:str The Hostname / IP Address of the remote TCP Socket.
    @param port:int The TCP Port the remote Socket is listening on.
    @return <class 'socket._socketobject'>
    '''
    clientsocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    clientsocket.settimeout(timeout)
    clientsocket.connect((location, port))

    return clientsocket
Setup TCP Socket Setup a connection to a TCP Socket -- @param location:str The Hostname / IP Address of the remote TCP Socket. @param port:int The TCP Port the remote Socket is listening on. @return <class 'socket._socketobject'>
def create_primary_zone(self, account_name, zone_name):
    zone_properties = {"name": zone_name, "accountName": account_name, "type": "PRIMARY"}
    primary_zone_info = {"forceImport": True, "createType": "NEW"}
    zone_data = {"properties": zone_properties, "primaryCreateInfo": primary_zone_info}
    return self.rest_api_connection.post("/v1/zones", json.dumps(zone_data))

Creates a new primary zone.

Arguments:
    account_name -- The name of the account that will contain this zone.
    zone_name -- The name of the zone. It must be unique.
def create_primary_zone_by_upload(self, account_name, zone_name, bind_file):
    zone_properties = {"name": zone_name, "accountName": account_name, "type": "PRIMARY"}
    primary_zone_info = {"forceImport": True, "createType": "UPLOAD"}
    zone_data = {"properties": zone_properties, "primaryCreateInfo": primary_zone_info}
    files = {'zone': ('', json.dumps(zone_data), 'application/json'),
             'file': ('file', open(bind_file, 'rb'), 'application/octet-stream')}
    return self.rest_api_connection.post_multi_part("/v1/zones", files)

Creates a new primary zone by uploading a bind file.

Arguments:
    account_name -- The name of the account that will contain this zone.
    zone_name -- The name of the zone. It must be unique.
    bind_file -- The file to upload.
def create_primary_zone_by_axfr(self, account_name, zone_name, master, tsig_key=None, key_value=None):
    zone_properties = {"name": zone_name, "accountName": account_name, "type": "PRIMARY"}
    if tsig_key is not None and key_value is not None:
        name_server_info = {"ip": master, "tsigKey": tsig_key, "tsigKeyValue": key_value}
    else:
        name_server_info = {"ip": master}
    primary_zone_info = {"forceImport": True, "createType": "TRANSFER", "nameServer": name_server_info}
    zone_data = {"properties": zone_properties, "primaryCreateInfo": primary_zone_info}
    return self.rest_api_connection.post("/v1/zones", json.dumps(zone_data))

Creates a new primary zone by zone transferring off a master.

Arguments:
    account_name -- The name of the account that will contain this zone.
    zone_name -- The name of the zone. It must be unique.
    master -- Primary name server IP address.

Keyword Arguments:
    tsig_key -- For TSIG-enabled zones: the transaction signature key. NOTE: Requires key_value.
    key_value -- TSIG key secret.
def create_secondary_zone(self, account_name, zone_name, master, tsig_key=None, key_value=None):
    zone_properties = {"name": zone_name, "accountName": account_name, "type": "SECONDARY"}
    if tsig_key is not None and key_value is not None:
        name_server_info = {"ip": master, "tsigKey": tsig_key, "tsigKeyValue": key_value}
    else:
        name_server_info = {"ip": master}
    name_server_ip_1 = {"nameServerIp1": name_server_info}
    name_server_ip_list = {"nameServerIpList": name_server_ip_1}
    secondary_zone_info = {"primaryNameServers": name_server_ip_list}
    zone_data = {"properties": zone_properties, "secondaryCreateInfo": secondary_zone_info}
    return self.rest_api_connection.post("/v1/zones", json.dumps(zone_data))

Creates a new secondary zone.

Arguments:
    account_name -- The name of the account.
    zone_name -- The name of the zone.
    master -- Primary name server IP address.

Keyword Arguments:
    tsig_key -- For TSIG-enabled zones: the transaction signature key. NOTE: Requires key_value.
    key_value -- TSIG key secret.
def get_zones_of_account(self, account_name, q=None, **kwargs):
    uri = "/v1/accounts/" + account_name + "/zones"
    params = build_params(q, kwargs)
    return self.rest_api_connection.get(uri, params)

Returns a list of zones for the specified account.

Arguments:
    account_name -- The name of the account.

Keyword Arguments:
    q -- The search parameters, in a dict. Valid keys are:
         name - substring match of the zone name
         zone_type - one of: PRIMARY, SECONDARY, ALIAS
    sort -- The sort column used to order the list. Valid values for the sort field are:
         NAME, ACCOUNT_NAME, RECORD_COUNT, ZONE_TYPE
    reverse -- Whether the list is ascending (False) or descending (True).
    offset -- The position in the list of the first returned element (0-based).
    limit -- The maximum number of rows to be returned.
def get_zones(self, q=None, **kwargs):
    uri = "/v1/zones"
    params = build_params(q, kwargs)
    return self.rest_api_connection.get(uri, params)

Returns a list of zones across all of the user's accounts.

Keyword Arguments:
    q -- The search parameters, in a dict. Valid keys are:
         name - substring match of the zone name
         zone_type - one of: PRIMARY, SECONDARY, ALIAS
    sort -- The sort column used to order the list. Valid values for the sort field are:
         NAME, ACCOUNT_NAME, RECORD_COUNT, ZONE_TYPE
    reverse -- Whether the list is ascending (False) or descending (True).
    offset -- The position in the list of the first returned element (0-based).
    limit -- The maximum number of rows to be returned.
def edit_secondary_name_server(self, zone_name, primary=None, backup=None, second_backup=None):
    name_server_info = {}
    if primary is not None:
        name_server_info['nameServerIp1'] = {'ip': primary}
    if backup is not None:
        name_server_info['nameServerIp2'] = {'ip': backup}
    if second_backup is not None:
        name_server_info['nameServerIp3'] = {'ip': second_backup}
    name_server_ip_list = {"nameServerIpList": name_server_info}
    secondary_zone_info = {"primaryNameServers": name_server_ip_list}
    zone_data = {"secondaryCreateInfo": secondary_zone_info}
    return self.rest_api_connection.patch("/v1/zones/" + zone_name, json.dumps(zone_data))

Edit the axfr name servers of a secondary zone.

Arguments:
    zone_name -- The name of the secondary zone being edited.
    primary -- The primary name server value.

Keyword Arguments:
    backup -- The backup name server, if any.
    second_backup -- The second backup name server.
def get_rrsets(self, zone_name, q=None, **kwargs):
    uri = "/v1/zones/" + zone_name + "/rrsets"
    params = build_params(q, kwargs)
    return self.rest_api_connection.get(uri, params)

Returns the list of RRSets in the specified zone.

Arguments:
    zone_name -- The name of the zone.

Keyword Arguments:
    q -- The search parameters, in a dict. Valid keys are:
         ttl - must match the TTL for the rrset
         owner - substring match of the owner name
         value - substring match of the first BIND field value
    sort -- The sort column used to order the list. Valid values for the sort field are:
         OWNER, TTL, TYPE
    reverse -- Whether the list is ascending (False) or descending (True).
    offset -- The position in the list of the first returned element (0-based).
    limit -- The maximum number of rows to be returned.
def get_rrsets_by_type(self, zone_name, rtype, q=None, **kwargs):
    uri = "/v1/zones/" + zone_name + "/rrsets/" + rtype
    params = build_params(q, kwargs)
    return self.rest_api_connection.get(uri, params)

Returns the list of RRSets in the specified zone of the specified type.

Arguments:
    zone_name -- The name of the zone.
    rtype -- The type of the RRSets. This can be numeric (1) or,
             if a well-known name is defined for the type (A), you can use it instead.

Keyword Arguments:
    q -- The search parameters, in a dict. Valid keys are:
         ttl - must match the TTL for the rrset
         owner - substring match of the owner name
         value - substring match of the first BIND field value
    sort -- The sort column used to order the list. Valid values for the sort field are:
         OWNER, TTL, TYPE
    reverse -- Whether the list is ascending (False) or descending (True).
    offset -- The position in the list of the first returned element (0-based).
    limit -- The maximum number of rows to be returned.
def get_rrsets_by_type_owner(self, zone_name, rtype, owner_name, q=None, **kwargs):
    uri = "/v1/zones/" + zone_name + "/rrsets/" + rtype + "/" + owner_name
    params = build_params(q, kwargs)
    return self.rest_api_connection.get(uri, params)

Returns the list of RRSets in the specified zone of the specified type and owner.

Arguments:
    zone_name -- The name of the zone.
    rtype -- The type of the RRSets. This can be numeric (1) or,
             if a well-known name is defined for the type (A), you can use it instead.
    owner_name -- The owner name for the RRSet. If no trailing dot is supplied, the owner_name
                  is assumed to be relative (foo). If a trailing dot is supplied, the owner name
                  is assumed to be absolute (foo.zonename.com.)

Keyword Arguments:
    q -- The search parameters, in a dict. Valid keys are:
         ttl - must match the TTL for the rrset
         value - substring match of the first BIND field value
    sort -- The sort column used to order the list. Valid values for the sort field are:
         TTL, TYPE
    reverse -- Whether the list is ascending (False) or descending (True).
    offset -- The position in the list of the first returned element (0-based).
    limit -- The maximum number of rows to be returned.
def create_rrset(self, zone_name, rtype, owner_name, ttl, rdata):
    if type(rdata) is not list:
        rdata = [rdata]
    rrset = {"ttl": ttl, "rdata": rdata}
    return self.rest_api_connection.post("/v1/zones/" + zone_name + "/rrsets/" + rtype + "/" + owner_name,
                                         json.dumps(rrset))

Creates a new RRSet in the specified zone.

Arguments:
    zone_name -- The zone that will contain the new RRSet. The trailing dot is optional.
    rtype -- The type of the RRSet. This can be numeric (1) or,
             if a well-known name is defined for the type (A), you can use it instead.
    owner_name -- The owner name for the RRSet. If no trailing dot is supplied, the owner_name
                  is assumed to be relative (foo). If a trailing dot is supplied, the owner name
                  is assumed to be absolute (foo.zonename.com.)
    ttl -- The TTL value for the RRSet.
    rdata -- The BIND data for the RRSet as a string. If there is a single resource record in
             the RRSet, you can pass in the single string. If there are multiple resource
             records in this RRSet, pass in a list of strings.
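The body sent for an RRSet is small; a sketch of how the string-or-list ``rdata`` handling shapes the JSON payload (field names as in the code above):

```python
import json

def build_rrset_body(ttl, rdata):
    """Normalize rdata to a list and build the JSON body for an RRSet."""
    if not isinstance(rdata, list):
        rdata = [rdata]
    return json.dumps({"ttl": ttl, "rdata": rdata})

# build_rrset_body(300, "1.2.3.4") -> '{"ttl": 300, "rdata": ["1.2.3.4"]}'
```

Normalizing to a list up front means callers with a single record never have to wrap it themselves.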
def edit_rrset(self, zone_name, rtype, owner_name, ttl, rdata, profile=None):
    if type(rdata) is not list:
        rdata = [rdata]
    rrset = {"ttl": ttl, "rdata": rdata}
    if profile:
        rrset["profile"] = profile
    uri = "/v1/zones/" + zone_name + "/rrsets/" + rtype + "/" + owner_name
    return self.rest_api_connection.put(uri, json.dumps(rrset))

Updates an existing RRSet in the specified zone.

Arguments:
    zone_name -- The zone that contains the RRSet. The trailing dot is optional.
    rtype -- The type of the RRSet. This can be numeric (1) or,
             if a well-known name is defined for the type (A), you can use it instead.
    owner_name -- The owner name for the RRSet. If no trailing dot is supplied, the owner_name
                  is assumed to be relative (foo). If a trailing dot is supplied, the owner name
                  is assumed to be absolute (foo.zonename.com.)
    ttl -- The updated TTL value for the RRSet.
    rdata -- The updated BIND data for the RRSet as a string. If there is a single resource
             record in the RRSet, you can pass in the single string. If there are multiple
             resource records in this RRSet, pass in a list of strings.
    profile -- The profile info if this is updating a resource pool.
def edit_rrset_rdata(self, zone_name, rtype, owner_name, rdata, profile=None):
    if type(rdata) is not list:
        rdata = [rdata]
    rrset = {"rdata": rdata}
    method = "patch"
    if profile:
        rrset["profile"] = profile
        method = "put"
    uri = "/v1/zones/" + zone_name + "/rrsets/" + rtype + "/" + owner_name
    return getattr(self.rest_api_connection, method)(uri, json.dumps(rrset))

Updates an existing RRSet's Rdata in the specified zone.

Arguments:
    zone_name -- The zone that contains the RRSet. The trailing dot is optional.
    rtype -- The type of the RRSet. This can be numeric (1) or,
             if a well-known name is defined for the type (A), you can use it instead.
    owner_name -- The owner name for the RRSet. If no trailing dot is supplied, the owner_name
                  is assumed to be relative (foo). If a trailing dot is supplied, the owner name
                  is assumed to be absolute (foo.zonename.com.)
    rdata -- The updated BIND data for the RRSet as a string. If there is a single resource
             record in the RRSet, you can pass in the single string. If there are multiple
             resource records in this RRSet, pass in a list of strings.
    profile -- The profile info if this is updating a resource pool.
def delete_rrset(self, zone_name, rtype, owner_name):
    return self.rest_api_connection.delete("/v1/zones/" + zone_name + "/rrsets/" + rtype + "/" + owner_name)

Deletes an RRSet.

Arguments:
    zone_name -- The zone containing the RRSet to be deleted. The trailing dot is optional.
    rtype -- The type of the RRSet. This can be numeric (1) or,
             if a well-known name is defined for the type (A), you can use it instead.
    owner_name -- The owner name for the RRSet. If no trailing dot is supplied, the owner_name
                  is assumed to be relative (foo). If a trailing dot is supplied, the owner name
                  is assumed to be absolute (foo.zonename.com.)
def create_web_forward(self, zone_name, request_to, redirect_to, forward_type):
    web_forward = {"requestTo": request_to, "defaultRedirectTo": redirect_to, "defaultForwardType": forward_type}
    return self.rest_api_connection.post("/v1/zones/" + zone_name + "/webforwards", json.dumps(web_forward))

Create a web forward record.

Arguments:
    zone_name -- The zone in which the web forward is to be created.
    request_to -- The URL to be redirected. You may use http:// and ftp://.
    redirect_to -- The URL to redirect to.
    forward_type -- The type of forward. Valid options include:
                    Framed, HTTP_301_REDIRECT, HTTP_302_REDIRECT,
                    HTTP_303_REDIRECT, HTTP_307_REDIRECT
def create_sb_pool(self, zone_name, owner_name, ttl, pool_info, rdata_info, backup_record_list):
    rrset = self._build_sb_rrset(backup_record_list, pool_info, rdata_info, ttl)
    return self.rest_api_connection.post("/v1/zones/" + zone_name + "/rrsets/A/" + owner_name,
                                         json.dumps(rrset))

Creates a new SB Pool.

Arguments:
    zone_name -- The zone that contains the RRSet. The trailing dot is optional.
    owner_name -- The owner name for the RRSet. If no trailing dot is supplied, the owner_name
                  is assumed to be relative (foo). If a trailing dot is supplied, the owner name
                  is assumed to be absolute (foo.zonename.com.)
    ttl -- The updated TTL value for the RRSet.
    pool_info -- dict of information about the pool.
    rdata_info -- dict of information about the records in the pool. The keys in the dict are
                  the A and CNAME records that make up the pool. The values are the rdataInfo
                  for each of the records.
    backup_record_list -- list of dicts of information about the backup (all-fail) records in
                  the pool. There are two key/value pairs in each dict:
                  rdata - the A or CNAME for the backup record
                  failoverDelay - the time to wait to fail over (optional, defaults to 0)
def edit_sb_pool(self, zone_name, owner_name, ttl, pool_info, rdata_info, backup_record_list):
    rrset = self._build_sb_rrset(backup_record_list, pool_info, rdata_info, ttl)
    return self.rest_api_connection.put("/v1/zones/" + zone_name + "/rrsets/A/" + owner_name,
                                        json.dumps(rrset))

Updates an existing SB Pool in the specified zone.

:param zone_name: The zone that contains the RRSet. The trailing dot is optional.
:param owner_name: The owner name for the RRSet. If no trailing dot is supplied, the owner_name
    is assumed to be relative (foo). If a trailing dot is supplied, the owner name is assumed
    to be absolute (foo.zonename.com.)
:param ttl: The updated TTL value for the RRSet.
:param pool_info: dict of information about the pool.
:param rdata_info: dict of information about the records in the pool. The keys in the dict are
    the A and CNAME records that make up the pool. The values are the rdataInfo for each of
    the records.
:param backup_record_list: list of dicts of information about the backup (all-fail) records in
    the pool. There are two key/value pairs in each dict:
    rdata - the A or CNAME for the backup record
    failoverDelay - the time to wait to fail over (optional, defaults to 0)
def create_tc_pool(self, zone_name, owner_name, ttl, pool_info, rdata_info, backup_record):
    rrset = self._build_tc_rrset(backup_record, pool_info, rdata_info, ttl)
    return self.rest_api_connection.post("/v1/zones/" + zone_name + "/rrsets/A/" + owner_name,
                                         json.dumps(rrset))

Creates a new TC Pool.

Arguments:
    zone_name -- The zone that contains the RRSet. The trailing dot is optional.
    owner_name -- The owner name for the RRSet. If no trailing dot is supplied, the owner_name
                  is assumed to be relative (foo). If a trailing dot is supplied, the owner name
                  is assumed to be absolute (foo.zonename.com.)
    ttl -- The updated TTL value for the RRSet.
    pool_info -- dict of information about the pool.
    rdata_info -- dict of information about the records in the pool. The keys in the dict are
                  the A and CNAME records that make up the pool. The values are the rdataInfo
                  for each of the records.
    backup_record -- dict of information about the backup (all-fail) records in the pool.
                  There are two key/value pairs in the dict:
                  rdata - the A or CNAME for the backup record
                  failoverDelay - the time to wait to fail over (optional, defaults to 0)
def edit_tc_pool(self, zone_name, owner_name, ttl, pool_info, rdata_info, backup_record):
    rrset = self._build_tc_rrset(backup_record, pool_info, rdata_info, ttl)
    return self.rest_api_connection.put("/v1/zones/" + zone_name + "/rrsets/A/" + owner_name,
                                        json.dumps(rrset))

Updates an existing TC Pool in the specified zone.

:param zone_name: The zone that contains the RRSet. The trailing dot is optional.
:param owner_name: The owner name for the RRSet. If no trailing dot is supplied, the owner_name
    is assumed to be relative (foo). If a trailing dot is supplied, the owner name is assumed
    to be absolute (foo.zonename.com.)
:param ttl: The updated TTL value for the RRSet.
:param pool_info: dict of information about the pool.
:param rdata_info: dict of information about the records in the pool. The keys in the dict are
    the A and CNAME records that make up the pool. The values are the rdataInfo for each of
    the records.
:param backup_record: dict of information about the backup (all-fail) records in the pool.
    There are two key/value pairs in the dict:
    rdata - the A or CNAME for the backup record
    failoverDelay - the time to wait to fail over (optional, defaults to 0)
def dumpf(obj, path, encoding=None): path = str(path) if path.endswith('.gz'): with gzip.open(path, mode='wt', encoding=encoding) as f: return dump(obj, f) else: with open(path, mode='wt', encoding=encoding) as f: dump(obj, f)
Serialize obj to path in ARPA format (.arpa, .gz).
def load(fp, model=None, parser=None): if not model: model = 'simple' if not parser: parser = 'quick' if model not in ['simple']: raise ValueError if parser not in ['quick']: raise ValueError if model == 'simple' and parser == 'quick': return ARPAParserQuick(ARPAModelSimple).parse(fp) else: raise ValueError
Deserialize fp (a file-like object) to a Python object.
def loadf(path, encoding=None, model=None, parser=None): path = str(path) if path.endswith('.gz'): with gzip.open(path, mode='rt', encoding=encoding) as f: return load(f, model=model, parser=parser) else: with open(path, mode='rt', encoding=encoding) as f: return load(f, model=model, parser=parser)
Deserialize path (.arpa, .gz) to a Python object.
def loads(s, model=None, parser=None): with StringIO(s) as f: return load(f, model=model, parser=parser)
Deserialize s (a str) to a Python object.
def send_message(self, recipient, message):
    '''
        Send Message

        Sends a message to a Telegram Recipient.

        From telegram-cli:
            msg <peer> <text>  Sends text message to peer

        --
        @param recipient:str The telegram recipient the message is intended
                             for. Can be either a Person or a Group.
        @param message:str   The message to send.

        @return None
    '''

    payload = 'msg {recipient} {message}'.format(
        recipient = strings.escape_recipient(recipient),
        message = strings.escape_newlines(message.strip())
    )

    self._send(payload)

    return
Send Message Sends a message to a Telegram Recipient. From telegram-cli: msg <peer> <text> Sends text message to peer -- @param recipient:str The telegram recipient the message is intended for. Can be either a Person or a Group. @param message:str The message to send. @return None
def send_image(self, recipient, path):
    '''
        Send Image

        Sends an image to a Telegram Recipient. The image needs to be
        readable to the telegram-cli instance where the socket is created.

        From telegram-cli:
            send_photo <peer> <file>  Sends photo to peer

        --
        @param recipient:str The telegram recipient the message is intended
                             for. Can be either a Person or a Group.
        @param path:str      The full path to the image to send.

        @return None
    '''

    payload = 'send_photo {recipient} {path}'.format(
        recipient = strings.escape_recipient(recipient),
        path = path
    )

    self._send(payload)

    return
Send Image Sends an image to a Telegram Recipient. The image needs to be readable to the telegram-cli instance where the socket is created. From telegram-cli: send_photo <peer> <file> Sends photo to peer -- @param recipient:str The telegram recipient the message is intended for. Can be either a Person or a Group. @param path:str The full path to the image to send. @return None
def isdisjoint(self, other): if other == FullSpace: return False else: for ls in self.local_factors: if isinstance(ls.label, StrLabel): return False for ls in other.local_factors: if isinstance(ls.label, StrLabel): return False return set(self.local_factors).isdisjoint(set(other.local_factors))
Check whether two Hilbert spaces are disjoint (do not have any common local factors). Note that `FullSpace` is *not* disjoint with any other Hilbert space, while `TrivialSpace` *is* disjoint with any other Hilbert space (even itself).
def _check_basis_label_type(cls, label_or_index): if not isinstance(label_or_index, cls._basis_label_types): raise TypeError( "label_or_index must be an instance of one of %s; not %s" % ( ", ".join([t.__name__ for t in cls._basis_label_types]), label_or_index.__class__.__name__))
Every object (BasisKet, LocalSigma) that contains a label or index for an eigenstate of some LocalSpace should call this routine to check the type of that label or index (or, use :meth:`_unpack_basis_label_or_index`).
def basis_states(self): from qnet.algebra.core.state_algebra import BasisKet # avoid circ. import for label in self.basis_labels: yield BasisKet(label, hs=self)
Yield an iterator over the states (:class:`.BasisKet` instances) that form the canonical basis of the Hilbert space Raises: .BasisNotSetError: if the Hilbert space has no defined basis
def basis_state(self, index_or_label): from qnet.algebra.core.state_algebra import BasisKet # avoid circ. import try: return BasisKet(index_or_label, hs=self) except ValueError as exc_info: if isinstance(index_or_label, int): raise IndexError(str(exc_info)) else: raise KeyError(str(exc_info))
Return the basis state with the given index or label. Raises: .BasisNotSetError: if the Hilbert space has no defined basis IndexError: if there is no basis state with the given index KeyError: if there is no basis state with the given label
def next_basis_label_or_index(self, label_or_index, n=1): if isinstance(label_or_index, int): new_index = label_or_index + n if new_index < 0: raise IndexError("index %d < 0" % new_index) if self.has_basis: if new_index >= self.dimension: raise IndexError("index %d out of range for basis %s" % (new_index, self._basis)) return new_index elif isinstance(label_or_index, str): label_index = self.basis_labels.index(label_or_index) new_index = label_index + n if (new_index < 0) or (new_index >= len(self._basis)): raise IndexError("index %d out of range for basis %s" % (new_index, self._basis)) return self._basis[new_index] elif isinstance(label_or_index, SymbolicLabelBase): return label_or_index.__class__(expr=label_or_index.expr + n) else: raise TypeError( "Invalid type for label_or_index: %s" % label_or_index.__class__.__name__)
Given the label or index of a basis state, return the label/index of the next basis state. More generally, if `n` is given, return the `n`'th next basis state label/index; `n` may also be negative to obtain previous basis state labels/indices. The return type is the same as the type of `label_or_index`. Args: label_or_index (int or str or SymbolicLabelBase): If `int`, the index of a basis state; if `str`, the label of a basis state n (int): The increment Raises: IndexError: If going beyond the last or first basis state ValueError: If `label` is not a label for any basis state in the Hilbert space .BasisNotSetError: If the Hilbert space has no defined basis TypeError: if `label_or_index` is neither a :class:`str` nor an :class:`int`, nor a :class:`SymbolicLabelBase`
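Stripped of the Hilbert-space machinery, the stepping logic reduces to three cases: integers shift directly, string labels go through their position in the basis, and the result type matches the input type. A standalone sketch over a plain list of labels (the function name is illustrative, not part of the QNET API):

```python
def next_label_or_index(basis, label_or_index, n=1):
    """Step n basis states forward (or backward for negative n)."""
    if isinstance(label_or_index, int):
        # Integer index: shift directly, with bounds checking.
        new_index = label_or_index + n
        if not 0 <= new_index < len(basis):
            raise IndexError("index %d out of range" % new_index)
        return new_index
    elif isinstance(label_or_index, str):
        # String label: translate to an index, shift, translate back.
        new_index = basis.index(label_or_index) + n
        if not 0 <= new_index < len(basis):
            raise IndexError("index %d out of range" % new_index)
        return basis[new_index]
    raise TypeError(
        "Invalid type for label_or_index: %s"
        % type(label_or_index).__name__)
```

The real method adds a third branch for symbolic labels, where the shift is applied to the underlying symbolic expression instead.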
def basis_states(self): from qnet.algebra.core.state_algebra import BasisKet, TensorKet # importing locally avoids circular import ls_bases = [ls.basis_labels for ls in self.local_factors] for label_tuple in cartesian_product(*ls_bases): yield TensorKet( *[BasisKet(label, hs=ls) for (ls, label) in zip(self.local_factors, label_tuple)])
Yield an iterator over the states (:class:`.TensorKet` instances) that form the canonical basis of the Hilbert space Raises: .BasisNotSetError: if the Hilbert space has no defined basis
def basis_state(self, index_or_label): from qnet.algebra.core.state_algebra import BasisKet, TensorKet if isinstance(index_or_label, int): # index ls_bases = [ls.basis_labels for ls in self.local_factors] label_tuple = list(cartesian_product(*ls_bases))[index_or_label] try: return TensorKet( *[BasisKet(label, hs=ls) for (ls, label) in zip(self.local_factors, label_tuple)]) except ValueError as exc_info: raise IndexError(str(exc_info)) else: # label local_labels = index_or_label.split(",") if len(local_labels) != len(self.local_factors): raise KeyError( "label %s for Hilbert space %s must be comma-separated " "concatenation of local labels" % (index_or_label, self)) try: return TensorKet( *[BasisKet(label, hs=ls) for (ls, label) in zip(self.local_factors, local_labels)]) except ValueError as exc_info: raise KeyError(str(exc_info))
Return the basis state with the given index or label. Raises: .BasisNotSetError: if the Hilbert space has no defined basis IndexError: if there is no basis state with the given index KeyError: if there is no basis state with the given label
def remove(self, other): if other is FullSpace: return TrivialSpace if other is TrivialSpace: return self if isinstance(other, ProductSpace): oops = set(other.operands) else: oops = {other} return ProductSpace.create( *sorted(set(self.operands).difference(oops)))
Remove a particular factor from a tensor product space.
def intersect(self, other): if other is FullSpace: return self if other is TrivialSpace: return TrivialSpace if isinstance(other, ProductSpace): other_ops = set(other.operands) else: other_ops = {other} return ProductSpace.create( *sorted(set(self.operands).intersection(other_ops)))
Find the mutual tensor factors of two Hilbert spaces.
def identifier(self): identifier = self._hs._local_identifiers.get( self.__class__.__name__, self._hs._local_identifiers.get( 'Create', self._identifier)) if not self._rx_identifier.match(identifier): raise ValueError( "identifier '%s' does not match pattern '%s'" % (identifier, self._rx_identifier.pattern)) return identifier
The identifier (symbol) that is used when printing the annihilation operator. This is identical to the identifier of :class:`Create`. A custom identifier for both :class:`Destroy` and :class:`Create` can be set through the `local_identifiers` parameter of the associated Hilbert space:: >>> hs_custom = LocalSpace(0, local_identifiers={'Destroy': 'b'}) >>> Create(hs=hs_custom).identifier 'b' >>> Destroy(hs=hs_custom).identifier 'b'
def _isinstance(expr, classname): for cls in type(expr).__mro__: if cls.__name__ == classname: return True return False
Check whether `expr` is an instance of the class with name `classname` This is like the builtin `isinstance`, but it takes the `classname` as a string, instead of the class directly. Useful when we don't want to import the class against which we check (also, remember that printers choose the rendering method based on the class name, so this is totally ok)
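Because the check walks the MRO and compares class *names*, it also matches base classes without the caller ever importing them. A minimal illustration (the classes here are stand-ins, not the library's own):

```python
class Operator:
    pass


class LocalOperator(Operator):
    pass


def isinstance_by_name(expr, classname):
    # Walk the method resolution order and compare class names only,
    # so the class itself never needs to be imported at the call site.
    return any(cls.__name__ == classname for cls in type(expr).__mro__)
```

This matches base-class names too, just like `isinstance` matches base classes.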
def _get_from_cache(self, expr):
    # The reason this method is separated out from `doprint` is that
    # printers that use indentation, e.g. IndentedSReprPrinter, need to
    # override how caching is handled, applying variable indentation even
    # for cached results
    try:
        is_cached = expr in self.cache
    except TypeError:  # expr is unhashable
        is_cached = False
    if is_cached:
        return True, self.cache[expr]
    else:
        return False, None
Get the result of :meth:`doprint` from the internal cache
def _print_SCALAR_TYPES(self, expr, *args, **kwargs): adjoint = kwargs.get('adjoint', False) if adjoint: expr = expr.conjugate() if isinstance(expr, SympyBasic): self._sympy_printer._print_level = self._print_level + 1 res = self._sympy_printer.doprint(expr) else: # numeric type try: if int(expr) == expr: # In Python, objects that evaluate equal (e.g. 2.0 == 2) # have the same hash. We want to normalize this, so that we # get consistent results when printing with a cache expr = int(expr) except TypeError: pass if adjoint: kwargs = { key: val for (key, val) in kwargs.items() if key != 'adjoint'} res = self._print(expr, *args, **kwargs) return res
Render scalars
def doprint(self, expr, *args, **kwargs):
    allow_caching = self._allow_caching
    is_cached = False
    if len(args) > 0 or len(kwargs) > 0:
        # We don't want to cache "custom" renderings, such as the adjoint
        # of the actual expression (kwargs['adjoint'] is True). Otherwise,
        # we might return a cached value for args/kwargs that differ from
        # those with which the expression was originally cached.
        allow_caching = False
    if allow_caching:
        is_cached, res = self._get_from_cache(expr)
    if not is_cached:
        if isinstance(expr, Scalar._val_types):
            res = self._print_SCALAR_TYPES(expr, *args, **kwargs)
        elif isinstance(expr, str):
            return self._render_str(expr)
        else:
            # the _print method, inherited from SympyPrinter, implements
            # the internal dispatcher for steps (3-5)
            res = self._str(self._print(expr, *args, **kwargs))
        if allow_caching:
            self._write_to_cache(expr, res)
    return res
Returns printer's representation for expr (as a string) The representation is obtained by the following methods: 1. from the :attr:`cache` 2. If `expr` is a Sympy object, delegate to the :meth:`~sympy.printing.printer.Printer.doprint` method of :attr:`_sympy_printer` 3. Let the `expr` print itself if has the :attr:`printmethod` 4. Take the best fitting ``_print_*`` method of the printer 5. As fallback, delegate to :meth:`emptyPrinter` Any extra `args` or `kwargs` are passed to the internal `_print` method.
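Step 1, the cache lookup, has to tolerate unhashable expressions: a `TypeError` on membership testing is treated as a cache miss, and such expressions are never written back. The guard can be isolated into a small helper (the names below are illustrative, not the actual printer API):

```python
def cached_render(cache, expr, render):
    # Unhashable expressions (e.g. containing lists) raise TypeError on
    # dict lookup; treat that as "not cached" and skip writing them back.
    try:
        if expr in cache:
            return cache[expr]
        hashable = True
    except TypeError:
        hashable = False
    result = render(expr)
    if hashable:
        cache[expr] = result
    return result
```

Hashable expressions are rendered once and served from the cache thereafter; unhashable ones are simply re-rendered on every call.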
def decompose_space(H, A): return OperatorTrace.create( OperatorTrace.create(A, over_space=H.operands[-1]), over_space=ProductSpace.create(*H.operands[:-1]))
Simplifies OperatorTrace expressions over tensor-product spaces by turning it into iterated partial traces. Args: H (ProductSpace): The full space. A (Operator): The operator whose partial trace is taken. Returns: Operator: Iterative partial trace expression
def get_coeffs(expr, expand=False, epsilon=0.): if expand: expr = expr.expand() ret = defaultdict(int) operands = expr.operands if isinstance(expr, OperatorPlus) else [expr] for e in operands: c, t = _coeff_term(e) try: if abs(complex(c)) < epsilon: continue except TypeError: pass ret[t] += c return ret
Create a dictionary with all Operator terms of the expression (understood as a sum) as keys and their coefficients as values. The returned object is a defaultdict that returns 0. if a term/key doesn't exist. Args: expr: The operator expression to get all coefficients from. expand: Whether to expand the expression distributively. epsilon: If non-zero, drop all Operators with coefficients that have absolute value less than epsilon. Returns: dict: A dictionary ``{op1: coeff1, op2: coeff2, ...}``
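The collect-coefficients pattern can be shown on a toy sum representation, where each summand is already split into a `(coefficient, term)` pair (a stand-in for what `_coeff_term` produces on real `Operator` objects):

```python
from collections import defaultdict


def collect_coeffs(summands, epsilon=0.0):
    # Sum coefficients per term; optionally drop near-zero contributions.
    ret = defaultdict(int)
    for coeff, term in summands:
        if epsilon and abs(coeff) < epsilon:
            continue
        ret[term] += coeff
    return ret
```

As in `get_coeffs`, repeated terms accumulate, and missing keys read as 0 thanks to the defaultdict.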
def factor_coeff(cls, ops, kwargs):
    coeffs, nops = zip(*map(_coeff_term, ops))
    coeff = 1
    for c in coeffs:
        coeff *= c
    if coeff == 1:
        # nothing to factor out; return the (ops, kwargs) pair expected
        # by the simplification machinery
        return nops, kwargs
    else:
        return coeff * cls.create(*nops, **kwargs)
Factor out coefficients of all factors.
def rewrite_with_operator_pm_cc(expr): # TODO: move this to the toolbox from qnet.algebra.toolbox.core import temporary_rules def _combine_operator_p_cc(A, B): if B.adjoint() == A: return OperatorPlusMinusCC(A, sign=+1) else: raise CannotSimplify def _combine_operator_m_cc(A, B): if B.adjoint() == A: return OperatorPlusMinusCC(A, sign=-1) else: raise CannotSimplify def _scal_combine_operator_pm_cc(c, A, d, B): if B.adjoint() == A: if c == d: return c * OperatorPlusMinusCC(A, sign=+1) elif c == -d: return c * OperatorPlusMinusCC(A, sign=-1) raise CannotSimplify A = wc("A", head=Operator) B = wc("B", head=Operator) c = wc("c", head=Scalar) d = wc("d", head=Scalar) with temporary_rules(OperatorPlus, clear=True): OperatorPlus.add_rule( 'PM1', pattern_head(A, B), _combine_operator_p_cc) OperatorPlus.add_rule( 'PM2', pattern_head(pattern(ScalarTimesOperator, -1, B), A), _combine_operator_m_cc) OperatorPlus.add_rule( 'PM3', pattern_head( pattern(ScalarTimesOperator, c, A), pattern(ScalarTimesOperator, d, B)), _scal_combine_operator_pm_cc) return expr.rebuild()
Try to rewrite expr using :class:`OperatorPlusMinusCC` Example: >>> A = OperatorSymbol('A', hs=1) >>> sum = A + A.dag() >>> sum2 = rewrite_with_operator_pm_cc(sum) >>> print(ascii(sum2)) A^(1) + c.c.
def doit(self, classes=None, recursive=True, **kwargs): return super().doit(classes, recursive, **kwargs)
Write out commutator Write out the commutator according to its definition $[\Op{A}, \Op{B}] = \Op{A}\Op{B} - \Op{B}\Op{A}$. See :meth:`.Expression.doit`.
def _attrprint(d, delimiter=', '): return delimiter.join(('"%s"="%s"' % item) for item in sorted(d.items()))
Print a dictionary of attributes in the DOT format
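Sorting the items guarantees a deterministic attribute order in the emitted DOT source, which keeps diffs of generated graphs stable. A standalone version of the helper:

```python
def attrprint(d, delimiter=', '):
    # Sorted items make the DOT attribute list reproducible across runs,
    # independent of dict insertion order.
    return delimiter.join('"%s"="%s"' % item for item in sorted(d.items()))
```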
def _styleof(expr, styles): style = dict() for expr_filter, sty in styles: if expr_filter(expr): style.update(sty) return style
Merge style dictionaries in order
def expr_labelfunc(leaf_renderer=str, fallback=str): def _labelfunc(expr, is_leaf): if is_leaf: label = leaf_renderer(expr) elif isinstance(expr, Expression): if len(expr.kwargs) == 0: label = expr.__class__.__name__ else: label = "%s(..., %s)" % ( expr.__class__.__name__, ", ".join([ "%s=%s" % (key, val) for (key, val) in expr.kwargs.items()])) else: label = fallback(expr) return label return _labelfunc
Factory for function ``labelfunc(expr, is_leaf)`` It has the following behavior: * If ``is_leaf`` is True, return ``leaf_renderer(expr)``. * Otherwise, - if `expr` is an Expression, return a custom string similar to :func:`~qnet.printing.srepr`, but with an ellipsis for ``args`` - otherwise, return ``fallback(expr)``
def _git_version():
    import subprocess
    import os

    def _minimal_ext_cmd(cmd):
        # construct minimal environment
        env = {}
        for k in ['SYSTEMROOT', 'PATH']:
            v = os.environ.get(k)
            if v is not None:
                env[k] = v
        # LANGUAGE is used on win32
        env['LANGUAGE'] = 'C'
        env['LANG'] = 'C'
        env['LC_ALL'] = 'C'
        cwd = os.path.dirname(os.path.realpath(__file__))
        # suppress stderr via DEVNULL instead of leaking an open file handle
        proc = subprocess.Popen(
            cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
            env=env, cwd=cwd)
        out = proc.communicate()[0]
        return out

    try:
        out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
        return out.strip().decode('ascii')
    except OSError:
        return "unknown"
If installed with 'pip install -e .' from inside a git repo, return the current git revision as a string ("unknown" if it cannot be determined)
def FB(circuit, *, out_port=None, in_port=None): if out_port is None: out_port = circuit.cdim - 1 if in_port is None: in_port = circuit.cdim - 1 return Feedback.create(circuit, out_port=out_port, in_port=in_port)
Wrapper for :class:`.Feedback`, defaulting to last channel Args: circuit (Circuit): The circuit that undergoes self-feedback out_port (int): The output port index, default = None --> last port in_port (int): The input port index, default = None --> last port Returns: Circuit: The circuit with applied feedback operation.
def extract_channel(k, cdim): n = cdim perm = tuple(list(range(k)) + [n - 1] + list(range(k, n - 1))) return CPermutation.create(perm)
Create a :class:`CPermutation` that extracts channel `k` Return a permutation circuit that maps the k-th (zero-based) input to the last output, while preserving the relative order of all other channels. Args: k (int): Extracted channel index cdim (int): The circuit dimension (number of channels) Returns: Circuit: Permutation circuit
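The underlying image tuple is easy to verify by hand. Isolating just the pure-permutation part (without constructing a `CPermutation`): input `k` maps to the last output, while all later channels shift down by one to fill the gap, preserving relative order.

```python
def extract_channel_perm(k, cdim):
    # Input k maps to the last output (cdim - 1); inputs before k keep
    # their index, inputs after k shift down to fill the gap.
    n = cdim
    return tuple(list(range(k)) + [n - 1] + list(range(k, n - 1)))
```

For example, extracting channel 1 of a 4-channel circuit gives the image tuple `(0, 3, 1, 2)`: input 1 goes to output 3 (the last), inputs 0, 2, 3 go to outputs 0, 1, 2.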
def map_channels(mapping, cdim): n = cdim free_values = list(range(n)) for v in mapping.values(): if v >= n: raise ValueError('the mapping cannot take on values larger than ' 'cdim - 1') free_values.remove(v) for k in mapping: if k >= n: raise ValueError('the mapping cannot map keys larger than ' 'cdim - 1') permutation = [] for k in range(n): if k in mapping: permutation.append(mapping[k]) else: permutation.append(free_values.pop(0)) return CPermutation.create(tuple(permutation))
Create a :class:`CPermutation` based on a dict of channel mappings For a given mapping in form of a dictionary, generate the channel-permuting circuit that achieves the specified mapping while leaving the relative order of all non-specified channels intact. Args: mapping (dict): Input-output mapping of indices (zero-based) ``{in1:out1, in2:out2,...}`` cdim (int): The circuit dimension (number of channels) Returns: CPermutation: Circuit mapping the channels as specified
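The permutation-building part of `map_channels` can be exercised on its own: channels fixed by the mapping get their specified images, and unmapped channels greedily take the lowest remaining output slots in ascending order. A sketch returning the raw permutation tuple instead of a `CPermutation` (the function name is illustrative):

```python
def build_channel_perm(mapping, cdim):
    # Fixed images come from `mapping`; remaining inputs take the free
    # output slots in ascending order, preserving their relative order.
    if any(v >= cdim for v in mapping.values()):
        raise ValueError('the mapping cannot take on values >= cdim')
    if any(k >= cdim for k in mapping):
        raise ValueError('the mapping cannot map keys >= cdim')
    used = set(mapping.values())
    free = [v for v in range(cdim) if v not in used]
    return tuple(mapping[k] if k in mapping else free.pop(0)
                 for k in range(cdim))
```

E.g. mapping channel 0 to output 2 in a 3-channel system yields `(2, 0, 1)`: channels 1 and 2 slide into the freed slots 0 and 1.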
def pad_with_identity(circuit, k, n): circuit_n = circuit.cdim combined_circuit = circuit + circuit_identity(n) permutation = (list(range(k)) + list(range(circuit_n, circuit_n + n)) + list(range(k, circuit_n))) return (CPermutation.create(invert_permutation(permutation)) << combined_circuit << CPermutation.create(permutation))
Pad a circuit by adding a `n`-channel identity circuit at index `k` That is, a circuit of channel dimension $N$ is extended to one of channel dimension $N+n$, where the channels $k$, $k+1$, ...$k+n-1$, just pass through the system unaffected. E.g. let ``A``, ``B`` be two single channel systems:: >>> A = CircuitSymbol('A', cdim=1) >>> B = CircuitSymbol('B', cdim=1) >>> print(ascii(pad_with_identity(A+B, 1, 2))) A + cid(2) + B This method can also be applied to irreducible systems, but in that case the result can not be decomposed as nicely. Args: circuit (Circuit): circuit to pad k (int): The index at which to insert the circuit n (int): The number of channels to pass through Returns: Circuit: An extended circuit that passes through the channels $k$, $k+1$, ..., $k+n-1$
def prepare_adiabatic_limit(slh, k=None): if k is None: k = symbols('k', positive=True) Ld = slh.L.dag() LdL = (Ld * slh.L)[0, 0] K = (-LdL / 2 + I * slh.H).expand().simplify_scalar() N = slh.S.dag() B, A, Y = K.series_expand(k, 0, 2) G, F = Ld.series_expand(k, 0, 1) return Y, A, B, F, G, N
Prepare the adiabatic elimination on an SLH object Args: slh: The SLH object to take the limit for k: The scaling parameter $k \rightarrow \infty$. The default is a positive symbol 'k' Returns: tuple: The objects ``Y, A, B, F, G, N`` necessary to compute the limiting system.
def eval_adiabatic_limit(YABFGN, Ytilde, P0): Y, A, B, F, G, N = YABFGN Klim = (P0 * (B - A * Ytilde * A) * P0).expand().simplify_scalar() Hlim = ((Klim - Klim.dag())/2/I).expand().simplify_scalar() Ldlim = (P0 * (G - A * Ytilde * F) * P0).expand().simplify_scalar() dN = identity_matrix(N.shape[0]) + F.H * Ytilde * F Nlim = (P0 * N * dN * P0).expand().simplify_scalar() return SLH(Nlim.dag(), Ldlim.dag(), Hlim.dag())
Compute the limiting SLH model for the adiabatic approximation Args: YABFGN: The tuple (Y, A, B, F, G, N) as returned by prepare_adiabatic_limit. Ytilde: The pseudo-inverse of Y, satisfying Y * Ytilde = P0. P0: The projector onto the null-space of Y. Returns: SLH: Limiting SLH model
def index_in_block(self, channel_index: int) -> tuple:
    if channel_index < 0 or channel_index >= self.cdim:
        raise ValueError(
            "channel_index %d out of range" % channel_index)
    struct = self.block_structure
    if len(struct) == 1:
        return channel_index, 0
    i = 1
    while sum(struct[:i]) <= channel_index and i < self.cdim:
        i += 1
    block_index = i - 1
    index_in_block = channel_index - sum(struct[:block_index])
    return index_in_block, block_index
Return the index a channel has within the subblock it belongs to, together with the index of that block, as a tuple ``(index_in_block, block_index)`` I.e., only for reducible circuits does this give a result different from ``(channel_index, 0)``. Args: channel_index (int): The index of the external channel Raises: ValueError: for an invalid `channel_index`
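The lookup is cumulative-sum arithmetic over the block structure; a standalone version (function name illustrative, taking the block structure as an explicit argument) makes the convention concrete:

```python
def index_in_block(block_structure, channel_index):
    # Walk the blocks, accumulating their sizes, until the block that
    # contains channel_index is found; return (offset within block, block).
    if not 0 <= channel_index < sum(block_structure):
        raise ValueError("channel_index %d out of range" % channel_index)
    offset = 0
    for block_index, size in enumerate(block_structure):
        if channel_index < offset + size:
            return channel_index - offset, block_index
        offset += size
```

With blocks of sizes `(2, 3, 1)`, channel 3 is the second channel of the middle block: `(1, 1)`.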
def get_blocks(self, block_structure=None): if block_structure is None: block_structure = self.block_structure try: return self._get_blocks(block_structure) except IncompatibleBlockStructures as e: raise e
For a reducible circuit, get a sequence of subblocks that when concatenated again yield the original circuit. The block structure given has to be compatible with the circuit's actual block structure, i.e. it can only be more coarse-grained. Args: block_structure (tuple): The block structure according to which the subblocks are generated (default = ``None``, corresponds to the circuit's own block structure) Returns: A tuple of subblocks that the circuit consists of. Raises: .IncompatibleBlockStructures
def feedback(self, *, out_port=None, in_port=None): if out_port is None: out_port = self.cdim - 1 if in_port is None: in_port = self.cdim - 1 return self._feedback(out_port=out_port, in_port=in_port)
Return a circuit with self-feedback from the output port (zero-based) ``out_port`` to the input port ``in_port``. Args: out_port (int or None): The output port from which the feedback connection leaves (zero-based, default ``None`` corresponds to the *last* port). in_port (int or None): The input port into which the feedback connection goes (zero-based, default ``None`` corresponds to the *last* port).
def show(self): # noinspection PyPackageRequirements from IPython.display import Image, display fname = self.render() display(Image(filename=fname))
Show the circuit expression in an IPython notebook.
def render(self, fname=''):
    import qnet.visualization.circuit_pyx as circuit_visualization
    from tempfile import gettempdir
    from time import time, sleep

    if not fname:
        tmp_dir = gettempdir()
        # hash the *current time*, not the `time` function object
        fname = os.path.join(tmp_dir, "tmp_{}.png".format(hash(time())))

    if circuit_visualization.draw_circuit(self, fname):
        # the file may be written asynchronously; poll for up to 10 seconds
        for _ in range(20):
            if os.path.exists(fname):
                return fname
            sleep(.5)

    raise CannotVisualize()
Render the circuit expression and store the result in a file Args: fname (str): Path to an image file to store the result in. Returns: str: The path to the image file
def space(self): args_spaces = (self.S.space, self.L.space, self.H.space) return ProductSpace.create(*args_spaces)
Total Hilbert space
def free_symbols(self): return set.union( self.S.free_symbols, self.L.free_symbols, self.H.free_symbols)
Set of all symbols occurring in S, L, or H
def series_with_slh(self, other): new_S = self.S * other.S new_L = self.S * other.L + self.L def ImAdjoint(m): return (m.H - m) * (I / 2) delta = ImAdjoint(self.L.adjoint() * self.S * other.L) if isinstance(delta, Matrix): new_H = self.H + other.H + delta[0, 0] else: assert delta == 0 new_H = self.H + other.H return SLH(new_S, new_L, new_H)
Series product with another :class:`SLH` object Args: other (SLH): An upstream SLH circuit. Returns: SLH: The combined system.
def concatenate_slh(self, other): selfS = self.S otherS = other.S new_S = block_matrix( selfS, zerosm((selfS.shape[0], otherS.shape[1]), dtype=int), zerosm((otherS.shape[0], selfS.shape[1]), dtype=int), otherS) new_L = vstackm((self.L, other.L)) new_H = self.H + other.H return SLH(new_S, new_L, new_H)
Concatenation with another :class:`SLH` object
def expand(self): return SLH(self.S.expand(), self.L.expand(), self.H.expand())
Expand out all operator expressions within S, L and H Return a new :class:`SLH` object with these expanded expressions.
def simplify_scalar(self, func=sympy.simplify): return SLH( self.S.simplify_scalar(func=func), self.L.simplify_scalar(func=func), self.H.simplify_scalar(func=func))
Simplify all scalar expressions within S, L and H Return a new :class:`SLH` object with the simplified expressions. See also: :meth:`.QuantumExpression.simplify_scalar`
def symbolic_master_equation(self, rho=None): L, H = self.L, self.H if rho is None: rho = OperatorSymbol('rho', hs=self.space) return (-I * (H * rho - rho * H) + sum(Lk * rho * adjoint(Lk) - (adjoint(Lk) * Lk * rho + rho * adjoint(Lk) * Lk) / 2 for Lk in L.matrix.ravel()))
Compute the symbolic Liouvillian acting on a state rho If no rho is given, an OperatorSymbol is created in its place. This corresponds to the RHS of the master equation in which an average is taken over the external noise degrees of freedom. Args: rho (Operator): A symbolic density matrix operator Returns: Operator: The RHS of the master equation.
def symbolic_heisenberg_eom( self, X=None, noises=None, expand_simplify=True): L, H = self.L, self.H if X is None: X = OperatorSymbol('X', hs=(L.space | H.space)) summands = [I * (H * X - X * H), ] for Lk in L.matrix.ravel(): summands.append(adjoint(Lk) * X * Lk) summands.append(-(adjoint(Lk) * Lk * X + X * adjoint(Lk) * Lk) / 2) if noises is not None: if not isinstance(noises, Matrix): noises = Matrix(noises) LambdaT = (noises.adjoint().transpose() * noises.transpose()).transpose() assert noises.shape == L.shape S = self.S summands.append((adjoint(noises) * S.adjoint() * (X * L - L * X)) .expand()[0, 0]) summand = (((L.adjoint() * X - X * L.adjoint()) * S * noises) .expand()[0, 0]) summands.append(summand) if len(S.space & X.space): comm = (S.adjoint() * X * S - X) summands.append((comm * LambdaT).expand().trace()) ret = OperatorPlus.create(*summands) if expand_simplify: ret = ret.expand().simplify_scalar() return ret
Compute the symbolic Heisenberg equations of motion of a system operator X. If no X is given, an OperatorSymbol is created in its place. If no noises are given, this corresponds to the ensemble-averaged Heisenberg equation of motion. Args: X (Operator): A system operator noises (Operator): A vector of noise inputs Returns: Operator: The RHS of the Heisenberg equations of motion of X.
def block_perms(self): if not self._block_perms: self._block_perms = permutation_to_block_permutations( self.permutation) return self._block_perms
If the circuit is reducible into permutations within subranges of the full range of channels, this yields a tuple with the internal permutations for each such block. :type: tuple
def series_with_permutation(self, other): combined_permutation = tuple([self.permutation[p] for p in other.permutation]) return CPermutation.create(combined_permutation)
Compute the series product with another channel permutation circuit Args: other (CPermutation): Returns: Circuit: The composite permutation circuit (could also be the identity circuit for n channels)
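The composite is just composition of the two index maps, `combined[i] = self[other[i]]` (apply `other` first, then `self`). As a pure-tuple sketch:

```python
def compose_perms(p_self, p_other):
    # Apply p_other first, then p_self: combined[i] = p_self[p_other[i]].
    return tuple(p_self[p] for p in p_other)
```

Composing a permutation with its inverse yields the identity, which is why the series product of a `CPermutation` and its inverse reduces to the identity circuit.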
def _factorize_for_rhs(self, rhs): block_structure = rhs.block_structure block_perm, perms_within_blocks \ = block_perm_and_perms_within_blocks(self.permutation, block_structure) fblockp = full_block_perm(block_perm, block_structure) if not sorted(fblockp) == list(range(self.cdim)): raise BadPermutationError() new_rhs_circuit = CPermutation.create(fblockp) within_blocks = [CPermutation.create(within_block) for within_block in perms_within_blocks] within_perm_circuit = Concatenation.create(*within_blocks) rhs_blocks = rhs.get_blocks(block_structure) summands = [SeriesProduct.create(within_blocks[p], rhs_blocks[p]) for p in invert_permutation(block_perm)] permuted_rhs_circuit = Concatenation.create(*summands) new_lhs_circuit = (self << within_perm_circuit.series_inverse() << new_rhs_circuit.series_inverse()) return new_lhs_circuit, permuted_rhs_circuit, new_rhs_circuit
Factorize a channel permutation circuit according to the block structure of the upstream circuit. This makes it possible to move as much of the permutation as possible *around* a reducible circuit upstream. It basically decomposes ``permutation << rhs --> permutation' << rhs' << residual'`` where rhs' is just a block-permuted version of rhs and residual' is the maximal part of the permutation that one may move around rhs. Args: rhs (Circuit): An upstream circuit object Returns: tuple: new_lhs_circuit, permuted_rhs_circuit, new_rhs_circuit Raises: .BadPermutationError
def _factor_rhs(self, in_port): n = self.cdim if not (0 <= in_port < n): raise Exception in_im = self.permutation[in_port] # (I) is equivalent to # m_{in_im -> (n-1)} << self << m_{(n-1) -> in_port} # == (red_self + cid(1)) (I') red_self_plus_cid1 = (map_channels({in_im: (n - 1)}, n) << self << map_channels({(n - 1): in_port}, n)) if isinstance(red_self_plus_cid1, CPermutation): #make sure we can factor assert red_self_plus_cid1.permutation[(n - 1)] == (n - 1) #form reduced permutation object red_self = CPermutation.create(red_self_plus_cid1.permutation[:-1]) return in_im, red_self else: # 'red_self_plus_cid1' must be the identity for n channels. # Actually, this case can only occur # when self == m_{in_port -> in_im} return in_im, circuit_identity(n - 1)
With:: n := self.cdim in_im := self.permutation[in_port] m_{k->l} := map_signals_circuit({k:l}, n) solve the equation (I) containing ``self``:: self << m_{(n-1) -> in_port} == m_{(n-1) -> in_im} << (red_self + cid(1)) (I) for the (n-1) channel CPermutation ``red_self``. Return in_im, red_self. This is useful when ``self`` is the RHS in a SeriesProduct Object that is within a Feedback loop as it allows to extract the feedback channel from the permutation and moving the remaining part of the permutation (``red_self``) outside of the feedback loop. :param int in_port: The index for which to factor.