<SYSTEM_TASK:> Executes a batch of SQL DML statements. This method allows many <END_TASK> <USER_TASK:> Description: def execute_batch_dml( self, session, transaction, statements, seqno, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Executes a batch of SQL DML statements. This method allows many statements to be run with lower latency than submitting them sequentially with ``ExecuteSql``. Statements are executed in order, sequentially. ``ExecuteBatchDmlResponse`` will contain a ``ResultSet`` for each DML statement that has successfully executed. If a statement fails, its error status will be returned as part of the ``ExecuteBatchDmlResponse``. Execution will stop at the first failed statement; the remaining statements will not run. ExecuteBatchDml is expected to return an OK status with a response even if there was an error while processing one of the DML statements. Clients must inspect response.status to determine if there were any errors while processing the request. See more details in ``ExecuteBatchDmlRequest`` and ``ExecuteBatchDmlResponse``. Example: >>> from google.cloud import spanner_v1 >>> >>> client = spanner_v1.SpannerClient() >>> >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') >>> >>> # TODO: Initialize `transaction`: >>> transaction = {} >>> >>> # TODO: Initialize `statements`: >>> statements = [] >>> >>> # TODO: Initialize `seqno`: >>> seqno = 0 >>> >>> response = client.execute_batch_dml(session, transaction, statements, seqno) Args: session (str): Required. The session in which the DML statements should be performed. transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionSelector]): The transaction to use. A ReadWrite transaction is required. Single-use transactions are not supported (to avoid replay). The caller must either supply an existing transaction ID or begin a new transaction. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.TransactionSelector` statements (list[Union[dict, ~google.cloud.spanner_v1.types.Statement]]): The list of statements to execute in this batch. Statements are executed serially, such that the effects of statement i are visible to statement i+1. Each statement must be a DML statement. Execution will stop at the first failed statement; the remaining statements will not run. REQUIRES: statements\_size() > 0. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.Statement` seqno (long): A per-transaction sequence number used to identify this request. This is used in the same space as the seqno in ``ExecuteSqlRequest``. See more details in ``ExecuteSqlRequest``. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.spanner_v1.types.ExecuteBatchDmlResponse` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. 
"""
# Wrap the transport method to add retry and timeout logic.
if "execute_batch_dml" not in self._inner_api_calls:
    self._inner_api_calls[
        "execute_batch_dml"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.execute_batch_dml,
        default_retry=self._method_configs["ExecuteBatchDml"].retry,
        default_timeout=self._method_configs["ExecuteBatchDml"].timeout,
        client_info=self._client_info,
    )

request = spanner_pb2.ExecuteBatchDmlRequest(
    session=session, transaction=transaction, statements=statements, seqno=seqno
)
if metadata is None:
    metadata = []
metadata = list(metadata)
try:
    routing_header = [("session", session)]
except AttributeError:
    pass
else:
    routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
        routing_header
    )
    metadata.append(routing_metadata)

return self._inner_api_calls["execute_batch_dml"](
    request, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Commits a transaction. The request includes the mutations to be applied <END_TASK> <USER_TASK:> Description: def commit( self, session, mutations, transaction_id=None, single_use_transaction=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Commits a transaction. The request includes the mutations to be applied to rows in the database. ``Commit`` might return an ``ABORTED`` error. This can occur at any time; commonly, the cause is conflicts with concurrent transactions. However, it can also happen for a variety of other reasons. If ``Commit`` returns ``ABORTED``, the caller should re-attempt the transaction from the beginning, re-using the same session. Example: >>> from google.cloud import spanner_v1 >>> >>> client = spanner_v1.SpannerClient() >>> >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') >>> >>> # TODO: Initialize `mutations`: >>> mutations = [] >>> >>> response = client.commit(session, mutations) Args: session (str): Required. The session in which the transaction to be committed is running. mutations (list[Union[dict, ~google.cloud.spanner_v1.types.Mutation]]): The mutations to be executed when this transaction commits. All mutations are applied atomically, in the order they appear in this list. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.Mutation` transaction_id (bytes): Commit a previously-started transaction. single_use_transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionOptions]): Execute mutations in a temporary transaction. Note that unlike commit of a previously-started transaction, commit with a temporary transaction is non-idempotent. That is, if the ``CommitRequest`` is sent to Cloud Spanner more than once (for instance, due to retries in the application, or in the transport library), it is possible that the mutations are executed more than once. If this is undesirable, use ``BeginTransaction`` and ``Commit`` instead. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.TransactionOptions` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.spanner_v1.types.CommitResponse` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "commit" not in self._inner_api_calls:
    self._inner_api_calls[
        "commit"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.commit,
        default_retry=self._method_configs["Commit"].retry,
        default_timeout=self._method_configs["Commit"].timeout,
        client_info=self._client_info,
    )

# Sanity check: We have some fields which are mutually exclusive;
# raise ValueError if more than one is sent.
google.api_core.protobuf_helpers.check_oneof(
    transaction_id=transaction_id, single_use_transaction=single_use_transaction
)

request = spanner_pb2.CommitRequest(
    session=session,
    mutations=mutations,
    transaction_id=transaction_id,
    single_use_transaction=single_use_transaction,
)
if metadata is None:
    metadata = []
metadata = list(metadata)
try:
    routing_header = [("session", session)]
except AttributeError:
    pass
else:
    routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
        routing_header
    )
    metadata.append(routing_metadata)

return self._inner_api_calls["commit"](
    request, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Creates a set of partition tokens that can be used to execute a query <END_TASK> <USER_TASK:> Description: def partition_query( self, session, sql, transaction=None, params=None, param_types=None, partition_options=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Creates a set of partition tokens that can be used to execute a query operation in parallel. Each of the returned partition tokens can be used by ``ExecuteStreamingSql`` to specify a subset of the query result to read. The same session and read-only transaction must be used by the PartitionQueryRequest used to create the partition tokens and the ExecuteSqlRequests that use the partition tokens. Partition tokens become invalid when the session used to create them is deleted, is idle for too long, begins a new transaction, or becomes too old. When any of these happen, it is not possible to resume the query, and the whole operation must be restarted from the beginning. Example: >>> from google.cloud import spanner_v1 >>> >>> client = spanner_v1.SpannerClient() >>> >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') >>> >>> # TODO: Initialize `sql`: >>> sql = '' >>> >>> response = client.partition_query(session, sql) Args: session (str): Required. The session used to create the partitions. sql (str): The query request to generate partitions for. The request will fail if the query is not root partitionable. The query plan of a root partitionable query has a single distributed union operator. A distributed union operator conceptually divides one or more tables into multiple splits, remotely evaluates a subquery independently on each split, and then unions all results. This must not contain DML commands, such as INSERT, UPDATE, or DELETE. Use ``ExecuteStreamingSql`` with a PartitionedDml transaction for large, partition-friendly DML operations. transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionSelector]): Read only snapshot transactions are supported, read/write and single use transactions are not. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.TransactionSelector` params (Union[dict, ~google.cloud.spanner_v1.types.Struct]): The SQL query string can contain parameter placeholders. A parameter placeholder consists of ``'@'`` followed by the parameter name. Parameter names consist of any combination of letters, numbers, and underscores. Parameters can appear anywhere that a literal value is expected. The same parameter name can be used more than once, for example: ``"WHERE id > @msg_id AND id < @msg_id + 100"`` It is an error to execute an SQL query with unbound parameters. Parameter values are specified using ``params``, which is a JSON object whose keys are parameter names, and whose values are the corresponding parameter values. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.Struct` param_types (dict[str -> Union[dict, ~google.cloud.spanner_v1.types.Type]]): It is not always possible for Cloud Spanner to infer the right SQL type from a JSON value. For example, values of type ``BYTES`` and values of type ``STRING`` both appear in ``params`` as JSON strings. In these cases, ``param_types`` can be used to specify the exact SQL type for some or all of the SQL query parameters. See the definition of ``Type`` for more information about SQL types. 
If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.Type` partition_options (Union[dict, ~google.cloud.spanner_v1.types.PartitionOptions]): Additional options that affect how many partitions are created. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.PartitionOptions` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.spanner_v1.types.PartitionResponse` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "partition_query" not in self._inner_api_calls:
    self._inner_api_calls[
        "partition_query"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.partition_query,
        default_retry=self._method_configs["PartitionQuery"].retry,
        default_timeout=self._method_configs["PartitionQuery"].timeout,
        client_info=self._client_info,
    )

request = spanner_pb2.PartitionQueryRequest(
    session=session,
    sql=sql,
    transaction=transaction,
    params=params,
    param_types=param_types,
    partition_options=partition_options,
)
if metadata is None:
    metadata = []
metadata = list(metadata)
try:
    routing_header = [("session", session)]
except AttributeError:
    pass
else:
    routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
        routing_header
    )
    metadata.append(routing_metadata)

return self._inner_api_calls["partition_query"](
    request, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Creates a set of partition tokens that can be used to execute a read <END_TASK> <USER_TASK:> Description: def partition_read( self, session, table, key_set, transaction=None, index=None, columns=None, partition_options=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Creates a set of partition tokens that can be used to execute a read operation in parallel. Each of the returned partition tokens can be used by ``StreamingRead`` to specify a subset of the read result to read. The same session and read-only transaction must be used by the PartitionReadRequest used to create the partition tokens and the ReadRequests that use the partition tokens. There are no ordering guarantees on rows returned among the returned partition tokens, or even within each individual StreamingRead call issued with a partition\_token. Partition tokens become invalid when the session used to create them is deleted, is idle for too long, begins a new transaction, or becomes too old. When any of these happen, it is not possible to resume the read, and the whole operation must be restarted from the beginning. Example: >>> from google.cloud import spanner_v1 >>> >>> client = spanner_v1.SpannerClient() >>> >>> session = client.session_path('[PROJECT]', '[INSTANCE]', '[DATABASE]', '[SESSION]') >>> >>> # TODO: Initialize `table`: >>> table = '' >>> >>> # TODO: Initialize `key_set`: >>> key_set = {} >>> >>> response = client.partition_read(session, table, key_set) Args: session (str): Required. The session used to create the partitions. table (str): Required. The name of the table in the database to be read. key_set (Union[dict, ~google.cloud.spanner_v1.types.KeySet]): Required. ``key_set`` identifies the rows to be yielded. ``key_set`` names the primary keys of the rows in ``table`` to be yielded, unless ``index`` is present. If ``index`` is present, then ``key_set`` instead names index keys in ``index``. It is not an error for the ``key_set`` to name rows that do not exist in the database. Read yields nothing for nonexistent rows. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.KeySet` transaction (Union[dict, ~google.cloud.spanner_v1.types.TransactionSelector]): Read only snapshot transactions are supported, read/write and single use transactions are not. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.TransactionSelector` index (str): If non-empty, the name of an index on ``table``. This index is used instead of the table primary key when interpreting ``key_set`` and sorting result rows. See ``key_set`` for further information. columns (list[str]): The columns of ``table`` to be returned for each row matching this request. partition_options (Union[dict, ~google.cloud.spanner_v1.types.PartitionOptions]): Additional options that affect how many partitions are created. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_v1.types.PartitionOptions` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. 
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.spanner_v1.types.PartitionResponse` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "partition_read" not in self._inner_api_calls:
    self._inner_api_calls[
        "partition_read"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.partition_read,
        default_retry=self._method_configs["PartitionRead"].retry,
        default_timeout=self._method_configs["PartitionRead"].timeout,
        client_info=self._client_info,
    )

request = spanner_pb2.PartitionReadRequest(
    session=session,
    table=table,
    key_set=key_set,
    transaction=transaction,
    index=index,
    columns=columns,
    partition_options=partition_options,
)
if metadata is None:
    metadata = []
metadata = list(metadata)
try:
    routing_header = [("session", session)]
except AttributeError:
    pass
else:
    routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
        routing_header
    )
    metadata.append(routing_metadata)

return self._inner_api_calls["partition_read"](
    request, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Create an instance of the GAPIC Datastore API. <END_TASK> <USER_TASK:> Description: def make_datastore_api(client): """Create an instance of the GAPIC Datastore API. :type client: :class:`~google.cloud.datastore.client.Client` :param client: The client that holds configuration details. :rtype: :class:`.datastore.v1.datastore_client.DatastoreClient` :returns: A datastore API instance with the proper credentials. """
parse_result = six.moves.urllib_parse.urlparse(client._base_url)
host = parse_result.netloc
if parse_result.scheme == "https":
    channel = make_secure_channel(client._credentials, DEFAULT_USER_AGENT, host)
else:
    channel = insecure_channel(host)

return datastore_client.DatastoreClient(
    channel=channel,
    client_info=client_info.ClientInfo(
        client_library_version=__version__, gapic_version=__version__
    ),
)
<SYSTEM_TASK:> Indent some text. <END_TASK> <USER_TASK:> Description: def _indent(lines, prefix="  "): """Indent some text. Note that this is present as ``textwrap.indent``, but not in Python 2. Args: lines (str): The newline-delimited string to be indented. prefix (Optional[str]): The prefix to indent each line with. Defaults to two spaces. Returns: str: The newly indented content. """
indented = []
for line in lines.split("\n"):
    indented.append(prefix + line)
return "\n".join(indented)
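A quick usage sketch of the helper above (the input string is purely illustrative); it plays the role of ``textwrap.indent`` described in the docstring:

text = "SELECT *\nFROM my_table"
print(_indent(text, prefix="    "))
#     SELECT *
#     FROM my_table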
<SYSTEM_TASK:> Return the time that the message was originally published. <END_TASK> <USER_TASK:> Description: def publish_time(self): """Return the time that the message was originally published. Returns: datetime: The date and time that the message was published. """
timestamp = self._message.publish_time
delta = datetime.timedelta(
    seconds=timestamp.seconds, microseconds=timestamp.nanos // 1000
)
return datetime_helpers._UTC_EPOCH + delta
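For reference, a standard-library-only sketch of the same arithmetic with made-up timestamp fields (a protobuf ``Timestamp`` carries ``seconds`` and ``nanos``; nanoseconds are truncated to microsecond precision before being added to the UTC epoch):

import datetime

seconds, nanos = 1546300800, 500000000  # hypothetical publish time fields

utc_epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
publish_time = utc_epoch + datetime.timedelta(
    seconds=seconds, microseconds=nanos // 1000
)
print(publish_time)  # 2019-01-01 00:00:00.500000+00:00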
<SYSTEM_TASK:> Acknowledge the given message. <END_TASK> <USER_TASK:> Description: def ack(self): """Acknowledge the given message. Acknowledging a message in Pub/Sub means that you are done with it, and it will not be delivered to this subscription again. You should avoid acknowledging messages until you have *finished* processing them, so that in the event of a failure, you receive the message again. .. warning:: Acks in Pub/Sub are best effort. You should always ensure that your processing code is idempotent, as you may receive any given message more than once. """
time_to_ack = math.ceil(time.time() - self._received_timestamp)
self._request_queue.put(
    requests.AckRequest(
        ack_id=self._ack_id, byte_size=self.size, time_to_ack=time_to_ack
    )
)
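In practice this method is usually called from a streaming-pull callback. A minimal sketch, with placeholder project and subscription names and a hypothetical ``process`` handler:

from google.cloud import pubsub_v1

def callback(message):
    process(message.data)  # hypothetical processing step
    # Ack only after processing succeeds; keep the handler idempotent,
    # since acks are best effort and redelivery is always possible.
    message.ack()

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")
future = subscriber.subscribe(subscription_path, callback=callback)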
<SYSTEM_TASK:> Release the message from lease management. <END_TASK> <USER_TASK:> Description: def drop(self): """Release the message from lease management. This informs the policy to no longer hold on to the lease for this message. Pub/Sub will re-deliver the message if it is not acknowledged before the existing lease expires. .. warning:: For most use cases, the only reason to drop a message from lease management is on :meth:`ack` or :meth:`nack`; these methods both call this one. You probably do not want to call this method directly. """
self._request_queue.put(
    requests.DropRequest(ack_id=self._ack_id, byte_size=self.size)
)
<SYSTEM_TASK:> Inform the policy to lease this message continually. <END_TASK> <USER_TASK:> Description: def lease(self): """Inform the policy to lease this message continually. .. note:: This method is called by the constructor, and you should never need to call it manually. """
self._request_queue.put(
    requests.LeaseRequest(ack_id=self._ack_id, byte_size=self.size)
)
<SYSTEM_TASK:> Resets the deadline for acknowledgement. <END_TASK> <USER_TASK:> Description: def modify_ack_deadline(self, seconds): """Resets the deadline for acknowledgement. The new deadline will be the given value of seconds from now. The default implementation handles this for you; you should not need to manually deal with setting ack deadlines. The exception case is if you are implementing your own custom subclass of :class:`~.pubsub_v1.subscriber._consumer.Consumer`. Args: seconds (int): The number of seconds to set the lease deadline to. This should be between 0 and 600. Due to network latency, values below 10 are advised against. """
self._request_queue.put(
    requests.ModAckRequest(ack_id=self._ack_id, seconds=seconds)
)
<SYSTEM_TASK:> Decline to acknowledge the given message. <END_TASK> <USER_TASK:> Description: def nack(self): """Decline to acknowledge the given message. This will cause the message to be re-delivered to the subscription. """
self._request_queue.put(
    requests.NackRequest(ack_id=self._ack_id, byte_size=self.size)
)
<SYSTEM_TASK:> Runs a query while printing status updates <END_TASK> <USER_TASK:> Description: def _run_query(client, query, job_config=None): """Runs a query while printing status updates Args: client (google.cloud.bigquery.client.Client): Client to bundle configuration needed for API requests. query (str): SQL query to be executed. Defaults to the standard SQL dialect. Use the ``job_config`` parameter to change dialects. job_config (google.cloud.bigquery.job.QueryJobConfig, optional): Extra configuration options for the job. Returns: google.cloud.bigquery.job.QueryJob: the query job created Example: >>> client = bigquery.Client() >>> _run_query(client, "SELECT 17") Executing query with job ID: bf633912-af2c-4780-b568-5d868058632b Query executing: 1.66s Query complete after 2.07s 'bf633912-af2c-4780-b568-5d868058632b' """
start_time = time.time()
query_job = client.query(query, job_config=job_config)
print("Executing query with job ID: {}".format(query_job.job_id))

while True:
    print("\rQuery executing: {:0.2f}s".format(time.time() - start_time), end="")
    try:
        query_job.result(timeout=0.5)
        break
    except futures.TimeoutError:
        continue

print("\nQuery complete after {:0.2f}s".format(time.time() - start_time))
return query_job
<SYSTEM_TASK:> Underlying function for bigquery cell magic <END_TASK> <USER_TASK:> Description: def _cell_magic(line, query): """Underlying function for bigquery cell magic Note: This function contains the underlying logic for the 'bigquery' cell magic. This function is not meant to be called directly. Args: line (str): "%%bigquery" followed by arguments as required query (str): SQL query to run Returns: pandas.DataFrame: the query results. """
args = magic_arguments.parse_argstring(_cell_magic, line)

params = []
if args.params is not None:
    try:
        params = _helpers.to_query_parameters(
            ast.literal_eval("".join(args.params))
        )
    except Exception:
        raise SyntaxError(
            "--params is not a correctly formatted JSON string or a JSON "
            "serializable dictionary"
        )

project = args.project or context.project
client = bigquery.Client(project=project, credentials=context.credentials)
bqstorage_client = _make_bqstorage_client(
    args.use_bqstorage_api or context.use_bqstorage_api, context.credentials
)

job_config = bigquery.job.QueryJobConfig()
job_config.query_parameters = params
job_config.use_legacy_sql = args.use_legacy_sql
query_job = _run_query(client, query, job_config)

if not args.verbose:
    display.clear_output()

result = query_job.to_dataframe(bqstorage_client=bqstorage_client)
if args.destination_var:
    IPython.get_ipython().push({args.destination_var: result})
else:
    return result
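For orientation, a hedged sketch of how this cell magic is typically invoked from a notebook; the dataset, destination variable, and parameter values are illustrative, and the flags correspond to the ``args`` fields parsed above:

# %load_ext google.cloud.bigquery
#
# %%bigquery my_df --params {"limit": 10}
# SELECT name, SUM(number) AS total
# FROM `bigquery-public-data.usa_names.usa_1910_2013`
# GROUP BY name
# ORDER BY total DESC
# LIMIT @limit
#
# After the cell runs, `my_df` holds the results as a pandas.DataFrame;
# because a destination variable was given, nothing is returned inline.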
<SYSTEM_TASK:> Return a fully-qualified database_root string. <END_TASK> <USER_TASK:> Description: def database_root_path(cls, project, database): """Return a fully-qualified database_root string."""
return google.api_core.path_template.expand(
    "projects/{project}/databases/{database}",
    project=project,
    database=database,
)
<SYSTEM_TASK:> Return a fully-qualified any_path string. <END_TASK> <USER_TASK:> Description: def any_path_path(cls, project, database, document, any_path): """Return a fully-qualified any_path string."""
return google.api_core.path_template.expand(
    "projects/{project}/databases/{database}/documents/{document}/{any_path=**}",
    project=project,
    database=database,
    document=document,
    any_path=any_path,
)
<SYSTEM_TASK:> Gets a single document. <END_TASK> <USER_TASK:> Description: def get_document( self, name, mask=None, transaction=None, read_time=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Gets a single document. Example: >>> from google.cloud import firestore_v1beta1 >>> >>> client = firestore_v1beta1.FirestoreClient() >>> >>> name = client.any_path_path('[PROJECT]', '[DATABASE]', '[DOCUMENT]', '[ANY_PATH]') >>> >>> response = client.get_document(name) Args: name (str): The resource name of the Document to get. In the format: ``projects/{project_id}/databases/{database_id}/documents/{document_path}``. mask (Union[dict, ~google.cloud.firestore_v1beta1.types.DocumentMask]): The fields to return. If not set, returns all fields. If the document has a field that is not present in this mask, that field will not be returned in the response. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.DocumentMask` transaction (bytes): Reads the document in a transaction. read_time (Union[dict, ~google.cloud.firestore_v1beta1.types.Timestamp]): Reads the version of the document at the given time. This may not be older than 60 seconds. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.Timestamp` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.firestore_v1beta1.types.Document` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "get_document" not in self._inner_api_calls:
    self._inner_api_calls[
        "get_document"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.get_document,
        default_retry=self._method_configs["GetDocument"].retry,
        default_timeout=self._method_configs["GetDocument"].timeout,
        client_info=self._client_info,
    )

# Sanity check: We have some fields which are mutually exclusive;
# raise ValueError if more than one is sent.
google.api_core.protobuf_helpers.check_oneof(
    transaction=transaction, read_time=read_time
)

request = firestore_pb2.GetDocumentRequest(
    name=name, mask=mask, transaction=transaction, read_time=read_time
)
if metadata is None:
    metadata = []
metadata = list(metadata)
try:
    routing_header = [("name", name)]
except AttributeError:
    pass
else:
    routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
        routing_header
    )
    metadata.append(routing_metadata)

return self._inner_api_calls["get_document"](
    request, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Lists documents. <END_TASK> <USER_TASK:> Description: def list_documents( self, parent, collection_id, page_size=None, order_by=None, mask=None, transaction=None, read_time=None, show_missing=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Lists documents. Example: >>> from google.cloud import firestore_v1beta1 >>> >>> client = firestore_v1beta1.FirestoreClient() >>> >>> parent = client.any_path_path('[PROJECT]', '[DATABASE]', '[DOCUMENT]', '[ANY_PATH]') >>> >>> # TODO: Initialize `collection_id`: >>> collection_id = '' >>> >>> # Iterate over all results >>> for element in client.list_documents(parent, collection_id): ... # process element ... pass >>> >>> >>> # Alternatively: >>> >>> # Iterate over results one page at a time >>> for page in client.list_documents(parent, collection_id).pages: ... for element in page: ... # process element ... pass Args: parent (str): The parent resource name. In the format: ``projects/{project_id}/databases/{database_id}/documents`` or ``projects/{project_id}/databases/{database_id}/documents/{document_path}``. For example: ``projects/my-project/databases/my-database/documents`` or ``projects/my-project/databases/my-database/documents/chatrooms/my-chatroom`` collection_id (str): The collection ID, relative to ``parent``, to list. For example: ``chatrooms`` or ``messages``. page_size (int): The maximum number of resources contained in the underlying API response. If page streaming is performed per- resource, this parameter does not affect the return value. If page streaming is performed per-page, this determines the maximum number of resources in a page. order_by (str): The order to sort results by. For example: ``priority desc, name``. mask (Union[dict, ~google.cloud.firestore_v1beta1.types.DocumentMask]): The fields to return. If not set, returns all fields. If a document has a field that is not present in this mask, that field will not be returned in the response. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.DocumentMask` transaction (bytes): Reads documents in a transaction. read_time (Union[dict, ~google.cloud.firestore_v1beta1.types.Timestamp]): Reads documents as they were at the given time. This may not be older than 60 seconds. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.Timestamp` show_missing (bool): If the list should show missing documents. A missing document is a document that does not exist but has sub-documents. These documents will be returned with a key but will not have fields, ``Document.create_time``, or ``Document.update_time`` set. Requests with ``show_missing`` may not specify ``where`` or ``order_by``. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.gax.PageIterator` instance. By default, this is an iterable of :class:`~google.cloud.firestore_v1beta1.types.Document` instances. This object can also be configured to iterate over the pages of the response through the `options` parameter. 
Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "list_documents" not in self._inner_api_calls:
    self._inner_api_calls[
        "list_documents"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.list_documents,
        default_retry=self._method_configs["ListDocuments"].retry,
        default_timeout=self._method_configs["ListDocuments"].timeout,
        client_info=self._client_info,
    )

# Sanity check: We have some fields which are mutually exclusive;
# raise ValueError if more than one is sent.
google.api_core.protobuf_helpers.check_oneof(
    transaction=transaction, read_time=read_time
)

request = firestore_pb2.ListDocumentsRequest(
    parent=parent,
    collection_id=collection_id,
    page_size=page_size,
    order_by=order_by,
    mask=mask,
    transaction=transaction,
    read_time=read_time,
    show_missing=show_missing,
)
iterator = google.api_core.page_iterator.GRPCIterator(
    client=None,
    method=functools.partial(
        self._inner_api_calls["list_documents"],
        retry=retry,
        timeout=timeout,
        metadata=metadata,
    ),
    request=request,
    items_field="documents",
    request_token_field="page_token",
    response_token_field="next_page_token",
)
return iterator
<SYSTEM_TASK:> Updates or inserts a document. <END_TASK> <USER_TASK:> Description: def update_document( self, document, update_mask, mask=None, current_document=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Updates or inserts a document. Example: >>> from google.cloud import firestore_v1beta1 >>> >>> client = firestore_v1beta1.FirestoreClient() >>> >>> # TODO: Initialize `document`: >>> document = {} >>> >>> # TODO: Initialize `update_mask`: >>> update_mask = {} >>> >>> response = client.update_document(document, update_mask) Args: document (Union[dict, ~google.cloud.firestore_v1beta1.types.Document]): The updated document. Creates the document if it does not already exist. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.Document` update_mask (Union[dict, ~google.cloud.firestore_v1beta1.types.DocumentMask]): The fields to update. None of the field paths in the mask may contain a reserved name. If the document exists on the server and has fields not referenced in the mask, they are left unchanged. Fields referenced in the mask, but not present in the input document, are deleted from the document on the server. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.DocumentMask` mask (Union[dict, ~google.cloud.firestore_v1beta1.types.DocumentMask]): The fields to return. If not set, returns all fields. If the document has a field that is not present in this mask, that field will not be returned in the response. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.DocumentMask` current_document (Union[dict, ~google.cloud.firestore_v1beta1.types.Precondition]): An optional precondition on the document. The request will fail if this is set and not met by the target document. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.Precondition` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.firestore_v1beta1.types.Document` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "update_document" not in self._inner_api_calls:
    self._inner_api_calls[
        "update_document"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.update_document,
        default_retry=self._method_configs["UpdateDocument"].retry,
        default_timeout=self._method_configs["UpdateDocument"].timeout,
        client_info=self._client_info,
    )

request = firestore_pb2.UpdateDocumentRequest(
    document=document,
    update_mask=update_mask,
    mask=mask,
    current_document=current_document,
)
if metadata is None:
    metadata = []
metadata = list(metadata)
try:
    routing_header = [("document.name", document.name)]
except AttributeError:
    pass
else:
    routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
        routing_header
    )
    metadata.append(routing_metadata)

return self._inner_api_calls["update_document"](
    request, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Gets multiple documents. <END_TASK> <USER_TASK:> Description: def batch_get_documents( self, database, documents, mask=None, transaction=None, new_transaction=None, read_time=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Gets multiple documents. Documents returned by this method are not guaranteed to be returned in the same order that they were requested. Example: >>> from google.cloud import firestore_v1beta1 >>> >>> client = firestore_v1beta1.FirestoreClient() >>> >>> database = client.database_root_path('[PROJECT]', '[DATABASE]') >>> >>> # TODO: Initialize `documents`: >>> documents = [] >>> >>> for element in client.batch_get_documents(database, documents): ... # process element ... pass Args: database (str): The database name. In the format: ``projects/{project_id}/databases/{database_id}``. documents (list[str]): The names of the documents to retrieve. In the format: ``projects/{project_id}/databases/{database_id}/documents/{document_path}``. The request will fail if any of the document is not a child resource of the given ``database``. Duplicate names will be elided. mask (Union[dict, ~google.cloud.firestore_v1beta1.types.DocumentMask]): The fields to return. If not set, returns all fields. If a document has a field that is not present in this mask, that field will not be returned in the response. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.DocumentMask` transaction (bytes): Reads documents in a transaction. new_transaction (Union[dict, ~google.cloud.firestore_v1beta1.types.TransactionOptions]): Starts a new transaction and reads the documents. Defaults to a read-only transaction. The new transaction ID will be returned as the first response in the stream. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.TransactionOptions` read_time (Union[dict, ~google.cloud.firestore_v1beta1.types.Timestamp]): Reads documents as they were at the given time. This may not be older than 60 seconds. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.Timestamp` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: Iterable[~google.cloud.firestore_v1beta1.types.BatchGetDocumentsResponse]. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "batch_get_documents" not in self._inner_api_calls:
    self._inner_api_calls[
        "batch_get_documents"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.batch_get_documents,
        default_retry=self._method_configs["BatchGetDocuments"].retry,
        default_timeout=self._method_configs["BatchGetDocuments"].timeout,
        client_info=self._client_info,
    )

# Sanity check: We have some fields which are mutually exclusive;
# raise ValueError if more than one is sent.
google.api_core.protobuf_helpers.check_oneof(
    transaction=transaction,
    new_transaction=new_transaction,
    read_time=read_time,
)

request = firestore_pb2.BatchGetDocumentsRequest(
    database=database,
    documents=documents,
    mask=mask,
    transaction=transaction,
    new_transaction=new_transaction,
    read_time=read_time,
)
if metadata is None:
    metadata = []
metadata = list(metadata)
try:
    routing_header = [("database", database)]
except AttributeError:
    pass
else:
    routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
        routing_header
    )
    metadata.append(routing_metadata)

return self._inner_api_calls["batch_get_documents"](
    request, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Starts a new transaction. <END_TASK> <USER_TASK:> Description: def begin_transaction( self, database, options_=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Starts a new transaction. Example: >>> from google.cloud import firestore_v1beta1 >>> >>> client = firestore_v1beta1.FirestoreClient() >>> >>> database = client.database_root_path('[PROJECT]', '[DATABASE]') >>> >>> response = client.begin_transaction(database) Args: database (str): The database name. In the format: ``projects/{project_id}/databases/{database_id}``. options_ (Union[dict, ~google.cloud.firestore_v1beta1.types.TransactionOptions]): The options for the transaction. Defaults to a read-write transaction. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.TransactionOptions` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.firestore_v1beta1.types.BeginTransactionResponse` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "begin_transaction" not in self._inner_api_calls:
    self._inner_api_calls[
        "begin_transaction"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.begin_transaction,
        default_retry=self._method_configs["BeginTransaction"].retry,
        default_timeout=self._method_configs["BeginTransaction"].timeout,
        client_info=self._client_info,
    )

request = firestore_pb2.BeginTransactionRequest(
    database=database, options=options_
)
if metadata is None:
    metadata = []
metadata = list(metadata)
try:
    routing_header = [("database", database)]
except AttributeError:
    pass
else:
    routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
        routing_header
    )
    metadata.append(routing_metadata)

return self._inner_api_calls["begin_transaction"](
    request, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Runs a query. <END_TASK> <USER_TASK:> Description: def run_query( self, parent, structured_query=None, transaction=None, new_transaction=None, read_time=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Runs a query. Example: >>> from google.cloud import firestore_v1beta1 >>> >>> client = firestore_v1beta1.FirestoreClient() >>> >>> parent = client.any_path_path('[PROJECT]', '[DATABASE]', '[DOCUMENT]', '[ANY_PATH]') >>> >>> for element in client.run_query(parent): ... # process element ... pass Args: parent (str): The parent resource name. In the format: ``projects/{project_id}/databases/{database_id}/documents`` or ``projects/{project_id}/databases/{database_id}/documents/{document_path}``. For example: ``projects/my-project/databases/my-database/documents`` or ``projects/my-project/databases/my-database/documents/chatrooms/my-chatroom`` structured_query (Union[dict, ~google.cloud.firestore_v1beta1.types.StructuredQuery]): A structured query. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.StructuredQuery` transaction (bytes): Reads documents in a transaction. new_transaction (Union[dict, ~google.cloud.firestore_v1beta1.types.TransactionOptions]): Starts a new transaction and reads the documents. Defaults to a read-only transaction. The new transaction ID will be returned as the first response in the stream. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.TransactionOptions` read_time (Union[dict, ~google.cloud.firestore_v1beta1.types.Timestamp]): Reads documents as they were at the given time. This may not be older than 60 seconds. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.Timestamp` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: Iterable[~google.cloud.firestore_v1beta1.types.RunQueryResponse]. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "run_query" not in self._inner_api_calls:
    self._inner_api_calls[
        "run_query"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.run_query,
        default_retry=self._method_configs["RunQuery"].retry,
        default_timeout=self._method_configs["RunQuery"].timeout,
        client_info=self._client_info,
    )

# Sanity check: We have some fields which are mutually exclusive;
# raise ValueError if more than one is sent.
google.api_core.protobuf_helpers.check_oneof(structured_query=structured_query)

# Sanity check: We have some fields which are mutually exclusive;
# raise ValueError if more than one is sent.
google.api_core.protobuf_helpers.check_oneof(
    transaction=transaction,
    new_transaction=new_transaction,
    read_time=read_time,
)

request = firestore_pb2.RunQueryRequest(
    parent=parent,
    structured_query=structured_query,
    transaction=transaction,
    new_transaction=new_transaction,
    read_time=read_time,
)
if metadata is None:
    metadata = []
metadata = list(metadata)
try:
    routing_header = [("parent", parent)]
except AttributeError:
    pass
else:
    routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
        routing_header
    )
    metadata.append(routing_metadata)

return self._inner_api_calls["run_query"](
    request, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Streams batches of document updates and deletes, in order. <END_TASK> <USER_TASK:> Description: def write( self, requests, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Streams batches of document updates and deletes, in order. EXPERIMENTAL: This method interface might change in the future. Example: >>> from google.cloud import firestore_v1beta1 >>> >>> client = firestore_v1beta1.FirestoreClient() >>> >>> database = client.database_root_path('[PROJECT]', '[DATABASE]') >>> request = {'database': database} >>> >>> requests = [request] >>> for element in client.write(requests): ... # process element ... pass Args: requests (iterator[dict|google.cloud.firestore_v1beta1.proto.firestore_pb2.WriteRequest]): The input objects. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.firestore_v1beta1.types.WriteRequest` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: Iterable[~google.cloud.firestore_v1beta1.types.WriteResponse]. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "write" not in self._inner_api_calls:
    self._inner_api_calls[
        "write"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.write,
        default_retry=self._method_configs["Write"].retry,
        default_timeout=self._method_configs["Write"].timeout,
        client_info=self._client_info,
    )

return self._inner_api_calls["write"](
    requests, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Return a fully-qualified log string. <END_TASK> <USER_TASK:> Description: def log_path(cls, project, log): """Return a fully-qualified log string."""
return google.api_core.path_template.expand(
    "projects/{project}/logs/{log}", project=project, log=log
)
<SYSTEM_TASK:> Deletes all the log entries in a log. <END_TASK> <USER_TASK:> Description: def delete_log( self, log_name, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Deletes all the log entries in a log. The log reappears if it receives new entries. Log entries written shortly before the delete operation might not be deleted. Example: >>> from google.cloud import logging_v2 >>> >>> client = logging_v2.LoggingServiceV2Client() >>> >>> log_name = client.log_path('[PROJECT]', '[LOG]') >>> >>> client.delete_log(log_name) Args: log_name (str): Required. The resource name of the log to delete: :: "projects/[PROJECT_ID]/logs/[LOG_ID]" "organizations/[ORGANIZATION_ID]/logs/[LOG_ID]" "billingAccounts/[BILLING_ACCOUNT_ID]/logs/[LOG_ID]" "folders/[FOLDER_ID]/logs/[LOG_ID]" ``[LOG_ID]`` must be URL-encoded. For example, ``"projects/my-project-id/logs/syslog"``, ``"organizations/1234567890/logs/cloudresourcemanager.googleapis.com%2Factivity"``. For more information about log names, see ``LogEntry``. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic.
if "delete_log" not in self._inner_api_calls:
    self._inner_api_calls[
        "delete_log"
    ] = google.api_core.gapic_v1.method.wrap_method(
        self.transport.delete_log,
        default_retry=self._method_configs["DeleteLog"].retry,
        default_timeout=self._method_configs["DeleteLog"].timeout,
        client_info=self._client_info,
    )

request = logging_pb2.DeleteLogRequest(log_name=log_name)
if metadata is None:
    metadata = []
metadata = list(metadata)
try:
    routing_header = [("log_name", log_name)]
except AttributeError:
    pass
else:
    routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
        routing_header
    )
    metadata.append(routing_metadata)

self._inner_api_calls["delete_log"](
    request, retry=retry, timeout=timeout, metadata=metadata
)
<SYSTEM_TASK:> Expand a matched variable with its value. <END_TASK> <USER_TASK:> Description: def _expand_variable_match(positional_vars, named_vars, match): """Expand a matched variable with its value. Args: positional_vars (list): A list of positional variables. This list will be modified. named_vars (dict): A dictionary of named variables. match (re.Match): A regular expression match. Returns: str: The expanded variable to replace the match. Raises: ValueError: If a positional or named variable is required by the template but not specified or if an unexpected template expression is encountered. """
positional = match.group("positional")
name = match.group("name")
if name is not None:
    try:
        return six.text_type(named_vars[name])
    except KeyError:
        raise ValueError(
            "Named variable '{}' not specified and needed by template "
            "`{}` at position {}".format(name, match.string, match.start())
        )
elif positional is not None:
    try:
        return six.text_type(positional_vars.pop(0))
    except IndexError:
        raise ValueError(
            "Positional variable not specified and needed by template "
            "`{}` at position {}".format(match.string, match.start())
        )
else:
    raise ValueError("Unknown template expression {}".format(match.group(0)))
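The error branches above are easiest to exercise through the public ``expand`` helper; a small illustrative sketch:

from google.api_core import path_template

path_template.expand("users/*/messages/*", "me", "123")
# 'users/me/messages/123'

try:
    path_template.expand("/v1/{name=shelves/*/books/*}")  # named variable missing
except ValueError as exc:
    print(exc)  # Named variable 'name' not specified and needed by template ...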
<SYSTEM_TASK:> Expand a path template with the given variables. <END_TASK> <USER_TASK:> Description: def expand(tmpl, *args, **kwargs): """Expand a path template with the given variables. .. code-block:: python >>> expand('users/*/messages/*', 'me', '123') users/me/messages/123 >>> expand('/v1/{name=shelves/*/books/*}', name='shelves/1/books/3') /v1/shelves/1/books/3 Args: tmpl (str): The path template. args: The positional variables for the path. kwargs: The named variables for the path. Returns: str: The expanded path. Raises: ValueError: If a positional or named variable is required by the template but not specified or if an unexpected template expression is encountered. """
replacer = functools.partial(_expand_variable_match, list(args), kwargs)
return _VARIABLE_RE.sub(replacer, tmpl)
<SYSTEM_TASK:> Replace a variable match with a pattern that can be used to validate it. <END_TASK> <USER_TASK:> Description: def _replace_variable_with_pattern(match): """Replace a variable match with a pattern that can be used to validate it. Args: match (re.Match): A regular expression match Returns: str: A regular expression pattern that can be used to validate the variable in an expanded path. Raises: ValueError: If an unexpected template expression is encountered. """
positional = match.group("positional") name = match.group("name") template = match.group("template") if name is not None: if not template: return _SINGLE_SEGMENT_PATTERN.format(name) elif template == "**": return _MULTI_SEGMENT_PATTERN.format(name) else: return _generate_pattern_for_template(template) elif positional == "*": return _SINGLE_SEGMENT_PATTERN elif positional == "**": return _MULTI_SEGMENT_PATTERN else: raise ValueError("Unknown template expression {}".format(match.group(0)))
<SYSTEM_TASK:> Validate a path against the path template. <END_TASK> <USER_TASK:> Description: def validate(tmpl, path): """Validate a path against the path template. .. code-block:: python >>> validate('users/*/messages/*', 'users/me/messages/123') True >>> validate('users/*/messages/*', 'users/me/drafts/123') False >>> validate('/v1/{name=shelves/*/books/*}', '/v1/shelves/1/books/3') True >>> validate('/v1/{name=shelves/*/books/*}', '/v1/shelves/1/tapes/3') False Args: tmpl (str): The path template. path (str): The expanded path. Returns: bool: True if the path matches. """
pattern = _generate_pattern_for_template(tmpl) + "$" return re.match(pattern, path) is not None
<SYSTEM_TASK:> AppProfile name used in requests. <END_TASK> <USER_TASK:> Description: def name(self): """AppProfile name used in requests. .. note:: This property will not change if ``app_profile_id`` does not, but the return value is not cached. The AppProfile name is of the form ``"projects/../instances/../appProfiles/{app_profile_id}"`` :rtype: str :returns: The AppProfile name. """
return self.instance_admin_client.app_profile_path( self._instance._client.project, self._instance.instance_id, self.app_profile_id, )
<SYSTEM_TASK:> Creates an instance app_profile from a protobuf. <END_TASK> <USER_TASK:> Description: def from_pb(cls, app_profile_pb, instance): """Creates an instance app_profile from a protobuf. :type app_profile_pb: :class:`instance_pb2.AppProfile` :param app_profile_pb: An AppProfile protobuf object. :type instance: :class:`google.cloud.bigtable.instance.Instance` :param instance: The instance that owns the AppProfile. :rtype: :class:`AppProfile` :returns: The AppProfile parsed from the protobuf response. :raises: :class:`ValueError <exceptions.ValueError>` if the AppProfile name does not match ``projects/{project}/instances/{instance_id}/appProfiles/{app_profile_id}``, if the parsed instance ID does not match the instance ID on the client, or if the parsed project ID does not match the project ID on the client. """
match_app_profile_name = _APP_PROFILE_NAME_RE.match(app_profile_pb.name) if match_app_profile_name is None: raise ValueError( "AppProfile protobuf name was not in the " "expected format.", app_profile_pb.name, ) if match_app_profile_name.group("instance") != instance.instance_id: raise ValueError( "Instance ID on app_profile does not match the " "instance ID on the client" ) if match_app_profile_name.group("project") != instance._client.project: raise ValueError( "Project ID on app_profile does not match the " "project ID on the client" ) app_profile_id = match_app_profile_name.group("app_profile_id") result = cls(app_profile_id, instance) result._update_from_pb(app_profile_pb) return result
<SYSTEM_TASK:> Reload the metadata for this app profile. <END_TASK> <USER_TASK:> Description: def reload(self): """Reload the metadata for this app profile."""
app_profile_pb = self.instance_admin_client.get_app_profile(self.name) # NOTE: _update_from_pb does not check that the project and # app_profile ID on the response match the request. self._update_from_pb(app_profile_pb)
<SYSTEM_TASK:> Check whether the AppProfile already exists. <END_TASK> <USER_TASK:> Description: def exists(self): """Check whether the AppProfile already exists. :rtype: bool :returns: True if the AppProfile exists, else False. """
try: self.instance_admin_client.get_app_profile(self.name) return True # NOTE: There could be other exceptions that are returned to the user. except NotFound: return False
<SYSTEM_TASK:> Create this AppProfile. <END_TASK> <USER_TASK:> Description: def create(self, ignore_warnings=None): """Create this AppProfile. .. note:: Uses the ``instance`` and ``app_profile_id`` on the current :class:`AppProfile` in addition to the ``routing_policy_type``, ``description``, ``cluster_id`` and ``allow_transactional_writes``. To change them before creating, reset the values via .. code:: python app_profile.app_profile_id = 'i-changed-my-mind' app_profile.routing_policy_type = ( google.cloud.bigtable.enums.RoutingPolicyType.SINGLE ) app_profile.description = 'new-description' app_profile.cluster_id = 'other-cluster-id' app_profile.allow_transactional_writes = True before calling :meth:`create`. :type ignore_warnings: bool :param ignore_warnings: (Optional) If true, ignore safety checks when creating the AppProfile. """
return self.from_pb( self.instance_admin_client.create_app_profile( parent=self._instance.name, app_profile_id=self.app_profile_id, app_profile=self._to_pb(), ignore_warnings=ignore_warnings, ), self._instance, )
<SYSTEM_TASK:> Update this app_profile. <END_TASK> <USER_TASK:> Description: def update(self, ignore_warnings=None): """Update this app_profile. .. note:: Update any or all of the following values: ``routing_policy_type``, ``description``, ``cluster_id``, ``allow_transactional_writes``. """
update_mask_pb = field_mask_pb2.FieldMask() if self.description is not None: update_mask_pb.paths.append("description") if self.routing_policy_type == RoutingPolicyType.ANY: update_mask_pb.paths.append("multi_cluster_routing_use_any") else: update_mask_pb.paths.append("single_cluster_routing") return self.instance_admin_client.update_app_profile( app_profile=self._to_pb(), update_mask=update_mask_pb, ignore_warnings=ignore_warnings, )
<SYSTEM_TASK:> Creates a sink that exports specified log entries to a destination. The <END_TASK> <USER_TASK:> Description: def create_sink( self, parent, sink, unique_writer_identity=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Creates a sink that exports specified log entries to a destination. The export of newly-ingested log entries begins immediately, unless the sink's ``writer_identity`` is not permitted to write to the destination. A sink can export log entries only from the resource owning the sink. Example: >>> from google.cloud import logging_v2 >>> >>> client = logging_v2.ConfigServiceV2Client() >>> >>> parent = client.project_path('[PROJECT]') >>> >>> # TODO: Initialize `sink`: >>> sink = {} >>> >>> response = client.create_sink(parent, sink) Args: parent (str): Required. The resource in which to create the sink: :: "projects/[PROJECT_ID]" "organizations/[ORGANIZATION_ID]" "billingAccounts/[BILLING_ACCOUNT_ID]" "folders/[FOLDER_ID]" Examples: ``"projects/my-logging-project"``, ``"organizations/123456789"``. sink (Union[dict, ~google.cloud.logging_v2.types.LogSink]): Required. The new sink, whose ``name`` parameter is a sink identifier that is not already in use. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.logging_v2.types.LogSink` unique_writer_identity (bool): Optional. Determines the kind of IAM identity returned as ``writer_identity`` in the new sink. If this value is omitted or set to false, and if the sink's parent is a project, then the value returned as ``writer_identity`` is the same group or service account used by Logging before the addition of writer identities to this API. The sink's destination must be in the same project as the sink itself. If this field is set to true, or if the sink is owned by a non-project resource such as an organization, then the value of ``writer_identity`` will be a unique service account used only for exports from the new sink. For more information, see ``writer_identity`` in ``LogSink``. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.logging_v2.types.LogSink` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic. if "create_sink" not in self._inner_api_calls: self._inner_api_calls[ "create_sink" ] = google.api_core.gapic_v1.method.wrap_method( self.transport.create_sink, default_retry=self._method_configs["CreateSink"].retry, default_timeout=self._method_configs["CreateSink"].timeout, client_info=self._client_info, ) request = logging_config_pb2.CreateSinkRequest( parent=parent, sink=sink, unique_writer_identity=unique_writer_identity ) if metadata is None: metadata = [] metadata = list(metadata) try: routing_header = [("parent", parent)] except AttributeError: pass else: routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( routing_header ) metadata.append(routing_metadata) return self._inner_api_calls["create_sink"]( request, retry=retry, timeout=timeout, metadata=metadata )
<SYSTEM_TASK:> Creates a new exclusion in a specified parent resource. <END_TASK> <USER_TASK:> Description: def create_exclusion( self, parent, exclusion, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Creates a new exclusion in a specified parent resource. Only log entries belonging to that resource can be excluded. You can have up to 10 exclusions in a resource. Example: >>> from google.cloud import logging_v2 >>> >>> client = logging_v2.ConfigServiceV2Client() >>> >>> parent = client.project_path('[PROJECT]') >>> >>> # TODO: Initialize `exclusion`: >>> exclusion = {} >>> >>> response = client.create_exclusion(parent, exclusion) Args: parent (str): Required. The parent resource in which to create the exclusion: :: "projects/[PROJECT_ID]" "organizations/[ORGANIZATION_ID]" "billingAccounts/[BILLING_ACCOUNT_ID]" "folders/[FOLDER_ID]" Examples: ``"projects/my-logging-project"``, ``"organizations/123456789"``. exclusion (Union[dict, ~google.cloud.logging_v2.types.LogExclusion]): Required. The new exclusion, whose ``name`` parameter is an exclusion name that is not already used in the parent resource. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.logging_v2.types.LogExclusion` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.logging_v2.types.LogExclusion` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic. if "create_exclusion" not in self._inner_api_calls: self._inner_api_calls[ "create_exclusion" ] = google.api_core.gapic_v1.method.wrap_method( self.transport.create_exclusion, default_retry=self._method_configs["CreateExclusion"].retry, default_timeout=self._method_configs["CreateExclusion"].timeout, client_info=self._client_info, ) request = logging_config_pb2.CreateExclusionRequest( parent=parent, exclusion=exclusion ) if metadata is None: metadata = [] metadata = list(metadata) try: routing_header = [("parent", parent)] except AttributeError: pass else: routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( routing_header ) metadata.append(routing_metadata) return self._inner_api_calls["create_exclusion"]( request, retry=retry, timeout=timeout, metadata=metadata )
<SYSTEM_TASK:> Convert a Value protobuf to cell data. <END_TASK> <USER_TASK:> Description: def _parse_value_pb(value_pb, field_type): """Convert a Value protobuf to cell data. :type value_pb: :class:`~google.protobuf.struct_pb2.Value` :param value_pb: protobuf to convert :type field_type: :class:`~google.cloud.spanner_v1.proto.type_pb2.Type` :param field_type: type code for the value :rtype: varies on field_type :returns: value extracted from value_pb :raises ValueError: if unknown type is passed """
if value_pb.HasField("null_value"): return None if field_type.code == type_pb2.STRING: result = value_pb.string_value elif field_type.code == type_pb2.BYTES: result = value_pb.string_value.encode("utf8") elif field_type.code == type_pb2.BOOL: result = value_pb.bool_value elif field_type.code == type_pb2.INT64: result = int(value_pb.string_value) elif field_type.code == type_pb2.FLOAT64: if value_pb.HasField("string_value"): result = float(value_pb.string_value) else: result = value_pb.number_value elif field_type.code == type_pb2.DATE: result = _date_from_iso8601_date(value_pb.string_value) elif field_type.code == type_pb2.TIMESTAMP: DatetimeWithNanoseconds = datetime_helpers.DatetimeWithNanoseconds result = DatetimeWithNanoseconds.from_rfc3339(value_pb.string_value) elif field_type.code == type_pb2.ARRAY: result = [ _parse_value_pb(item_pb, field_type.array_element_type) for item_pb in value_pb.list_value.values ] elif field_type.code == type_pb2.STRUCT: result = [ _parse_value_pb(item_pb, field_type.struct_type.fields[i].type) for (i, item_pb) in enumerate(value_pb.list_value.values) ] else: raise ValueError("Unknown type: %s" % (field_type,)) return result
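Since the conversion rules above are easy to misread (INT64 arrives on the wire as a decimal string, for instance), here is a minimal, hedged sketch of the helper's behavior; it assumes ``_parse_value_pb`` is importable from ``google.cloud.spanner_v1._helpers``, where code like this lives in the client library.

from google.protobuf import struct_pb2
from google.cloud.spanner_v1.proto import type_pb2
from google.cloud.spanner_v1._helpers import _parse_value_pb  # assumed location

# INT64 cells are transmitted as decimal strings and parsed back to int.
int_value = struct_pb2.Value(string_value="42")
int_type = type_pb2.Type(code=type_pb2.INT64)
assert _parse_value_pb(int_value, int_type) == 42

# BOOL cells use the protobuf bool_value field directly.
bool_value = struct_pb2.Value(bool_value=True)
bool_type = type_pb2.Type(code=type_pb2.BOOL)
assert _parse_value_pb(bool_value, bool_type) is True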
<SYSTEM_TASK:> Convert a list of ListValue protobufs into a list of list of cell data. <END_TASK> <USER_TASK:> Description: def _parse_list_value_pbs(rows, row_type): """Convert a list of ListValue protobufs into a list of list of cell data. :type rows: list of :class:`~google.protobuf.struct_pb2.ListValue` :param rows: row data returned from a read/query :type row_type: :class:`~google.cloud.spanner_v1.proto.type_pb2.StructType` :param row_type: row schema specification :rtype: list of list of cell data :returns: data for the rows, coerced into appropriate types """
result = [] for row in rows: row_data = [] for value_pb, field in zip(row.values, row_type.fields): row_data.append(_parse_value_pb(value_pb, field.type)) result.append(row_data) return result
<SYSTEM_TASK:> Exports assets with time and resource types to a given Cloud Storage <END_TASK> <USER_TASK:> Description: def export_assets( self, parent, output_config, read_time=None, asset_types=None, content_type=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Exports assets with time and resource types to a given Cloud Storage location. The output format is newline-delimited JSON. This API implements the ``google.longrunning.Operation`` API allowing you to keep track of the export. Example: >>> from google.cloud import asset_v1 >>> >>> client = asset_v1.AssetServiceClient() >>> >>> # TODO: Initialize `parent`: >>> parent = '' >>> >>> # TODO: Initialize `output_config`: >>> output_config = {} >>> >>> response = client.export_assets(parent, output_config) >>> >>> def callback(operation_future): ... # Handle result. ... result = operation_future.result() >>> >>> response.add_done_callback(callback) >>> >>> # Handle metadata. >>> metadata = response.metadata() Args: parent (str): Required. The relative name of the root asset. This can only be an organization number (such as "organizations/123"), a project ID (such as "projects/my-project-id"), or a project number (such as "projects/12345"), or a folder number (such as "folders/123"). output_config (Union[dict, ~google.cloud.asset_v1.types.OutputConfig]): Required. Output configuration indicating where the results will be output to. All results will be in newline delimited JSON format. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.asset_v1.types.OutputConfig` read_time (Union[dict, ~google.cloud.asset_v1.types.Timestamp]): Timestamp to take an asset snapshot. This can only be set to a timestamp between 2018-10-02 UTC (inclusive) and the current time. If not specified, the current time will be used. Due to delays in resource data collection and indexing, there is a volatile window during which running the same query may get different results. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.asset_v1.types.Timestamp` asset_types (list[str]): A list of asset types of which to take a snapshot for. For example: "compute.googleapis.com/Disk". If specified, only matching assets will be returned. See `Introduction to Cloud Asset Inventory <https://cloud.google.com/resource-manager/docs/cloud-asset-inventory/overview>`__ for all supported asset types. content_type (~google.cloud.asset_v1.types.ContentType): Asset content type. If not specified, no content but the asset name will be returned. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.asset_v1.types._OperationFuture` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic. if "export_assets" not in self._inner_api_calls: self._inner_api_calls[ "export_assets" ] = google.api_core.gapic_v1.method.wrap_method( self.transport.export_assets, default_retry=self._method_configs["ExportAssets"].retry, default_timeout=self._method_configs["ExportAssets"].timeout, client_info=self._client_info, ) request = asset_service_pb2.ExportAssetsRequest( parent=parent, output_config=output_config, read_time=read_time, asset_types=asset_types, content_type=content_type, ) operation = self._inner_api_calls["export_assets"]( request, retry=retry, timeout=timeout, metadata=metadata ) return google.api_core.operation.from_gapic( operation, self.transport._operations_client, asset_service_pb2.ExportAssetsResponse, metadata_type=asset_service_pb2.ExportAssetsRequest, )
<SYSTEM_TASK:> Batch gets the update history of assets that overlap a time window. For <END_TASK> <USER_TASK:> Description: def batch_get_assets_history( self, parent, content_type, read_time_window, asset_names=None, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Batch gets the update history of assets that overlap a time window. For RESOURCE content, this API outputs history with asset in both non-delete or deleted status. For IAM\_POLICY content, this API outputs history when the asset and its attached IAM POLICY both exist. This can create gaps in the output history. If a specified asset does not exist, this API returns an INVALID\_ARGUMENT error. Example: >>> from google.cloud import asset_v1 >>> from google.cloud.asset_v1 import enums >>> >>> client = asset_v1.AssetServiceClient() >>> >>> # TODO: Initialize `parent`: >>> parent = '' >>> >>> # TODO: Initialize `content_type`: >>> content_type = enums.ContentType.CONTENT_TYPE_UNSPECIFIED >>> >>> # TODO: Initialize `read_time_window`: >>> read_time_window = {} >>> >>> response = client.batch_get_assets_history(parent, content_type, read_time_window) Args: parent (str): Required. The relative name of the root asset. It can only be an organization number (such as "organizations/123"), a project ID (such as "projects/my-project-id")", or a project number (such as "projects/12345"). content_type (~google.cloud.asset_v1.types.ContentType): Required. The content type. read_time_window (Union[dict, ~google.cloud.asset_v1.types.TimeWindow]): Optional. The time window for the asset history. Both start\_time and end\_time are optional and if set, it must be after 2018-10-02 UTC. If end\_time is not set, it is default to current timestamp. If start\_time is not set, the snapshot of the assets at end\_time will be returned. The returned results contain all temporal assets whose time window overlap with read\_time\_window. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.asset_v1.types.TimeWindow` asset_names (list[str]): A list of the full names of the assets. For example: ``//compute.googleapis.com/projects/my_project_123/zones/zone1/instances/instance1``. See `Resource Names <https://cloud.google.com/apis/design/resource_names#full_resource_name>`__ and `Resource Name Format <https://cloud.google.com/resource-manager/docs/cloud-asset-inventory/resource-name-format>`__ for more info. The request becomes a no-op if the asset name list is empty, and the max size of the asset name list is 100 in one request. retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.asset_v1.types.BatchGetAssetsHistoryResponse` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic. if "batch_get_assets_history" not in self._inner_api_calls: self._inner_api_calls[ "batch_get_assets_history" ] = google.api_core.gapic_v1.method.wrap_method( self.transport.batch_get_assets_history, default_retry=self._method_configs["BatchGetAssetsHistory"].retry, default_timeout=self._method_configs["BatchGetAssetsHistory"].timeout, client_info=self._client_info, ) request = asset_service_pb2.BatchGetAssetsHistoryRequest( parent=parent, content_type=content_type, read_time_window=read_time_window, asset_names=asset_names, ) return self._inner_api_calls["batch_get_assets_history"]( request, retry=retry, timeout=timeout, metadata=metadata )
<SYSTEM_TASK:> Extract and parse Avro schema from a read session. <END_TASK> <USER_TASK:> Description: def _avro_schema(read_session): """Extract and parse Avro schema from a read session. Args: read_session ( \ ~google.cloud.bigquery_storage_v1beta1.types.ReadSession \ ): The read session associated with this read rows stream. This contains the schema, which is required to parse the data blocks. Returns: Tuple[fastavro.schema, Tuple[str]]: A parsed Avro schema, using :func:`fastavro.schema.parse_schema` and the column names for a read session. """
json_schema = json.loads(read_session.avro_schema.schema) column_names = tuple((field["name"] for field in json_schema["fields"])) return fastavro.parse_schema(json_schema), column_names
<SYSTEM_TASK:> Parse all rows in a stream block. <END_TASK> <USER_TASK:> Description: def _avro_rows(block, avro_schema): """Parse all rows in a stream block. Args: block ( \ ~google.cloud.bigquery_storage_v1beta1.types.ReadRowsResponse \ ): A block containing Avro bytes to parse into rows. avro_schema (fastavro.schema): A parsed Avro schema, used to deserialize the bytes in the block. Returns: Iterable[Mapping]: A sequence of rows, represented as dictionaries. """
blockio = six.BytesIO(block.avro_rows.serialized_binary_rows) while True: # Loop in a while loop because schemaless_reader can only read # a single record. try: # TODO: Parse DATETIME into datetime.datetime (no timezone), # instead of as a string. yield fastavro.schemaless_reader(blockio, avro_schema) except StopIteration: break
<SYSTEM_TASK:> Reconnect to the ReadRows stream using the most recent offset. <END_TASK> <USER_TASK:> Description: def _reconnect(self): """Reconnect to the ReadRows stream using the most recent offset."""
self._wrapped = self._client.read_rows( _copy_stream_position(self._position), **self._read_rows_kwargs )
<SYSTEM_TASK:> A generator of all pages in the stream. <END_TASK> <USER_TASK:> Description: def pages(self): """A generator of all pages in the stream. Returns: types.GeneratorType[google.cloud.bigquery_storage_v1beta1.ReadRowsPage]: A generator of pages. """
# Each page is an iterator of rows. But also has num_items, remaining, # and to_dataframe. avro_schema, column_names = _avro_schema(self._read_session) for block in self._reader: self._status = block.status yield ReadRowsPage(avro_schema, column_names, block)
<SYSTEM_TASK:> Parse metadata and rows from the block only once. <END_TASK> <USER_TASK:> Description: def _parse_block(self): """Parse metadata and rows from the block only once."""
if self._iter_rows is not None: return rows = _avro_rows(self._block, self._avro_schema) self._num_items = self._block.avro_rows.row_count self._remaining = self._block.avro_rows.row_count self._iter_rows = iter(rows)
<SYSTEM_TASK:> Get the next row in the page. <END_TASK> <USER_TASK:> Description: def next(self): """Get the next row in the page."""
self._parse_block() if self._remaining > 0: self._remaining -= 1 return six.next(self._iter_rows)
<SYSTEM_TASK:> Creates an instance and begins preparing it to begin serving. The <END_TASK> <USER_TASK:> Description: def create_instance( self, parent, instance_id, instance, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Creates an instance and begins preparing it to begin serving. The returned ``long-running operation`` can be used to track the progress of preparing the new instance. The instance name is assigned by the caller. If the named instance already exists, ``CreateInstance`` returns ``ALREADY_EXISTS``. Immediately upon completion of this request: - The instance is readable via the API, with all requested attributes but no allocated resources. Its state is ``CREATING``. Until completion of the returned operation: - Cancelling the operation renders the instance immediately unreadable via the API. - The instance can be deleted. - All other attempts to modify the instance are rejected. Upon completion of the returned operation: - Billing for all successfully-allocated resources begins (some types may have lower than the requested levels). - Databases can be created in the instance. - The instance's allocated resource levels are readable via the API. - The instance's state becomes ``READY``. The returned ``long-running operation`` will have a name of the format ``<instance_name>/operations/<operation_id>`` and can be used to track creation of the instance. The ``metadata`` field type is ``CreateInstanceMetadata``. The ``response`` field type is ``Instance``, if successful. Example: >>> from google.cloud import spanner_admin_instance_v1 >>> >>> client = spanner_admin_instance_v1.InstanceAdminClient() >>> >>> parent = client.project_path('[PROJECT]') >>> >>> # TODO: Initialize `instance_id`: >>> instance_id = '' >>> >>> # TODO: Initialize `instance`: >>> instance = {} >>> >>> response = client.create_instance(parent, instance_id, instance) >>> >>> def callback(operation_future): ... # Handle result. ... result = operation_future.result() >>> >>> response.add_done_callback(callback) >>> >>> # Handle metadata. >>> metadata = response.metadata() Args: parent (str): Required. The name of the project in which to create the instance. Values are of the form ``projects/<project>``. instance_id (str): Required. The ID of the instance to create. Valid identifiers are of the form ``[a-z][-a-z0-9]*[a-z0-9]`` and must be between 6 and 30 characters in length. instance (Union[dict, ~google.cloud.spanner_admin_instance_v1.types.Instance]): Required. The instance to create. The name may be omitted, but if specified must be ``<parent>/instances/<instance_id>``. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_admin_instance_v1.types.Instance` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.spanner_admin_instance_v1.types._OperationFuture` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. 
ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic. if "create_instance" not in self._inner_api_calls: self._inner_api_calls[ "create_instance" ] = google.api_core.gapic_v1.method.wrap_method( self.transport.create_instance, default_retry=self._method_configs["CreateInstance"].retry, default_timeout=self._method_configs["CreateInstance"].timeout, client_info=self._client_info, ) request = spanner_instance_admin_pb2.CreateInstanceRequest( parent=parent, instance_id=instance_id, instance=instance ) if metadata is None: metadata = [] metadata = list(metadata) try: routing_header = [("parent", parent)] except AttributeError: pass else: routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( routing_header ) metadata.append(routing_metadata) operation = self._inner_api_calls["create_instance"]( request, retry=retry, timeout=timeout, metadata=metadata ) return google.api_core.operation.from_gapic( operation, self.transport._operations_client, spanner_instance_admin_pb2.Instance, metadata_type=spanner_instance_admin_pb2.CreateInstanceMetadata, )
<SYSTEM_TASK:> Updates an instance, and begins allocating or releasing resources as <END_TASK> <USER_TASK:> Description: def update_instance( self, instance, field_mask, retry=google.api_core.gapic_v1.method.DEFAULT, timeout=google.api_core.gapic_v1.method.DEFAULT, metadata=None, ): """ Updates an instance, and begins allocating or releasing resources as requested. The returned ``long-running operation`` can be used to track the progress of updating the instance. If the named instance does not exist, returns ``NOT_FOUND``. Immediately upon completion of this request: - For resource types for which a decrease in the instance's allocation has been requested, billing is based on the newly-requested level. Until completion of the returned operation: - Cancelling the operation sets its metadata's ``cancel_time``, and begins restoring resources to their pre-request values. The operation is guaranteed to succeed at undoing all resource changes, after which point it terminates with a ``CANCELLED`` status. - All other attempts to modify the instance are rejected. - Reading the instance via the API continues to give the pre-request resource levels. Upon completion of the returned operation: - Billing begins for all successfully-allocated resources (some types may have lower than the requested levels). - All newly-reserved resources are available for serving the instance's tables. - The instance's new resource levels are readable via the API. The returned ``long-running operation`` will have a name of the format ``<instance_name>/operations/<operation_id>`` and can be used to track the instance modification. The ``metadata`` field type is ``UpdateInstanceMetadata``. The ``response`` field type is ``Instance``, if successful. Authorization requires ``spanner.instances.update`` permission on resource ``name``. Example: >>> from google.cloud import spanner_admin_instance_v1 >>> >>> client = spanner_admin_instance_v1.InstanceAdminClient() >>> >>> # TODO: Initialize `instance`: >>> instance = {} >>> >>> # TODO: Initialize `field_mask`: >>> field_mask = {} >>> >>> response = client.update_instance(instance, field_mask) >>> >>> def callback(operation_future): ... # Handle result. ... result = operation_future.result() >>> >>> response.add_done_callback(callback) >>> >>> # Handle metadata. >>> metadata = response.metadata() Args: instance (Union[dict, ~google.cloud.spanner_admin_instance_v1.types.Instance]): Required. The instance to update, which must always include the instance name. Otherwise, only fields mentioned in [][google.spanner.admin.instance.v1.UpdateInstanceRequest.field\_mask] need be included. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_admin_instance_v1.types.Instance` field_mask (Union[dict, ~google.cloud.spanner_admin_instance_v1.types.FieldMask]): Required. A mask specifying which fields in [][google.spanner.admin.instance.v1.UpdateInstanceRequest.instance] should be updated. The field mask must always be specified; this prevents any future fields in [][google.spanner.admin.instance.v1.Instance] from being erased accidentally by clients that do not know about them. If a dict is provided, it must be of the same form as the protobuf message :class:`~google.cloud.spanner_admin_instance_v1.types.FieldMask` retry (Optional[google.api_core.retry.Retry]): A retry object used to retry requests. If ``None`` is specified, requests will not be retried. 
timeout (Optional[float]): The amount of time, in seconds, to wait for the request to complete. Note that if ``retry`` is specified, the timeout applies to each individual attempt. metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata that is provided to the method. Returns: A :class:`~google.cloud.spanner_admin_instance_v1.types._OperationFuture` instance. Raises: google.api_core.exceptions.GoogleAPICallError: If the request failed for any reason. google.api_core.exceptions.RetryError: If the request failed due to a retryable error and retry attempts failed. ValueError: If the parameters are invalid. """
# Wrap the transport method to add retry and timeout logic. if "update_instance" not in self._inner_api_calls: self._inner_api_calls[ "update_instance" ] = google.api_core.gapic_v1.method.wrap_method( self.transport.update_instance, default_retry=self._method_configs["UpdateInstance"].retry, default_timeout=self._method_configs["UpdateInstance"].timeout, client_info=self._client_info, ) request = spanner_instance_admin_pb2.UpdateInstanceRequest( instance=instance, field_mask=field_mask ) if metadata is None: metadata = [] metadata = list(metadata) try: routing_header = [("instance.name", instance.name)] except AttributeError: pass else: routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata( routing_header ) metadata.append(routing_metadata) operation = self._inner_api_calls["update_instance"]( request, retry=retry, timeout=timeout, metadata=metadata ) return google.api_core.operation.from_gapic( operation, self.transport._operations_client, spanner_instance_admin_pb2.Instance, metadata_type=spanner_instance_admin_pb2.UpdateInstanceMetadata, )
<SYSTEM_TASK:> Return a fully-qualified scan_config string. <END_TASK> <USER_TASK:> Description: def scan_config_path(cls, project, scan_config): """Return a fully-qualified scan_config string."""
return google.api_core.path_template.expand( "projects/{project}/scanConfigs/{scan_config}", project=project, scan_config=scan_config, )
<SYSTEM_TASK:> Return a fully-qualified scan_run string. <END_TASK> <USER_TASK:> Description: def scan_run_path(cls, project, scan_config, scan_run): """Return a fully-qualified scan_run string."""
return google.api_core.path_template.expand( "projects/{project}/scanConfigs/{scan_config}/scanRuns/{scan_run}", project=project, scan_config=scan_config, scan_run=scan_run, )
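Both path helpers delegate to ``google.api_core.path_template.expand`` (shown earlier in this document); a short illustrative sketch with made-up IDs follows.

from google.api_core import path_template

# Hypothetical IDs, purely for illustration.
name = path_template.expand(
    "projects/{project}/scanConfigs/{scan_config}/scanRuns/{scan_run}",
    project="my-project", scan_config="cfg-1", scan_run="run-1",
)
# name == "projects/my-project/scanConfigs/cfg-1/scanRuns/run-1"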
<SYSTEM_TASK:> Make a copy of this client. <END_TASK> <USER_TASK:> Description: def copy(self): """Make a copy of this client. Copies the local data stored as simple types but does not copy the current state of any open connections with the Cloud Bigtable API. :rtype: :class:`.Client` :returns: A copy of the current client. """
return self.__class__( project=self.project, credentials=self._credentials, user_agent=self.user_agent, )
<SYSTEM_TASK:> List available instance configurations for the client's project. <END_TASK> <USER_TASK:> Description: def list_instance_configs(self, page_size=None, page_token=None): """List available instance configurations for the client's project. .. _RPC docs: https://cloud.google.com/spanner/docs/reference/rpc/\ google.spanner.admin.instance.v1#google.spanner.admin.\ instance.v1.InstanceAdmin.ListInstanceConfigs See `RPC docs`_. :type page_size: int :param page_size: Optional. The maximum number of configs in each page of results from this request. Non-positive values are ignored. Defaults to a sensible value set by the API. :type page_token: str :param page_token: Optional. If present, return the next batch of configs, using the value, which must correspond to the ``nextPageToken`` value returned in the previous response. Deprecated: use the ``pages`` property of the returned iterator instead of manually passing the token. :rtype: :class:`~google.api_core.page_iterator.Iterator` :returns: Iterator of :class:`~google.cloud.spanner_v1.instance.InstanceConfig` resources within the client's project. """
metadata = _metadata_with_prefix(self.project_name) path = "projects/%s" % (self.project,) page_iter = self.instance_admin_api.list_instance_configs( path, page_size=page_size, metadata=metadata ) page_iter.next_page_token = page_token page_iter.item_to_value = _item_to_instance_config return page_iter
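A brief usage sketch, assuming application default credentials so the client can infer the project:

from google.cloud import spanner

client = spanner.Client()  # project taken from the environment/credentials
for config in client.list_instance_configs():
    print(config.name, config.display_name)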
<SYSTEM_TASK:> List instances for the client's project. <END_TASK> <USER_TASK:> Description: def list_instances(self, filter_="", page_size=None, page_token=None): """List instances for the client's project. See https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.instance.v1#google.spanner.admin.instance.v1.InstanceAdmin.ListInstances :type filter_: string :param filter_: (Optional) Filter to select instances listed. See the ``ListInstancesRequest`` docs above for examples. :type page_size: int :param page_size: Optional. The maximum number of instances in each page of results from this request. Non-positive values are ignored. Defaults to a sensible value set by the API. :type page_token: str :param page_token: Optional. If present, return the next batch of instances, using the value, which must correspond to the ``nextPageToken`` value returned in the previous response. Deprecated: use the ``pages`` property of the returned iterator instead of manually passing the token. :rtype: :class:`~google.api_core.page_iterator.Iterator` :returns: Iterator of :class:`~google.cloud.spanner_v1.instance.Instance` resources within the client's project. """
metadata = _metadata_with_prefix(self.project_name) path = "projects/%s" % (self.project,) page_iter = self.instance_admin_api.list_instances( path, page_size=page_size, metadata=metadata ) page_iter.item_to_value = self._item_to_instance page_iter.next_page_token = page_token return page_iter
<SYSTEM_TASK:> Predicate for determining when to retry. <END_TASK> <USER_TASK:> Description: def _should_retry(exc): """Predicate for determining when to retry. We retry if and only if the 'reason' is 'backendError' or 'rateLimitExceeded'. """
if not hasattr(exc, "errors"): return False if len(exc.errors) == 0: # Check for unstructured error returns, e.g. from GFE return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES) reason = exc.errors[0]["reason"] return reason in _RETRYABLE_REASONS
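This predicate is meant to be plugged into a ``google.api_core.retry.Retry`` object; a hedged sketch of that wiring (roughly how the library's default retry policy is assembled):

from google.api_core import retry

# Retry transient failures (backendError, rateLimitExceeded, connection resets)
# while letting permanent errors propagate immediately.
DEFAULT_RETRY = retry.Retry(predicate=_should_retry)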
<SYSTEM_TASK:> Detect correct entry type from resource and instantiate. <END_TASK> <USER_TASK:> Description: def entry_from_resource(resource, client, loggers): """Detect correct entry type from resource and instantiate. :type resource: dict :param resource: One entry resource from API response. :type client: :class:`~google.cloud.logging.client.Client` :param client: Client that owns the log entry. :type loggers: dict :param loggers: A mapping of logger fullnames -> loggers. If the logger that owns the entry is not in ``loggers``, the entry will have a newly-created logger. :rtype: :class:`~google.cloud.logging.entries._BaseEntry` :returns: The entry instance, constructed via the resource """
if "textPayload" in resource: return TextEntry.from_api_repr(resource, client, loggers) if "jsonPayload" in resource: return StructEntry.from_api_repr(resource, client, loggers) if "protoPayload" in resource: return ProtobufEntry.from_api_repr(resource, client, loggers) return LogEntry.from_api_repr(resource, client, loggers)
<SYSTEM_TASK:> Retrieve the metadata key in the metadata server. <END_TASK> <USER_TASK:> Description: def retrieve_metadata_server(metadata_key): """Retrieve the metadata key in the metadata server. See: https://cloud.google.com/compute/docs/storing-retrieving-metadata :type metadata_key: str :param metadata_key: Key of the metadata which will form the url. You can also supply query parameters after the metadata key. e.g. "tags?alt=json" :rtype: str :returns: The value of the metadata key returned by the metadata server. """
url = METADATA_URL + metadata_key try: response = requests.get(url, headers=METADATA_HEADERS) if response.status_code == requests.codes.ok: return response.text except requests.exceptions.RequestException: # Ignore the exception, connection failed means the attribute does not # exist in the metadata server. pass return None
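A small usage sketch; these lookups only succeed on a GCE/GKE VM where the metadata server is reachable, otherwise ``None`` is returned:

zone = retrieve_metadata_server("instance/zone")
tags = retrieve_metadata_server("instance/tags?alt=json")
if zone is not None:
    # The zone is returned as "projects/<number>/zones/<zone>"; keep the last part.
    print(zone.split("/")[-1])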
<SYSTEM_TASK:> Wrapper around deprecated function. <END_TASK> <USER_TASK:> Description: def batch_eval(*args, **kwargs): """ Wrapper around deprecated function. """
# Inside function to avoid circular import from cleverhans.evaluation import batch_eval as new_batch_eval warnings.warn("batch_eval has moved to cleverhans.evaluation. " "batch_eval will be removed from utils_tf on or after " "2019-03-09.") return new_batch_eval(*args, **kwargs)
<SYSTEM_TASK:> Returns a list of string names of all available GPUs <END_TASK> <USER_TASK:> Description: def get_available_gpus(): """ Returns a list of string names of all available GPUs """
local_device_protos = device_lib.list_local_devices() return [x.name for x in local_device_protos if x.device_type == 'GPU']
<SYSTEM_TASK:> A wrapper for clip_by_value that casts the clipping range if needed. <END_TASK> <USER_TASK:> Description: def clip_by_value(t, clip_value_min, clip_value_max, name=None): """ A wrapper for clip_by_value that casts the clipping range if needed. """
def cast_clip(clip): """ Cast clipping range argument if needed. """ if t.dtype in (tf.float32, tf.float64): if hasattr(clip, 'dtype'): # Convert to tf dtype in case this is a numpy dtype clip_dtype = tf.as_dtype(clip.dtype) if clip_dtype != t.dtype: return tf.cast(clip, t.dtype) return clip clip_value_min = cast_clip(clip_value_min) clip_value_max = cast_clip(clip_value_max) return tf.clip_by_value(t, clip_value_min, clip_value_max, name)
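A minimal sketch of the casting behavior, assuming TF1 graph mode (the style this codebase targets): float64 NumPy bounds are cast down to the tensor's float32 dtype before clipping.

import numpy as np
import tensorflow as tf

t = tf.constant([-0.3, 0.2, 1.7], dtype=tf.float32)
clipped = clip_by_value(t, np.float64(0.0), np.float64(1.0))  # bounds cast to float32
with tf.Session() as sess:
    print(sess.run(clipped))  # [0.  0.2 1. ]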
<SYSTEM_TASK:> A wrapper around tf multiplication that does more automatic casting of <END_TASK> <USER_TASK:> Description: def mul(a, b): """ A wrapper around tf multiplication that does more automatic casting of the input. """
def multiply(a, b): """Multiplication""" return a * b return op_with_scalar_cast(a, b, multiply)
<SYSTEM_TASK:> A wrapper around tf division that does more automatic casting of <END_TASK> <USER_TASK:> Description: def div(a, b): """ A wrapper around tf division that does more automatic casting of the input. """
def divide(a, b): """Division""" return a / b return op_with_scalar_cast(a, b, divide)
<SYSTEM_TASK:> Helper method to write images from single batch into datastore. <END_TASK> <USER_TASK:> Description: def _write_single_batch_images_internal(self, batch_id, client_batch): """Helper method to write images from single batch into datastore."""
client = self._datastore_client batch_key = client.key(self._entity_kind_batches, batch_id) for img_id, img in iteritems(self._data[batch_id]['images']): img_entity = client.entity( client.key(self._entity_kind_images, img_id, parent=batch_key)) for k, v in iteritems(img): img_entity[k] = v client_batch.put(img_entity)
<SYSTEM_TASK:> Writes all image batches to the datastore. <END_TASK> <USER_TASK:> Description: def write_to_datastore(self): """Writes all image batches to the datastore."""
client = self._datastore_client with client.no_transact_batch() as client_batch: for batch_id, batch_data in iteritems(self._data): batch_key = client.key(self._entity_kind_batches, batch_id) batch_entity = client.entity(batch_key) for k, v in iteritems(batch_data): if k != 'images': batch_entity[k] = v client_batch.put(batch_entity) self._write_single_batch_images_internal(batch_id, client_batch)
<SYSTEM_TASK:> Writes only images from one batch to the datastore. <END_TASK> <USER_TASK:> Description: def write_single_batch_images_to_datastore(self, batch_id): """Writes only images from one batch to the datastore."""
client = self._datastore_client with client.no_transact_batch() as client_batch: self._write_single_batch_images_internal(batch_id, client_batch)
<SYSTEM_TASK:> Initializes batches by reading from the datastore. <END_TASK> <USER_TASK:> Description: def init_from_datastore(self): """Initializes batches by reading from the datastore."""
self._data = {} for entity in self._datastore_client.query_fetch( kind=self._entity_kind_batches): batch_id = entity.key.flat_path[-1] self._data[batch_id] = dict(entity) self._data[batch_id]['images'] = {} for entity in self._datastore_client.query_fetch( kind=self._entity_kind_images): batch_id = entity.key.flat_path[-3] image_id = entity.key.flat_path[-1] self._data[batch_id]['images'][image_id] = dict(entity)
<SYSTEM_TASK:> Adds a batch with the given ID and dict of properties. <END_TASK> <USER_TASK:> Description: def add_batch(self, batch_id, batch_properties=None): """Adds a batch with the given ID and dict of properties."""
if batch_properties is None: batch_properties = {} if not isinstance(batch_properties, dict): raise ValueError('batch_properties has to be dict, however it was: ' + str(type(batch_properties))) self._data[batch_id] = batch_properties.copy() self._data[batch_id]['images'] = {}
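A hypothetical usage sketch; ``image_batches`` stands in for an instance of this batch container, and ``add_image`` is the companion helper used further down in this module:

# Illustrative IDs and properties only.
image_batches.add_batch("BATCH000", {"epsilon": 16})
image_batches.add_image("BATCH000", "IMG000001",
                        {"dataset_image_id": "abc123",
                         "image_path": "dataset/dev/abc123.png"})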
<SYSTEM_TASK:> Reads list of dataset images from the storage bucket. <END_TASK> <USER_TASK:> Description: def _read_image_list(self, skip_image_ids=None): """Reads list of dataset images from the storage bucket."""
if skip_image_ids is None: skip_image_ids = [] images = self._storage_client.list_blobs( prefix=os.path.join('dataset', self._dataset_name) + '/') zip_files = [i for i in images if i.endswith('.zip')] if len(zip_files) == 1: # we have a zip archive with images zip_name = zip_files[0] logging.info('Reading list of images from zip file %s', zip_name) blob = self._storage_client.get_blob(zip_name) buf = BytesIO() logging.info('Downloading zip') blob.download_to_file(buf) buf.seek(0) logging.info('Reading content of the zip') with zipfile.ZipFile(buf) as f: images = [os.path.join(zip_name, os.path.basename(n)) for n in f.namelist() if n.endswith('.png')] buf.close() logging.info('Found %d images', len(images)) else: # we have just a directory with images, filter non-PNG files logging.info('Reading list of images from png files in storage') images = [i for i in images if i.endswith('.png')] logging.info('Found %d images', len(images)) # filter images which should be skipped images = [i for i in images if os.path.basename(i)[:-4] not in skip_image_ids] # assign IDs to images images = [(DATASET_IMAGE_ID_PATTERN.format(idx), i) for idx, i in enumerate(sorted(images))] return images
<SYSTEM_TASK:> Initializes dataset batches from the list of images in the storage bucket. <END_TASK> <USER_TASK:> Description: def init_from_storage_write_to_datastore(self, batch_size=100, allowed_epsilon=None, skip_image_ids=None, max_num_images=None): """Initializes dataset batches from the list of images in the storage bucket and writes them to the datastore. Args: batch_size: batch size allowed_epsilon: list of allowed epsilon or None to use default skip_image_ids: list of image ids to skip max_num_images: maximum number of images to read """
if allowed_epsilon is None: allowed_epsilon = copy.copy(DEFAULT_EPSILON) # init dataset batches from data in storage self._dataset_batches = {} # read all blob names from storage images = self._read_image_list(skip_image_ids) if max_num_images: images = images[:max_num_images] for batch_idx, batch_start in enumerate(range(0, len(images), batch_size)): batch = images[batch_start:batch_start+batch_size] batch_id = DATASET_BATCH_ID_PATTERN.format(batch_idx) batch_epsilon = allowed_epsilon[batch_idx % len(allowed_epsilon)] self.add_batch(batch_id, {'epsilon': batch_epsilon}) for image_id, image_path in batch: self.add_image(batch_id, image_id, {'dataset_image_id': os.path.basename(image_path)[:-4], 'image_path': image_path}) # write data to datastore self.write_to_datastore()
<SYSTEM_TASK:> Init list of adversarial batches from dataset batches and submissions. <END_TASK> <USER_TASK:> Description: def init_from_dataset_and_submissions_write_to_datastore( self, dataset_batches, attack_submission_ids): """Init list of adversarial batches from dataset batches and submissions. Args: dataset_batches: instance of DatasetBatches attack_submission_ids: iterable with IDs of all (targeted and nontargeted) attack submissions; can be obtained via CompetitionSubmissions.get_all_attack_ids() """
batches_x_attacks = itertools.product(dataset_batches.data.keys(), attack_submission_ids) for idx, (dataset_batch_id, attack_id) in enumerate(batches_x_attacks): adv_batch_id = ADVERSARIAL_BATCH_ID_PATTERN.format(idx) self.add_batch(adv_batch_id, {'dataset_batch_id': dataset_batch_id, 'submission_id': attack_id}) self.write_to_datastore()
<SYSTEM_TASK:> Returns the number of generated adversarial examples per attack submission. <END_TASK> <USER_TASK:> Description: def count_generated_adv_examples(self): """Returns the number of generated adversarial examples per attack submission."""
result = {} for v in itervalues(self.data): s_id = v['submission_id'] result[s_id] = result.get(s_id, 0) + len(v['images']) return result
<SYSTEM_TASK:> Create a logger object with the given name. <END_TASK> <USER_TASK:> Description: def create_logger(name): """ Create a logger object with the given name. If this is the first time that we call this method, then initialize the formatter. """
base = logging.getLogger("cleverhans") if len(base.handlers) == 0: ch = logging.StreamHandler() formatter = logging.Formatter('[%(levelname)s %(asctime)s %(name)s] ' + '%(message)s') ch.setFormatter(formatter) base.addHandler(ch) return base
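Usage sketch; note that the handler is attached to the shared "cleverhans" logger, so every call returns the same configured logger object:

import logging

logger = create_logger("cleverhans.attacks")
logger.setLevel(logging.INFO)
logger.info("starting evaluation")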
<SYSTEM_TASK:> Returns a version of `normal_dict` whose iteration order is always the same <END_TASK> <USER_TASK:> Description: def deterministic_dict(normal_dict): """ Returns a version of `normal_dict` whose iteration order is always the same """
out = OrderedDict() for key in sorted(normal_dict.keys()): out[key] = normal_dict[key] return out
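A quick illustration of the guarantee:

d = deterministic_dict({"b": 2, "a": 1, "c": 3})
assert list(d.keys()) == ["a", "b", "c"]  # iteration order follows sorted keys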
<SYSTEM_TASK:> Calls shell command with argument substitution. <END_TASK> <USER_TASK:> Description: def shell_call(command, **kwargs): """Calls shell command with argument substitution. Args: command: command represented as a list. Each element of the list is one token of the command. For example "cp a b" becomes ['cp', 'a', 'b'] If any element of the list looks like '${NAME}' then it will be replaced by value from **kwargs with key 'NAME'. **kwargs: dictionary with argument substitution Returns: output of the command Raises: subprocess.CalledProcessError if command return value is not zero This function is useful when you need to do variable substitution prior to running the command. Below are a few examples of how it works: shell_call(['cp', 'a', 'b'], a='asd') calls command 'cp a b' shell_call(['cp', '${a}', 'b'], a='asd') calls command 'cp asd b'; '${a}' was replaced with 'asd' before calling the command """
# Regular expression to find instances of '${NAME}' in a string CMD_VARIABLE_RE = re.compile('^\\$\\{(\\w+)\\}$') command = list(command) for i in range(len(command)): m = CMD_VARIABLE_RE.match(command[i]) if m: var_id = m.group(1) if var_id in kwargs: command[i] = kwargs[var_id] str_command = ' '.join(command) logging.debug('Executing shell command: %s' % str_command) return subprocess.check_output(command)
<SYSTEM_TASK:> Returns a copy of a dictionary whose values are numpy arrays. <END_TASK> <USER_TASK:> Description: def deep_copy(numpy_dict): """ Returns a copy of a dictionary whose values are numpy arrays. Copies their values rather than copying references to them. """
out = {} for key in numpy_dict: out[key] = numpy_dict[key].copy() return out
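A quick illustration that array values are copied rather than aliased:

import numpy as np

params = {"w": np.zeros((2, 2))}
snapshot = deep_copy(params)
snapshot["w"][0, 0] = 1.0
assert params["w"][0, 0] == 0.0  # the original array is untouched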
<SYSTEM_TASK:> Load a saved model and print out its accuracy on different data distributions <END_TASK> <USER_TASK:> Description: def print_accuracies(filepath, train_start=TRAIN_START, train_end=TRAIN_END, test_start=TEST_START, test_end=TEST_END, batch_size=BATCH_SIZE, which_set=WHICH_SET, base_eps_iter=BASE_EPS_ITER, nb_iter=NB_ITER): """ Load a saved model and print out its accuracy on different data distributions This function works by running a single attack on each example. This provides a reasonable estimate of the true failure rate quickly, so long as the model does not suffer from gradient masking. However, this estimate is mostly intended for development work and not for publication. A more accurate estimate may be obtained by running an attack bundler instead. :param filepath: path to model to evaluate :param train_start: index of first training set example to use :param train_end: index of last training set example to use :param test_start: index of first test set example to use :param test_end: index of last test set example to use :param batch_size: size of evaluation batches :param which_set: 'train' or 'test' :param base_eps_iter: step size if the data were in [0,1] (Step size will be rescaled proportional to the actual data range) :param nb_iter: Number of iterations of PGD to run per class """
# Set TF random seed to improve reproducibility tf.set_random_seed(20181014) set_log_level(logging.INFO) sess = tf.Session() with sess.as_default(): model = load(filepath) assert len(model.get_params()) > 0 factory = model.dataset_factory factory.kwargs['train_start'] = train_start factory.kwargs['train_end'] = train_end factory.kwargs['test_start'] = test_start factory.kwargs['test_end'] = test_end dataset = factory() x_data, y_data = dataset.get_set(which_set) impl(sess, model, dataset, factory, x_data, y_data, base_eps_iter, nb_iter)
<SYSTEM_TASK:> Clone variables unused by the attack on all GPUs. Specifically, the <END_TASK> <USER_TASK:> Description: def clone_g0_inputs_on_ngpus(self, inputs, outputs, g0_inputs): """ Clone variables unused by the attack on all GPUs. Specifically, the ground-truth label, y, has to be preserved until the training step. :param inputs: A list of dictionaries as the inputs to each step. :param outputs: A list of dictionaries as the outputs of each step. :param g0_inputs: Initial variables to be cloned. :return: Updated inputs and outputs. """
assert len(inputs) == len(outputs), ( 'Inputs and outputs should have the same number of elements.') inputs[0].update(g0_inputs) outputs[0].update(g0_inputs) # Copy g0_inputs forward for i in range(1, len(inputs)): # Create the graph for i'th step of attack device_name = inputs[i]['x'].device with tf.device(device_name): with tf.variable_scope('step%d' % i): for k, v in g0_inputs.iteritems(): if k not in inputs[i]: v_copy = clone_variable(k, v) inputs[i][k] = v_copy outputs[i][k] = v_copy return inputs, outputs
<SYSTEM_TASK:> Set the device before the next fprop to create a new graph on the <END_TASK> <USER_TASK:> Description: def set_device(self, device_name): """ Set the device before the next fprop to create a new graph on the specified device. """
device_name = unify_device_name(device_name) self.device_name = device_name for layer in self.layers: layer.device_name = device_name
<SYSTEM_TASK:> Create and initialize layer parameters on the device previously set <END_TASK> <USER_TASK:> Description: def set_input_shape_ngpu(self, new_input_shape):
  """
  Create and initialize layer parameters on the device previously set
  in self.device_name.

  :param new_input_shape: a list or tuple for the shape of the input.
  """
  assert self.device_name, "Device name has not been set."
  device_name = self.device_name
  if self.input_shape is None:
    # First time setting the input shape
    self.input_shape = [None] + [int(d) for d in list(new_input_shape)]
  if device_name in self.params_device:
    # There is a copy of weights on this device
    self.__dict__.update(self.params_device[device_name])
    return  # Stop recursion
  self.params_device[device_name] = {}

  # Initialize weights on this device
  with tf.device(device_name):
    self.set_input_shape(self.input_shape)
    keys_after = self.__dict__.keys()
    if self.params_names is None:
      # Prevent overriding training
      self.params_names = [k for k in keys_after if isinstance(
          self.__dict__[k], tf.Variable)]
    params = {k: self.__dict__[k] for k in self.params_names}
    self.params_device[device_name] = params
<SYSTEM_TASK:> Create an assignment operation for each weight on all devices. The <END_TASK> <USER_TASK:> Description: def create_sync_ops(self, host_device):
  """Create an assignment operation for each weight on all devices. The
  weight is assigned the value of the copy on the `host_device'.
  """
  sync_ops = []
  host_params = self.params_device[host_device]
  # `.items()` keeps this Python 3 compatible (was `.iteritems()`)
  for device, params in (self.params_device).items():
    if device == host_device:
      continue
    for k in self.params_names:
      if isinstance(params[k], tf.Variable):
        sync_ops += [tf.assign(params[k], host_params[k])]
  return sync_ops
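The sync step above reduces to one tf.assign per replicated weight, copying the host device's value into every other copy. A minimal self-contained sketch of that idea under the TF 1.x API (the variable names and the use of CPU devices are illustrative so the snippet runs without GPUs):

import tensorflow as tf

with tf.device('/cpu:0'):
  w_host = tf.Variable(tf.ones([2, 2]), name='w_host')
with tf.device('/cpu:0'):  # in the real setup this would be another GPU
  w_replica = tf.Variable(tf.zeros([2, 2]), name='w_replica')

# One assignment op per replicated weight, mirroring create_sync_ops.
sync_ops = [tf.assign(w_replica, w_host)]

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  sess.run(sync_ops)          # push the host values into the replica
  print(sess.run(w_replica))  # now all ones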
<SYSTEM_TASK:> Iterate with exponential backoff on failures. <END_TASK> <USER_TASK:> Description: def iterate_with_exp_backoff(base_iter, max_num_tries=6, max_backoff=300.0, start_backoff=4.0, backoff_multiplier=2.0, frac_random_backoff=0.25):
  """Iterate with exponential backoff on failures.

  Useful to wrap results of datastore Query.fetch to avoid 429 error.

  Args:
    base_iter: basic iterator or generator object
    max_num_tries: maximum number of tries for each request
    max_backoff: maximum backoff, in seconds
    start_backoff: initial value of backoff
    backoff_multiplier: backoff multiplier
    frac_random_backoff: fraction of the value of random part of the backoff

  Yields:
    values yielded by the base iterator
  """
  try_number = 0
  if hasattr(base_iter, '__iter__'):
    base_iter = iter(base_iter)
  while True:
    try:
      yield next(base_iter)
      try_number = 0
    except StopIteration:
      break
    except TooManyRequests as e:
      logging.warning('TooManyRequests error: %s', tb.format_exc())
      if try_number >= max_num_tries:
        logging.error('Number of tries exceeded, too many requests: %s', e)
        raise
      # compute sleep time for truncated exponential backoff
      sleep_time = start_backoff * math.pow(backoff_multiplier, try_number)
      sleep_time *= (1.0 + frac_random_backoff * random.random())
      sleep_time = min(sleep_time, max_backoff)
      logging.warning('Too many requests error, '
                      'retrying with exponential backoff %.3f', sleep_time)
      time.sleep(sleep_time)
      try_number += 1
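A minimal usage sketch: in the competition code the wrapped iterator is typically a Cloud Datastore query.fetch() result, but any iterable works; here a plain generator (flaky_rows, illustrative) stands in so the snippet runs anywhere. The backoff only triggers when the underlying iterator raises TooManyRequests (HTTP 429).

def flaky_rows():
  for i in range(3):
    yield {'id': i}

for row in iterate_with_exp_backoff(flaky_rows(), max_num_tries=3):
  print(row['id'])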
<SYSTEM_TASK:> Lists names of all blobs by their prefix. <END_TASK> <USER_TASK:> Description: def list_blobs(self, prefix=''): """Lists names of all blobs by their prefix."""
return [b.name for b in self.bucket.list_blobs(prefix=prefix)]
<SYSTEM_TASK:> Rolls back pending mutations. <END_TASK> <USER_TASK:> Description: def rollback(self):
  """Rolls back pending mutations.

  Keep in mind that NoTransactionBatch splits all mutations into smaller
  batches and commits them as soon as the mutation buffer reaches maximum
  length. That's why the rollback method will only roll back pending
  mutations from the buffer, but won't be able to roll back already
  committed mutations.
  """
  try:
    if self._cur_batch:
      self._cur_batch.rollback()
  except ValueError:
    # ignore "Batch must be in progress to rollback" error
    pass
  self._cur_batch = None
  self._num_mutations = 0
<SYSTEM_TASK:> Adds mutation of the entity to the mutation buffer. <END_TASK> <USER_TASK:> Description: def put(self, entity):
  """Adds mutation of the entity to the mutation buffer.

  If the mutation buffer reaches its capacity then this method commits all
  pending mutations from the buffer and empties it.

  Args:
    entity: entity which should be put into the datastore
  """
  self._cur_batch.put(entity)
  self._num_mutations += 1
  if self._num_mutations >= MAX_MUTATIONS_IN_BATCH:
    self.commit()
    self.begin()
<SYSTEM_TASK:> Adds deletion of the entity with given key to the mutation buffer. <END_TASK> <USER_TASK:> Description: def delete(self, key):
  """Adds deletion of the entity with given key to the mutation buffer.

  If the mutation buffer reaches its capacity then this method commits all
  pending mutations from the buffer and empties it.

  Args:
    key: key of the entity which should be deleted
  """
  self._cur_batch.delete(key)
  self._num_mutations += 1
  if self._num_mutations >= MAX_MUTATIONS_IN_BATCH:
    self.commit()
    self.begin()
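put and delete share the same auto-flush pattern: count buffered mutations and, once the counter reaches MAX_MUTATIONS_IN_BATCH, commit and start a fresh batch. A stripped-down, self-contained sketch of that pattern (the MiniBatcher class and the limit of 3 are illustrative, not part of the competition code):

MAX_MUTATIONS_IN_BATCH = 3  # illustrative; the real constant is larger

class MiniBatcher(object):
  """Buffers operations and flushes automatically when the buffer is full."""

  def begin(self):
    self._buffer = []
    self._num_mutations = 0

  def commit(self):
    print('committing %d mutations' % self._num_mutations)
    # the real code would call the underlying datastore batch's commit() here

  def put(self, entity):
    self._buffer.append(('put', entity))
    self._num_mutations += 1
    if self._num_mutations >= MAX_MUTATIONS_IN_BATCH:
      self.commit()
      self.begin()

b = MiniBatcher()
b.begin()
for i in range(7):
  b.put({'id': i})  # flushes after every third mutation
b.commit()          # flush the remainder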
<SYSTEM_TASK:> Removes directory tree as a superuser. <END_TASK> <USER_TASK:> Description: def sudo_remove_dirtree(dir_name):
  """Removes directory tree as a superuser.

  Args:
    dir_name: name of the directory to remove.

  This function is necessary to clean up directories created from inside a
  Docker container, since they are usually written as root and thus have to
  be removed as root.
  """
  try:
    subprocess.check_output(['sudo', 'rm', '-rf', dir_name])
  except subprocess.CalledProcessError as e:
    raise WorkerError('Can\'t remove directory {0}'.format(dir_name), e)
<SYSTEM_TASK:> Main function which runs worker. <END_TASK> <USER_TASK:> Description: def main(args): """Main function which runs worker."""
  title = '## Starting evaluation of round {0} ##'.format(args.round_name)
  logging.info('\n'
               + '#' * len(title) + '\n'
               + '#' * len(title) + '\n'
               + '##' + ' ' * (len(title)-2) + '##' + '\n'
               + title + '\n'
               + '#' * len(title) + '\n'
               + '#' * len(title) + '\n'
               + '##' + ' ' * (len(title)-2) + '##' + '\n')
  if args.blacklisted_submissions:
    logging.warning('BLACKLISTED SUBMISSIONS: %s',
                    args.blacklisted_submissions)
  random.seed()
  logging.info('Running nvidia-docker to ensure that GPU works')
  shell_call(['docker', 'run', '--runtime=nvidia', '--rm', 'nvidia/cuda',
              'nvidia-smi'])
  eval_worker = EvaluationWorker(
      worker_id=args.worker_id,
      storage_client=eval_lib.CompetitionStorageClient(
          args.project_id, args.storage_bucket),
      datastore_client=eval_lib.CompetitionDatastoreClient(
          args.project_id, args.round_name),
      storage_bucket=args.storage_bucket,
      round_name=args.round_name,
      dataset_name=args.dataset_name,
      blacklisted_submissions=args.blacklisted_submissions,
      num_defense_shards=args.num_defense_shards)
  eval_worker.run_work()
<SYSTEM_TASK:> Creates a temporary copy of extracted submission. <END_TASK> <USER_TASK:> Description: def temp_copy_extracted_submission(self):
  """Creates a temporary copy of extracted submission.

  When executed, a submission is allowed to modify its own directory. So
  to ensure that the submission does not pass any data between runs, a new
  copy of the submission is made before each run. After a run the temporary
  copy of the submission is deleted.

  Returns:
    directory where temporary copy is located
  """
  tmp_copy_dir = os.path.join(self.submission_dir, 'tmp_copy')
  shell_call(['cp', '-R', os.path.join(self.extracted_submission_dir),
              tmp_copy_dir])
  return tmp_copy_dir
<SYSTEM_TASK:> Runs docker command without time limit. <END_TASK> <USER_TASK:> Description: def run_without_time_limit(self, cmd):
  """Runs docker command without time limit.

  Args:
    cmd: list with the command line arguments which are passed to docker
      binary

  Returns:
    how long it took to run submission in seconds

  Raises:
    WorkerError: if error occurred during execution of the submission
  """
  cmd = [DOCKER_BINARY, 'run', DOCKER_NVIDIA_RUNTIME] + cmd
  logging.info('Docker command: %s', ' '.join(cmd))
  start_time = time.time()
  retval = subprocess.call(cmd)
  elapsed_time_sec = int(time.time() - start_time)
  logging.info('Elapsed time of attack: %d', elapsed_time_sec)
  logging.info('Docker retval: %d', retval)
  if retval != 0:
    logging.warning('Docker returned non-zero retval: %d', retval)
    raise WorkerError('Docker returned non-zero retval ' + str(retval))
  return elapsed_time_sec
<SYSTEM_TASK:> Runs docker command and enforces time limit. <END_TASK> <USER_TASK:> Description: def run_with_time_limit(self, cmd, time_limit=SUBMISSION_TIME_LIMIT):
  """Runs docker command and enforces time limit.

  Args:
    cmd: list with the command line arguments which are passed to docker
      binary after run
    time_limit: time limit, in seconds. Negative value means no limit.

  Returns:
    how long it took to run submission in seconds

  Raises:
    WorkerError: if error occurred during execution of the submission
  """
  if time_limit < 0:
    return self.run_without_time_limit(cmd)
  container_name = str(uuid.uuid4())
  cmd = [DOCKER_BINARY, 'run', DOCKER_NVIDIA_RUNTIME,
         '--detach', '--name', container_name] + cmd
  logging.info('Docker command: %s', ' '.join(cmd))
  logging.info('Time limit %d seconds', time_limit)
  retval = subprocess.call(cmd)
  start_time = time.time()
  elapsed_time_sec = 0
  while is_docker_still_running(container_name):
    elapsed_time_sec = int(time.time() - start_time)
    if elapsed_time_sec < time_limit:
      time.sleep(1)
    else:
      kill_docker_container(container_name)
      logging.warning('Submission was killed because it ran out of time')
  logging.info('Elapsed time of submission: %d', elapsed_time_sec)
  logging.info('Docker retval: %d', retval)
  if retval != 0:
    logging.warning('Docker returned non-zero retval: %d', retval)
    raise WorkerError('Docker returned non-zero retval ' + str(retval))
  return elapsed_time_sec
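The polling loop above depends on two helpers, is_docker_still_running and kill_docker_container, which are defined elsewhere in the worker module. A hedged sketch of how such helpers can be built on the standard Docker CLI is below; these bodies are illustrative assumptions, not the module's actual implementations.

import subprocess

def is_docker_still_running(container_name):
  """Illustrative: true while `docker ps` still lists the container."""
  # `docker ps -q --filter name=...` prints the IDs of matching running
  # containers and prints nothing once the container has exited.
  output = subprocess.check_output(
      ['docker', 'ps', '-q', '--filter', 'name=' + container_name])
  return bool(output.strip())

def kill_docker_container(container_name):
  """Illustrative: force-stop the container by name."""
  subprocess.call(['docker', 'kill', container_name])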
<SYSTEM_TASK:> Runs defense inside Docker. <END_TASK> <USER_TASK:> Description: def run(self, input_dir, output_file_path):
  """Runs defense inside Docker.

  Args:
    input_dir: directory with input (adversarial images).
    output_file_path: path of the output file.

  Returns:
    how long it took to run submission in seconds
  """
  logging.info('Running defense %s', self.submission_id)
  tmp_run_dir = self.temp_copy_extracted_submission()
  output_dir = os.path.dirname(output_file_path)
  output_filename = os.path.basename(output_file_path)
  cmd = ['--network=none',
         '-m=24g',
         '--cpus=3.75',
         '-v', '{0}:/input_images:ro'.format(input_dir),
         '-v', '{0}:/output_data'.format(output_dir),
         '-v', '{0}:/code'.format(tmp_run_dir),
         '-w', '/code',
         self.container_name,
         './' + self.entry_point,
         '/input_images',
         '/output_data/' + output_filename]
  elapsed_time_sec = self.run_with_time_limit(cmd)
  sudo_remove_dirtree(tmp_run_dir)
  return elapsed_time_sec
<SYSTEM_TASK:> Read `dataset_meta` field from bucket <END_TASK> <USER_TASK:> Description: def read_dataset_metadata(self): """Read `dataset_meta` field from bucket"""
  if self.dataset_meta:
    return
  shell_call(['gsutil', 'cp',
              'gs://' + self.storage_client.bucket_name + '/'
              + 'dataset/' + self.dataset_name + '_dataset.csv',
              LOCAL_DATASET_METADATA_FILE])
  with open(LOCAL_DATASET_METADATA_FILE, 'r') as f:
    self.dataset_meta = eval_lib.DatasetMetadata(f)
<SYSTEM_TASK:> Initializes data necessary to execute attacks. <END_TASK> <USER_TASK:> Description: def fetch_attacks_data(self):
  """Initializes data necessary to execute attacks.

  This method could be called multiple times; only the first call does
  initialization, subsequent calls are a noop.
  """
  if self.attacks_data_initialized:
    return
  # init data from datastore
  self.submissions.init_from_datastore()
  self.dataset_batches.init_from_datastore()
  self.adv_batches.init_from_datastore()
  # copy dataset locally
  if not os.path.exists(LOCAL_DATASET_DIR):
    os.makedirs(LOCAL_DATASET_DIR)
  eval_lib.download_dataset(self.storage_client, self.dataset_batches,
                            LOCAL_DATASET_DIR,
                            os.path.join(LOCAL_DATASET_COPY,
                                         self.dataset_name, 'images'))
  # download dataset metadata
  self.read_dataset_metadata()
  # mark as initialized
  self.attacks_data_initialized = True
<SYSTEM_TASK:> Runs one attack work. <END_TASK> <USER_TASK:> Description: def run_attack_work(self, work_id):
  """Runs one attack work.

  Args:
    work_id: ID of the piece of work to run

  Returns:
    elapsed_time_sec, submission_id - elapsed time and id of the submission

  Raises:
    WorkerError: if error occurred during execution.
  """
  adv_batch_id = (
      self.attack_work.work[work_id]['output_adversarial_batch_id'])
  adv_batch = self.adv_batches[adv_batch_id]
  dataset_batch_id = adv_batch['dataset_batch_id']
  submission_id = adv_batch['submission_id']
  epsilon = self.dataset_batches[dataset_batch_id]['epsilon']
  logging.info('Attack work piece: '
               'dataset_batch_id="%s" submission_id="%s" '
               'epsilon=%d', dataset_batch_id, submission_id, epsilon)
  if submission_id in self.blacklisted_submissions:
    raise WorkerError('Blacklisted submission')
  # get attack
  attack = AttackSubmission(submission_id, self.submissions,
                            self.storage_bucket)
  attack.download()
  # prepare input
  input_dir = os.path.join(LOCAL_DATASET_DIR, dataset_batch_id)
  if attack.type == TYPE_TARGETED:
    # prepare file with target classes
    target_class_filename = os.path.join(input_dir, 'target_class.csv')
    self.dataset_meta.save_target_classes_for_batch(target_class_filename,
                                                    self.dataset_batches,
                                                    dataset_batch_id)
  # prepare output directory
  if os.path.exists(LOCAL_OUTPUT_DIR):
    sudo_remove_dirtree(LOCAL_OUTPUT_DIR)
  os.mkdir(LOCAL_OUTPUT_DIR)
  if os.path.exists(LOCAL_PROCESSED_OUTPUT_DIR):
    shutil.rmtree(LOCAL_PROCESSED_OUTPUT_DIR)
  os.mkdir(LOCAL_PROCESSED_OUTPUT_DIR)
  if os.path.exists(LOCAL_ZIPPED_OUTPUT_DIR):
    shutil.rmtree(LOCAL_ZIPPED_OUTPUT_DIR)
  os.mkdir(LOCAL_ZIPPED_OUTPUT_DIR)
  # run attack
  elapsed_time_sec = attack.run(input_dir, LOCAL_OUTPUT_DIR, epsilon)
  if attack.type == TYPE_TARGETED:
    # remove target class file
    os.remove(target_class_filename)
  # enforce epsilon and compute hashes
  image_hashes = eval_lib.enforce_epsilon_and_compute_hash(
      input_dir, LOCAL_OUTPUT_DIR, LOCAL_PROCESSED_OUTPUT_DIR, epsilon)
  if not image_hashes:
    logging.warning('No images saved by the attack.')
    return elapsed_time_sec, submission_id
  # write images back to datastore
  # rename images and add information to adversarial batch
  for clean_image_id, hash_val in iteritems(image_hashes):
    # we will use concatenation of batch_id and image_id
    # as adversarial image id and as a filename of adversarial images
    adv_img_id = adv_batch_id + '_' + clean_image_id
    # rename the image
    os.rename(
        os.path.join(LOCAL_PROCESSED_OUTPUT_DIR, clean_image_id + '.png'),
        os.path.join(LOCAL_PROCESSED_OUTPUT_DIR, adv_img_id + '.png'))
    # populate values which will be written to datastore
    image_path = '{0}/adversarial_images/{1}/{1}.zip/{2}.png'.format(
        self.round_name, adv_batch_id, adv_img_id)
    # u'' + foo is a Python 2/3 compatible way of casting foo to unicode
    adv_batch['images'][adv_img_id] = {
        'clean_image_id': u'' + str(clean_image_id),
        'image_path': u'' + str(image_path),
        'image_hash': u'' + str(hash_val),
    }
  # archive all images and copy to storage
  zipped_images_filename = os.path.join(LOCAL_ZIPPED_OUTPUT_DIR,
                                        adv_batch_id + '.zip')
  try:
    logging.debug('Compressing adversarial images to %s',
                  zipped_images_filename)
    shell_call([
        'zip', '-j', '-r', zipped_images_filename,
        LOCAL_PROCESSED_OUTPUT_DIR])
  except subprocess.CalledProcessError as e:
    raise WorkerError('Can\'t make archive from adversarial images', e)
  # upload archive to storage
  dst_filename = '{0}/adversarial_images/{1}/{1}.zip'.format(
      self.round_name, adv_batch_id)
  logging.debug(
      'Copying archive with adversarial images to %s', dst_filename)
  self.storage_client.new_blob(dst_filename).upload_from_filename(
      zipped_images_filename)
  # writing adv batch to datastore
  logging.debug('Writing adversarial batch to datastore')
  self.adv_batches.write_single_batch_images_to_datastore(adv_batch_id)
  return elapsed_time_sec, submission_id
<SYSTEM_TASK:> Method which evaluates all attack work. <END_TASK> <USER_TASK:> Description: def run_attacks(self):
  """Method which evaluates all attack work.

  In a loop this method queries not yet completed attack work, picks one
  piece of attack work and runs it.
  """
  logging.info('******** Start evaluation of attacks ********')
  prev_submission_id = None
  while True:
    # wait until work is available
    self.attack_work.read_all_from_datastore()
    if not self.attack_work.work:
      logging.info('Work is not populated, waiting...')
      time.sleep(SLEEP_TIME)
      continue
    if self.attack_work.is_all_work_competed():
      logging.info('All attack work completed.')
      break
    # download all attacks data and dataset
    self.fetch_attacks_data()
    # pick piece of work
    work_id = self.attack_work.try_pick_piece_of_work(
        self.worker_id, submission_id=prev_submission_id)
    if not work_id:
      logging.info('Failed to pick work, waiting...')
      time.sleep(SLEEP_TIME_SHORT)
      continue
    logging.info('Selected work_id: %s', work_id)
    # execute work
    try:
      elapsed_time_sec, prev_submission_id = self.run_attack_work(work_id)
      logging.info('Work %s is done', work_id)
      # indicate that work is completed
      is_work_update = self.attack_work.update_work_as_completed(
          self.worker_id, work_id,
          other_values={'elapsed_time': elapsed_time_sec})
    except WorkerError as e:
      logging.info('Failed to run work:\n%s', str(e))
      is_work_update = self.attack_work.update_work_as_completed(
          self.worker_id, work_id, error=str(e))
    if not is_work_update:
      logging.warning('Can\'t update work "%s" as completed by worker %d',
                      work_id, self.worker_id)
  logging.info('******** Finished evaluation of attacks ********')
<SYSTEM_TASK:> Lazy initialization of data necessary to execute defenses. <END_TASK> <USER_TASK:> Description: def fetch_defense_data(self): """Lazy initialization of data necessary to execute defenses."""
  if self.defenses_data_initialized:
    return
  logging.info('Fetching defense data from datastore')
  # init data from datastore
  self.submissions.init_from_datastore()
  self.dataset_batches.init_from_datastore()
  self.adv_batches.init_from_datastore()
  # read dataset metadata
  self.read_dataset_metadata()
  # mark as initialized
  self.defenses_data_initialized = True
<SYSTEM_TASK:> Runs one defense work. <END_TASK> <USER_TASK:> Description: def run_defense_work(self, work_id):
  """Runs one defense work.

  Args:
    work_id: ID of the piece of work to run

  Returns:
    elapsed_time_sec, submission_id, batch_result - elapsed time, id of the
      submission and classification statistics for the batch

  Raises:
    WorkerError: if error occurred during execution.
  """
  class_batch_id = (
      self.defense_work.work[work_id]['output_classification_batch_id'])
  class_batch = self.class_batches.read_batch_from_datastore(class_batch_id)
  adversarial_batch_id = class_batch['adversarial_batch_id']
  submission_id = class_batch['submission_id']
  cloud_result_path = class_batch['result_path']
  logging.info('Defense work piece: '
               'adversarial_batch_id="%s" submission_id="%s"',
               adversarial_batch_id, submission_id)
  if submission_id in self.blacklisted_submissions:
    raise WorkerError('Blacklisted submission')
  # get defense
  defense = DefenseSubmission(submission_id, self.submissions,
                              self.storage_bucket)
  defense.download()
  # prepare input - copy adversarial batch locally
  input_dir = os.path.join(LOCAL_INPUT_DIR, adversarial_batch_id)
  if os.path.exists(input_dir):
    sudo_remove_dirtree(input_dir)
  os.makedirs(input_dir)
  try:
    shell_call([
        'gsutil', '-m', 'cp',
        # typical location of adv batch:
        # testing-round/adversarial_images/ADVBATCH000/
        os.path.join('gs://', self.storage_bucket, self.round_name,
                     'adversarial_images', adversarial_batch_id, '*'),
        input_dir
    ])
    adv_images_files = os.listdir(input_dir)
    if (len(adv_images_files) == 1) and adv_images_files[0].endswith('.zip'):
      logging.info('Adversarial batch is in zip archive %s',
                   adv_images_files[0])
      shell_call([
          'unzip', os.path.join(input_dir, adv_images_files[0]),
          '-d', input_dir
      ])
      os.remove(os.path.join(input_dir, adv_images_files[0]))
      adv_images_files = os.listdir(input_dir)
    logging.info('%d adversarial images copied', len(adv_images_files))
  except (subprocess.CalledProcessError, IOError) as e:
    raise WorkerError('Can\'t copy adversarial batch locally', e)
  # prepare output directory
  if os.path.exists(LOCAL_OUTPUT_DIR):
    sudo_remove_dirtree(LOCAL_OUTPUT_DIR)
  os.mkdir(LOCAL_OUTPUT_DIR)
  output_filename = os.path.join(LOCAL_OUTPUT_DIR, 'result.csv')
  # run defense
  elapsed_time_sec = defense.run(input_dir, output_filename)
  # evaluate defense result
  batch_result = eval_lib.analyze_one_classification_result(
      storage_client=None,
      file_path=output_filename,
      adv_batch=self.adv_batches.data[adversarial_batch_id],
      dataset_batches=self.dataset_batches,
      dataset_meta=self.dataset_meta)
  # copy result of the defense into storage
  try:
    shell_call([
        'gsutil', 'cp', output_filename,
        os.path.join('gs://', self.storage_bucket, cloud_result_path)
    ])
  except subprocess.CalledProcessError as e:
    raise WorkerError('Can\'t copy result to Cloud Storage', e)
  return elapsed_time_sec, submission_id, batch_result
<SYSTEM_TASK:> Method which evaluates all defense work. <END_TASK> <USER_TASK:> Description: def run_defenses(self):
  """Method which evaluates all defense work.

  In a loop this method queries not yet completed defense work, picks one
  piece of defense work and runs it.
  """
  logging.info('******** Start evaluation of defenses ********')
  prev_submission_id = None
  need_reload_work = True
  while True:
    # wait until work is available
    if need_reload_work:
      if self.num_defense_shards:
        shard_with_work = self.defense_work.read_undone_from_datastore(
            shard_id=(self.worker_id % self.num_defense_shards),
            num_shards=self.num_defense_shards)
      else:
        shard_with_work = self.defense_work.read_undone_from_datastore()
      logging.info('Loaded %d records of undone work from shard %s',
                   len(self.defense_work), str(shard_with_work))
    if not self.defense_work.work:
      logging.info('Work is not populated, waiting...')
      time.sleep(SLEEP_TIME)
      continue
    if self.defense_work.is_all_work_competed():
      logging.info('All defense work completed.')
      break
    # download all defense data and dataset
    self.fetch_defense_data()
    need_reload_work = False
    # pick piece of work
    work_id = self.defense_work.try_pick_piece_of_work(
        self.worker_id, submission_id=prev_submission_id)
    if not work_id:
      need_reload_work = True
      logging.info('Failed to pick work, waiting...')
      time.sleep(SLEEP_TIME_SHORT)
      continue
    logging.info('Selected work_id: %s', work_id)
    # execute work
    try:
      elapsed_time_sec, prev_submission_id, batch_result = (
          self.run_defense_work(work_id))
      logging.info('Work %s is done', work_id)
      # indicate that work is completed
      is_work_update = self.defense_work.update_work_as_completed(
          self.worker_id, work_id,
          other_values={'elapsed_time': elapsed_time_sec,
                        'stat_correct': batch_result[0],
                        'stat_error': batch_result[1],
                        'stat_target_class': batch_result[2],
                        'stat_num_images': batch_result[3]})
    except WorkerError as e:
      logging.info('Failed to run work:\n%s', str(e))
      if str(e).startswith('Docker returned non-zero retval'):
        logging.info('Running nvidia-docker to ensure that GPU works')
        shell_call(['nvidia-docker', 'run', '--rm', 'nvidia/cuda',
                    'nvidia-smi'])
      is_work_update = self.defense_work.update_work_as_completed(
          self.worker_id, work_id, error=str(e))
    if not is_work_update:
      logging.warning('Can\'t update work "%s" as completed by worker %d',
                      work_id, self.worker_id)
      need_reload_work = True
  logging.info('******** Finished evaluation of defenses ********')
<SYSTEM_TASK:> Run attacks and defenses <END_TASK> <USER_TASK:> Description: def run_work(self): """Run attacks and defenses"""
  if os.path.exists(LOCAL_EVAL_ROOT_DIR):
    sudo_remove_dirtree(LOCAL_EVAL_ROOT_DIR)
  self.run_attacks()
  self.run_defenses()
<SYSTEM_TASK:> Construct the graph required to run the attack through generate_np. <END_TASK> <USER_TASK:> Description: def construct_graph(self, fixed, feedable, x_val, hash_key):
  """
  Construct the graph required to run the attack through generate_np.

  :param fixed: Structural elements that require defining a new graph.
  :param feedable: Arguments that can be fed to the same graph when
                   they take different values.
  :param x_val: symbolic adversarial example
  :param hash_key: the key used to store this graph in our cache
  """
  # try our very best to create a TF placeholder for each of the
  # feedable keyword arguments, and check the types are one of
  # the allowed types
  class_name = str(self.__class__).split(".")[-1][:-2]
  _logger.info("Constructing new graph for attack " + class_name)

  # remove the None arguments, they are just left blank
  for k in list(feedable.keys()):
    if feedable[k] is None:
      del feedable[k]

  # process all of the rest and create placeholders for them
  new_kwargs = dict(x for x in fixed.items())
  for name, value in feedable.items():
    given_type = value.dtype
    if isinstance(value, np.ndarray):
      if value.ndim == 0:
        # This is pretty clearly not a batch of data
        new_kwargs[name] = tf.placeholder(given_type, shape=[], name=name)
      else:
        # Assume that this is a batch of data, make the first axis variable
        # in size
        new_shape = [None] + list(value.shape[1:])
        new_kwargs[name] = tf.placeholder(given_type, new_shape, name=name)
    elif isinstance(value, utils.known_number_types):
      new_kwargs[name] = tf.placeholder(given_type, shape=[], name=name)
    else:
      raise ValueError("Could not identify type of argument " +
                       name + ": " + str(value))

  # x is a special placeholder we always want to have
  x_shape = [None] + list(x_val.shape)[1:]
  x = tf.placeholder(self.tf_dtype, shape=x_shape)

  # now we generate the graph that we want
  x_adv = self.generate(x, **new_kwargs)

  self.graphs[hash_key] = (x, new_kwargs, x_adv)

  if len(self.graphs) >= 10:
    warnings.warn("Calling generate_np() with multiple different "
                  "structural parameters is inefficient and should"
                  " be avoided. Calling generate() is preferred.")
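A usage sketch of the caching behavior, assuming CleverHans with the TF 1.x API (the toy logits_fn is illustrative and has no trainable parameters, so nothing needs initializing): the first generate_np call builds and caches a graph via construct_graph; a second call that changes only feedable arguments such as eps reuses the cached graph and just feeds new values.

import numpy as np
import tensorflow as tf
from cleverhans.attacks import FastGradientMethod
from cleverhans.model import CallableModelWrapper

def logits_fn(x):
  # Toy parameter-free "model": the first 10 input features act as logits.
  return x[:, :10]

sess = tf.Session()
model = CallableModelWrapper(logits_fn, 'logits')
fgsm = FastGradientMethod(model, sess=sess)

x_val = np.random.rand(4, 28 * 28).astype(np.float32)
adv_a = fgsm.generate_np(x_val, eps=0.3, clip_min=0., clip_max=1.)  # builds and caches the graph
adv_b = fgsm.generate_np(x_val, eps=0.1, clip_min=0., clip_max=1.)  # reuses the cached graph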