text_prompt (string, 100–17.7k chars) | code_prompt (string, 7–9.86k chars)
---|---|
<SYSTEM_TASK:>
Return a fully-qualified signal string.
<END_TASK>
<USER_TASK:>
Description:
def signal_path(cls, project, signal):
"""Return a fully-qualified signal string.""" |
return google.api_core.path_template.expand(
"projects/{project}/signals/{signal}", project=project, signal=signal
) |
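A minimal usage sketch: because this is a GAPIC-style classmethod it can be called without an instantiated client. Assuming the surrounding class is irm_v1alpha2.IncidentServiceClient:
from google.cloud import irm_v1alpha2

# Expand the path template directly from the class.
name = irm_v1alpha2.IncidentServiceClient.signal_path("my-project", "my-signal")
# name == "projects/my-project/signals/my-signal"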
<SYSTEM_TASK:>
Escalates an incident.
<END_TASK>
<USER_TASK:>
Description:
def escalate_incident(
self,
incident,
update_mask=None,
subscriptions=None,
tags=None,
roles=None,
artifacts=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Escalates an incident.
Example:
>>> from google.cloud import irm_v1alpha2
>>>
>>> client = irm_v1alpha2.IncidentServiceClient()
>>>
>>> # TODO: Initialize `incident`:
>>> incident = {}
>>>
>>> response = client.escalate_incident(incident)
Args:
incident (Union[dict, ~google.cloud.irm_v1alpha2.types.Incident]): The incident to escalate with the new values.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.irm_v1alpha2.types.Incident`
update_mask (Union[dict, ~google.cloud.irm_v1alpha2.types.FieldMask]): List of fields that should be updated.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.irm_v1alpha2.types.FieldMask`
subscriptions (list[Union[dict, ~google.cloud.irm_v1alpha2.types.Subscription]]): Subscriptions to add or update. Existing subscriptions with the same
channel and address as a subscription in the list will be updated.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.irm_v1alpha2.types.Subscription`
tags (list[Union[dict, ~google.cloud.irm_v1alpha2.types.Tag]]): Tags to add. Tags identical to existing tags will be ignored.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.irm_v1alpha2.types.Tag`
roles (list[Union[dict, ~google.cloud.irm_v1alpha2.types.IncidentRoleAssignment]]): Roles to add or update. Existing roles with the same type (and title,
for TYPE_OTHER roles) will be updated.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.irm_v1alpha2.types.IncidentRoleAssignment`
artifacts (list[Union[dict, ~google.cloud.irm_v1alpha2.types.Artifact]]): Artifacts to add. All artifacts are added without checking for duplicates.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.irm_v1alpha2.types.Artifact`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.irm_v1alpha2.types.EscalateIncidentResponse` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "escalate_incident" not in self._inner_api_calls:
self._inner_api_calls[
"escalate_incident"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.escalate_incident,
default_retry=self._method_configs["EscalateIncident"].retry,
default_timeout=self._method_configs["EscalateIncident"].timeout,
client_info=self._client_info,
)
request = incidents_service_pb2.EscalateIncidentRequest(
incident=incident,
update_mask=update_mask,
subscriptions=subscriptions,
tags=tags,
roles=roles,
artifacts=artifacts,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("incident.name", incident.name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["escalate_incident"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Sends a summary of the shift for oncall handoff.
<END_TASK>
<USER_TASK:>
Description:
def send_shift_handoff(
self,
parent,
recipients,
subject,
cc=None,
notes_content_type=None,
notes_content=None,
incidents=None,
preview_only=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Sends a summary of the shift for oncall handoff.
Example:
>>> from google.cloud import irm_v1alpha2
>>>
>>> client = irm_v1alpha2.IncidentServiceClient()
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # TODO: Initialize `recipients`:
>>> recipients = []
>>>
>>> # TODO: Initialize `subject`:
>>> subject = ''
>>>
>>> response = client.send_shift_handoff(parent, recipients, subject)
Args:
parent (str): The resource name of the Stackdriver project that the handoff is being
sent from, for example, ``projects/{project_id}``
recipients (list[str]): Email addresses of the recipients of the handoff, for example,
"[email protected]". Must contain at least one entry.
subject (str): The subject of the email. Required.
cc (list[str]): Email addresses that should be CC'd on the handoff. Optional.
notes_content_type (str): Content type string, for example, 'text/plain' or 'text/html'.
notes_content (str): Additional notes to be included in the handoff. Optional.
incidents (list[Union[dict, ~google.cloud.irm_v1alpha2.types.Incident]]): The set of incidents that should be included in the handoff. Optional.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.irm_v1alpha2.types.Incident`
preview_only (bool): If set to true a ShiftHandoffResponse will be returned but the handoff
will not actually be sent.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.irm_v1alpha2.types.SendShiftHandoffResponse` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "send_shift_handoff" not in self._inner_api_calls:
self._inner_api_calls[
"send_shift_handoff"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.send_shift_handoff,
default_retry=self._method_configs["SendShiftHandoff"].retry,
default_timeout=self._method_configs["SendShiftHandoff"].timeout,
client_info=self._client_info,
)
request = incidents_service_pb2.SendShiftHandoffRequest(
parent=parent,
recipients=recipients,
subject=subject,
cc=cc,
notes_content_type=notes_content_type,
notes_content=notes_content,
incidents=incidents,
preview_only=preview_only,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["send_shift_handoff"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Access to bigtable.admin role members
<END_TASK>
<USER_TASK:>
Description:
def bigtable_admins(self):
"""Access to bigtable.admin role memebers
For example:
.. literalinclude:: snippets.py
:start-after: [START bigtable_admins_policy]
:end-before: [END bigtable_admins_policy]
""" |
result = set()
for member in self._bindings.get(BIGTABLE_ADMIN_ROLE, ()):
result.add(member)
return frozenset(result) |
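A hedged usage sketch of reading these members off an instance's IAM policy; that `Instance.get_iam_policy()` returns the Policy object defining this property is an assumption about the surrounding library:
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("my-instance")

policy = instance.get_iam_policy()  # assumed to return the Policy defining this property
for member in policy.bigtable_admins:
    print(member)  # e.g. "user:alice@example.com"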
<SYSTEM_TASK:>
Validate DDL Statements used to define database schema.
<END_TASK>
<USER_TASK:>
Description:
def _check_ddl_statements(value):
"""Validate DDL Statements used to define database schema.
See
https://cloud.google.com/spanner/docs/data-definition-language
:type value: list of string
:param value: DDL statements, excluding the 'CREATE DATABASE' statement
:rtype: tuple
:returns: tuple of validated DDL statement strings.
:raises ValueError:
if elements in ``value`` are not strings, or if ``value`` contains
a ``CREATE DATABASE`` statement.
""" |
if not all(isinstance(line, six.string_types) for line in value):
raise ValueError("Pass a list of strings")
if any("create database" in line.lower() for line in value):
raise ValueError("Do not pass a 'CREATE DATABASE' statement")
return tuple(value) |
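A quick illustration of the contract (the helper is module-private; shown only to clarify its behavior):
ddl = ["CREATE TABLE users (id INT64) PRIMARY KEY (id)"]
assert _check_ddl_statements(ddl) == tuple(ddl)

try:
    _check_ddl_statements(["CREATE DATABASE mydb"])
except ValueError as exc:
    print(exc)  # Do not pass a 'CREATE DATABASE' statement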
<SYSTEM_TASK:>
Creates an instance of this class from a protobuf.
<END_TASK>
<USER_TASK:>
Description:
def from_pb(cls, database_pb, instance, pool=None):
"""Creates an instance of this class from a protobuf.
:type database_pb:
:class:`google.spanner.v2.spanner_instance_admin_pb2.Instance`
:param database_pb: An instance protobuf object.
:type instance: :class:`~google.cloud.spanner_v1.instance.Instance`
:param instance: The instance that owns the database.
:type pool: concrete subclass of
:class:`~google.cloud.spanner_v1.pool.AbstractSessionPool`.
:param pool: (Optional) session pool to be used by database.
:rtype: :class:`Database`
:returns: The database parsed from the protobuf response.
:raises ValueError:
if the instance name does not match the expected format
or if the parsed project ID does not match the project ID
on the instance's client, or if the parsed instance ID does
not match the instance's ID.
""" |
match = _DATABASE_NAME_RE.match(database_pb.name)
if match is None:
raise ValueError(
"Database protobuf name was not in the " "expected format.",
database_pb.name,
)
if match.group("project") != instance._client.project:
raise ValueError(
"Project ID on database does not match the "
"project ID on the instance's client"
)
instance_id = match.group("instance_id")
if instance_id != instance.instance_id:
raise ValueError(
"Instance ID on database does not match the "
"Instance ID on the instance"
)
database_id = match.group("database_id")
return cls(database_id, instance, pool=pool) |
<SYSTEM_TASK:>
Create this database within its instance
<END_TASK>
<USER_TASK:>
Description:
def create(self):
"""Create this database within its instance
Includes any configured schema assigned to :attr:`ddl_statements`.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.CreateDatabase
:rtype: :class:`~google.api_core.operation.Operation`
:returns: a future used to poll the status of the create request
:raises Conflict: if the database already exists
:raises NotFound: if the instance owning the database does not exist
""" |
api = self._instance._client.database_admin_api
metadata = _metadata_with_prefix(self.name)
db_name = self.database_id
if "-" in db_name:
db_name = "`%s`" % (db_name,)
future = api.create_database(
parent=self._instance.name,
create_statement="CREATE DATABASE %s" % (db_name,),
extra_statements=list(self._ddl_statements),
metadata=metadata,
)
return future |
<SYSTEM_TASK:>
Execute a partitionable DML statement.
<END_TASK>
<USER_TASK:>
Description:
def execute_partitioned_dml(self, dml, params=None, param_types=None):
"""Execute a partitionable DML statement.
:type dml: str
:param dml: DML statement
:type params: dict, {str -> column value}
:param params: values for parameter replacement. Keys must match
the names used in ``dml``.
:type param_types: dict[str -> Union[dict, .types.Type]]
:param param_types:
(Optional) maps explicit types for one or more param values;
required if parameters are passed.
:rtype: int
:returns: Count of rows affected by the DML statement.
""" |
if params is not None:
if param_types is None:
raise ValueError("Specify 'param_types' when passing 'params'.")
params_pb = Struct(
fields={key: _make_value_pb(value) for key, value in params.items()}
)
else:
params_pb = None
api = self.spanner_api
txn_options = TransactionOptions(
partitioned_dml=TransactionOptions.PartitionedDml()
)
metadata = _metadata_with_prefix(self.name)
with SessionCheckout(self._pool) as session:
txn = api.begin_transaction(session.name, txn_options, metadata=metadata)
txn_selector = TransactionSelector(id=txn.id)
restart = functools.partial(
api.execute_streaming_sql,
session.name,
dml,
transaction=txn_selector,
params=params_pb,
param_types=param_types,
metadata=metadata,
)
iterator = _restart_on_unavailable(restart)
result_set = StreamedResultSet(iterator)
list(result_set) # consume all partials
return result_set.stats.row_count_lower_bound |
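A hedged usage sketch; the table, column, and the `spanner.param_types` import path are illustrative assumptions:
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

# Run a partitioned DML statement with a bound parameter.
row_count = database.execute_partitioned_dml(
    "UPDATE users SET status = @status WHERE status = 'pending'",
    params={"status": "archived"},
    param_types={"status": spanner.param_types.STRING},
)
print("rows affected (lower bound):", row_count)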
<SYSTEM_TASK:>
Perform a unit of work in a transaction, retrying on abort.
<END_TASK>
<USER_TASK:>
Description:
def run_in_transaction(self, func, *args, **kw):
"""Perform a unit of work in a transaction, retrying on abort.
:type func: callable
:param func: takes a required positional argument, the transaction,
and additional positional / keyword arguments as supplied
by the caller.
:type args: tuple
:param args: additional positional arguments to be passed to ``func``.
:type kw: dict
:param kw: optional keyword arguments to be passed to ``func``.
If passed, "timeout_secs" will be removed and used to
override the default timeout.
:rtype: :class:`datetime.datetime`
:returns: timestamp of committed transaction
""" |
# Sanity check: Is there a transaction already running?
# If there is, then raise a red flag. Otherwise, mark that this one
# is running.
if getattr(self._local, "transaction_running", False):
raise RuntimeError("Spanner does not support nested transactions.")
self._local.transaction_running = True
# Check out a session and run the function in a transaction; once
# done, flip the sanity check bit back.
try:
with SessionCheckout(self._pool) as session:
return session.run_in_transaction(func, *args, **kw)
finally:
self._local.transaction_running = False |
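A hedged sketch of a unit of work passed to run_in_transaction; the DML statement and the use of `Transaction.execute_update` are illustrative:
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

def do_work(transaction):
    # Everything issued on `transaction` is retried as a unit if the
    # transaction aborts.
    transaction.execute_update(
        "UPDATE users SET status = 'active' WHERE status = 'pending'"
    )

commit_ts = database.run_in_transaction(do_work)
print("committed at", commit_ts)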
<SYSTEM_TASK:>
Reconstruct an instance from a mapping.
<END_TASK>
<USER_TASK:>
Description:
def from_dict(cls, database, mapping):
"""Reconstruct an instance from a mapping.
:type database: :class:`~google.cloud.spanner.database.Database`
:param database: database to use
:type mapping: mapping
:param mapping: serialized state of the instance
:rtype: :class:`BatchSnapshot`
""" |
instance = cls(database)
session = instance._session = database.session()
session._session_id = mapping["session_id"]
snapshot = instance._snapshot = session.snapshot()
snapshot._transaction_id = mapping["transaction_id"]
return instance |
<SYSTEM_TASK:>
Return state as a dictionary.
<END_TASK>
<USER_TASK:>
Description:
def to_dict(self):
"""Return state as a dictionary.
Result can be used to serialize the instance and reconstitute
it later using :meth:`from_dict`.
:rtype: dict
""" |
session = self._get_session()
snapshot = self._get_snapshot()
return {
"session_id": session._session_id,
"transaction_id": snapshot._transaction_id,
} |
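A hedged sketch of the serialize/reconstitute round trip (for example, handing work to another process); `Database.batch_snapshot()` is assumed to be the usual way to obtain the instance being serialized:
from google.cloud.spanner_v1.database import BatchSnapshot

snapshot = database.batch_snapshot()   # `database` is a spanner Database, as in the earlier sketches
state = snapshot.to_dict()             # {"session_id": ..., "transaction_id": ...}

# ...later, possibly in another process, with an equivalent `database` object:
restored = BatchSnapshot.from_dict(database, state)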
<SYSTEM_TASK:>
Create session as needed.
<END_TASK>
<USER_TASK:>
Description:
def _get_session(self):
"""Create session as needed.
.. note::
Caller is responsible for cleaning up the session after
all partitions have been processed.
""" |
if self._session is None:
session = self._session = self._database.session()
session.create()
return self._session |
<SYSTEM_TASK:>
Create snapshot if needed.
<END_TASK>
<USER_TASK:>
Description:
def _get_snapshot(self):
"""Create snapshot if needed.""" |
if self._snapshot is None:
self._snapshot = self._get_session().snapshot(
read_timestamp=self._read_timestamp,
exact_staleness=self._exact_staleness,
multi_use=True,
)
self._snapshot.begin()
return self._snapshot |
<SYSTEM_TASK:>
Start a partitioned batch read operation.
<END_TASK>
<USER_TASK:>
Description:
def generate_read_batches(
self,
table,
columns,
keyset,
index="",
partition_size_bytes=None,
max_partitions=None,
):
"""Start a partitioned batch read operation.
Uses the ``PartitionRead`` API request to initiate the partitioned
read. Returns a list of batch information needed to perform the
actual reads.
:type table: str
:param table: name of the table from which to fetch data
:type columns: list of str
:param columns: names of columns to be retrieved
:type keyset: :class:`~google.cloud.spanner_v1.keyset.KeySet`
:param keyset: keys / ranges identifying rows to be retrieved
:type index: str
:param index: (Optional) name of index to use, rather than the
table's primary key
:type partition_size_bytes: int
:param partition_size_bytes:
(Optional) desired size for each partition generated. The service
uses this as a hint, the actual partition size may differ.
:type max_partitions: int
:param max_partitions:
(Optional) desired maximum number of partitions generated. The
service uses this as a hint, the actual number of partitions may
differ.
:rtype: iterable of dict
:returns:
mappings of information used to perform actual partitioned reads via
:meth:`process_read_batch`.
""" |
partitions = self._get_snapshot().partition_read(
table=table,
columns=columns,
keyset=keyset,
index=index,
partition_size_bytes=partition_size_bytes,
max_partitions=max_partitions,
)
read_info = {
"table": table,
"columns": columns,
"keyset": keyset._to_dict(),
"index": index,
}
for partition in partitions:
yield {"partition": partition, "read": read_info.copy()} |
<SYSTEM_TASK:>
Process a single, partitioned read.
<END_TASK>
<USER_TASK:>
Description:
def process_read_batch(self, batch):
"""Process a single, partitioned read.
:type batch: mapping
:param batch:
one of the mappings returned from an earlier call to
:meth:`generate_read_batches`.
:rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet`
:returns: a result set instance which can be used to consume rows.
""" |
kwargs = copy.deepcopy(batch["read"])
keyset_dict = kwargs.pop("keyset")
kwargs["keyset"] = KeySet._from_dict(keyset_dict)
return self._get_snapshot().read(partition=batch["partition"], **kwargs) |
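A hedged end-to-end sketch combining generate_read_batches and process_read_batch; in practice the batches would usually be fanned out to separate workers:
from google.cloud.spanner_v1.keyset import KeySet

snapshot = database.batch_snapshot()   # `database` is a spanner Database
keyset = KeySet(all_=True)

for batch in snapshot.generate_read_batches(
    table="users", columns=("id", "email"), keyset=keyset
):
    for row in snapshot.process_read_batch(batch):
        print(row)

snapshot.close()  # clean up the session created for the partitions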
<SYSTEM_TASK:>
Start a partitioned query operation.
<END_TASK>
<USER_TASK:>
Description:
def generate_query_batches(
self,
sql,
params=None,
param_types=None,
partition_size_bytes=None,
max_partitions=None,
):
"""Start a partitioned query operation.
Uses the ``PartitionQuery`` API request to start a partitioned
query operation. Returns a list of batch information needed to
perform the actual queries.
:type sql: str
:param sql: SQL query statement
:type params: dict, {str -> column value}
:param params: values for parameter replacement. Keys must match
the names used in ``sql``.
:type param_types: dict[str -> Union[dict, .types.Type]]
:param param_types:
(Optional) maps explicit types for one or more param values;
required if parameters are passed.
:type partition_size_bytes: int
:param partition_size_bytes:
(Optional) desired size for each partition generated. The service
uses this as a hint, the actual partition size may differ.
:type max_partitions: int
:param max_partitions:
(Optional) desired maximum number of partitions generated. The
service uses this as a hint, the actual number of partitions may
differ.
:rtype: iterable of dict
:returns:
mappings of information used to perform actual partitioned queries via
:meth:`process_query_batch`.
""" |
partitions = self._get_snapshot().partition_query(
sql=sql,
params=params,
param_types=param_types,
partition_size_bytes=partition_size_bytes,
max_partitions=max_partitions,
)
query_info = {"sql": sql}
if params:
query_info["params"] = params
query_info["param_types"] = param_types
for partition in partitions:
yield {"partition": partition, "query": query_info} |
<SYSTEM_TASK:>
Process a single, partitioned query or read.
<END_TASK>
<USER_TASK:>
Description:
def process(self, batch):
"""Process a single, partitioned query or read.
:type batch: mapping
:param batch:
one of the mappings returned from an earlier call to
:meth:`generate_query_batches`.
:rtype: :class:`~google.cloud.spanner_v1.streamed.StreamedResultSet`
:returns: a result set instance which can be used to consume rows.
:raises ValueError: if batch does not contain either 'read' or 'query'
""" |
if "query" in batch:
return self.process_query_batch(batch)
if "read" in batch:
return self.process_read_batch(batch)
raise ValueError("Invalid batch") |
<SYSTEM_TASK:>
Return a fully-qualified location string.
<END_TASK>
<USER_TASK:>
Description:
def location_path(cls, project, location):
"""Return a fully-qualified location string.""" |
return google.api_core.path_template.expand(
"projects/{project}/locations/{location}",
project=project,
location=location,
) |
<SYSTEM_TASK:>
Return a fully-qualified model string.
<END_TASK>
<USER_TASK:>
Description:
def model_path(cls, project, location, model):
"""Return a fully-qualified model string.""" |
return google.api_core.path_template.expand(
"projects/{project}/locations/{location}/models/{model}",
project=project,
location=location,
model=model,
) |
<SYSTEM_TASK:>
Return a fully-qualified column_spec string.
<END_TASK>
<USER_TASK:>
Description:
def column_spec_path(cls, project, location, dataset, table_spec, column_spec):
"""Return a fully-qualified column_spec string.""" |
return google.api_core.path_template.expand(
"projects/{project}/locations/{location}/datasets/{dataset}/tableSpecs/{table_spec}/columnSpecs/{column_spec}",
project=project,
location=location,
dataset=dataset,
table_spec=table_spec,
column_spec=column_spec,
) |
<SYSTEM_TASK:>
Creates a dataset.
<END_TASK>
<USER_TASK:>
Description:
def create_dataset(
self,
parent,
dataset,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates a dataset.
Example:
>>> from google.cloud import automl_v1beta1
>>>
>>> client = automl_v1beta1.AutoMlClient()
>>>
>>> parent = client.location_path('[PROJECT]', '[LOCATION]')
>>>
>>> # TODO: Initialize `dataset`:
>>> dataset = {}
>>>
>>> response = client.create_dataset(parent, dataset)
Args:
parent (str): The resource name of the project to create the dataset for.
dataset (Union[dict, ~google.cloud.automl_v1beta1.types.Dataset]): The dataset to create.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.automl_v1beta1.types.Dataset`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.automl_v1beta1.types.Dataset` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "create_dataset" not in self._inner_api_calls:
self._inner_api_calls[
"create_dataset"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_dataset,
default_retry=self._method_configs["CreateDataset"].retry,
default_timeout=self._method_configs["CreateDataset"].timeout,
client_info=self._client_info,
)
request = service_pb2.CreateDatasetRequest(parent=parent, dataset=dataset)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_dataset"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Deletes a dataset and all of its contents. Returns empty response in the
<END_TASK>
<USER_TASK:>
Description:
def delete_dataset(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Deletes a dataset and all of its contents. Returns empty response in the
``response`` field when it completes, and ``delete_details`` in the
``metadata`` field.
Example:
>>> from google.cloud import automl_v1beta1
>>>
>>> client = automl_v1beta1.AutoMlClient()
>>>
>>> name = client.dataset_path('[PROJECT]', '[LOCATION]', '[DATASET]')
>>>
>>> response = client.delete_dataset(name)
>>>
>>> def callback(operation_future):
... # Handle result.
... result = operation_future.result()
>>>
>>> response.add_done_callback(callback)
>>>
>>> # Handle metadata.
>>> metadata = response.metadata()
Args:
name (str): The resource name of the dataset to delete.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.automl_v1beta1.types._OperationFuture` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "delete_dataset" not in self._inner_api_calls:
self._inner_api_calls[
"delete_dataset"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.delete_dataset,
default_retry=self._method_configs["DeleteDataset"].retry,
default_timeout=self._method_configs["DeleteDataset"].timeout,
client_info=self._client_info,
)
request = service_pb2.DeleteDatasetRequest(name=name)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
operation = self._inner_api_calls["delete_dataset"](
request, retry=retry, timeout=timeout, metadata=metadata
)
return google.api_core.operation.from_gapic(
operation,
self.transport._operations_client,
empty_pb2.Empty,
metadata_type=proto_operations_pb2.OperationMetadata,
) |
<SYSTEM_TASK:>
Generates sleep intervals based on the exponential back-off algorithm.
<END_TASK>
<USER_TASK:>
Description:
def exponential_sleep_generator(initial, maximum, multiplier=_DEFAULT_DELAY_MULTIPLIER):
"""Generates sleep intervals based on the exponential back-off algorithm.
This implements the `Truncated Exponential Back-off`_ algorithm.
.. _Truncated Exponential Back-off:
https://cloud.google.com/storage/docs/exponential-backoff
Args:
initial (float): The minimum amount of time to delay. This must
be greater than 0.
maximum (float): The maximum amount of time to delay.
multiplier (float): The multiplier applied to the delay.
Yields:
float: successive sleep intervals.
""" |
delay = initial
while True:
# Introduce jitter by yielding a delay that is uniformly distributed
# to average out to the delay time.
yield min(random.uniform(0.0, delay * 2.0), maximum)
delay = delay * multiplier |
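An illustration of the schedule the generator produces (values vary because of the uniform jitter; each delay is capped at `maximum`):
import itertools

for sleep in itertools.islice(exponential_sleep_generator(1.0, 10.0, 2.0), 5):
    print(round(sleep, 2))  # e.g. 0.7, 1.9, 3.4, 7.8, 10.0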
<SYSTEM_TASK:>
Call a function and retry if it fails.
<END_TASK>
<USER_TASK:>
Description:
def retry_target(target, predicate, sleep_generator, deadline, on_error=None):
"""Call a function and retry if it fails.
This is the lowest-level retry helper. Generally, you'll use the
higher-level retry helper :class:`Retry`.
Args:
target(Callable): The function to call and retry. This must be a
nullary function - apply arguments with `functools.partial`.
predicate (Callable[Exception]): A callable used to determine if an
exception raised by the target should be considered retryable.
It should return True to retry or False otherwise.
sleep_generator (Iterable[float]): An infinite iterator that determines
how long to sleep between retries.
deadline (float): How long to keep retrying the target.
on_error (Callable): A function to call while processing a retryable
exception. Any error raised by this function will *not* be
caught.
Returns:
Any: the return value of the target function.
Raises:
google.api_core.RetryError: If the deadline is exceeded while retrying.
ValueError: If the sleep generator stops yielding values.
Exception: If the target raises an exception that isn't retryable.
""" |
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = None
last_exc = None
for sleep in sleep_generator:
try:
return target()
# pylint: disable=broad-except
# This function explicitly must deal with broad exceptions.
except Exception as exc:
if not predicate(exc):
raise
last_exc = exc
if on_error is not None:
on_error(exc)
now = datetime_helpers.utcnow()
if deadline_datetime is not None and deadline_datetime < now:
six.raise_from(
exceptions.RetryError(
"Deadline of {:.1f}s exceeded while calling {}".format(
deadline, target
),
last_exc,
),
last_exc,
)
_LOGGER.debug(
"Retrying due to {}, sleeping {:.1f}s ...".format(last_exc, sleep)
)
time.sleep(sleep)
raise ValueError("Sleep generator stopped yielding sleep values.") |
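A hedged usage sketch: retrying a deliberately flaky callable until it succeeds; the ServiceUnavailable predicate is just an example of a retryable-error check:
from google.api_core import exceptions

attempts = {"count": 0}

def flaky():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise exceptions.ServiceUnavailable("try again")
    return "ok"

result = retry_target(
    target=flaky,
    predicate=lambda exc: isinstance(exc, exceptions.ServiceUnavailable),
    sleep_generator=exponential_sleep_generator(0.1, 1.0),
    deadline=10.0,
)
assert result == "ok"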
<SYSTEM_TASK:>
Queue a message to be sent on the stream.
<END_TASK>
<USER_TASK:>
Description:
def send(self, request):
"""Queue a message to be sent on the stream.
Send is non-blocking.
If the underlying RPC has been closed, this will raise.
Args:
request (protobuf.Message): The request to send.
""" |
if self.call is None:
raise ValueError("Can not send() on an RPC that has never been open()ed.")
# Don't use self.is_active(), as ResumableBidiRpc will overload it
# to mean something semantically different.
if self.call.is_active():
self._request_queue.put(request)
else:
# calling next should cause the call to raise.
next(self.call) |
<SYSTEM_TASK:>
Wraps a method to recover the stream and retry on error.
<END_TASK>
<USER_TASK:>
Description:
def _recoverable(self, method, *args, **kwargs):
"""Wraps a method to recover the stream and retry on error.
If a retryable error occurs while making the call, then the stream will
be re-opened and the method will be retried. This happens indefinitely
so long as the error is a retryable one. If an error occurs while
re-opening the stream, then this method will raise immediately and
trigger finalization of this object.
Args:
method (Callable[..., Any]): The method to call.
args: The args to pass to the method.
kwargs: The kwargs to pass to the method.
""" |
while True:
try:
return method(*args, **kwargs)
except Exception as exc:
with self._operational_lock:
_LOGGER.debug("Call to retryable %r caused %s.", method, exc)
if not self._should_recover(exc):
self.close()
_LOGGER.debug("Not retrying %r due to %s.", method, exc)
self._finalize(exc)
raise exc
_LOGGER.debug("Re-opening stream from retryable %r.", method)
self._reopen() |
<SYSTEM_TASK:>
Start the background thread and begin consuming the stream.
<END_TASK>
<USER_TASK:>
Description:
def start(self):
"""Start the background thread and begin consuming the thread.""" |
with self._operational_lock:
ready = threading.Event()
thread = threading.Thread(
name=_BIDIRECTIONAL_CONSUMER_NAME,
target=self._thread_main,
args=(ready,)
)
thread.daemon = True
thread.start()
# Other parts of the code rely on `thread.is_alive` which
# isn't sufficient to know if a thread is active, just that it may
# soon be active. This can cause races. Further protect
# against races by using a ready event and wait on it to be set.
ready.wait()
self._thread = thread
_LOGGER.debug("Started helper thread %s", thread.name) |
<SYSTEM_TASK:>
Stop consuming the stream and shutdown the background thread.
<END_TASK>
<USER_TASK:>
Description:
def stop(self):
"""Stop consuming the stream and shutdown the background thread.""" |
with self._operational_lock:
self._bidi_rpc.close()
if self._thread is not None:
# Resume the thread to wake it up in case it is sleeping.
self.resume()
self._thread.join()
self._thread = None |
<SYSTEM_TASK:>
Resumes the response stream.
<END_TASK>
<USER_TASK:>
Description:
def resume(self):
"""Resumes the response stream.""" |
with self._wake:
self._paused = False
self._wake.notifyAll() |
<SYSTEM_TASK:>
Return a fully-qualified project string.
<END_TASK>
<USER_TASK:>
Description:
def project_path(cls, user, project):
"""Return a fully-qualified project string.""" |
return google.api_core.path_template.expand(
"users/{user}/projects/{project}", user=user, project=project
) |
<SYSTEM_TASK:>
Return a fully-qualified fingerprint string.
<END_TASK>
<USER_TASK:>
Description:
def fingerprint_path(cls, user, fingerprint):
"""Return a fully-qualified fingerprint string.""" |
return google.api_core.path_template.expand(
"users/{user}/sshPublicKeys/{fingerprint}",
user=user,
fingerprint=fingerprint,
) |
<SYSTEM_TASK:>
Deletes a POSIX account.
<END_TASK>
<USER_TASK:>
Description:
def delete_posix_account(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Deletes a POSIX account.
Example:
>>> from google.cloud import oslogin_v1
>>>
>>> client = oslogin_v1.OsLoginServiceClient()
>>>
>>> name = client.project_path('[USER]', '[PROJECT]')
>>>
>>> client.delete_posix_account(name)
Args:
name (str): A reference to the POSIX account to delete. POSIX accounts are
identified by the project ID they are associated with. A reference to
the POSIX account is in format ``users/{user}/projects/{project}``.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "delete_posix_account" not in self._inner_api_calls:
self._inner_api_calls[
"delete_posix_account"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.delete_posix_account,
default_retry=self._method_configs["DeletePosixAccount"].retry,
default_timeout=self._method_configs["DeletePosixAccount"].timeout,
client_info=self._client_info,
)
request = oslogin_pb2.DeletePosixAccountRequest(name=name)
self._inner_api_calls["delete_posix_account"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Adds an SSH public key and returns the profile information. Default POSIX
<END_TASK>
<USER_TASK:>
Description:
def import_ssh_public_key(
self,
parent,
ssh_public_key,
project_id=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Adds an SSH public key and returns the profile information. Default POSIX
account information is set when no username and UID exist as part of the
login profile.
Example:
>>> from google.cloud import oslogin_v1
>>>
>>> client = oslogin_v1.OsLoginServiceClient()
>>>
>>> parent = client.user_path('[USER]')
>>>
>>> # TODO: Initialize `ssh_public_key`:
>>> ssh_public_key = {}
>>>
>>> response = client.import_ssh_public_key(parent, ssh_public_key)
Args:
parent (str): The unique ID for the user in format ``users/{user}``.
ssh_public_key (Union[dict, ~google.cloud.oslogin_v1.types.SshPublicKey]): The SSH public key and expiration time.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.oslogin_v1.types.SshPublicKey`
project_id (str): The project ID of the Google Cloud Platform project.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.oslogin_v1.types.ImportSshPublicKeyResponse` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "import_ssh_public_key" not in self._inner_api_calls:
self._inner_api_calls[
"import_ssh_public_key"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.import_ssh_public_key,
default_retry=self._method_configs["ImportSshPublicKey"].retry,
default_timeout=self._method_configs["ImportSshPublicKey"].timeout,
client_info=self._client_info,
)
request = oslogin_pb2.ImportSshPublicKeyRequest(
parent=parent, ssh_public_key=ssh_public_key, project_id=project_id
)
return self._inner_api_calls["import_ssh_public_key"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Updates an SSH public key and returns the profile information. This method
<END_TASK>
<USER_TASK:>
Description:
def update_ssh_public_key(
self,
name,
ssh_public_key,
update_mask=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Updates an SSH public key and returns the profile information. This method
supports patch semantics.
Example:
>>> from google.cloud import oslogin_v1
>>>
>>> client = oslogin_v1.OsLoginServiceClient()
>>>
>>> name = client.fingerprint_path('[USER]', '[FINGERPRINT]')
>>>
>>> # TODO: Initialize `ssh_public_key`:
>>> ssh_public_key = {}
>>>
>>> response = client.update_ssh_public_key(name, ssh_public_key)
Args:
name (str): The fingerprint of the public key to update. Public keys are identified
by their SHA-256 fingerprint. The fingerprint of the public key is in
format ``users/{user}/sshPublicKeys/{fingerprint}``.
ssh_public_key (Union[dict, ~google.cloud.oslogin_v1.types.SshPublicKey]): The SSH public key and expiration time.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.oslogin_v1.types.SshPublicKey`
update_mask (Union[dict, ~google.cloud.oslogin_v1.types.FieldMask]): Mask to control which fields get updated. Updates all if not present.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.oslogin_v1.types.FieldMask`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.oslogin_v1.types.SshPublicKey` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "update_ssh_public_key" not in self._inner_api_calls:
self._inner_api_calls[
"update_ssh_public_key"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.update_ssh_public_key,
default_retry=self._method_configs["UpdateSshPublicKey"].retry,
default_timeout=self._method_configs["UpdateSshPublicKey"].timeout,
client_info=self._client_info,
)
request = oslogin_pb2.UpdateSshPublicKeyRequest(
name=name, ssh_public_key=ssh_public_key, update_mask=update_mask
)
return self._inner_api_calls["update_ssh_public_key"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Convert a protobuf GC rule to a native object.
<END_TASK>
<USER_TASK:>
Description:
def _gc_rule_from_pb(gc_rule_pb):
"""Convert a protobuf GC rule to a native object.
:type gc_rule_pb: :class:`.table_v2_pb2.GcRule`
:param gc_rule_pb: The GC rule to convert.
:rtype: :class:`GarbageCollectionRule` or :data:`NoneType <types.NoneType>`
:returns: An instance of one of the native rules defined
in :module:`column_family` or :data:`None` if no values were
set on the protobuf passed in.
:raises: :class:`ValueError <exceptions.ValueError>` if the rule name
is unexpected.
""" |
rule_name = gc_rule_pb.WhichOneof("rule")
if rule_name is None:
return None
if rule_name == "max_num_versions":
return MaxVersionsGCRule(gc_rule_pb.max_num_versions)
elif rule_name == "max_age":
max_age = _helpers._duration_pb_to_timedelta(gc_rule_pb.max_age)
return MaxAgeGCRule(max_age)
elif rule_name == "union":
return GCRuleUnion([_gc_rule_from_pb(rule) for rule in gc_rule_pb.union.rules])
elif rule_name == "intersection":
rules = [_gc_rule_from_pb(rule) for rule in gc_rule_pb.intersection.rules]
return GCRuleIntersection(rules)
else:
raise ValueError("Unexpected rule name", rule_name) |
<SYSTEM_TASK:>
Converts the garbage collection rule to a protobuf.
<END_TASK>
<USER_TASK:>
Description:
def to_pb(self):
"""Converts the garbage collection rule to a protobuf.
:rtype: :class:`.table_v2_pb2.GcRule`
:returns: The converted current object.
""" |
max_age = _helpers._timedelta_to_duration_pb(self.max_age)
return table_v2_pb2.GcRule(max_age=max_age) |
<SYSTEM_TASK:>
Converts the union into a single GC rule as a protobuf.
<END_TASK>
<USER_TASK:>
Description:
def to_pb(self):
"""Converts the union into a single GC rule as a protobuf.
:rtype: :class:`.table_v2_pb2.GcRule`
:returns: The converted current object.
""" |
union = table_v2_pb2.GcRule.Union(rules=[rule.to_pb() for rule in self.rules])
return table_v2_pb2.GcRule(union=union) |
<SYSTEM_TASK:>
Converts the intersection into a single GC rule as a protobuf.
<END_TASK>
<USER_TASK:>
Description:
def to_pb(self):
"""Converts the intersection into a single GC rule as a protobuf.
:rtype: :class:`.table_v2_pb2.GcRule`
:returns: The converted current object.
""" |
intersection = table_v2_pb2.GcRule.Intersection(
rules=[rule.to_pb() for rule in self.rules]
)
return table_v2_pb2.GcRule(intersection=intersection) |
<SYSTEM_TASK:>
Converts the column family to a protobuf.
<END_TASK>
<USER_TASK:>
Description:
def to_pb(self):
"""Converts the column family to a protobuf.
:rtype: :class:`.table_v2_pb2.ColumnFamily`
:returns: The converted current object.
""" |
if self.gc_rule is None:
return table_v2_pb2.ColumnFamily()
else:
return table_v2_pb2.ColumnFamily(gc_rule=self.gc_rule.to_pb()) |
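A hedged sketch composing GC rules and serializing them with the to_pb methods above; under Bigtable's union semantics a cell becomes eligible for collection if any sub-rule applies:
import datetime
from google.cloud.bigtable.column_family import (
    GCRuleUnion,
    MaxAgeGCRule,
    MaxVersionsGCRule,
)

# Garbage-collect cells beyond 3 versions or older than 7 days.
rule = GCRuleUnion(
    rules=[MaxVersionsGCRule(3), MaxAgeGCRule(datetime.timedelta(days=7))]
)
gc_rule_pb = rule.to_pb()  # table_v2_pb2.GcRule with the `union` field populated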
<SYSTEM_TASK:>
Create this column family.
<END_TASK>
<USER_TASK:>
Description:
def create(self):
"""Create this column family.
For example:
.. literalinclude:: snippets_table.py
:start-after: [START bigtable_create_column_family]
:end-before: [END bigtable_create_column_family]
""" |
column_family = self.to_pb()
modification = table_admin_v2_pb2.ModifyColumnFamiliesRequest.Modification(
id=self.column_family_id, create=column_family
)
client = self._table._instance._client
# The modification carries everything this call needs: the GC rule and
# the column family ID already stored on this instance.
client.table_admin_client.modify_column_families(
self._table.name, [modification]
) |
<SYSTEM_TASK:>
Delete this column family.
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
"""Delete this column family.
For example:
.. literalinclude:: snippets_table.py
:start-after: [START bigtable_delete_column_family]
:end-before: [END bigtable_delete_column_family]
""" |
modification = table_admin_v2_pb2.ModifyColumnFamiliesRequest.Modification(
id=self.column_family_id, drop=True
)
client = self._table._instance._client
# The modification identifies the column family by the ID already stored
# on this instance and marks it to be dropped.
client.table_admin_client.modify_column_families(
self._table.name, [modification]
) |
<SYSTEM_TASK:>
Triggered whenever the underlying RPC terminates without recovery.
<END_TASK>
<USER_TASK:>
Description:
def _on_rpc_done(self, future):
"""Triggered whenever the underlying RPC terminates without recovery.
This is typically triggered from one of two threads: the background
consumer thread (when calling ``recv()`` produces a non-recoverable
error) or the grpc management thread (when cancelling the RPC).
This method is *non-blocking*. It will start another thread to deal
with shutting everything down. This is to prevent blocking in the
background consumer and preventing it from being ``joined()``.
""" |
_LOGGER.info("RPC termination has signaled manager shutdown.")
future = _maybe_wrap_exception(future)
thread = threading.Thread(
name=_RPC_ERROR_THREAD_NAME, target=self.close, kwargs={"reason": future}
)
thread.daemon = True
thread.start() |
<SYSTEM_TASK:>
Creates a watch snapshot listener for a document. snapshot_callback
<END_TASK>
<USER_TASK:>
Description:
def for_document(
cls,
document_ref,
snapshot_callback,
snapshot_class_instance,
reference_class_instance,
):
"""
Creates a watch snapshot listener for a document. The snapshot_callback
receives a DocumentChange object; it may also begin receiving target
changes in the future.
Args:
document_ref: Reference to Document
snapshot_callback: callback to be called on snapshot
snapshot_class_instance: instance of DocumentSnapshot to make
snapshots with to pass to snapshot_callback
reference_class_instance: instance of DocumentReference to make
references
""" |
return cls(
document_ref,
document_ref._client,
{
"documents": {"documents": [document_ref._document_path]},
"target_id": WATCH_TARGET_ID,
},
document_watch_comparator,
snapshot_callback,
snapshot_class_instance,
reference_class_instance,
) |
<SYSTEM_TASK:>
Called every time there is a response from listen. Collect changes
<END_TASK>
<USER_TASK:>
Description:
def on_snapshot(self, proto):
"""
Called every time there is a response from listen. Collect changes
and 'push' the changes in a batch to the customer when we receive
'current' from the listen response.
Args:
proto (`google.cloud.firestore_v1beta1.types.ListenResponse`):
The response received on the listen stream.
""" |
TargetChange = firestore_pb2.TargetChange
target_changetype_dispatch = {
TargetChange.NO_CHANGE: self._on_snapshot_target_change_no_change,
TargetChange.ADD: self._on_snapshot_target_change_add,
TargetChange.REMOVE: self._on_snapshot_target_change_remove,
TargetChange.RESET: self._on_snapshot_target_change_reset,
TargetChange.CURRENT: self._on_snapshot_target_change_current,
}
target_change = proto.target_change
if str(target_change):
target_change_type = target_change.target_change_type
_LOGGER.debug("on_snapshot: target change: " + str(target_change_type))
meth = target_changetype_dispatch.get(target_change_type)
if meth is None:
_LOGGER.info(
"on_snapshot: Unknown target change " + str(target_change_type)
)
self.close(
reason="Unknown target change type: %s " % str(target_change_type)
)
else:
try:
meth(proto)
except Exception as exc2:
_LOGGER.debug("meth(proto) exc: " + str(exc2))
raise
# NOTE:
# in other implementations, such as node, the backoff is reset here
# in this version bidi rpc is just used and will control this.
elif str(proto.document_change):
_LOGGER.debug("on_snapshot: document change")
# No other target_ids can show up here, but we still need to see
# if the targetId was in the added list or removed list.
target_ids = proto.document_change.target_ids or []
removed_target_ids = proto.document_change.removed_target_ids or []
changed = False
removed = False
if WATCH_TARGET_ID in target_ids:
changed = True
if WATCH_TARGET_ID in removed_target_ids:
removed = True
if changed:
_LOGGER.debug("on_snapshot: document change: CHANGED")
# google.cloud.firestore_v1beta1.types.DocumentChange
document_change = proto.document_change
# google.cloud.firestore_v1beta1.types.Document
document = document_change.document
data = _helpers.decode_dict(document.fields, self._firestore)
# Create a snapshot. As Document and Query objects can be
# passed we need to get a Document Reference in a more manual
# fashion than self._document_reference
document_name = document.name
db_str = self._firestore._database_string
db_str_documents = db_str + "/documents/"
if document_name.startswith(db_str_documents):
document_name = document_name[len(db_str_documents) :]
document_ref = self._firestore.document(document_name)
snapshot = self.DocumentSnapshot(
reference=document_ref,
data=data,
exists=True,
read_time=None,
create_time=document.create_time,
update_time=document.update_time,
)
self.change_map[document.name] = snapshot
elif removed:
_LOGGER.debug("on_snapshot: document change: REMOVED")
document = proto.document_change.document
self.change_map[document.name] = ChangeType.REMOVED
# NB: document_delete and document_remove (as far as we, the client,
# are concerned) are functionally equivalent
elif str(proto.document_delete):
_LOGGER.debug("on_snapshot: document change: DELETE")
name = proto.document_delete.document
self.change_map[name] = ChangeType.REMOVED
elif str(proto.document_remove):
_LOGGER.debug("on_snapshot: document change: REMOVE")
name = proto.document_remove.document
self.change_map[name] = ChangeType.REMOVED
elif proto.filter:
_LOGGER.debug("on_snapshot: filter update")
if proto.filter.count != self._current_size():
# We need to remove all the current results.
self._reset_docs()
# The filter didn't match, so re-issue the query.
# TODO: reset stream method?
# self._reset_stream();
else:
_LOGGER.debug("UNKNOWN TYPE. UHOH")
self.close(reason=ValueError("Unknown listen response type: %s" % proto)) |
<SYSTEM_TASK:>
Assembles a new snapshot from the current set of changes and invokes
<END_TASK>
<USER_TASK:>
Description:
def push(self, read_time, next_resume_token):
"""
Assembles a new snapshot from the current set of changes and invokes
the user's callback. Clears the current changes on completion.
""" |
deletes, adds, updates = Watch._extract_changes(
self.doc_map, self.change_map, read_time
)
updated_tree, updated_map, appliedChanges = self._compute_snapshot(
self.doc_tree, self.doc_map, deletes, adds, updates
)
if not self.has_pushed or len(appliedChanges):
# TODO: It is possible in the future we will have the tree order
# on insert. For now, we sort here.
key = functools.cmp_to_key(self._comparator)
keys = sorted(updated_tree.keys(), key=key)
self._snapshot_callback(
keys,
appliedChanges,
datetime.datetime.fromtimestamp(read_time.seconds, pytz.utc),
)
self.has_pushed = True
self.doc_tree = updated_tree
self.doc_map = updated_map
self.change_map.clear()
self.resume_token = next_resume_token |
<SYSTEM_TASK:>
Returns the current count of all documents, including the changes from
<END_TASK>
<USER_TASK:>
Description:
def _current_size(self):
"""
Returns the current count of all documents, including the changes from
the current changeMap.
""" |
deletes, adds, _ = Watch._extract_changes(self.doc_map, self.change_map, None)
return len(self.doc_map) + len(adds) - len(deletes) |
<SYSTEM_TASK:>
Helper to clear the docs on RESET or filter mismatch.
<END_TASK>
<USER_TASK:>
Description:
def _reset_docs(self):
"""
Helper to clear the docs on RESET or filter mismatch.
""" |
_LOGGER.debug("resetting documents")
self.change_map.clear()
self.resume_token = None
# Mark each document as deleted. If documents are not deleted
# they will be sent again by the server.
for snapshot in self.doc_tree.keys():
name = snapshot.reference._document_path
self.change_map[name] = ChangeType.REMOVED
self.current = False |
<SYSTEM_TASK:>
A low level method to send a request to the API.
<END_TASK>
<USER_TASK:>
Description:
def _make_request(
self,
method,
url,
data=None,
content_type=None,
headers=None,
target_object=None,
):
"""A low level method to send a request to the API.
Typically, you shouldn't need to use this method.
:type method: str
:param method: The HTTP method to use in the request.
:type url: str
:param url: The URL to send the request to.
:type data: str
:param data: The data to send as the body of the request.
:type content_type: str
:param content_type: The proper MIME type of the data provided.
:type headers: dict
:param headers: (Optional) A dictionary of HTTP headers to send with
the request. If passed, will be modified directly
here with added headers.
:type target_object: object
:param target_object:
(Optional) Argument to be used by library callers. This can allow
custom behavior, for example, to defer an HTTP request and complete
initialization of the object at a later time.
:rtype: :class:`requests.Response`
:returns: The HTTP response.
""" |
headers = headers or {}
headers.update(self._EXTRA_HEADERS)
headers["Accept-Encoding"] = "gzip"
if content_type:
headers["Content-Type"] = content_type
headers["User-Agent"] = self.USER_AGENT
return self._do_request(method, url, headers, data, target_object) |
<SYSTEM_TASK:>
Make a request over the HTTP transport to the API.
<END_TASK>
<USER_TASK:>
Description:
def api_request(
self,
method,
path,
query_params=None,
data=None,
content_type=None,
headers=None,
api_base_url=None,
api_version=None,
expect_json=True,
_target_object=None,
):
"""Make a request over the HTTP transport to the API.
You shouldn't need to use this method, but if you plan to
interact with the API using these primitives, this is the
correct one to use.
:type method: str
:param method: The HTTP method name (ie, ``GET``, ``POST``, etc).
Required.
:type path: str
:param path: The path to the resource (ie, ``'/b/bucket-name'``).
Required.
:type query_params: dict or list
:param query_params: A dictionary of keys and values (or list of
key-value pairs) to insert into the query
string of the URL.
:type data: str
:param data: The data to send as the body of the request. Default is
the empty string.
:type content_type: str
:param content_type: The proper MIME type of the data provided. Default
is None.
:type headers: dict
:param headers: extra HTTP headers to be sent with the request.
:type api_base_url: str
:param api_base_url: The base URL for the API endpoint.
Typically you won't have to provide this.
Default is the standard API base URL.
:type api_version: str
:param api_version: The version of the API to call. Typically
you shouldn't provide this and instead use
the default for the library. Default is the
latest API version supported by
google-cloud-python.
:type expect_json: bool
:param expect_json: If True, this method will try to parse the
response as JSON and raise an exception if
that cannot be done. Default is True.
:type _target_object: :class:`object`
:param _target_object:
(Optional) Protected argument to be used by library callers. This
can allow custom behavior, for example, to defer an HTTP request
and complete initialization of the object at a later time.
:raises ~google.cloud.exceptions.GoogleCloudError: if the response code
is not 200 OK.
:raises ValueError: if the response content type is not JSON.
:rtype: dict or str
:returns: The API response payload, either as a raw string or
a dictionary if the response is valid JSON.
""" |
url = self.build_api_url(
path=path,
query_params=query_params,
api_base_url=api_base_url,
api_version=api_version,
)
# Making the executive decision that any dictionary
# data will be sent properly as JSON.
if data and isinstance(data, dict):
data = json.dumps(data)
content_type = "application/json"
response = self._make_request(
method=method,
url=url,
data=data,
content_type=content_type,
headers=headers,
target_object=_target_object,
)
if not 200 <= response.status_code < 300:
raise exceptions.from_http_response(response)
if expect_json and response.content:
return response.json()
else:
return response.content |
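A hedged sketch of calling these primitives directly through a client's connection object (google-cloud-storage here); that `_connection` is the object exposing api_request is an assumption about the client's internals, and the bucket path is purely illustrative:
from google.cloud import storage

client = storage.Client()
bucket_resource = client._connection.api_request(method="GET", path="/b/my-bucket")
print(bucket_resource["name"])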
<SYSTEM_TASK:>
Construct a filter string to filter on metric or resource labels.
<END_TASK>
<USER_TASK:>
Description:
def _build_label_filter(category, *args, **kwargs):
"""Construct a filter string to filter on metric or resource labels.""" |
terms = list(args)
for key, value in six.iteritems(kwargs):
if value is None:
continue
suffix = None
if key.endswith(
("_prefix", "_suffix", "_greater", "_greaterequal", "_less", "_lessequal")
):
key, suffix = key.rsplit("_", 1)
if category == "resource" and key == "resource_type":
key = "resource.type"
else:
key = ".".join((category, "label", key))
if suffix == "prefix":
term = '{key} = starts_with("{value}")'
elif suffix == "suffix":
term = '{key} = ends_with("{value}")'
elif suffix == "greater":
term = "{key} > {value}"
elif suffix == "greaterequal":
term = "{key} >= {value}"
elif suffix == "less":
term = "{key} < {value}"
elif suffix == "lessequal":
term = "{key} <= {value}"
else:
term = '{key} = "{value}"'
terms.append(term.format(key=key, value=value))
return " AND ".join(sorted(terms)) |
<SYSTEM_TASK:>
Copy the query and set the query time interval.
<END_TASK>
<USER_TASK:>
Description:
def select_interval(self, end_time, start_time=None):
"""Copy the query and set the query time interval.
Example::
import datetime
now = datetime.datetime.utcnow()
query = query.select_interval(
end_time=now,
start_time=now - datetime.timedelta(minutes=5))
As a convenience, you can alternatively specify the end time and
an interval duration when you create the query initially.
:type end_time: :class:`datetime.datetime`
:param end_time: The end time (inclusive) of the time interval
for which results should be returned, as a datetime object.
:type start_time: :class:`datetime.datetime`
:param start_time:
(Optional) The start time (exclusive) of the time interval
for which results should be returned, as a datetime object.
If not specified, the interval is a point in time.
:rtype: :class:`Query`
:returns: The new query object.
""" |
new_query = copy.deepcopy(self)
new_query._end_time = end_time
new_query._start_time = start_time
return new_query |
<SYSTEM_TASK:>
Copy the query and add filtering by group.
<END_TASK>
<USER_TASK:>
Description:
def select_group(self, group_id):
"""Copy the query and add filtering by group.
Example::
query = query.select_group('1234567')
:type group_id: str
:param group_id: The ID of a group to filter by.
:rtype: :class:`Query`
:returns: The new query object.
""" |
new_query = copy.deepcopy(self)
new_query._filter.group_id = group_id
return new_query |
<SYSTEM_TASK:>
Copy the query and add filtering by monitored projects.
<END_TASK>
<USER_TASK:>
Description:
def select_projects(self, *args):
"""Copy the query and add filtering by monitored projects.
This is only useful if the target project represents a Stackdriver
account containing the specified monitored projects.
Examples::
query = query.select_projects('project-1')
query = query.select_projects('project-1', 'project-2')
:type args: tuple
:param args: Project IDs limiting the resources to be included
in the query.
:rtype: :class:`Query`
:returns: The new query object.
""" |
new_query = copy.deepcopy(self)
new_query._filter.projects = args
return new_query |
<SYSTEM_TASK:>
Copy the query and add filtering by resource labels.
<END_TASK>
<USER_TASK:>
Description:
def select_resources(self, *args, **kwargs):
"""Copy the query and add filtering by resource labels.
Examples::
query = query.select_resources(zone='us-central1-a')
query = query.select_resources(zone_prefix='europe-')
query = query.select_resources(resource_type='gce_instance')
A keyword argument ``<label>=<value>`` ordinarily generates a filter
expression of the form::
resource.label.<label> = "<value>"
However, by adding ``"_prefix"`` or ``"_suffix"`` to the keyword,
you can specify a partial match.
``<label>_prefix=<value>`` generates::
resource.label.<label> = starts_with("<value>")
``<label>_suffix=<value>`` generates::
resource.label.<label> = ends_with("<value>")
As a special case, ``"resource_type"`` is treated as a special
pseudo-label corresponding to the filter object ``resource.type``.
For example, ``resource_type=<value>`` generates::
resource.type = "<value>"
See the `defined resource types`_.
.. note::
The label ``"instance_name"`` is a metric label,
not a resource label. You would filter on it using
``select_metrics(instance_name=...)``.
:type args: tuple
:param args: Raw filter expression strings to include in the
conjunction. If just one is provided and no keyword arguments
are provided, it can be a disjunction.
:type kwargs: dict
:param kwargs: Label filters to include in the conjunction as
described above.
:rtype: :class:`Query`
:returns: The new query object.
.. _defined resource types:
https://cloud.google.com/monitoring/api/v3/monitored-resources
""" |
new_query = copy.deepcopy(self)
new_query._filter.select_resources(*args, **kwargs)
return new_query |
<SYSTEM_TASK:>
Copy the query and add filtering by metric labels.
<END_TASK>
<USER_TASK:>
Description:
def select_metrics(self, *args, **kwargs):
"""Copy the query and add filtering by metric labels.
Examples::
query = query.select_metrics(instance_name='myinstance')
query = query.select_metrics(instance_name_prefix='mycluster-')
A keyword argument ``<label>=<value>`` ordinarily generates a filter
expression of the form::
metric.label.<label> = "<value>"
However, by adding ``"_prefix"`` or ``"_suffix"`` to the keyword,
you can specify a partial match.
``<label>_prefix=<value>`` generates::
metric.label.<label> = starts_with("<value>")
``<label>_suffix=<value>`` generates::
metric.label.<label> = ends_with("<value>")
If the label's value type is ``INT64``, a similar notation can be
used to express inequalities:
``<label>_less=<value>`` generates::
metric.label.<label> < <value>
``<label>_lessequal=<value>`` generates::
metric.label.<label> <= <value>
``<label>_greater=<value>`` generates::
metric.label.<label> > <value>
``<label>_greaterequal=<value>`` generates::
metric.label.<label> >= <value>
:type args: tuple
:param args: Raw filter expression strings to include in the
conjunction. If just one is provided and no keyword arguments
are provided, it can be a disjunction.
:type kwargs: dict
:param kwargs: Label filters to include in the conjunction as
described above.
:rtype: :class:`Query`
:returns: The new query object.
""" |
new_query = copy.deepcopy(self)
new_query._filter.select_metrics(*args, **kwargs)
return new_query |
<SYSTEM_TASK:>
Copy the query and add temporal alignment.
<END_TASK>
<USER_TASK:>
Description:
def align(self, per_series_aligner, seconds=0, minutes=0, hours=0):
"""Copy the query and add temporal alignment.
If ``per_series_aligner`` is not :data:`Aligner.ALIGN_NONE`, each time
series will contain data points only on the period boundaries.
Example::
from google.cloud.monitoring import enums
query = query.align(
enums.Aggregation.Aligner.ALIGN_MEAN, minutes=5)
It is also possible to specify the aligner as a literal string::
query = query.align('ALIGN_MEAN', minutes=5)
:type per_series_aligner: str or
:class:`~google.cloud.monitoring_v3.gapic.enums.Aggregation.Aligner`
:param per_series_aligner: The approach to be used to align
individual time series. For example: :data:`Aligner.ALIGN_MEAN`.
See
:class:`~google.cloud.monitoring_v3.gapic.enums.Aggregation.Aligner`
and the descriptions of the `supported aligners`_.
:type seconds: int
:param seconds: The number of seconds in the alignment period.
:type minutes: int
:param minutes: The number of minutes in the alignment period.
:type hours: int
:param hours: The number of hours in the alignment period.
:rtype: :class:`Query`
:returns: The new query object.
.. _supported aligners:
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/\
projects.timeSeries/list#Aligner
""" |
new_query = copy.deepcopy(self)
new_query._per_series_aligner = per_series_aligner
new_query._alignment_period_seconds = seconds + 60 * (minutes + 60 * hours)
return new_query |
<SYSTEM_TASK:>
Copy the query and add cross-series reduction.
<END_TASK>
<USER_TASK:>
Description:
def reduce(self, cross_series_reducer, *group_by_fields):
"""Copy the query and add cross-series reduction.
Cross-series reduction combines time series by aggregating their
data points.
For example, you could request an aggregated time series for each
combination of project and zone as follows::
from google.cloud.monitoring import enums
query = query.reduce(enums.Aggregation.Reducer.REDUCE_MEAN,
'resource.project_id', 'resource.zone')
:type cross_series_reducer: str or
:class:`~google.cloud.monitoring_v3.gapic.enums.Aggregation.Reducer`
:param cross_series_reducer:
The approach to be used to combine time series. For example:
:data:`Reducer.REDUCE_MEAN`. See
:class:`~google.cloud.monitoring_v3.gapic.enums.Aggregation.Reducer`
and the descriptions of the `supported reducers`_.
:type group_by_fields: strs
:param group_by_fields:
Fields to be preserved by the reduction. For example, specifying
just ``"resource.zone"`` will result in one time series per zone.
The default is to aggregate all of the time series into just one.
:rtype: :class:`Query`
:returns: The new query object.
.. _supported reducers:
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/\
projects.timeSeries/list#Reducer
""" |
new_query = copy.deepcopy(self)
new_query._cross_series_reducer = cross_series_reducer
new_query._group_by_fields = group_by_fields
return new_query |
<SYSTEM_TASK:>
Yield all time series objects selected by the query.
<END_TASK>
<USER_TASK:>
Description:
def iter(self, headers_only=False, page_size=None):
"""Yield all time series objects selected by the query.
The generator returned iterates over
:class:`~google.cloud.monitoring_v3.types.TimeSeries` objects
containing points ordered from oldest to newest.
Note that the :class:`Query` object itself is an iterable, such that
the following are equivalent::
for timeseries in query:
...
for timeseries in query.iter():
...
:type headers_only: bool
:param headers_only:
Whether to omit the point data from the time series objects.
:type page_size: int
:param page_size:
(Optional) The maximum number of points in each page of results
from this request. Non-positive values are ignored. Defaults
to a sensible value set by the API.
:raises: :exc:`ValueError` if the query time interval has not been
specified.
""" |
if self._end_time is None:
raise ValueError("Query time interval not specified.")
params = self._build_query_params(headers_only, page_size)
for ts in self._client.list_time_series(**params):
yield ts |
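A hedged end-to-end sketch; the constructor signature follows the ``monitoring_v3`` query module of this era and may differ slightly between releases, and the project id is illustrative.
from google.cloud import monitoring_v3
from google.cloud.monitoring_v3 import query

client = monitoring_v3.MetricServiceClient()
q = query.Query(
    client, "my-project",
    metric_type="compute.googleapis.com/instance/cpu/utilization",
    minutes=30,   # interval ends "now" and starts 30 minutes earlier
).align(monitoring_v3.enums.Aggregation.Aligner.ALIGN_MEAN, minutes=5)
for ts in q.iter(headers_only=False):
    print(ts.resource.labels, len(ts.points))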
<SYSTEM_TASK:>
Return key-value pairs for the list_time_series API call.
<END_TASK>
<USER_TASK:>
Description:
def _build_query_params(self, headers_only=False, page_size=None):
"""Return key-value pairs for the list_time_series API call.
:type headers_only: bool
:param headers_only:
Whether to omit the point data from the
:class:`~google.cloud.monitoring_v3.types.TimeSeries` objects.
:type page_size: int
:param page_size:
(Optional) The maximum number of points in each page of results
from this request. Non-positive values are ignored. Defaults
to a sensible value set by the API.
""" |
params = {"name": self._project_path, "filter_": self.filter}
params["interval"] = types.TimeInterval()
params["interval"].end_time.FromDatetime(self._end_time)
if self._start_time:
params["interval"].start_time.FromDatetime(self._start_time)
if (
self._per_series_aligner
or self._alignment_period_seconds
or self._cross_series_reducer
or self._group_by_fields
):
params["aggregation"] = types.Aggregation(
per_series_aligner=self._per_series_aligner,
cross_series_reducer=self._cross_series_reducer,
group_by_fields=self._group_by_fields,
alignment_period={"seconds": self._alignment_period_seconds},
)
if headers_only:
params["view"] = enums.ListTimeSeriesRequest.TimeSeriesView.HEADERS
else:
params["view"] = enums.ListTimeSeriesRequest.TimeSeriesView.FULL
if page_size is not None:
params["page_size"] = page_size
return params |
<SYSTEM_TASK:>
Get the meaning from a protobuf value.
<END_TASK>
<USER_TASK:>
Description:
def _get_meaning(value_pb, is_list=False):
"""Get the meaning from a protobuf value.
:type value_pb: :class:`.entity_pb2.Value`
:param value_pb: The protobuf value to be checked for an
associated meaning.
:type is_list: bool
:param is_list: Boolean indicating if the ``value_pb`` contains
a list value.
:rtype: int
:returns: The meaning for the ``value_pb`` if one is set, else
:data:`None`. For a list value, if the meanings disagree, a list
of all the meanings is returned. If all the list meanings agree,
they are condensed into a single meaning.
""" |
meaning = None
if is_list:
# An empty list will have no values, hence no shared meaning
# set among them.
if len(value_pb.array_value.values) == 0:
return None
# We check among all the meanings, some of which may be None,
# the rest which may be enum/int values.
all_meanings = [
_get_meaning(sub_value_pb) for sub_value_pb in value_pb.array_value.values
]
unique_meanings = set(all_meanings)
if len(unique_meanings) == 1:
# If there is a unique meaning, we preserve it.
meaning = unique_meanings.pop()
else: # We know len(value_pb.array_value.values) > 0.
# If the meaning is not unique, just return all of them.
meaning = all_meanings
elif value_pb.meaning: # Simple field (int32).
meaning = value_pb.meaning
return meaning |
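A small sketch of the behaviour, assuming ``_get_meaning`` as defined above is in scope (the exact proto import path varies by library release):
from google.cloud.datastore_v1.proto import entity_pb2

scalar = entity_pb2.Value(meaning=9, string_value="555-1234")
print(_get_meaning(scalar))                  # -> 9

list_pb = entity_pb2.Value()
list_pb.array_value.values.add(meaning=9, string_value="a")
list_pb.array_value.values.add(meaning=9, string_value="b")
print(_get_meaning(list_pb, is_list=True))   # -> 9 (all sub-meanings agree)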
<SYSTEM_TASK:>
Factory method for creating an entity based on a protobuf.
<END_TASK>
<USER_TASK:>
Description:
def entity_from_protobuf(pb):
"""Factory method for creating an entity based on a protobuf.
The protobuf should be one returned from the Cloud Datastore
Protobuf API.
:type pb: :class:`.entity_pb2.Entity`
:param pb: The Protobuf representing the entity.
:rtype: :class:`google.cloud.datastore.entity.Entity`
:returns: The entity derived from the protobuf.
""" |
key = None
if pb.HasField("key"): # Message field (Key)
key = key_from_protobuf(pb.key)
entity_props = {}
entity_meanings = {}
exclude_from_indexes = []
for prop_name, value_pb in _property_tuples(pb):
value = _get_value_from_value_pb(value_pb)
entity_props[prop_name] = value
# Check if the property has an associated meaning.
is_list = isinstance(value, list)
meaning = _get_meaning(value_pb, is_list=is_list)
if meaning is not None:
entity_meanings[prop_name] = (meaning, value)
# Check if ``value_pb`` was excluded from index. Lists need to be
# special-cased, and we require all ``exclude_from_indexes`` values
# in a list to agree.
if is_list and len(value) > 0:
exclude_values = set(
value_pb.exclude_from_indexes
for value_pb in value_pb.array_value.values
)
if len(exclude_values) != 1:
raise ValueError(
"For an array_value, subvalues must either "
"all be indexed or all excluded from "
"indexes."
)
if exclude_values.pop():
exclude_from_indexes.append(prop_name)
else:
if value_pb.exclude_from_indexes:
exclude_from_indexes.append(prop_name)
entity = Entity(key=key, exclude_from_indexes=exclude_from_indexes)
entity.update(entity_props)
entity._meanings.update(entity_meanings)
return entity |
<SYSTEM_TASK:>
Converts an entity into a protobuf.
<END_TASK>
<USER_TASK:>
Description:
def entity_to_protobuf(entity):
"""Converts an entity into a protobuf.
:type entity: :class:`google.cloud.datastore.entity.Entity`
:param entity: The entity to be turned into a protobuf.
:rtype: :class:`.entity_pb2.Entity`
:returns: The protobuf representing the entity.
""" |
entity_pb = entity_pb2.Entity()
if entity.key is not None:
key_pb = entity.key.to_protobuf()
entity_pb.key.CopyFrom(key_pb)
for name, value in entity.items():
value_is_list = isinstance(value, list)
value_pb = _new_value_pb(entity_pb, name)
# Set the appropriate value.
_set_protobuf_value(value_pb, value)
# Add index information to protobuf.
if name in entity.exclude_from_indexes:
if not value_is_list:
value_pb.exclude_from_indexes = True
for sub_value in value_pb.array_value.values:
sub_value.exclude_from_indexes = True
# Add meaning information to protobuf.
_set_pb_meaning_from_entity(
entity, name, value, value_pb, is_list=value_is_list
)
return entity_pb |
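A minimal round-trip sketch using the two converters above; the project and kind names are illustrative.
from google.cloud.datastore.entity import Entity
from google.cloud.datastore.key import Key

original = Entity(
    key=Key("Task", 1234, project="my-project"),
    exclude_from_indexes=("description",),
)
original.update({"done": False, "description": "Buy milk"})

pb = entity_to_protobuf(original)      # Entity -> entity_pb2.Entity
restored = entity_from_protobuf(pb)    # entity_pb2.Entity -> Entity
assert restored == original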
<SYSTEM_TASK:>
Validate rules for read options, and assign to the request.
<END_TASK>
<USER_TASK:>
Description:
def get_read_options(eventual, transaction_id):
"""Validate rules for read options, and assign to the request.
Helper method for ``lookup()`` and ``run_query``.
:type eventual: bool
:param eventual: Flag indicating if ``EVENTUAL`` or ``STRONG``
consistency should be used.
:type transaction_id: bytes
:param transaction_id: A transaction identifier (may be null).
:rtype: :class:`.datastore_pb2.ReadOptions`
:returns: The read options corresponding to the inputs.
:raises: :class:`ValueError` if ``eventual`` is ``True`` and the
``transaction_id`` is not ``None``.
""" |
if transaction_id is None:
if eventual:
return datastore_pb2.ReadOptions(
read_consistency=datastore_pb2.ReadOptions.EVENTUAL
)
else:
return datastore_pb2.ReadOptions()
else:
if eventual:
raise ValueError("eventual must be False when in a transaction")
else:
return datastore_pb2.ReadOptions(transaction=transaction_id) |
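A short sketch of the allowed combinations, using the function defined above:
opts = get_read_options(eventual=False, transaction_id=None)
print(opts)                      # default (empty) ReadOptions -> STRONG reads

opts = get_read_options(eventual=True, transaction_id=None)
print(opts.read_consistency)     # the EVENTUAL enum value

opts = get_read_options(eventual=False, transaction_id=b"\x01\x02")
print(opts.transaction)          # b'\x01\x02'

# Passing eventual=True together with a transaction id raises ValueError.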
<SYSTEM_TASK:>
Factory method for creating a key based on a protobuf.
<END_TASK>
<USER_TASK:>
Description:
def key_from_protobuf(pb):
"""Factory method for creating a key based on a protobuf.
The protobuf should be one returned from the Cloud Datastore
Protobuf API.
:type pb: :class:`.entity_pb2.Key`
:param pb: The Protobuf representing the key.
:rtype: :class:`google.cloud.datastore.key.Key`
:returns: a new `Key` instance
""" |
path_args = []
for element in pb.path:
path_args.append(element.kind)
if element.id: # Simple field (int64)
path_args.append(element.id)
# This is safe: we expect returned proto objects to have only
# one of `name` or `id` set.
if element.name: # Simple field (string)
path_args.append(element.name)
project = None
if pb.partition_id.project_id: # Simple field (string)
project = pb.partition_id.project_id
namespace = None
if pb.partition_id.namespace_id: # Simple field (string)
namespace = pb.partition_id.namespace_id
return Key(*path_args, namespace=namespace, project=project) |
<SYSTEM_TASK:>
Given a value, return the protobuf attribute name and proper value.
<END_TASK>
<USER_TASK:>
Description:
def _pb_attr_value(val):
"""Given a value, return the protobuf attribute name and proper value.
The Protobuf API uses different attribute names based on value types
rather than inferring the type. This function simply determines the
proper attribute name based on the type of the value provided and
returns the attribute name as well as a properly formatted value.
Certain value types need to be coerced into a different type (such
as a `datetime.datetime` into an integer timestamp, or a
`google.cloud.datastore.key.Key` into a Protobuf representation). This
function handles that for you.
.. note::
Values which are "text" ('unicode' in Python2, 'str' in Python3) map
to 'string_value' in the datastore; values which are "bytes"
('str' in Python2, 'bytes' in Python3) map to 'blob_value'.
For example:
>>> _pb_attr_value(1234)
('integer_value', 1234)
>>> _pb_attr_value('my_string')
('string_value', 'my_string')
:type val:
:class:`datetime.datetime`, :class:`google.cloud.datastore.key.Key`,
bool, float, integer, bytes, str, unicode,
:class:`google.cloud.datastore.entity.Entity`, dict, list,
:class:`google.cloud.datastore.helpers.GeoPoint`, NoneType
:param val: The value to be scrutinized.
:rtype: tuple
:returns: A tuple of the attribute name and proper value type.
""" |
if isinstance(val, datetime.datetime):
name = "timestamp"
value = _datetime_to_pb_timestamp(val)
elif isinstance(val, Key):
name, value = "key", val.to_protobuf()
elif isinstance(val, bool):
name, value = "boolean", val
elif isinstance(val, float):
name, value = "double", val
elif isinstance(val, six.integer_types):
name, value = "integer", val
elif isinstance(val, six.text_type):
name, value = "string", val
elif isinstance(val, six.binary_type):
name, value = "blob", val
elif isinstance(val, Entity):
name, value = "entity", val
elif isinstance(val, dict):
entity_val = Entity(key=None)
entity_val.update(val)
name, value = "entity", entity_val
elif isinstance(val, list):
name, value = "array", val
elif isinstance(val, GeoPoint):
name, value = "geo_point", val.to_protobuf()
elif val is None:
name, value = "null", struct_pb2.NULL_VALUE
else:
raise ValueError("Unknown protobuf attr type", type(val))
return name + "_value", value |
<SYSTEM_TASK:>
Given a protobuf for a Value, get the correct value.
<END_TASK>
<USER_TASK:>
Description:
def _get_value_from_value_pb(value_pb):
"""Given a protobuf for a Value, get the correct value.
The Cloud Datastore Protobuf API returns a Property Protobuf which
has one value set and the rest blank. This function retrieves
the one value provided.
Some work is done to coerce the return value into a more useful type
(particularly in the case of a timestamp value, or a key value).
:type value_pb: :class:`.entity_pb2.Value`
:param value_pb: The Value Protobuf.
:rtype: object
:returns: The value provided by the Protobuf.
:raises: :class:`ValueError <exceptions.ValueError>` if no value type
has been set.
""" |
value_type = value_pb.WhichOneof("value_type")
if value_type == "timestamp_value":
result = _pb_timestamp_to_datetime(value_pb.timestamp_value)
elif value_type == "key_value":
result = key_from_protobuf(value_pb.key_value)
elif value_type == "boolean_value":
result = value_pb.boolean_value
elif value_type == "double_value":
result = value_pb.double_value
elif value_type == "integer_value":
result = value_pb.integer_value
elif value_type == "string_value":
result = value_pb.string_value
elif value_type == "blob_value":
result = value_pb.blob_value
elif value_type == "entity_value":
result = entity_from_protobuf(value_pb.entity_value)
elif value_type == "array_value":
result = [
_get_value_from_value_pb(value) for value in value_pb.array_value.values
]
elif value_type == "geo_point_value":
result = GeoPoint(
value_pb.geo_point_value.latitude, value_pb.geo_point_value.longitude
)
elif value_type == "null_value":
result = None
else:
raise ValueError("Value protobuf did not have any value set")
return result |
<SYSTEM_TASK:>
Assign 'val' to the correct subfield of 'value_pb'.
<END_TASK>
<USER_TASK:>
Description:
def _set_protobuf_value(value_pb, val):
"""Assign 'val' to the correct subfield of 'value_pb'.
The Protobuf API uses different attribute names based on value types
rather than inferring the type.
Some value types (entities, keys, lists) cannot be directly
assigned; this function handles them correctly.
:type value_pb: :class:`.entity_pb2.Value`
:param value_pb: The value protobuf to which the value is being assigned.
:type val: :class:`datetime.datetime`, boolean, float, integer, string,
:class:`google.cloud.datastore.key.Key`,
:class:`google.cloud.datastore.entity.Entity`
:param val: The value to be assigned.
""" |
attr, val = _pb_attr_value(val)
if attr == "key_value":
value_pb.key_value.CopyFrom(val)
elif attr == "timestamp_value":
value_pb.timestamp_value.CopyFrom(val)
elif attr == "entity_value":
entity_pb = entity_to_protobuf(val)
value_pb.entity_value.CopyFrom(entity_pb)
elif attr == "array_value":
if len(val) == 0:
array_value = entity_pb2.ArrayValue(values=[])
value_pb.array_value.CopyFrom(array_value)
else:
l_pb = value_pb.array_value.values
for item in val:
i_pb = l_pb.add()
_set_protobuf_value(i_pb, item)
elif attr == "geo_point_value":
value_pb.geo_point_value.CopyFrom(val)
else: # scalar, just assign
setattr(value_pb, attr, val) |
<SYSTEM_TASK:>
Construct a DB-API time value from the given ticks value.
<END_TASK>
<USER_TASK:>
Description:
def TimeFromTicks(ticks, tz=None):
"""Construct a DB-API time value from the given ticks value.
:type ticks: float
:param ticks:
a number of seconds since the epoch; see the documentation of the
standard Python time module for details.
:type tz: :class:`datetime.tzinfo`
:param tz: (Optional) time zone to use for conversion
:rtype: :class:`datetime.time`
:returns: time represented by ticks.
""" |
dt = datetime.datetime.fromtimestamp(ticks, tz=tz)
return dt.timetz() |
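For example (Python 3 shown; any ``tzinfo`` works):
import datetime

# 1514768523 seconds after the Unix epoch is 2018-01-01 01:02:03 UTC.
print(TimeFromTicks(1514768523, tz=datetime.timezone.utc))
# -> 01:02:03+00:00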
<SYSTEM_TASK:>
Return a fully-qualified registry string.
<END_TASK>
<USER_TASK:>
Description:
def registry_path(cls, project, location, registry):
"""Return a fully-qualified registry string.""" |
return google.api_core.path_template.expand(
"projects/{project}/locations/{location}/registries/{registry}",
project=project,
location=location,
registry=registry,
) |
<SYSTEM_TASK:>
Return a fully-qualified device string.
<END_TASK>
<USER_TASK:>
Description:
def device_path(cls, project, location, registry, device):
"""Return a fully-qualified device string.""" |
return google.api_core.path_template.expand(
"projects/{project}/locations/{location}/registries/{registry}/devices/{device}",
project=project,
location=location,
registry=registry,
device=device,
) |
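These path helpers are plain template expansions, so they can be called on the class without constructing a client; the ids below are illustrative.
from google.cloud import iot_v1

path = iot_v1.DeviceManagerClient.device_path(
    "my-project", "us-central1", "my-registry", "my-device"
)
print(path)
# -> 'projects/my-project/locations/us-central1/registries/my-registry/devices/my-device'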
<SYSTEM_TASK:>
Creates a device in a device registry.
<END_TASK>
<USER_TASK:>
Description:
def create_device(
self,
parent,
device,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates a device in a device registry.
Example:
>>> from google.cloud import iot_v1
>>>
>>> client = iot_v1.DeviceManagerClient()
>>>
>>> parent = client.registry_path('[PROJECT]', '[LOCATION]', '[REGISTRY]')
>>>
>>> # TODO: Initialize `device`:
>>> device = {}
>>>
>>> response = client.create_device(parent, device)
Args:
parent (str): The name of the device registry where this device should be created. For
example,
``projects/example-project/locations/us-central1/registries/my-registry``.
device (Union[dict, ~google.cloud.iot_v1.types.Device]): The device registration details. The field ``name`` must be empty. The
server generates ``name`` from the device registry ``id`` and the
``parent`` field.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.iot_v1.types.Device`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.iot_v1.types.Device` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "create_device" not in self._inner_api_calls:
self._inner_api_calls[
"create_device"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_device,
default_retry=self._method_configs["CreateDevice"].retry,
default_timeout=self._method_configs["CreateDevice"].timeout,
client_info=self._client_info,
)
request = device_manager_pb2.CreateDeviceRequest(parent=parent, device=device)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_device"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Sets the access control policy on the specified resource. Replaces any
<END_TASK>
<USER_TASK:>
Description:
def set_iam_policy(
self,
resource,
policy,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Sets the access control policy on the specified resource. Replaces any
existing policy.
Example:
>>> from google.cloud import iot_v1
>>>
>>> client = iot_v1.DeviceManagerClient()
>>>
>>> resource = client.registry_path('[PROJECT]', '[LOCATION]', '[REGISTRY]')
>>>
>>> # TODO: Initialize `policy`:
>>> policy = {}
>>>
>>> response = client.set_iam_policy(resource, policy)
Args:
resource (str): REQUIRED: The resource for which the policy is being specified.
``resource`` is usually specified as a path. For example, a Project
resource is specified as ``projects/{project}``.
policy (Union[dict, ~google.cloud.iot_v1.types.Policy]): REQUIRED: The complete policy to be applied to the ``resource``. The
size of the policy is limited to a few 10s of KB. An empty policy is a
valid policy but certain Cloud Platform services (such as Projects)
might reject them.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.iot_v1.types.Policy`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.iot_v1.types.Policy` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "set_iam_policy" not in self._inner_api_calls:
self._inner_api_calls[
"set_iam_policy"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.set_iam_policy,
default_retry=self._method_configs["SetIamPolicy"].retry,
default_timeout=self._method_configs["SetIamPolicy"].timeout,
client_info=self._client_info,
)
request = iam_policy_pb2.SetIamPolicyRequest(resource=resource, policy=policy)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("resource", resource)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["set_iam_policy"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Associates the device with the gateway.
<END_TASK>
<USER_TASK:>
Description:
def bind_device_to_gateway(
self,
parent,
gateway_id,
device_id,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Associates the device with the gateway.
Example:
>>> from google.cloud import iot_v1
>>>
>>> client = iot_v1.DeviceManagerClient()
>>>
>>> parent = client.registry_path('[PROJECT]', '[LOCATION]', '[REGISTRY]')
>>>
>>> # TODO: Initialize `gateway_id`:
>>> gateway_id = ''
>>>
>>> # TODO: Initialize `device_id`:
>>> device_id = ''
>>>
>>> response = client.bind_device_to_gateway(parent, gateway_id, device_id)
Args:
parent (str): The name of the registry. For example,
``projects/example-project/locations/us-central1/registries/my-registry``.
gateway_id (str): The value of ``gateway_id`` can be either the device numeric ID or the
user-defined device identifier.
device_id (str): The device to associate with the specified gateway. The value of
``device_id`` can be either the device numeric ID or the user-defined
device identifier.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.iot_v1.types.BindDeviceToGatewayResponse` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "bind_device_to_gateway" not in self._inner_api_calls:
self._inner_api_calls[
"bind_device_to_gateway"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.bind_device_to_gateway,
default_retry=self._method_configs["BindDeviceToGateway"].retry,
default_timeout=self._method_configs["BindDeviceToGateway"].timeout,
client_info=self._client_info,
)
request = device_manager_pb2.BindDeviceToGatewayRequest(
parent=parent, gateway_id=gateway_id, device_id=device_id
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["bind_device_to_gateway"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Return a fully-qualified queue string.
<END_TASK>
<USER_TASK:>
Description:
def queue_path(cls, project, location, queue):
"""Return a fully-qualified queue string.""" |
return google.api_core.path_template.expand(
"projects/{project}/locations/{location}/queues/{queue}",
project=project,
location=location,
queue=queue,
) |
<SYSTEM_TASK:>
Return a fully-qualified task string.
<END_TASK>
<USER_TASK:>
Description:
def task_path(cls, project, location, queue, task):
"""Return a fully-qualified task string.""" |
return google.api_core.path_template.expand(
"projects/{project}/locations/{location}/queues/{queue}/tasks/{task}",
project=project,
location=location,
queue=queue,
task=task,
) |
<SYSTEM_TASK:>
Creates a queue.
<END_TASK>
<USER_TASK:>
Description:
def create_queue(
self,
parent,
queue,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates a queue.
Queues created with this method allow tasks to live for a maximum of 31
days. After a task is 31 days old, the task will be deleted regardless
of whether it was dispatched or not.
WARNING: Using this method may have unintended side effects if you are
using an App Engine ``queue.yaml`` or ``queue.xml`` file to manage your
queues. Read `Overview of Queue Management and
queue.yaml <https://cloud.google.com/tasks/docs/queue-yaml>`__ before
using this method.
Example:
>>> from google.cloud import tasks_v2
>>>
>>> client = tasks_v2.CloudTasksClient()
>>>
>>> parent = client.location_path('[PROJECT]', '[LOCATION]')
>>>
>>> # TODO: Initialize `queue`:
>>> queue = {}
>>>
>>> response = client.create_queue(parent, queue)
Args:
parent (str): Required.
The location name in which the queue will be created. For example:
``projects/PROJECT_ID/locations/LOCATION_ID``
The list of allowed locations can be obtained by calling Cloud Tasks'
implementation of ``ListLocations``.
queue (Union[dict, ~google.cloud.tasks_v2.types.Queue]): Required.
The queue to create.
``Queue's name`` cannot be the same as an existing queue.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.tasks_v2.types.Queue`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.tasks_v2.types.Queue` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "create_queue" not in self._inner_api_calls:
self._inner_api_calls[
"create_queue"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_queue,
default_retry=self._method_configs["CreateQueue"].retry,
default_timeout=self._method_configs["CreateQueue"].timeout,
client_info=self._client_info,
)
request = cloudtasks_pb2.CreateQueueRequest(parent=parent, queue=queue)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_queue"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Creates a task and adds it to a queue.
<END_TASK>
<USER_TASK:>
Description:
def create_task(
self,
parent,
task,
response_view=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates a task and adds it to a queue.
Tasks cannot be updated after creation; there is no UpdateTask command.
- For ``App Engine queues``, the maximum task size is 100KB.
Example:
>>> from google.cloud import tasks_v2
>>>
>>> client = tasks_v2.CloudTasksClient()
>>>
>>> parent = client.queue_path('[PROJECT]', '[LOCATION]', '[QUEUE]')
>>>
>>> # TODO: Initialize `task`:
>>> task = {}
>>>
>>> response = client.create_task(parent, task)
Args:
parent (str): Required.
The queue name. For example:
``projects/PROJECT_ID/locations/LOCATION_ID/queues/QUEUE_ID``
The queue must already exist.
task (Union[dict, ~google.cloud.tasks_v2.types.Task]): Required.
The task to add.
Task names have the following format:
``projects/PROJECT_ID/locations/LOCATION_ID/queues/QUEUE_ID/tasks/TASK_ID``.
The user can optionally specify a task ``name``. If a name is not
specified then the system will generate a random unique task id, which
will be set in the task returned in the ``response``.
If ``schedule_time`` is not set or is in the past then Cloud Tasks will
set it to the current time.
Task De-duplication:
Explicitly specifying a task ID enables task de-duplication. If a task's
ID is identical to that of an existing task or a task that was deleted
or executed recently then the call will fail with ``ALREADY_EXISTS``. If
the task's queue was created using Cloud Tasks, then another task with
the same name can't be created for ~1 hour after the original task was
deleted or executed. If the task's queue was created using queue.yaml or
queue.xml, then another task with the same name can't be created for
~9 days after the original task was deleted or executed.
Because there is an extra lookup cost to identify duplicate task names,
these ``CreateTask`` calls have significantly increased latency. Using
hashed strings for the task id or for the prefix of the task id is
recommended. Choosing task ids that are sequential or have sequential
prefixes, for example using a timestamp, causes an increase in latency
and error rates in all task commands. The infrastructure relies on an
approximately uniform distribution of task ids to store and serve tasks
efficiently.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.tasks_v2.types.Task`
response_view (~google.cloud.tasks_v2.types.View): The response\_view specifies which subset of the ``Task`` will be
returned.
By default response\_view is ``BASIC``; not all information is retrieved
by default because some data, such as payloads, might be desirable to
return only when needed because of its large size or because of the
sensitivity of data that it contains.
Authorization for ``FULL`` requires ``cloudtasks.tasks.fullView``
`Google IAM <https://cloud.google.com/iam/>`__ permission on the
``Task`` resource.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.tasks_v2.types.Task` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "create_task" not in self._inner_api_calls:
self._inner_api_calls[
"create_task"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_task,
default_retry=self._method_configs["CreateTask"].retry,
default_timeout=self._method_configs["CreateTask"].timeout,
client_info=self._client_info,
)
request = cloudtasks_pb2.CreateTaskRequest(
parent=parent, task=task, response_view=response_view
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_task"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
A generator that yields exponential timeout values.
<END_TASK>
<USER_TASK:>
Description:
def _exponential_timeout_generator(initial, maximum, multiplier, deadline):
"""A generator that yields exponential timeout values.
Args:
initial (float): The initial timeout.
maximum (float): The maximum timeout.
multiplier (float): The multiplier applied to the timeout.
deadline (float): The overall deadline across all invocations.
Yields:
float: A timeout value.
""" |
if deadline is not None:
deadline_datetime = datetime_helpers.utcnow() + datetime.timedelta(
seconds=deadline
)
else:
deadline_datetime = datetime.datetime.max
timeout = initial
while True:
now = datetime_helpers.utcnow()
yield min(
# The calculated timeout based on invocations.
timeout,
# The set maximum timeout.
maximum,
# The remaining time before the deadline is reached.
float((deadline_datetime - now).seconds),
)
timeout = timeout * multiplier |
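A quick sketch of the yielded sequence, assuming the calls happen immediately, well before the 30-second deadline:
timeouts = _exponential_timeout_generator(
    initial=1.0, maximum=10.0, multiplier=2.0, deadline=30.0
)
print([next(timeouts) for _ in range(5)])
# -> [1.0, 2.0, 4.0, 8.0, 10.0]   (doubling, capped by ``maximum``; the deadline
#    term only takes over once less time than that remains)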
<SYSTEM_TASK:>
Formats parameters in operation in the way BigQuery expects.
<END_TASK>
<USER_TASK:>
Description:
def _format_operation(operation, parameters=None):
"""Formats parameters in operation in way BigQuery expects.
:type: str
:param operation: A Google BigQuery query string.
:type: Mapping[str, Any] or Sequence[Any]
:param parameters: Optional parameter values.
:rtype: str
:returns: A formatted query string.
:raises: :class:`~google.cloud.bigquery.dbapi.ProgrammingError`
if a parameter used in the operation is not found in the
``parameters`` argument.
""" |
if parameters is None:
return operation
if isinstance(parameters, collections_abc.Mapping):
return _format_operation_dict(operation, parameters)
return _format_operation_list(operation, parameters) |
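The two private ``_format_operation_*`` helpers are not shown above; as a rough sketch of the expected rewriting from DB-API ``pyformat``/``format`` placeholders into BigQuery parameter syntax:
print(_format_operation("SELECT name FROM people WHERE age > %(age)s", {"age": 30}))
# -> 'SELECT name FROM people WHERE age > @`age`'
print(_format_operation("SELECT name FROM people WHERE age > %s", [30]))
# -> 'SELECT name FROM people WHERE age > ?'
print(_format_operation("SELECT 1"))
# -> 'SELECT 1'   (returned unchanged when no parameters are given)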
<SYSTEM_TASK:>
Set description from schema.
<END_TASK>
<USER_TASK:>
Description:
def _set_description(self, schema):
"""Set description from schema.
:type schema: Sequence[google.cloud.bigquery.schema.SchemaField]
:param schema: A description of fields in the schema.
""" |
if schema is None:
self.description = None
return
self.description = tuple(
[
Column(
name=field.name,
type_code=field.field_type,
display_size=None,
internal_size=None,
precision=None,
scale=None,
null_ok=field.is_nullable,
)
for field in schema
]
) |
<SYSTEM_TASK:>
Set the rowcount from query results.
<END_TASK>
<USER_TASK:>
Description:
def _set_rowcount(self, query_results):
"""Set the rowcount from query results.
Normally, this sets rowcount to the number of rows returned by the
query, but if it was a DML statement, it sets rowcount to the number
of modified rows.
:type query_results:
:class:`~google.cloud.bigquery.query._QueryResults`
:param query_results: results of a query
""" |
total_rows = 0
num_dml_affected_rows = query_results.num_dml_affected_rows
if query_results.total_rows is not None and query_results.total_rows > 0:
total_rows = query_results.total_rows
if num_dml_affected_rows is not None and num_dml_affected_rows > 0:
total_rows = num_dml_affected_rows
self.rowcount = total_rows |
<SYSTEM_TASK:>
Prepare and execute a database operation.
<END_TASK>
<USER_TASK:>
Description:
def execute(self, operation, parameters=None, job_id=None):
"""Prepare and execute a database operation.
.. note::
When setting query parameters, values which are "text"
(``unicode`` in Python2, ``str`` in Python3) will use
the 'STRING' BigQuery type. Values which are "bytes" (``str`` in
Python2, ``bytes`` in Python3) will use the 'BYTES' type.
A `~datetime.datetime` parameter without timezone information uses
the 'DATETIME' BigQuery type (example: Global Pi Day Celebration
March 14, 2017 at 1:59pm). A `~datetime.datetime` parameter with
timezone information uses the 'TIMESTAMP' BigQuery type (example:
a wedding on April 29, 2011 at 11am, British Summer Time).
For more information about BigQuery data types, see:
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types
``STRUCT``/``RECORD`` and ``REPEATED`` query parameters are not
yet supported. See:
https://github.com/GoogleCloudPlatform/google-cloud-python/issues/3524
:type operation: str
:param operation: A Google BigQuery query string.
:type parameters: Mapping[str, Any] or Sequence[Any]
:param parameters:
(Optional) dictionary or sequence of parameter values.
:type job_id: str
:param job_id: (Optional) The job_id to use. If not set, a job ID
is generated at random.
""" |
self._query_data = None
self._query_job = None
client = self.connection._client
# The DB-API uses the pyformat formatting, since the way BigQuery does
# query parameters was not one of the standard options. Convert both
# the query and the parameters to the format expected by the client
# libraries.
formatted_operation = _format_operation(operation, parameters=parameters)
query_parameters = _helpers.to_query_parameters(parameters)
config = job.QueryJobConfig()
config.query_parameters = query_parameters
config.use_legacy_sql = False
self._query_job = client.query(
formatted_operation, job_config=config, job_id=job_id
)
# Wait for the query to finish.
try:
self._query_job.result()
except google.cloud.exceptions.GoogleCloudError as exc:
raise exceptions.DatabaseError(exc)
query_results = self._query_job._query_results
self._set_rowcount(query_results)
self._set_description(query_results.schema) |
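A hedged end-to-end DB-API sketch; the public dataset below is real, and credentials and project come from the environment.
from google.cloud.bigquery import dbapi

connection = dbapi.connect()    # builds a bigquery.Client from default credentials
cursor = connection.cursor()
cursor.execute(
    "SELECT name, number FROM `bigquery-public-data.usa_names.usa_1910_2013` "
    "WHERE state = %(state)s LIMIT 5",
    {"state": "TX"},
)
print(cursor.rowcount)
for row in cursor.fetchall():
    print(row)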
<SYSTEM_TASK:>
Try to start fetching data, if not yet started.
<END_TASK>
<USER_TASK:>
Description:
def _try_fetch(self, size=None):
"""Try to start fetching data, if not yet started.
Mutates self to indicate that iteration has started.
""" |
if self._query_job is None:
raise exceptions.InterfaceError(
"No query results: execute() must be called before fetch."
)
is_dml = (
self._query_job.statement_type
and self._query_job.statement_type.upper() != "SELECT"
)
if is_dml:
self._query_data = iter([])
return
if self._query_data is None:
client = self.connection._client
rows_iter = client.list_rows(
self._query_job.destination,
selected_fields=self._query_job._query_results.schema,
page_size=self.arraysize,
)
self._query_data = iter(rows_iter) |
<SYSTEM_TASK:>
Merge pending chunk with next value.
<END_TASK>
<USER_TASK:>
Description:
def _merge_chunk(self, value):
"""Merge pending chunk with next value.
:type value: :class:`~google.protobuf.struct_pb2.Value`
:param value: continuation of chunked value from previous
partial result set.
:rtype: :class:`~google.protobuf.struct_pb2.Value`
:returns: the merged value
""" |
current_column = len(self._current_row)
field = self.fields[current_column]
merged = _merge_by_type(self._pending_chunk, value, field.type)
self._pending_chunk = None
return merged |
<SYSTEM_TASK:>
Consume the next partial result set from the stream.
<END_TASK>
<USER_TASK:>
Description:
def _consume_next(self):
"""Consume the next partial result set from the stream.
Parse the result set into new/existing rows in :attr:`_rows`
""" |
response = six.next(self._response_iterator)
self._counter += 1
if self._metadata is None: # first response
metadata = self._metadata = response.metadata
source = self._source
if source is not None and source._transaction_id is None:
source._transaction_id = metadata.transaction.id
if response.HasField("stats"): # last response
self._stats = response.stats
values = list(response.values)
if self._pending_chunk is not None:
values[0] = self._merge_chunk(values[0])
if response.chunked_value:
self._pending_chunk = values.pop()
self._merge_values(values) |
<SYSTEM_TASK:>
Return exactly one result, or None if there are no results.
<END_TASK>
<USER_TASK:>
Description:
def one_or_none(self):
"""Return exactly one result, or None if there are no results.
:raises: :exc:`ValueError`: If there are multiple results.
:raises: :exc:`RuntimeError`: If consumption has already occurred,
in whole or in part.
""" |
# Sanity check: Has consumption of this query already started?
# If it has, then this is an exception.
if self._metadata is not None:
raise RuntimeError(
"Can not call `.one` or `.one_or_none` after "
"stream consumption has already started."
)
# Consume the first result of the stream.
# If there is no first result, then return None.
iterator = iter(self)
try:
answer = next(iterator)
except StopIteration:
return None
# Attempt to consume more. This should no-op; if we get additional
# rows, then this is an error case.
try:
next(iterator)
raise ValueError("Expected one result; got more.")
except StopIteration:
return answer |
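A hedged usage sketch with the Cloud Spanner client; instance, database, and table names are illustrative.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(
        "SELECT email FROM Users WHERE user_id = @id",
        params={"id": 1234},
        param_types={"id": spanner.param_types.INT64},
    )
    row = results.one_or_none()   # exactly one row, or None if nothing matched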
<SYSTEM_TASK:>
Return a fully-qualified product_set string.
<END_TASK>
<USER_TASK:>
Description:
def product_set_path(cls, project, location, product_set):
"""Return a fully-qualified product_set string.""" |
return google.api_core.path_template.expand(
"projects/{project}/locations/{location}/productSets/{product_set}",
project=project,
location=location,
product_set=product_set,
) |
<SYSTEM_TASK:>
Return a fully-qualified product string.
<END_TASK>
<USER_TASK:>
Description:
def product_path(cls, project, location, product):
"""Return a fully-qualified product string.""" |
return google.api_core.path_template.expand(
"projects/{project}/locations/{location}/products/{product}",
project=project,
location=location,
product=product,
) |
<SYSTEM_TASK:>
Return a fully-qualified reference_image string.
<END_TASK>
<USER_TASK:>
Description:
def reference_image_path(cls, project, location, product, reference_image):
"""Return a fully-qualified reference_image string.""" |
return google.api_core.path_template.expand(
"projects/{project}/locations/{location}/products/{product}/referenceImages/{reference_image}",
project=project,
location=location,
product=product,
reference_image=reference_image,
) |
<SYSTEM_TASK:>
Creates and returns a new ProductSet resource.
<END_TASK>
<USER_TASK:>
Description:
def create_product_set(
self,
parent,
product_set,
product_set_id,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates and returns a new ProductSet resource.
Possible errors:
- Returns INVALID\_ARGUMENT if display\_name is missing, or is longer
than 4096 characters.
Example:
>>> from google.cloud import vision_v1p3beta1
>>>
>>> client = vision_v1p3beta1.ProductSearchClient()
>>>
>>> parent = client.location_path('[PROJECT]', '[LOCATION]')
>>>
>>> # TODO: Initialize `product_set`:
>>> product_set = {}
>>>
>>> # TODO: Initialize `product_set_id`:
>>> product_set_id = ''
>>>
>>> response = client.create_product_set(parent, product_set, product_set_id)
Args:
parent (str): The project in which the ProductSet should be created.
Format is ``projects/PROJECT_ID/locations/LOC_ID``.
product_set (Union[dict, ~google.cloud.vision_v1p3beta1.types.ProductSet]): The ProductSet to create.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.vision_v1p3beta1.types.ProductSet`
product_set_id (str): A user-supplied resource id for this ProductSet. If set, the server will
attempt to use this value as the resource id. If it is already in use,
an error is returned with code ALREADY\_EXISTS. Must be at most 128
characters long. It cannot contain the character ``/``.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.vision_v1p3beta1.types.ProductSet` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "create_product_set" not in self._inner_api_calls:
self._inner_api_calls[
"create_product_set"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_product_set,
default_retry=self._method_configs["CreateProductSet"].retry,
default_timeout=self._method_configs["CreateProductSet"].timeout,
client_info=self._client_info,
)
request = product_search_service_pb2.CreateProductSetRequest(
parent=parent, product_set=product_set, product_set_id=product_set_id
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_product_set"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Creates and returns a new ReferenceImage resource.
<END_TASK>
<USER_TASK:>
Description:
def create_reference_image(
self,
parent,
reference_image,
reference_image_id,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates and returns a new ReferenceImage resource.
The ``bounding_poly`` field is optional. If ``bounding_poly`` is not
specified, the system will try to detect regions of interest in the
image that are compatible with the product\_category on the parent
product. If it is specified, detection is ALWAYS skipped. The system
converts polygons into non-rotated rectangles.
Note that the pipeline will resize the image if the image resolution is
too large to process (above 50MP).
Possible errors:
- Returns INVALID\_ARGUMENT if the image\_uri is missing or longer than
4096 characters.
- Returns INVALID\_ARGUMENT if the product does not exist.
- Returns INVALID\_ARGUMENT if bounding\_poly is not provided, and
nothing compatible with the parent product's product\_category is
detected.
- Returns INVALID\_ARGUMENT if bounding\_poly contains more than 10
polygons.
Example:
>>> from google.cloud import vision_v1p3beta1
>>>
>>> client = vision_v1p3beta1.ProductSearchClient()
>>>
>>> parent = client.product_path('[PROJECT]', '[LOCATION]', '[PRODUCT]')
>>>
>>> # TODO: Initialize `reference_image`:
>>> reference_image = {}
>>>
>>> # TODO: Initialize `reference_image_id`:
>>> reference_image_id = ''
>>>
>>> response = client.create_reference_image(parent, reference_image, reference_image_id)
Args:
parent (str): Resource name of the product in which to create the reference image.
Format is ``projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID``.
reference_image (Union[dict, ~google.cloud.vision_v1p3beta1.types.ReferenceImage]): The reference image to create.
If an image ID is specified, it is ignored.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.vision_v1p3beta1.types.ReferenceImage`
reference_image_id (str): A user-supplied resource id for the ReferenceImage to be added. If set,
the server will attempt to use this value as the resource id. If it is
already in use, an error is returned with code ALREADY\_EXISTS. Must be
at most 128 characters long. It cannot contain the character ``/``.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.vision_v1p3beta1.types.ReferenceImage` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "create_reference_image" not in self._inner_api_calls:
self._inner_api_calls[
"create_reference_image"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_reference_image,
default_retry=self._method_configs["CreateReferenceImage"].retry,
default_timeout=self._method_configs["CreateReferenceImage"].timeout,
client_info=self._client_info,
)
request = product_search_service_pb2.CreateReferenceImageRequest(
parent=parent,
reference_image=reference_image,
reference_image_id=reference_image_id,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_reference_image"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
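To make the ``bounding_poly`` discussion above concrete, here is a hedged sketch of creating a reference image with an explicit polygon, which skips automatic region-of-interest detection. The field names follow the ``ReferenceImage`` and ``BoundingPoly`` messages; the project, bucket URI, and vertex coordinates are placeholders:

from google.cloud import vision_v1p3beta1

client = vision_v1p3beta1.ProductSearchClient()
parent = client.product_path("my-project", "us-west1", "my-product")

reference_image = {
    # Placeholder Cloud Storage URI; must be at most 4096 characters.
    "uri": "gs://my-bucket/shoe-front.jpg",
    # At most 10 polygons; they are converted to non-rotated rectangles.
    "bounding_polys": [
        {
            "vertices": [
                {"x": 0, "y": 0},
                {"x": 640, "y": 0},
                {"x": 640, "y": 480},
                {"x": 0, "y": 480},
            ]
        }
    ],
}

image = client.create_reference_image(parent, reference_image, "shoe-front")
print(image.name)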
<SYSTEM_TASK:>
Asynchronous API that imports a list of reference images to specified
<END_TASK>
<USER_TASK:>
Description:
def import_product_sets(
self,
parent,
input_config,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Asynchronous API that imports a list of reference images to specified
product sets based on a list of image information.
The ``google.longrunning.Operation`` API can be used to keep track of
the progress and results of the request. ``Operation.metadata`` contains
``BatchOperationMetadata`` (progress). ``Operation.response`` contains
``ImportProductSetsResponse`` (results).
The input source of this method is a CSV file on Google Cloud Storage.
For the format of the CSV file, see
``ImportProductSetsGcsSource.csv_file_uri``.
Example:
>>> from google.cloud import vision_v1p3beta1
>>>
>>> client = vision_v1p3beta1.ProductSearchClient()
>>>
>>> parent = client.location_path('[PROJECT]', '[LOCATION]')
>>>
>>> # TODO: Initialize `input_config`:
>>> input_config = {}
>>>
>>> response = client.import_product_sets(parent, input_config)
>>>
>>> def callback(operation_future):
... # Handle result.
... result = operation_future.result()
>>>
>>> response.add_done_callback(callback)
>>>
>>> # Handle metadata.
>>> metadata = response.metadata()
Args:
parent (str): The project in which the ProductSets should be imported.
Format is ``projects/PROJECT_ID/locations/LOC_ID``.
input_config (Union[dict, ~google.cloud.vision_v1p3beta1.types.ImportProductSetsInputConfig]): The input content for the list of requests.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.vision_v1p3beta1.types.ImportProductSetsInputConfig`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.vision_v1p3beta1.types._OperationFuture` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "import_product_sets" not in self._inner_api_calls:
self._inner_api_calls[
"import_product_sets"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.import_product_sets,
default_retry=self._method_configs["ImportProductSets"].retry,
default_timeout=self._method_configs["ImportProductSets"].timeout,
client_info=self._client_info,
)
request = product_search_service_pb2.ImportProductSetsRequest(
parent=parent, input_config=input_config
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
operation = self._inner_api_calls["import_product_sets"](
request, retry=retry, timeout=timeout, metadata=metadata
)
return google.api_core.operation.from_gapic(
operation,
self.transport._operations_client,
product_search_service_pb2.ImportProductSetsResponse,
metadata_type=product_search_service_pb2.BatchOperationMetadata,
) |
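If blocking on the result is preferable to the callback style shown in the docstring, the returned future can simply be waited on. A sketch, assuming the ``gcs_source.csv_file_uri`` input shape and the ``statuses``/``reference_images`` response fields; all identifiers and URIs are placeholders:

from google.cloud import vision_v1p3beta1

client = vision_v1p3beta1.ProductSearchClient()
parent = client.location_path("my-project", "us-west1")

# Placeholder CSV location; see ImportProductSetsGcsSource.csv_file_uri for the format.
input_config = {"gcs_source": {"csv_file_uri": "gs://my-bucket/product_sets.csv"}}

operation = client.import_product_sets(parent, input_config)

# Block until the long-running operation completes (timeout in seconds).
result = operation.result(timeout=600)

# One status per CSV line; non-zero codes indicate rows that failed to import.
for status, image in zip(result.statuses, result.reference_images):
    print(status.code, image.uri)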
<SYSTEM_TASK:>
Special helper to parse ``LogEntry`` protobuf into a dictionary.
<END_TASK>
<USER_TASK:>
Description:
def _parse_log_entry(entry_pb):
"""Special helper to parse ``LogEntry`` protobuf into a dictionary.
The ``proto_payload`` field in ``LogEntry`` is of type ``Any``. This
can be problematic if the type URL in the payload isn't in the
``google.protobuf`` registry. To help with parsing unregistered types,
this function will remove ``proto_payload`` before parsing.
:type entry_pb: :class:`.log_entry_pb2.LogEntry`
:param entry_pb: Log entry protobuf.
:rtype: dict
:returns: The parsed log entry. The ``protoPayload`` key may contain
the raw ``Any`` protobuf from ``entry_pb.proto_payload`` if
it could not be parsed.
""" |
try:
return MessageToDict(entry_pb)
except TypeError:
if entry_pb.HasField("proto_payload"):
proto_payload = entry_pb.proto_payload
entry_pb.ClearField("proto_payload")
entry_mapping = MessageToDict(entry_pb)
entry_mapping["protoPayload"] = proto_payload
return entry_mapping
else:
raise |
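A rough illustration of the fallback path, assuming a ``LogEntry`` whose ``proto_payload`` carries a type URL that is not in the ``google.protobuf`` registry. The protobuf import path is an assumption and depends on the installed ``google-cloud-logging`` version:

from google.cloud.logging_v2.proto import log_entry_pb2  # assumed import path
from google.protobuf import any_pb2

entry_pb = log_entry_pb2.LogEntry(log_name="projects/my-project/logs/syslog")
entry_pb.proto_payload.CopyFrom(
    any_pb2.Any(type_url="type.googleapis.com/unregistered.Type", value=b"")
)

parsed = _parse_log_entry(entry_pb)
# When MessageToDict cannot resolve the type URL, the raw Any message is
# preserved under the "protoPayload" key instead of a parsed dictionary.
print(type(parsed["protoPayload"]))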
<SYSTEM_TASK:>
Convert a log entry protobuf to the native object.
<END_TASK>
<USER_TASK:>
Description:
def _item_to_entry(iterator, entry_pb, loggers):
"""Convert a log entry protobuf to the native object.
.. note::
This method does not have the correct signature to be used as
the ``item_to_value`` argument to
:class:`~google.api_core.page_iterator.Iterator`. It is intended to be
patched with a mutable ``loggers`` argument that can be updated
on subsequent calls. For an example, see how the method is
used above in :meth:`_LoggingAPI.list_entries`.
:type iterator: :class:`~google.api_core.page_iterator.Iterator`
:param iterator: The iterator that is currently in use.
:type entry_pb: :class:`.log_entry_pb2.LogEntry`
:param entry_pb: Log entry protobuf returned from the API.
:type loggers: dict
:param loggers:
A mapping of logger fullnames -> loggers. If the logger
that owns the entry is not in ``loggers``, the entry
will have a newly-created logger.
:rtype: :class:`~google.cloud.logging.entries._BaseEntry`
:returns: The next log entry in the page.
""" |
resource = _parse_log_entry(entry_pb)
return entry_from_resource(resource, iterator.client, loggers) |
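As the note in the docstring says, the extra ``loggers`` argument is bound before the helper is handed to the page iterator; a minimal sketch of that wiring using ``functools.partial`` (the iterator construction itself is omitted):

import functools

# Mutable cache shared across pages, so repeated entries from the same
# logger resolve to the same Logger object.
loggers = {}

# Binding loggers as a keyword argument leaves the (iterator, entry_pb)
# signature that page_iterator.Iterator expects for item_to_value.
item_to_value = functools.partial(_item_to_entry, loggers=loggers)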
<SYSTEM_TASK:>
Create an instance of the Logging API adapter.
<END_TASK>
<USER_TASK:>
Description:
def make_logging_api(client):
"""Create an instance of the Logging API adapter.
:type client: :class:`~google.cloud.logging.client.Client`
:param client: The client that holds configuration details.
:rtype: :class:`_LoggingAPI`
:returns: A logging API instance with the proper credentials.
""" |
generated = LoggingServiceV2Client(
credentials=client._credentials, client_info=_CLIENT_INFO
)
return _LoggingAPI(generated, client) |
<SYSTEM_TASK:>
Create an instance of the Metrics API adapter.
<END_TASK>
<USER_TASK:>
Description:
def make_metrics_api(client):
"""Create an instance of the Metrics API adapter.
:type client: :class:`~google.cloud.logging.client.Client`
:param client: The client that holds configuration details.
:rtype: :class:`_MetricsAPI`
:returns: A metrics API instance with the proper credentials.
""" |
generated = MetricsServiceV2Client(
credentials=client._credentials, client_info=_CLIENT_INFO
)
return _MetricsAPI(generated, client) |
<SYSTEM_TASK:>
Create an instance of the Sinks API adapter.
<END_TASK>
<USER_TASK:>
Description:
def make_sinks_api(client):
"""Create an instance of the Sinks API adapter.
:type client: :class:`~google.cloud.logging.client.Client`
:param client: The client that holds configuration details.
:rtype: :class:`_SinksAPI`
:returns: A sinks API instance with the proper credentials.
""" |
generated = ConfigServiceV2Client(
credentials=client._credentials, client_info=_CLIENT_INFO
)
return _SinksAPI(generated, client) |
<SYSTEM_TASK:>
Return a fully-qualified tenant string.
<END_TASK>
<USER_TASK:>
Description:
def tenant_path(cls, project, tenant):
"""Return a fully-qualified tenant string.""" |
return google.api_core.path_template.expand(
"projects/{project}/tenants/{tenant}", project=project, tenant=tenant
) |
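The expansion simply interpolates both values into the template; for example (project and tenant IDs are placeholders):

from google.cloud import talent_v4beta1

path = talent_v4beta1.TenantServiceClient.tenant_path("my-project", "my-tenant")
assert path == "projects/my-project/tenants/my-tenant"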
<SYSTEM_TASK:>
Creates a new tenant entity.
<END_TASK>
<USER_TASK:>
Description:
def create_tenant(
self,
parent,
tenant,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates a new tenant entity.
Example:
>>> from google.cloud import talent_v4beta1
>>>
>>> client = talent_v4beta1.TenantServiceClient()
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # TODO: Initialize `tenant`:
>>> tenant = {}
>>>
>>> response = client.create_tenant(parent, tenant)
Args:
parent (str): Required.
Resource name of the project under which the tenant is created.
The format is "projects/{project\_id}", for example,
"projects/api-test-project".
tenant (Union[dict, ~google.cloud.talent_v4beta1.types.Tenant]): Required.
The tenant to be created.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.talent_v4beta1.types.Tenant`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.talent_v4beta1.types.Tenant` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "create_tenant" not in self._inner_api_calls:
self._inner_api_calls[
"create_tenant"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_tenant,
default_retry=self._method_configs["CreateTenant"].retry,
default_timeout=self._method_configs["CreateTenant"].timeout,
client_info=self._client_info,
)
request = tenant_service_pb2.CreateTenantRequest(parent=parent, tenant=tenant)
return self._inner_api_calls["create_tenant"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Retrieves the specified tenant.
<END_TASK>
<USER_TASK:>
Description:
def get_tenant(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Retrieves the specified tenant.
Example:
>>> from google.cloud import talent_v4beta1
>>>
>>> client = talent_v4beta1.TenantServiceClient()
>>>
>>> name = client.tenant_path('[PROJECT]', '[TENANT]')
>>>
>>> response = client.get_tenant(name)
Args:
name (str): Required.
The resource name of the tenant to be retrieved.
The format is "projects/{project\_id}/tenants/{tenant\_id}", for
example, "projects/api-test-project/tenants/foo".
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.talent_v4beta1.types.Tenant` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "get_tenant" not in self._inner_api_calls:
self._inner_api_calls[
"get_tenant"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.get_tenant,
default_retry=self._method_configs["GetTenant"].retry,
default_timeout=self._method_configs["GetTenant"].timeout,
client_info=self._client_info,
)
request = tenant_service_pb2.GetTenantRequest(name=name)
return self._inner_api_calls["get_tenant"](
request, retry=retry, timeout=timeout, metadata=metadata
) |