<SYSTEM_TASK:>
Upload the contents of a table from a pandas DataFrame.
<END_TASK>
<USER_TASK:>
Description:
def load_table_from_dataframe(
self,
dataframe,
destination,
num_retries=_DEFAULT_NUM_RETRIES,
job_id=None,
job_id_prefix=None,
location=None,
project=None,
job_config=None,
):
"""Upload the contents of a table from a pandas DataFrame.
Similar to :meth:`load_table_from_uri`, this method creates, starts and
returns a :class:`~google.cloud.bigquery.job.LoadJob`.
Arguments:
dataframe (pandas.DataFrame):
A :class:`~pandas.DataFrame` containing the data to load.
destination (google.cloud.bigquery.table.TableReference):
The destination table to use for loading the data. If it is an
existing table, the schema of the :class:`~pandas.DataFrame`
must match the schema of the destination table. If the table
does not yet exist, the schema is inferred from the
:class:`~pandas.DataFrame`.
If a string is passed in, this method attempts to create a
table reference from a string using
:func:`google.cloud.bigquery.table.TableReference.from_string`.
Keyword Arguments:
num_retries (int, optional): Number of upload retries.
job_id (str, optional): Name of the job.
job_id_prefix (str, optional):
The user-provided prefix for a randomly generated
job ID. This parameter will be ignored if a ``job_id`` is
also given.
location (str):
Location where to run the job. Must match the location of the
destination table.
project (str, optional):
Project ID of the project where the job runs. Defaults
to the client's project.
job_config (google.cloud.bigquery.job.LoadJobConfig, optional):
Extra configuration options for the job.
Returns:
google.cloud.bigquery.job.LoadJob: A new load job.
Raises:
ImportError:
If a usable parquet engine cannot be found. This method
requires :mod:`pyarrow` or :mod:`fastparquet` to be
installed.
""" |
job_id = _make_job_id(job_id, job_id_prefix)
if job_config is None:
job_config = job.LoadJobConfig()
job_config.source_format = job.SourceFormat.PARQUET
if location is None:
location = self.location
tmpfd, tmppath = tempfile.mkstemp(suffix="_job_{}.parquet".format(job_id[:8]))
os.close(tmpfd)
try:
dataframe.to_parquet(tmppath)
with open(tmppath, "rb") as parquet_file:
return self.load_table_from_file(
parquet_file,
destination,
num_retries=num_retries,
rewind=True,
job_id=job_id,
job_id_prefix=job_id_prefix,
location=location,
project=project,
job_config=job_config,
)
finally:
os.remove(tmppath) |
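A minimal usage sketch (names such as ``my_dataset`` and ``my_table`` are illustrative; assumes an authenticated client and that ``pyarrow`` or ``fastparquet`` is installed):
import pandas
from google.cloud import bigquery

client = bigquery.Client()
dataframe = pandas.DataFrame({"name": ["alpha", "beta"], "value": [1, 2]})
table_ref = client.dataset("my_dataset").table("my_table")
load_job = client.load_table_from_dataframe(dataframe, table_ref)
load_job.result()  # Block until the load job finishes.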
<SYSTEM_TASK:>
Copy one or more tables to another table.
<END_TASK>
<USER_TASK:>
Description:
def copy_table(
self,
sources,
destination,
job_id=None,
job_id_prefix=None,
location=None,
project=None,
job_config=None,
retry=DEFAULT_RETRY,
):
"""Copy one or more tables to another table.
See
https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.copy
Arguments:
sources (Union[ \
:class:`~google.cloud.bigquery.table.Table`, \
:class:`~google.cloud.bigquery.table.TableReference`, \
str, \
Sequence[ \
Union[ \
:class:`~google.cloud.bigquery.table.Table`, \
:class:`~google.cloud.bigquery.table.TableReference`, \
str, \
] \
], \
]):
Table or tables to be copied.
destination (Union[ \
:class:`~google.cloud.bigquery.table.Table`, \
:class:`~google.cloud.bigquery.table.TableReference`, \
str, \
]):
Table into which data is to be copied.
Keyword Arguments:
job_id (str): (Optional) The ID of the job.
job_id_prefix (str):
(Optional) The user-provided prefix for a randomly generated
job ID. This parameter will be ignored if a ``job_id`` is
also given.
location (str):
Location where to run the job. Must match the location of any
source table as well as the destination table.
project (str):
Project ID of the project where the job runs. Defaults
to the client's project.
job_config (google.cloud.bigquery.job.CopyJobConfig):
(Optional) Extra configuration options for the job.
retry (google.api_core.retry.Retry):
(Optional) How to retry the RPC.
Returns:
google.cloud.bigquery.job.CopyJob: A new copy job instance.
""" |
job_id = _make_job_id(job_id, job_id_prefix)
if project is None:
project = self.project
if location is None:
location = self.location
job_ref = job._JobReference(job_id, project=project, location=location)
# sources can be one of many different input types. (string, Table,
# TableReference, or a sequence of any of those.) Convert them all to a
# list of TableReferences.
#
# _table_arg_to_table_ref leaves lists unmodified.
sources = _table_arg_to_table_ref(sources, default_project=self.project)
if not isinstance(sources, collections_abc.Sequence):
sources = [sources]
sources = [
_table_arg_to_table_ref(source, default_project=self.project)
for source in sources
]
destination = _table_arg_to_table_ref(destination, default_project=self.project)
copy_job = job.CopyJob(
job_ref, sources, destination, client=self, job_config=job_config
)
copy_job._begin(retry=retry)
return copy_job |
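A minimal usage sketch (dataset and table names are illustrative; assumes the dataset and the source table already exist):
from google.cloud import bigquery

client = bigquery.Client()
source = client.dataset("my_dataset").table("source_table")
destination = client.dataset("my_dataset").table("destination_table")
copy_job = client.copy_table(source, destination)
copy_job.result()  # Wait for the copy job to complete.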
<SYSTEM_TASK:>
Start a job to extract a table into Cloud Storage files.
<END_TASK>
<USER_TASK:>
Description:
def extract_table(
self,
source,
destination_uris,
job_id=None,
job_id_prefix=None,
location=None,
project=None,
job_config=None,
retry=DEFAULT_RETRY,
):
"""Start a job to extract a table into Cloud Storage files.
See
https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.extract
Arguments:
source (Union[ \
:class:`google.cloud.bigquery.table.Table`, \
:class:`google.cloud.bigquery.table.TableReference`, \
str, \
]):
Table to be extracted.
destination_uris (Union[str, Sequence[str]]):
URIs of Cloud Storage file(s) into which table data is to be
extracted; in format
``gs://<bucket_name>/<object_name_or_glob>``.
Keyword Arguments:
job_id (str): (Optional) The ID of the job.
job_id_prefix (str):
(Optional) The user-provided prefix for a randomly generated
job ID. This parameter will be ignored if a ``job_id`` is
also given.
location (str):
Location where to run the job. Must match the location of the
source table.
project (str):
Project ID of the project where the job runs. Defaults
to the client's project.
job_config (google.cloud.bigquery.job.ExtractJobConfig):
(Optional) Extra configuration options for the job.
retry (google.api_core.retry.Retry):
(Optional) How to retry the RPC.
Returns:
google.cloud.bigquery.job.ExtractJob: A new extract job instance.
""" |
job_id = _make_job_id(job_id, job_id_prefix)
if project is None:
project = self.project
if location is None:
location = self.location
job_ref = job._JobReference(job_id, project=project, location=location)
source = _table_arg_to_table_ref(source, default_project=self.project)
if isinstance(destination_uris, six.string_types):
destination_uris = [destination_uris]
extract_job = job.ExtractJob(
job_ref, source, destination_uris, client=self, job_config=job_config
)
extract_job._begin(retry=retry)
return extract_job |
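A minimal usage sketch (bucket and table names are illustrative; the wildcard in the destination URI lets BigQuery shard large exports across files):
from google.cloud import bigquery

client = bigquery.Client()
source = client.dataset("my_dataset").table("my_table")
extract_job = client.extract_table(source, "gs://my-bucket/my_table_*.csv")
extract_job.result()  # Wait for the export to finish.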
<SYSTEM_TASK:>
Run a SQL query.
<END_TASK>
<USER_TASK:>
Description:
def query(
self,
query,
job_config=None,
job_id=None,
job_id_prefix=None,
location=None,
project=None,
retry=DEFAULT_RETRY,
):
"""Run a SQL query.
See
https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query
Arguments:
query (str):
SQL query to be executed. Defaults to the standard SQL
dialect. Use the ``job_config`` parameter to change dialects.
Keyword Arguments:
job_config (google.cloud.bigquery.job.QueryJobConfig):
(Optional) Extra configuration options for the job.
To override any options that were previously set in
the ``default_query_job_config`` given to the
``Client`` constructor, manually set those options to ``None``,
or whatever value is preferred.
job_id (str): (Optional) ID to use for the query job.
job_id_prefix (str):
(Optional) The prefix to use for a randomly generated job ID.
This parameter will be ignored if a ``job_id`` is also given.
location (str):
Location where to run the job. Must match the location of
any table used in the query as well as the destination table.
project (str):
Project ID of the project where the job runs. Defaults
to the client's project.
retry (google.api_core.retry.Retry):
(Optional) How to retry the RPC.
Returns:
google.cloud.bigquery.job.QueryJob: A new query job instance.
""" |
job_id = _make_job_id(job_id, job_id_prefix)
if project is None:
project = self.project
if location is None:
location = self.location
if self._default_query_job_config:
if job_config:
# Fill in any option that is set on the default config but not on
# the incoming config; values on the incoming config take precedence.
job_config = job_config._fill_from_default(
self._default_query_job_config
)
else:
job_config = self._default_query_job_config
job_ref = job._JobReference(job_id, project=project, location=location)
query_job = job.QueryJob(job_ref, query, client=self, job_config=job_config)
query_job._begin(retry=retry)
return query_job |
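A minimal usage sketch (the public dataset reference is illustrative); ``result()`` blocks until the job completes and returns an iterable of rows:
from google.cloud import bigquery

client = bigquery.Client()
query_job = client.query(
    "SELECT name, SUM(number) AS total "
    "FROM `bigquery-public-data.usa_names.usa_1910_2013` "
    "GROUP BY name ORDER BY total DESC LIMIT 10"
)
for row in query_job.result():
    print(row.name, row.total)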
<SYSTEM_TASK:>
Insert rows into a table via the streaming API.
<END_TASK>
<USER_TASK:>
Description:
def insert_rows(self, table, rows, selected_fields=None, **kwargs):
"""Insert rows into a table via the streaming API.
See
https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll
Args:
table (Union[ \
:class:`~google.cloud.bigquery.table.Table`, \
:class:`~google.cloud.bigquery.table.TableReference`, \
str, \
]):
The destination table for the row data, or a reference to it.
rows (Union[ \
Sequence[Tuple], \
Sequence[dict], \
]):
Row data to be inserted. If a list of tuples is given, each
tuple should contain data for each schema field on the
current table and in the same order as the schema fields. If
a list of dictionaries is given, the keys must include all
required fields in the schema. Keys which do not correspond
to a field in the schema are ignored.
selected_fields (Sequence[ \
:class:`~google.cloud.bigquery.schema.SchemaField`, \
]):
The fields to return. Required if ``table`` is a
:class:`~google.cloud.bigquery.table.TableReference`.
kwargs (dict):
Keyword arguments to
:meth:`~google.cloud.bigquery.client.Client.insert_rows_json`.
Returns:
Sequence[Mappings]:
One mapping per row with insert errors: the "index" key
identifies the row, and the "errors" key contains a list of
the mappings describing one or more problems with the row.
Raises:
ValueError: if table's schema is not set
""" |
table = _table_arg_to_table(table, default_project=self.project)
if not isinstance(table, Table):
raise TypeError(_NEED_TABLE_ARGUMENT)
schema = table.schema
# selected_fields can override the table schema.
if selected_fields is not None:
schema = selected_fields
if len(schema) == 0:
raise ValueError(
(
"Could not determine schema for table '{}'. Call client.get_table() "
"or pass in a list of schema fields to the selected_fields argument."
).format(table)
)
json_rows = [_record_field_to_json(schema, row) for row in rows]
return self.insert_rows_json(table, json_rows, **kwargs) |
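A minimal usage sketch (names are illustrative); ``get_table`` is called first so the client has a schema with which to convert the tuples:
from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table(client.dataset("my_dataset").table("my_table"))
rows = [("alpha", 1), ("beta", 2)]  # one tuple per row, in schema order
errors = client.insert_rows(table, rows)
assert errors == []  # an empty list means every row was accepted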
<SYSTEM_TASK:>
Insert rows into a table without applying local type conversions.
<END_TASK>
<USER_TASK:>
Description:
def insert_rows_json(
self,
table,
json_rows,
row_ids=None,
skip_invalid_rows=None,
ignore_unknown_values=None,
template_suffix=None,
retry=DEFAULT_RETRY,
):
"""Insert rows into a table without applying local type conversions.
See
https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll
Args:
table (Union[ \
:class:`~google.cloud.bigquery.table.Table`, \
:class:`~google.cloud.bigquery.table.TableReference`, \
str, \
]):
The destination table for the row data, or a reference to it.
json_rows (Sequence[dict]):
Row data to be inserted. Keys must match the table schema fields
and values must be JSON-compatible representations.
row_ids (Sequence[str]):
(Optional) Unique ids, one per row being inserted. If omitted,
unique IDs are created.
skip_invalid_rows (bool):
(Optional) Insert all valid rows of a request, even if invalid
rows exist. The default value is False, which causes the entire
request to fail if any invalid rows exist.
ignore_unknown_values (bool):
(Optional) Accept rows that contain values that do not match the
schema. The unknown values are ignored. Default is False, which
treats unknown values as errors.
template_suffix (str):
(Optional) treat ``name`` as a template table and provide a suffix.
BigQuery will create the table ``<name> + <template_suffix>`` based
on the schema of the template table. See
https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables
retry (:class:`google.api_core.retry.Retry`):
(Optional) How to retry the RPC.
Returns:
Sequence[Mappings]:
One mapping per row with insert errors: the "index" key
identifies the row, and the "errors" key contains a list of
the mappings describing one or more problems with the row.
""" |
# Convert table to just a reference because unlike insert_rows,
# insert_rows_json doesn't need the table schema. It's not doing any
# type conversions.
table = _table_arg_to_table_ref(table, default_project=self.project)
rows_info = []
data = {"rows": rows_info}
for index, row in enumerate(json_rows):
info = {"json": row}
if row_ids is not None:
info["insertId"] = row_ids[index]
else:
info["insertId"] = str(uuid.uuid4())
rows_info.append(info)
if skip_invalid_rows is not None:
data["skipInvalidRows"] = skip_invalid_rows
if ignore_unknown_values is not None:
data["ignoreUnknownValues"] = ignore_unknown_values
if template_suffix is not None:
data["templateSuffix"] = template_suffix
# We can always retry, because every row has an insert ID.
response = self._call_api(
retry, method="POST", path="%s/insertAll" % table.path, data=data
)
errors = []
for error in response.get("insertErrors", ()):
errors.append({"index": int(error["index"]), "errors": error["errors"]})
return errors |
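A minimal usage sketch (names are illustrative); because no local type conversion happens, a plain ``TableReference`` is sufficient:
from google.cloud import bigquery

client = bigquery.Client()
table_ref = client.dataset("my_dataset").table("my_table")
json_rows = [
    {"name": "alpha", "value": 1},
    {"name": "beta", "value": 2},
]
errors = client.insert_rows_json(table_ref, json_rows)
for entry in errors:
    print("Row {} failed: {}".format(entry["index"], entry["errors"]))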
<SYSTEM_TASK:>
List the partitions in a table.
<END_TASK>
<USER_TASK:>
Description:
def list_partitions(self, table, retry=DEFAULT_RETRY):
"""List the partitions in a table.
Arguments:
table (Union[ \
:class:`~google.cloud.bigquery.table.Table`, \
:class:`~google.cloud.bigquery.table.TableReference`, \
str, \
]):
The table or reference from which to get partition info
retry (google.api_core.retry.Retry):
(Optional) How to retry the RPC.
Returns:
List[str]:
A list of the partition ids present in the partitioned table
""" |
table = _table_arg_to_table_ref(table, default_project=self.project)
meta_table = self.get_table(
TableReference(
self.dataset(table.dataset_id, project=table.project),
"%s$__PARTITIONS_SUMMARY__" % table.table_id,
)
)
subset = [col for col in meta_table.schema if col.name == "partition_id"]
return [
row[0]
for row in self.list_rows(meta_table, selected_fields=subset, retry=retry)
] |
<SYSTEM_TASK:>
List the rows of the table.
<END_TASK>
<USER_TASK:>
Description:
def list_rows(
self,
table,
selected_fields=None,
max_results=None,
page_token=None,
start_index=None,
page_size=None,
retry=DEFAULT_RETRY,
):
"""List the rows of the table.
See
https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/list
.. note::
This method assumes that the provided schema is up-to-date with the
schema as defined on the back-end: if the two schemas are not
identical, the values returned may be incomplete. To ensure that the
local copy of the schema is up-to-date, call ``client.get_table``.
Args:
table (Union[ \
:class:`~google.cloud.bigquery.table.Table`, \
:class:`~google.cloud.bigquery.table.TableListItem`, \
:class:`~google.cloud.bigquery.table.TableReference`, \
str, \
]):
The table to list, or a reference to it. When the table
object does not contain a schema and ``selected_fields`` is
not supplied, this method calls ``get_table`` to fetch the
table schema.
selected_fields (Sequence[ \
:class:`~google.cloud.bigquery.schema.SchemaField` \
]):
The fields to return. If not supplied, data for all columns
are downloaded.
max_results (int):
(Optional) maximum number of rows to return.
page_token (str):
(Optional) Token representing a cursor into the table's rows.
If not passed, the API will return the first page of the
rows. The token marks the beginning of the iterator to be
returned and the value of the ``page_token`` can be accessed
at ``next_page_token`` of the
:class:`~google.cloud.bigquery.table.RowIterator`.
start_index (int):
(Optional) The zero-based index of the starting row to read.
page_size (int):
Optional. The maximum number of rows in each page of results
from this request. Non-positive values are ignored. Defaults
to a sensible value set by the API.
retry (:class:`google.api_core.retry.Retry`):
(Optional) How to retry the RPC.
Returns:
google.cloud.bigquery.table.RowIterator:
Iterator of row data
:class:`~google.cloud.bigquery.table.Row`-s. During each
page, the iterator will have the ``total_rows`` attribute
set, which counts the total number of rows **in the table**
(this is distinct from the total number of rows in the
current page: ``iterator.page.num_items``).
""" |
table = _table_arg_to_table(table, default_project=self.project)
if not isinstance(table, Table):
raise TypeError(_NEED_TABLE_ARGUMENT)
schema = table.schema
# selected_fields can override the table schema.
if selected_fields is not None:
schema = selected_fields
# No schema, but no selected_fields. Assume the developer wants all
# columns, so get the table resource for them rather than failing.
elif len(schema) == 0:
table = self.get_table(table.reference, retry=retry)
schema = table.schema
params = {}
if selected_fields is not None:
params["selectedFields"] = ",".join(field.name for field in selected_fields)
if start_index is not None:
params["startIndex"] = start_index
row_iterator = RowIterator(
client=self,
api_request=functools.partial(self._call_api, retry),
path="%s/data" % (table.path,),
schema=schema,
page_token=page_token,
max_results=max_results,
page_size=page_size,
extra_params=params,
table=table,
# Pass in selected_fields separately from schema so that full
# tables can be fetched without a column filter.
selected_fields=selected_fields,
)
return row_iterator |
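A minimal usage sketch (the fully-qualified table ID is illustrative); a string is accepted because it is converted with ``TableReference.from_string``:
from google.cloud import bigquery

client = bigquery.Client()
rows = client.list_rows("my_project.my_dataset.my_table", max_results=10)
for row in rows:
    print(row)
print("Total rows in the table:", rows.total_rows)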
<SYSTEM_TASK:>
Helper function for schema_from_json that takes a
<END_TASK>
<USER_TASK:>
Description:
def _schema_from_json_file_object(self, file_obj):
"""Helper function for schema_from_json that takes a
file object that describes a table schema.
Returns:
List of schema field objects.
""" |
json_data = json.load(file_obj)
return [SchemaField.from_api_repr(field) for field in json_data] |
<SYSTEM_TASK:>
Helper function for schema_to_json that takes a schema list and file
<END_TASK>
<USER_TASK:>
Description:
def _schema_to_json_file_object(self, schema_list, file_obj):
"""Helper function for schema_to_json that takes a schema list and file
object and writes the schema list to the file object with json.dump
""" |
json.dump(schema_list, file_obj, indent=2, sort_keys=True) |
<SYSTEM_TASK:>
Takes a file object or file path that contains json that describes
<END_TASK>
<USER_TASK:>
Description:
def schema_from_json(self, file_or_path):
"""Takes a file object or file path that contains json that describes
a table schema.
Returns:
List of schema field objects.
""" |
if isinstance(file_or_path, io.IOBase):
return self._schema_from_json_file_object(file_or_path)
with open(file_or_path) as file_obj:
return self._schema_from_json_file_object(file_obj) |
<SYSTEM_TASK:>
Takes a list of schema field objects.
<END_TASK>
<USER_TASK:>
Description:
def schema_to_json(self, schema_list, destination):
"""Takes a list of schema field objects.
Serializes the list of schema field objects as json to a file.
Destination is a file path or a file object.
""" |
json_schema_list = [f.to_api_repr() for f in schema_list]
if isinstance(destination, io.IOBase):
return self._schema_to_json_file_object(json_schema_list, destination)
with open(destination, mode="w") as file_obj:
return self._schema_to_json_file_object(json_schema_list, file_obj) |
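A round-trip sketch using the two helpers above (field names and the output path are illustrative):
from google.cloud import bigquery
from google.cloud.bigquery import SchemaField

client = bigquery.Client()
schema = [
    SchemaField("name", "STRING", mode="REQUIRED"),
    SchemaField("value", "INTEGER", mode="NULLABLE"),
]
client.schema_to_json(schema, "schema.json")        # serialize to a file
restored = client.schema_from_json("schema.json")   # read it back
assert restored == schema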
<SYSTEM_TASK:>
Refresh self from the server-provided protobuf.
<END_TASK>
<USER_TASK:>
Description:
def _update_from_pb(self, instance_pb):
"""Refresh self from the server-provided protobuf.
Helper for :meth:`from_pb` and :meth:`reload`.
""" |
if not instance_pb.display_name: # Simple field (string)
raise ValueError("Instance protobuf does not contain display_name")
self.display_name = instance_pb.display_name
self.configuration_name = instance_pb.config
self.node_count = instance_pb.node_count |
<SYSTEM_TASK:>
Creates an instance from a protobuf.
<END_TASK>
<USER_TASK:>
Description:
def from_pb(cls, instance_pb, client):
"""Creates an instance from a protobuf.
:type instance_pb:
:class:`google.spanner.v2.spanner_instance_admin_pb2.Instance`
:param instance_pb: An instance protobuf object.
:type client: :class:`~google.cloud.spanner_v1.client.Client`
:param client: The client that owns the instance.
:rtype: :class:`Instance`
:returns: The instance parsed from the protobuf response.
:raises ValueError:
if the instance name does not match
``projects/{project}/instances/{instance_id}`` or if the parsed
project ID does not match the project ID on the client.
""" |
match = _INSTANCE_NAME_RE.match(instance_pb.name)
if match is None:
raise ValueError(
"Instance protobuf name was not in the " "expected format.",
instance_pb.name,
)
if match.group("project") != client.project:
raise ValueError(
"Project ID on instance does not match the " "project ID on the client"
)
instance_id = match.group("instance_id")
configuration_name = instance_pb.config
result = cls(instance_id, client, configuration_name)
result._update_from_pb(instance_pb)
return result |
<SYSTEM_TASK:>
Make a copy of this instance.
<END_TASK>
<USER_TASK:>
Description:
def copy(self):
"""Make a copy of this instance.
Copies the local data stored as simple types and copies the client
attached to this instance.
:rtype: :class:`~google.cloud.spanner_v1.instance.Instance`
:returns: A copy of the current instance.
""" |
new_client = self._client.copy()
return self.__class__(
self.instance_id,
new_client,
self.configuration_name,
node_count=self.node_count,
display_name=self.display_name,
) |
<SYSTEM_TASK:>
Test whether this instance exists.
<END_TASK>
<USER_TASK:>
Description:
def exists(self):
"""Test whether this instance exists.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.instance.v1#google.spanner.admin.instance.v1.InstanceAdmin.GetInstanceConfig
:rtype: bool
:returns: True if the instance exists, else false
""" |
api = self._client.instance_admin_api
metadata = _metadata_with_prefix(self.name)
try:
api.get_instance(self.name, metadata=metadata)
except NotFound:
return False
return True |
<SYSTEM_TASK:>
Update this instance.
<END_TASK>
<USER_TASK:>
Description:
def update(self):
"""Update this instance.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.instance.v1#google.spanner.admin.instance.v1.InstanceAdmin.UpdateInstance
.. note::
Updates the ``display_name`` and ``node_count``. To change those
values before updating, set them via
.. code:: python
instance.display_name = 'New display name'
instance.node_count = 5
before calling :meth:`update`.
:rtype: :class:`google.api_core.operation.Operation`
:returns: an operation instance
:raises NotFound: if the instance does not exist
""" |
api = self._client.instance_admin_api
instance_pb = admin_v1_pb2.Instance(
name=self.name,
config=self.configuration_name,
display_name=self.display_name,
node_count=self.node_count,
)
field_mask = FieldMask(paths=["config", "display_name", "node_count"])
metadata = _metadata_with_prefix(self.name)
future = api.update_instance(
instance=instance_pb, field_mask=field_mask, metadata=metadata
)
return future |
<SYSTEM_TASK:>
Mark an instance and all of its databases for permanent deletion.
<END_TASK>
<USER_TASK:>
Description:
def delete(self):
"""Mark an instance and all of its databases for permanent deletion.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.instance.v1#google.spanner.admin.instance.v1.InstanceAdmin.DeleteInstance
Immediately upon completion of the request:
* Billing will cease for all of the instance's reserved resources.
Soon afterward:
* The instance and all databases within the instance will be deleted.
All data in the databases will be permanently deleted.
""" |
api = self._client.instance_admin_api
metadata = _metadata_with_prefix(self.name)
api.delete_instance(self.name, metadata=metadata) |
<SYSTEM_TASK:>
Factory to create a database within this instance.
<END_TASK>
<USER_TASK:>
Description:
def database(self, database_id, ddl_statements=(), pool=None):
"""Factory to create a database within this instance.
:type database_id: str
:param database_id: The ID of the database.
:type ddl_statements: list of string
:param ddl_statements: (Optional) DDL statements, excluding the
'CREATE DATABASE' statement.
:type pool: concrete subclass of
:class:`~google.cloud.spanner_v1.pool.AbstractSessionPool`.
:param pool: (Optional) session pool to be used by database.
:rtype: :class:`~google.cloud.spanner_v1.database.Database`
:returns: a database owned by this instance.
""" |
return Database(database_id, self, ddl_statements=ddl_statements, pool=pool) |
<SYSTEM_TASK:>
List databases for the instance.
<END_TASK>
<USER_TASK:>
Description:
def list_databases(self, page_size=None, page_token=None):
"""List databases for the instance.
See
https://cloud.google.com/spanner/reference/rpc/google.spanner.admin.database.v1#google.spanner.admin.database.v1.DatabaseAdmin.ListDatabases
:type page_size: int
:param page_size:
Optional. The maximum number of databases in each page of results
from this request. Non-positive values are ignored. Defaults
to a sensible value set by the API.
:type page_token: str
:param page_token:
Optional. If present, return the next batch of databases, using
the value, which must correspond to the ``nextPageToken`` value
returned in the previous response. Deprecated: use the ``pages``
property of the returned iterator instead of manually passing
the token.
:rtype: :class:`~google.api_core.page_iterator.Iterator`
:returns:
Iterator of :class:`~google.cloud.spanner_v1.database.Database`
resources within the current instance.
""" |
metadata = _metadata_with_prefix(self.name)
page_iter = self._client.database_admin_api.list_databases(
self.name, page_size=page_size, metadata=metadata
)
page_iter.next_page_token = page_token
page_iter.item_to_value = self._item_to_database
return page_iter |
<SYSTEM_TASK:>
Convert a database protobuf to the native object.
<END_TASK>
<USER_TASK:>
Description:
def _item_to_database(self, iterator, database_pb):
"""Convert a database protobuf to the native object.
:type iterator: :class:`~google.api_core.page_iterator.Iterator`
:param iterator: The iterator that is currently in use.
:type database_pb: :class:`~google.spanner.admin.database.v1.Database`
:param database_pb: A database returned from the API.
:rtype: :class:`~google.cloud.spanner_v1.database.Database`
:returns: The next database in the page.
""" |
return Database.from_pb(database_pb, self, pool=BurstyPool()) |
<SYSTEM_TASK:>
Poll and wait for the Future to be resolved.
<END_TASK>
<USER_TASK:>
Description:
def _blocking_poll(self, timeout=None):
"""Poll and wait for the Future to be resolved.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
""" |
if self._result_set:
return
retry_ = self._retry.with_deadline(timeout)
try:
retry_(self._done_or_raise)()
except exceptions.RetryError:
raise concurrent.futures.TimeoutError(
"Operation did not complete within the designated " "timeout."
) |
<SYSTEM_TASK:>
Get the result of the operation, blocking if necessary.
<END_TASK>
<USER_TASK:>
Description:
def result(self, timeout=None):
"""Get the result of the operation, blocking if necessary.
Args:
timeout (int):
How long (in seconds) to wait for the operation to complete.
If None, wait indefinitely.
Returns:
google.protobuf.Message: The Operation's result.
Raises:
google.api_core.GoogleAPICallError: If the operation errors or if
the timeout is reached before the operation completes.
""" |
self._blocking_poll(timeout=timeout)
if self._exception is not None:
# pylint: disable=raising-bad-type
# Pylint doesn't recognize that this is valid in this case.
raise self._exception
return self._result |
<SYSTEM_TASK:>
Add a callback to be executed when the operation is complete.
<END_TASK>
<USER_TASK:>
Description:
def add_done_callback(self, fn):
"""Add a callback to be executed when the operation is complete.
If the operation is not already complete, this will start a helper
thread to poll for the status of the operation in the background.
Args:
fn (Callable[Future]): The callback to execute when the operation
is complete.
""" |
if self._result_set:
_helpers.safe_invoke_callback(fn, self)
return
self._done_callbacks.append(fn)
if self._polling_thread is None:
# The polling thread will exit on its own as soon as the operation
# is done.
self._polling_thread = _helpers.start_daemon_thread(
target=self._blocking_poll
) |
<SYSTEM_TASK:>
Instantiate client.
<END_TASK>
<USER_TASK:>
Description:
def instantiate_client(_unused_client, _unused_to_delete):
"""Instantiate client.""" |
# [START client_create_default]
from google.cloud import logging
client = logging.Client()
# [END client_create_default]
credentials = object()
# [START client_create_explicit]
from google.cloud import logging
client = logging.Client(project="my-project", credentials=credentials) |
<SYSTEM_TASK:>
List entries via client across multiple projects.
<END_TASK>
<USER_TASK:>
Description:
def client_list_entries_multi_project(
client, to_delete
): # pylint: disable=unused-argument
"""List entries via client across multiple projects.""" |
# [START client_list_entries_multi_project]
PROJECT_IDS = ["one-project", "another-project"]
for entry in client.list_entries(project_ids=PROJECT_IDS): # API call(s)
do_something_with(entry) |
<SYSTEM_TASK:>
Issue a warning if `distribution_name` is installed.
<END_TASK>
<USER_TASK:>
Description:
def complain(distribution_name):
"""Issue a warning if `distribution_name` is installed.
In a future release, this method will be updated to raise ImportError
rather than just send a warning.
Args:
distribution_name (str): The name of the obsolete distribution.
""" |
try:
pkg_resources.get_distribution(distribution_name)
warnings.warn(
"The {pkg} distribution is now obsolete. "
"Please `pip uninstall {pkg}`. "
"In the future, this warning will become an ImportError.".format(
pkg=distribution_name
),
DeprecationWarning,
)
except pkg_resources.DistributionNotFound:
pass |
<SYSTEM_TASK:>
Builds customer encryption key headers
<END_TASK>
<USER_TASK:>
Description:
def _get_encryption_headers(key, source=False):
"""Builds customer encryption key headers
:type key: bytes
:param key: 32 byte key to build request key and hash.
:type source: bool
:param source: If true, return headers for the "source" blob; otherwise,
return headers for the "destination" blob.
:rtype: dict
:returns: dict of HTTP headers being sent in request.
""" |
if key is None:
return {}
key = _to_bytes(key)
key_hash = hashlib.sha256(key).digest()
key_hash = base64.b64encode(key_hash)
key = base64.b64encode(key)
if source:
prefix = "X-Goog-Copy-Source-Encryption-"
else:
prefix = "X-Goog-Encryption-"
return {
prefix + "Algorithm": "AES256",
prefix + "Key": _bytes_to_unicode(key),
prefix + "Key-Sha256": _bytes_to_unicode(key_hash),
} |
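The same headers can be reproduced with the standard library alone; a minimal sketch with a randomly generated key:
import base64
import hashlib
import os

key = os.urandom(32)  # a 32-byte customer-supplied encryption key
headers = {
    "X-Goog-Encryption-Algorithm": "AES256",
    "X-Goog-Encryption-Key": base64.b64encode(key).decode("utf-8"),
    "X-Goog-Encryption-Key-Sha256": base64.b64encode(
        hashlib.sha256(key).digest()
    ).decode("utf-8"),
}
print(headers)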
<SYSTEM_TASK:>
Re-wrap and raise an ``InvalidResponse`` exception.
<END_TASK>
<USER_TASK:>
Description:
def _raise_from_invalid_response(error):
"""Re-wrap and raise an ``InvalidResponse`` exception.
:type error: :exc:`google.resumable_media.InvalidResponse`
:param error: A caught exception from the ``google-resumable-media``
library.
:raises: :class:`~google.cloud.exceptions.GoogleCloudError` corresponding
to the failed status code
""" |
response = error.response
error_message = str(error)
message = u"{method} {url}: {error}".format(
method=response.request.method, url=response.request.url, error=error_message
)
raise exceptions.from_http_status(response.status_code, message, response=response) |
<SYSTEM_TASK:>
Add one query parameter to a base URL.
<END_TASK>
<USER_TASK:>
Description:
def _add_query_parameters(base_url, name_value_pairs):
"""Add one query parameter to a base URL.
:type base_url: string
:param base_url: Base URL (may already contain query parameters)
:type name_value_pairs: list of (string, string) tuples.
:param name_value_pairs: Names and values of the query parameters to add
:rtype: string
:returns: URL with additional query strings appended.
""" |
if len(name_value_pairs) == 0:
return base_url
scheme, netloc, path, query, frag = urlsplit(base_url)
query = parse_qsl(query)
query.extend(name_value_pairs)
return urlunsplit((scheme, netloc, path, urlencode(query), frag)) |
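A standalone sketch of the same transformation using only ``urllib.parse`` (the URL and parameter names are illustrative):
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def add_query_parameters(base_url, name_value_pairs):
    # Append query parameters, keeping any that already exist on the URL.
    if not name_value_pairs:
        return base_url
    scheme, netloc, path, query, frag = urlsplit(base_url)
    query = parse_qsl(query) + list(name_value_pairs)
    return urlunsplit((scheme, netloc, path, urlencode(query), frag))

print(add_query_parameters(
    "https://example.com/download?alt=media",
    [("generation", "12345"), ("userProject", "my-project")],
))
# https://example.com/download?alt=media&generation=12345&userProject=my-project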
<SYSTEM_TASK:>
Set the blob's default chunk size.
<END_TASK>
<USER_TASK:>
Description:
def chunk_size(self, value):
"""Set the blob's default chunk size.
:type value: int
:param value: (Optional) The current blob's chunk size, if it is set.
:raises: :class:`ValueError` if ``value`` is not ``None`` and is not a
multiple of 256 KB.
""" |
if value is not None and value > 0 and value % self._CHUNK_SIZE_MULTIPLE != 0:
raise ValueError(
"Chunk size must be a multiple of %d." % (self._CHUNK_SIZE_MULTIPLE,)
)
self._chunk_size = value |
<SYSTEM_TASK:>
Getter property for the URL path to this Blob.
<END_TASK>
<USER_TASK:>
Description:
def path(self):
"""Getter property for the URL path to this Blob.
:rtype: str
:returns: The URL path to this Blob.
""" |
if not self.name:
raise ValueError("Cannot determine path without a blob name.")
return self.path_helper(self.bucket.path, self.name) |
<SYSTEM_TASK:>
The public URL for this blob.
<END_TASK>
<USER_TASK:>
Description:
def public_url(self):
"""The public URL for this blob.
Use :meth:`make_public` to enable anonymous access via the returned
URL.
:rtype: `string`
:returns: The public URL for this blob.
""" |
return "{storage_base_url}/{bucket_name}/{quoted_name}".format(
storage_base_url=_API_ACCESS_ENDPOINT,
bucket_name=self.bucket.name,
quoted_name=quote(self.name.encode("utf-8")),
) |
<SYSTEM_TASK:>
Generates a signed URL for this blob.
<END_TASK>
<USER_TASK:>
Description:
def generate_signed_url(
self,
expiration=None,
api_access_endpoint=_API_ACCESS_ENDPOINT,
method="GET",
content_md5=None,
content_type=None,
response_disposition=None,
response_type=None,
generation=None,
headers=None,
query_parameters=None,
client=None,
credentials=None,
version=None,
):
"""Generates a signed URL for this blob.
.. note::
If you are on Google Compute Engine, you can't generate a signed
URL using GCE service account. Follow `Issue 50`_ for updates on
this. If you'd like to be able to generate a signed URL from GCE,
you can use a standard service account from a JSON file rather
than a GCE service account.
.. _Issue 50: https://github.com/GoogleCloudPlatform/\
google-auth-library-python/issues/50
If you have a blob that you want to allow access to for a set
amount of time, you can use this method to generate a URL that
is only valid within a certain time period.
This is particularly useful if you don't want publicly
accessible blobs, but don't want to require users to explicitly
log in.
:type expiration: Union[Integer, datetime.datetime, datetime.timedelta]
:param expiration: Point in time when the signed URL should expire.
:type api_access_endpoint: str
:param api_access_endpoint: Optional URI base.
:type method: str
:param method: The HTTP verb that will be used when requesting the URL.
:type content_md5: str
:param content_md5: (Optional) The MD5 hash of the object referenced by
``resource``.
:type content_type: str
:param content_type: (Optional) The content type of the object
referenced by ``resource``.
:type response_disposition: str
:param response_disposition: (Optional) Content disposition of
responses to requests for the signed URL.
For example, to enable the signed URL
to prompt a download of the file as ``blob.png``, use
the value
``'attachment; filename=blob.png'``.
:type response_type: str
:param response_type: (Optional) Content type of responses to requests
for the signed URL. Used to over-ride the content
type of the underlying blob/object.
:type generation: str
:param generation: (Optional) A value that indicates which generation
of the resource to fetch.
:type headers: dict
:param headers:
(Optional) Additional HTTP headers to be included as part of the
signed URLs. See:
https://cloud.google.com/storage/docs/xml-api/reference-headers
Requests using the signed URL *must* pass the specified header
(name and value) with each request for the URL.
:type query_parameters: dict
:param query_parameters:
(Optional) Additional query parameters to be included as part of the
signed URLs. See:
https://cloud.google.com/storage/docs/xml-api/reference-headers#query
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: (Optional) The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:type credentials: :class:`oauth2client.client.OAuth2Credentials` or
:class:`NoneType`
:param credentials: (Optional) The OAuth2 credentials to use to sign
the URL. Defaults to the credentials stored on the
client used.
:type version: str
:param version: (Optional) The version of signed credential to create.
Must be one of 'v2' | 'v4'.
:raises: :exc:`ValueError` when version is invalid.
:raises: :exc:`TypeError` when expiration is not a valid type.
:raises: :exc:`AttributeError` if credentials is not an instance
of :class:`google.auth.credentials.Signing`.
:rtype: str
:returns: A signed URL you can use to access the resource
until expiration.
""" |
if version is None:
version = "v2"
elif version not in ("v2", "v4"):
raise ValueError("'version' must be either 'v2' or 'v4'")
resource = "/{bucket_name}/{quoted_name}".format(
bucket_name=self.bucket.name, quoted_name=quote(self.name.encode("utf-8"))
)
if credentials is None:
client = self._require_client(client)
credentials = client._credentials
if version == "v2":
helper = generate_signed_url_v2
else:
helper = generate_signed_url_v4
return helper(
credentials,
resource=resource,
expiration=expiration,
api_access_endpoint=api_access_endpoint,
method=method.upper(),
content_md5=content_md5,
content_type=content_type,
response_type=response_type,
response_disposition=response_disposition,
generation=generation,
headers=headers,
query_parameters=query_parameters,
) |
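A minimal usage sketch (bucket and object names are illustrative; per the note above, the client must hold credentials that can sign, e.g. a service-account JSON key):
import datetime
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("my-object.txt")
url = blob.generate_signed_url(
    expiration=datetime.timedelta(hours=1),
    method="GET",
    version="v4",
)
print(url)  # anyone holding this URL can GET the object for one hour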
<SYSTEM_TASK:>
Determines whether or not this blob exists.
<END_TASK>
<USER_TASK:>
Description:
def exists(self, client=None):
"""Determines whether or not this blob exists.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:rtype: bool
:returns: True if the blob exists in Cloud Storage.
""" |
client = self._require_client(client)
# We only need the status code (200 or not) so we seek to
# minimize the returned payload.
query_params = self._query_params
query_params["fields"] = "name"
try:
# We intentionally pass `_target_object=None` since fields=name
# would limit the local properties.
client._connection.api_request(
method="GET",
path=self.path,
query_params=query_params,
_target_object=None,
)
# NOTE: This will not fail immediately in a batch. However, when
# Batch.finish() is called, the resulting `NotFound` will be
# raised.
return True
except NotFound:
return False |
<SYSTEM_TASK:>
Deletes a blob from Cloud Storage.
<END_TASK>
<USER_TASK:>
Description:
def delete(self, client=None):
"""Deletes a blob from Cloud Storage.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:rtype: :class:`Blob`
:returns: The blob that was just deleted.
:raises: :class:`google.cloud.exceptions.NotFound`
(propagated from
:meth:`google.cloud.storage.bucket.Bucket.delete_blob`).
""" |
return self.bucket.delete_blob(
self.name, client=client, generation=self.generation
) |
<SYSTEM_TASK:>
Get the download URL for the current blob.
<END_TASK>
<USER_TASK:>
Description:
def _get_download_url(self):
"""Get the download URL for the current blob.
If the ``media_link`` has been loaded, it will be used, otherwise
the URL will be constructed from the current blob's path (and possibly
generation) to avoid a round trip.
:rtype: str
:returns: The download URL for the current blob.
""" |
name_value_pairs = []
if self.media_link is None:
base_url = _DOWNLOAD_URL_TEMPLATE.format(path=self.path)
if self.generation is not None:
name_value_pairs.append(("generation", "{:d}".format(self.generation)))
else:
base_url = self.media_link
if self.user_project is not None:
name_value_pairs.append(("userProject", self.user_project))
return _add_query_parameters(base_url, name_value_pairs) |
<SYSTEM_TASK:>
Perform a download without any error handling.
<END_TASK>
<USER_TASK:>
Description:
def _do_download(
self, transport, file_obj, download_url, headers, start=None, end=None
):
"""Perform a download without any error handling.
This is intended to be called by :meth:`download_to_file` so it can
be wrapped with error handling / remapping.
:type transport:
:class:`~google.auth.transport.requests.AuthorizedSession`
:param transport: The transport (with credentials) that will
make authenticated requests.
:type file_obj: file
:param file_obj: A file handle to which to write the blob's data.
:type download_url: str
:param download_url: The URL where the media can be accessed.
:type headers: dict
:param headers: Optional headers to be sent with the request(s).
:type start: int
:param start: Optional, the first byte in a range to be downloaded.
:type end: int
:param end: Optional, The last byte in a range to be downloaded.
""" |
if self.chunk_size is None:
download = Download(
download_url, stream=file_obj, headers=headers, start=start, end=end
)
download.consume(transport)
else:
download = ChunkedDownload(
download_url,
self.chunk_size,
file_obj,
headers=headers,
start=start if start else 0,
end=end,
)
while not download.finished:
download.consume_next_chunk(transport) |
<SYSTEM_TASK:>
Download the contents of this blob into a file-like object.
<END_TASK>
<USER_TASK:>
Description:
def download_to_file(self, file_obj, client=None, start=None, end=None):
"""Download the contents of this blob into a file-like object.
.. note::
If the server-set property, :attr:`media_link`, is not yet
initialized, makes an additional API request to load it.
Downloading a file that has been encrypted with a `customer-supplied`_
encryption key:
.. literalinclude:: snippets.py
:start-after: [START download_to_file]
:end-before: [END download_to_file]
:dedent: 4
The ``encryption_key`` should be a str or bytes with a length of at
least 32.
For more fine-grained control over the download process, check out
`google-resumable-media`_. For example, this library allows
downloading **parts** of a blob rather than the whole thing.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type file_obj: file
:param file_obj: A file handle to which to write the blob's data.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:type start: int
:param start: Optional, the first byte in a range to be downloaded.
:type end: int
:param end: Optional, The last byte in a range to be downloaded.
:raises: :class:`google.cloud.exceptions.NotFound`
""" |
download_url = self._get_download_url()
headers = _get_encryption_headers(self._encryption_key)
headers["accept-encoding"] = "gzip"
transport = self._get_transport(client)
try:
self._do_download(transport, file_obj, download_url, headers, start, end)
except resumable_media.InvalidResponse as exc:
_raise_from_invalid_response(exc) |
<SYSTEM_TASK:>
Download the contents of this blob into a named file.
<END_TASK>
<USER_TASK:>
Description:
def download_to_filename(self, filename, client=None, start=None, end=None):
"""Download the contents of this blob into a named file.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type filename: str
:param filename: A filename to be passed to ``open``.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:type start: int
:param start: Optional, the first byte in a range to be downloaded.
:type end: int
:param end: Optional, The last byte in a range to be downloaded.
:raises: :class:`google.cloud.exceptions.NotFound`
""" |
try:
with open(filename, "wb") as file_obj:
self.download_to_file(file_obj, client=client, start=start, end=end)
except resumable_media.DataCorruption:
# Delete the corrupt downloaded file.
os.remove(filename)
raise
updated = self.updated
if updated is not None:
mtime = time.mktime(updated.timetuple())
os.utime(file_obj.name, (mtime, mtime)) |
<SYSTEM_TASK:>
Download the contents of this blob as a string.
<END_TASK>
<USER_TASK:>
Description:
def download_as_string(self, client=None, start=None, end=None):
"""Download the contents of this blob as a string.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:type start: int
:param start: Optional, the first byte in a range to be downloaded.
:type end: int
:param end: Optional, The last byte in a range to be downloaded.
:rtype: bytes
:returns: The data stored in this blob.
:raises: :class:`google.cloud.exceptions.NotFound`
""" |
string_buffer = BytesIO()
self.download_to_file(string_buffer, client=client, start=start, end=end)
return string_buffer.getvalue() |
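A minimal usage sketch (names are illustrative); ``start``/``end`` allow downloading just a byte range:
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("my-object.txt")
data = blob.download_as_string()              # the whole object, as bytes
first_kb = blob.download_as_string(end=1023)  # only the first 1024 bytes
print(len(data), len(first_kb))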
<SYSTEM_TASK:>
Determine the content type from the current object.
<END_TASK>
<USER_TASK:>
Description:
def _get_content_type(self, content_type, filename=None):
"""Determine the content type from the current object.
The return value will be determined in order of precedence:
- The value passed in to this method (if not :data:`None`)
- The value stored on the current blob
- The default value ('application/octet-stream')
:type content_type: str
:param content_type: (Optional) type of content.
:type filename: str
:param filename: (Optional) The name of the file where the content
is stored.
:rtype: str
:returns: Type of content gathered from the object.
""" |
if content_type is None:
content_type = self.content_type
if content_type is None and filename is not None:
content_type, _ = mimetypes.guess_type(filename)
if content_type is None:
content_type = _DEFAULT_CONTENT_TYPE
return content_type |
<SYSTEM_TASK:>
Get required arguments for performing an upload.
<END_TASK>
<USER_TASK:>
Description:
def _get_upload_arguments(self, content_type):
"""Get required arguments for performing an upload.
The content type returned will be determined in order of precedence:
- The value passed in to this method (if not :data:`None`)
- The value stored on the current blob
- The default value ('application/octet-stream')
:type content_type: str
:param content_type: Type of content being uploaded (or :data:`None`).
:rtype: tuple
:returns: A triple of
* A header dictionary
* An object metadata dictionary
* The ``content_type`` as a string (according to precedence)
""" |
headers = _get_encryption_headers(self._encryption_key)
object_metadata = self._get_writable_metadata()
content_type = self._get_content_type(content_type)
return headers, object_metadata, content_type |
<SYSTEM_TASK:>
Determine an upload strategy and then perform the upload.
<END_TASK>
<USER_TASK:>
Description:
def _do_upload(
self, client, stream, content_type, size, num_retries, predefined_acl
):
"""Determine an upload strategy and then perform the upload.
If the size of the data to be uploaded exceeds 5 MB a resumable media
request will be used, otherwise the content and the metadata will be
uploaded in a single multipart upload request.
The content type of the upload will be determined in order
of precedence:
- The value passed in to this method (if not :data:`None`)
- The value stored on the current blob
- The default value ('application/octet-stream')
:type client: :class:`~google.cloud.storage.client.Client`
:param client: (Optional) The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:type stream: IO[bytes]
:param stream: A bytes IO object open for reading.
:type content_type: str
:param content_type: Type of content being uploaded (or :data:`None`).
:type size: int
:param size: The number of bytes to be uploaded (which will be read
from ``stream``). If not provided, the upload will be
concluded once ``stream`` is exhausted (or :data:`None`).
:type num_retries: int
:param num_retries: Number of upload retries. (Deprecated: This
argument will be removed in a future release.)
:type predefined_acl: str
:param predefined_acl: (Optional) predefined access control list
:rtype: dict
:returns: The parsed JSON from the "200 OK" response. This will be the
**only** response in the multipart case and it will be the
**final** response in the resumable case.
""" |
if size is not None and size <= _MAX_MULTIPART_SIZE:
response = self._do_multipart_upload(
client, stream, content_type, size, num_retries, predefined_acl
)
else:
response = self._do_resumable_upload(
client, stream, content_type, size, num_retries, predefined_acl
)
return response.json() |
<SYSTEM_TASK:>
Upload the contents of this blob from a file-like object.
<END_TASK>
<USER_TASK:>
Description:
def upload_from_file(
self,
file_obj,
rewind=False,
size=None,
content_type=None,
num_retries=None,
client=None,
predefined_acl=None,
):
"""Upload the contents of this blob from a file-like object.
The content type of the upload will be determined in order
of precedence:
- The value passed in to this method (if not :data:`None`)
- The value stored on the current blob
- The default value ('application/octet-stream')
.. note::
The effect of uploading to an existing blob depends on the
"versioning" and "lifecycle" policies defined on the blob's
bucket. In the absence of those policies, upload will
overwrite any existing contents.
See the `object versioning`_ and `lifecycle`_ API documents
for details.
Uploading a file with a `customer-supplied`_ encryption key:
.. literalinclude:: snippets.py
:start-after: [START upload_from_file]
:end-before: [END upload_from_file]
:dedent: 4
The ``encryption_key`` should be a str or bytes with a length of at
least 32.
For more fine-grained control over the upload process, check out
`google-resumable-media`_.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type file_obj: file
:param file_obj: A file handle open for reading.
:type rewind: bool
:param rewind: If True, seek to the beginning of the file handle before
writing the file to Cloud Storage.
:type size: int
:param size: The number of bytes to be uploaded (which will be read
from ``file_obj``). If not provided, the upload will be
concluded once ``file_obj`` is exhausted.
:type content_type: str
:param content_type: Optional type of content being uploaded.
:type num_retries: int
:param num_retries: Number of upload retries. (Deprecated: This
argument will be removed in a future release.)
:type client: :class:`~google.cloud.storage.client.Client`
:param client: (Optional) The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:type predefined_acl: str
:param predefined_acl: (Optional) predefined access control list
:raises: :class:`~google.cloud.exceptions.GoogleCloudError`
if the upload response returns an error status.
.. _object versioning: https://cloud.google.com/storage/\
docs/object-versioning
.. _lifecycle: https://cloud.google.com/storage/docs/lifecycle
""" |
if num_retries is not None:
warnings.warn(_NUM_RETRIES_MESSAGE, DeprecationWarning, stacklevel=2)
_maybe_rewind(file_obj, rewind=rewind)
predefined_acl = ACL.validate_predefined(predefined_acl)
try:
created_json = self._do_upload(
client, file_obj, content_type, size, num_retries, predefined_acl
)
self._set_properties(created_json)
except resumable_media.InvalidResponse as exc:
_raise_from_invalid_response(exc) |
<SYSTEM_TASK:>
Upload this blob's contents from the content of a named file.
<END_TASK>
<USER_TASK:>
Description:
def upload_from_filename(
self, filename, content_type=None, client=None, predefined_acl=None
):
"""Upload this blob's contents from the content of a named file.
The content type of the upload will be determined in order
of precedence:
- The value passed in to this method (if not :data:`None`)
- The value stored on the current blob
- The value given by ``mimetypes.guess_type``
- The default value ('application/octet-stream')
.. note::
The effect of uploading to an existing blob depends on the
"versioning" and "lifecycle" policies defined on the blob's
bucket. In the absence of those policies, upload will
overwrite any existing contents.
See the `object versioning
<https://cloud.google.com/storage/docs/object-versioning>`_ and
`lifecycle <https://cloud.google.com/storage/docs/lifecycle>`_
API documents for details.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type filename: str
:param filename: The path to the file.
:type content_type: str
:param content_type: Optional type of content being uploaded.
:type client: :class:`~google.cloud.storage.client.Client`
:param client: (Optional) The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:type predefined_acl: str
:param predefined_acl: (Optional) predefined access control list
""" |
content_type = self._get_content_type(content_type, filename=filename)
with open(filename, "rb") as file_obj:
total_bytes = os.fstat(file_obj.fileno()).st_size
self.upload_from_file(
file_obj,
content_type=content_type,
client=client,
size=total_bytes,
predefined_acl=predefined_acl,
) |
<SYSTEM_TASK:>
Upload contents of this blob from the provided string.
<END_TASK>
<USER_TASK:>
Description:
def upload_from_string(
self, data, content_type="text/plain", client=None, predefined_acl=None
):
"""Upload contents of this blob from the provided string.
.. note::
The effect of uploading to an existing blob depends on the
"versioning" and "lifecycle" policies defined on the blob's
bucket. In the absence of those policies, upload will
overwrite any existing contents.
See the `object versioning
<https://cloud.google.com/storage/docs/object-versioning>`_ and
`lifecycle <https://cloud.google.com/storage/docs/lifecycle>`_
API documents for details.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type data: bytes or str
:param data: The data to store in this blob. If the value is
text, it will be encoded as UTF-8.
:type content_type: str
:param content_type: Optional type of content being uploaded. Defaults
to ``'text/plain'``.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:type predefined_acl: str
:param predefined_acl: (Optional) predefined access control list
""" |
data = _to_bytes(data, encoding="utf-8")
string_buffer = BytesIO(data)
self.upload_from_file(
file_obj=string_buffer,
size=len(data),
content_type=content_type,
client=client,
predefined_acl=predefined_acl,
) |
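A minimal usage sketch (bucket and object names are illustrative):
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("greeting.json")
blob.upload_from_string(
    '{"message": "hello"}',
    content_type="application/json",
)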
<SYSTEM_TASK:>
Create a resumable upload session.
<END_TASK>
<USER_TASK:>
Description:
def create_resumable_upload_session(
self, content_type=None, size=None, origin=None, client=None
):
"""Create a resumable upload session.
Resumable upload sessions allow you to start an upload session from
one client and complete the session in another. This method is called
by the initiator to set the metadata and limits. The initiator then
passes the session URL to the client that will upload the binary data.
The client performs a PUT request on the session URL to complete the
upload. This process allows untrusted clients to upload to an
access-controlled bucket. For more details, see the
`documentation on signed URLs`_.
.. _documentation on signed URLs:
https://cloud.google.com/storage/\
docs/access-control/signed-urls#signing-resumable
The content type of the upload will be determined in order
of precedence:
- The value passed in to this method (if not :data:`None`)
- The value stored on the current blob
- The default value ('application/octet-stream')
.. note::
The effect of uploading to an existing blob depends on the
"versioning" and "lifecycle" policies defined on the blob's
bucket. In the absence of those policies, upload will
overwrite any existing contents.
See the `object versioning
<https://cloud.google.com/storage/docs/object-versioning>`_ and
`lifecycle <https://cloud.google.com/storage/docs/lifecycle>`_
API documents for details.
If :attr:`encryption_key` is set, the blob will be encrypted with
a `customer-supplied`_ encryption key.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type size: int
:param size: (Optional). The maximum number of bytes that can be
uploaded using this session. If the size is not known
when creating the session, this should be left blank.
:type content_type: str
:param content_type: (Optional) Type of content being uploaded.
:type origin: str
:param origin: (Optional) If set, the upload can only be completed
by a user-agent that uploads from the given origin. This
can be useful when passing the session to a web client.
:type client: :class:`~google.cloud.storage.client.Client`
:param client: (Optional) The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:rtype: str
:returns: The resumable upload session URL. The upload can be
completed by making an HTTP PUT request with the
file's contents.
:raises: :class:`google.cloud.exceptions.GoogleCloudError`
if the session creation response returns an error status.
""" |
extra_headers = {}
if origin is not None:
# This header is specifically for client-side uploads, it
# determines the origins allowed for CORS.
extra_headers["Origin"] = origin
try:
dummy_stream = BytesIO(b"")
# Send a fake chunk size which we **know** will be acceptable
# to the `ResumableUpload` constructor. The chunk size only
# matters when **sending** bytes to an upload.
upload, _ = self._initiate_resumable_upload(
client,
dummy_stream,
content_type,
size,
None,
predefined_acl=None,
extra_headers=extra_headers,
chunk_size=self._CHUNK_SIZE_MULTIPLE,
)
return upload.resumable_url
except resumable_media.InvalidResponse as exc:
_raise_from_invalid_response(exc) |
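A sketch of how an initiator might create a session and hand the URL to another client, as described in the docstring above; the object name and origin are placeholders:
>>> blob = bucket.blob('video.mp4')                  # hypothetical object name
>>> session_url = blob.create_resumable_upload_session(
...     content_type='video/mp4', origin='https://example.com')
>>> # The untrusted client completes the upload by issuing an HTTP PUT of the
>>> # file's bytes against ``session_url``.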
<SYSTEM_TASK:>
Update blob's ACL, granting read access to anonymous users.
<END_TASK>
<USER_TASK:>
Description:
def make_public(self, client=None):
"""Update blob's ACL, granting read access to anonymous users.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
""" |
self.acl.all().grant_read()
self.acl.save(client=client) |
<SYSTEM_TASK:>
Update blob's ACL, revoking read access for anonymous users.
<END_TASK>
<USER_TASK:>
Description:
def make_private(self, client=None):
"""Update blob's ACL, revoking read access for anonymous users.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
""" |
self.acl.all().revoke_read()
self.acl.save(client=client) |
<SYSTEM_TASK:>
Concatenate source blobs into this one.
<END_TASK>
<USER_TASK:>
Description:
def compose(self, sources, client=None):
"""Concatenate source blobs into this one.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type sources: list of :class:`Blob`
:param sources: blobs whose contents will be composed into this blob.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
""" |
client = self._require_client(client)
query_params = {}
if self.user_project is not None:
query_params["userProject"] = self.user_project
request = {
"sourceObjects": [{"name": source.name} for source in sources],
"destination": self._properties.copy(),
}
api_response = client._connection.api_request(
method="POST",
path=self.path + "/compose",
query_params=query_params,
data=request,
_target_object=self,
)
self._set_properties(api_response) |
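A minimal ``compose`` sketch; the source and destination object names are placeholders and are assumed to live in the same bucket:
>>> destination = bucket.blob('composed.txt')
>>> sources = [bucket.blob('part-1.txt'), bucket.blob('part-2.txt')]
>>> destination.compose(sources)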
<SYSTEM_TASK:>
Rewrite source blob into this one.
<END_TASK>
<USER_TASK:>
Description:
def rewrite(self, source, token=None, client=None):
"""Rewrite source blob into this one.
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type source: :class:`Blob`
:param source: blob whose contents will be rewritten into this blob.
:type token: str
:param token: Optional. Token returned from an earlier, not-completed
call to rewrite the same source blob. If passed,
result will include updated status, total bytes written.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
:rtype: tuple
:returns: ``(token, bytes_rewritten, total_bytes)``, where ``token``
is a rewrite token (``None`` if the rewrite is complete),
``bytes_rewritten`` is the number of bytes rewritten so far,
and ``total_bytes`` is the total number of bytes to be
rewritten.
""" |
client = self._require_client(client)
headers = _get_encryption_headers(self._encryption_key)
headers.update(_get_encryption_headers(source._encryption_key, source=True))
query_params = self._query_params
if "generation" in query_params:
del query_params["generation"]
if token:
query_params["rewriteToken"] = token
if source.generation:
query_params["sourceGeneration"] = source.generation
if self.kms_key_name is not None:
query_params["destinationKmsKeyName"] = self.kms_key_name
api_response = client._connection.api_request(
method="POST",
path=source.path + "/rewriteTo" + self.path,
query_params=query_params,
data=self._properties,
headers=headers,
_target_object=self,
)
rewritten = int(api_response["totalBytesRewritten"])
size = int(api_response["objectSize"])
# The resource key is set if and only if the API response is
# completely done. Additionally, there is no rewrite token to return
# in this case.
if api_response["done"]:
self._set_properties(api_response["resource"])
return None, rewritten, size
return api_response["rewriteToken"], rewritten, size |
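A sketch showing how the returned token is threaded through successive calls until the rewrite completes (the same pattern ``update_storage_class`` uses below); ``source_blob`` and ``dest_blob`` are hypothetical:
>>> token, rewritten, total = dest_blob.rewrite(source_blob)
>>> while token is not None:
...     token, rewritten, total = dest_blob.rewrite(source_blob, token=token)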
<SYSTEM_TASK:>
Update blob's storage class via a rewrite-in-place. This helper will
<END_TASK>
<USER_TASK:>
Description:
def update_storage_class(self, new_class, client=None):
"""Update blob's storage class via a rewrite-in-place. This helper will
wait for the rewrite to complete before returning, so it may take some
time for large files.
See
https://cloud.google.com/storage/docs/per-object-storage-class
If :attr:`user_project` is set on the bucket, bills the API request
to that project.
:type new_class: str
:param new_class: new storage class for the object
:type client: :class:`~google.cloud.storage.client.Client`
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the blob's bucket.
""" |
if new_class not in self._STORAGE_CLASSES:
raise ValueError("Invalid storage class: %s" % (new_class,))
# Update current blob's storage class prior to rewrite
self._patch_property("storageClass", new_class)
# Execute consecutive rewrite operations until operation is done
token, _, _ = self.rewrite(self)
while token is not None:
token, _, _ = self.rewrite(self, token=token) |
<SYSTEM_TASK:>
Verifies that a ``path`` has the correct form.
<END_TASK>
<USER_TASK:>
Description:
def verify_path(path, is_collection):
"""Verifies that a ``path`` has the correct form.
Checks that all of the elements in ``path`` are strings.
Args:
path (Tuple[str, ...]): The components in a collection or
document path.
is_collection (bool): Indicates if the ``path`` represents
a document or a collection.
Raises:
ValueError: if
* the ``path`` is empty
* ``is_collection=True`` and there are an even number of elements
* ``is_collection=False`` and there are an odd number of elements
* an element is not a string
""" |
num_elements = len(path)
if num_elements == 0:
raise ValueError("Document or collection path cannot be empty")
if is_collection:
if num_elements % 2 == 0:
raise ValueError("A collection must have an odd number of path elements")
else:
if num_elements % 2 == 1:
raise ValueError("A document must have an even number of path elements")
for element in path:
if not isinstance(element, six.string_types):
msg = BAD_PATH_TEMPLATE.format(element, type(element))
raise ValueError(msg) |
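Examples derived directly from the checks above; the collection and document names are placeholders:
>>> verify_path(('users',), is_collection=True)            # collection paths have odd length
>>> verify_path(('users', 'alice'), is_collection=False)   # document paths have even length
>>> verify_path(('users', 'alice'), is_collection=True)
Traceback (most recent call last):
    ...
ValueError: A collection must have an odd number of path elements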
<SYSTEM_TASK:>
Converts a native Python value into a Firestore protobuf ``Value``.
<END_TASK>
<USER_TASK:>
Description:
def encode_value(value):
"""Converts a native Python value into a Firestore protobuf ``Value``.
Args:
value (Union[NoneType, bool, int, float, datetime.datetime, \
str, bytes, dict, ~google.cloud.Firestore.GeoPoint]): A native
Python value to convert to a protobuf field.
Returns:
~google.cloud.firestore_v1beta1.types.Value: A
value encoded as a Firestore protobuf.
Raises:
TypeError: If the ``value`` is not one of the accepted types.
""" |
if value is None:
return document_pb2.Value(null_value=struct_pb2.NULL_VALUE)
# Must come before six.integer_types since ``bool`` is an integer subtype.
if isinstance(value, bool):
return document_pb2.Value(boolean_value=value)
if isinstance(value, six.integer_types):
return document_pb2.Value(integer_value=value)
if isinstance(value, float):
return document_pb2.Value(double_value=value)
if isinstance(value, DatetimeWithNanoseconds):
return document_pb2.Value(timestamp_value=value.timestamp_pb())
if isinstance(value, datetime.datetime):
return document_pb2.Value(timestamp_value=_datetime_to_pb_timestamp(value))
if isinstance(value, six.text_type):
return document_pb2.Value(string_value=value)
if isinstance(value, six.binary_type):
return document_pb2.Value(bytes_value=value)
# NOTE: We avoid doing an isinstance() check for a Document
# here to avoid import cycles.
document_path = getattr(value, "_document_path", None)
if document_path is not None:
return document_pb2.Value(reference_value=document_path)
if isinstance(value, GeoPoint):
return document_pb2.Value(geo_point_value=value.to_protobuf())
if isinstance(value, list):
value_list = [encode_value(element) for element in value]
value_pb = document_pb2.ArrayValue(values=value_list)
return document_pb2.Value(array_value=value_pb)
if isinstance(value, dict):
value_dict = encode_dict(value)
value_pb = document_pb2.MapValue(fields=value_dict)
return document_pb2.Value(map_value=value_pb)
raise TypeError(
"Cannot convert to a Firestore Value", value, "Invalid type", type(value)
) |
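A sketch of how a few native values map onto protobuf ``Value`` fields, following the branches above (the comments describe the field each call populates):
>>> encode_value(None)            # Value(null_value=NULL_VALUE)
>>> encode_value(True)            # Value(boolean_value=True); bool is checked before int
>>> encode_value(3.5)             # Value(double_value=3.5)
>>> encode_value({'a': [1, 2]})   # Value(map_value=...) containing a nested array_value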
<SYSTEM_TASK:>
Encode a dictionary into protobuf ``Value``-s.
<END_TASK>
<USER_TASK:>
Description:
def encode_dict(values_dict):
"""Encode a dictionary into protobuf ``Value``-s.
Args:
values_dict (dict): The dictionary to encode as protobuf fields.
Returns:
Dict[str, ~google.cloud.firestore_v1beta1.types.Value]: A
dictionary of string keys and ``Value`` protobufs as dictionary
values.
""" |
return {key: encode_value(value) for key, value in six.iteritems(values_dict)} |
<SYSTEM_TASK:>
Convert a reference value string to a document.
<END_TASK>
<USER_TASK:>
Description:
def reference_value_to_document(reference_value, client):
"""Convert a reference value string to a document.
Args:
reference_value (str): A document reference value.
client (~.firestore_v1beta1.client.Client): A client that has
a document factory.
Returns:
~.firestore_v1beta1.document.DocumentReference: The document
corresponding to ``reference_value``.
Raises:
ValueError: If the ``reference_value`` is not of the expected
format: ``projects/{project}/databases/{database}/documents/...``.
ValueError: If the ``reference_value`` does not come from the same
project / database combination as the ``client``.
""" |
# The first 5 parts are
# projects, {project}, databases, {database}, documents
parts = reference_value.split(DOCUMENT_PATH_DELIMITER, 5)
if len(parts) != 6:
msg = BAD_REFERENCE_ERROR.format(reference_value)
raise ValueError(msg)
# The sixth part is `a/b/c/d` (i.e. the document path)
document = client.document(parts[-1])
if document._document_path != reference_value:
msg = WRONG_APP_REFERENCE.format(reference_value, client._database_string)
raise ValueError(msg)
return document |
<SYSTEM_TASK:>
Converts a Firestore protobuf ``Value`` to a native Python value.
<END_TASK>
<USER_TASK:>
Description:
def decode_value(value, client):
"""Converts a Firestore protobuf ``Value`` to a native Python value.
Args:
value (google.cloud.firestore_v1beta1.types.Value): A
Firestore protobuf to be decoded / parsed / converted.
client (~.firestore_v1beta1.client.Client): A client that has
a document factory.
Returns:
Union[NoneType, bool, int, float, datetime.datetime, \
str, bytes, dict, ~google.cloud.Firestore.GeoPoint]: A native
Python value converted from the ``value``.
Raises:
NotImplementedError: If the ``value_type`` is ``reference_value``.
ValueError: If the ``value_type`` is unknown.
""" |
value_type = value.WhichOneof("value_type")
if value_type == "null_value":
return None
elif value_type == "boolean_value":
return value.boolean_value
elif value_type == "integer_value":
return value.integer_value
elif value_type == "double_value":
return value.double_value
elif value_type == "timestamp_value":
return DatetimeWithNanoseconds.from_timestamp_pb(value.timestamp_value)
elif value_type == "string_value":
return value.string_value
elif value_type == "bytes_value":
return value.bytes_value
elif value_type == "reference_value":
return reference_value_to_document(value.reference_value, client)
elif value_type == "geo_point_value":
return GeoPoint(value.geo_point_value.latitude, value.geo_point_value.longitude)
elif value_type == "array_value":
return [decode_value(element, client) for element in value.array_value.values]
elif value_type == "map_value":
return decode_dict(value.map_value.fields, client)
else:
raise ValueError("Unknown ``value_type``", value_type) |
<SYSTEM_TASK:>
Converts a protobuf map of Firestore ``Value``-s.
<END_TASK>
<USER_TASK:>
Description:
def decode_dict(value_fields, client):
"""Converts a protobuf map of Firestore ``Value``-s.
Args:
value_fields (google.protobuf.pyext._message.MessageMapContainer): A
protobuf map of Firestore ``Value``-s.
client (~.firestore_v1beta1.client.Client): A client that has
a document factory.
Returns:
Dict[str, Union[NoneType, bool, int, float, datetime.datetime, \
str, bytes, dict, ~google.cloud.Firestore.GeoPoint]]: A dictionary
of native Python values converted from the ``value_fields``.
""" |
return {
key: decode_value(value, client) for key, value in six.iteritems(value_fields)
} |
<SYSTEM_TASK:>
Parse a document ID from a document protobuf.
<END_TASK>
<USER_TASK:>
Description:
def get_doc_id(document_pb, expected_prefix):
"""Parse a document ID from a document protobuf.
Args:
document_pb (google.cloud.proto.firestore.v1beta1.\
document_pb2.Document): A protobuf for a document that
was created in a ``CreateDocument`` RPC.
expected_prefix (str): The expected collection prefix for the
fully-qualified document name.
Returns:
str: The document ID from the protobuf.
Raises:
ValueError: If the name does not begin with the prefix.
""" |
prefix, document_id = document_pb.name.rsplit(DOCUMENT_PATH_DELIMITER, 1)
if prefix != expected_prefix:
raise ValueError(
"Unexpected document name",
document_pb.name,
"Expected to begin with",
expected_prefix,
)
return document_id |
<SYSTEM_TASK:>
Do depth-first walk of tree, yielding field_path, value
<END_TASK>
<USER_TASK:>
Description:
def extract_fields(document_data, prefix_path, expand_dots=False):
"""Do depth-first walk of tree, yielding field_path, value""" |
if not document_data:
yield prefix_path, _EmptyDict
else:
for key, value in sorted(six.iteritems(document_data)):
if expand_dots:
sub_key = FieldPath.from_string(key)
else:
sub_key = FieldPath(key)
field_path = FieldPath(*(prefix_path.parts + sub_key.parts))
if isinstance(value, dict):
for s_path, s_value in extract_fields(value, field_path):
yield s_path, s_value
else:
yield field_path, value |
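A sketch of the depth-first walk, assuming an empty ``FieldPath()`` is a valid prefix (as the callers in this module use); the comments describe what is yielded:
>>> data = {'a': {'b': 1}, 'd': 2}
>>> pairs = list(extract_fields(data, FieldPath()))
>>> # pairs == [(FieldPath('a', 'b'), 1), (FieldPath('d'), 2)]
>>> # An empty nested dict is yielded as the _EmptyDict sentinel instead of a value.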
<SYSTEM_TASK:>
Set a value into a document for a field_path
<END_TASK>
<USER_TASK:>
Description:
def set_field_value(document_data, field_path, value):
"""Set a value into a document for a field_path""" |
current = document_data
for element in field_path.parts[:-1]:
current = current.setdefault(element, {})
if value is _EmptyDict:
value = {}
current[field_path.parts[-1]] = value |
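A short sketch of building up nested document data with ``set_field_value``; intermediate maps are created as needed and the ``_EmptyDict`` sentinel becomes an empty map:
>>> document_data = {}
>>> set_field_value(document_data, FieldPath('a', 'b'), 1)           # {'a': {'b': 1}}
>>> set_field_value(document_data, FieldPath('a', 'c'), _EmptyDict)  # {'a': {'b': 1, 'c': {}}}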
<SYSTEM_TASK:>
Get the transaction ID from a ``Transaction`` object.
<END_TASK>
<USER_TASK:>
Description:
def get_transaction_id(transaction, read_operation=True):
"""Get the transaction ID from a ``Transaction`` object.
Args:
transaction (Optional[~.firestore_v1beta1.transaction.\
Transaction]): An existing transaction that this query will
run in.
read_operation (Optional[bool]): Indicates if the transaction ID
will be used in a read operation. Defaults to :data:`True`.
Returns:
Optional[bytes]: The ID of the transaction, or :data:`None` if the
``transaction`` is :data:`None`.
Raises:
ValueError: If the ``transaction`` is not in progress (only if
``transaction`` is not :data:`None`).
ReadAfterWriteError: If the ``transaction`` has writes stored on
it and ``read_operation`` is :data:`True`.
""" |
if transaction is None:
return None
else:
if not transaction.in_progress:
raise ValueError(INACTIVE_TXN)
if read_operation and len(transaction._write_pbs) > 0:
raise ReadAfterWriteError(READ_AFTER_WRITE_ERROR)
return transaction.id |
<SYSTEM_TASK:>
Return a fully-qualified uptime_check_config string.
<END_TASK>
<USER_TASK:>
Description:
def uptime_check_config_path(cls, project, uptime_check_config):
"""Return a fully-qualified uptime_check_config string.""" |
return google.api_core.path_template.expand(
"projects/{project}/uptimeCheckConfigs/{uptime_check_config}",
project=project,
uptime_check_config=uptime_check_config,
) |
<SYSTEM_TASK:>
Creates a new uptime check configuration.
<END_TASK>
<USER_TASK:>
Description:
def create_uptime_check_config(
self,
parent,
uptime_check_config,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates a new uptime check configuration.
Example:
>>> from google.cloud import monitoring_v3
>>>
>>> client = monitoring_v3.UptimeCheckServiceClient()
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # TODO: Initialize `uptime_check_config`:
>>> uptime_check_config = {}
>>>
>>> response = client.create_uptime_check_config(parent, uptime_check_config)
Args:
parent (str): The project in which to create the uptime check. The format is
``projects/[PROJECT_ID]``.
uptime_check_config (Union[dict, ~google.cloud.monitoring_v3.types.UptimeCheckConfig]): The new uptime check configuration.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.monitoring_v3.types.UptimeCheckConfig`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.monitoring_v3.types.UptimeCheckConfig` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "create_uptime_check_config" not in self._inner_api_calls:
self._inner_api_calls[
"create_uptime_check_config"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_uptime_check_config,
default_retry=self._method_configs["CreateUptimeCheckConfig"].retry,
default_timeout=self._method_configs["CreateUptimeCheckConfig"].timeout,
client_info=self._client_info,
)
request = uptime_service_pb2.CreateUptimeCheckConfigRequest(
parent=parent, uptime_check_config=uptime_check_config
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_uptime_check_config"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Get information about document references.
<END_TASK>
<USER_TASK:>
Description:
def _reference_info(references):
"""Get information about document references.
Helper for :meth:`~.firestore_v1beta1.client.Client.get_all`.
Args:
references (List[.DocumentReference, ...]): Iterable of document
references.
Returns:
Tuple[List[str, ...], Dict[str, .DocumentReference]]: A two-tuple of
* fully-qualified documents paths for each reference in ``references``
* a mapping from the paths to the original reference. (If ``references``
contains multiple references to the same document, that key will be
overwritten in the result.)
""" |
document_paths = []
reference_map = {}
for reference in references:
doc_path = reference._document_path
document_paths.append(doc_path)
reference_map[doc_path] = reference
return document_paths, reference_map |
<SYSTEM_TASK:>
Get a document reference from a dictionary.
<END_TASK>
<USER_TASK:>
Description:
def _get_reference(document_path, reference_map):
"""Get a document reference from a dictionary.
This just wraps a simple dictionary look-up with a helpful error that is
specific to :meth:`~.firestore.client.Client.get_all`, the
**public** caller of this function.
Args:
document_path (str): A fully-qualified document path.
reference_map (Dict[str, .DocumentReference]): A mapping (produced
by :func:`_reference_info`) of fully-qualified document paths to
document references.
Returns:
.DocumentReference: The matching reference.
Raises:
ValueError: If ``document_path`` has not been encountered.
""" |
try:
return reference_map[document_path]
except KeyError:
msg = _BAD_DOC_TEMPLATE.format(document_path)
raise ValueError(msg) |
<SYSTEM_TASK:>
Parse a `BatchGetDocumentsResponse` protobuf.
<END_TASK>
<USER_TASK:>
Description:
def _parse_batch_get(get_doc_response, reference_map, client):
"""Parse a `BatchGetDocumentsResponse` protobuf.
Args:
get_doc_response (~google.cloud.proto.firestore.v1beta1.\
firestore_pb2.BatchGetDocumentsResponse): A single response (from
a stream) containing the "get" response for a document.
reference_map (Dict[str, .DocumentReference]): A mapping (produced
by :func:`_reference_info`) of fully-qualified document paths to
document references.
client (~.firestore_v1beta1.client.Client): A client that has
a document factory.
Returns:
[.DocumentSnapshot]: The retrieved snapshot.
Raises:
ValueError: If the response has a ``result`` field (a oneof) other
than ``found`` or ``missing``.
""" |
result_type = get_doc_response.WhichOneof("result")
if result_type == "found":
reference = _get_reference(get_doc_response.found.name, reference_map)
data = _helpers.decode_dict(get_doc_response.found.fields, client)
snapshot = DocumentSnapshot(
reference,
data,
exists=True,
read_time=get_doc_response.read_time,
create_time=get_doc_response.found.create_time,
update_time=get_doc_response.found.update_time,
)
elif result_type == "missing":
snapshot = DocumentSnapshot(
None,
None,
exists=False,
read_time=get_doc_response.read_time,
create_time=None,
update_time=None,
)
else:
raise ValueError(
"`BatchGetDocumentsResponse.result` (a oneof) had a field other "
"than `found` or `missing` set, or was unset"
)
return snapshot |
<SYSTEM_TASK:>
Lazy-loading getter GAPIC Firestore API.
<END_TASK>
<USER_TASK:>
Description:
def _firestore_api(self):
"""Lazy-loading getter GAPIC Firestore API.
Returns:
~.gapic.firestore.v1beta1.firestore_client.FirestoreClient: The
GAPIC client with the credentials of the current client.
""" |
if self._firestore_api_internal is None:
self._firestore_api_internal = firestore_client.FirestoreClient(
credentials=self._credentials
)
return self._firestore_api_internal |
<SYSTEM_TASK:>
The database string corresponding to this client's project.
<END_TASK>
<USER_TASK:>
Description:
def _database_string(self):
"""The database string corresponding to this client's project.
This value is lazy-loaded and cached.
Will be of the form
``projects/{project_id}/databases/{database_id}``
but ``database_id == '(default)'`` for the time being.
Returns:
str: The fully-qualified database string for the current
project. (The default database is also in this string.)
""" |
if self._database_string_internal is None:
# NOTE: database_root_path() is a classmethod, so we don't use
# self._firestore_api (it isn't necessary).
db_str = firestore_client.FirestoreClient.database_root_path(
self.project, self._database
)
self._database_string_internal = db_str
return self._database_string_internal |
<SYSTEM_TASK:>
The RPC metadata for this client's associated database.
<END_TASK>
<USER_TASK:>
Description:
def _rpc_metadata(self):
"""The RPC metadata for this client's associated database.
Returns:
Sequence[Tuple(str, str)]: RPC metadata with resource prefix
for the database associated with this client.
""" |
if self._rpc_metadata_internal is None:
self._rpc_metadata_internal = _helpers.metadata_with_prefix(
self._database_string
)
return self._rpc_metadata_internal |
<SYSTEM_TASK:>
Get a reference to a document in a collection.
<END_TASK>
<USER_TASK:>
Description:
def document(self, *document_path):
"""Get a reference to a document in a collection.
For a top-level document:
.. code-block:: python
>>> client.document('collek/shun')
>>> # is the same as
>>> client.document('collek', 'shun')
For a document in a sub-collection:
.. code-block:: python
>>> client.document('mydocs/doc/subcol/child')
>>> # is the same as
>>> client.document('mydocs', 'doc', 'subcol', 'child')
Documents in sub-collections can be nested deeper in a similar fashion.
Args:
document_path (Tuple[str, ...]): Can either be
* A single ``/``-delimited path to a document
* A tuple of document path segments
Returns:
~.firestore_v1beta1.document.DocumentReference: A reference
to a document in a collection.
""" |
if len(document_path) == 1:
path = document_path[0].split(_helpers.DOCUMENT_PATH_DELIMITER)
else:
path = document_path
return DocumentReference(*path, client=self) |
<SYSTEM_TASK:>
Create a write option for write operations.
<END_TASK>
<USER_TASK:>
Description:
def write_option(**kwargs):
"""Create a write option for write operations.
Write operations include :meth:`~.DocumentReference.set`,
:meth:`~.DocumentReference.update` and
:meth:`~.DocumentReference.delete`.
One of the following keyword arguments must be provided:
* ``last_update_time`` (:class:`google.protobuf.timestamp_pb2.\
Timestamp`): A timestamp. When set, the target document must
exist and have been last updated at that time. Protobuf
``update_time`` timestamps are typically returned from methods
that perform write operations as part of a "write result"
protobuf or directly.
* ``exists`` (:class:`bool`): Indicates if the document being modified
should already exist.
Providing no argument would make the option have no effect (so
it is not allowed). Providing multiple would be an apparent
contradiction, since ``last_update_time`` assumes that the
document **was** updated (it can't have been updated if it
doesn't exist) and ``exists`` indicates that it is unknown if the
document exists or not.
Args:
kwargs (Dict[str, Any]): The keyword arguments described above.
Raises:
TypeError: If anything other than exactly one argument is
provided by the caller.
""" |
if len(kwargs) != 1:
raise TypeError(_BAD_OPTION_ERR)
name, value = kwargs.popitem()
if name == "last_update_time":
return _helpers.LastUpdateOption(value)
elif name == "exists":
return _helpers.ExistsOption(value)
else:
extra = "{!r} was provided".format(name)
raise TypeError(_BAD_OPTION_ERR, extra) |
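A sketch of the cases described above, assuming ``client`` is a Firestore client and ``snapshot`` is the result of a prior read:
>>> option = client.write_option(exists=True)
>>> option = client.write_option(last_update_time=snapshot.update_time)
>>> client.write_option(exists=True, last_update_time=snapshot.update_time)   # raises TypeError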
<SYSTEM_TASK:>
Retrieve a batch of documents.
<END_TASK>
<USER_TASK:>
Description:
def get_all(self, references, field_paths=None, transaction=None):
"""Retrieve a batch of documents.
.. note::
Documents returned by this method are not guaranteed to be
returned in the same order that they are given in ``references``.
.. note::
If multiple ``references`` refer to the same document, the server
will only return one result.
See :meth:`~.firestore_v1beta1.client.Client.field_path` for
more information on **field paths**.
If a ``transaction`` is used and it already has write operations
added, this method cannot be used (i.e. read-after-write is not
allowed).
Args:
references (List[.DocumentReference, ...]): Iterable of document
references to be retrieved.
field_paths (Optional[Iterable[str, ...]]): An iterable of field
paths (``.``-delimited list of field names) to use as a
projection of document fields in the returned results. If
no value is provided, all fields will be returned.
transaction (Optional[~.firestore_v1beta1.transaction.\
Transaction]): An existing transaction that these
``references`` will be retrieved in.
Yields:
.DocumentSnapshot: The next document snapshot that fulfills the
query; a snapshot whose document does not exist has ``exists``
set to :data:`False`.
""" |
document_paths, reference_map = _reference_info(references)
mask = _get_doc_mask(field_paths)
response_iterator = self._firestore_api.batch_get_documents(
self._database_string,
document_paths,
mask,
transaction=_helpers.get_transaction_id(transaction),
metadata=self._rpc_metadata,
)
for get_doc_response in response_iterator:
yield _parse_batch_get(get_doc_response, reference_map, self) |
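A minimal ``get_all`` sketch; the document paths are placeholders, and snapshots for missing documents come back with ``exists`` set to :data:`False`:
>>> refs = [client.document('users', 'alice'), client.document('users', 'bob')]
>>> for snapshot in client.get_all(refs, field_paths=['name']):
...     if snapshot.exists:
...         print(snapshot.to_dict())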
<SYSTEM_TASK:>
List top-level collections of the client's database.
<END_TASK>
<USER_TASK:>
Description:
def collections(self):
"""List top-level collections of the client's database.
Returns:
Sequence[~.firestore_v1beta1.collection.CollectionReference]:
iterator of the top-level collections of the client's database.
""" |
iterator = self._firestore_api.list_collection_ids(
self._database_string, metadata=self._rpc_metadata
)
iterator.client = self
iterator.item_to_value = _item_to_collection_ref
return iterator |
<SYSTEM_TASK:>
Begin a transaction on the database.
<END_TASK>
<USER_TASK:>
Description:
def begin(self):
"""Begin a transaction on the database.
:rtype: bytes
:returns: the ID for the newly-begun transaction.
:raises ValueError:
if the transaction is already begun, committed, or rolled back.
""" |
if self._transaction_id is not None:
raise ValueError("Transaction already begun")
if self.committed is not None:
raise ValueError("Transaction already committed")
if self._rolled_back:
raise ValueError("Transaction is already rolled back")
database = self._session._database
api = database.spanner_api
metadata = _metadata_with_prefix(database.name)
txn_options = TransactionOptions(read_write=TransactionOptions.ReadWrite())
response = api.begin_transaction(
self._session.name, txn_options, metadata=metadata
)
self._transaction_id = response.id
return self._transaction_id |
<SYSTEM_TASK:>
Perform an ``ExecuteSql`` API request with DML.
<END_TASK>
<USER_TASK:>
Description:
def execute_update(self, dml, params=None, param_types=None, query_mode=None):
"""Perform an ``ExecuteSql`` API request with DML.
:type dml: str
:param dml: SQL DML statement
:type params: dict, {str -> column value}
:param params: values for parameter replacement. Keys must match
the names used in ``dml``.
:type param_types: dict[str -> Union[dict, .types.Type]]
:param param_types:
(Optional) maps explicit types for one or more param values;
required if parameters are passed.
:type query_mode:
:class:`google.cloud.spanner_v1.proto.ExecuteSqlRequest.QueryMode`
:param query_mode: Mode governing return of results / query plan. See
https://cloud.google.com/spanner/reference/rpc/google.spanner.v1#google.spanner.v1.ExecuteSqlRequest.QueryMode
:rtype: int
:returns: Count of rows affected by the DML statement.
""" |
params_pb = self._make_params_pb(params, param_types)
database = self._session._database
metadata = _metadata_with_prefix(database.name)
transaction = self._make_txn_selector()
api = database.spanner_api
response = api.execute_sql(
self._session.name,
dml,
transaction=transaction,
params=params_pb,
param_types=param_types,
query_mode=query_mode,
seqno=self._execute_sql_count,
metadata=metadata,
)
self._execute_sql_count += 1
return response.stats.row_count_exact |
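A sketch of a parameterized DML call, assuming ``transaction`` is an active transaction; the table and column names are placeholders, and ``param_types`` refers to the Spanner helper module:
>>> from google.cloud.spanner_v1 import param_types
>>> row_count = transaction.execute_update(
...     "UPDATE contacts SET email = @email WHERE contact_id = @contact_id",
...     params={'email': 'new@example.com', 'contact_id': 1},
...     param_types={'email': param_types.STRING, 'contact_id': param_types.INT64},
... )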
<SYSTEM_TASK:>
Perform a batch of DML statements via an ``ExecuteBatchDml`` request.
<END_TASK>
<USER_TASK:>
Description:
def batch_update(self, statements):
"""Perform a batch of DML statements via an ``ExecuteBatchDml`` request.
:type statements:
Sequence[Union[ str, Tuple[str, Dict[str, Any], Dict[str, Union[dict, .types.Type]]]]]
:param statements:
List of DML statements, with optional params / param types.
If passed, 'params' is a dict mapping names to the values
for parameter replacement. Keys must match the names used in the
corresponding DML statement. If 'params' is passed, 'param_types'
must also be passed, as a dict mapping names to the type of
value passed in 'params'.
:rtype:
Tuple(status, Sequence[int])
:returns:
Status code, plus counts of rows affected by each completed DML
statement. Note that if the status code is not ``OK``, the
statement triggering the error will not have an entry in the
list, nor will any statements following that one.
""" |
parsed = []
for statement in statements:
if isinstance(statement, str):
parsed.append({"sql": statement})
else:
dml, params, param_types = statement
params_pb = self._make_params_pb(params, param_types)
parsed.append(
{"sql": dml, "params": params_pb, "param_types": param_types}
)
database = self._session._database
metadata = _metadata_with_prefix(database.name)
transaction = self._make_txn_selector()
api = database.spanner_api
response = api.execute_batch_dml(
session=self._session.name,
transaction=transaction,
statements=parsed,
seqno=self._execute_sql_count,
metadata=metadata,
)
self._execute_sql_count += 1
row_counts = [
result_set.stats.row_count_exact for result_set in response.result_sets
]
return response.status, row_counts |
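A sketch mixing the two accepted statement forms (a bare SQL string and a ``(sql, params, param_types)`` tuple); the table and column names are placeholders:
>>> status, row_counts = transaction.batch_update([
...     "DELETE FROM contacts WHERE false",
...     ("INSERT INTO contacts (contact_id, email) VALUES (@id, @email)",
...      {'id': 1, 'email': 'a@example.com'},
...      {'id': param_types.INT64, 'email': param_types.STRING}),
... ])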
<SYSTEM_TASK:>
Return a fully-qualified organization_deidentify_template string.
<END_TASK>
<USER_TASK:>
Description:
def organization_deidentify_template_path(cls, organization, deidentify_template):
"""Return a fully-qualified organization_deidentify_template string.""" |
return google.api_core.path_template.expand(
"organizations/{organization}/deidentifyTemplates/{deidentify_template}",
organization=organization,
deidentify_template=deidentify_template,
) |
<SYSTEM_TASK:>
Return a fully-qualified project_deidentify_template string.
<END_TASK>
<USER_TASK:>
Description:
def project_deidentify_template_path(cls, project, deidentify_template):
"""Return a fully-qualified project_deidentify_template string.""" |
return google.api_core.path_template.expand(
"projects/{project}/deidentifyTemplates/{deidentify_template}",
project=project,
deidentify_template=deidentify_template,
) |
<SYSTEM_TASK:>
Return a fully-qualified organization_inspect_template string.
<END_TASK>
<USER_TASK:>
Description:
def organization_inspect_template_path(cls, organization, inspect_template):
"""Return a fully-qualified organization_inspect_template string.""" |
return google.api_core.path_template.expand(
"organizations/{organization}/inspectTemplates/{inspect_template}",
organization=organization,
inspect_template=inspect_template,
) |
<SYSTEM_TASK:>
Return a fully-qualified project_inspect_template string.
<END_TASK>
<USER_TASK:>
Description:
def project_inspect_template_path(cls, project, inspect_template):
"""Return a fully-qualified project_inspect_template string.""" |
return google.api_core.path_template.expand(
"projects/{project}/inspectTemplates/{inspect_template}",
project=project,
inspect_template=inspect_template,
) |
<SYSTEM_TASK:>
Return a fully-qualified project_job_trigger string.
<END_TASK>
<USER_TASK:>
Description:
def project_job_trigger_path(cls, project, job_trigger):
"""Return a fully-qualified project_job_trigger string.""" |
return google.api_core.path_template.expand(
"projects/{project}/jobTriggers/{job_trigger}",
project=project,
job_trigger=job_trigger,
) |
<SYSTEM_TASK:>
Return a fully-qualified dlp_job string.
<END_TASK>
<USER_TASK:>
Description:
def dlp_job_path(cls, project, dlp_job):
"""Return a fully-qualified dlp_job string.""" |
return google.api_core.path_template.expand(
"projects/{project}/dlpJobs/{dlp_job}", project=project, dlp_job=dlp_job
) |
<SYSTEM_TASK:>
Return a fully-qualified organization_stored_info_type string.
<END_TASK>
<USER_TASK:>
Description:
def organization_stored_info_type_path(cls, organization, stored_info_type):
"""Return a fully-qualified organization_stored_info_type string.""" |
return google.api_core.path_template.expand(
"organizations/{organization}/storedInfoTypes/{stored_info_type}",
organization=organization,
stored_info_type=stored_info_type,
) |
<SYSTEM_TASK:>
Return a fully-qualified project_stored_info_type string.
<END_TASK>
<USER_TASK:>
Description:
def project_stored_info_type_path(cls, project, stored_info_type):
"""Return a fully-qualified project_stored_info_type string.""" |
return google.api_core.path_template.expand(
"projects/{project}/storedInfoTypes/{stored_info_type}",
project=project,
stored_info_type=stored_info_type,
) |
<SYSTEM_TASK:>
Report error payload.
<END_TASK>
<USER_TASK:>
Description:
def report_error_event(self, error_report):
"""Report error payload.
:type error_report: dict
:param error_report:
dict payload of the error report formatted according to
https://cloud.google.com/error-reporting/docs/formatting-error-messages
This object should be built using
:meth:`~google.cloud.error_reporting.client._build_error_report`
""" |
logger = self.logging_client.logger("errors")
logger.log_struct(error_report) |
<SYSTEM_TASK:>
Returns a routing header string for the given request parameters.
<END_TASK>
<USER_TASK:>
Description:
def to_routing_header(params):
"""Returns a routing header string for the given request parameters.
Args:
params (Mapping[str, Any]): A dictionary containing the request
parameters used for routing.
Returns:
str: The routing header string.
""" |
if sys.version_info[0] < 3:
# Python 2 does not have the "safe" parameter for urlencode.
return urlencode(params).replace("%2F", "/")
return urlencode(
params,
# Per Google API policy (go/api-url-encoding), / is not encoded.
safe="/",
) |
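For example, with a single routing parameter the slash characters are left unescaped:
>>> to_routing_header({'name': 'projects/my-proj/instances/my-inst'})
'name=projects/my-proj/instances/my-inst'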
<SYSTEM_TASK:>
Construct a field description protobuf.
<END_TASK>
<USER_TASK:>
Description:
def StructField(name, field_type): # pylint: disable=invalid-name
"""Construct a field description protobuf.
:type name: str
:param name: the name of the field
:type field_type: :class:`type_pb2.Type`
:param field_type: the type of the field
:rtype: :class:`type_pb2.StructType.Field`
:returns: the appropriate struct-field-type protobuf
""" |
return type_pb2.StructType.Field(name=name, type=field_type) |
<SYSTEM_TASK:>
Construct a struct parameter type description protobuf.
<END_TASK>
<USER_TASK:>
Description:
def Struct(fields): # pylint: disable=invalid-name
"""Construct a struct parameter type description protobuf.
:type fields: list of :class:`type_pb2.StructType.Field`
:param fields: the fields of the struct
:rtype: :class:`type_pb2.Type`
:returns: the appropriate struct-type protobuf
""" |
return type_pb2.Type(
code=type_pb2.STRUCT, struct_type=type_pb2.StructType(fields=fields)
) |
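A sketch composing the two helpers above into a struct type with two placeholder fields:
>>> record_type = Struct([
...     StructField('name', type_pb2.Type(code=type_pb2.STRING)),
...     StructField('age', type_pb2.Type(code=type_pb2.INT64)),
... ])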
<SYSTEM_TASK:>
Perform bi-directional speech recognition.
<END_TASK>
<USER_TASK:>
Description:
def streaming_recognize(
self,
config,
requests,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
):
"""Perform bi-directional speech recognition.
This method allows you to receive results while sending audio;
it is only available via gRPC (not REST).
.. warning::
This method is EXPERIMENTAL. Its interface might change in the
future.
Example:
>>> from google.cloud.speech_v1 import enums
>>> from google.cloud.speech_v1 import SpeechClient
>>> from google.cloud.speech_v1 import types
>>> client = SpeechClient()
>>> config = types.StreamingRecognitionConfig(
... config=types.RecognitionConfig(
... encoding=enums.RecognitionConfig.AudioEncoding.FLAC,
... ),
... )
>>> request = types.StreamingRecognizeRequest(audio_content=b'...')
>>> requests = [request]
>>> for element in client.streaming_recognize(config, requests):
... # process element
... pass
Args:
config (:class:`~.types.StreamingRecognitionConfig`): The
configuration to use for the stream.
requests (Iterable[:class:`~.types.StreamingRecognizeRequest`]):
The input objects.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
Returns:
Iterable[:class:`~.types.StreamingRecognizeResponse`]
Raises:
:exc:`google.gax.errors.GaxError` if the RPC is aborted.
:exc:`ValueError` if the parameters are invalid.
""" |
return super(SpeechHelpers, self).streaming_recognize(
self._streaming_request_iterable(config, requests),
retry=retry,
timeout=timeout,
) |
<SYSTEM_TASK:>
A generator that yields the config followed by the requests.
<END_TASK>
<USER_TASK:>
Description:
def _streaming_request_iterable(self, config, requests):
"""A generator that yields the config followed by the requests.
Args:
config (~.speech_v1.types.StreamingRecognitionConfig): The
configuration to use for the stream.
requests (Iterable[~.speech_v1.types.StreamingRecognizeRequest]):
The input objects.
Returns:
Iterable[~.speech_v1.types.StreamingRecognizeRequest]: The
correctly formatted input for
:meth:`~.speech_v1.SpeechClient.streaming_recognize`.
""" |
yield self.types.StreamingRecognizeRequest(streaming_config=config)
for request in requests:
yield request |
<SYSTEM_TASK:>
Return a fully-qualified project_data_source string.
<END_TASK>
<USER_TASK:>
Description:
def project_data_source_path(cls, project, data_source):
"""Return a fully-qualified project_data_source string.""" |
return google.api_core.path_template.expand(
"projects/{project}/dataSources/{data_source}",
project=project,
data_source=data_source,
) |
<SYSTEM_TASK:>
Return a fully-qualified project_transfer_config string.
<END_TASK>
<USER_TASK:>
Description:
def project_transfer_config_path(cls, project, transfer_config):
"""Return a fully-qualified project_transfer_config string.""" |
return google.api_core.path_template.expand(
"projects/{project}/transferConfigs/{transfer_config}",
project=project,
transfer_config=transfer_config,
) |
<SYSTEM_TASK:>
Return a fully-qualified project_run string.
<END_TASK>
<USER_TASK:>
Description:
def project_run_path(cls, project, transfer_config, run):
"""Return a fully-qualified project_run string.""" |
return google.api_core.path_template.expand(
"projects/{project}/transferConfigs/{transfer_config}/runs/{run}",
project=project,
transfer_config=transfer_config,
run=run,
) |
<SYSTEM_TASK:>
Creates a new data transfer configuration.
<END_TASK>
<USER_TASK:>
Description:
def create_transfer_config(
self,
parent,
transfer_config,
authorization_code=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates a new data transfer configuration.
Example:
>>> from google.cloud import bigquery_datatransfer_v1
>>>
>>> client = bigquery_datatransfer_v1.DataTransferServiceClient()
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # TODO: Initialize `transfer_config`:
>>> transfer_config = {}
>>>
>>> response = client.create_transfer_config(parent, transfer_config)
Args:
parent (str): The BigQuery project id where the transfer configuration should be
created. Must be in the format
/projects/{project\_id}/locations/{location\_id}. If the specified location
and the location of the destination BigQuery dataset do not match, the
request will fail.
transfer_config (Union[dict, ~google.cloud.bigquery_datatransfer_v1.types.TransferConfig]): Data transfer configuration to create.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.bigquery_datatransfer_v1.types.TransferConfig`
authorization_code (str): Optional OAuth2 authorization code to use with this transfer
configuration. This is required if new credentials are needed, as
indicated by ``CheckValidCreds``. In order to obtain
authorization\_code, please make a request to
https://www.gstatic.com/bigquerydatatransfer/oauthz/auth?client\_id=&scope=<data\_source\_scopes>&redirect\_uri=<redirect\_uri>
- client\_id should be OAuth client\_id of BigQuery DTS API for the
given data source returned by ListDataSources method.
- data\_source\_scopes are the scopes returned by ListDataSources
method.
- redirect\_uri is an optional parameter. If not specified, then
authorization code is posted to the opener of authorization flow
window. Otherwise it will be sent to the redirect uri. A special
value of urn:ietf:wg:oauth:2.0:oob means that authorization code
should be returned in the title bar of the browser, with the page
text prompting the user to copy the code and paste it in the
application.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.bigquery_datatransfer_v1.types.TransferConfig` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "create_transfer_config" not in self._inner_api_calls:
self._inner_api_calls[
"create_transfer_config"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_transfer_config,
default_retry=self._method_configs["CreateTransferConfig"].retry,
default_timeout=self._method_configs["CreateTransferConfig"].timeout,
client_info=self._client_info,
)
request = datatransfer_pb2.CreateTransferConfigRequest(
parent=parent,
transfer_config=transfer_config,
authorization_code=authorization_code,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_transfer_config"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Updates a data transfer configuration.
<END_TASK>
<USER_TASK:>
Description:
def update_transfer_config(
self,
transfer_config,
update_mask,
authorization_code=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Updates a data transfer configuration.
All fields must be set, even if they are not updated.
Example:
>>> from google.cloud import bigquery_datatransfer_v1
>>>
>>> client = bigquery_datatransfer_v1.DataTransferServiceClient()
>>>
>>> # TODO: Initialize `transfer_config`:
>>> transfer_config = {}
>>>
>>> # TODO: Initialize `update_mask`:
>>> update_mask = {}
>>>
>>> response = client.update_transfer_config(transfer_config, update_mask)
Args:
transfer_config (Union[dict, ~google.cloud.bigquery_datatransfer_v1.types.TransferConfig]): Data transfer configuration to create.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.bigquery_datatransfer_v1.types.TransferConfig`
update_mask (Union[dict, ~google.cloud.bigquery_datatransfer_v1.types.FieldMask]): Required list of fields to be updated in this request.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.bigquery_datatransfer_v1.types.FieldMask`
authorization_code (str): Optional OAuth2 authorization code to use with this transfer
configuration. If it is provided, the transfer configuration will be
associated with the authorizing user. In order to obtain
authorization\_code, please make a request to
https://www.gstatic.com/bigquerydatatransfer/oauthz/auth?client\_id=&scope=<data\_source\_scopes>&redirect\_uri=<redirect\_uri>
- client\_id should be OAuth client\_id of BigQuery DTS API for the
given data source returned by ListDataSources method.
- data\_source\_scopes are the scopes returned by ListDataSources
method.
- redirect\_uri is an optional parameter. If not specified, then
authorization code is posted to the opener of authorization flow
window. Otherwise it will be sent to the redirect uri. A special
value of urn:ietf:wg:oauth:2.0:oob means that authorization code
should be returned in the title bar of the browser, with the page
text prompting the user to copy the code and paste it in the
application.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.bigquery_datatransfer_v1.types.TransferConfig` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "update_transfer_config" not in self._inner_api_calls:
self._inner_api_calls[
"update_transfer_config"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.update_transfer_config,
default_retry=self._method_configs["UpdateTransferConfig"].retry,
default_timeout=self._method_configs["UpdateTransferConfig"].timeout,
client_info=self._client_info,
)
request = datatransfer_pb2.UpdateTransferConfigRequest(
transfer_config=transfer_config,
update_mask=update_mask,
authorization_code=authorization_code,
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("transfer_config.name", transfer_config.name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["update_transfer_config"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Gets the most recent threat list diffs.
<END_TASK>
<USER_TASK:>
Description:
def compute_threat_list_diff(
self,
threat_type,
constraints,
version_token=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Gets the most recent threat list diffs.
Example:
>>> from google.cloud import webrisk_v1beta1
>>> from google.cloud.webrisk_v1beta1 import enums
>>>
>>> client = webrisk_v1beta1.WebRiskServiceV1Beta1Client()
>>>
>>> # TODO: Initialize `threat_type`:
>>> threat_type = enums.ThreatType.THREAT_TYPE_UNSPECIFIED
>>>
>>> # TODO: Initialize `constraints`:
>>> constraints = {}
>>>
>>> response = client.compute_threat_list_diff(threat_type, constraints)
Args:
threat_type (~google.cloud.webrisk_v1beta1.types.ThreatType): Required. The ThreatList to update.
constraints (Union[dict, ~google.cloud.webrisk_v1beta1.types.Constraints]): The constraints associated with this request.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.webrisk_v1beta1.types.Constraints`
version_token (bytes): The current version token of the client for the requested list (the
client version that was received from the last successful diff).
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.webrisk_v1beta1.types.ComputeThreatListDiffResponse` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "compute_threat_list_diff" not in self._inner_api_calls:
self._inner_api_calls[
"compute_threat_list_diff"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.compute_threat_list_diff,
default_retry=self._method_configs["ComputeThreatListDiff"].retry,
default_timeout=self._method_configs["ComputeThreatListDiff"].timeout,
client_info=self._client_info,
)
request = webrisk_pb2.ComputeThreatListDiffRequest(
threat_type=threat_type,
constraints=constraints,
version_token=version_token,
)
return self._inner_api_calls["compute_threat_list_diff"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
This method is used to check whether a URI is on a given threatList.
<END_TASK>
<USER_TASK:>
Description:
def search_uris(
self,
uri,
threat_types,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
This method is used to check whether a URI is on a given threatList.
Example:
>>> from google.cloud import webrisk_v1beta1
>>> from google.cloud.webrisk_v1beta1 import enums
>>>
>>> client = webrisk_v1beta1.WebRiskServiceV1Beta1Client()
>>>
>>> # TODO: Initialize `uri`:
>>> uri = ''
>>>
>>> # TODO: Initialize `threat_types`:
>>> threat_types = []
>>>
>>> response = client.search_uris(uri, threat_types)
Args:
uri (str): The URI to be checked for matches.
threat_types (list[~google.cloud.webrisk_v1beta1.types.ThreatType]): Required. The ThreatLists to search in.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.webrisk_v1beta1.types.SearchUrisResponse` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "search_uris" not in self._inner_api_calls:
self._inner_api_calls[
"search_uris"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.search_uris,
default_retry=self._method_configs["SearchUris"].retry,
default_timeout=self._method_configs["SearchUris"].timeout,
client_info=self._client_info,
)
request = webrisk_pb2.SearchUrisRequest(uri=uri, threat_types=threat_types)
return self._inner_api_calls["search_uris"](
request, retry=retry, timeout=timeout, metadata=metadata
) |
<SYSTEM_TASK:>
Gets the full hashes that match the requested hash prefix.
<END_TASK>
<USER_TASK:>
Description:
def search_hashes(
self,
hash_prefix=None,
threat_types=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Gets the full hashes that match the requested hash prefix.
This is used after a hash prefix is looked up in a threatList
and there is a match. The client side threatList only holds partial hashes
so the client must query this method to determine if there is a full
hash match of a threat.
Example:
>>> from google.cloud import webrisk_v1beta1
>>>
>>> client = webrisk_v1beta1.WebRiskServiceV1Beta1Client()
>>>
>>> response = client.search_hashes()
Args:
hash_prefix (bytes): A hash prefix, consisting of the most significant 4-32 bytes of a SHA256
hash. For JSON requests, this field is base64-encoded.
threat_types (list[~google.cloud.webrisk_v1beta1.types.ThreatType]): Required. The ThreatLists to search in.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will not
be retried.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.webrisk_v1beta1.types.SearchHashesResponse` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
""" |
# Wrap the transport method to add retry and timeout logic.
if "search_hashes" not in self._inner_api_calls:
self._inner_api_calls[
"search_hashes"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.search_hashes,
default_retry=self._method_configs["SearchHashes"].retry,
default_timeout=self._method_configs["SearchHashes"].timeout,
client_info=self._client_info,
)
request = webrisk_pb2.SearchHashesRequest(
hash_prefix=hash_prefix, threat_types=threat_types
)
return self._inner_api_calls["search_hashes"](
request, retry=retry, timeout=timeout, metadata=metadata
) |