Dataset columns (with per-column value statistics):

- code: string, lengths 26–870k
- docstring: string, lengths 1–65.6k
- func_name: string, lengths 1–194
- language: string, 1 class
- repo: string, lengths 8–68
- path: string, lengths 5–194
- url: string, lengths 46–254
- license: string, 4 classes
def gpu_type(self):
    """Specify the GPU type to be used in the tasks. If omitted, the job will run on any GPU type; optional"""
    return None
Specify the GPU type to be used in the tasks. If omitted, the job will run on any GPU type; optional
gpu_type
python
spotify/luigi
luigi/contrib/pai.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/pai.py
Apache-2.0
def retry_count(self):
    """Job retry count, no less than 0, optional"""
    return 0
Job retry count, no less than 0, optional
retry_count
python
spotify/luigi
luigi/contrib/pai.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/pai.py
Apache-2.0
def __init__(self, *args, **kwargs):
    """
    :param pai_url: The rest server url of PAI clusters, default is 'http://127.0.0.1:9186'.
    :param token: The token used to auth the rest server of PAI.
    """
    super(PaiTask, self).__init__(*args, **kwargs)
    self.__init_token()
:param pai_url: The rest server url of PAI clusters, default is 'http://127.0.0.1:9186'. :param token: The token used to auth the rest server of PAI.
__init__
python
spotify/luigi
luigi/contrib/pai.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/pai.py
Apache-2.0
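A minimal sketch of how these optional PaiTask knobs are meant to be overridden (assuming luigi is installed and a PAI REST endpoint is reachable; this sketch assumes each knob is exposed as a property on the upstream class, and omits any other members PaiTask may require):

from luigi.contrib.pai import PaiTask


class MyPaiJob(PaiTask):
    @property
    def gpu_type(self):
        return None  # run on any GPU type

    @property
    def retry_count(self):
        return 2  # retry the job up to twice on failure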
def auth_method(self):
    """
    This can be set to ``kubeconfig`` or ``service-account``.
    It defaults to ``kubeconfig``.

    For more details, please refer to:

    - kubeconfig: http://kubernetes.io/docs/user-guide/kubeconfig-file
    - service-account: http://kubernetes.io/docs/user-guide/service-accounts
    """
    return self.kubernetes_config.auth_method
This can be set to ``kubeconfig`` or ``service-account``. It defaults to ``kubeconfig``. For more details, please refer to: - kubeconfig: http://kubernetes.io/docs/user-guide/kubeconfig-file - service-account: http://kubernetes.io/docs/user-guide/service-accounts
auth_method
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def kubeconfig_path(self):
    """
    Path to kubeconfig file used for cluster authentication.
    It defaults to "~/.kube/config", which is the default location when using minikube
    (http://kubernetes.io/docs/getting-started-guides/minikube).
    When auth_method is ``service-account`` this property is ignored.

    **WARNING**: For Python versions < 3.5 kubeconfig must point to a Kubernetes API
    hostname, and NOT to an IP address.

    For more details, please refer to:
    http://kubernetes.io/docs/user-guide/kubeconfig-file
    """
    return self.kubernetes_config.kubeconfig_path
Path to kubeconfig file used for cluster authentication. It defaults to "~/.kube/config", which is the default location when using minikube (http://kubernetes.io/docs/getting-started-guides/minikube). When auth_method is ``service-account`` this property is ignored. **WARNING**: For Python versions < 3.5 kubeconfig must point to a Kubernetes API hostname, and NOT to an IP address. For more details, please refer to: http://kubernetes.io/docs/user-guide/kubeconfig-file
kubeconfig_path
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def kubernetes_namespace(self):
    """
    Namespace in Kubernetes where the job will run.
    It defaults to the default namespace in Kubernetes.

    For more details, please refer to:
    https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
    """
    return self.kubernetes_config.kubernetes_namespace
Namespace in Kubernetes where the job will run. It defaults to the default namespace in Kubernetes. For more details, please refer to: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
kubernetes_namespace
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def name(self):
    """
    A name for this job. This task will automatically append a UUID to
    the name before submitting to Kubernetes.
    """
    raise NotImplementedError("subclass must define name")
A name for this job. This task will automatically append a UUID to the name before submitting to Kubernetes.
name
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def labels(self):
    """
    Return custom labels for kubernetes job.

    example:: ``{"run_dt": datetime.date.today().strftime('%F')}``
    """
    return {}
Return custom labels for kubernetes job. example:: ``{"run_dt": datetime.date.today().strftime('%F')}``
labels
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def spec_schema(self):
    """
    Kubernetes Job spec schema in JSON format, an example follows.

    .. code-block:: javascript

        {
            "containers": [{
                "name": "pi",
                "image": "perl",
                "command": ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
            }],
            "restartPolicy": "Never"
        }

    **restartPolicy**

    - If restartPolicy is not defined, it will be set to "Never" by default.
    - **Warning**: restartPolicy=OnFailure will bypass max_retrials, and restart
      the container until success, with the risk of blocking the Luigi task.

    For more information please refer to:
    http://kubernetes.io/docs/user-guide/pods/multi-container/#the-spec-schema
    """
    raise NotImplementedError("subclass must define spec_schema")
Kubernetes Job spec schema in JSON format, an example follows. .. code-block:: javascript { "containers": [{ "name": "pi", "image": "perl", "command": ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] }], "restartPolicy": "Never" } **restartPolicy** - If restartPolicy is not defined, it will be set to "Never" by default. - **Warning**: restartPolicy=OnFailure will bypass max_retrials, and restart the container until success, with the risk of blocking the Luigi task. For more information please refer to: http://kubernetes.io/docs/user-guide/pods/multi-container/#the-spec-schema
spec_schema
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
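Putting the two abstract members together, a minimal sketch of a job task (assuming luigi[kubernetes] is installed and a cluster is configured; the image and command are lifted from the spec_schema docstring's example):

from luigi.contrib.kubernetes import KubernetesJobTask


class PerlPi(KubernetesJobTask):
    @property
    def name(self):
        return "pi"  # a UUID is appended automatically on submit

    @property
    def spec_schema(self):
        return {
            "containers": [{
                "name": "pi",
                "image": "perl",
                "command": ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
            }],
            "restartPolicy": "Never"
        }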
def max_retrials(self):
    """
    Maximum number of retries in case of failure.
    """
    return self.kubernetes_config.max_retrials
Maximum number of retries in case of failure.
max_retrials
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def backoff_limit(self):
    """
    Maximum number of retries before considering the job as failed.
    See: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#pod-backoff-failure-policy
    """
    return 6
Maximum number of retries before considering the job as failed. See: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#pod-backoff-failure-policy
backoff_limit
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def delete_on_success(self):
    """
    Delete the Kubernetes workload if the job has ended successfully.
    """
    return True
Delete the Kubernetes workload if the job has ended successfully.
delete_on_success
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def print_pod_logs_on_exit(self):
    """
    Fetch and print the pod logs once the job is completed.
    """
    return False
Fetch and print the pod logs once the job is completed.
print_pod_logs_on_exit
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def active_deadline_seconds(self):
    """
    Time allowed to successfully schedule pods.
    See: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup
    """
    return None
Time allowed to successfully schedule pods. See: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup
active_deadline_seconds
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def poll_interval(self):
    """How often to poll Kubernetes for job status, in seconds."""
    return self.__DEFAULT_POLL_INTERVAL
How often to poll Kubernetes for job status, in seconds.
poll_interval
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def pod_creation_wait_interal(self):
    """Delay (in seconds) for initial pod creation for a just-submitted job"""
    return self.__DEFAULT_POD_CREATION_INTERVAL
Delay (in seconds) for initial pod creation for a just-submitted job
pod_creation_wait_interal
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def __track_job(self):
    """Poll job status while active"""
    while not self.__verify_job_has_started():
        time.sleep(self.poll_interval)
        self.__logger.debug("Waiting for Kubernetes job " + self.uu_name + " to start")
    self.__print_kubectl_hints()

    status = self.__get_job_status()
    while status == "RUNNING":
        self.__logger.debug("Kubernetes job " + self.uu_name + " is running")
        time.sleep(self.poll_interval)
        status = self.__get_job_status()

    assert status != "FAILED", "Kubernetes job " + self.uu_name + " failed"

    # status == "SUCCEEDED"
    self.__logger.info("Kubernetes job " + self.uu_name + " succeeded")
    self.signal_complete()
Poll job status while active
__track_job
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def signal_complete(self):
    """Signal job completion for scheduler and dependent tasks.

    Touching a system file is an easy way to signal completion. example::

    .. code-block:: python

        with self.output().open('w') as output_file:
            output_file.write('')
    """
    pass
Signal job completion for scheduler and dependent tasks. Touching a system file is an easy way to signal completion. example:: .. code-block:: python with self.output().open('w') as output_file: output_file.write('')
signal_complete
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def __verify_job_has_started(self):
    """Asserts that the job has successfully started"""
    # Verify that the job started
    self.__get_job()

    # Verify that the pod started
    pods = self.__get_pods()
    if not pods:
        self.__logger.debug(
            'No pods found for %s, waiting for cluster state to match the job definition' % self.uu_name
        )
        time.sleep(self.pod_creation_wait_interal)
        pods = self.__get_pods()

    assert len(pods) > 0, "No pod scheduled by " + self.uu_name
    for pod in pods:
        status = pod.obj['status']
        for cont_stats in status.get('containerStatuses', []):
            if 'terminated' in cont_stats['state']:
                t = cont_stats['state']['terminated']
                err_msg = "Pod %s %s (exit code %d). Logs: `kubectl logs pod/%s`" % (
                    pod.name, t['reason'], t['exitCode'], pod.name)
                assert t['exitCode'] == 0, err_msg

            if 'waiting' in cont_stats['state']:
                wr = cont_stats['state']['waiting']['reason']
                assert wr == 'ContainerCreating', "Pod %s %s. Logs: `kubectl logs pod/%s`" % (
                    pod.name, wr, pod.name)

        for cond in status.get('conditions', []):
            if 'message' in cond:
                if cond['reason'] == 'ContainersNotReady':
                    return False
                assert cond['status'] != 'False', \
                    "[ERROR] %s - %s" % (cond['reason'], cond['message'])
    return True
Asserts that the job has successfully started
__verify_job_has_started
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def __get_job_status(self):
    """Return the Kubernetes job status"""
    # Figure out status and return it
    job = self.__get_job()

    if "succeeded" in job.obj["status"] and job.obj["status"]["succeeded"] > 0:
        job.scale(replicas=0)
        if self.print_pod_logs_on_exit:
            self.__print_pod_logs()
        if self.delete_on_success:
            self.__delete_job_cascade(job)
        return "SUCCEEDED"

    if "failed" in job.obj["status"]:
        failed_cnt = job.obj["status"]["failed"]
        self.__logger.debug("Kubernetes job " + self.uu_name + " status.failed: " + str(failed_cnt))
        if self.print_pod_logs_on_exit:
            self.__print_pod_logs()
        if failed_cnt > self.max_retrials:
            job.scale(replicas=0)  # avoid more retrials
            return "FAILED"
    return "RUNNING"
Return the Kubernetes job status
__get_job_status
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def output(self):
    """
    An output target is necessary for checking job completion unless an
    alternative complete method is defined.

    Example::

        return luigi.LocalTarget(os.path.join('/tmp', 'example'))
    """
    pass
An output target is necessary for checking job completion unless an alternative complete method is defined. Example:: return luigi.LocalTarget(os.path.join('/tmp', 'example'))
output
python
spotify/luigi
luigi/contrib/kubernetes.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/kubernetes.py
Apache-2.0
def main(args=sys.argv):
    """Run the work() method from the class instance in the file "job-instance.pickle".
    """
    try:
        tarball = "--no-tarball" not in args
        # Set up logging.
        logging.basicConfig(level=logging.WARN)
        work_dir = args[1]
        assert os.path.exists(work_dir), "First argument to sge_runner.py must be a directory that exists"
        project_dir = args[2]
        sys.path.append(project_dir)
        _do_work_on_compute_node(work_dir, tarball)
    except Exception as e:
        # Dump encoded data that we will try to fetch using mechanize
        print(e)
        raise
Run the work() method from the class instance in the file "job-instance.pickle".
main
python
spotify/luigi
luigi/contrib/sge_runner.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/sge_runner.py
Apache-2.0
def __init__(self, replace_pairs):
    """
    Initializes a MultiReplacer instance.

    :param replace_pairs: list of 2-tuples holding (string to be replaced, replacement string).
    :type replace_pairs: tuple
    """
    replace_list = list(replace_pairs)  # make a copy in case input is iterable
    self._replace_dict = dict(replace_list)
    pattern = '|'.join(re.escape(x) for x, y in replace_list)
    self._search_re = re.compile(pattern)
Initializes a MultiReplacer instance. :param replace_pairs: list of 2-tuples holding (string to be replaced, replacement string). :type replace_pairs: tuple
__init__
python
spotify/luigi
luigi/contrib/postgres.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py
Apache-2.0
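A self-contained sketch of the same multi-replace idea outside luigi (a hypothetical helper, not the library API): one alternation regex is compiled from the escaped search strings, and each match is mapped through a dict.

import re


def multi_replace(text, replace_pairs):
    # replace_pairs: iterable of (old, new) 2-tuples, as above
    replace_dict = dict(replace_pairs)
    pattern = re.compile('|'.join(re.escape(old) for old, _ in replace_pairs))
    return pattern.sub(lambda match: replace_dict[match.group(0)], text)


print(multi_replace('a\tb\nc', [('\t', '<TAB>'), ('\n', '<NL>')]))
# -> a<TAB>b<NL>c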
def __init__(
    self, host, database, user, password, table, update_id, port=None
):
    """
    Args:
        host (str): Postgres server address. Possibly a host:port string.
        database (str): Database name
        user (str): Database user
        password (str): Password for specified user
        table (str): Target table
        update_id (str): An identifier for this data set
        port (int): Postgres server port.
    """
    if ':' in host:
        self.host, self.port = host.split(':')
    else:
        self.host = host
        self.port = port or self.DEFAULT_DB_PORT
    self.database = database
    self.user = user
    self.password = password
    self.table = table
    self.update_id = update_id
Args: host (str): Postgres server address. Possibly a host:port string. database (str): Database name user (str): Database user password (str): Password for specified user table (str): Target table update_id (str): An identifier for this data set port (int): Postgres server port.
__init__
python
spotify/luigi
luigi/contrib/postgres.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py
Apache-2.0
def touch(self, connection=None):
    """
    Mark this update as complete.

    Important: If the marker table doesn't exist,
    the connection transaction will be aborted and the connection reset.
    Then the marker table will be created.
    """
    self.create_marker_table()

    if connection is None:
        # TODO: test this
        connection = self.connect()
        connection.autocommit = True  # if connection created here, we commit it here

    if self.use_db_timestamps:
        connection.cursor().execute(
            """INSERT INTO {marker_table} (update_id, target_table)
               VALUES (%s, %s)
            """.format(marker_table=self.marker_table),
            (self.update_id, self.table))
    else:
        connection.cursor().execute(
            """INSERT INTO {marker_table} (update_id, target_table, inserted)
               VALUES (%s, %s, %s);
            """.format(marker_table=self.marker_table),
            (self.update_id, self.table, datetime.datetime.now()))
Mark this update as complete. Important: If the marker table doesn't exist, the connection transaction will be aborted and the connection reset. Then the marker table will be created.
touch
python
spotify/luigi
luigi/contrib/postgres.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py
Apache-2.0
def connect(self):
    """
    Get a DBAPI 2.0 connection object to the database where the table is.
    """
    connection = dbapi.connect(
        host=self.host,
        port=self.port,
        database=self.database,
        user=self.user,
        password=self.password)
    connection.set_client_encoding('utf-8')
    return connection
Get a DBAPI 2.0 connection object to the database where the table is.
connect
python
spotify/luigi
luigi/contrib/postgres.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py
Apache-2.0
def create_marker_table(self):
    """
    Create marker table if it doesn't exist.

    Using a separate connection since the transaction might have to be reset.
    """
    connection = self.connect()
    connection.autocommit = True
    cursor = connection.cursor()
    if self.use_db_timestamps:
        sql = """
            CREATE TABLE {marker_table} (
                update_id TEXT PRIMARY KEY,
                target_table TEXT,
                inserted TIMESTAMP DEFAULT NOW())
            """.format(marker_table=self.marker_table)
    else:
        sql = """
            CREATE TABLE {marker_table} (
                update_id TEXT PRIMARY KEY,
                target_table TEXT,
                inserted TIMESTAMP);
            """.format(marker_table=self.marker_table)
    try:
        cursor.execute(sql)
    except dbapi.DatabaseError as e:
        if db_error_code(e) == ERROR_DUPLICATE_TABLE:
            pass
        else:
            raise
    connection.close()
Create marker table if it doesn't exist. Using a separate connection since the transaction might have to be reset.
create_marker_table
python
spotify/luigi
luigi/contrib/postgres.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py
Apache-2.0
def rows(self):
    """
    Return/yield tuples or lists corresponding to each row to be inserted.
    """
    with self.input().open('r') as fobj:
        for line in fobj:
            yield line.strip('\n').split('\t')
Return/yield tuples or lists corresponding to each row to be inserted.
rows
python
spotify/luigi
luigi/contrib/postgres.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py
Apache-2.0
def map_column(self, value):
    """
    Applied to each column of every row returned by `rows`.

    Default behaviour is to escape special characters and identify any self.null_values.
    """
    if value in self.null_values:
        return r'\\N'
    else:
        return default_escape(str(value))
Applied to each column of every row returned by `rows`. Default behaviour is to escape special characters and identify any self.null_values.
map_column
python
spotify/luigi
luigi/contrib/postgres.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py
Apache-2.0
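For context, an illustrative stand-in for the escaping step (the real default_escape lives in luigi.contrib.postgres; this hypothetical version shows why escaping matters for COPY: each row must stay on one line, with unescaped tabs only between fields):

def copy_escape(value):
    return (str(value)
            .replace('\\', '\\\\')   # escape backslash first
            .replace('\t', '\\t')    # tab is the field separator
            .replace('\n', '\\n')    # newline is the row separator
            .replace('\r', '\\r'))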
def output(self):
    """
    Returns a PostgresTarget representing the inserted dataset.

    Normally you don't override this.
    """
    return PostgresTarget(
        host=self.host,
        database=self.database,
        user=self.user,
        password=self.password,
        table=self.table,
        update_id=self.update_id,
        port=self.port
    )
Returns a PostgresTarget representing the inserted dataset. Normally you don't override this.
output
python
spotify/luigi
luigi/contrib/postgres.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py
Apache-2.0
def run(self):
    """
    Inserts data generated by rows() into target table.

    If the target table doesn't exist, self.create_table will be called to attempt to create the table.

    Normally you don't want to override this.
    """
    if not (self.table and self.columns):
        raise Exception("table and columns need to be specified")

    connection = self.output().connect()
    # transform all data generated by rows() using map_column and write data
    # to a temporary file for import using postgres COPY
    tmp_dir = luigi.configuration.get_config().get('postgres', 'local-tmp-dir', None)
    tmp_file = tempfile.TemporaryFile(dir=tmp_dir)
    n = 0
    for row in self.rows():
        n += 1
        if n % 100000 == 0:
            logger.info("Wrote %d lines", n)
        rowstr = self.column_separator.join(self.map_column(val) for val in row)
        rowstr += "\n"
        tmp_file.write(rowstr.encode('utf-8'))

    logger.info("Done writing, importing at %s", datetime.datetime.now())
    tmp_file.seek(0)

    # attempt to copy the data into postgres
    # if it fails because the target table doesn't exist
    # try to create it by running self.create_table
    for attempt in range(2):
        try:
            cursor = connection.cursor()
            self.init_copy(connection)
            self.copy(cursor, tmp_file)
            self.post_copy(connection)
            if self.enable_metadata_columns:
                self.post_copy_metacolumns(cursor)
        except dbapi.DatabaseError as e:
            if db_error_code(e) == ERROR_UNDEFINED_TABLE and attempt == 0:
                # if first attempt fails with "relation not found", try creating table
                logger.info("Creating table %s", self.table)
                # reset() is a psycopg2-specific method
                if hasattr(connection, 'reset'):
                    connection.reset()
                else:
                    _pg8000_connection_reset(connection)
                self.create_table(connection)
            else:
                raise
        else:
            break

    # mark as complete in same transaction
    self.output().touch(connection)

    # commit and clean up
    connection.commit()
    connection.close()
    tmp_file.close()
Inserts data generated by rows() into target table. If the target table doesn't exist, self.create_table will be called to attempt to create the table. Normally you don't want to override this.
run
python
spotify/luigi
luigi/contrib/postgres.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py
Apache-2.0
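A minimal sketch of a concrete CopyToTable task tying rows(), map_column() and run() together (assuming luigi and a reachable Postgres instance; the connection settings and the inlined rows are placeholders):

from luigi.contrib.postgres import CopyToTable


class LoadEvents(CopyToTable):
    host = 'localhost'
    database = 'analytics'
    user = 'etl'
    password = 'secret'
    table = 'events'
    columns = [('id', 'INT'), ('payload', 'TEXT')]

    def rows(self):
        # normally derived from self.input(); inlined for brevity
        yield (1, 'hello')
        yield (2, 'world')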
def output(self):
    """
    Returns a PostgresTarget representing the executed query.

    Normally you don't override this.
    """
    return PostgresTarget(
        host=self.host,
        database=self.database,
        user=self.user,
        password=self.password,
        table=self.table,
        update_id=self.update_id,
        port=self.port
    )
Returns a PostgresTarget representing the executed query. Normally you don't override this.
output
python
spotify/luigi
luigi/contrib/postgres.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/postgres.py
Apache-2.0
def __init__(self, host, port, index, doc_type, update_id,
             marker_index_hist_size=0, http_auth=None, timeout=10,
             extra_elasticsearch_args=None):
    """
    :param host: Elasticsearch server host
    :type host: str
    :param port: Elasticsearch server port
    :type port: int
    :param index: index name
    :type index: str
    :param doc_type: doctype name
    :type doc_type: str
    :param update_id: an identifier for this data set
    :type update_id: str
    :param marker_index_hist_size: number of changes to the index to remember
    :type marker_index_hist_size: int
    :param http_auth: optional http auth information
    :param timeout: Elasticsearch connection timeout
    :type timeout: int
    :param extra_elasticsearch_args: extra args for Elasticsearch
    :type extra_elasticsearch_args: dict
    """
    if extra_elasticsearch_args is None:
        extra_elasticsearch_args = {}
    self.host = host
    self.port = port
    self.http_auth = http_auth
    self.index = index
    self.doc_type = doc_type
    self.update_id = update_id
    self.marker_index_hist_size = marker_index_hist_size
    self.timeout = timeout
    self.extra_elasticsearch_args = extra_elasticsearch_args
    self.es = elasticsearch.Elasticsearch(
        connection_class=Urllib3HttpConnection,
        host=self.host,
        port=self.port,
        http_auth=self.http_auth,
        timeout=self.timeout,
        **self.extra_elasticsearch_args
    )
:param host: Elasticsearch server host :type host: str :param port: Elasticsearch server port :type port: int :param index: index name :type index: str :param doc_type: doctype name :type doc_type: str :param update_id: an identifier for this data set :type update_id: str :param marker_index_hist_size: number of changes to the index to remember :type marker_index_hist_size: int :param http_auth: optional http auth information :param timeout: Elasticsearch connection timeout :type timeout: int :param extra_elasticsearch_args: extra args for Elasticsearch :type extra_elasticsearch_args: dict
__init__
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def marker_index_document_id(self):
    """
    Generate an id for the indicator document.
    """
    params = '%s:%s:%s' % (self.index, self.doc_type, self.update_id)
    return hashlib.sha1(params.encode('utf-8')).hexdigest()
Generate an id for the indicator document.
marker_index_document_id
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
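The indicator-document id is just a SHA-1 digest over the three identifying fields; a standalone equivalent of the computation above:

import hashlib


def marker_id(index, doc_type, update_id):
    params = '%s:%s:%s' % (index, doc_type, update_id)
    return hashlib.sha1(params.encode('utf-8')).hexdigest()


print(marker_id('myindex', 'default', 'task_2024'))  # 40-char hex digest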
def touch(self):
    """
    Mark this update as complete.

    The document id would be sufficient but, for documentation,
    we index the parameters `update_id`, `target_index`, `target_doc_type` and
    `date` as well.
    """
    self.create_marker_index()
    self.es.index(index=self.marker_index, doc_type=self.marker_doc_type,
                  id=self.marker_index_document_id(), body={
                      'update_id': self.update_id,
                      'target_index': self.index,
                      'target_doc_type': self.doc_type,
                      'date': datetime.datetime.now()})
    self.es.indices.flush(index=self.marker_index)
    self.ensure_hist_size()
Mark this update as complete. The document id would be sufficient but, for documentation, we index the parameters `update_id`, `target_index`, `target_doc_type` and `date` as well.
touch
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def exists(self):
    """
    Test if this task has been run.
    """
    try:
        self.es.get(index=self.marker_index,
                    doc_type=self.marker_doc_type,
                    id=self.marker_index_document_id())
        return True
    except elasticsearch.NotFoundError:
        logger.debug('Marker document not found.')
    except elasticsearch.ElasticsearchException as err:
        logger.warn(err)
    return False
Test if this task has been run.
exists
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def create_marker_index(self):
    """
    Create the index that will keep track of the tasks if necessary.
    """
    if not self.es.indices.exists(index=self.marker_index):
        self.es.indices.create(index=self.marker_index)
Create the index that will keep track of the tasks if necessary.
create_marker_index
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def ensure_hist_size(self):
    """
    Shrink the history of updates for an `index/doc_type` combination
    down to `self.marker_index_hist_size`.
    """
    if self.marker_index_hist_size == 0:
        return
    result = self.es.search(index=self.marker_index,
                            doc_type=self.marker_doc_type,
                            body={'query': {
                                'term': {'target_index': self.index}}},
                            sort=('date:desc',))

    for i, hit in enumerate(result.get('hits').get('hits'), start=1):
        if i > self.marker_index_hist_size:
            marker_document_id = hit.get('_id')
            self.es.delete(id=marker_document_id, index=self.marker_index,
                           doc_type=self.marker_doc_type)
    self.es.indices.flush(index=self.marker_index)
Shrink the history of updates for an `index/doc_type` combination down to `self.marker_index_hist_size`.
ensure_hist_size
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def host(self):
    """
    ES hostname.
    """
    return 'localhost'
ES hostname.
host
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def port(self):
    """
    ES port.
    """
    return 9200
ES port.
port
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def http_auth(self):
    """
    ES optional http auth information as either ':' separated string or a tuple,
    e.g. `('user', 'pass')` or `"user:pass"`.
    """
    return None
ES optional http auth information as either ‘:’ separated string or a tuple, e.g. `('user', 'pass')` or `"user:pass"`.
http_auth
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def index(self):
    """
    The target index. May exist or not.
    """
    return None
The target index. May exist or not.
index
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def doc_type(self):
    """
    The target doc_type.
    """
    return 'default'
The target doc_type.
doc_type
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def mapping(self):
    """
    Dictionary with custom mapping or `None`.
    """
    return None
Dictionary with custom mapping or `None`.
mapping
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def settings(self):
    """
    Settings to be used at index creation time.
    """
    return {'settings': {}}
Settings to be used at index creation time.
settings
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def chunk_size(self):
    """
    Single API call for this number of docs.
    """
    return 2000
Single API call for this number of docs.
chunk_size
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def raise_on_error(self):
    """
    Whether to fail fast.
    """
    return True
Whether to fail fast.
raise_on_error
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def purge_existing_index(self):
    """
    Whether to delete the `index` completely before any indexing.
    """
    return False
Whether to delete the `index` completely before any indexing.
purge_existing_index
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def marker_index_hist_size(self):
    """
    Number of event log entries in the marker index. 0: unlimited.
    """
    return 0
Number of event log entries in the marker index. 0: unlimited.
marker_index_hist_size
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def timeout(self):
    """
    Timeout.
    """
    return 10
Timeout.
timeout
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def extra_elasticsearch_args(self):
    """
    Extra arguments to pass to the Elasticsearch constructor
    """
    return {}
Extra arguments to pass to the Elasticsearch constructor
extra_elasticsearch_args
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def docs(self):
    """
    Return the documents to be indexed.

    Besides the user-defined fields, the document may contain an `_index`, `_type` and `_id`.
    """
    with self.input().open('r') as fobj:
        for line in fobj:
            yield line
Return the documents to be indexed. Besides the user-defined fields, the document may contain an `_index`, `_type` and `_id`.
docs
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def _docs(self):
    """
    Since `self.docs` may yield documents that do not explicitly contain `_index` or `_type`,
    add those attributes here, if necessary.
    """
    iterdocs = iter(self.docs())
    first = next(iterdocs)
    needs_parsing = False
    if isinstance(first, str):
        needs_parsing = True
    elif isinstance(first, dict):
        pass
    else:
        raise RuntimeError('Documents must be either JSON strings or dicts.')
    for doc in itertools.chain([first], iterdocs):
        if needs_parsing:
            doc = json.loads(doc)
        if '_index' not in doc:
            doc['_index'] = self.index
        if '_type' not in doc:
            doc['_type'] = self.doc_type
        yield doc
Since `self.docs` may yield documents that do not explicitly contain `_index` or `_type`, add those attributes here, if necessary.
_docs
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
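A standalone sketch of the normalization _docs performs, with the parsing decision applied per document rather than inferred once from the first one (a simplification for illustration, not the library code):

import json


def normalize(docs, default_index, default_doc_type):
    for doc in docs:
        if isinstance(doc, str):
            doc = json.loads(doc)
        doc.setdefault('_index', default_index)
        doc.setdefault('_type', default_doc_type)
        yield doc


print(list(normalize(['{"user": "a"}', {'user': 'b', '_index': 'x'}],
                     'myindex', 'default')))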
def create_index(self):
    """
    Override to provide code for creating the target index.

    By default it will be created without any special settings or mappings.
    """
    es = self._init_connection()
    if not es.indices.exists(index=self.index):
        es.indices.create(index=self.index, body=self.settings)
Override to provide code for creating the target index. By default it will be created without any special settings or mappings.
create_index
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def delete_index(self):
    """
    Delete the index, if it exists.
    """
    es = self._init_connection()
    if es.indices.exists(index=self.index):
        es.indices.delete(index=self.index)
Delete the index, if it exists.
delete_index
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def update_id(self):
    """
    This id will be a unique identifier for this indexing task.
    """
    return self.task_id
This id will be a unique identifier for this indexing task.
update_id
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def output(self):
    """
    Returns an ElasticsearchTarget representing the inserted dataset.

    Normally you don't override this.
    """
    return ElasticsearchTarget(
        host=self.host,
        port=self.port,
        http_auth=self.http_auth,
        index=self.index,
        doc_type=self.doc_type,
        update_id=self.update_id(),
        marker_index_hist_size=self.marker_index_hist_size,
        timeout=self.timeout,
        extra_elasticsearch_args=self.extra_elasticsearch_args
    )
Returns an ElasticsearchTarget representing the inserted dataset. Normally you don't override this.
output
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
def run(self):
    """
    Run task, namely:

    * purge existing index, if requested (`purge_existing_index`),
    * create the index, if missing,
    * apply mappings, if given,
    * set refresh interval to -1 (disable) for performance reasons,
    * bulk index in batches of size `chunk_size` (2000),
    * set refresh interval to 1s,
    * refresh Elasticsearch,
    * create entry in marker index.
    """
    if self.purge_existing_index:
        self.delete_index()
    self.create_index()
    es = self._init_connection()
    if self.mapping:
        es.indices.put_mapping(index=self.index,
                               doc_type=self.doc_type,
                               body=self.mapping)
    es.indices.put_settings({"index": {"refresh_interval": "-1"}},
                            index=self.index)

    bulk(es, self._docs(), chunk_size=self.chunk_size,
         raise_on_error=self.raise_on_error)

    es.indices.put_settings({"index": {"refresh_interval": "1s"}},
                            index=self.index)
    es.indices.refresh()
    self.output().touch()
Run task, namely: * purge existing index, if requested (`purge_existing_index`), * create the index, if missing, * apply mappings, if given, * set refresh interval to -1 (disable) for performance reasons, * bulk index in batches of size `chunk_size` (2000), * set refresh interval to 1s, * refresh Elasticsearch, * create entry in marker index.
run
python
spotify/luigi
luigi/contrib/esindex.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/esindex.py
Apache-2.0
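A minimal sketch of a concrete CopyToIndex task exercising the properties above (assuming luigi and the elasticsearch client are installed, with a node on the default localhost:9200; the index name and inlined docs are placeholders):

from luigi.contrib.esindex import CopyToIndex


class IndexUsers(CopyToIndex):
    index = 'users'
    purge_existing_index = True  # rebuild the index from scratch each run

    def docs(self):
        # normally read from self.input(); inlined for brevity
        yield {'name': 'alice', '_id': 1}
        yield {'name': 'bob', '_id': 2}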
def dataset_exists(self, dataset):
    """Returns whether the given dataset exists.
    If a regional location is specified for the dataset, it is also checked for
    compatibility with the remote dataset; an exception is raised on mismatch.

    :param dataset:
    :type dataset: BQDataset
    """
    try:
        response = self.client.datasets().get(projectId=dataset.project_id,
                                              datasetId=dataset.dataset_id).execute()
        if dataset.location is not None:
            fetched_location = response.get('location')
            if dataset.location != fetched_location:
                raise Exception('''Dataset already exists with regional location {}. Can't use {}.'''.format(
                    fetched_location if fetched_location is not None else 'unspecified',
                    dataset.location))
    except http.HttpError as ex:
        if ex.resp.status == 404:
            return False
        raise
    return True
Returns whether the given dataset exists. If a regional location is specified for the dataset, it is also checked for compatibility with the remote dataset; an exception is raised on mismatch. :param dataset: :type dataset: BQDataset
dataset_exists
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def table_exists(self, table):
    """Returns whether the given table exists.

    :param table:
    :type table: BQTable
    """
    if not self.dataset_exists(table.dataset):
        return False

    try:
        self.client.tables().get(projectId=table.project_id,
                                 datasetId=table.dataset_id,
                                 tableId=table.table_id).execute()
    except http.HttpError as ex:
        if ex.resp.status == 404:
            return False
        raise

    return True
Returns whether the given table exists. :param table: :type table: BQTable
table_exists
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def make_dataset(self, dataset, raise_if_exists=False, body=None):
    """Creates a new dataset with the default permissions.

    :param dataset:
    :type dataset: BQDataset
    :param raise_if_exists: whether to raise an exception if the dataset already exists.
    :raises luigi.target.FileAlreadyExists: if raise_if_exists=True and the dataset exists
    """
    if body is None:
        body = {}

    try:
        # Construct a message body in the format required by
        # https://developers.google.com/resources/api-libraries/documentation/bigquery/v2/python/latest/bigquery_v2.datasets.html#insert
        body['datasetReference'] = {
            'projectId': dataset.project_id,
            'datasetId': dataset.dataset_id
        }
        if dataset.location is not None:
            body['location'] = dataset.location
        self.client.datasets().insert(projectId=dataset.project_id, body=body).execute()
    except http.HttpError as ex:
        if ex.resp.status == 409:
            if raise_if_exists:
                raise luigi.target.FileAlreadyExists()
        else:
            raise
Creates a new dataset with the default permissions. :param dataset: :type dataset: BQDataset :param raise_if_exists: whether to raise an exception if the dataset already exists. :raises luigi.target.FileAlreadyExists: if raise_if_exists=True and the dataset exists
make_dataset
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def delete_dataset(self, dataset, delete_nonempty=True):
    """Deletes a dataset (and optionally any tables in it), if it exists.

    :param dataset:
    :type dataset: BQDataset
    :param delete_nonempty: if true, will delete any tables before deleting the dataset
    """
    if not self.dataset_exists(dataset):
        return

    self.client.datasets().delete(projectId=dataset.project_id,
                                  datasetId=dataset.dataset_id,
                                  deleteContents=delete_nonempty).execute()
Deletes a dataset (and optionally any tables in it), if it exists. :param dataset: :type dataset: BQDataset :param delete_nonempty: if true, will delete any tables before deleting the dataset
delete_dataset
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def delete_table(self, table):
    """Deletes a table, if it exists.

    :param table:
    :type table: BQTable
    """
    if not self.table_exists(table):
        return

    self.client.tables().delete(projectId=table.project_id,
                                datasetId=table.dataset_id,
                                tableId=table.table_id).execute()
Deletes a table, if it exists. :param table: :type table: BQTable
delete_table
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def list_datasets(self, project_id):
    """Returns the list of datasets in a given project.

    :param project_id:
    :type project_id: str
    """
    request = self.client.datasets().list(projectId=project_id,
                                          maxResults=1000)
    response = request.execute()

    while response is not None:
        for ds in response.get('datasets', []):
            yield ds['datasetReference']['datasetId']

        request = self.client.datasets().list_next(request, response)
        if request is None:
            break

        response = request.execute()
Returns the list of datasets in a given project. :param project_id: :type project_id: str
list_datasets
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def list_tables(self, dataset):
    """Returns the list of tables in a given dataset.

    :param dataset:
    :type dataset: BQDataset
    """
    request = self.client.tables().list(projectId=dataset.project_id,
                                        datasetId=dataset.dataset_id,
                                        maxResults=1000)
    response = request.execute()

    while response is not None:
        for t in response.get('tables', []):
            yield t['tableReference']['tableId']

        request = self.client.tables().list_next(request, response)
        if request is None:
            break

        response = request.execute()
Returns the list of tables in a given dataset. :param dataset: :type dataset: BQDataset
list_tables
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def get_view(self, table):
    """Returns the SQL query for a view, or None if it doesn't exist or is not a view.

    :param table: The table containing the view.
    :type table: BQTable
    """
    request = self.client.tables().get(projectId=table.project_id,
                                       datasetId=table.dataset_id,
                                       tableId=table.table_id)
    try:
        response = request.execute()
    except http.HttpError as ex:
        if ex.resp.status == 404:
            return None
        raise

    return response['view']['query'] if 'view' in response else None
Returns the SQL query for a view, or None if it doesn't exist or is not a view. :param table: The table containing the view. :type table: BQTable
get_view
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def update_view(self, table, view):
    """Updates the SQL query for a view.

    If the output table exists, it is replaced with the supplied view query.
    Otherwise a new table is created with this view.

    :param table: The table to contain the view.
    :type table: BQTable
    :param view: The SQL query for the view.
    :type view: str
    """
    body = {
        'tableReference': {
            'projectId': table.project_id,
            'datasetId': table.dataset_id,
            'tableId': table.table_id
        },
        'view': {
            'query': view
        }
    }

    if self.table_exists(table):
        self.client.tables().update(projectId=table.project_id,
                                    datasetId=table.dataset_id,
                                    tableId=table.table_id,
                                    body=body).execute()
    else:
        self.client.tables().insert(projectId=table.project_id,
                                    datasetId=table.dataset_id,
                                    body=body).execute()
Updates the SQL query for a view. If the output table exists, it is replaced with the supplied view query. Otherwise a new table is created with this view. :param table: The table to contain the view. :type table: BQTable :param view: The SQL query for the view. :type view: str
update_view
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def run_job(self, project_id, body, dataset=None):
    """Runs a BigQuery "job". See the documentation for the format of body.

    .. note:: You probably don't need to use this directly. Use the
        tasks defined below.

    :param dataset:
    :type dataset: BQDataset
    :return: the job id of the job.
    :rtype: str
    :raises luigi.contrib.BigQueryExecutionError: if the job fails.
    """
    if dataset and not self.dataset_exists(dataset):
        self.make_dataset(dataset)

    new_job = self.client.jobs().insert(projectId=project_id, body=body).execute()
    job_id = new_job['jobReference']['jobId']
    logger.info('Started import job %s:%s', project_id, job_id)
    while True:
        status = self.client.jobs().get(projectId=project_id, jobId=job_id).execute(num_retries=10)
        if status['status']['state'] == 'DONE':
            if status['status'].get('errorResult'):
                raise BigQueryExecutionError(job_id, status['status']['errorResult'])
            return job_id

        logger.info('Waiting for job %s:%s to complete...', project_id, job_id)
        time.sleep(5)
Runs a BigQuery "job". See the documentation for the format of body. .. note:: You probably don't need to use this directly. Use the tasks defined below. :param dataset: :type dataset: BQDataset :return: the job id of the job. :rtype: str :raises luigi.contrib.BigQueryExecutionError: if the job fails.
run_job
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
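run_job takes a raw BigQuery v2 jobs.insert body; as a reference, a sketch of a copy-job body in that format (project/dataset/table ids are placeholders), mirroring what the copy() method below constructs:

job_body = {
    'configuration': {
        'copy': {
            'sourceTable': {
                'projectId': 'my-project',
                'datasetId': 'staging',
                'tableId': 'events',
            },
            'destinationTable': {
                'projectId': 'my-project',
                'datasetId': 'prod',
                'tableId': 'events',
            },
            'createDisposition': 'CREATE_IF_NEEDED',
            'writeDisposition': 'WRITE_TRUNCATE',
        }
    }
}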
def copy(self,
         source_table,
         dest_table,
         create_disposition=CreateDisposition.CREATE_IF_NEEDED,
         write_disposition=WriteDisposition.WRITE_TRUNCATE):
    """Copies (or appends) a table to another table.

    :param source_table:
    :type source_table: BQTable
    :param dest_table:
    :type dest_table: BQTable
    :param create_disposition: whether to create the table if needed
    :type create_disposition: CreateDisposition
    :param write_disposition: whether to append/truncate/fail if the table exists
    :type write_disposition: WriteDisposition
    """
    job = {
        "configuration": {
            "copy": {
                "sourceTable": {
                    "projectId": source_table.project_id,
                    "datasetId": source_table.dataset_id,
                    "tableId": source_table.table_id,
                },
                "destinationTable": {
                    "projectId": dest_table.project_id,
                    "datasetId": dest_table.dataset_id,
                    "tableId": dest_table.table_id,
                },
                "createDisposition": create_disposition,
                "writeDisposition": write_disposition,
            }
        }
    }

    self.run_job(dest_table.project_id, job, dataset=dest_table.dataset)
Copies (or appends) a table to another table. :param source_table: :type source_table: BQTable :param dest_table: :type dest_table: BQTable :param create_disposition: whether to create the table if needed :type create_disposition: CreateDisposition :param write_disposition: whether to append/truncate/fail if the table exists :type write_disposition: WriteDisposition
copy
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def from_bqtable(cls, table, client=None):
    """A constructor that takes a :py:class:`BQTable`.

    :param table:
    :type table: BQTable
    """
    return cls(table.project_id, table.dataset_id, table.table_id, client=client)
A constructor that takes a :py:class:`BQTable`. :param table: :type table: BQTable
from_bqtable
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def source_format(self):
    """The source format to use (see :py:class:`SourceFormat`)."""
    return SourceFormat.NEWLINE_DELIMITED_JSON
The source format to use (see :py:class:`SourceFormat`).
source_format
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def encoding(self):
    """The encoding of the data that is going to be loaded (see :py:class:`Encoding`)."""
    return Encoding.UTF_8
The encoding of the data that is going to be loaded (see :py:class:`Encoding`).
encoding
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def write_disposition(self):
    """What to do if the table already exists. By default this will fail the job.

    See :py:class:`WriteDisposition`"""
    return WriteDisposition.WRITE_EMPTY
What to do if the table already exists. By default this will fail the job. See :py:class:`WriteDisposition`
write_disposition
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def schema(self):
    """Schema in the format defined at
    https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.load.schema.

    If the value is falsy, it is omitted and inferred by BigQuery."""
    return []
Schema in the format defined at https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.load.schema. If the value is falsy, it is omitted and inferred by BigQuery.
schema
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def max_bad_records(self):
    """The maximum number of bad records that BigQuery can ignore when reading data.

    If the number of bad records exceeds this value, an invalid error is returned in the job result."""
    return 0
The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result.
max_bad_records
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def field_delimiter(self):
    """The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character."""
    return FieldDelimiter.COMMA
The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character.
field_delimiter
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def source_uris(self):
    """The fully-qualified URIs that point to your data in Google Cloud Storage.

    Each URI can contain one '*' wildcard character and it must come after the 'bucket' name."""
    return [x.path for x in luigi.task.flatten(self.input())]
The fully-qualified URIs that point to your data in Google Cloud Storage. Each URI can contain one '*' wildcard character and it must come after the 'bucket' name.
source_uris
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def skip_leading_rows(self):
    """The number of rows at the top of a CSV file that BigQuery will skip when loading the data.

    The default value is 0. This property is useful if you have header rows in the file
    that should be skipped."""
    return 0
The number of rows at the top of a CSV file that BigQuery will skip when loading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped.
skip_leading_rows
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def allow_jagged_rows(self):
    """Accept rows that are missing trailing optional columns. The missing values are treated as nulls.

    If false, records with missing trailing columns are treated as bad records, and if there are
    too many bad records, an invalid error is returned in the job result. The default value is false.

    Only applicable to CSV, ignored for other formats."""
    return False
Accept rows that are missing trailing optional columns. The missing values are treated as nulls. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. Only applicable to CSV, ignored for other formats.
allow_jagged_rows
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def ignore_unknown_values(self):
    """Indicates if BigQuery should allow extra values that are not represented in the table schema.

    If true, the extra values are ignored. If false, records with extra columns are treated as
    bad records, and if there are too many bad records, an invalid error is returned in the job
    result. The default value is false.

    The sourceFormat property determines what BigQuery treats as an extra value:

    CSV: Trailing columns
    JSON: Named values that don't match any column names"""
    return False
Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names
ignore_unknown_values
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def allow_quoted_new_lines(self):
    """Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.

    The default value is false."""
    return False
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
allow_quoted_new_lines
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
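A minimal sketch of a CSV load task combining several of the knobs above (assuming luigi with the gcloud extras is installed; the bucket, ids and wildcard are placeholders, and source_uris is overridden as a method since the base implementation derives it from self.input()):

from luigi.contrib.bigquery import BigQueryLoadTask, BigQueryTarget, SourceFormat


class LoadCsv(BigQueryLoadTask):
    source_format = SourceFormat.CSV
    skip_leading_rows = 1   # skip the header row
    max_bad_records = 10    # tolerate a handful of bad rows

    def source_uris(self):
        return ['gs://my-bucket/exports/part-*.csv']

    def output(self):
        return BigQueryTarget('my-project', 'my_dataset', 'my_table')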
def configure_job(self, configuration):
    """Set additional job configuration.

    This allows specifying job configuration parameters that are not exposed via Task properties.

    :param configuration: Current configuration.
    :return: New or updated configuration.
    """
    return configuration
Set additional job configuration. This allows specifying job configuration parameters that are not exposed via Task properties. :param configuration: Current configuration. :return: New or updated configuration.
configure_job
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def write_disposition(self):
    """What to do if the table already exists. By default this will fail the job.

    See :py:class:`WriteDisposition`"""
    return WriteDisposition.WRITE_TRUNCATE
What to do if the table already exists. By default this will fail the job. See :py:class:`WriteDisposition`
write_disposition
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def create_disposition(self):
    """Whether to create the table or not. See :py:class:`CreateDisposition`"""
    return CreateDisposition.CREATE_IF_NEEDED
Whether to create the table or not. See :py:class:`CreateDisposition`
create_disposition
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def flatten_results(self):
    """Flattens all nested and repeated fields in the query results.

    allowLargeResults must be true if this is set to False."""
    return True
Flattens all nested and repeated fields in the query results. allowLargeResults must be true if this is set to False.
flatten_results
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def query(self):
    """The query, in text form."""
    raise NotImplementedError()
The query, in text form.
query
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def query_mode(self):
    """The query mode. See :py:class:`QueryMode`."""
    return QueryMode.INTERACTIVE
The query mode. See :py:class:`QueryMode`.
query_mode
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def udf_resource_uris(self):
    """Iterator of code resources to load from a Google Cloud Storage URI (gs://bucket/path).
    """
    return []
Iterator of code resources to load from a Google Cloud Storage URI (gs://bucket/path).
udf_resource_uris
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def use_legacy_sql(self):
    """Whether to use legacy SQL
    """
    return True
Whether to use legacy SQL
use_legacy_sql
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def configure_job(self, configuration):
    """Set additional job configuration.

    This allows specifying job configuration parameters that are not exposed via Task properties.

    :param configuration: Current configuration.
    :return: New or updated configuration.
    """
    return configuration
Set additional job configuration. This allows specifying job configuration parameters that are not exposed via Task properties. :param configuration: Current configuration. :return: New or updated configuration.
configure_job
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def view(self):
    """The SQL query for the view, in text form."""
    raise NotImplementedError()
The SQL query for the view, in text form.
view
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
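A minimal sketch of a query task built from the properties above, opting out of the legacy-SQL default (project/dataset/table ids and the query are placeholders):

from luigi.contrib.bigquery import BigQueryRunQueryTask, BigQueryTarget


class DailyAggregate(BigQueryRunQueryTask):
    use_legacy_sql = False  # standard SQL

    @property
    def query(self):
        return ('SELECT user_id, COUNT(*) AS n '
                'FROM `my-project.logs.events` GROUP BY user_id')

    def output(self):
        return BigQueryTarget('my-project', 'reports', 'daily_aggregate')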
def destination_uris(self):
    """
    The fully-qualified URIs that point to your data in Google Cloud Storage.
    Each URI can contain one '*' wildcard character and it must come after the 'bucket' name.

    Wildcarded destinationUris in GCSQueryTarget might not be resolved correctly and result
    in incomplete data. If a GCSQueryTarget is used to pass wildcarded destinationUris,
    be sure to override this property to suppress the warning.
    """
    return [x.path for x in luigi.task.flatten(self.output())]
The fully-qualified URIs that point to your data in Google Cloud Storage. Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Wildcarded destinationUris in GCSQueryTarget might not be resolved correctly and result in incomplete data. If a GCSQueryTarget is used to pass wildcarded destinationUris, be sure to override this property to suppress the warning.
destination_uris
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def print_header(self):
    """Whether to print the header or not."""
    return PrintHeader.TRUE
Whether to print the header or not.
print_header
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def field_delimiter(self):
    """
    The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character.
    """
    return FieldDelimiter.COMMA
The separator for fields in a CSV file. The separator can be any ISO-8859-1 single-byte character.
field_delimiter
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def destination_format(self):
    """
    The destination format to use (see :py:class:`DestinationFormat`).
    """
    return DestinationFormat.CSV
The destination format to use (see :py:class:`DestinationFormat`).
destination_format
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def compression(self):
    """Whether to use compression."""
    return Compression.NONE
Whether to use compression.
compression
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def configure_job(self, configuration):
    """Set additional job configuration.

    This allows specifying job configuration parameters that are not exposed via Task properties.

    :param configuration: Current configuration.
    :return: New or updated configuration.
    """
    return configuration
Set additional job configuration. This allows specifying job configuration parameters that are not exposed via Task properties. :param configuration: Current configuration. :return: New or updated configuration.
configure_job
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
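A minimal sketch of an extract task combining the export knobs above into a gzipped CSV (assuming luigi with the gcloud extras; requires() must point at a task whose output is the BigQuery table to export and output() at the GCS destination, both omitted here as placeholders):

from luigi.contrib.bigquery import (BigQueryExtractTask, Compression,
                                    DestinationFormat, PrintHeader)


class ExportReport(BigQueryExtractTask):
    compression = Compression.GZIP
    destination_format = DestinationFormat.CSV
    print_header = PrintHeader.TRUE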
def __init__(self, job_id, error_message) -> None:
    """
    :param job_id: BigQuery Job ID
    :type job_id: str
    :param error_message: status['status']['errorResult'] for the failed job
    :type error_message: str
    """
    super().__init__('BigQuery job {} failed: {}'.format(job_id, error_message))
    self.error_message = error_message
    self.job_id = job_id
:param job_id: BigQuery Job ID :type job_id: str :param error_message: status['status']['errorResult'] for the failed job :type error_message: str
__init__
python
spotify/luigi
luigi/contrib/bigquery.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/bigquery.py
Apache-2.0
def _wait_for_consistency(checker):
    """Eventual consistency: wait until GCS reports something is true.

    This is necessary for e.g. create/delete where the operation might return,
    but won't be reflected for a bit.
    """
    for _ in range(EVENTUAL_CONSISTENCY_MAX_SLEEPS):
        if checker():
            return

        time.sleep(EVENTUAL_CONSISTENCY_SLEEP_INTERVAL)

    logger.warning('Exceeded wait for eventual GCS consistency - this may be a '
                   'bug in the library or something is terribly wrong.')
Eventual consistency: wait until GCS reports something is true. This is necessary for e.g. create/delete where the operation might return, but won't be reflected for a bit.
_wait_for_consistency
python
spotify/luigi
luigi/contrib/gcs.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/gcs.py
Apache-2.0
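The same bounded-polling pattern as a self-contained helper with explicit parameters (the real function reads the module-level EVENTUAL_CONSISTENCY_* constants):

import time


def wait_until(checker, max_sleeps=10, interval=0.5):
    """Poll checker() up to max_sleeps times; True once it passes."""
    for _ in range(max_sleeps):
        if checker():
            return True
        time.sleep(interval)
    return False


print(wait_until(lambda: True))  # True, returns without sleeping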
def rename(self, *args, **kwargs):
    """
    Alias for ``move()``
    """
    self.move(*args, **kwargs)
Alias for ``move()``
rename
python
spotify/luigi
luigi/contrib/gcs.py
https://github.com/spotify/luigi/blob/master/luigi/contrib/gcs.py
Apache-2.0