Creates a KustoConnection string builder that will authenticate with AAD application and key.
:param str connection_string: Kusto connection string should be of the format: https://<clusterName>.kusto.windows.net
:param str aad_app_id: AAD application ID.
:param str app_key: Corresponding key of the AAD application.
:param str authority_id: Authority id (aka tenant id); must be provided.
|
@classmethod
def with_aad_application_key_authentication(cls, connection_string, aad_app_id, app_key, authority_id):
    """Creates a KustoConnection string builder that will authenticate with AAD application and key.
    :param str connection_string: Kusto connection string should be of the format: https://<clusterName>.kusto.windows.net
    :param str aad_app_id: AAD application ID.
    :param str app_key: Corresponding key of the AAD application.
    :param str authority_id: Authority id (aka tenant id); must be provided.
    """
    _assert_value_is_valid(aad_app_id)
    _assert_value_is_valid(app_key)
    _assert_value_is_valid(authority_id)
    kcsb = cls(connection_string)
    kcsb[kcsb.ValidKeywords.aad_federated_security] = True
    kcsb[kcsb.ValidKeywords.application_client_id] = aad_app_id
    kcsb[kcsb.ValidKeywords.application_key] = app_key
    kcsb[kcsb.ValidKeywords.authority_id] = authority_id
    return kcsb
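The builder above fills a small bag of keyword/value settings. A minimal, hypothetical sketch of that shape using a plain dict (the real class keys on a `ValidKeywords` enum, and `_assert_value_is_valid` presumably rejects empty values — both are assumptions here):

```python
def aad_application_key_settings(aad_app_id, app_key, authority_id):
    # Mirrors the validation-then-assignment flow of
    # with_aad_application_key_authentication(), with a dict standing in
    # for the real connection string builder.
    for value in (aad_app_id, app_key, authority_id):
        if not value or not value.strip():  # assumed _assert_value_is_valid behavior
            raise ValueError("value cannot be empty")
    return {
        "aad_federated_security": True,
        "application_client_id": aad_app_id,
        "application_key": app_key,
        "authority_id": authority_id,
    }

settings = aad_application_key_settings("my-app-id", "my-app-key", "my-tenant-id")
```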
|
Creates a KustoConnection string builder that will authenticate with AAD application and
certificate credentials.
:param str connection_string: Kusto connection string should be of the format:
https://<clusterName>.kusto.windows.net
:param str aad_app_id: AAD application ID.
:param str certificate: A PEM-encoded certificate private key.
:param str thumbprint: Hex-encoded thumbprint of the certificate.
:param str authority_id: Authority id (aka tenant id); must be provided.
|
@classmethod
def with_aad_application_certificate_authentication(
    cls, connection_string, aad_app_id, certificate, thumbprint, authority_id
):
    """Creates a KustoConnection string builder that will authenticate with AAD application and
    certificate credentials.
    :param str connection_string: Kusto connection string should be of the format:
    https://<clusterName>.kusto.windows.net
    :param str aad_app_id: AAD application ID.
    :param str certificate: A PEM-encoded certificate private key.
    :param str thumbprint: Hex-encoded thumbprint of the certificate.
    :param str authority_id: Authority id (aka tenant id); must be provided.
    """
    _assert_value_is_valid(aad_app_id)
    _assert_value_is_valid(certificate)
    _assert_value_is_valid(thumbprint)
    _assert_value_is_valid(authority_id)
    kcsb = cls(connection_string)
    kcsb[kcsb.ValidKeywords.aad_federated_security] = True
    kcsb[kcsb.ValidKeywords.application_client_id] = aad_app_id
    kcsb[kcsb.ValidKeywords.application_certificate] = certificate
    kcsb[kcsb.ValidKeywords.application_certificate_thumbprint] = thumbprint
    kcsb[kcsb.ValidKeywords.authority_id] = authority_id
    return kcsb
|
Creates a KustoConnection string builder that will authenticate using AAD device
authentication.
:param str connection_string: Kusto connection string should be of the format: https://<clusterName>.kusto.windows.net
:param str authority_id: Optional; defaults to "common".
|
@classmethod
def with_aad_device_authentication(cls, connection_string, authority_id="common"):
    """Creates a KustoConnection string builder that will authenticate using AAD device
    authentication.
    :param str connection_string: Kusto connection string should be of the format: https://<clusterName>.kusto.windows.net
    :param str authority_id: Optional; defaults to "common".
    """
    kcsb = cls(connection_string)
    kcsb[kcsb.ValidKeywords.aad_federated_security] = True
    kcsb[kcsb.ValidKeywords.authority_id] = authority_id
    return kcsb
|
Executes a query or management command.
:param str database: Database against which the query will be executed.
:param str query: Query to be executed.
:param azure.kusto.data.request.ClientRequestProperties properties: Optional additional properties.
:return: Kusto response data set.
:rtype: azure.kusto.data._response.KustoResponseDataSet
|
def execute(self, database, query, properties=None):
    """Executes a query or management command.
    :param str database: Database against which the query will be executed.
    :param str query: Query to be executed.
    :param azure.kusto.data.request.ClientRequestProperties properties: Optional additional properties.
    :return: Kusto response data set.
    :rtype: azure.kusto.data._response.KustoResponseDataSet
    """
    if query.startswith("."):
        return self.execute_mgmt(database, query, properties)
    return self.execute_query(database, query, properties)
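The dispatch rule in `execute()` is plain string inspection — Kusto management commands start with a dot. A standalone sketch of just that rule (no client required):

```python
def is_mgmt_command(query):
    # Management commands in Kusto start with a dot, e.g. ".show tables";
    # everything else is routed to the query endpoint.
    return query.startswith(".")

print(is_mgmt_command(".show tables"))           # True
print(is_mgmt_command("StormEvents | take 10"))  # False
```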
|
Executes a query.
:param str database: Database against which the query will be executed.
:param str query: Query to be executed.
:param azure.kusto.data.request.ClientRequestProperties properties: Optional additional properties.
:return: Kusto response data set.
:rtype: azure.kusto.data._response.KustoResponseDataSet
|
def execute_query(self, database, query, properties=None):
    """Executes a query.
    :param str database: Database against which the query will be executed.
    :param str query: Query to be executed.
    :param azure.kusto.data.request.ClientRequestProperties properties: Optional additional properties.
    :return: Kusto response data set.
    :rtype: azure.kusto.data._response.KustoResponseDataSet
    """
    return self._execute(self._query_endpoint, database, query, KustoClient._query_default_timeout, properties)
|
Executes a management command.
:param str database: Database against which the command will be executed.
:param str query: Command to be executed.
:param azure.kusto.data.request.ClientRequestProperties properties: Optional additional properties.
:return: Kusto response data set.
:rtype: azure.kusto.data._response.KustoResponseDataSet
|
def execute_mgmt(self, database, query, properties=None):
    """Executes a management command.
    :param str database: Database against which the command will be executed.
    :param str query: Command to be executed.
    :param azure.kusto.data.request.ClientRequestProperties properties: Optional additional properties.
    :return: Kusto response data set.
    :rtype: azure.kusto.data._response.KustoResponseDataSet
    """
    return self._execute(self._mgmt_endpoint, database, query, KustoClient._mgmt_default_timeout, properties)
|
Executes the given query against this client.
|
def _execute(self, endpoint, database, query, default_timeout, properties=None):
    """Executes the given query against this client."""
    request_payload = {"db": database, "csl": query}
    if properties:
        request_payload["properties"] = properties.to_json()
    request_headers = {
        "Accept": "application/json",
        "Accept-Encoding": "gzip,deflate",
        "Content-Type": "application/json; charset=utf-8",
        "x-ms-client-version": "Kusto.Python.Client:" + VERSION,
        "x-ms-client-request-id": "KPC.execute;" + str(uuid.uuid4()),
    }
    if self._auth_provider:
        request_headers["Authorization"] = self._auth_provider.acquire_authorization_header()
    timeout = self._get_timeout(properties, default_timeout)
    response = self._session.post(endpoint, headers=request_headers, json=request_payload, timeout=timeout.seconds)
    if response.status_code == 200:
        if endpoint.endswith("v2/rest/query"):
            return KustoResponseDataSetV2(response.json())
        return KustoResponseDataSetV1(response.json())
    raise KustoServiceError([response.json()], response)
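The header construction in `_execute()` can be isolated into a small sketch: a stable client-version string plus a fresh request id per call, which is what makes individual requests traceable on the service side. `VERSION` here is a placeholder for the real package version:

```python
import uuid

VERSION = "1.0.0"  # placeholder; the real module defines its package VERSION


def build_headers(version):
    # Mirrors the header dict built in _execute(): fixed content negotiation
    # headers, a client-version tag, and a unique per-call request id.
    return {
        "Accept": "application/json",
        "Accept-Encoding": "gzip,deflate",
        "Content-Type": "application/json; charset=utf-8",
        "x-ms-client-version": "Kusto.Python.Client:" + version,
        "x-ms-client-request-id": "KPC.execute;" + str(uuid.uuid4()),
    }

headers = build_headers(VERSION)
```

Because the request id embeds a fresh `uuid4()`, two calls never produce the same value — handy when correlating client logs with service-side traces.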
|
Executes streaming ingest against this client.
:param str database: Target database.
:param str table: Target table.
:param io.IOBase stream: Stream object which contains the data to ingest.
:param DataFormat stream_format: Format of the data in the stream.
:param str mapping_name: Pre-defined mapping of the table. Required when stream_format is json/avro.
Other optional params: optional request headers as documented at:
https://kusto.azurewebsites.net/docs/api/rest/streaming-ingest.html
|
def execute_streaming_ingest(
    self,
    database,
    table,
    stream,
    stream_format,
    mapping_name=None,
    accept=None,
    accept_encoding="gzip,deflate",
    connection="Keep-Alive",
    content_length=None,
    content_encoding=None,
    expect=None,
):
    """Executes streaming ingest against this client.
    :param str database: Target database.
    :param str table: Target table.
    :param io.IOBase stream: Stream object which contains the data to ingest.
    :param DataFormat stream_format: Format of the data in the stream.
    :param str mapping_name: Pre-defined mapping of the table. Required when stream_format is json/avro.
    Other optional params: optional request headers as documented at:
    https://kusto.azurewebsites.net/docs/api/rest/streaming-ingest.html
    """
    request_params = {"streamFormat": stream_format}
    if stream_format in self._mapping_required_formats and mapping_name is not None:
        request_params["mappingName"] = mapping_name
    request_headers = {
        "Accept-Encoding": accept_encoding,
        "Connection": connection,
        "x-ms-client-version": "Kusto.Python.StreamingClient:" + VERSION,
        "x-ms-client-request-id": "KPSC.execute;" + str(uuid.uuid4()),
        "Host": self._streaming_ingest_endpoint.split("/")[2],
    }
    if accept is not None:
        request_headers["Accept"] = accept
    if content_encoding is not None:
        request_headers["Content-Encoding"] = content_encoding
    if expect is not None:
        request_headers["Expect"] = expect
    if content_length is not None:
        request_headers["Content-Length"] = str(content_length)
    if self._auth_provider:
        request_headers["Authorization"] = self._auth_provider.acquire_authorization_header()
    response = self._session.post(
        self._streaming_ingest_endpoint + database + "/" + table,
        params=request_params,
        headers=request_headers,
        data=stream,
        timeout=KustoClient._query_default_timeout,
    )
    if response.status_code == 200:
        return KustoResponseDataSetV1(response.json())
    raise KustoServiceError([response.content], response)
|
Sets an option's value
|
def set_option(self, name, value):
    """Sets an option's value"""
    _assert_value_is_valid(name)
    self._options[name] = value
|
Parses uri into a ResourceUri object
|
@classmethod
def parse(cls, uri):
    """Parses uri into a ResourceUri object"""
    match = _URI_FORMAT.search(uri)
    return cls(match.group(1), match.group(2), match.group(3), match.group(4))
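The `_URI_FORMAT` regex itself is not shown in this excerpt; a hypothetical stand-in below illustrates the four capture groups that `parse()` consumes. Note that `parse()` assumes the match succeeds — a non-matching URI would make `match` be `None` and raise `AttributeError`:

```python
import re

# Hypothetical stand-in for _URI_FORMAT (the real pattern is not shown in this
# excerpt): capture the storage account, service kind, endpoint suffix, and
# object name of an Azure storage resource URI.
_URI_FORMAT = re.compile(r"https://(\w+)\.(blob|queue|table)\.([\w.]+)/([\w,-]+)\?(.*)")

m = _URI_FORMAT.search("https://account.blob.core.windows.net/container?sas-token")
```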
|
Peek status queue.
:param int n: Number of messages to return as part of peek.
:param bool raw: Should message content be returned as is (no parsing).
|
def peek(self, n=1, raw=False):
    """Peek status queue.
    :param int n: Number of messages to return as part of peek.
    :param bool raw: Should message content be returned as is (no parsing).
    """

    def _peek_specific_q(_q, _n):
        has_messages = False
        for m in _q.service.peek_messages(_q.name, num_messages=_n):
            if m is not None:
                has_messages = True
                result.append(m if raw else self._deserialize_message(m))
                # short circuit to prevent unneeded work
                if len(result) == n:
                    return True
        return has_messages

    q_services = self._get_q_services()
    random.shuffle(q_services)
    per_q = int(n / len(q_services)) + 1
    result = []
    non_empty_qs = []
    for q in q_services:
        if _peek_specific_q(q, per_q):
            non_empty_qs.append(q)
        if len(result) == n:
            return result
    # in case queues aren't balanced and we didn't get enough messages,
    # iterate again and this time take all that we can
    for q in non_empty_qs:
        _peek_specific_q(q, n)
        if len(result) == n:
            return result
    # because we ask for n / len(qs) + 1, we might get more messages than requested
    return result
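The fan-out arithmetic used by both `peek()` and `pop()` is worth isolating: each queue is asked for `n // num_queues + 1` messages, so the combined result can slightly exceed `n` and relies on the early-return checks to trim. A minimal sketch of just that share computation:

```python
def per_queue_share(n, num_queues):
    # Each of the shuffled queues is asked for roughly its share of the n
    # requested messages, rounded up by one so that uneven queues still
    # yield enough in the common case.
    return n // num_queues + 1

print(per_queue_share(10, 3))  # 4
```

Asking three queues for 4 messages each can return up to 12 for a request of 10, which is why the loops in `peek()`/`pop()` return as soon as `len(result) == n`.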
|
Pop status queue.
:param int n: Number of messages to return as part of pop.
:param bool raw: Should message content be returned as is (no parsing).
:param bool delete: Should message be deleted after pop. Default is True, as expected of a queue.
|
def pop(self, n=1, raw=False, delete=True):
    """Pop status queue.
    :param int n: Number of messages to return as part of pop.
    :param bool raw: Should message content be returned as is (no parsing).
    :param bool delete: Should message be deleted after pop. Default is True, as expected of a queue.
    """

    def _pop_specific_q(_q, _n):
        has_messages = False
        for m in _q.service.get_messages(_q.name, num_messages=_n):
            if m is not None:
                has_messages = True
                result.append(m if raw else self._deserialize_message(m))
                if delete:
                    _q.service.delete_message(_q.name, m.id, m.pop_receipt)
                # short circuit to prevent unneeded work
                if len(result) == n:
                    return True
        return has_messages

    q_services = self._get_q_services()
    random.shuffle(q_services)
    per_q = int(n / len(q_services)) + 1
    result = []
    non_empty_qs = []
    for q in q_services:
        if _pop_specific_q(q, per_q):
            non_empty_qs.append(q)
        if len(result) == n:
            return result
    # in case queues aren't balanced and we didn't get enough messages,
    # iterate again and this time take all that we can
    for q in non_empty_qs:
        _pop_specific_q(q, n)
        if len(result) == n:
            return result
    # because we ask for n / len(qs) + 1, we might get more messages than requested
    return result
|
Ingest from pandas DataFrame.
:param pandas.DataFrame df: input dataframe to ingest.
:param azure.kusto.ingest.IngestionProperties ingestion_properties: Ingestion properties.
|
def ingest_from_dataframe(self, df, ingestion_properties):
    """Ingest from pandas DataFrame.
    :param pandas.DataFrame df: input dataframe to ingest.
    :param azure.kusto.ingest.IngestionProperties ingestion_properties: Ingestion properties.
    """
    from pandas import DataFrame

    if not isinstance(df, DataFrame):
        raise ValueError("Expected DataFrame instance, found {}".format(type(df)))
    file_name = "df_{timestamp}_{pid}.csv.gz".format(timestamp=int(time.time()), pid=os.getpid())
    temp_file_path = os.path.join(tempfile.gettempdir(), file_name)
    df.to_csv(temp_file_path, index=False, encoding="utf-8", header=False, compression="gzip")
    fd = FileDescriptor(temp_file_path)
    ingestion_properties.format = DataFormat.csv
    self._ingest(fd.zipped_stream, fd.size, ingestion_properties, content_encoding="gzip")
    fd.delete_files()
    os.unlink(temp_file_path)
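The temp-file step of `ingest_from_dataframe()` — serialize rows headerless into a gzip-compressed CSV named by timestamp and pid — can be sketched with only the standard library (here plain tuples stand in for the pandas `DataFrame.to_csv` call):

```python
import gzip
import os
import tempfile
import time


def rows_to_gzip_csv(rows, tmpdir=None):
    # Mirrors the staging step in ingest_from_dataframe(): headerless CSV,
    # UTF-8, gzip-compressed, named df_<timestamp>_<pid>.csv.gz.
    name = "df_{ts}_{pid}.csv.gz".format(ts=int(time.time()), pid=os.getpid())
    path = os.path.join(tmpdir or tempfile.gettempdir(), name)
    with gzip.open(path, "wt", encoding="utf-8", newline="") as f:
        for row in rows:
            f.write(",".join(str(v) for v in row) + "\n")
    return path

p = rows_to_gzip_csv([(1, "a"), (2, "b")])
```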
|
Ingest from local files.
:param file_descriptor: a FileDescriptor to be ingested.
:param azure.kusto.ingest.IngestionProperties ingestion_properties: Ingestion properties.
|
def ingest_from_file(self, file_descriptor, ingestion_properties):
    """Ingest from local files.
    :param file_descriptor: a FileDescriptor to be ingested.
    :param azure.kusto.ingest.IngestionProperties ingestion_properties: Ingestion properties.
    """
    if isinstance(file_descriptor, FileDescriptor):
        descriptor = file_descriptor
    else:
        descriptor = FileDescriptor(file_descriptor)
    self._ingest(descriptor.zipped_stream, descriptor.size, ingestion_properties, content_encoding="gzip")
    descriptor.delete_files()
|
Ingest from io streams.
:param azure.kusto.ingest.StreamDescriptor stream_descriptor: An object that contains a description of the stream to
be ingested.
:param azure.kusto.ingest.IngestionProperties ingestion_properties: Ingestion properties.
|
def ingest_from_stream(self, stream_descriptor, ingestion_properties):
    """Ingest from io streams.
    :param azure.kusto.ingest.StreamDescriptor stream_descriptor: An object that contains a description of the stream to
    be ingested.
    :param azure.kusto.ingest.IngestionProperties ingestion_properties: Ingestion properties.
    """
    if not isinstance(stream_descriptor, StreamDescriptor):
        stream_descriptor = StreamDescriptor(stream_descriptor)
    if isinstance(stream_descriptor.stream, TextIOWrapper):
        stream = stream_descriptor.stream.buffer
    else:
        stream = stream_descriptor.stream
    self._ingest(stream, stream_descriptor.size, ingestion_properties)
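The unwrap step above — replacing a text-mode wrapper with its underlying binary buffer before ingestion — can be demonstrated standalone:

```python
import io


def raw_stream(stream):
    # Mirrors ingest_from_stream(): a TextIOWrapper is replaced by its
    # underlying binary buffer; binary streams pass through unchanged.
    if isinstance(stream, io.TextIOWrapper):
        return stream.buffer
    return stream

buf = io.BytesIO(b"x,y\n1,2\n")
wrapped = io.TextIOWrapper(buf, encoding="utf-8")
```

Ingestion ultimately sends raw bytes over the wire, so text-mode streams must be unwrapped rather than re-encoded.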
|
Returns the mapping format corresponding to the data format.
|
def get_mapping_format(self):
    """Returns the mapping format corresponding to the data format."""
    if self.format == DataFormat.json or self.format == DataFormat.avro:
        return self.format.name
    else:
        return DataFormat.csv.name
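The rule above — json and avro keep their own mapping kind, every other format maps as csv — can be shown with a minimal stand-in enum (the real `DataFormat` has more members; this is an assumption-laden sketch):

```python
from enum import Enum


class DataFormat(Enum):
    # Minimal stand-in for the real DataFormat enum.
    csv = "csv"
    tsv = "tsv"
    json = "json"
    avro = "avro"


def mapping_format(fmt):
    # json and avro use their own mapping kind; everything else falls back to csv.
    return fmt.name if fmt in (DataFormat.json, DataFormat.avro) else DataFormat.csv.name

print(mapping_format(DataFormat.tsv))  # csv
```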
|
If t is an atom, return it as a string, otherwise raise InvalidTypeError.
|
def getAtomChars(t):
    """If t is an atom, return it as a string, otherwise raise InvalidTypeError."""
    s = c_char_p()
    if PL_get_atom_chars(t, byref(s)):
        return s.value
    else:
        raise InvalidTypeError("atom")
|
If t is of type bool, return it, otherwise raise InvalidTypeError.
|
def getBool(t):
    """If t is of type bool, return it, otherwise raise InvalidTypeError."""
    b = c_int()
    if PL_get_long(t, byref(b)):
        return bool(b.value)
    else:
        raise InvalidTypeError("bool")
|
If t is of type long, return it, otherwise raise InvalidTypeError.
|
def getLong(t):
    """If t is of type long, return it, otherwise raise InvalidTypeError."""
    i = c_long()
    if PL_get_long(t, byref(i)):
        return i.value
    else:
        raise InvalidTypeError("long")
|
If t is of type float, return it, otherwise raise InvalidTypeError.
|
def getFloat(t):
    """If t is of type float, return it, otherwise raise InvalidTypeError."""
    d = c_double()
    if PL_get_float(t, byref(d)):
        return d.value
    else:
        raise InvalidTypeError("float")
|
If t is of type string, return it, otherwise raise InvalidTypeError.
|
def getString(t):
    """If t is of type string, return it, otherwise raise InvalidTypeError."""
    slen = c_int()
    s = c_char_p()
    if PL_get_string_chars(t, byref(s), byref(slen)):
        return s.value
    else:
        raise InvalidTypeError("string")
|
Return x as a list.
|
def getList(x):
    """Return x as a list."""
    t = PL_copy_term_ref(x)
    head = PL_new_term_ref()
    result = []
    while PL_get_list(t, head, t):
        result.append(getTerm(head))
        head = PL_new_term_ref()
    return result
|
Register a Python predicate.
``func``: Function to be registered. The function should return a value in
``foreign_t``, ``True`` or ``False``.
``name``: Name of the function. If not given, ``func.__name__`` must exist.
``arity``: Arity (number of arguments) of the function. If not given,
``func.arity`` must exist.
|
def registerForeign(func, name=None, arity=None, flags=0):
    """Register a Python predicate.
    ``func``: Function to be registered. The function should return a value in
    ``foreign_t``, ``True`` or ``False``.
    ``name``: Name of the function. If not given, ``func.__name__`` must exist.
    ``arity``: Arity (number of arguments) of the function. If not given,
    ``func.arity`` must exist.
    """
    global cwraps
    if arity is None:
        arity = func.arity
    if name is None:
        name = func.__name__
    cwrap = _callbackWrapper(arity)
    fwrap = _foreignWrapper(func)
    fwrap2 = cwrap(fwrap)
    cwraps.append(fwrap2)
    return PL_register_foreign(name, arity, fwrap2, flags)
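The name/arity fallback logic in `registerForeign()` can be exercised without SWI-Prolog installed — explicit arguments win, otherwise attributes attached to the function are consulted:

```python
def resolve_registration(func, name=None, arity=None):
    # Mirrors the fallback order in registerForeign(): an explicit name or
    # arity takes precedence; otherwise func.__name__ / func.arity are used.
    if arity is None:
        arity = func.arity
    if name is None:
        name = func.__name__
    return name, arity


def hello(t):
    return True

hello.arity = 1  # the attribute registerForeign() falls back on

print(resolve_registration(hello))  # ('hello', 1)
```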
|
Call term in module.
``term``: a Term or term handle
|
def call(*terms, **kwargs):
    """Call term in module.
    ``term``: a Term or term handle
    """
    for kwarg in kwargs:
        if kwarg not in ["module"]:
            raise KeyError("unexpected keyword argument: " + kwarg)
    module = kwargs.get("module", None)
    t = terms[0]
    for tx in terms[1:]:
        t = _comma(t, tx)
    return PL_call(t.handle, module)
|
Create a new module.
``name``: An Atom or a string
|
def newModule(name):
    """Create a new module.
    ``name``: An Atom or a string
    """
    if isinstance(name, str):
        name = Atom(name)
    return PL_new_module(name.handle)
|
Create an atom from a Term or term handle.
|
@classmethod
def fromTerm(cls, term):
    """Create an atom from a Term or term handle."""
    if isinstance(term, Term):
        term = term.handle
    elif not isinstance(term, (c_void_p, int)):
        raise ArgumentTypeError((str(Term), str(c_void_p)), str(type(term)))
    a = atom_t()
    if PL_get_atom(term, byref(a)):
        return cls(a.value)
|
Create a functor from a Term or term handle.
|
@classmethod
def fromTerm(cls, term):
    """Create a functor from a Term or term handle."""
    if isinstance(term, Term):
        term = term.handle
    elif not isinstance(term, (c_void_p, int)):
        raise ArgumentTypeError((str(Term), str(int)), str(type(term)))
    f = functor_t()
    if PL_get_functor(term, byref(f)):
        # get args
        args = []
        arity = PL_functor_arity(f.value)
        # let's have all args be consecutive
        a0 = PL_new_term_refs(arity)
        for i, a in enumerate(range(1, arity + 1)):
            if PL_get_arg(a, term, a0 + i):
                args.append(getTerm(a0 + i))
        return cls(f.value, args=args, a0=a0)
|
This function tries to use an executable on the path to find SWI-Prolog
SO/DLL and the resource file.
:returns:
A tuple of (path to the swipl DLL, path to the resource file)
:returns type:
({str, None}, {str, None})
|
def _findSwiplFromExec():
    """
    This function tries to use an executable on the path to find SWI-Prolog
    SO/DLL and the resource file.
    :returns:
        A tuple of (path to the swipl DLL, path to the resource file)
    :returns type:
        ({str, None}, {str, None})
    """
    platform = sys.platform[:3]
    fullName = None
    swiHome = None
    try:  # try to get the library path from the swipl executable.
        # We may have pl or swipl as the executable
        try:
            cmd = Popen(['swipl', '--dump-runtime-variables'], stdout=PIPE)
        except OSError:
            cmd = Popen(['pl', '--dump-runtime-variables'], stdout=PIPE)
        ret = cmd.communicate()
        # Parse the output into a dictionary
        ret = ret[0].decode().replace(';', '').splitlines()
        ret = [line.split('=', 1) for line in ret]
        rtvars = dict((name, value[1:-1]) for name, value in ret)  # [1:-1] strips the quotes
        if rtvars['PLSHARED'] == 'no':
            raise ImportError('SWI-Prolog is not installed as a shared '
                              'library.')
        else:  # PLSHARED == 'yes'
            swiHome = rtvars['PLBASE']  # The environment is in PLBASE
            if not os.path.exists(swiHome):
                swiHome = None
            # determine platform specific path
            if platform == "win":
                dllName = rtvars['PLLIB'][:-4] + '.' + rtvars['PLSOEXT']
                path = os.path.join(rtvars['PLBASE'], 'bin')
                fullName = os.path.join(path, dllName)
                if not os.path.exists(fullName):
                    fullName = None
            elif platform == "cyg":
                # e.g. /usr/lib/pl-5.6.36/bin/i686-cygwin/cygpl.dll
                dllName = 'cygpl.dll'
                path = os.path.join(rtvars['PLBASE'], 'bin', rtvars['PLARCH'])
                fullName = os.path.join(path, dllName)
                if not os.path.exists(fullName):
                    fullName = None
            elif platform == "dar":
                dllName = 'lib' + rtvars['PLLIB'][2:] + '.' + rtvars['PLSOEXT']
                path = os.path.join(rtvars['PLBASE'], 'lib', rtvars['PLARCH'])
                baseName = os.path.join(path, dllName)
                if os.path.exists(baseName):
                    fullName = baseName
                else:  # We will search for versions
                    fullName = None
            else:  # assume UNIX-like
                # The SO name on some Linux systems is of the form
                # libswipl.so.5.10.2, so we have to use glob to find the
                # correct one
                dllName = 'lib' + rtvars['PLLIB'][2:] + '.' + rtvars['PLSOEXT']
                path = os.path.join(rtvars['PLBASE'], 'lib', rtvars['PLARCH'])
                baseName = os.path.join(path, dllName)
                if os.path.exists(baseName):
                    fullName = baseName
                else:  # We will search for versions
                    pattern = baseName + '.*'
                    files = glob.glob(pattern)
                    if len(files) == 0:
                        fullName = None
                    elif len(files) == 1:
                        fullName = files[0]
                    else:  # Will this ever happen?
                        fullName = None
    except (OSError, KeyError):  # KeyError from accessing rtvars
        pass
    return (fullName, swiHome)
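The parsing step in `_findSwiplFromExec()` — turning `swipl --dump-runtime-variables` output into a dict — can be tested in isolation. Each line looks like `PLBASE="/usr/lib/swipl";`: strip the semicolons, split on the first `=`, and drop the surrounding quotes:

```python
def parse_runtime_variables(output):
    # Same parsing as in _findSwiplFromExec(), factored out; lines without
    # '=' (e.g. a trailing blank line) are skipped for robustness.
    lines = output.replace(';', '').splitlines()
    pairs = [line.split('=', 1) for line in lines if '=' in line]
    return dict((name, value[1:-1]) for name, value in pairs)

sample = 'PLBASE="/usr/lib/swipl";\nPLSHARED="yes";\n'
print(parse_runtime_variables(sample))
```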
|
This function uses several heuristics to guess where SWI-Prolog is installed
on Windows. It always returns None as the path of the resource file because,
on Windows, the way to find it is more robust, so the SWI-Prolog DLL is
always able to find it.
:returns:
    A tuple of (path to the swipl DLL, path to the resource file)
:returns type:
    ({str, None}, {str, None})
|
def _findSwiplWin():
    """
    This function uses several heuristics to guess where SWI-Prolog is installed
    on Windows. It always returns None as the path of the resource file because,
    on Windows, the way to find it is more robust, so the SWI-Prolog DLL is
    always able to find it.
    :returns:
        A tuple of (path to the swipl DLL, path to the resource file)
    :returns type:
        ({str, None}, {str, None})
    """
    import re

    dllNames = ('swipl.dll', 'libswipl.dll')
    # First try: check the usual installation path (this is faster but
    # hardcoded)
    programFiles = os.getenv('ProgramFiles')
    paths = [os.path.join(programFiles, r'pl\bin', dllName)
             for dllName in dllNames]
    for path in paths:
        if os.path.exists(path):
            return (path, None)
    # Second try: use find_library
    path = _findSwiplPathFromFindLib()
    if path is not None and os.path.exists(path):
        return (path, None)
    # Third try: use reg.exe to find the installation path in the registry
    # (reg should be installed in all Windows XPs)
    try:
        cmd = Popen(['reg', 'query',
                     r'HKEY_LOCAL_MACHINE\Software\SWI\Prolog',
                     '/v', 'home'], stdout=PIPE)
        ret = cmd.communicate()
        # Result is like:
        # ! REG.EXE VERSION 3.0
        #
        # HKEY_LOCAL_MACHINE\Software\SWI\Prolog
        #     home    REG_SZ    C:\Program Files\pl
        # (Note: spaces may be \t or spaces in the output)
        ret = ret[0].splitlines()
        ret = [line.decode("utf-8") for line in ret if len(line) > 0]
        pattern = re.compile('[^h]*home[^R]*REG_SZ( |\t)*(.*)$')
        match = pattern.match(ret[-1])
        if match is not None:
            path = match.group(2)
            paths = [os.path.join(path, 'bin', dllName)
                     for dllName in dllNames]
            for path in paths:
                if os.path.exists(path):
                    return (path, None)
    except OSError:
        # reg.exe not found? Weird...
        pass
    # Maybe the exec is on the path?
    (path, swiHome) = _findSwiplFromExec()
    if path is not None:
        return (path, swiHome)
    # Last try: maybe it is in the current dir
    for dllName in dllNames:
        if os.path.exists(dllName):
            return (dllName, None)
    return (None, None)
|
This function uses several heuristics to guess where SWI-Prolog is
installed on Linux systems.
:returns:
    A tuple of (path to the swipl so, path to the resource file)
:returns type:
    ({str, None}, {str, None})
|
def _findSwiplLin():
    """
    This function uses several heuristics to guess where SWI-Prolog is
    installed on Linux systems.
    :returns:
        A tuple of (path to the swipl so, path to the resource file)
    :returns type:
        ({str, None}, {str, None})
    """
    # Maybe the exec is on the path?
    (path, swiHome) = _findSwiplFromExec()
    if path is not None:
        return (path, swiHome)
    # If it is not, use find_library
    path = _findSwiplPathFromFindLib()
    if path is not None:
        return (path, swiHome)
    # Our last try: some hardcoded paths.
    paths = ['/lib', '/usr/lib', '/usr/local/lib', '.', './lib']
    names = ['libswipl.so', 'libpl.so']
    path = None
    for name in names:
        for try_ in paths:
            try_ = os.path.join(try_, name)
            if os.path.exists(try_):
                path = try_
                break
        if path is not None:
            return (path, swiHome)
    return (None, None)
|
This function recursively searches the given directory tree for a file.
:parameters:
  -  `path` (str) - Directory path
  -  `name` (str) - Name of the file we are looking for
:returns:
    A tuple of (path to the file, directory that contains it), or None
:returns type:
    ({tuple, None})
|
def walk(path, name):
    """
    This function recursively searches the given directory tree for a file.
    :parameters:
      -  `path` (str) - Directory path
      -  `name` (str) - Name of the file we are looking for
    :returns:
        A tuple of (path to the file, directory that contains it), or None
    """
    candidate = os.path.join(path, name)
    if os.path.exists(candidate):
        return (candidate, path)
    for dir_ in os.listdir(path):
        sub_path = os.path.join(path, dir_)
        if os.path.isdir(sub_path):
            result = walk(sub_path, name)
            if result is not None:
                return result
    return None
|
This function guesses where SWI-Prolog is installed on MacOS via the .app
bundle (the SWI-Prolog version is determined internally via get_swi_ver()).
:returns:
    A tuple of (path to the swipl so, path to the resource file)
:returns type:
    ({str, None}, {str, None})
|
def _findSwiplMacOSHome():
    """
    This function guesses where SWI-Prolog is installed on MacOS via the .app
    bundle (the SWI-Prolog version is determined internally via get_swi_ver()).
    :returns:
        A tuple of (path to the swipl so, path to the resource file)
    :returns type:
        ({str, None}, {str, None})
    """
    # Need more help with MacOS
    # This way works, but needs more work
    names = ['libswipl.dylib', 'libpl.dylib']
    path = os.environ.get('SWI_HOME_DIR')
    if path is None:
        path = os.environ.get('SWI_LIB_DIR')
    if path is None:
        path = os.environ.get('PLBASE')
    if path is None:
        swi_ver = get_swi_ver()
        path = '/Applications/SWI-Prolog.app/Contents/swipl-' + swi_ver + '/lib/'
    paths = [path]
    for name in names:
        for path in paths:
            found = walk(path, name)
            if found is not None:
                (path_res, back_path) = found
                os.environ['SWI_LIB_DIR'] = back_path
                return (path_res, None)
    return (None, None)
|
This function uses several heuristics to guess where SWI-Prolog is
installed on MacOS.
:returns:
    A tuple of (path to the swipl so, path to the resource file)
:returns type:
    ({str, None}, {str, None})
|
def _findSwiplDar():
    """
    This function uses several heuristics to guess where SWI-Prolog is
    installed on MacOS.
    :returns:
        A tuple of (path to the swipl so, path to the resource file)
    :returns type:
        ({str, None}, {str, None})
    """
    # If the exec is in the path
    (path, swiHome) = _findSwiplFromExec()
    if path is not None:
        return (path, swiHome)
    # If it is not, use find_library
    path = _findSwiplPathFromFindLib()
    if path is not None:
        return (path, swiHome)
    # Last guess, searching for the file
    paths = ['.', './lib', '/usr/lib/', '/usr/local/lib', '/opt/local/lib']
    names = ['libswipl.dylib', 'libpl.dylib']
    for name in names:
        for path in paths:
            path = os.path.join(path, name)
            if os.path.exists(path):
                return (path, None)
    return (None, None)
|
This function makes a big effort to find the path to the SWI-Prolog shared
library. Since this is both OS dependent and installation dependent, we may
not always succeed. If we do, we return a name/path that can be used by
CDLL(). Otherwise we raise an exception.
:return: Tuple. First element is the name or path to the library that can be
         used by CDLL. Second element is the path where the SWI-Prolog resource
         file may be found (this is needed on some Linux systems)
:rtype: Tuple of strings
:raises ImportError: If we cannot guess the name of the library
|
def _findSwipl():
    """
    This function makes a big effort to find the path to the SWI-Prolog shared
    library. Since this is both OS dependent and installation dependent, we may
    not always succeed. If we do, we return a name/path that can be used by
    CDLL(). Otherwise we raise an exception.
    :return: Tuple. First element is the name or path to the library that can be
             used by CDLL. Second element is the path where the SWI-Prolog resource
             file may be found (this is needed on some Linux systems)
    :rtype: Tuple of strings
    :raises ImportError: If we cannot guess the name of the library
    """
    # Now begins the guesswork
    platform = sys.platform[:3]
    if platform == "win":  # On Windows, we have the default installer
        # path and the registry to look at
        (path, swiHome) = _findSwiplWin()
    elif platform in ("lin", "cyg"):
        (path, swiHome) = _findSwiplLin()
    elif platform == "dar":  # Help with MacOS is welcome!!
        (path, swiHome) = _findSwiplDar()
        if path is None:
            (path, swiHome) = _findSwiplMacOSHome()
    else:
        # This should work for other UNIX
        (path, swiHome) = _findSwiplLin()
    # This is a catch-all raise
    if path is None:
        raise ImportError('Could not find the SWI-Prolog library on this '
                          'platform. If you are sure it is installed, please '
                          'open an issue.')
    else:
        return (path, swiHome)
|
When the path to the DLL is not in the Windows search path, Windows will not be
able to find other DLLs in the same directory, so we have to add it to the
path. This function takes care of it.
:parameters:
  -  `dll` (str) - File name of the DLL
|
def _fixWindowsPath(dll):
    """
    When the path to the DLL is not in the Windows search path, Windows will not
    be able to find other DLLs in the same directory, so we have to add it to
    the path. This function takes care of it.
    :parameters:
      -  `dll` (str) - File name of the DLL
    """
    if sys.platform[:3] != 'win':
        return  # Nothing to do here
    pathToDll = os.path.dirname(dll)
    currentWindowsPath = os.getenv('PATH')
    if pathToDll not in currentWindowsPath:
        # We will prepend the path, to avoid conflicts between DLLs.
        # Assigning via os.environ (unlike a bare os.putenv) also updates
        # the value later seen by os.getenv in this process.
        newPath = pathToDll + ';' + currentWindowsPath
        os.environ['PATH'] = newPath
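The PATH manipulation itself is portable enough to sketch and test anywhere; this version uses `os.pathsep` instead of the hardcoded `';'` above (which is fine there because the function is Windows-only):

```python
import os


def prepend_to_path(directory, path):
    # Prepend directory to a PATH-style string unless it is already listed;
    # prepending (rather than appending) avoids conflicts with other DLLs.
    parts = path.split(os.pathsep)
    if directory in parts:
        return path
    return directory + os.pathsep + path

print(prepend_to_path(r"C:\pl\bin", r"C:\Windows"))
```

Note this splits on the separator and compares whole entries, which avoids the substring false-positive in the original `pathToDll not in currentWindowsPath` check (e.g. `C:\pl` matching inside `C:\pl\bin`).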
|
Turns a string into bytes if necessary (i.e. if it is not already a bytes
object or None).
If string is None, an int, or a c_char_p, it is returned directly.
:param string: The string that shall be transformed
:type string: str, bytes or type(None)
:return: Transformed string
:rtype: c_char_p compatible object (bytes, c_char_p, int or None)
|
def str_to_bytes(string):
    """
    Turns a string into bytes if necessary (i.e. if it is not already a bytes
    object or None).
    If string is None, an int, or a c_char_p, it is returned directly.
    :param string: The string that shall be transformed
    :type string: str, bytes or type(None)
    :return: Transformed string
    :rtype: c_char_p compatible object (bytes, c_char_p, int or None)
    """
    if string is None or isinstance(string, (int, c_char_p)):
        return string
    if not isinstance(string, bytes):
        if string not in _stringMap:
            _stringMap[string] = string.encode()
        string = _stringMap[string]
    return string
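The `_stringMap` lookup is a memoized encode: each unique `str` is encoded once and the same `bytes` object is handed out on every later call. A simplified, self-contained version (the ctypes `c_char_p` pass-through case is dropped here):

```python
_encoded_cache = {}


def str_to_bytes_cached(string):
    # Simplified version of str_to_bytes() above: None and ints pass
    # through; strs are encoded once and the bytes object is reused.
    if string is None or isinstance(string, int):
        return string
    if not isinstance(string, bytes):
        if string not in _encoded_cache:
            _encoded_cache[string] = string.encode()
        string = _encoded_cache[string]
    return string
```

Reusing one `bytes` object per string matters when the result is handed to a C API: the pointer passed to the foreign library stays alive and stable for as long as the cache holds it.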
|
This function turns an array of strings into a pointer array
with pointers pointing to the encodings of those strings
Bytes objects already contained in the list are kept as they are.
:param strList: List of strings that shall be converted
:type strList: List of strings
:returns: Pointer array with pointers pointing to bytes
:raises: TypeError if strList is not list, set or tuple
|
def list_to_bytes_list(strList):
"""
This function turns an array of strings into a pointer array
with pointers pointing to the encodings of those strings
    Bytes objects already contained in the list are kept as they are.
:param strList: List of strings that shall be converted
:type strList: List of strings
:returns: Pointer array with pointers pointing to bytes
:raises: TypeError if strList is not list, set or tuple
"""
pList = c_char_p * len(strList)
# if strList is already a pointerarray or None, there is nothing to do
if isinstance(strList, (pList, type(None))):
return strList
if not isinstance(strList, (list, set, tuple)):
raise TypeError("strList must be list, set or tuple, not " +
str(type(strList)))
pList = pList()
for i, elem in enumerate(strList):
pList[i] = str_to_bytes(elem)
return pList
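Assuming only the standard `ctypes` module, the list-to-pointer-array conversion can be sketched as follows; `strings_to_c_array` is a hypothetical name:

```python
from ctypes import c_char_p

def strings_to_c_array(items):
    """Build a c_char_p array whose entries point at the encoded strings."""
    if not isinstance(items, (list, set, tuple)):
        raise TypeError("items must be list, set or tuple, not " +
                        str(type(items)))
    arr_type = c_char_p * len(items)   # array type sized to the input
    arr = arr_type()
    for i, elem in enumerate(items):
        # bytes pass through; str is encoded
        arr[i] = elem if isinstance(elem, bytes) else elem.encode()
    return arr
```

Reading an element of a `c_char_p` array yields the bytes the pointer refers to, which is what the assertions below rely on.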
|
Decorator function which can be used to automatically turn an incoming
string into a bytes object and an incoming list to a pointer array if
necessary.
:param strings: Indices of the arguments must be pointers to bytes
:type strings: List of integers
:param arrays: Indices of the arguments must be arrays of pointers to bytes
:type arrays: List of integers
|
def check_strings(strings, arrays):
"""
Decorator function which can be used to automatically turn an incoming
string into a bytes object and an incoming list to a pointer array if
necessary.
:param strings: Indices of the arguments must be pointers to bytes
:type strings: List of integers
:param arrays: Indices of the arguments must be arrays of pointers to bytes
:type arrays: List of integers
"""
# if given a single element, turn it into a list
if isinstance(strings, int):
strings = [strings]
elif strings is None:
strings = []
# check if all entries are integers
    for i, k in enumerate(strings):
        if not isinstance(k, int):
            raise TypeError(('Wrong type for index at position {0} in '
                             'strings. Must be int, not {1}!').format(i, type(k).__name__))
# if given a single element, turn it into a list
if isinstance(arrays, int):
arrays = [arrays]
elif arrays is None:
arrays = []
# check if all entries are integers
    for i, k in enumerate(arrays):
        if not isinstance(k, int):
            raise TypeError(('Wrong type for index at position {0} in '
                             'arrays. Must be int, not {1}!').format(i, type(k).__name__))
# check if some index occurs in both
if set(strings).intersection(arrays):
        raise ValueError('One or more elements occur in both arrays and '
                         'strings. One parameter cannot be both list and string!')
# create the checker that will check all arguments given by argsToCheck
# and turn them into the right datatype.
def checker(func):
def check_and_call(*args):
args = list(args)
for i in strings:
arg = args[i]
args[i] = str_to_bytes(arg)
for i in arrays:
arg = args[i]
args[i] = list_to_bytes_list(arg)
return func(*args)
return check_and_call
return checker
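A stripped-down sketch of the decorator idea, handling only string arguments (`convert_args` and `consult` are hypothetical names, not part of the original module):

```python
def convert_args(string_indices):
    """Decorator: encode the str arguments at the given positions to bytes."""
    if isinstance(string_indices, int):
        string_indices = [string_indices]
    def decorator(func):
        def wrapper(*args):
            args = list(args)
            for i in string_indices:
                if isinstance(args[i], str):
                    args[i] = args[i].encode()
            return func(*args)
        return wrapper
    return decorator

@convert_args(0)
def consult(filename, flags=0):
    # A stand-in for a C foreign function that expects bytes.
    return (filename, flags)
```

Positional indices are used because the wrapped targets are `ctypes` foreign functions, which are called positionally.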
|
Run a prolog query and return a generator.
If the query is a yes/no question, returns {} for yes, and nothing for no.
Otherwise returns a generator of dicts with variables as keys.
>>> prolog = Prolog()
>>> prolog.assertz("father(michael,john)")
>>> prolog.assertz("father(michael,gina)")
>>> bool(list(prolog.query("father(michael,john)")))
True
>>> bool(list(prolog.query("father(michael,olivia)")))
False
>>> print(sorted(prolog.query("father(michael,X)")))
[{'X': 'gina'}, {'X': 'john'}]
|
def query(cls, query, maxresult=-1, catcherrors=True, normalize=True):
"""Run a prolog query and return a generator.
If the query is a yes/no question, returns {} for yes, and nothing for no.
Otherwise returns a generator of dicts with variables as keys.
>>> prolog = Prolog()
>>> prolog.assertz("father(michael,john)")
>>> prolog.assertz("father(michael,gina)")
>>> bool(list(prolog.query("father(michael,john)")))
True
>>> bool(list(prolog.query("father(michael,olivia)")))
False
    >>> print(sorted(prolog.query("father(michael,X)")))
[{'X': 'gina'}, {'X': 'john'}]
"""
return cls._QueryWrapper()(query, maxresult, catcherrors, normalize)
|
Calculates the request payload size
|
def calculate_size(uuid, partition_id, interrupt):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(uuid)
data_size += INT_SIZE_IN_BYTES
data_size += BOOLEAN_SIZE_IN_BYTES
return data_size
|
Encode request into client_message
|
def encode_request(uuid, partition_id, interrupt):
""" Encode request into client_message"""
client_message = ClientMessage(payload_size=calculate_size(uuid, partition_id, interrupt))
client_message.set_message_type(REQUEST_TYPE)
client_message.set_retryable(RETRYABLE)
client_message.append_str(uuid)
client_message.append_int(partition_id)
client_message.append_bool(interrupt)
client_message.update_frame_length()
return client_message
|
Calculates the request payload size
|
def calculate_size(name, include_value, local_only):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += BOOLEAN_SIZE_IN_BYTES
data_size += BOOLEAN_SIZE_IN_BYTES
return data_size
|
Decode response from client message
|
def decode_response(client_message, to_object=None):
""" Decode response from client message"""
parameters = dict(response=None)
parameters['response'] = client_message.read_str()
return parameters
|
Event handler
|
def handle(client_message, handle_event_entry=None, to_object=None):
""" Event handler """
message_type = client_message.get_message_type()
if message_type == EVENT_ENTRY and handle_event_entry is not None:
key = None
if not client_message.read_bool():
key = client_message.read_data()
value = None
if not client_message.read_bool():
value = client_message.read_data()
old_value = None
if not client_message.read_bool():
old_value = client_message.read_data()
merging_value = None
if not client_message.read_bool():
merging_value = client_message.read_data()
event_type = client_message.read_int()
uuid = client_message.read_str()
number_of_affected_entries = client_message.read_int()
handle_event_entry(key=key, value=value, old_value=old_value, merging_value=merging_value, event_type=event_type, uuid=uuid, number_of_affected_entries=number_of_affected_entries)
|
Calculates the request payload size
|
def calculate_size(name, function):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += calculate_size_data(function)
return data_size
|
Decode response from client message
|
def decode_response(client_message, to_object=None):
""" Decode response from client message"""
parameters = dict(response=None)
if not client_message.read_bool():
parameters['response'] = to_object(client_message.read_data())
return parameters
|
Calculates the request payload size
|
def calculate_size(name, expected):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += BOOLEAN_SIZE_IN_BYTES
if expected is not None:
data_size += calculate_size_data(expected)
return data_size
|
Calculates the request payload size
|
def calculate_size(name, permits):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += INT_SIZE_IN_BYTES
return data_size
|
Calculates the request payload size
|
def calculate_size(name, value_list):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += INT_SIZE_IN_BYTES
for value_list_item in value_list:
data_size += calculate_size_data(value_list_item)
return data_size
|
Encode request into client_message
|
def encode_request(name, value_list):
""" Encode request into client_message"""
client_message = ClientMessage(payload_size=calculate_size(name, value_list))
client_message.set_message_type(REQUEST_TYPE)
client_message.set_retryable(RETRYABLE)
client_message.append_str(name)
client_message.append_int(len(value_list))
for value_list_item in value_list:
client_message.append_data(value_list_item)
client_message.update_frame_length()
return client_message
|
Adds the specified item to this queue if there is available space.
:param item: (object), the specified item.
:return: (bool), ``true`` if element is successfully added, ``false`` otherwise.
|
def add(self, item):
"""
Adds the specified item to this queue if there is available space.
:param item: (object), the specified item.
:return: (bool), ``true`` if element is successfully added, ``false`` otherwise.
"""
def result_fnc(f):
if f.result():
return True
raise Full("Queue is full!")
return self.offer(item).continue_with(result_fnc)
|
Adds the elements in the specified collection to this queue.
:param items: (Collection), collection which includes the items to be added.
:return: (bool), ``true`` if this queue is changed after call, ``false`` otherwise.
|
def add_all(self, items):
"""
Adds the elements in the specified collection to this queue.
:param items: (Collection), collection which includes the items to be added.
:return: (bool), ``true`` if this queue is changed after call, ``false`` otherwise.
"""
check_not_none(items, "Value can't be None")
data_items = []
for item in items:
check_not_none(item, "Value can't be None")
data_items.append(self._to_data(item))
return self._encode_invoke(queue_add_all_codec, data_list=data_items)
|
Adds an item listener for this queue. Listener will be notified for all queue add/remove events.
:param include_value: (bool), whether received events include the updated item or not (optional).
:param item_added_func: Function to be called when an item is added to this set (optional).
:param item_removed_func: Function to be called when an item is removed from this queue (optional).
:return: (str), a registration id which is used as a key to remove the listener.
|
def add_listener(self, include_value=False, item_added_func=None, item_removed_func=None):
"""
Adds an item listener for this queue. Listener will be notified for all queue add/remove events.
:param include_value: (bool), whether received events include the updated item or not (optional).
:param item_added_func: Function to be called when an item is added to this set (optional).
        :param item_removed_func: Function to be called when an item is removed from this queue (optional).
:return: (str), a registration id which is used as a key to remove the listener.
"""
request = queue_add_listener_codec.encode_request(self.name, include_value, False)
def handle_event_item(item, uuid, event_type):
item = item if include_value else None
member = self._client.cluster.get_member_by_uuid(uuid)
item_event = ItemEvent(self.name, item, event_type, member, self._to_object)
if event_type == ItemEventType.added:
if item_added_func:
item_added_func(item_event)
else:
if item_removed_func:
item_removed_func(item_event)
return self._start_listening(request,
lambda m: queue_add_listener_codec.handle(m, handle_event_item),
lambda r: queue_add_listener_codec.decode_response(r)['response'],
self.partition_key)
|
Determines whether this queue contains all of the items in the specified collection or not.
:param items: (Collection), the specified collection which includes the items to be searched.
:return: (bool), ``true`` if all of the items in the specified collection exist in this queue, ``false`` otherwise.
|
def contains_all(self, items):
"""
Determines whether this queue contains all of the items in the specified collection or not.
:param items: (Collection), the specified collection which includes the items to be searched.
:return: (bool), ``true`` if all of the items in the specified collection exist in this queue, ``false`` otherwise.
"""
check_not_none(items, "Items can't be None")
data_items = []
for item in items:
check_not_none(item, "item can't be None")
data_items.append(self._to_data(item))
return self._encode_invoke(queue_contains_all_codec, data_list=data_items)
|
Inserts the specified element into this queue if it is possible to do so immediately without violating capacity
restrictions. Returns ``true`` upon success. If there is no space currently available:
* If a timeout is provided, it waits until this timeout elapses and returns the result.
* If a timeout is not provided, returns ``false`` immediately.
:param item: (object), the item to be added.
:param timeout: (long), maximum time in seconds to wait for addition (optional).
:return: (bool), ``true`` if the element was added to this queue, ``false`` otherwise.
|
def offer(self, item, timeout=0):
"""
Inserts the specified element into this queue if it is possible to do so immediately without violating capacity
restrictions. Returns ``true`` upon success. If there is no space currently available:
* If a timeout is provided, it waits until this timeout elapses and returns the result.
* If a timeout is not provided, returns ``false`` immediately.
:param item: (object), the item to be added.
:param timeout: (long), maximum time in seconds to wait for addition (optional).
:return: (bool), ``true`` if the element was added to this queue, ``false`` otherwise.
"""
check_not_none(item, "Value can't be None")
element_data = self._to_data(item)
return self._encode_invoke(queue_offer_codec, value=element_data, timeout_millis=to_millis(timeout))
|
Transfers all available items to the given `list`_ and removes these items from this queue. If a max_size is
specified, it transfers at most the given number of items. In case of a failure, an item can exist in both
collections or none of them.
This operation may be more efficient than polling elements repeatedly and putting them into a collection.
:param list: (`list`_), the list where the items in this queue will be transferred.
:param max_size: (int), the maximum number of items to transfer (optional).
:return: (int), number of transferred items.
.. _list: https://docs.python.org/2/library/functions.html#list
|
def drain_to(self, list, max_size=-1):
"""
Transfers all available items to the given `list`_ and removes these items from this queue. If a max_size is
specified, it transfers at most the given number of items. In case of a failure, an item can exist in both
collections or none of them.
        This operation may be more efficient than polling elements repeatedly and putting them into a collection.
        :param list: (`list`_), the list where the items in this queue will be transferred.
        :param max_size: (int), the maximum number of items to transfer (optional).
:return: (int), number of transferred items.
.. _list: https://docs.python.org/2/library/functions.html#list
"""
def drain_result(f):
resp = f.result()
list.extend(resp)
return len(resp)
return self._encode_invoke(queue_drain_to_max_size_codec, max_size=max_size).continue_with(
drain_result)
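The `continue_with` chaining used above can be illustrated with a hypothetical `FakeFuture` that resolves immediately; the real client future is asynchronous, but the callback shape is the same:

```python
class FakeFuture:
    """Hypothetical stand-in for the client's future; resolves immediately."""
    def __init__(self, value):
        self._value = value

    def result(self):
        return self._value

    def continue_with(self, fn):
        # Apply the continuation and wrap its return value in a new future.
        return FakeFuture(fn(self))

target = []

def drain_result(f):
    # Mirrors drain_to's continuation: extend the caller's list, return the count.
    resp = f.result()
    target.extend(resp)
    return len(resp)

count = FakeFuture([b"a", b"b", b"c"]).continue_with(drain_result).result()
```

The continuation receives the *future*, not the raw value, which is why it calls `f.result()` first.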
|
Adds the specified element into this queue. If there is no space, it waits until necessary space becomes
available.
:param item: (object), the specified item.
|
def put(self, item):
"""
Adds the specified element into this queue. If there is no space, it waits until necessary space becomes
available.
:param item: (object), the specified item.
"""
check_not_none(item, "Value can't be None")
element_data = self._to_data(item)
return self._encode_invoke(queue_put_codec, value=element_data)
|
Removes all of the elements of the specified collection from this queue.
:param items: (Collection), the specified collection.
:return: (bool), ``true`` if the call changed this queue, ``false`` otherwise.
|
def remove_all(self, items):
"""
Removes all of the elements of the specified collection from this queue.
:param items: (Collection), the specified collection.
:return: (bool), ``true`` if the call changed this queue, ``false`` otherwise.
"""
check_not_none(items, "Value can't be None")
data_items = []
for item in items:
check_not_none(item, "Value can't be None")
data_items.append(self._to_data(item))
return self._encode_invoke(queue_compare_and_remove_all_codec, data_list=data_items)
|
Removes the specified item listener. Returns silently if the specified listener was not added before.
:param registration_id: (str), id of the listener to be deleted.
:return: (bool), ``true`` if the item listener is removed, ``false`` otherwise.
|
def remove_listener(self, registration_id):
"""
Removes the specified item listener. Returns silently if the specified listener was not added before.
:param registration_id: (str), id of the listener to be deleted.
:return: (bool), ``true`` if the item listener is removed, ``false`` otherwise.
"""
return self._stop_listening(registration_id, lambda i: queue_remove_listener_codec.encode_request(self.name, i))
|
Removes the items which are not contained in the specified collection. In other words, only the items that
are contained in the specified collection will be retained.
:param items: (Collection), collection which includes the elements to be retained in this queue.
:return: (bool), ``true`` if this queue changed as a result of the call.
|
def retain_all(self, items):
"""
Removes the items which are not contained in the specified collection. In other words, only the items that
are contained in the specified collection will be retained.
        :param items: (Collection), collection which includes the elements to be retained in this queue.
:return: (bool), ``true`` if this queue changed as a result of the call.
"""
check_not_none(items, "Value can't be None")
data_items = []
for item in items:
check_not_none(item, "Value can't be None")
data_items.append(self._to_data(item))
return self._encode_invoke(queue_compare_and_retain_all_codec, data_list=data_items)
|
Decode response from client message
|
def decode_response(client_message, to_object=None):
""" Decode response from client message"""
parameters = dict(partitions=None)
partitions_size = client_message.read_int()
partitions = {}
for _ in range(0, partitions_size):
partitions_key = AddressCodec.decode(client_message, to_object)
partitions_val_size = client_message.read_int()
partitions_val = []
for _ in range(0, partitions_val_size):
partitions_val_item = client_message.read_int()
partitions_val.append(partitions_val_item)
partitions[partitions_key] = partitions_val
parameters['partitions'] = partitions
return parameters
|
Calculates the request payload size
|
def calculate_size(name, data_list):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += INT_SIZE_IN_BYTES
for data_list_item in data_list:
data_size += calculate_size_data(data_list_item)
return data_size
|
Encode request into client_message
|
def encode_request(name, data_list):
""" Encode request into client_message"""
client_message = ClientMessage(payload_size=calculate_size(name, data_list))
client_message.set_message_type(REQUEST_TYPE)
client_message.set_retryable(RETRYABLE)
client_message.append_str(name)
client_message.append_int(len(data_list))
for data_list_item in data_list:
client_message.append_data(data_list_item)
client_message.update_frame_length()
return client_message
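The count-then-items layout these codecs write can be sketched with the standard `struct` module. This is only an illustration of the shape: the real Hazelcast client message adds a frame header, message type, and correlation id, all omitted here:

```python
import struct

def encode_data_list(items):
    """A little-endian int item count, then each item length-prefixed."""
    out = struct.pack("<i", len(items))
    for item in items:
        out += struct.pack("<i", len(item)) + item
    return out

def decode_data_list(buf):
    """Inverse of encode_data_list: read the count, then each sized item."""
    count = struct.unpack_from("<i", buf, 0)[0]
    pos = 4
    items = []
    for _ in range(count):
        size = struct.unpack_from("<i", buf, pos)[0]
        pos += 4
        items.append(buf[pos:pos + size])
        pos += size
    return items
```

This mirrors why `calculate_size` sums `INT_SIZE_IN_BYTES` plus one `calculate_size_data` per element: the payload is exactly one count prefix followed by the serialized items.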
|
Calculates the request payload size
|
def calculate_size(name, new_value):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += LONG_SIZE_IN_BYTES
return data_size
|
Adds the given value to the current value and returns the previous value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
:raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to add.
:return: (int), the previous value.
|
def get_and_add(self, delta):
"""
Adds the given value to the current value and returns the previous value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
:raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to add.
:return: (int), the previous value.
"""
return self._invoke_internal(pn_counter_add_codec, delta=delta, get_before_update=True)
|
Adds the given value to the current value and returns the updated value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
:raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to add.
:return: (int), the updated value.
|
def add_and_get(self, delta):
"""
Adds the given value to the current value and returns the updated value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
:raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to add.
:return: (int), the updated value.
"""
return self._invoke_internal(pn_counter_add_codec, delta=delta, get_before_update=False)
|
Subtracts the given value from the current value and returns the previous value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
:raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to subtract.
:return: (int), the previous value.
|
def get_and_subtract(self, delta):
"""
Subtracts the given value from the current value and returns the previous value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
:raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to subtract.
:return: (int), the previous value.
"""
return self._invoke_internal(pn_counter_add_codec, delta=-1 * delta, get_before_update=True)
|
Subtracts the given value from the current value and returns the updated value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
:raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to subtract.
:return: (int), the updated value.
|
def subtract_and_get(self, delta):
"""
Subtracts the given value from the current value and returns the updated value.
:raises NoDataMemberInClusterError: if the cluster does not contain any data members.
:raises UnsupportedOperationError: if the cluster version is less than 3.10.
:raises ConsistencyLostError: if the session guarantees have been lost.
:param delta: (int), the value to subtract.
:return: (int), the updated value.
"""
return self._invoke_internal(pn_counter_add_codec, delta=-1 * delta, get_before_update=False)
|
Decode response from client message
|
def decode_response(client_message, to_object=None):
""" Decode response from client message"""
parameters = dict(response=None)
parameters['response'] = client_message.read_int()
return parameters
|
Event handler
|
def handle(client_message, handle_event_map_partition_lost=None, to_object=None):
""" Event handler """
message_type = client_message.get_message_type()
if message_type == EVENT_MAPPARTITIONLOST and handle_event_map_partition_lost is not None:
partition_id = client_message.read_int()
uuid = client_message.read_str()
handle_event_map_partition_lost(partition_id=partition_id, uuid=uuid)
|
Transactional implementation of :func:`List.add(item) <hazelcast.proxy.list.List.add>`
:param item: (object), the new item to be added.
:return: (bool), ``true`` if the item is added successfully, ``false`` otherwise.
|
def add(self, item):
"""
Transactional implementation of :func:`List.add(item) <hazelcast.proxy.list.List.add>`
:param item: (object), the new item to be added.
:return: (bool), ``true`` if the item is added successfully, ``false`` otherwise.
"""
check_not_none(item, "item can't be none")
return self._encode_invoke(transactional_list_add_codec, item=self._to_data(item))
|
Transactional implementation of :func:`List.remove(item) <hazelcast.proxy.list.List.remove>`
:param item: (object), the specified item to be removed.
:return: (bool), ``true`` if the item is removed successfully, ``false`` otherwise.
|
def remove(self, item):
"""
Transactional implementation of :func:`List.remove(item) <hazelcast.proxy.list.List.remove>`
:param item: (object), the specified item to be removed.
:return: (bool), ``true`` if the item is removed successfully, ``false`` otherwise.
"""
check_not_none(item, "item can't be none")
return self._encode_invoke(transactional_list_remove_codec, item=self._to_data(item))
|
Calculates the request payload size
|
def calculate_size(name, service_name):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += calculate_size_str(service_name)
return data_size
|
Calculates the request payload size
|
def calculate_size(name, permits, timeout):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += INT_SIZE_IN_BYTES
data_size += LONG_SIZE_IN_BYTES
return data_size
|
Creates cluster-wide :class:`~hazelcast.proxy.id_generator.IdGenerator`.
:param name: (str), name of the IdGenerator proxy.
:return: (:class:`~hazelcast.proxy.id_generator.IdGenerator`), IdGenerator proxy for the given name.
|
def get_id_generator(self, name):
"""
Creates cluster-wide :class:`~hazelcast.proxy.id_generator.IdGenerator`.
:param name: (str), name of the IdGenerator proxy.
:return: (:class:`~hazelcast.proxy.id_generator.IdGenerator`), IdGenerator proxy for the given name.
"""
atomic_long = self.get_atomic_long(ID_GENERATOR_ATOMIC_LONG_PREFIX + name)
return self.proxy.get_or_create(ID_GENERATOR_SERVICE, name, atomic_long=atomic_long)
|
Creates a new :class:`~hazelcast.transaction.Transaction` associated with the current thread using default or given options.
:param timeout: (long), the timeout in seconds that determines the maximum lifespan of a transaction. For example, if a
transaction is configured with a timeout of 2 minutes, it will automatically roll back if it has not
committed within that time.
:param durability: (int), the durability is the number of machines that can take over if a member fails during a
transaction commit or rollback.
:param type: (Transaction Type), the transaction type which can be :const:`~hazelcast.transaction.TWO_PHASE` or :const:`~hazelcast.transaction.ONE_PHASE`
:return: (:class:`~hazelcast.transaction.Transaction`), new Transaction associated with the current thread.
|
def new_transaction(self, timeout=120, durability=1, type=TWO_PHASE):
"""
Creates a new :class:`~hazelcast.transaction.Transaction` associated with the current thread using default or given options.
        :param timeout: (long), the timeout in seconds that determines the maximum lifespan of a transaction. For example, if a
        transaction is configured with a timeout of 2 minutes, it will automatically roll back if it has not
        committed within that time.
        :param durability: (int), the durability is the number of machines that can take over if a member fails during a
        transaction commit or rollback.
:param type: (Transaction Type), the transaction type which can be :const:`~hazelcast.transaction.TWO_PHASE` or :const:`~hazelcast.transaction.ONE_PHASE`
:return: (:class:`~hazelcast.transaction.Transaction`), new Transaction associated with the current thread.
"""
return self.transaction_manager.new_transaction(timeout, durability, type)
|
Shuts down this HazelcastClient.
|
def shutdown(self):
"""
Shuts down this HazelcastClient.
"""
if self.lifecycle.is_live:
self.lifecycle.fire_lifecycle_event(LIFECYCLE_STATE_SHUTTING_DOWN)
self.near_cache_manager.destroy_all_near_caches()
self.statistics.shutdown()
self.partition_service.shutdown()
self.heartbeat.shutdown()
self.cluster.shutdown()
self.reactor.shutdown()
self.lifecycle.fire_lifecycle_event(LIFECYCLE_STATE_SHUTDOWN)
self.logger.info("Client shutdown.", extra=self._logger_extras)
|
Event handler
|
def handle(client_message, handle_event_member=None, handle_event_member_list=None, handle_event_member_attribute_change=None, to_object=None):
""" Event handler """
message_type = client_message.get_message_type()
if message_type == EVENT_MEMBER and handle_event_member is not None:
member = MemberCodec.decode(client_message, to_object)
event_type = client_message.read_int()
handle_event_member(member=member, event_type=event_type)
if message_type == EVENT_MEMBERLIST and handle_event_member_list is not None:
members_size = client_message.read_int()
members = []
for _ in range(0, members_size):
members_item = MemberCodec.decode(client_message, to_object)
members.append(members_item)
handle_event_member_list(members=members)
if message_type == EVENT_MEMBERATTRIBUTECHANGE and handle_event_member_attribute_change is not None:
uuid = client_message.read_str()
key = client_message.read_str()
operation_type = client_message.read_int()
value = None
if not client_message.read_bool():
value = client_message.read_str()
handle_event_member_attribute_change(uuid=uuid, key=key, operation_type=operation_type, value=value)
|
Subscribes to this topic. When someone publishes a message on this topic, the on_message() function is called, if
provided.
:param on_message: (Function), function to be called when a message is published.
:return: (str), a registration id which is used as a key to remove the listener.
|
def add_listener(self, on_message=None):
"""
        Subscribes to this topic. When someone publishes a message on this topic, the on_message() function is called, if
        provided.
:param on_message: (Function), function to be called when a message is published.
:return: (str), a registration id which is used as a key to remove the listener.
"""
request = topic_add_message_listener_codec.encode_request(self.name, False)
def handle(item, publish_time, uuid):
member = self._client.cluster.get_member_by_uuid(uuid)
item_event = TopicMessage(self.name, item, publish_time, member, self._to_object)
on_message(item_event)
return self._start_listening(request,
lambda m: topic_add_message_listener_codec.handle(m, handle),
lambda r: topic_add_message_listener_codec.decode_response(r)['response'],
self.partition_key)
|
Publishes the message to all subscribers of this topic
:param message: (object), the message to be published.
|
def publish(self, message):
"""
Publishes the message to all subscribers of this topic
:param message: (object), the message to be published.
"""
message_data = self._to_data(message)
self._encode_invoke(topic_publish_codec, message=message_data)
|
Stops receiving messages for the given message listener. If the given listener was already removed, this method does
nothing.
:param registration_id: (str), registration id of the listener to be removed.
:return: (bool), ``true`` if the listener is removed, ``false`` otherwise.
|
def remove_listener(self, registration_id):
"""
        Stops receiving messages for the given message listener. If the given listener was already removed, this method does
        nothing.
:param registration_id: (str), registration id of the listener to be removed.
:return: (bool), ``true`` if the listener is removed, ``false`` otherwise.
"""
return self._stop_listening(registration_id,
lambda i: topic_remove_message_listener_codec.encode_request(self.name, i))
|
Calculates the request payload size
|
def calculate_size(name, from_, to):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += INT_SIZE_IN_BYTES
data_size += INT_SIZE_IN_BYTES
return data_size
|
Encode request into client_message
|
def encode_request(name, from_, to):
""" Encode request into client_message"""
client_message = ClientMessage(payload_size=calculate_size(name, from_, to))
client_message.set_message_type(REQUEST_TYPE)
client_message.set_retryable(RETRYABLE)
client_message.append_str(name)
client_message.append_int(from_)
client_message.append_int(to)
client_message.update_frame_length()
return client_message
|
Decode response from client message
|
def decode_response(client_message, to_object=None):
""" Decode response from client message"""
parameters = dict(response=None)
response_size = client_message.read_int()
response = []
for _ in range(0, response_size):
response_item = client_message.read_data()
response.append(response_item)
parameters['response'] = ImmutableLazyDataList(response, to_object)
return parameters
|
Validates the serializer for the given type.
:param serializer: (Serializer), the serializer to be validated.
:param _type: (Type), type to be used for serializer validation.
|
def validate_serializer(serializer, _type):
"""
Validates the serializer for the given type.
:param serializer: (Serializer), the serializer to be validated.
:param _type: (Type), type to be used for serializer validation.
"""
if not issubclass(serializer, _type):
raise ValueError("Serializer should be an instance of {}".format(_type.__name__))
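The check can be exercised with stand-in classes; both class names below are hypothetical, not the client's real serializer types.

```python
def validate_serializer(serializer, _type):
    # same issubclass-based check as above
    if not issubclass(serializer, _type):
        raise ValueError("Serializer should be an instance of {}".format(_type.__name__))

class StreamSerializer(object):  # hypothetical base type
    pass

class JsonSerializer(StreamSerializer):  # hypothetical subclass
    pass

validate_serializer(JsonSerializer, StreamSerializer)  # passes silently

try:
    validate_serializer(int, StreamSerializer)  # int is not a subclass
    raised = False
except ValueError:
    raised = True
```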
|
Utility method for defining enums.
:param enums: Parameters of enumeration.
:return: (Enum), the created enumerations.
|
def enum(**enums):
"""
Utility method for defining enums.
:param enums: Parameters of enumeration.
:return: (Enum), the created enumerations.
"""
enums['reverse'] = dict((value, key) for key, value in six.iteritems(enums))
return type('Enum', (), enums)
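On Python 3 the same pattern works without `six`; the member names below are made-up examples.

```python
def enum(**enums):
    # dict.items() replaces six.iteritems on Python 3
    enums["reverse"] = {value: key for key, value in enums.items()}
    return type("Enum", (), enums)

# hypothetical members, for illustration only
OverflowPolicy = enum(OVERWRITE=0, FAIL=1)
```

The `reverse` attribute maps a numeric value back to its member name, which is handy when decoding protocol constants received from a server.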
|
:param value: (Number), the value to be converted to seconds.
:param time_unit: (Number), the duration of one time unit in seconds.
:return: (float), the value converted to seconds.
|
def to_seconds(value, time_unit):
"""
:param value: (Number), the value to be converted to seconds.
:param time_unit: (Number), the duration of one time unit in seconds.
:return: (float), the value converted to seconds.
"""
if isinstance(value, bool):
# bool is a subclass of int; reject it so True/False are not silently multiplied as 1/0.
raise TypeError
return float(value) * time_unit
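For example, with a unit factor of 0.001 seconds per unit (a millisecond; the constant name below is an assumption):

```python
def to_seconds(value, time_unit):
    if isinstance(value, bool):
        # bool is a subclass of int; reject it rather than multiply True as 1
        raise TypeError("value must be a number, not bool")
    return float(value) * time_unit

MILLISECOND = 0.001  # assumed unit factor: one millisecond expressed in seconds

half_second = to_seconds(500, MILLISECOND)
two_minutes = to_seconds(2, 60)
```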
|
Calculates the request payload size
|
def calculate_size(name, listener_flags, local_only):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += INT_SIZE_IN_BYTES
data_size += BOOLEAN_SIZE_IN_BYTES
return data_size
|
Event handler
|
def handle(client_message, handle_event_imap_invalidation=None, handle_event_imap_batch_invalidation=None, to_object=None):
""" Event handler """
message_type = client_message.get_message_type()
if message_type == EVENT_IMAPINVALIDATION and handle_event_imap_invalidation is not None:
key = None
if not client_message.read_bool():
key = client_message.read_data()
handle_event_imap_invalidation(key=key)
if message_type == EVENT_IMAPBATCHINVALIDATION and handle_event_imap_batch_invalidation is not None:
keys_size = client_message.read_int()
keys = []
for _ in range(0, keys_size):
keys_item = client_message.read_data()
keys.append(keys_item)
handle_event_imap_batch_invalidation(keys=keys)
|
Creates an exception with the given error codec.
:param error_codec: (Error Codec), error codec which includes the class name, message and exception trace.
:return: (Exception), the created exception.
|
def create_exception(error_codec):
"""
Creates an exception with the given error codec.
:param error_codec: (Error Codec), error codec which includes the class name, message and exception trace.
:return: (Exception), the created exception.
"""
if error_codec.error_code in ERROR_CODE_TO_ERROR:
return ERROR_CODE_TO_ERROR[error_codec.error_code](error_codec.message)
stack_trace = "\n".join(
["\tat %s.%s(%s:%s)" % (x.declaring_class, x.method_name, x.file_name, x.line_number) for x in
error_codec.stack_trace])
message = "Got exception from server:\n %s: %s\n %s" % (error_codec.class_name,
error_codec.message,
stack_trace)
return HazelcastError(message)
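The fallback path (an error code with no registered Python exception) can be sketched with stand-in types; the class definitions below are hypothetical, mirroring only the field names the formatting code relies on.

```python
# Sketch of create_exception's fallback formatting, with stand-in types.
class HazelcastError(Exception):
    pass

class StackTraceElement(object):
    def __init__(self, declaring_class, method_name, file_name, line_number):
        self.declaring_class = declaring_class
        self.method_name = method_name
        self.file_name = file_name
        self.line_number = line_number

class ErrorCodec(object):
    def __init__(self, class_name, message, stack_trace):
        self.class_name = class_name
        self.message = message
        self.stack_trace = stack_trace

def format_server_error(error_codec):
    # render the server-side stack trace in a Java-like "at Class.method(File:line)" form
    stack_trace = "\n".join(
        "\tat %s.%s(%s:%s)" % (x.declaring_class, x.method_name, x.file_name, x.line_number)
        for x in error_codec.stack_trace)
    message = "Got exception from server:\n %s: %s\n %s" % (
        error_codec.class_name, error_codec.message, stack_trace)
    return HazelcastError(message)

err = format_server_error(ErrorCodec(
    "java.lang.IllegalStateException", "boom",
    [StackTraceElement("com.example.Foo", "bar", "Foo.java", 42)]))
```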
|
Calculates the request payload size
|
def calculate_size(name, start_sequence, min_count, max_count, filter):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += LONG_SIZE_IN_BYTES
data_size += INT_SIZE_IN_BYTES
data_size += INT_SIZE_IN_BYTES
data_size += BOOLEAN_SIZE_IN_BYTES
if filter is not None:
data_size += calculate_size_data(filter)
return data_size
|
Encode request into client_message
|
def encode_request(name, start_sequence, min_count, max_count, filter):
""" Encode request into client_message"""
client_message = ClientMessage(payload_size=calculate_size(name, start_sequence, min_count, max_count, filter))
client_message.set_message_type(REQUEST_TYPE)
client_message.set_retryable(RETRYABLE)
client_message.append_str(name)
client_message.append_long(start_sequence)
client_message.append_int(min_count)
client_message.append_int(max_count)
client_message.append_bool(filter is None)
if filter is not None:
client_message.append_data(filter)
client_message.update_frame_length()
return client_message
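The nullable `filter` field above is written as a presence flag followed, only when present, by the payload. A toy sketch of that convention; the byte layout here (one flag byte, big-endian length prefix) is made up for illustration and is not the client's real wire format.

```python
import struct

def append_nullable_data(buf, data):
    # write the "is None" flag first, mirroring append_bool(filter is None)
    buf.append(1 if data is None else 0)
    if data is not None:
        buf.extend(struct.pack(">I", len(data)))  # assumed length prefix
        buf.extend(data)

buf = bytearray()
append_nullable_data(buf, None)    # flag only
append_nullable_data(buf, b"abc")  # flag + length + payload
```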
|
Decode response from client message
|
def decode_response(client_message, to_object=None):
""" Decode response from client message"""
parameters = dict(read_count=None, items=None)
parameters['read_count'] = client_message.read_int()
items_size = client_message.read_int()
items = []
for _ in range(0, items_size):
items_item = client_message.read_data()
items.append(items_item)
parameters['items'] = ImmutableLazyDataList(items, to_object)
return parameters
|
Calculates the request payload size
|
def calculate_size(name, items):
""" Calculates the request payload size"""
data_size = 0
data_size += calculate_size_str(name)
data_size += INT_SIZE_IN_BYTES
for items_item in items:
data_size += calculate_size_data(items_item)
return data_size
|
Encode request into client_message
|
def encode_request(name, items):
""" Encode request into client_message"""
client_message = ClientMessage(payload_size=calculate_size(name, items))
client_message.set_message_type(REQUEST_TYPE)
client_message.set_retryable(RETRYABLE)
client_message.append_str(name)
client_message.append_int(len(items))
for items_item in items:
client_message.append_data(items_item)
client_message.update_frame_length()
return client_message
|
Returns the capacity of this Ringbuffer.
:return: (long), the capacity of Ringbuffer.
|
def capacity(self):
"""
Returns the capacity of this Ringbuffer.
:return: (long), the capacity of Ringbuffer.
"""
if not self._capacity:
def cache_capacity(f):
self._capacity = f.result()
return f.result()
return self._encode_invoke(ringbuffer_capacity_codec).continue_with(cache_capacity)
return ImmediateFuture(self._capacity)
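Since a Ringbuffer's capacity never changes, `capacity()` asks the server once and serves every later call from the cached value. A minimal sketch of that cache-on-first-result pattern; `ImmediateFuture` and the fake invocation below are stand-ins for illustration.

```python
class ImmediateFuture(object):
    # stand-in for a future that already holds its result
    def __init__(self, value):
        self._value = value
    def result(self):
        return self._value

class Ringbuffer(object):
    def __init__(self):
        self._capacity = None
        self.server_calls = 0  # instrumentation for this sketch only

    def _invoke_capacity(self):
        self.server_calls += 1
        return ImmediateFuture(3)  # pretend the server reports capacity == 3

    def capacity(self):
        if not self._capacity:
            f = self._invoke_capacity()
            self._capacity = f.result()  # cache for subsequent calls
            return f
        return ImmediateFuture(self._capacity)

rb = Ringbuffer()
first = rb.capacity().result()
second = rb.capacity().result()  # served from the cache, no second call
```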
|
Adds the specified item to the tail of the Ringbuffer. If there is no space in the Ringbuffer, the action is
determined by overflow policy as :const:`OVERFLOW_POLICY_OVERWRITE` or :const:`OVERFLOW_POLICY_FAIL`.
:param item: (object), the specified item to be added.
:param overflow_policy: (int), the OverflowPolicy to be used when there is no space (optional).
:return: (long), the sequenceId of the added item, or -1 if the add failed.
|
def add(self, item, overflow_policy=OVERFLOW_POLICY_OVERWRITE):
"""
Adds the specified item to the tail of the Ringbuffer. If there is no space in the Ringbuffer, the action is
determined by overflow policy as :const:`OVERFLOW_POLICY_OVERWRITE` or :const:`OVERFLOW_POLICY_FAIL`.
:param item: (object), the specified item to be added.
:param overflow_policy: (int), the OverflowPolicy to be used when there is no space (optional).
:return: (long), the sequenceId of the added item, or -1 if the add failed.
"""
return self._encode_invoke(ringbuffer_add_codec, value=self._to_data(item), overflow_policy=overflow_policy)
|
Adds all of the items in the specified collection to the tail of the Ringbuffer. An add_all is likely to
outperform multiple calls to add(object) due to better IO utilization and a reduced number of executed
operations. The items are added in the order of the iterator of the collection.
If there is no space in the Ringbuffer, the action is determined by the overflow policy as :const:`OVERFLOW_POLICY_OVERWRITE`
or :const:`OVERFLOW_POLICY_FAIL`.
:param items: (Collection), the specified collection which contains the items to be added.
:param overflow_policy: (int), the OverflowPolicy to be used when there is no space (optional).
:return: (long), the sequenceId of the last written item, or -1 if the last write failed.
|
def add_all(self, items, overflow_policy=OVERFLOW_POLICY_OVERWRITE):
"""
Adds all of the items in the specified collection to the tail of the Ringbuffer. An add_all is likely to
outperform multiple calls to add(object) due to better IO utilization and a reduced number of executed
operations. The items are added in the order of the iterator of the collection.
If there is no space in the Ringbuffer, the action is determined by the overflow policy as :const:`OVERFLOW_POLICY_OVERWRITE`
or :const:`OVERFLOW_POLICY_FAIL`.
:param items: (Collection), the specified collection which contains the items to be added.
:param overflow_policy: (int), the OverflowPolicy to be used when there is no space (optional).
:return: (long), the sequenceId of the last written item, or -1 if the last write failed.
"""
check_not_empty(items, "items can't be empty")
if len(items) > MAX_BATCH_SIZE:
raise AssertionError("Batch size can't be greater than %d" % MAX_BATCH_SIZE)
for item in items:
check_not_none(item, "item can't be None")
item_list = [self._to_data(x) for x in items]
return self._encode_invoke(ringbuffer_add_all_codec, value_list=item_list, overflow_policy=overflow_policy)
|
Reads one item from the Ringbuffer. If the sequence is one beyond the current tail, this call blocks until an
item is added. Currently it isn't possible to control how long this call is going to block.
:param sequence: (long), the sequence of the item to read.
:return: (object), the read item.
|
def read_one(self, sequence):
"""
Reads one item from the Ringbuffer. If the sequence is one beyond the current tail, this call blocks until an
item is added. Currently it isn't possible to control how long this call is going to block.
:param sequence: (long), the sequence of the item to read.
:return: (object), the read item.
"""
check_not_negative(sequence, "sequence can't be smaller than 0")
return self._encode_invoke(ringbuffer_read_one_codec, sequence=sequence)
|
Reads a batch of items from the Ringbuffer. If the number of available items after the first read item is
smaller than max_count, only the available items are returned, so the number of items read may be smaller
than max_count. If fewer items are available than min_count, this call blocks. Reading a batch of items
is likely to perform better because less overhead is involved.
:param start_sequence: (long), the start_sequence of the first item to read.
:param min_count: (int), the minimum number of items to read.
:param max_count: (int), the maximum number of items to read.
:return: (Sequence), the list of read items.
|
def read_many(self, start_sequence, min_count, max_count):
"""
Reads a batch of items from the Ringbuffer. If the number of available items after the first read item is
smaller than max_count, only the available items are returned, so the number of items read may be smaller
than max_count. If fewer items are available than min_count, this call blocks. Reading a batch of items
is likely to perform better because less overhead is involved.
:param start_sequence: (long), the start_sequence of the first item to read.
:param min_count: (int), the minimum number of items to read.
:param max_count: (int), the maximum number of items to read.
:return: (Sequence), the list of read items.
"""
check_not_negative(start_sequence, "sequence can't be smaller than 0")
check_true(max_count >= min_count, "max count should be greater or equal to min count")
check_true(min_count <= self.capacity().result(), "min count should be smaller or equal to capacity")
check_true(max_count < MAX_BATCH_SIZE, "max count can't be greater than %d" % MAX_BATCH_SIZE)
return self._encode_invoke(ringbuffer_read_many_codec, response_handler=self._read_many_response_handler,
start_sequence=start_sequence, min_count=min_count,
max_count=max_count, filter=None)
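The argument checks performed before the invocation can be sketched stand-alone; `MAX_BATCH_SIZE`, the `check_true` helper, and the capacity value below are assumptions for illustration.

```python
MAX_BATCH_SIZE = 1000  # assumed value of the client's constant

def check_true(condition, message):
    # stand-in for the client's check_true helper
    if not condition:
        raise AssertionError(message)

def check_read_many_args(start_sequence, min_count, max_count, capacity):
    check_true(start_sequence >= 0, "sequence can't be smaller than 0")
    check_true(max_count >= min_count, "max count should be greater or equal to min count")
    check_true(min_count <= capacity, "min count should be smaller or equal to capacity")
    check_true(max_count < MAX_BATCH_SIZE, "max count can't be greater than %d" % MAX_BATCH_SIZE)
    return True

ok = check_read_many_args(0, 1, 10, capacity=100)

try:
    check_read_many_args(-1, 1, 10, capacity=100)  # negative start sequence
    rejected = False
except AssertionError:
    rejected = True
```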
|