language | func_code_string |
---|---|
java | @Deprecated
public void setPaddingChar(final char paddingChar) {
if (Character.isDigit(paddingChar)) {
throw new IllegalArgumentException("Padding character should not be a digit.");
}
getOrCreateComponentModel().paddingChar = paddingChar;
} |
java | public static String get(String url, Map<String, String> body) {
return get(url, JSONObject.toJSONString(body));
} |
java | public static TreeMap<String, String> getTemplates(CmsObject cms, String currWpPath) throws CmsException {
return getElements(cms, CmsWorkplace.VFS_DIR_TEMPLATES, currWpPath, true);
} |
python | def recognize(self,
audio,
model=None,
language_customization_id=None,
acoustic_customization_id=None,
base_model_version=None,
customization_weight=None,
inactivity_timeout=None,
keywords=None,
keywords_threshold=None,
max_alternatives=None,
word_alternatives_threshold=None,
word_confidence=None,
timestamps=None,
profanity_filter=None,
smart_formatting=None,
speaker_labels=None,
customization_id=None,
grammar_name=None,
redaction=None,
content_type=None,
**kwargs):
"""
Recognize audio.
Sends audio and returns transcription results for a recognition request. You can
pass a maximum of 100 MB and a minimum of 100 bytes of audio with a request. The
service automatically detects the endianness of the incoming audio and, for audio
that includes multiple channels, downmixes the audio to one-channel mono during
transcoding. The method returns only final results; to enable interim results, use
the WebSocket API.
**See also:** [Making a basic HTTP
request](https://cloud.ibm.com/docs/services/speech-to-text/http.html#HTTP-basic).
### Streaming mode
For requests to transcribe live audio as it becomes available, you must set the
`Transfer-Encoding` header to `chunked` to use streaming mode. In streaming mode,
the service closes the connection (status code 408) if it does not receive at
least 15 seconds of audio (including silence) in any 30-second period. The service
also closes the connection (status code 400) if it detects no speech for
`inactivity_timeout` seconds of streaming audio; use the `inactivity_timeout`
parameter to change the default of 30 seconds.
**See also:**
* [Audio
transmission](https://cloud.ibm.com/docs/services/speech-to-text/input.html#transmission)
*
[Timeouts](https://cloud.ibm.com/docs/services/speech-to-text/input.html#timeouts)
### Audio formats (content types)
The service accepts audio in the following formats (MIME types).
* For formats that are labeled **Required**, you must use the `Content-Type`
header with the request to specify the format of the audio.
* For all other formats, you can omit the `Content-Type` header or specify
`application/octet-stream` with the header to have the service automatically
detect the format of the audio. (With the `curl` command, you can specify either
`\"Content-Type:\"` or `\"Content-Type: application/octet-stream\"`.)
Where indicated, the format that you specify must include the sampling rate and
can optionally include the number of channels and the endianness of the audio.
* `audio/alaw` (**Required.** Specify the sampling rate (`rate`) of the audio.)
* `audio/basic` (**Required.** Use only with narrowband models.)
* `audio/flac`
* `audio/g729` (Use only with narrowband models.)
* `audio/l16` (**Required.** Specify the sampling rate (`rate`) and optionally the
number of channels (`channels`) and endianness (`endianness`) of the audio.)
* `audio/mp3`
* `audio/mpeg`
* `audio/mulaw` (**Required.** Specify the sampling rate (`rate`) of the audio.)
* `audio/ogg` (The service automatically detects the codec of the input audio.)
* `audio/ogg;codecs=opus`
* `audio/ogg;codecs=vorbis`
* `audio/wav` (Provide audio with a maximum of nine channels.)
* `audio/webm` (The service automatically detects the codec of the input audio.)
* `audio/webm;codecs=opus`
* `audio/webm;codecs=vorbis`
The sampling rate of the audio must match the sampling rate of the model for the
recognition request: for broadband models, at least 16 kHz; for narrowband models,
at least 8 kHz. If the sampling rate of the audio is higher than the minimum
required rate, the service down-samples the audio to the appropriate rate. If the
sampling rate of the audio is lower than the minimum required rate, the request
fails.
**See also:** [Audio
formats](https://cloud.ibm.com/docs/services/speech-to-text/audio-formats.html).
### Multipart speech recognition
**Note:** The Watson SDKs do not support multipart speech recognition.
The HTTP `POST` method of the service also supports multipart speech recognition.
With multipart requests, you pass all audio data as multipart form data. You
specify some parameters as request headers and query parameters, but you pass JSON
metadata as form data to control most aspects of the transcription.
The multipart approach is intended for use with browsers for which JavaScript is
disabled or when the parameters used with the request are greater than the 8 KB
limit imposed by most HTTP servers and proxies. You can encounter this limit, for
example, if you want to spot a very large number of keywords.
**See also:** [Making a multipart HTTP
request](https://cloud.ibm.com/docs/services/speech-to-text/http.html#HTTP-multi).
:param file audio: The audio to transcribe.
:param str model: The identifier of the model that is to be used for the
recognition request. See [Languages and
models](https://cloud.ibm.com/docs/services/speech-to-text/models.html).
:param str language_customization_id: The customization ID (GUID) of a custom
language model that is to be used with the recognition request. The base model of
the specified custom language model must match the model specified with the
`model` parameter. You must make the request with credentials for the instance of
the service that owns the custom model. By default, no custom language model is
used. See [Custom
models](https://cloud.ibm.com/docs/services/speech-to-text/input.html#custom-input).
**Note:** Use this parameter instead of the deprecated `customization_id`
parameter.
:param str acoustic_customization_id: The customization ID (GUID) of a custom
acoustic model that is to be used with the recognition request. The base model of
the specified custom acoustic model must match the model specified with the
`model` parameter. You must make the request with credentials for the instance of
the service that owns the custom model. By default, no custom acoustic model is
used. See [Custom
models](https://cloud.ibm.com/docs/services/speech-to-text/input.html#custom-input).
:param str base_model_version: The version of the specified base model that is to
be used with recognition request. Multiple versions of a base model can exist when
a model is updated for internal improvements. The parameter is intended primarily
for use with custom models that have been upgraded for a new base model. The
default value depends on whether the parameter is used with or without a custom
model. See [Base model
version](https://cloud.ibm.com/docs/services/speech-to-text/input.html#version).
:param float customization_weight: If you specify the customization ID (GUID) of a
custom language model with the recognition request, the customization weight tells
the service how much weight to give to words from the custom language model
compared to those from the base model for the current request.
Specify a value between 0.0 and 1.0. Unless a different customization weight was
specified for the custom model when it was trained, the default value is 0.3. A
customization weight that you specify overrides a weight that was specified when
the custom model was trained.
The default value yields the best performance in general. Assign a higher value if
your audio makes frequent use of OOV words from the custom model. Use caution when
setting the weight: a higher value can improve the accuracy of phrases from the
custom model's domain, but it can negatively affect performance on non-domain
phrases.
See [Custom
models](https://cloud.ibm.com/docs/services/speech-to-text/input.html#custom-input).
:param int inactivity_timeout: The time in seconds after which, if only silence
(no speech) is detected in streaming audio, the connection is closed with a 400
error. The parameter is useful for stopping audio submission from a live
microphone when a user simply walks away. Use `-1` for infinity. See [Inactivity
timeout](https://cloud.ibm.com/docs/services/speech-to-text/input.html#timeouts-inactivity).
:param list[str] keywords: An array of keyword strings to spot in the audio. Each
keyword string can include one or more string tokens. Keywords are spotted only in
the final results, not in interim hypotheses. If you specify any keywords, you
must also specify a keywords threshold. You can spot a maximum of 1000 keywords.
Omit the parameter or specify an empty array if you do not need to spot keywords.
See [Keyword
spotting](https://cloud.ibm.com/docs/services/speech-to-text/output.html#keyword_spotting).
:param float keywords_threshold: A confidence value that is the lower bound for
spotting a keyword. A word is considered to match a keyword if its confidence is
greater than or equal to the threshold. Specify a probability between 0.0 and 1.0.
If you specify a threshold, you must also specify one or more keywords. The
service performs no keyword spotting if you omit either parameter. See [Keyword
spotting](https://cloud.ibm.com/docs/services/speech-to-text/output.html#keyword_spotting).
:param int max_alternatives: The maximum number of alternative transcripts that
the service is to return. By default, the service returns a single transcript. If
you specify a value of `0`, the service uses the default value, `1`. See [Maximum
alternatives](https://cloud.ibm.com/docs/services/speech-to-text/output.html#max_alternatives).
:param float word_alternatives_threshold: A confidence value that is the lower
bound for identifying a hypothesis as a possible word alternative (also known as
\"Confusion Networks\"). An alternative word is considered if its confidence is
greater than or equal to the threshold. Specify a probability between 0.0 and 1.0.
By default, the service computes no alternative words. See [Word
alternatives](https://cloud.ibm.com/docs/services/speech-to-text/output.html#word_alternatives).
:param bool word_confidence: If `true`, the service returns a confidence measure
in the range of 0.0 to 1.0 for each word. By default, the service returns no word
confidence scores. See [Word
confidence](https://cloud.ibm.com/docs/services/speech-to-text/output.html#word_confidence).
:param bool timestamps: If `true`, the service returns time alignment for each
word. By default, no timestamps are returned. See [Word
timestamps](https://cloud.ibm.com/docs/services/speech-to-text/output.html#word_timestamps).
:param bool profanity_filter: If `true`, the service filters profanity from all
output except for keyword results by replacing inappropriate words with a series
of asterisks. Set the parameter to `false` to return results with no censoring.
Applies to US English transcription only. See [Profanity
filtering](https://cloud.ibm.com/docs/services/speech-to-text/output.html#profanity_filter).
:param bool smart_formatting: If `true`, the service converts dates, times, series
of digits and numbers, phone numbers, currency values, and internet addresses into
more readable, conventional representations in the final transcript of a
recognition request. For US English, the service also converts certain keyword
strings to punctuation symbols. By default, the service performs no smart
formatting.
**Note:** Applies to US English, Japanese, and Spanish transcription only.
See [Smart
formatting](https://cloud.ibm.com/docs/services/speech-to-text/output.html#smart_formatting).
:param bool speaker_labels: If `true`, the response includes labels that identify
which words were spoken by which participants in a multi-person exchange. By
default, the service returns no speaker labels. Setting `speaker_labels` to `true`
forces the `timestamps` parameter to be `true`, regardless of whether you specify
`false` for the parameter.
**Note:** Applies to US English, Japanese, and Spanish transcription only. To
determine whether a language model supports speaker labels, you can also use the
**Get a model** method and check that the attribute `speaker_labels` is set to
`true`.
See [Speaker
labels](https://cloud.ibm.com/docs/services/speech-to-text/output.html#speaker_labels).
:param str customization_id: **Deprecated.** Use the `language_customization_id`
parameter to specify the customization ID (GUID) of a custom language model that
is to be used with the recognition request. Do not specify both parameters with a
request.
:param str grammar_name: The name of a grammar that is to be used with the
recognition request. If you specify a grammar, you must also use the
`language_customization_id` parameter to specify the name of the custom language
model for which the grammar is defined. The service recognizes only strings that
are recognized by the specified grammar; it does not recognize other custom words
from the model's words resource. See
[Grammars](https://cloud.ibm.com/docs/services/speech-to-text/input.html#grammars-input).
:param bool redaction: If `true`, the service redacts, or masks, numeric data from
final transcripts. The feature redacts any number that has three or more
consecutive digits by replacing each digit with an `X` character. It is intended
to redact sensitive numeric data, such as credit card numbers. By default, the
service performs no redaction.
When you enable redaction, the service automatically enables smart formatting,
regardless of whether you explicitly disable that feature. To ensure maximum
security, the service also disables keyword spotting (ignores the `keywords` and
`keywords_threshold` parameters) and returns only a single final transcript
(forces the `max_alternatives` parameter to be `1`).
**Note:** Applies to US English, Japanese, and Korean transcription only.
See [Numeric
redaction](https://cloud.ibm.com/docs/services/speech-to-text/output.html#redaction).
:param str content_type: The format (MIME type) of the audio. For more information
about specifying an audio format, see **Audio formats (content types)** in the
method description.
:param dict headers: A `dict` containing the request headers
:return: A `DetailedResponse` containing the result, headers and HTTP status code.
:rtype: DetailedResponse
"""
if audio is None:
raise ValueError('audio must be provided')
headers = {'Content-Type': content_type}
if 'headers' in kwargs:
headers.update(kwargs.get('headers'))
sdk_headers = get_sdk_headers('speech_to_text', 'V1', 'recognize')
headers.update(sdk_headers)
params = {
'model': model,
'language_customization_id': language_customization_id,
'acoustic_customization_id': acoustic_customization_id,
'base_model_version': base_model_version,
'customization_weight': customization_weight,
'inactivity_timeout': inactivity_timeout,
'keywords': self._convert_list(keywords),
'keywords_threshold': keywords_threshold,
'max_alternatives': max_alternatives,
'word_alternatives_threshold': word_alternatives_threshold,
'word_confidence': word_confidence,
'timestamps': timestamps,
'profanity_filter': profanity_filter,
'smart_formatting': smart_formatting,
'speaker_labels': speaker_labels,
'customization_id': customization_id,
'grammar_name': grammar_name,
'redaction': redaction
}
data = audio
url = '/v1/recognize'
response = self.request(
method='POST',
url=url,
headers=headers,
params=params,
data=data,
accept_json=True)
return response |
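A minimal usage sketch for the `recognize` method above, assuming the surrounding class is an IBM Speech to Text client instance (here called `speech_to_text`); the file name, model choice, and result handling are illustrative, not taken from the source:

# Hypothetical call; 'speech_to_text' is assumed to be an instance of the class above.
with open('my_audio.flac', 'rb') as audio_file:
    response = speech_to_text.recognize(
        audio=audio_file,
        content_type='audio/flac',
        model='en-US_BroadbandModel',
        timestamps=True,
        max_alternatives=3)
result = response.get_result()  # assumes DetailedResponse exposes get_result()
for alternative in result['results'][0]['alternatives']:
    print(alternative['transcript'])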
java | @SideOnly(Side.CLIENT)
public static void addHitEffects(World world, RayTraceResult target, ParticleManager particleManager, IBlockState... states)
{
BlockPos pos = target.getBlockPos();
if (ArrayUtils.isEmpty(states))
states = new IBlockState[] { world.getBlockState(pos) };
IBlockState baseState = world.getBlockState(pos);
if (baseState.getRenderType() != EnumBlockRenderType.INVISIBLE)
return;
double fxX = pos.getX() + world.rand.nextDouble();
double fxY = pos.getY() + world.rand.nextDouble();
double fxZ = pos.getZ() + world.rand.nextDouble();
AxisAlignedBB aabb = baseState.getBoundingBox(world, pos);
switch (target.sideHit)
{
case DOWN:
fxY = pos.getY() + aabb.minY - 0.1F;
break;
case UP:
fxY = pos.getY() + aabb.maxY + 0.1F;
break;
case NORTH:
fxZ = pos.getZ() + aabb.minZ - 0.1F;
break;
case SOUTH:
fxZ = pos.getZ() + aabb.maxZ + 0.1F;
break;
case EAST:
fxX = pos.getX() + aabb.maxX + 0.1F;
break;
case WEST:
fxX = pos.getX() + aabb.minX - 0.1F;
break;
default:
break;
}
int id = Block.getStateId(states[world.rand.nextInt(states.length)]);
ParticleDigging.Factory factory = new ParticleDigging.Factory();
ParticleDigging fx = (ParticleDigging) factory.createParticle(0, world, fxX, fxY, fxZ, 0, 0, 0, id);
fx.multiplyVelocity(0.2F).multipleParticleScaleBy(0.6F);
particleManager.addEffect(fx);
} |
java | public static ContaineredTaskManagerParameters create(
Configuration config,
long containerMemoryMB,
int numSlots) {
// (1) try to compute how much memory used by container
final long cutoffMB = calculateCutoffMB(config, containerMemoryMB);
// (2) split the remaining Java memory between heap and off-heap
final long heapSizeMB = TaskManagerServices.calculateHeapSizeMB(containerMemoryMB - cutoffMB, config);
// use the cut-off memory for off-heap (that was its intention)
final long offHeapSizeMB = containerMemoryMB - heapSizeMB;
// (3) obtain the additional environment variables from the configuration
final HashMap<String, String> envVars = new HashMap<>();
final String prefix = ResourceManagerOptions.CONTAINERIZED_TASK_MANAGER_ENV_PREFIX;
for (String key : config.keySet()) {
if (key.startsWith(prefix) && key.length() > prefix.length()) {
// remove prefix
String envVarKey = key.substring(prefix.length());
envVars.put(envVarKey, config.getString(key, null));
}
}
// done
return new ContaineredTaskManagerParameters(
containerMemoryMB, heapSizeMB, offHeapSizeMB, numSlots, envVars);
} |
java | @Exported(visibility=2)
public List<Cause> getCauses() {
List<Cause> r = new ArrayList<>();
for (Map.Entry<Cause,Integer> entry : causeBag.entrySet()) {
r.addAll(Collections.nCopies(entry.getValue(), entry.getKey()));
}
return Collections.unmodifiableList(r);
} |
python | def get(self, path, default_value):
"""
Get value for config item into a string value; leading slash is optional
and ignored.
"""
return lib.zconfig_get(self._as_parameter_, path, default_value) |
python | def _parse_for_errors(self):
""" Look for an error tag and raise APIError for fatal errors or APIWarning for nonfatal ones. """
error = self._response.find('{www.clusterpoint.com}error')
if error is not None:
if error.find('level').text.lower() in ('rejected', 'failed', 'error', 'fatal'):
raise APIError(error)
else:
warnings.warn(APIWarning(error)) |
java | public void exportResourcesAndUserdata(String exportFile, String pathList) throws Exception {
exportResourcesAndUserdata(exportFile, pathList, false);
} |
python | def scale_joint_sfs_folded(s, n1, n2):
"""Scale a folded joint site frequency spectrum.
Parameters
----------
s : array_like, int, shape (m_chromosomes//2, n_chromosomes//2)
Folded joint site frequency spectrum.
n1, n2 : int
The total number of chromosomes called in each population.
Returns
-------
joint_sfs_folded_scaled : ndarray, int, shape (m_chromosomes//2, n_chromosomes//2)
Scaled folded joint site frequency spectrum.
""" # noqa
out = np.empty_like(s)
for i in range(s.shape[0]):
for j in range(s.shape[1]):
out[i, j] = s[i, j] * i * j * (n1 - i) * (n2 - j)
return out |
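The nested loops above can be collapsed with NumPy broadcasting; the following is a sketch of an equivalent vectorized form (my own rewrite, not part of the source library):

import numpy as np

def scale_joint_sfs_folded_vectorized(s, n1, n2):
    # Broadcast row indices i and column indices j against the folded spectrum.
    s = np.asarray(s)
    i = np.arange(s.shape[0])[:, np.newaxis]
    j = np.arange(s.shape[1])[np.newaxis, :]
    return s * i * j * (n1 - i) * (n2 - j)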
java | protected <T> T processTransactionError(Transaction t, TxCallable<T> callable, TxCallable<T> process) throws Exception {
try {
return callable.call(t);
} catch (Exception e) {
return processCheckRowCountError(t, e, e, process);
}
} |
python | def createPdf(htmlreport, outfile=None, css=None, images={}):
"""create a PDF from some HTML.
htmlreport: rendered html
outfile: pdf filename; if supplied, caller is responsible for creating
and removing it.
css: remote URL of css file to download
images: A dictionary containing possible URLs (keys) and local filenames
(values) with which they may be replaced during rendering.
# WeasyPrint will attempt to retrieve images directly from the URL
# referenced in the HTML report, which may refer back to a single-threaded
# (and currently occupied) zeoclient, hanging it. All image source
# URLs referenced in htmlreport should be local files.
"""
# A list of files that should be removed after PDF is written
cleanup = []
css_def = ''
if css:
if css.startswith("http://") or css.startswith("https://"):
# Download css file in temp dir
u = urllib2.urlopen(css)
_cssfile = tempfile.mktemp(suffix='.css')
localFile = open(_cssfile, 'w')
localFile.write(u.read())
localFile.close()
cleanup.append(_cssfile)
else:
_cssfile = css
cssfile = open(_cssfile, 'r')
css_def = cssfile.read()
htmlreport = to_utf8(htmlreport)
for (key, val) in images.items():
htmlreport = htmlreport.replace(key, val)
# render
htmlreport = to_utf8(htmlreport)
renderer = HTML(string=htmlreport, url_fetcher=senaite_url_fetcher, encoding='utf-8')
pdf_fn = outfile if outfile else tempfile.mktemp(suffix=".pdf")
if css:
renderer.write_pdf(pdf_fn, stylesheets=[CSS(string=css_def)])
else:
renderer.write_pdf(pdf_fn)
# return file data
pdf_data = open(pdf_fn, "rb").read()
if outfile is None:
os.remove(pdf_fn)
for fn in cleanup:
os.remove(fn)
return pdf_data |
java | public com.google.api.ads.admanager.axis.v201902.LineItemDiscountType getDiscountType() {
return discountType;
} |
python | def char_size(self, size):
'''Changes font size
Args:
size: change font size. Options are '24', '32', '48' for bitmap fonts;
33, 38, 42, 46, 50, 58, 67, 75, 83, 92, 100, 117, 133, 150, 167, 200, 233,
11, 44, 77, 111, 144 for outline fonts.
Returns:
None
Raises:
RuntimeError: Invalid font size.
Warning: Your font is currently set to outline and you have selected a bitmap only font size
Warning: Your font is currently set to bitmap and you have selected an outline only font size
'''
sizes = {'24':0,
'32':0,
'48':0,
'33':0,
'38':0,
'42':0,
'46':0,
'50':0,
'58':0,
'67':0,
'75':0,
'83':0,
'92':0,
'100':0,
'117':0,
'133':0,
'150':0,
'167':0,
'200':0,
'233':0,
'11':1,
'44':1,
'77':1,
'111':1,
'144':1
}
if size in sizes:
if size in ['24','32','48'] and self.fonttype != self.font_types['bitmap']:
raise Warning('Your font is currently set to outline and you have selected a bitmap only font size')
if size not in ['24', '32', '48'] and self.fonttype != self.font_types['outline']:
raise Warning('Your font is currently set to bitmap and you have selected an outline only font size')
self.send(chr(27)+'X'+chr(0)+chr(int(size))+chr(sizes[size]))
else:
raise RuntimeError('Invalid size for function charSize, choices are auto 4pt 6pt 9pt 12pt 18pt and 24pt') |
python | def mmGetCellActivityPlot(self, title=None, showReset=False,
resetShading=0.25):
""" Returns plot of the cell activity.
@param title an optional title for the figure
@param showReset if true, the first set of cell activities after a reset
will have a gray background
@param resetShading If showReset is true, this float specifies the
intensity of the reset background with 0.0 being white and 1.0 being black
@return (Plot) plot
"""
cellTrace = self._mmTraces["activeCells"].data
cellCount = self.getNumColumns()
activityType = "Cell Activity"
return self.mmGetCellTracePlot(cellTrace, cellCount, activityType,
title=title, showReset=showReset,
resetShading=resetShading) |
java | protected Label newButtonLabel(final String id, final String resourceKey,
final String defaultValue)
{
final IModel<String> labelModel = ResourceModelFactory.newResourceModel(resourceKey, this,
defaultValue);
final Label label = new Label(id, labelModel);
label.setOutputMarkupId(true);
return label;
} |
python | def build_synchronize_decorator():
"""Returns a decorator which prevents concurrent calls to functions.
Usage:
synchronized = build_synchronize_decorator()
@synchronized
def read_value():
...
@synchronized
def write_value(x):
...
Returns:
make_threadsafe (fct): The decorator which locks all functions to which it
is applied under the same lock
"""
lock = threading.Lock()
def lock_decorator(fn):
@functools.wraps(fn)
def lock_decorated(*args, **kwargs):
with lock:
return fn(*args, **kwargs)
return lock_decorated
return lock_decorator |
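A small, self-contained demonstration of the factory above (illustrative only): two threads increment a shared counter through a function wrapped by the returned decorator, and the shared lock serializes the updates.

import threading

synchronized = build_synchronize_decorator()  # assumes the factory above is in scope
counter = {'value': 0}

@synchronized
def increment():
    counter['value'] += 1

threads = [threading.Thread(target=lambda: [increment() for _ in range(1000)])
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter['value'])  # 2000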
java | protected void offerAt(final int pos, O e) {
if(pos == NO_VALUE) {
// resize when needed
if(size + 1 > queue.length) {
resize(size + 1);
}
index.put(e, size);
size++;
heapifyUp(size - 1, e);
heapModified();
return;
}
assert (pos >= 0) : "Unexpected negative position.";
assert (queue[pos].equals(e));
// Did the value improve?
if(comparator.compare(e, queue[pos]) >= 0) {
return;
}
heapifyUp(pos, e);
heapModified();
return;
} |
java | public static StatsInstanceImpl createInstance(String name, String configXmlPath, ObjectName userProvidedMBeanObjectName, boolean bCreateDefaultMBean,
StatisticActions actionLsnr)
throws StatsFactoryException {
PmiModuleConfig cfg = PerfModules.getConfigFromXMLFile(configXmlPath, actionLsnr.getCurrentBundle());
if (cfg == null) {
Tr.warning(tc, "PMI0102W", configXmlPath);
throw new StatsFactoryException(nls.getFormattedMessage("PMI0102W", new Object[] { configXmlPath }, "Unable to read custom PMI module configuration: {0}") + ". "
+ PerfModules.getParseExceptionMsg());
}
StatsInstanceImpl instance = new StatsInstanceImpl(name, cfg, actionLsnr);
//instance._scListener = scl;
instance._register(userProvidedMBeanObjectName, bCreateDefaultMBean);
return instance;
} |
python | def incoming_phone_numbers(self):
"""
Access the incoming_phone_numbers
:returns: twilio.rest.api.v2010.account.incoming_phone_number.IncomingPhoneNumberList
:rtype: twilio.rest.api.v2010.account.incoming_phone_number.IncomingPhoneNumberList
"""
if self._incoming_phone_numbers is None:
self._incoming_phone_numbers = IncomingPhoneNumberList(
self._version,
account_sid=self._solution['sid'],
)
return self._incoming_phone_numbers |
java | protected MemberId next() {
// If a connection was already established then use that connection.
if (currentNode != null) {
return currentNode;
}
if (!selector.hasNext()) {
if (selector.leader() != null) {
selector.reset(null, selector.members());
this.currentNode = selector.next();
this.selectionId++;
return currentNode;
} else {
log.debug("Failed to connect to the cluster");
selector.reset();
return null;
}
} else {
this.currentNode = selector.next();
this.selectionId++;
return currentNode;
}
} |
python | def _prepare_memoization_key(args, kwargs):
"""
Make a tuple of arguments which can be used as a key
for a memoized function's lookup_table. If some object can't be hashed
then use its __repr__ instead.
"""
key_list = []
for arg in args:
try:
hash(arg)
key_list.append(arg)
except:
key_list.append(repr(arg))
for (k, v) in kwargs.items():
try:
hash(k)
hash(v)
key_list.append((k, v))
except:
key_list.append((repr(k), repr(v)))
return tuple(key_list) |
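A quick illustration of the key builder above with a mix of hashable and unhashable arguments; the unhashable list falls back to its repr:

key = _prepare_memoization_key((1, 'a', [2, 3]), {'flag': True})
print(key)  # (1, 'a', '[2, 3]', ('flag', True))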
python | def download(self, data, block_num=-1):
"""
Downloads a DB data into the AG.
A whole block (including header and footer) must be available into the
user buffer.
:param block_num: New Block number (or -1)
:param data: the user buffer
"""
type_ = c_byte
size = len(data)
cdata = (type_ * len(data)).from_buffer_copy(data)
result = self.library.Cli_Download(self.pointer, block_num,
byref(cdata), size)
return result |
java | public static Optional<Class> resolveSuperGenericTypeArgument(Class type) {
try {
Type genericSuperclass = type.getGenericSuperclass();
if (genericSuperclass instanceof ParameterizedType) {
return resolveSingleTypeArgument(genericSuperclass);
}
return Optional.empty();
} catch (NoClassDefFoundError e) {
return Optional.empty();
}
} |
java | @Before
public void setUp() {
defineApplicationLocales();
clearKeys();
List<Key> keys = new ArrayList<>();
keys.add(createKey(KEY_DEFAULT, false, false, false));
Key outdatedKey = createKey(KEY_OUTDATED, false, false, false);
outdatedKey.setOutdated();
keys.add(outdatedKey);
keys.add(createKey(KEY_TLN_APPROX, true, false, false));
keys.add(createKey(KEY_TLN_OUTDATED, false, true, false));
keys.add(createKey(KEY_TLN_MISSING, false, false, true));
Key key = keyFactory.createKey(KEY_WITH_MISSING_TRANSLATION);
key.addTranslation(FR, TRANSLATION_FR);
keys.add(key);
for (Key keyToPersist : keys) {
keyRepository.add(keyToPersist);
}
} |
java | public EntryStream<K, V> filterValues(Predicate<? super V> valuePredicate) {
return filter(e -> valuePredicate.test(e.getValue()));
} |
python | def label_weight(base, label_name=None, children=[], parents=[],
dependencies=[]):
"""
Function that returns a Formatoption class for modifying the fontweight
This function returns a :class:`~psyplot.plotter.Formatoption` instance
that modifies the weight of the given `base` formatoption
Parameters
----------
base: Formatoption
The base formatoption instance that is used in the
:class:`psyplot.Plotter` subclass to create the label. The instance
must have a ``texts`` attribute which stores all the
:class:`matplotlib.text.Text` instances.
label_name: str
The name of the label to use in the documentation. If None,
it will be ``key``, where ``key`` is the
:attr:`psyplot.plotter.Formatoption.key` attribute of `base`
children: list of str
The children of the resulting formatoption class (besides the `base`
formatoption which is included anyway)
parents: list of str
The parents of the resulting formatoption class (besides the
properties formatoption from `base` (see :func:`label_props`))
dependencies: list of str
The dependencies of the formatoption
Returns
-------
Formatoption
The formatoption instance that modifies the fontweight of `base`
See Also
--------
label_size, label_props, Figtitle, Title"""
label_name = label_name or base.key
cl_children = children
cl_parents = parents
cl_dependencies = dependencies
class LabelWeight(Formatoption):
__doc__ = """
Set the fontweight of the %s
Possible types
--------------
%%(fontweights)s
See Also
--------
%s, %s, %s""" % (label_name, base.key, base.key + 'size',
base.key + 'props')
children = [base.key] + \
cl_children
parent = [base.key + 'props'] + cl_parents
dependencies = cl_dependencies
group = 'labels'
name = 'Font weight of ' + (base.name or base.key)
def update(self, value):
for text in getattr(self, base.key).texts:
text.set_weight(value)
def get_fmt_widget(self, parent, project):
"""Get a widget with the different font weights"""
from psy_simple.widgets.texts import FontWeightWidget
return FontWeightWidget(
parent, self, next(iter(getattr(self, base.key).texts), None),
base)
return LabelWeight(base.key + 'weight') |
java | private void shrink() {
/* If the size of the topmost array is at its minimum, don't do
* anything. This doesn't change the asymptotic memory usage because
* we only do this for small arrays.
*/
if (mArrays.length == kMinArraySize) return;
/* Otherwise, we currently have 2^(2n) / 8 = 2^(2n - 3) elements.
* We're about to shrink into a grid of 2^(2n - 2) elements, and so
* we'll fill in half of the elements.
*/
T[][] newArrays = (T[][]) new Object[mArrays.length / 2][];
/* Copy everything over. We'll need half as many arrays as before. */
for (int i = 0; i < newArrays.length / 2; ++i) {
/* Create the arrays. */
newArrays[i] = (T[]) new Object[newArrays.length];
/* Move everything into it. If this is an odd array, it comes
* from the upper half of the old array; otherwise it comes from
* the lower half.
*/
System.arraycopy(mArrays[i / 2], (i % 2 == 0) ? 0 : newArrays.length,
newArrays[i], 0, newArrays.length);
/* Play nice with the GC. If this is an odd-numbered array, we
* just copied over everything we needed and can clear out the
* old array.
*/
if (i % 2 == 1)
mArrays[i / 2] = null;
}
/* Copy the arrays over. */
mArrays = newArrays;
/* Drop the lg2 of the size. */
--mLgSize;
} |
python | def get_field(field, slog, fl=False):
"""parse sample log for field
set fl=True to return a float
otherwise, returns int
"""
field += r'\:\s+([\d\.]+)'
match = re.search(field, slog)
if match:
if fl:
return float(match.group(1))
return int(match.group(1))
return 0 |
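For example, applied to a hypothetical sample-log fragment (the field names below are made up for illustration), the helper returns the numeric value that follows the field name:

slog = "Reads: 1523\nMean coverage: 34.7\n"
print(get_field("Reads", slog))                   # 1523 (int)
print(get_field("Mean coverage", slog, fl=True))  # 34.7 (float)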
python | def merge_sketches(outdir, sketch_paths):
"""Merge new Mash sketches with current Mash sketches
Args:
outdir (str): output directory to write merged Mash sketch file
sketch_paths (list of str): Mash sketch file paths for input fasta files
Returns:
str: output path for Mash sketch file with new and old sketches
"""
merge_sketch_path = os.path.join(outdir, 'sistr.msh')
args = ['mash', 'paste', merge_sketch_path]
for x in sketch_paths:
args.append(x)
args.append(MASH_SKETCH_FILE)
logging.info('Running Mash paste with command: %s', ' '.join(args))
p = Popen(args)
p.wait()
assert os.path.exists(merge_sketch_path), 'Merged sketch was not created at {}'.format(merge_sketch_path)
return merge_sketch_path |
java | @Override
public <DATA> JsonObjectPersister<DATA> createInFileObjectPersister(Class<DATA> clazz, File cacheFolder)
throws CacheCreationException {
return new JsonObjectPersister<DATA>(getApplication(), jsonFactory, clazz, cacheFolder);
} |
java | public VpnSiteInner beginCreateOrUpdate(String resourceGroupName, String vpnSiteName, VpnSiteInner vpnSiteParameters) {
return beginCreateOrUpdateWithServiceResponseAsync(resourceGroupName, vpnSiteName, vpnSiteParameters).toBlocking().single().body();
} |
python | def unfinished(finished_status,
update_interval,
status_key,
edit_at_key):
"""
Create dict query for pymongo that getting all unfinished task.
:param finished_status: int, status code that less than this
will be considered as unfinished.
:param update_interval: int, the record will be updated every x seconds.
:param status_key: status code field key, support dot notation.
:param edit_at_key: edit_at time field key, support dot notation.
:return: dict, a pymongo filter.
**Notes**
A task is unfinished if its status code is below the given value, or if the
time since its last update exceeds the given threshold.
"""
return {
"$or": [
{status_key: {"$lt": finished_status}},
{edit_at_key: {"$lt": x_seconds_before_now(update_interval)}},
]
} |
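A sketch of how the returned filter might be used with a pymongo collection; the collection, the threshold values, and `process` are illustrative, and `x_seconds_before_now` is assumed to come from the same module as `unfinished`:

query = unfinished(
    finished_status=100,   # status codes below 100 count as unfinished
    update_interval=3600,  # or records not touched within the last hour
    status_key="status",
    edit_at_key="edit_at",
)
for task in tasks_collection.find(query):  # tasks_collection: a pymongo Collection
    process(task)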
java | private Attribute parseAttribute(StreamTokenizer tokenizer) throws IOException, ParseException {
Attribute attribute = null;
// Get attribute name.
getNextToken(tokenizer);
String attributeName = tokenizer.sval;
getNextToken(tokenizer);
// Check if attribute is nominal.
if (tokenizer.ttype == StreamTokenizer.TT_WORD) {
// Attribute is real, integer, or string.
if (tokenizer.sval.equalsIgnoreCase(ARFF_ATTRIBUTE_REAL) ||
tokenizer.sval.equalsIgnoreCase(ARFF_ATTRIBUTE_INTEGER) ||
tokenizer.sval.equalsIgnoreCase(ARFF_ATTRIBUTE_NUMERIC)) {
attribute = new NumericAttribute(attributeName);
readTillEOL(tokenizer);
} else if (tokenizer.sval.equalsIgnoreCase(ARFF_ATTRIBUTE_STRING)) {
attribute = new StringAttribute(attributeName);
readTillEOL(tokenizer);
} else if (tokenizer.sval.equalsIgnoreCase(ARFF_ATTRIBUTE_DATE)) {
String format = null;
if (tokenizer.nextToken() != StreamTokenizer.TT_EOL) {
if ((tokenizer.ttype != StreamTokenizer.TT_WORD) && (tokenizer.ttype != '\'') && (tokenizer.ttype != '\"')) {
throw new ParseException("not a valid date format", tokenizer.lineno());
}
format = tokenizer.sval;
readTillEOL(tokenizer);
} else {
tokenizer.pushBack();
}
attribute = new DateAttribute(attributeName, null, format);
readTillEOL(tokenizer);
} else if (tokenizer.sval.equalsIgnoreCase(ARFF_ATTRIBUTE_RELATIONAL)) {
readTillEOL(tokenizer);
} else if (tokenizer.sval.equalsIgnoreCase(ARFF_END_SUBRELATION)) {
getNextToken(tokenizer);
} else {
throw new ParseException("Invalid attribute type or invalid enumeration", tokenizer.lineno());
}
} else {
// Attribute is nominal.
List<String> attributeValues = new ArrayList<>();
tokenizer.pushBack();
// Get values for nominal attribute.
if (tokenizer.nextToken() != '{') {
throw new ParseException("{ expected at beginning of enumeration", tokenizer.lineno());
}
while (tokenizer.nextToken() != '}') {
if (tokenizer.ttype == StreamTokenizer.TT_EOL) {
throw new ParseException("} expected at end of enumeration", tokenizer.lineno());
} else {
attributeValues.add(tokenizer.sval.trim());
}
}
String[] values = new String[attributeValues.size()];
for (int i = 0; i < values.length; i++) {
values[i] = attributeValues.get(i);
}
attribute = new NominalAttribute(attributeName, values);
}
getLastToken(tokenizer, false);
getFirstToken(tokenizer);
if (tokenizer.ttype == StreamTokenizer.TT_EOF) {
throw new ParseException(PREMATURE_END_OF_FILE, tokenizer.lineno());
}
return attribute;
} |
python | def deprecated(func_or_text, environ=os.environ):
'''
Decorator used to mark functions as deprecated. It will result in a
warning being emitted when the function is called.
Usage:
>>> @deprecated
... def fnc():
... pass
Usage (custom message):
>>> @deprecated('This is deprecated')
... def fnc():
... pass
:param func_or_text: message or callable to decorate
:type func_or_text: callable
:param environ: optional environment mapping
:type environ: collections.abc.Mapping
:returns: nested decorator or new decorated function (depending on params)
:rtype: callable
'''
def inner(func):
message = (
'Deprecated function {}.'.format(func.__name__)
if callable(func_or_text) else
func_or_text
)
@functools.wraps(func)
def new_func(*args, **kwargs):
with warnings.catch_warnings():
if getdebug(environ):
warnings.simplefilter('always', DeprecationWarning)
warnings.warn(message, category=DeprecationWarning,
stacklevel=3)
return func(*args, **kwargs)
return new_func
return inner(func_or_text) if callable(func_or_text) else inner |
python | def authenticate(self, name, password, mechanism="DEFAULT"):
"""
Send an authentication command for this database.
mostly stolen from pymongo
"""
if not isinstance(name, (bytes, unicode)):
raise TypeError("TxMongo: name must be an instance of basestring.")
if not isinstance(password, (bytes, unicode)):
raise TypeError("TxMongo: password must be an instance of basestring.")
"""
Authenticating
"""
return self.connection.authenticate(self, name, password, mechanism) |
python | def form_group_classes(self):
"""
Full list of classes for the class attribute of the form group. Returned as a string
with spaces separating each class, ready for insertion into the class attribute.
This will generally look like the following:
'form-group has-error custom-class'
"""
classes = ['form-group']
if self.style == styles.BOOTSTRAP_4 and self.form_type == formtype.HORIZONTAL:
classes.append('row')
if self.error and self.style == styles.BOOTSTRAP_3:
classes.append('has-error')
if self.form_group_css_class:
classes.append(self.form_group_css_class)
return ' '.join(classes) |
python | def draw(
self,
tree_style=None,
height=None,
width=None,
axes=None,
orient=None,
tip_labels=None,
tip_labels_colors=None,
tip_labels_style=None,
tip_labels_align=None,
node_labels=None,
node_labels_style=None,
node_sizes=None,
node_colors=None,
node_style=None,
node_hover=None,
node_markers=None,
edge_colors=None,
edge_widths=None,
edge_type=None,
edge_style=None,
edge_align_style=None,
use_edge_lengths=None,
scalebar=None,
padding=None,
xbaseline=0,
ybaseline=0,
**kwargs):
"""
Plot a Toytree tree, returns a tuple of Toyplot (Canvas, Axes) objects.
Parameters:
-----------
tree_style: str
One of several preset styles for tree plotting. The default is 'n'
(normal). Other options include 'c' (coalescent), 'd' (dark), and
'm' (multitree). You can also create your own TreeStyle objects.
The tree_style sets a default set of styling on top of which other
arguments passed to draw() will override when plotting.
height: int (optional; default=None)
If None the plot height is autosized. If 'axes' arg is used then
tree is drawn on an existing Canvas, Axes and this arg is ignored.
width: int (optional; default=None)
Similar to height (above).
axes: Toyplot.Cartesian (default=None)
A toyplot cartesian axes object. If provided tree is drawn on it.
If not provided then a new Canvas and Cartesian axes are created
and returned with the tree plot added to it.
use_edge_lengths: bool (default=False)
Use edge lengths from .treenode (.get_edge_lengths) else
edges are set to length >=1 to make tree ultrametric.
tip_labels: [True, False, list]
If True then the tip labels from .treenode are added to the plot.
If False no tip labels are added. If a list of tip labels
is provided it must be the same length as .get_tip_labels().
tip_labels_colors:
...
tip_labels_style:
...
tip_labels_align:
...
node_labels: [True, False, list]
If True then nodes are shown, if False then nodes are suppressed
If a list of node labels is provided it must be the same length
and order as nodes in .get_node_values(). Node labels can be
generated in the proper order using the .get_node_labels()
function from a Toytree tree to draw info from the tree features.
For example: node_labels=tree.get_node_labels("support").
node_sizes: [int, list, None]
If None then nodes are not shown, otherwise, if node_labels
then node_size can be modified. If a list of node sizes is
provided it must be the same length and order as nodes in
.get_node_dict().
node_colors: [list]
Use this argument only if you wish to set different colors for
different nodes, in which case you must enter a list of colors
as string names or HEX values the length and order of nodes in
.get_node_dict(). If all nodes will be the same color then use
instead the node_style dictionary:
e.g., node_style={"fill": 'red'}
node_style: [dict]
...
node_hover: [True, False, list, dict]
Default is True in which case node hover will show the node
values. If False then no hover is shown. If a list or dict
is provided (which should be in node order) then the values
will be shown in order. If a dict then labels can be provided
as well.
"""
# allow ts as a shorthand for tree_style
if kwargs.get("ts"):
tree_style = kwargs.get("ts")
# pass a copy of this tree so that any mods to .style are not saved
nself = deepcopy(self)
if tree_style:
nself.style.update(TreeStyle(tree_style[0]))
# update kwargs to merge it with user-entered arguments:
userargs = {
"height": height,
"width": width,
"orient": orient,
"tip_labels": tip_labels,
"tip_labels_colors": tip_labels_colors,
"tip_labels_align": tip_labels_align,
"tip_labels_style": tip_labels_style,
"node_labels": node_labels,
"node_labels_style": node_labels_style,
"node_sizes": node_sizes,
"node_colors": node_colors,
"node_hover": node_hover,
"node_style": node_style,
"node_markers": node_markers,
"edge_type": edge_type,
"edge_colors": edge_colors,
"edge_widths": edge_widths,
"edge_style": edge_style,
"edge_align_style": edge_align_style,
"use_edge_lengths": use_edge_lengths,
"scalebar": scalebar,
"padding": padding,
"xbaseline": xbaseline,
"ybaseline": ybaseline,
}
kwargs.update(userargs)
censored = {i: j for (i, j) in kwargs.items() if j is not None}
nself.style.update(censored)
# warn user if they entered kwargs that weren't recognized:
unrecognized = [i for i in kwargs if i not in userargs]
if unrecognized:
print("unrecognized arguments skipped: {}".format(unrecognized))
print("check the docs, argument names may have changed.")
# Init Drawing class object.
draw = Drawing(nself)
# Debug returns the object to test with.
if kwargs.get("debug"):
return draw
# Make plot. If user provided explicit axes then include them.
canvas, axes = draw.update(axes=axes)
return canvas, axes |
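A hedged usage sketch for the `draw` method above, assuming a ToyTree instance built from a Newick string with toytree's top-level parser; the Newick string and styling arguments are illustrative:

import toytree

newick = "((a:1,b:1):1,c:2);"
tre = toytree.tree(newick)  # assumed toytree constructor
canvas, axes = tre.draw(
    tree_style='n',
    tip_labels_align=True,
    node_sizes=8,
)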
python | def step(self, step_size: Timedelta=None):
"""Advance the simulation one step.
Parameters
----------
step_size
An optional size of step to take. Must be the same type as the
simulation clock's step size (usually a pandas.Timedelta).
"""
old_step_size = self.clock.step_size
if step_size is not None:
if not isinstance(step_size, type(self.clock.step_size)):
raise ValueError(f"Provided time must be an instance of {type(self.clock.step_size)}")
self.clock._step_size = step_size
super().step()
self.clock._step_size = old_step_size |
python | def dt_comp(self, sampled_topics):
"""
Compute document-topic matrix from sampled_topics.
"""
samples = sampled_topics.shape[0]
dt = np.zeros((self.D, self.K, samples))
for s in range(samples):
dt[:, :, s] = \
samplers_lda.dt_comp(self.docid, sampled_topics[s, :],
self.N, self.K, self.D, self.alpha)
return dt |
java | private Long findTransactionBeginPosition(ErosaConnection mysqlConnection, final EntryPosition entryPosition)
throws IOException {
// The first entry at the start position may not be a transaction Begin, so scan this binlog from its start
final java.util.concurrent.atomic.AtomicLong preTransactionStartPosition = new java.util.concurrent.atomic.AtomicLong(0L);
mysqlConnection.reconnect();
mysqlConnection.seek(entryPosition.getJournalName(), 4L, entryPosition.getGtid(), new SinkFunction<LogEvent>() {
private LogPosition lastPosition;
public boolean sink(LogEvent event) {
try {
CanalEntry.Entry entry = parseAndProfilingIfNecessary(event, true);
if (entry == null) {
return true;
}
// Check the first business entry directly to confirm whether it is a transaction Begin,
// and record the transaction begin position
if (entry.getEntryType() == CanalEntry.EntryType.TRANSACTIONBEGIN
&& entry.getHeader().getLogfileOffset() < entryPosition.getPosition()) {
preTransactionStartPosition.set(entry.getHeader().getLogfileOffset());
}
if (entry.getHeader().getLogfileOffset() >= entryPosition.getPosition()) {
return false;// exit
}
lastPosition = buildLastPosition(entry);
} catch (Exception e) {
processSinkError(e, lastPosition, entryPosition.getJournalName(), entryPosition.getPosition());
return false;
}
return running;
}
});
// Check the transaction-begin position found closest to the target position
if (preTransactionStartPosition.get() > entryPosition.getPosition()) {
logger.error("preTransactionEndPosition greater than startPosition from zk or localconf, maybe lost data");
throw new CanalParseException("preTransactionStartPosition greater than startPosition from zk or localconf, maybe lost data");
}
return preTransactionStartPosition.get();
} |
java | public static boolean isUnauthenticated(Subject subject) {
if (TraceComponent.isAnyTracingEnabled() && tc.isEntryEnabled()) {
SibTr.entry(tc, CLASS_NAME + "isUnauthenticated", subject);
}
boolean result = subjectHelper.isUnauthenticated(subject);
if (TraceComponent.isAnyTracingEnabled() && tc.isEntryEnabled()) {
SibTr.exit(tc, CLASS_NAME + "isUnauthenticated", result);
}
return result;
} |
java | Map<String, ProvidedName> collectProvidedNames(Node externs, Node root) {
if (this.providedNames.isEmpty()) {
// goog is special-cased because it is provided in Closure's base library.
providedNames.put(GOOG, new ProvidedName(GOOG, null, null, false /* implicit */, false));
NodeTraversal.traverseRoots(compiler, new CollectDefinitions(), externs, root);
}
return this.providedNames;
} |
java | public static <E> ProxyChannel<E> createMpscProxy(int capacity,
Class<E> iFace,
WaitStrategy waitStrategy) {
return createProxy(capacity,
iFace,
waitStrategy,
MpscOffHeapFixedSizeRingBuffer.class);
} |
java | public static ReturnType getReturnTypeDescriptor(String methodDescriptor) {
int index = methodDescriptor.indexOf(')') + 1;
if (methodDescriptor.charAt(index) == 'L') {
return new ReturnType(methodDescriptor.substring(index + 1, methodDescriptor.length() - 1), Kind.REFERENCE);
}
else {
return new ReturnType(methodDescriptor.substring(index), methodDescriptor.charAt(index) == '[' ? Kind.ARRAY
: Kind.PRIMITIVE);
}
} |
java | public void setReceiveMessageAttributeNames(List<String> receiveMessageAttributeNames) {
if (receiveMessageAttributeNames == null) {
this.receiveMessageAttributeNames = Collections.emptyList();
} else {
this.receiveMessageAttributeNames = Collections.unmodifiableList(new ArrayList<String>(receiveMessageAttributeNames));
}
} |
python | def resolve(self, cfg, addr, func_addr, block, jumpkind):
"""
Resolves the indirect jump in MIPS ELF binaries where all external function calls are indexed using gp.
:param cfg: A CFG instance.
:param int addr: IRSB address.
:param int func_addr: The function address.
:param pyvex.IRSB block: The IRSB.
:param str jumpkind: The jumpkind.
:return: If it was resolved and targets alongside it
:rtype: tuple
"""
project = self.project
b = Blade(cfg.graph, addr, -1, cfg=cfg, project=project, ignore_sp=True, ignore_bp=True,
ignored_regs=('gp',)
)
sources = [n for n in b.slice.nodes() if b.slice.in_degree(n) == 0]
if not sources:
return False, []
source = sources[0]
source_addr = source[0]
annotated_cfg = AnnotatedCFG(project, None, detect_loops=False)
annotated_cfg.from_digraph(b.slice)
state = project.factory.blank_state(addr=source_addr, mode="fastpath",
remove_options=options.refs
)
func = cfg.kb.functions.function(addr=func_addr)
gp_offset = project.arch.registers['gp'][0]
if 'gp' not in func.info:
sec = project.loader.find_section_containing(func.addr)
if sec is None or sec.name != '.plt':
# this might be a special case: gp is only used once in this function, and it can be initialized right before
# its use site.
# TODO: handle this case
l.debug('Failed to determine value of register gp for function %#x.', func.addr)
return False, [ ]
else:
state.regs.gp = func.info['gp']
def overwrite_tmp_value(state):
state.inspect.tmp_write_expr = state.solver.BVV(func.info['gp'], state.arch.bits)
# Special handling for cases where `gp` is stored on the stack
got_gp_stack_store = False
for block_addr_in_slice in set(slice_node[0] for slice_node in b.slice.nodes()):
for stmt in project.factory.block(block_addr_in_slice).vex.statements:
if isinstance(stmt, pyvex.IRStmt.Put) and stmt.offset == gp_offset and \
isinstance(stmt.data, pyvex.IRExpr.RdTmp):
tmp_offset = stmt.data.tmp # pylint:disable=cell-var-from-loop
# we must make sure value of that temporary variable equals to the correct gp value
state.inspect.make_breakpoint('tmp_write', when=BP_BEFORE,
condition=lambda s, bbl_addr_=block_addr_in_slice,
tmp_offset_=tmp_offset:
s.scratch.bbl_addr == bbl_addr_ and s.inspect.tmp_write_num == tmp_offset_,
action=overwrite_tmp_value
)
got_gp_stack_store = True
break
if got_gp_stack_store:
break
simgr = self.project.factory.simulation_manager(state)
simgr.use_technique(Slicecutor(annotated_cfg))
simgr.run()
if simgr.cut:
target = simgr.cut[0].addr
if self._is_target_valid(cfg, target):
l.debug("Indirect jump at %#x is resolved to target %#x.", addr, target)
return True, [ target ]
l.debug("Indirect jump at %#x is resolved to target %#x, which seems to be invalid.", addr, target)
return False, [ ]
l.debug("Indirect jump at %#x cannot be resolved by %s.", addr, repr(self))
return False, [ ] |
python | def chain(*layers):
"""Compose two models `f` and `g` such that they become layers of a single
feed-forward model that computes `g(f(x))`.
Raises exception if their dimensions don't match.
"""
if len(layers) == 0:
return FeedForward([])
elif len(layers) == 1:
return layers[0]
else:
return FeedForward(layers) |
java | public /*@Nullable*/Downloader startGetFile(final String path, /*@Nullable*/String rev)
throws DbxException
{
DbxPathV1.checkArgNonRoot("path", path);
String apiPath = "1/files/auto" + path;
/*@Nullable*/String[] params = {
"rev", rev
};
return startGetSomething(apiPath, params);
} |
python | def numeric(self, *args, **kwargs):
"""Compare attributes of pairs with numeric algorithm.
Shortcut of :class:`recordlinkage.compare.Numeric`::
from recordlinkage.compare import Numeric
indexer = recordlinkage.Compare()
indexer.add(Numeric())
"""
compare = Numeric(*args, **kwargs)
self.add(compare)
return self |
java | public void add(IntArrayList values) {
ensureCapacity(size + values.size);
for (int i=0; i<values.size; i++) {
this.add(values.elements[i]);
}
} |
python | def parse_policies(self, fetched_policy, params):
"""
Parse a single IAM policy and fetch additional information
"""
api_client = params['api_client']
policy = {}
policy['name'] = fetched_policy.pop('PolicyName')
policy['id'] = fetched_policy.pop('PolicyId')
policy['arn'] = fetched_policy.pop('Arn')
# Download version and document
policy_version = api_client.get_policy_version(PolicyArn = policy['arn'], VersionId = fetched_policy['DefaultVersionId'])
policy_version = policy_version['PolicyVersion']
policy['PolicyDocument'] = policy_version['Document']
# Get attached IAM entities
policy['attached_to'] = {}
attached_entities = handle_truncated_response(api_client.list_entities_for_policy, {'PolicyArn': policy['arn']}, ['PolicyGroups', 'PolicyRoles', 'PolicyUsers'])
for entity_type in attached_entities:
resource_type = entity_type.replace('Policy', '').lower()
if len(attached_entities[entity_type]):
policy['attached_to'][resource_type] = []
for entity in attached_entities[entity_type]:
name_field = entity_type.replace('Policy', '')[:-1] + 'Name'
resource_name = entity[name_field]
policy['attached_to'][resource_type].append({'name': resource_name})
# Save policy
self.policies[policy['id']] = policy |
java | public void marshall(SegmentImportResource segmentImportResource, ProtocolMarshaller protocolMarshaller) {
if (segmentImportResource == null) {
throw new SdkClientException("Invalid argument passed to marshall(...)");
}
try {
protocolMarshaller.marshall(segmentImportResource.getChannelCounts(), CHANNELCOUNTS_BINDING);
protocolMarshaller.marshall(segmentImportResource.getExternalId(), EXTERNALID_BINDING);
protocolMarshaller.marshall(segmentImportResource.getFormat(), FORMAT_BINDING);
protocolMarshaller.marshall(segmentImportResource.getRoleArn(), ROLEARN_BINDING);
protocolMarshaller.marshall(segmentImportResource.getS3Url(), S3URL_BINDING);
protocolMarshaller.marshall(segmentImportResource.getSize(), SIZE_BINDING);
} catch (Exception e) {
throw new SdkClientException("Unable to marshall request to JSON: " + e.getMessage(), e);
}
} |
python | def loadLibSVMFile(sc, path, numFeatures=-1, minPartitions=None):
"""
Loads labeled data in the LIBSVM format into an RDD of
LabeledPoint. The LIBSVM format is a text-based format used by
LIBSVM and LIBLINEAR. Each line represents a labeled sparse
feature vector using the following format:
label index1:value1 index2:value2 ...
where the indices are one-based and in ascending order. This
method parses each line into a LabeledPoint, where the feature
indices are converted to zero-based.
:param sc: Spark context
:param path: file or directory path in any Hadoop-supported file
system URI
:param numFeatures: number of features, which will be determined
from the input data if a nonpositive value
is given. This is useful when the dataset is
already split into multiple files and you
want to load them separately, because some
features may not be present in certain files,
which leads to inconsistent feature
dimensions.
:param minPartitions: min number of partitions
@return: labeled data stored as an RDD of LabeledPoint
>>> from tempfile import NamedTemporaryFile
>>> from pyspark.mllib.util import MLUtils
>>> from pyspark.mllib.regression import LabeledPoint
>>> tempFile = NamedTemporaryFile(delete=True)
>>> _ = tempFile.write(b"+1 1:1.0 3:2.0 5:3.0\\n-1\\n-1 2:4.0 4:5.0 6:6.0")
>>> tempFile.flush()
>>> examples = MLUtils.loadLibSVMFile(sc, tempFile.name).collect()
>>> tempFile.close()
>>> examples[0]
LabeledPoint(1.0, (6,[0,2,4],[1.0,2.0,3.0]))
>>> examples[1]
LabeledPoint(-1.0, (6,[],[]))
>>> examples[2]
LabeledPoint(-1.0, (6,[1,3,5],[4.0,5.0,6.0]))
"""
from pyspark.mllib.regression import LabeledPoint
lines = sc.textFile(path, minPartitions)
parsed = lines.map(lambda l: MLUtils._parse_libsvm_line(l))
if numFeatures <= 0:
parsed.cache()
numFeatures = parsed.map(lambda x: -1 if x[1].size == 0 else x[1][-1]).reduce(max) + 1
return parsed.map(lambda x: LabeledPoint(x[0], Vectors.sparse(numFeatures, x[1], x[2]))) |
java | public int nextSetBit(final int i) {
int x = i >> 6;
long w = bitmap[x];
w >>>= i;
if (w != 0) {
return i + numberOfTrailingZeros(w);
}
for (++x; x < bitmap.length; ++x) {
if (bitmap[x] != 0) {
return x * 64 + numberOfTrailingZeros(bitmap[x]);
}
}
return -1;
} |
python | def linecol_to_pos(text, line, col):
"""Return the offset of this line and column in text.
Lines are one-based, columns zero-based.
This is how Jedi wants it. Don't ask me why.
"""
nth_newline_offset = 0
for i in range(line - 1):
new_offset = text.find("\n", nth_newline_offset)
if new_offset < 0:
raise ValueError("Text does not have {0} lines."
.format(line))
nth_newline_offset = new_offset + 1
offset = nth_newline_offset + col
if offset > len(text):
raise ValueError("Line {0} column {1} is not within the text"
.format(line, col))
return offset |
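A concrete check of the conversion above (lines are one-based, columns zero-based):

text = "ab\ncd"
print(linecol_to_pos(text, 1, 0))  # 0 -> points at 'a'
print(linecol_to_pos(text, 2, 1))  # 4 -> points at 'd'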
java | @BetaApi
public final Operation insertBackendService(
String project, BackendService backendServiceResource) {
InsertBackendServiceHttpRequest request =
InsertBackendServiceHttpRequest.newBuilder()
.setProject(project)
.setBackendServiceResource(backendServiceResource)
.build();
return insertBackendService(request);
} |
java | @Override
public void commence(final HttpServletRequest request,
final HttpServletResponse response,
final AuthenticationException authException) throws IOException {
// This is invoked when user tries to access a secured REST resource
// without supplying any credentials. We should just send a 401
// Unauthorized response because there is no 'login page' to redirect
// to.
response.sendError(HttpServletResponse.SC_UNAUTHORIZED,
authException.getMessage());
} |
python | def dirint(ghi, altitudes, doys, pressures, use_delta_kt_prime=True,
temp_dew=None, min_sin_altitude=0.065, min_altitude=3):
"""
Determine DNI from GHI using the DIRINT modification of the DISC
model.
Implements the modified DISC model known as "DIRINT" introduced in
[1]. DIRINT predicts direct normal irradiance (DNI) from measured
global horizontal irradiance (GHI). DIRINT improves upon the DISC
model by using time-series GHI data and dew point temperature
information. The effectiveness of the DIRINT model improves with
each piece of information provided.
The pvlib implementation limits the clearness index to 1.
The DIRINT model requires time series data.
Note:
[1] Perez, R., P. Ineichen, E. Maxwell, R. Seals and A. Zelenka,
(1992). "Dynamic Global-to-Direct Irradiance Conversion Models".
ASHRAE Transactions-Research Series, pp. 354-369
[2] Maxwell, E. L., "A Quasi-Physical Model for Converting Hourly
Global Horizontal to Direct Normal Insolation", Technical Report No.
SERI/TR-215-3087, Golden, CO: Solar Energy Research Institute, 1987.
Args:
ghi: array-like
Global horizontal irradiance in W/m^2.
altitudes: array-like
True (not refraction-corrected) solar altitude angles in decimal
degrees.
doys: array-like
Integers representing the day of the year.
pressures: array-like
The site pressure in Pascal. Pressure may be measured or an
average pressure may be calculated from site altitude.
use_delta_kt_prime: bool, default True
If True, indicates that the stability index delta_kt_prime is
included in the model. The stability index adjusts the estimated
DNI in response to dynamics in the time series of GHI. It is
recommended that delta_kt_prime is not used if the time between
GHI points is 1.5 hours or greater. If use_delta_kt_prime=True,
the GHI input must be an ordered time series.
temp_dew: None or array-like, default None
Surface dew point temperatures, in degrees C. Values of temp_dew
must be numeric. If temp_dew is not provided, then dew point
improvements are not applied.
min_sin_altitude: numeric, default 0.065
Minimum value of sin(altitude) to allow when calculating global
clearness index `kt`. Equivalent to altitude = 3.727 degrees.
min_altitude: numeric, default 3
Minimum value of altitude to allow in DNI calculation. DNI will be
set to 0 for times with altitude values smaller than `min_altitude`.
Returns:
dni: array-like
The modeled direct normal irradiance in W/m^2 provided by the
DIRINT model.
"""
# calculate kt_prime values
kt_primes = []
disc_dni = []
for i in range(len(ghi)):
dni, kt, airmass = disc(ghi[i], altitudes[i], doys[i], pressure=pressures[i],
min_sin_altitude=min_sin_altitude,
min_altitude=min_altitude)
kt_prime = clearness_index_zenith_independent(
kt, airmass, max_clearness_index=1)
kt_primes.append(kt_prime)
disc_dni.append(dni)
# calculate delta_kt_prime values
if use_delta_kt_prime is True:
delta_kt_prime = []
for i in range(len(kt_primes)):
try:
kt_prime_1 = kt_primes[i + 1]
except IndexError:
# last hour
kt_prime_1 = kt_primes[0]
delta_kt_prime.append(0.5 * (abs(kt_primes[i] - kt_prime_1) +
abs(kt_primes[i] - kt_primes[i - 1])))
else:
delta_kt_prime = [-1] * len(ghi)
# calculate W values if dew point temperatures have been provided
if temp_dew is not None:
w = [math.exp(0.07 * td - 0.075) for td in temp_dew]
else:
w = [-1] * len(ghi)
# bin the values into appropriate categories for lookup in the coefficient matrix.
ktp_bin, alt_bin, w_bin, delta_ktp_bin = \
_dirint_bins(kt_primes, altitudes, w, delta_kt_prime)
# get the dirint coefficient by looking up values in the matrix
coeffs = _get_dirint_coeffs()
dirint_coeffs = [coeffs[ktp_bin[i]][alt_bin[i]][delta_ktp_bin[i]][w_bin[i]]
for i in range(len(ghi))]
# Perez eqn 5
dni = [disc_d * coef for disc_d, coef in zip(disc_dni, dirint_coeffs)]
return dni |
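For orientation, the dew-point term used above is w = exp(0.07 * temp_dew - 0.075); a quick worked value (illustration only):

import math

temp_dew = 10.0                         # dew point in degrees C
w = math.exp(0.07 * temp_dew - 0.075)   # ~1.87; missing dew points fall back to w = -1 above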
python | def construct_parameter_pattern(parameter):
"""
Given a parameter definition returns a regex pattern that will match that
part of the path.
"""
name = parameter['name']
type = parameter['type']
repeated = '[^/]'
if type == 'integer':
        repeated = r'\d'  # raw string avoids the invalid escape sequence warning
return "(?P<{name}>{repeated}+)".format(name=name, repeated=repeated) |
java | public static I_CmsVfsServiceAsync getVfsService() {
if (VFS_SERVICE == null) {
VFS_SERVICE = GWT.create(I_CmsVfsService.class);
String serviceUrl = CmsCoreProvider.get().link("org.opencms.gwt.CmsVfsService.gwt");
((ServiceDefTarget)VFS_SERVICE).setServiceEntryPoint(serviceUrl);
}
return VFS_SERVICE;
} |
java | public synchronized boolean addAll(int index, Collection<? extends E> c) {
modCount++;
if (index < 0 || index > elementCount)
throw new ArrayIndexOutOfBoundsException(index);
Object[] a = c.toArray();
int numNew = a.length;
ensureCapacityHelper(elementCount + numNew);
int numMoved = elementCount - index;
if (numMoved > 0)
System.arraycopy(elementData, index, elementData, index + numNew,
numMoved);
System.arraycopy(a, 0, elementData, index, numNew);
elementCount += numNew;
return numNew != 0;
} |
python | def handle_response(response):
"""
Given a requests.Response object, throw the appropriate exception, if applicable.
"""
# ignore valid responses
if response.status_code < 400:
return
cls = _status_to_exception_type.get(response.status_code, HttpError)
kwargs = {
'code': response.status_code,
'method': response.request.method,
'url': response.request.url,
'details': response.text,
}
if response.headers and 'retry-after' in response.headers:
kwargs['retry_after'] = response.headers.get('retry-after')
raise cls(**kwargs) |
java | public LinePlot line(double[][] data, Line.Style style) {
return line(null, data, style);
} |
python | def until(condition, fns):
"""
Try a list of seeder functions until a condition is met.
:param condition:
a function that takes one argument - a seed - and returns ``True``
or ``False``
:param fns:
a list of seeder functions
:return:
a "composite" seeder function that calls each supplied function in turn,
and returns the first seed where the condition is met. If the condition
is never met, it returns ``None``.
"""
def f(msg):
for fn in fns:
seed = fn(msg)
if condition(seed):
return seed
return None
return f |
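A usage sketch for until() above (the seeder functions are hypothetical):

def from_subject(msg):
    return msg.get('subject')

def from_body(msg):
    return msg.get('body')

pick_seed = until(lambda seed: seed is not None, [from_subject, from_body])
assert pick_seed({'body': 'hello'}) == 'hello'   # no subject, so the body seeder wins
assert pick_seed({}) is None                     # condition never met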
java | public static List<WriteFuture> broadcast(Object message, IoSession... sessions) {
if (sessions == null) {
sessions = EMPTY_SESSIONS;
}
List<WriteFuture> answer = new ArrayList<>(sessions.length);
if (message instanceof IoBuffer) {
for (IoSession s: sessions) {
answer.add(s.write(((IoBuffer) message).duplicate()));
}
} else {
for (IoSession s: sessions) {
answer.add(s.write(message));
}
}
return answer;
} |
java | public void markInitialState() {
if (!attachedObjects.isEmpty()) {
for (T t : attachedObjects) {
if (t instanceof PartialStateHolder) {
((PartialStateHolder) t).markInitialState();
}
}
}
initialState = true;
} |
java | public void sampleSequenceForward(SequenceModel model, int[] sequence, double temperature) {
// System.err.println("Sampling forward");
for (int pos=0; pos<sequence.length; pos++) {
samplePosition(model, sequence, pos, temperature);
}
} |
java | private Later<JsonNode> createFetcher() {
final RequestBuilder call = new GraphRequestBuilder(getGraphEndpoint(), HttpMethod.POST, this.timeout, this.retries);
// This actually creates the correct JSON structure as an array
String batchValue = JSONUtils.toJSON(this.graphRequests, this.mapper);
if (log.isLoggable(Level.FINEST))
log.finest("Batch request is: " + batchValue);
this.addParams(call, new Param[] { new Param("batch", batchValue) });
final HttpResponse response;
try {
response = call.execute();
} catch (IOException ex) {
throw new IOFacebookException(ex);
}
return new Later<JsonNode>() {
@Override
public JsonNode get() throws FacebookException
{
try {
if (response.getResponseCode() == HttpURLConnection.HTTP_OK
|| response.getResponseCode() == HttpURLConnection.HTTP_BAD_REQUEST
|| response.getResponseCode() == HttpURLConnection.HTTP_UNAUTHORIZED) {
// If it was an error, we will recognize it in the content later.
// It's possible we should capture all 4XX codes here.
JsonNode result = mapper.readTree(response.getContentStream());
if (log.isLoggable(Level.FINEST))
log.finest("Response is: " + result);
return result;
} else {
throw new IOFacebookException(
"Unrecognized error " + response.getResponseCode() + " from "
+ call + " :: " + StringUtils.read(response.getContentStream()));
}
} catch (IOException e) {
throw new IOFacebookException("Error calling " + call, e);
}
}
};
} |
java | public void updateBackground(MotionModel homeToCurrent, T frame) {
worldToHome.concat(homeToCurrent, worldToCurrent);
worldToCurrent.invert(currentToWorld);
// find the distorted polygon of the current image in the "home" background reference frame
transform.setModel(currentToWorld);
transform.compute(0, 0, corners[0]);
transform.compute(frame.width-1,0,corners[1]);
transform.compute(frame.width-1,frame.height-1,corners[2]);
transform.compute(0, frame.height-1, corners[3]);
// find the bounding box
int x0 = Integer.MAX_VALUE;
int y0 = Integer.MAX_VALUE;
int x1 = -Integer.MAX_VALUE;
int y1 = -Integer.MAX_VALUE;
for (int i = 0; i < 4; i++) {
Point2D_F32 p = corners[i];
int x = (int)p.x;
int y = (int)p.y;
if( x0 > x ) x0 = x;
if( y0 > y ) y0 = y;
if( x1 < x ) x1 = x;
if( y1 < y ) y1 = y;
}
x1++;y1++;
if( x0 < 0 ) x0 = 0;
if( x1 > backgroundWidth ) x1 = backgroundWidth;
if( y0 < 0 ) y0 = 0;
if( y1 > backgroundHeight ) y1 = backgroundHeight;
updateBackground(x0,y0,x1,y1,frame);
} |
java | private static byte[] lmHash(String password) throws Exception {
byte[] oemPassword = password.toUpperCase().getBytes("US-ASCII");
int length = Math.min(oemPassword.length, 14);
byte[] keyBytes = new byte[14];
System.arraycopy(oemPassword, 0, keyBytes, 0, length);
Key lowKey = createDESKey(keyBytes, 0);
Key highKey = createDESKey(keyBytes, 7);
Cipher des = Cipher.getInstance("DES/ECB/NoPadding");
des.init(Cipher.ENCRYPT_MODE, lowKey);
byte[] lowHash = des.doFinal(LM_HASH_MAGIC_CONSTANT);
des.init(Cipher.ENCRYPT_MODE, highKey);
byte[] highHash = des.doFinal(LM_HASH_MAGIC_CONSTANT);
byte[] lmHash = new byte[16];
System.arraycopy(lowHash, 0, lmHash, 0, 8);
System.arraycopy(highHash, 0, lmHash, 8, 8);
return lmHash;
} |
java | public boolean process( List<AssociatedPair> points , DMatrixRMaj solution ) {
if( points.size() < 8 )
throw new IllegalArgumentException("Must be at least 8 points. Was only "+points.size());
// use normalized coordinates for pixel and calibrated
// TODO re-evaluate decision to normalize for calibrated case
LowLevelMultiViewOps.computeNormalization(points, N1, N2);
createA(points,A);
if (process(A,solution))
return false;
// undo normalization on F
PerspectiveOps.multTranA(N2.matrix(),solution,N1.matrix(),solution);
if( computeFundamental )
return projectOntoFundamentalSpace(solution);
else
return projectOntoEssential(solution);
} |
python | def desc(t=None, reg=True):
"""
Describe Class Dependency
:param reg: should we register this class as well
:param t: custom type as well
:return:
"""
def decorated_fn(cls):
if not inspect.isclass(cls):
            raise NotImplementedError('For now we can only describe classes')
name = t or camel_case_to_underscore(cls.__name__)[0]
if reg:
di.injector.register(name, cls)
else:
di.injector.describe(name, cls)
return cls
return decorated_fn |
java | @Override
public SqlContext inOutParam(final String parameterName, final Object value, final SQLType sqlType) {
if (value instanceof Optional) {
Optional<?> optionalValue = (Optional<?>) value;
if (optionalValue.isPresent()) {
param(new InOutParameter(parameterName, optionalValue.get(), sqlType));
} else {
param(new InOutParameter(parameterName, null, JDBCType.NULL));
}
return this;
} else {
return param(new InOutParameter(parameterName, value, sqlType));
}
} |
java | public static String cutBegin(String data, int maxLength) {
if (data.length() > maxLength) {
return "..." + data.substring(data.length() - maxLength, data.length());
} else {
return data;
}
} |
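The same truncate-from-the-front idea as cutBegin above, rendered in Python for a quick check (an illustration, not the library code):

def cut_begin(data, max_length):
    if len(data) > max_length:
        return "..." + data[-max_length:]   # keep only the trailing max_length characters
    return data

assert cut_begin("abcdefgh", 3) == "...fgh"
assert cut_begin("abc", 5) == "abc"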
java | private static void matchScale(BigDecimal[] val) {
if (val[0].scale == val[1].scale) {
return;
} else if (val[0].scale < val[1].scale) {
val[0] = val[0].setScale(val[1].scale, ROUND_UNNECESSARY);
} else if (val[1].scale < val[0].scale) {
val[1] = val[1].setScale(val[0].scale, ROUND_UNNECESSARY);
}
} |
java | public static byte[] bytes(String s) {
try {
return s.getBytes(ENCODING);
} catch (UnsupportedEncodingException e) {
log.error("UnsupportedEncodingException ", e);
throw new RuntimeException(e);
}
} |
java | public static <T> Iterator<T[]> combinationsIterator(final T[] elements, final int subsetSize) {
return new Iterator<T[]>() {
/**
* The index on the combination array.
*/
private int r = 0;
/**
* The index on the elements array.
*/
private int index = 0;
/**
* The indexes of the elements of the combination.
*/
private final int[] selectedIndexes = new int[subsetSize];
/**
* Flag that tells us if there is a next item. If the flag is null then we don't know.
*/
private Boolean hasNext = null;
/**
* {@inheritDoc}
*/
@Override
public boolean hasNext() {
if(hasNext == null) { //if we don't know if there is a next item, we need to try to locate it.
hasNext = locateNext();
}
return hasNext;
}
/**
* {@inheritDoc}
*/
@Override
public T[] next() {
if(!hasNext()) {
    throw new java.util.NoSuchElementException();
}
hasNext = null; //we retrieved the item so we need to locate a new item next time.
@SuppressWarnings("unchecked")
T[] combination =(T[]) Array.newInstance(elements[0].getClass(), subsetSize);
for(int i = 0; i< subsetSize; i++) {
combination[i] = elements[selectedIndexes[i]];
}
return combination;
}
/**
* Locates the next item OR informs us that there are no next items.
*
* @return
*/
private boolean locateNext() {
if(subsetSize == 0) {
return false;
}
int N = elements.length;
while(true) {
if(index <= (N + (r - subsetSize))) {
selectedIndexes[r] = index++;
if(r == subsetSize -1) {
return true; //we retrieved the next
}
else {
r++;
}
}
else {
r--;
if(r < 0) {
return false; //does not have next
}
index = selectedIndexes[r]+1;
}
}
}
};
} |
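The iterator above enumerates index combinations in the same lexicographic order as Python's itertools.combinations, which makes a handy cross-check (illustration only):

from itertools import combinations

elements = ['a', 'b', 'c', 'd']
assert list(combinations(elements, 2)) == [
    ('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd'),
]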
java | public java.util.List<ScheduledInstancesIpv6Address> getIpv6Addresses() {
if (ipv6Addresses == null) {
ipv6Addresses = new com.amazonaws.internal.SdkInternalList<ScheduledInstancesIpv6Address>();
}
return ipv6Addresses;
} |
java | public String createWorkflow(Workflow workflow) {
executionDAO.createWorkflow(workflow);
indexDAO.indexWorkflow(workflow);
return workflow.getWorkflowId();
} |
python | def apply_security_groups_to_lb(self, name, security_groups):
"""
Applies security groups to the load balancer.
Applying security groups that are already registered with the
Load Balancer has no effect.
:type name: string
:param name: The name of the Load Balancer
:type security_groups: List of strings
:param security_groups: The name of the security group(s) to add.
:rtype: List of strings
:return: An updated list of security groups for this Load Balancer.
"""
params = {'LoadBalancerName' : name}
self.build_list_params(params, security_groups,
'SecurityGroups.member.%d')
return self.get_list('ApplySecurityGroupsToLoadBalancer',
params,
None) |
python | def send_to_address(self, asset_id, to_addr, value, fee=None, change_addr=None, id=None, endpoint=None):
"""
Args:
asset_id: (str) asset identifier (for NEO: 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b', for GAS: '602c79718b16e442de58778e148d0b1084e3b2dffd5de6b7b16cee7969282de7')
to_addr: (str) destination address
value: (int/decimal) transfer amount
fee: (decimal, optional) Paying the handling fee helps elevate the priority of the network to process the transfer. It defaults to 0, and can be set to a minimum of 0.00000001. The low priority threshold is 0.001.
change_addr: (str, optional) Change address, default is the first standard address in the wallet.
id: (int, optional) id to use for response tracking
endpoint: (RPCEndpoint, optional) endpoint to specify to use
Returns:
json object of the result or the error encountered in the RPC call
"""
params = [asset_id, to_addr, value]
if fee:
params.append(fee)
if fee and change_addr:
params.append(change_addr)
elif not fee and change_addr:
params.append(0)
params.append(change_addr)
return self._call_endpoint(SEND_TO_ADDRESS, params=params, id=id, endpoint=endpoint) |
python | def get_neuroml_from_sonata(sonata_filename, id, generate_lems = True, format='xml'):
"""
Return a NeuroMLDocument with (most of) the contents of the Sonata model
"""
from neuroml.hdf5.NetworkBuilder import NetworkBuilder
neuroml_handler = NetworkBuilder()
sr = SonataReader(filename=sonata_filename, id=id)
sr.parse(neuroml_handler)
nml_doc = neuroml_handler.get_nml_doc()
sr.add_neuroml_components(nml_doc)
if format == 'xml':
nml_file_name = '%s.net.nml'%id
from neuroml.writers import NeuroMLWriter
NeuroMLWriter.write(nml_doc, nml_file_name)
elif format == 'hdf5':
nml_file_name = '%s.net.nml.h5'%id
from neuroml.writers import NeuroMLHdf5Writer
NeuroMLHdf5Writer.write(nml_doc, nml_file_name)
print_v('Written to: %s'%nml_file_name)
if generate_lems:
lems_file_name = sr.generate_lems_file(nml_file_name, nml_doc)
return sr, lems_file_name, nml_file_name, nml_doc
return nml_doc |
java | @Support({SQLDialect.POSTGRES})
public static <T> Field<T[]> arrayAgg(Field<T> field) {
return DSL.field("array_agg({0})", field.getDataType().getArrayDataType(), field);
} |
java | public static Number multiply(Number left, Number right) {
return NumberMath.multiply(left, right);
} |
python | def event_listen(self, timeout=None, raise_on_disconnect=True):
'''Does not return until PulseLoopStop
gets raised in event callback or timeout passes.
timeout should be in seconds (float),
0 for non-blocking poll and None (default) for no timeout.
raise_on_disconnect causes PulseDisconnected exceptions by default.
Do not run any pulse operations from these callbacks.'''
assert self.event_callback
try: self._pulse_poll(timeout)
except c.pa.CallError: pass # e.g. from mainloop_dispatch() on disconnect
if raise_on_disconnect and not self.connected: raise PulseDisconnected() |
python | def retrieve_data_directory(self):
"""
Retrieve the data directory
Look first into config_filename_global
then into config_filename_user. The latter takes precedence.
"""
args = self.args
try:
if args['datadirectory']:
aux.ensure_dir(args['datadirectory'])
return args['datadirectory']
except KeyError:
pass
config = configparser.ConfigParser()
config.read([config_filename_global, self.config_filename_user])
section = config.default_section
data_path = config.get(section, 'Data directory',
fallback='~/.local/share/greg')
data_path_expanded = os.path.expanduser(data_path)
aux.ensure_dir(data_path_expanded)
return os.path.expanduser(data_path_expanded) |
java | @Override
public int compareTo(EigenPair o) {
if(this.eigenvalue < o.eigenvalue) {
return -1;
}
if(this.eigenvalue > o.eigenvalue) {
return +1;
}
return 0;
} |
python | def with_connection(func):
"""Decorate a function to open a new datafind connection if required
This method will inspect the ``connection`` keyword, and if `None`
(or missing), will use the ``host`` and ``port`` keywords to open
a new connection and pass it as ``connection=<new>`` to ``func``.
"""
@wraps(func)
def wrapped(*args, **kwargs):
if kwargs.get('connection') is None:
kwargs['connection'] = _choose_connection(host=kwargs.get('host'),
port=kwargs.get('port'))
try:
return func(*args, **kwargs)
except HTTPException:
kwargs['connection'] = reconnect(kwargs['connection'])
return func(*args, **kwargs)
return wrapped |
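A usage sketch for the decorator above; find_frame_urls and its arguments are hypothetical, the point is only that ``connection`` is filled in when the caller omits it:

@with_connection
def find_frame_urls(site, frametype, start, end, connection=None, **kwargs):
    # the decorator guarantees connection is not None here
    return connection.find_frame_urls(site, frametype, start, end, **kwargs)

# find_frame_urls('L', 'L1_HOFT', 0, 16) opens a connection automatically;
# find_frame_urls('L', 'L1_HOFT', 0, 16, connection=conn) reuses an existing one.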
python | def generate (self, ps):
""" Generates all possible targets contained in this project.
"""
assert isinstance(ps, property_set.PropertySet)
self.manager_.targets().log(
"Building project '%s' with '%s'" % (self.name (), str(ps)))
self.manager_.targets().increase_indent ()
result = GenerateResult ()
for t in self.targets_to_build ():
g = t.generate (ps)
result.extend (g)
self.manager_.targets().decrease_indent ()
return result |
java | public Bucket updateBucket(String bucketName) {
// [START updateBucket]
BucketInfo bucketInfo = BucketInfo.newBuilder(bucketName).setVersioningEnabled(true).build();
Bucket bucket = storage.update(bucketInfo);
// [END updateBucket]
return bucket;
} |
python | def delete_on_computes(self):
"""
Delete the project on computes but not on controller
"""
for compute in list(self._project_created_on_compute):
if compute.id != "local":
yield from compute.delete("/projects/{}".format(self._id))
self._project_created_on_compute.remove(compute) |
python | def get_hostmap(profile):
'''
We abuse the profile combination to also derive a pilot-host map, which
will tell us on what exact host each pilot has been running. To do so, we
check for the PMGR_ACTIVE advance event in agent_0.prof, and use the NTP
sync info to associate a hostname.
'''
# FIXME: This should be replaced by proper hostname logging
# in `pilot.resource_details`.
hostmap = dict() # map pilot IDs to host names
for entry in profile:
if entry[ru.EVENT] == 'hostname':
hostmap[entry[ru.UID]] = entry[ru.MSG]
return hostmap |
java | @SuppressWarnings("unchecked")
protected void processObject(/* String scope, */ Object tempelObject, Set<String> objectClassPath,
ITemplateRepository templateRepository, ITemplateSourceFactory templateSourceFactory) {
// Parameter map from the tempel.xml file:
if(tempelObject instanceof Map) {
properties = new TreeMap<String, String>();
Map<String, String> scopeProperties = (Map<String, String>)tempelObject;
for(String key : scopeProperties.keySet()) {
String value = scopeProperties.get(key);
// value = expressionEvaluator.evaluate(value, properties);
// CHECK: scopeProperties.put(key, value); // update the value after expansion
properties.put(key, value);
}
// properties.put(scope, Collections.unmodifiableMap(scopeProperties));
return;
}
// Dependencies from the tempel.xml file
if(tempelObject instanceof List) {
dependencies = (List<TempelDependency>)tempelObject;
return;
}
// Template definition from the tempel.xml file:
if(tempelObject instanceof Template) {
Template<?> template = (Template<?>)tempelObject;
// Extend the template class path with repository-specific elements:
template.addTemplateClassPathExtender(new RepositoryTemplateClassPathExtender());
template.addTemplateClassPathExtender(new DependenciesTemplateClassPathExtender());
// Extend the template class path with the template's dependencies (for Maven-type repositories):
template.addTemplateClassPathExtender(new FixedSetTemplateClassPathExtender(objectClassPath));
// List<TemplateReference> referenceResources = template.getReferences();
// if(referenceResources != null && !referenceResources.isEmpty()) {
// template.addTemplateClassPathExtender(new ReferenceDependenciesTemplateClassPathExtender(referenceResources));
// }
// Set the parent reference in all sub-templates:
if(template.getResources() != null) {
for(TemplateResource resource : template.getResources()) {
resource.setParentTemplateReference(template);
}
}
// Add to the repository (with a call to template.setRepository(...))
String gId = StringUtils.emptyIfBlank(template.getGroupId());
String tId = StringUtils.emptyIfBlank(template.getTemplateId());
String ver = StringUtils.emptyIfBlank(template.getVersion());
if(!StringUtils.isBlank(gId)) {
templates.add(gId + ":" + tId + ":" + ver);
templateRepository.put(null, gId, tId, ver, template);
}
String key = template.getKey();
if(!StringUtils.isBlank(key)) {
templateRepository.put(key, null, null, null, template);
}
// Finally, once template.setRepository(...) has been called:
template.setTemplateSourceFactory(templateSourceFactory);
return;
}
} |
java | protected List<DisambiguationPatternRule> loadPatternRules(String filename) throws ParserConfigurationException, SAXException, IOException {
DisambiguationRuleLoader ruleLoader = new DisambiguationRuleLoader();
return ruleLoader.getRules(JLanguageTool.getDataBroker().getFromResourceDirAsStream(filename));
} |
python | def erange(start, end, steps):
"""
Returns a numpy array over the specified range taking geometric steps.
See also numpy.logspace()
"""
if start == 0:
print("Nothing you multiply zero by gives you anything but zero. Try picking something small.")
return None
if end == 0:
print("It takes an infinite number of steps to get to zero. Try a small number?")
return None
# figure out our multiplication scale
x = (1.0*end/start)**(1.0/(steps-1))
# now generate the array
ns = _n.array(list(range(0,steps)))
a = start*_n.power(x,ns)
# tidy up the last element (there's often roundoff error)
a[-1] = end
return a |
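A worked example of the geometric step used above: for start=1, end=100 and steps=3 the multiplier is x = (100/1)**(1/(3-1)) = 10, so the array is [1, 10, 100].

import numpy as np

start, end, steps = 1.0, 100.0, 3
x = (end / start) ** (1.0 / (steps - 1))     # 10.0
a = start * np.power(x, np.arange(steps))    # array([  1.,  10., 100.])
a[-1] = end                                  # same roundoff tidy-up as erange()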
java | @Override
public synchronized void addNotification(IntegerID consumerId, AbstractMessage prefetchedMessage)
{
NotificationPacket notifPacket = new NotificationPacket();
notifPacket.setSessionId(sessionId);
notifPacket.setConsumerId(consumerId);
notifPacket.setMessage(prefetchedMessage);
notificationBuffer.add(notifPacket);
} |
python | def get_filename(key, message, default=None, history=None):
"""
Like :meth:`prompt`, but only accepts the name of an existing file
as an input.
:type key: str
:param key: The key under which to store the input in the :class:`InputHistory`.
:type message: str
:param message: The user prompt.
:type default: str|None
:param default: The offered default if none was found in the history.
:type history: :class:`InputHistory` or None
:param history: The history used for recording default values, or None.
"""
def _validate(string):
if not os.path.isfile(string):
return 'File not found. Please enter a filename.'
return prompt(key, message, default, True, _validate, history) |
java | @Setup
public void setup() {
proxyInterceptor = MethodDelegation.to(ByteBuddyProxyInterceptor.class);
accessInterceptor = MethodDelegation.to(ByteBuddyAccessInterceptor.class);
prefixInterceptor = MethodDelegation.to(ByteBuddyPrefixInterceptor.class);
baseClassDescription = TypePool.Default.ofSystemLoader().describe(baseClass.getName()).resolve();
proxyClassDescription = TypePool.Default.ofSystemLoader().describe(ByteBuddyProxyInterceptor.class.getName()).resolve();
accessClassDescription = TypePool.Default.ofSystemLoader().describe(ByteBuddyAccessInterceptor.class.getName()).resolve();
prefixClassDescription = TypePool.Default.ofSystemLoader().describe(ByteBuddyPrefixInterceptor.class.getName()).resolve();
proxyInterceptorDescription = MethodDelegation.to(proxyClassDescription);
accessInterceptorDescription = MethodDelegation.to(accessClassDescription);
prefixInterceptorDescription = MethodDelegation.to(prefixClassDescription);
} |