Allows the user to enter fixed width input while conforming to a character format. Powered by jQuery Masked Input plugin.
Please refer to jQuery Masked Input Documentation to learn about plugin options
Step one
Include the JavaScript file inside the <body> before core template script inclusions, if it's not there already. Please view the jQuery plugin inclusion guideline rules.
<script src="assets/plugins/jquery-inputmask/jquery.inputmask.min.js" type="text/javascript"></script>
Step two
Add the markup.
<input type="text" id="phone" class="form-control">
Step three
Apply the plugin.
Make sure you place the following script below all the pre-requisites mentioned in the Step two above.
<script>$(document).ready(function() {$("#phone").mask("(999) 999-9999");});</script>
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Returns an array of RegexPatternSetSummary objects.
For .NET Core and PCL this operation is only available in asynchronous form. Please refer to ListRegexPatternSetsAsync.
Namespace: Amazon.WAFRegional
Assembly: AWSSDK.WAFRegional.dll
Version: 3.x.y.z
Container for the necessary parameters to execute the ListRegexPatternSets service method.
.NET Framework:
Supported in: 4.5, 4.0, 3.5
Portable Class Library:
Supported in: Windows Store Apps
Supported in: Windows Phone 8.1
Supported in: Xamarin Android
Supported in: Xamarin iOS (Unified)
Supported in: Xamarin.Forms
HH\array_key_cast
array_key_cast() can be used to convert a given value to the equivalent that would be used if that value was used as a key in an array
namespace HH; function array_key_cast( mixed $key, ): arraykey;
An integer is returned unchanged. A boolean, float, or resource is cast to an integer (using standard semantics). A null is converted to an empty string. A string is converted to an integer if it represents an integer value, returned unchanged otherwise.
For object, array, vec, dict, or keyset values, an InvalidArgumentException is thrown (as these cannot be used as array keys).
Parameters
mixed $key - The value to be converted.
Return Values
arraykey - Returns the converted value.
The Suez Crisis of 1956 has been commonly seen as a turning point in post-war world history, the moment when Britain's pretension to world power status was stripped away, and when Egypt became the leader of the Arab world, an event which triggered a radical change in the relations between Israel and its Arab neighbours. The impacts of the Suez crisis are, however, perhaps more ambiguous than would appear at first sight, especially when one examines the background of the crisis. Indeed, since the Second World War, we can notice that the British Empire weakened, the United States developed new global interests in Egypt and in the Middle East, the forces of nationalism were emerging and the tensions of Arab-Israeli conflicts were already strong. After such examinations, we are led to wonder whether the Suez Crisis really triggered changes in Britain, Egypt and Israel or if the crisis only reflected a former trend and accelerated these transformations. We will thus examine for each country what the impacts of the crisis were, and if these were really caused by the Suez Canal crisis.
[...] For instance, in 1957, the Gold Coast became independent but this had long been in train. Indeed, historians are divided over the influence the Suez crisis had on Britain's role in the world. One of the most controversial issues is the suggestion that Suez was crucial in the "descent from power" of Britain after the Second World War. In fact, it seems that the Suez crisis was a turning point in that it reflected the truth that Britain could not sustain a great power role without the support of the United States. [...]
[...] Finally, what can be said with certainty is that Britain was weaker after Suez precisely because the crisis revealed to all, in a lightning flash, Britain's weakness. In a paper presented at the Middle East Institute in Washington in April 1961, Charles Issawi described the nationalization of the Canal as a "major landmark along the road towards Egyptianisation, industrialization, and state control". From a short-term perspective, the consequences of the Suez crisis on the Egyptian economy were, at first, the compensation of the shareholders. [...]
[...] However, as it intervened during the Suez war, it also underwent the impacts of the crisis. The event does not appear to have had any direct political repercussion but it did have consequences in its international relations. Egypt had consistently denied Israel the use of the canal and the denial continued even after its nationalization. Also, when in 1959 two Israeli cargoes were detained by the Egyptian authorities, the UN Secretary-General Dag Hammarskjöld decided to hold meetings in order to settle an agreement with the Egyptians with regard to Israeli cargoes. [...]
[...] After having examined the changes which occurred after the crisis, we can conclude that the Suez Canal crisis had indeed many impacts and did create some transformations in Britain, Egypt and Israel, but as regards its major impact on each country, that is to say the disintegration of the British Empire, the growing influence and leadership of Egypt and the tension between Israel and Arab states, the Suez crisis only seemed to have reflected and accelerated pre-existing trends. Bibliography: D. Carlton, Britain and the Suez crisis (1988). G. Lenczowski, The Middle East in World Affairs (1980). Wm Roger Lewis and Roger Owen, Suez 1956: the crisis and its consequences (1989). Christian Pineau, 1956: Suez (1976). [...]
[...] Mourad M. Wahba, The Role of the State in the Egyptian Economy, 1945-81 (D. Phil, Oxford, 1986); Albert Hourani, 'Conclusion', Suez 1956: the crisis and its consequences, pp. 393-411. [...]
Assessing Channel Commission to Owners
Some channels, like Expedia.com and Booking.com, charge a commission on every booking. Some property managers would like to get reimbursed for that from their owners. This support article illustrates how to pass through channel commissions to owners via the owner statement.
Go to Modules > Property Management
...select the owner and then navigate to the Property tab.
There you will find a setting to "Assess Channel Commissions to Owner", if set to Yes, the channel commissions will pass through to the owner statement.
How to Set Channel Commissions
Navigate to settings > channel management > integrations
Currently we support commission pass-through only for Booking.com and Expedia.com. Whatever commission you set here will be increased by 2% to reflect the 2% Lodgix charges you on top of the channel commission. So if Booking.com charges you 15%, and Lodgix charges you 2%, and you want to pass the full 17% through to the owner, then you would enter the channel commission of 15% here and the Lodgix 2% will just be added on to that.
How To's
From Yate Documentation
Yate has multiple routing modules and signalling modules. Here you can find some how to's for configuring Yate using different modules.
Routing
The most popular scenarios for routing:
Yate configuration as Server and / or Client
Various configurations for Yate to act as a Server and as a Client using different protocols.
Call detail records
Below are the modules and some tips you can use when writing call logs. You can use these modules to obtain billing information.
Monitoring and debugging Yate
Some examples of how to monitor and enable debugging in Yate and the modules involved in these actions.
Miscellaneous
VoIP to PSTN gateway
SS7 Setups
Troubleshooting
See also
Configuration¶
All configuration can be done by adding configuration files. They are looked for in:
-
/etc/luigi/client.cfg
-
luigi.cfg (or its legacy name client.cfg) in your current working directory
-
LUIGI_CONFIG_PATH environment variable
in increasing order of preference. The order only matters in case of key conflicts (see docs for ConfigParser.read). These files are meant for both the client and
luigid. If you decide to specify your own configuration you should make sure that both the client and
luigid load it properly.
The config file is broken into sections, each controlling a different part of the config. Example configuration file:
[hadoop]
version=cdh4
streaming-jar=/usr/lib/hadoop-xyz/hadoop-streaming-xyz-123.jar

[core]
scheduler_host=luigi-host.mycompany.foo
Parameters from config Ingestion¶
All parameters can be overridden from configuration files. For instance if you have a Task definition:
class DailyReport(luigi.contrib.hadoop.JobTask):
    date = luigi.DateParameter(default=datetime.date.today())
    # ...
Then you can override the default value for
DailyReport().date by providing
it in the configuration:
[DailyReport]
date=2012-01-01
Configuration classes¶
Using the Parameters from config Ingestion method, we derive the conventional way to do global configuration. Imagine this configuration.
[mysection]
option=hello
intoption=123
We can create a
Config class:
import luigi

# Config classes should be camel cased
class mysection(luigi.Config):
    option = luigi.Parameter(default='world')
    intoption = luigi.IntParameter(default=555)

mysection().option
mysection().intoption
Configurable options¶
Luigi comes with a lot of configurable options. Below, we describe each section and the parameters available within it.
[core]¶
These parameters control core Luigi behavior, such as error e-mails and interactions between the worker and scheduler.
- default-scheduler-host
- Hostname of the machine running the scheduler. Defaults to localhost.
- default-scheduler-port
- Port of the remote scheduler api process. Defaults to 8082.
- default-scheduler-url
- Full path to the remote scheduler. Defaults to http://localhost:8082/ (matching the default scheduler host and port above). For TLS support use the https URL scheme (note: you will have to terminate TLS using an HTTP proxy). You can also use this to connect to a local Unix socket using the non-standard URI scheme http+unix, for example: http+unix://%2Fvar%2Frun%2Fluigid%2Fluigid.sock/
- hdfs-tmp-dir
- Base directory in which to store temporary files on hdfs. Defaults to tempfile.gettempdir()
- history-filename
- If set, specifies a filename for Luigi to write stuff (currently just job id) to in mapreduce job’s output directory. Useful in a configuration where no history is stored in the output directory by Hadoop.
- log_level
- The default log level to use when no logging_conf_file is set. Must be a valid name of a Python log level. Default is
DEBUG.
- logging_conf_file
- Location of the logging configuration file.
- max_reschedules
- The maximum number of times that a job can be automatically rescheduled by a worker before it will stop trying. Workers will reschedule a job if it is found to not be done when attempting to run a dependent job. This defaults to 1.
- max_shown_tasks
New in version 1.0.20.
The maximum number of tasks returned in a task_list api call. This will restrict the number of tasks shown in task lists in the visualiser. Small values can alleviate frozen browsers when there are too many done tasks. This defaults to 100000 (one hundred thousand).
- max_graph_nodes
New in version 2.0.0.
The maximum number of nodes returned by a dep_graph or inverse_dep_graph api call. Small values can greatly speed up graph display in the visualiser by limiting the number of nodes shown. Some of the nodes that are not sent to the visualiser will still show up as dependencies of nodes that were sent. These nodes are given TRUNCATED status.
- no_configure_logging
- If true, logging is not configured. Defaults to false.
- parallel_scheduling
- If true, the scheduler will compute complete functions of tasks in parallel using multiprocessing. This can significantly speed up scheduling, but requires that all tasks can be pickled. Defaults to false.
- parallel-scheduling-processes
- The number of processes to use for parallel scheduling. If not specified the default number of processes will be the total number of CPUs available.
- rpc-connect-timeout
- Number of seconds to wait before timing out when making an API call. Defaults to 10.0
- rpc-retry-attempts
- The maximum number of retries to connect the central scheduler before giving up. Defaults to 3
- rpc-retry-wait
- Number of seconds to wait before the next attempt will be started to connect to the central scheduler between two retry attempts. Defaults to 30
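As an illustration only, a [core] section combining several of the options above might look like the following; the host name and values are placeholders, not recommended settings:

[core]
default-scheduler-host=luigi.example.internal
default-scheduler-port=8082
log_level=INFO
parallel_scheduling=true
rpc-retry-attempts=5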
[worker]¶
These parameters control Luigi worker behavior.
- count_uniques
- If true, workers will only count unique pending jobs when deciding whether to stay alive. So if a worker can’t get a job to run and other workers are waiting on all of its pending jobs, the worker will die. worker-keep-alive must be true for this to have any effect. Defaults to false.
- keep_alive
- If true, workers will stay alive when they run out of jobs to run, as long as they have some pending job waiting to be run. Defaults to false.
- ping_interval
- Number of seconds to wait between pinging scheduler to let it know that the worker is still alive. Defaults to 1.0.
- task_limit
New in version 1.0.25.
Maximum number of tasks to schedule per invocation. Upon exceeding it, the worker will issue a warning and proceed with the workflow obtained thus far. Prevents incidents due to spamming of the scheduler, usually accidental. Default: no limit.
- timeout
New in version 1.0.20.
Number of seconds after which to kill a task which has been running for too long. This provides a default value for all tasks, which can be overridden by setting the worker-timeout property in any task. This only works when using multiple workers, as the timeout is implemented by killing worker subprocesses. Default value is 0, meaning no timeout.
- wait_interval
- Number of seconds for the worker to wait before asking the scheduler for another job after the scheduler has said that it does not have any available jobs.
- wait_jitter
- Size of jitter to add to the worker wait interval such that the multiple workers do not ask the scheduler for another job at the same time. Default: 5.0
- max_reschedules
- Maximum number of times to reschedule a failed task. Default: 1
- retry_external_tasks
- If true, incomplete external tasks will be re-checked for completeness while the worker is running, so dependent tasks can proceed once the external data appears. Note: Every time the task remains incomplete, it will count as FAILED, so normal retry logic applies (see: retry_count and retry_delay). This setting works best with worker-keep-alive: true. If false, external tasks will only be evaluated when Luigi is first invoked. In this case, Luigi will not check whether external dependencies are satisfied while a workflow is in progress, so dependent tasks will remain PENDING until the workflow is reinvoked. Defaults to false for backwards compatibility.
- no_install_shutdown_handler
- By default, workers will stop requesting new work and finish running pending tasks after receiving a SIGUSR1 signal. This provides a hook for gracefully shutting down workers that are in the process of running (potentially expensive) tasks. If set to true, Luigi will NOT install this shutdown hook on workers. Note this hook does not work on Windows operating systems, or when jobs are launched outside the main execution thread. Defaults to false.
- send_failure_email
- Controls whether the worker will send e-mails on task and scheduling failures. If set to false, workers will only send e-mails on framework errors during scheduling and all other e-mail must be handled by the scheduler. Defaults to true.
- check_unfulfilled_deps
- If true, the worker checks for completeness of dependencies before running a task. In case unfulfilled dependencies are detected, an exception is raised and the task will not run. This mechanism is useful to detect situations where tasks do not create their outputs properly, or when targets were removed after the dependency tree was built. It is recommended to disable this feature only when the completeness checks are known to be bottlenecks, e.g. when the
exists() calls of the dependencies’ outputs are resource-intensive. Defaults to true.
- force_multiprocessing
- By default, luigi uses multiprocessing when more than one worker process is requested. When set to true, multiprocessing is used independent of the number of workers. Defaults to false.
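For example, a [worker] section enabling keep-alive behaviour and a task timeout could look like this (a sketch with arbitrary values, not a recommendation):

[worker]
keep_alive=true
count_uniques=true
ping_interval=1.0
wait_interval=1.0
timeout=3600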
[elasticsearch]¶
These parameters control use of elasticsearch
- marker-index
- Defaults to “update_log”.
- marker-doc-type
- Defaults to “entry”.
[email]¶
General parameters
- force-send
- If true, e-mails are sent in all run configurations (even if stdout is connected to a tty device). Defaults to False.
- format
Type of e-mail to send. Valid values are “plain”, “html” and “none”. When set to html, tracebacks are wrapped in <pre> tags to get fixed-width font. When set to none, no e-mails will be sent.
Default value is plain.
- method
Valid values are “smtp”, “sendgrid”, “ses” and “sns”. SES and SNS are services of Amazon web services. SendGrid is an email delivery service. The default value is “smtp”.
In order to send messages through Amazon SNS or SES set up your AWS config files or run Luigi on an EC2 instance with proper instance profile.
In order to use sendgrid, fill in your sendgrid username and password in the [sendgrid] section.
In order to use smtp, fill in the appropriate fields in the [smtp] section.
- prefix
- Optional prefix to add to the subject line of all e-mails. For example, setting this to “[LUIGI]” would change the subject line of an e-mail from “Luigi: Framework error” to “[LUIGI] Luigi: Framework error”
- receiver
Recipient of all error e-mails. If this is not set, no error e-mails are sent when Luigi crashes unless the crashed job has owners set. If Luigi is run from the command line, no e-mails will be sent unless output is redirected to a file.
Set it to SNS Topic ARN if you want to receive notifications through Amazon SNS. Make sure to set method to sns in this case too.
- sender
- User name in from field of error e-mails. Default value: luigi-client@<server_name>
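As a sketch, error e-mails over plain SMTP could be configured roughly as below (assuming the [email] section shown above); the host, addresses and credentials are placeholders:

[email]
method=smtp
receiver=alerts@example.com
sender=luigi@example.com
prefix=[LUIGI]

[smtp]
host=smtp.example.com
port=587
username=luigi
password=secret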
[batch_notifier]¶
Parameters controlling the contents of batch notifications sent from the scheduler
- email_interval
- Number of minutes between e-mail sends. Making this larger results in fewer, bigger e-mails. Defaults to 60.
- batch_mode
Controls how tasks are grouped together in the e-mail. Suppose we have the following sequence of failures:
- TaskA(a=1, b=1)
- TaskA(a=1, b=1)
- TaskA(a=2, b=1)
- TaskA(a=1, b=2)
- TaskB(a=1, b=1)
For any setting of batch_mode, the batch e-mail will record 5 failures and mention them in the subject. The difference is in how they will be displayed in the body. Here are example bodies with error_messages set to 0.
“all” only groups together failures for the exact same task:
- TaskA(a=1, b=1) (2 failures)
- TaskA(a=1, b=2) (1 failure)
- TaskA(a=2, b=1) (1 failure)
- TaskB(a=1, b=1) (1 failure)
“family” groups together failures for tasks of the same family:
- TaskA (4 failures)
- TaskB (1 failure)
“unbatched_params” groups together tasks that look the same after removing batched parameters. So if TaskA has a batch_method set for parameter a, we get the following:
- TaskA(b=1) (3 failures)
- TaskA(b=2) (1 failure)
- TaskB(a=1, b=2) (1 failure)
Defaults to “unbatched_params”, which is identical to “all” if you are not using batched parameters.
- error_lines
- Number of lines to include from each error message in the batch e-mail. This can be used to keep e-mails shorter while preserving the more useful information usually found near the bottom of stack traces. This can be set to 0 to include all lines. If you don’t wish to see error messages, instead set error_messages to 0. Defaults to 20.
- error_messages
- Number of messages to preserve for each task group. As most tasks that fail repeatedly do so for similar reasons each time, it’s not usually necessary to keep every message. This controls how many messages are kept for each task or task group. The most recent error messages are kept. Set to 0 to not include error messages in the e-mails. Defaults to 1.
- group_by_error_messages
- Quite often, a system or cluster failure will cause many disparate task types to fail for the same reason. This can cause a lot of noise in the batch e-mails. This cuts down on the noise by listing items with identical error messages together. Error messages are compared after limiting by error_lines. Defaults to true.
[hadoop]¶
Parameters controlling basic hadoop tasks
- command
- Name of command for running hadoop from the command line. Defaults to “hadoop”
- python-executable
- Name of command for running python from the command line. Defaults to “python”
- scheduler
- Type of scheduler to use when scheduling hadoop jobs. Can be “fair” or “capacity”. Defaults to “fair”.
- streaming-jar
- Path to your streaming jar. Must be specified to run streaming jobs.
- version
- Version of hadoop used in your cluster. Can be “cdh3”, “chd4”, or “apache1”. Defaults to “cdh4”.
[hdfs]¶
Parameters controlling the use of snakebite to speed up hdfs queries.
- client
- Client to use for most hadoop commands. Options are “snakebite”, “snakebite_with_hadoopcli_fallback”, “webhdfs” and “hadoopcli”. Snakebite is much faster, so use of it is encouraged. webhdfs is fast and works with Python 3 as well, but has not been used that much in the wild. Both snakebite and webhdfs requires you to install it separately on the machine. Defaults to “hadoopcli”.
- client_version
- Optionally specifies hadoop client version for snakebite.
- effective_user
- Optionally specifies the effective user for snakebite.
- namenode_host
- The hostname of the namenode. Needed for snakebite if snakebite_autoconfig is not set.
- namenode_port
- The port used by snakebite on the namenode. Needed for snakebite if snakebite_autoconfig is not set.
- snakebite_autoconfig
- If true, attempts to automatically detect the host and port of the namenode for snakebite queries. Defaults to false.
- tmp_dir
- Path to where Luigi will put temporary files on hdfs
[hive]¶
Parameters controlling hive tasks
- command
- Name of the command used to run hive on the command line. Defaults to “hive”.
- hiverc-location
- Optional path to hive rc file.
- metastore_host
- Hostname for metastore.
- metastore_port
- Port for hive to connect to metastore host.
- release
- If set to “apache”, uses a hive client that better handles apache hive output. All other values use the standard client Defaults to “cdh4”.
[kubernetes]¶
Parameters controlling Kubernetes Job Tasks
- auth_method
- Authorization method to access the cluster. Options are “kubeconfig” or “service-account”
- kubeconfig_path
- Path to kubeconfig file, for cluster authentication. It defaults to
~/.kube/config, which is the default location when using minikube. When auth_method is “service-account” this property is ignored.
- max_retrials
- Maximum number of retrials in case of job failure.
[mysql]¶
Parameters controlling use of MySQL targets
- marker-table
- Table in which to store status of table updates. This table will be created if it doesn’t already exist. Defaults to “table_updates”.
[postgres]¶
Parameters controlling the use of Postgres targets
- local-tmp-dir
- Directory in which to temporarily store data before writing to postgres. Uses system default if not specified.
- marker-table
- Table in which to store status of table updates. This table will be created if it doesn’t already exist. Defaults to “table_updates”.
[redshift]¶
Parameters controlling the use of Redshift targets
- marker-table
- Table in which to store status of table updates. This table will be created if it doesn’t already exist. Defaults to “table_updates”.
[resources]¶
This section can contain arbitrary keys. Each of these specifies the amount of a global resource that the scheduler can allow workers to use. The scheduler will prevent running jobs with resources specified from exceeding the counts in this section. Unspecified resources are assumed to have limit 1. Example resources section for a configuration with 2 hive resources and 1 mysql resource:
[resources]
hive=2
mysql=1
Note that it was not necessary to specify the 1 for mysql here, but it is good practice to do so when you have a fixed set of resources.
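On the task side, a task declares how much of each resource it consumes through its resources attribute. A minimal sketch (the task and resource names here are made up for illustration):

import luigi

class HiveQueryTask(luigi.Task):
    # this task occupies one of the two "hive" slots and the single "mysql" slot
    resources = {'hive': 1, 'mysql': 1}

    def run(self):
        ...  # run the query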
[retcode]¶
Configure return codes for the Luigi binary. In the case of multiple return codes that could apply, for example a failing task and missing data, the numerically greatest return code is returned.
We recommend that you copy this set of exit codes to your
luigi.cfg file:
[retcode]
# The following return codes are the recommended exit codes for Luigi
# They are in increasing level of severity (for most applications)
already_running=10
missing_data=20
not_run=25
task_failed=30
scheduling_error=35
unhandled_exception=40
- already_running
- This can happen in two different cases. Either the local lock file was taken at the time the invocation starts up, or the central scheduler has reported that some tasks could not be run, because other workers are already running the tasks.
- missing_data
- For when an
ExternalTask is not complete, and this caused the worker to give up. As an alternative to fiddling with this, see the [worker] keep_alive option.
- not_run
- For when a task is not granted run permission by the scheduler. Typically because of lack of resources, because the task has been already run by another worker or because the attempted task is in DISABLED state. Connectivity issues with the central scheduler might also cause this. This does not include the cases for which a run is not allowed due to missing dependencies (missing_data) or due to the fact that another worker is currently running the task (already_running).
- task_failed
- For signaling that there were tasks last known to have failed, typically because some exception has been raised.
- scheduling_error
- For when a task’s
complete() or requires() method fails with an exception, or when the limit number of tasks is reached.
- unhandled_exception
- For internal Luigi errors. Defaults to 4, since this type of error probably will not recover over time.
If you customize return codes, prefer to set them in range 128 to 255 to avoid conflicts. Return codes in range 0 to 127 are reserved for possible future use by Luigi contributors.
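A wrapper script can branch on these codes. The sketch below assumes the recommended values from the example above and uses a hypothetical module/task name:

import subprocess
import sys

# run a luigi workflow; the module and task names here are placeholders
result = subprocess.run(["luigi", "--module", "my_module", "MyTask"])

if result.returncode == 0:
    print("workflow finished")
elif result.returncode in (10, 25):
    print("nothing to do right now (already running / not run)")
elif result.returncode == 20:
    print("input data missing")
else:
    # 30, 35, 40: failures that likely need attention
    sys.exit(result.returncode)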
[scalding]¶
Parameters controlling running of scalding jobs
- scala-home
- Home directory for scala on your machine. Defaults to either SCALA_HOME or /usr/share/scala if SCALA_HOME is unset.
- scalding-home
- Home directory for scalding on your machine. Defaults to either SCALDING_HOME or /usr/share/scalding if SCALDING_HOME is unset.
- scalding-provided
- Provided directory for scalding on your machine. Defaults to either SCALDING_HOME/provided or /usr/share/scalding/provided
- scalding-libjars
- Libjars directory for scalding on your machine. Defaults to either SCALDING_HOME/libjars or /usr/share/scalding/libjars
[scheduler]¶
Parameters controlling scheduler behavior
- batch_emails
- Whether to send batch e-mails for failures and disables rather than sending immediate disable e-mails and just relying on workers to send immediate batch e-mails. Defaults to false.
- disable-hard-timeout
- Hard time limit after which tasks will be disabled by the server if they fail again, in seconds. It will disable the task if it fails again after this amount of time. E.g. if this was set to 600 (i.e. 10 minutes), and the task first failed at 10:00am, the task would be disabled if it failed again any time after 10:10am. Note: This setting does not consider the values of the retry_count or disable-window-seconds settings.
- retry_count
- Number of times a task can fail within disable-window-seconds before the scheduler will automatically disable it. If not set, the scheduler will not automatically disable jobs.
- disable-persist-seconds
- Number of seconds for which an automatic scheduler disable lasts. Defaults to 86400 (1 day).
- disable-window-seconds
- Number of seconds during which retry_count failures must occur in order for an automatic disable by the scheduler. The scheduler forgets about disables that have occurred longer ago than this amount of time. Defaults to 3600 (1 hour).
- record_task_history
- If true, stores task history in a database. Defaults to false.
- remove_delay
- Number of seconds to wait before removing a task that has no stakeholders. Defaults to 600 (10 minutes).
- retry_delay
- Number of seconds to wait after a task failure to mark it pending again. Defaults to 900 (15 minutes).
- state_path
Path in which to store the Luigi scheduler’s state. When the scheduler is shut down, its state is stored in this path. The scheduler must be shut down cleanly for this to work, usually with a kill command. If the kill command includes the -9 flag, the scheduler will not be able to save its state. When the scheduler is started, it will load the state from this path if it exists. This will restore all scheduled jobs and other state from when the scheduler last shut down.
Sometimes this path must be deleted when restarting the scheduler after upgrading Luigi, as old state files can become incompatible with the new scheduler. When this happens, all workers should be restarted after the scheduler both to become compatible with the updated code and to reschedule the jobs that the scheduler has now forgotten about.
This defaults to /var/lib/luigi-server/state.pickle
- worker_disconnect_delay
- Number of seconds to wait after a worker has stopped pinging the scheduler before removing it and marking all of its running tasks as failed. Defaults to 60.
- pause_enabled
- If false, disables pause/unpause operations and hides the pause toggle from the visualiser.
- send_messages
- When true, the scheduler is allowed to send messages to running tasks and the central scheduler provides a simple prompt per task to send messages. Defaults to true.
[sendgrid]¶
These parameters control sending error e-mails through SendGrid.
- password
- Password used for sendgrid login
- username
- Name of the user for the sendgrid login
[smtp]¶
These parameters control the smtp server setup.
- host
- Hostname for sending mail through smtp. Defaults to localhost.
- local_hostname
- If specified, overrides the FQDN of localhost in the HELO/EHLO command.
- no_tls
- If true, connects to smtp without TLS. Defaults to false.
- password
- Password to log in to your smtp server. Must be specified for username to have an effect.
- port
- Port number for smtp on smtp_host. Defaults to 0.
- ssl
- If true, connects to smtp through SSL. Defaults to false.
- timeout
- Sets the number of seconds after which smtp attempts should time out. Defaults to 10.
- username
- Username to log in to your smtp server, if necessary.
[spark]¶
Parameters controlling the default execution of
SparkSubmitTask and
PySparkTask:
Deprecated since version 1.1.1:
SparkJob,
Spark1xJob and
PySpark1xJob
are deprecated. Please use
SparkSubmitTask or
PySparkTask.
- spark-submit
- Command to run in order to submit spark jobs. Default: spark-submit
- master
- Master url to use for spark-submit. Example: local[*], spark://masterhost:7077. Default: Spark default (Prior to 1.1.1: yarn-client)
- deploy-mode
- Whether to launch the driver programs locally (“client”) or on one of the worker machines inside the cluster (“cluster”). Default: Spark default
- jars
- Comma-separated list of local jars to include on the driver and executor classpaths. Default: Spark default
- packages
- Comma-separated list of packages to link to on the driver and executors
- py-files
- Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps. Default: Spark default
- files
- Comma-separated list of files to be placed in the working directory of each executor. Default: Spark default
- conf:
- Arbitrary Spark configuration property in the form Prop=Value|Prop2=Value2. Default: Spark default
- properties-file
- Path to a file from which to load extra properties. Default: Spark default
- driver-memory
- Memory for driver (e.g. 1000M, 2G). Default: Spark default
- driver-java-options
- Extra Java options to pass to the driver. Default: Spark default
- driver-library-path
- Extra library path entries to pass to the driver. Default: Spark default
- driver-class-path
- Extra class path entries to pass to the driver. Default: Spark default
- executor-memory
- Memory per executor (e.g. 1000M, 2G). Default: Spark default
Configuration for Spark submit jobs on Spark standalone with cluster deploy mode only:
- driver-cores
- Cores for driver. Default: Spark default
- supervise
- If given, restarts the driver on failure. Default: Spark default
Configuration for Spark submit jobs on Spark standalone and Mesos only:
- total-executor-cores
- Total cores for all executors. Default: Spark default
Configuration for Spark submit jobs on YARN only:
- executor-cores
- Number of cores per executor. Default: Spark default
- queue
- The YARN queue to submit to. Default: Spark default
- num-executors
- Number of executors to launch. Default: Spark default
- archives
- Comma separated list of archives to be extracted into the working directory of each executor. Default: Spark default
- hadoop-conf-dir
- Location of the hadoop conf dir. Sets HADOOP_CONF_DIR environment variable when running spark. Example: /etc/hadoop/conf
Extra configuration for PySparkTask jobs:
- py-packages
- Comma-separated list of local packages (in your python path) to be distributed to the cluster.
Parameters controlling the execution of SparkJob jobs (deprecated):
[task_history]¶
Parameters controlling storage of task history in a database
- db_connection
- Connection string for connecting to the task history db using sqlalchemy.
[execution_summary]¶
Parameters controlling execution summary of a worker
- summary-length
- Maximum number of tasks to show in an execution summary. If the value is 0, then all tasks will be displayed. Default value is 5.
[webhdfs]¶
- port
- The port to use for webhdfs. The normal namenode port is probably different from this one.
- user
- Perform file system operations as the specified user instead of $USER. Since this parameter is not honored by any of the other hdfs clients, you should think twice before setting this parameter.
Per Task Retry-Policy¶
Luigi also supports defining retry-policy per task.
class GenerateWordsFromHdfs(luigi.Task):
    retry_count = 2
    ...

class GenerateWordsFromRDBM(luigi.Task):
    retry_count = 5
    ...

class CountLetters(luigi.Task):
    def requires(self):
        return [GenerateWordsFromHdfs()]

    def run(self):
        yield GenerateWordsFromRDBM()
        ...
If none of the retry-policy fields is defined per task, the field values will be the defaults defined in the luigi config file.
To make luigi stick to the given retry-policy, be sure you run the luigi worker with the keep_alive config. Please check the
keep_alive config in the [worker] section.
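For reference, the corresponding config-file defaults might look roughly like the following sketch (the values are illustrative, not recommendations):

[worker]
keep_alive=true

[scheduler]
retry_count=2
retry_delay=600
disable-window-seconds=3600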
Luigi Patterns¶
Code Reuse¶
A convenient pattern is to have a dummy Task at the end of several dependency chains, so you can trigger a multitude of pipelines by specifying just one task in command line, similarly to how e.g. make works.
class AllReports(luigi.WrapperTask):
    date = luigi.DateParameter(default=datetime.date.today())

    def requires(self):
        yield SomeReport(self.date)
        yield SomeOtherReport(self.date)
        yield CropReport(self.date)
        yield TPSReport(self.date)
        yield FooBarBazReport(self.date)
This simple task will not do anything itself, but will invoke a bunch of other tasks. Per each invocation, Luigi will perform as many of the pending jobs as possible (those which have all their dependencies present).
You’ll need to use
WrapperTask for this instead of the usual Task class, because this job will not produce any output of its own, and as such needs a way to indicate when it’s complete. This class is used for tasks that only wrap other tasks and that by definition are done if all their requirements exist.
Triggering recurring tasks¶
A common requirement is to have a daily report (or something else) produced every night. Sometimes for various reasons tasks will keep crashing or lacking their required dependencies for more than a day though, which would lead to a missing deliverable for some date. Oops.
To ensure that the above AllReports task is eventually completed for every day (value of date parameter), one could e.g. add a loop in requires method to yield dependencies on the past few days preceding self.date. Then, so long as Luigi keeps being invoked, the backlog of jobs would catch up nicely after fixing intermittent problems.
Luigi actually comes with a reusable tool for achieving this, called
RangeDailyBase (resp.
RangeHourlyBase). Simply putting
luigi --module all_reports RangeDailyBase --of AllReports --start 2015-01-01
in your crontab will easily keep gaps from occurring from 2015-01-01
onwards. NB - it will not always loop over everything from 2015-01-01
till current time though, but rather a maximum of 3 months ago by
default - see
RangeDailyBase documentation for this and more knobs
for tweaking behavior. See also Monitoring below.
Efficiently triggering recurring tasks¶
RangeDailyBase, described above, is named like that because a more
efficient subclass exists,
RangeDaily (resp.
RangeHourly), tailored for
hundreds of task classes scheduled concurrently with contiguousness
requirements spanning years (which would incur redundant completeness
checks and scheduler overload using the naive looping approach.) Usage:
luigi --module all_reports RangeDaily --of AllReports --start 2015-01-01
It has the same knobs as RangeDailyBase, with some added requirements. Namely the task must implement an efficient bulk_complete method, or must be writing output to file system Target with date parameter value consistently represented in the file path.
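To sketch what such a method might look like: bulk_complete receives the candidate parameter values and returns the subset that is already complete, ideally with one cheap lookup instead of one exists() call per value. The helper function below is hypothetical:

class AllReports(luigi.WrapperTask):
    date = luigi.DateParameter(default=datetime.date.today())

    @classmethod
    def bulk_complete(cls, parameter_tuples):
        # list_already_generated_report_dates() is a made-up helper that
        # returns, in a single call, the dates for which output exists
        existing_dates = set(list_already_generated_report_dates())
        return [p for p in parameter_tuples if p in existing_dates]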
Backfilling tasks¶
Also a common use case, sometimes you have tweaked existing recurring task code and you want to schedule recomputation of it over an interval of dates for that or another reason. Most conveniently it is achieved with the above described range tools, just with both start (inclusive) and stop (exclusive) parameters specified:
luigi --module all_reports RangeDaily --of AllReportsV2 --start 2014-10-31 --stop 2014-12-25
Propagating parameters with Range¶
Some tasks you want to recur may include additional parameters which need to be configured.
The Range classes provide a parameter which accepts a
DictParameter
and passes any parameters onwards for this purpose.
luigi RangeDaily --of MyTask --start 2014-10-31 --of-params '{"my_string_param": "123", "my_int_param": 123}'
Alternatively, you can specify parameters at the task family level (as described here), however these will not appear in the task name for the upstream Range task which can have implications in how the scheduler and visualizer handle task instances.
luigi RangeDaily --of MyTask --start 2014-10-31 --MyTask-my-param 123
Batching multiple parameter values into a single run¶
Sometimes it’ll be faster to run multiple jobs together as a single batch rather than running them each individually. When this is the case, you can mark some parameters with a batch_method in their constructor to tell the worker how to combine multiple values. One common way to do this is by simply running the maximum value. This is good for tasks that overwrite older data when a newer one runs. You accomplish this by setting the batch_method to max, like so:
class A(luigi.Task):
    date = luigi.DateParameter(batch_method=max)
What’s exciting about this is that if you send multiple As to the
scheduler, it can combine them and return one. So if
A(date=2016-07-28),
A(date=2016-07-29) and
A(date=2016-07-30) are all ready to run, you will start running
A(date=2016-07-30). While this is running, the scheduler will show
A(date=2016-07-28),
A(date=2016-07-29) as batch running while
A(date=2016-07-30) is running. When
A(date=2016-07-30) is done
running and becomes FAILED or DONE, the other two tasks will be updated
to the same status.
If you want to limit how big a batch can get, simply set max_batch_size. So if you have
class A(luigi.Task):
    date = luigi.DateParameter(batch_method=max)
    max_batch_size = 10
then the scheduler will batch at most 10 jobs together. You probably do not want to do this with the max batch method, but it can be helpful if you use other methods. You can use any method that takes a list of parameter values and returns a single parameter value.
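For instance, a custom batch method could combine several values into one delimited string that the task then splits apart in run(). This is only a sketch, and the parameter and task names are made up:

def csv_batch(values):
    # combine many parameter values into a single comma-separated value
    return ','.join(values)

class ProcessIds(luigi.Task):
    ids = luigi.Parameter(batch_method=csv_batch)

    def run(self):
        for one_id in self.ids.split(','):
            ...  # handle each originally scheduled value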
If you have two max batch parameters, you’ll get the max values for both of them. If you have parameters that don’t have a batch method, they’ll be aggregated separately. So if you have a class like
class A(luigi.Task):
    p1 = luigi.IntParameter(batch_method=max)
    p2 = luigi.IntParameter(batch_method=max)
    p3 = luigi.IntParameter()
and you create tasks
A(p1=1, p2=2, p3=0),
A(p1=2, p2=3, p3=0),
A(p1=3, p2=4, p3=1), you’ll get them batched as
A(p1=2, p2=3, p3=0) and
A(p1=3, p2=4, p3=1).
Note that batched tasks do not take up [resources], only the task that ends up running will use resources. The scheduler only checks that there are sufficient resources for each task individually before batching them all together.
Tasks that regularly overwrite the same data source¶
If you are overwriting the same data source with every run, you’ll need to ensure that two batches can’t run at the same time. You can do this pretty easily by setting batch_method to max and setting a unique resource:
class A(luigi.Task):
    date = luigi.DateParameter(batch_method=max)
    resources = {'overwrite_resource': 1}
Now if you have multiple tasks such as
A(date=2016-06-01),
A(date=2016-06-02),
A(date=2016-06-03), the scheduler will just
tell you to run the highest available one and mark the lower ones as
batch_running. Using a unique resource will prevent multiple tasks from
writing to the same location at the same time if a new one becomes
available while others are running.
Avoiding concurrent writes to a single file¶
Updating a single file from several tasks is almost always a bad idea, and you need to be very confident that no other good solution exists before doing this. If, however, you have no other option, then you will probably at least need to ensure that no two tasks try to write to the file simultaneously.
By turning ‘resources’ into a Python property, it can return a value dependent on the task parameters or other dynamic attributes:
class A(luigi.Task):
    ...

    @property
    def resources(self):
        return { self.important_file_name: 1 }
Since, by default, resources have a usage limit of 1, no two instances of Task A will now run if they have the same important_file_name property.
Decreasing resources of running tasks¶
At scheduling time, the luigi scheduler needs to be aware of the maximum resource consumption a task might have once it runs. For some tasks, however, it can be beneficial to decrease the amount of consumed resources between two steps within their run method (e.g. after some heavy computation). In this case, a different task waiting for that particular resource can already be scheduled.
class A(luigi.Task):
    # set maximum resources a priori
    resources = {"some_resource": 3}

    def run(self):
        # do something
        ...

        # decrease consumption of "some_resource" by one
        self.decrease_running_resources({"some_resource": 1})

        # continue with reduced resources
        ...
Monitoring task pipelines¶
Luigi comes with some existing ways in
luigi.notifications to receive
notifications whenever tasks crash. Email is the most common way.
The above mentioned range tools for recurring tasks not only implement reliable scheduling for you, but also emit events which you can use to set up delay monitoring. That way you can implement alerts for when jobs are stuck for prolonged periods lacking input data or otherwise requiring attention.
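Besides e-mail, you can hook luigi’s task events directly. A minimal sketch of a failure handler follows; the notify() call is a placeholder for whatever alerting system you use:

import luigi

@luigi.Task.event_handler(luigi.Event.FAILURE)
def on_failure(task, exception):
    # forward the failure to your own monitoring/alerting; notify() is hypothetical
    notify("luigi task failed: {} ({})".format(task, exception))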
Atomic Writes Problem¶
A very common mistake made by luigi plumbers is to write data partially to the
final destination, that is, not atomically. The problem arises because
completion checks in luigi are exactly as naive as running
luigi.target.Target.exists(). And in many cases it just means to check if
a folder exist on disk. During the time we have partially written data, a task
depending on that output would think its input is complete. This can have
devastating effects, as in the thanksgiving bug.
The concept can be illustrated by imagining that we deal with data stored on local disk and by running commands:
# This is the BAD way
$ mkdir /outputs/final_output
$ big-slow-calculation > /outputs/final_output/foo.data
As stated earlier, the problem is that only partial data exists for a duration,
yet we consider the data to be
complete() because the
output folder already exists. Here is a robust version of this:
# This is the good way
$ mkdir /outputs/final_output-tmp-123456
$ big-slow-calculation > /outputs/final_output-tmp-123456/foo.data
$ mv --no-target-directory --no-clobber /outputs/final_output{-tmp-123456,}
$ [[ -d /outputs/final_output-tmp-123456 ]] && rm -r /outputs/final_output-tmp-123456
Indeed, the good way is not as trivial. It involves coming up with a unique
directory name and a pretty complex
mv line, the reason
mv need all
those is because we don’t want
mv to move a directory into a potentially
existing directory. A directory could already exist in exceptional cases, for
example when central locking fails and the same task would somehow run twice at
the same time. Lastly, in the exceptional case where the file was never moved,
one might want to remove the temporary directory that never got used.
Note that this was an example where the storage was on local disk. But for
every storage (hard disk file, hdfs file, database table, etc.) this procedure
will look different. But does every luigi user need to implement that complexity?
Nope, thankfully luigi developers are aware of these issues and luigi comes with many
built-in solutions. In the case where you’re dealing with a file system
(
FileSystemTarget), you should consider using
temporary_path(). For other targets, you
should ensure that the way you’re writing your final output directory is
atomic.
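For file system targets, a sketch of the temporary_path() approach looks roughly like this; the output path and the run_big_slow_calculation() helper are illustrative:

class BigCalculation(luigi.Task):
    def output(self):
        return luigi.LocalTarget("/outputs/final_output/foo.data")

    def run(self):
        # write to a temporary location; on a clean exit from the block,
        # luigi atomically moves it to the final output path
        with self.output().temporary_path() as tmp_path:
            run_big_slow_calculation(output_path=tmp_path)  # placeholder helper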
Sending messages to tasks¶
The central scheduler is able to send messages to particular tasks. When a running task accepts messages, it can access a multiprocessing.Queue object storing incoming messages. You can implement custom behavior to react and respond to messages:
class Example(luigi.Task):
    # common task setup
    ...

    # configure the task to accept all incoming messages
    accepts_messages = True

    def run(self):
        # this example runs some loop and listens for the
        # "terminate" message, and responds to all other messages
        for _ in some_loop():
            # check incoming messages
            if not self.scheduler_messages.empty():
                msg = self.scheduler_messages.get()
                if msg.content == "terminate":
                    break
                else:
                    msg.respond("unknown message")

        # finalize
        ...
Messages can be sent right from the scheduler UI which also displays responses (if any). Note that this feature is only available when the scheduler is configured to send messages (see the [scheduler] config), and the task is configured to accept them.
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
This is the response object from the DeleteSecurityGroup operation.
Namespace: Amazon.EC2.Model
Assembly: AWSSDK.EC2.dll
Version: 3.x.y.z
The DeleteSecurityGroupResponse type exposes the following members
This example deletes the specified security group.
var response = client.DeleteSecurityGroup(new DeleteSecurityGroupRequest { GroupId = "sg-903004f8" });
How Windows Defender Credential Guard works
Applies to
- Windows 10
- Windows Server 2016
Prefer video? See Windows Defender Credential Guard Design in the Deep Dive into Windows Defender Credential Guard video series.
Kerberos, NTLM, and Credential manager isolate secrets by using virtualization-based security. Previous versions of Windows stored secrets in the Local Security Authority (LSA). Prior to Windows 10, the LSA stored secrets used by the operating system in its process memory. With Windows Defender Credential Guard enabled, the LSA process in the operating system talks to a new component called the isolated LSA process that stores and protects those secrets. Data stored by the isolated LSA process is protected using virtualization-based security and is not accessible to the rest of the operating system. LSA uses remote procedure calls to communicate with the isolated LSA process.
For security reasons, the isolated LSA process doesn't host any device drivers. Instead, it only hosts a small subset of operating system binaries that are needed for security and nothing else. All of these binaries are signed with a certificate that is trusted by virtualization-based security and these signatures are validated before launching the file in the protected environment.
When Windows Defender Credential Guard is enabled, NTLMv1, MS-CHAPv2, Digest, and CredSSP cannot use the signed-in credentials, so single sign-on does not work with these protocols.
When Windows Defender Credential Guard is enabled, Kerberos does not allow unconstrained Kerberos delegation or DES encryption, not only for signed-in credentials, but also prompted or saved credentials.
Here's a high-level overview on how the LSA is isolated by using virtualization-based security:
See also
Deep Dive into Windows Defender Credential Guard: Related videos
Credential Theft and Lateral Traversal
Virtualization-based security
Credentials protected by Windows Defender Credential Guard
Getting Started with DAML¶
The goal of this tutorial is to get you up and running with full-stack DAML development. We do this through the example of a simple social networking application, showing you three things:
- How to build and run the application
- The design of its different components (App Architecture)
- How to write a new feature for the app (Your First Feature)
We do not aim to be comprehensive in all DAML concepts and tools (covered in Writing DAML) or in all deployment options (see Deploying). For a quick overview of the most important DAML concepts used in this tutorial open the DAML cheat-sheet in a separate tab. The goal is that by the end of this tutorial, you’ll have a good idea of the following:
- What DAML contracts and ledgers are
- How a user interface (UI) interacts with a DAML ledger
- How DAML helps you build a real-life application fast.
With that, let’s get started!
Prerequisites¶
Please make sure that you have the DAML SDK, Java 8 or higher, and Visual Studio Code (the only supported IDE) installed as per instructions from our Installing the SDK page.
You will also need some common software tools to build and interact with the template project.
Git version control system
Yarn package manager for JavaScript. You have to have yarn version 1.10.0 or higher.
Note: Ubuntu 17.04 and higher come with
cmdtestpackage installed by default. If you are getting errors when installing yarn, you may want to run
sudo apt remove cmdtestfirst and then install yarn. More information can be found here as well as in the official yarn installation docs for Debian / Ubuntu
NodeJS in version 8.16 or higher. This will usually be installed automatically as part of installing Yarn.
Note: On Ubuntu 18.04, NodeJS 8.10 will be installed as part of installing Yarn which is too old. You can find instructions for installing newer versions at NodeSource.
A terminal application for command line interaction
Running the app¶
We’ll start by getting the app up and running, and then explain the different components which we will later extend.
First off, open a terminal and instantiate the template project.
daml new create-daml-app --template create-daml-app
This creates a new folder with contents from our template. To see
a list of all available templates run
daml new --list.
Change to the new folder:
cd create-daml-app
Next we need to compile the DAML code to a DAR file:
daml build
Once the DAR file is created you will see this message in terminal
Created .daml/dist/create-daml-app-0.1.0.dar.
Any commands starting with
daml are using the DAML Assistant, a command line tool in the DAML SDK for building and running DAML apps.
In order to connect the UI code to this DAML, we need to run a code generation step:
daml codegen js .daml/dist/create-daml-app-0.1.0.dar -o daml.js
Now, changing to the
ui folder, use Yarn to install the project dependencies:
cd ui
yarn install --force --frozen-lockfile
This step may take a couple of moments (it’s worth it!).
You should see
success Saved lockfile. in the output if everything worked as expected.
We can now run the app in two steps.
You’ll need two terminal windows running for this.
In one terminal, at the root of the
create-daml-app directory, run the command:
daml start
You will know that the command has started successfully when you see the
INFO com.daml.http.Main$ - Started server: ServerBinding(/127.0.0.1:7575) message in the terminal. The command does a few things:
- Compiles the DAML code to a DAR file as in the previous
daml build step.
- Starts an instance of the Sandbox, an in-memory ledger useful for development, loaded with our DAR.
- Starts a server for the HTTP JSON API, a simple way to run commands against a DAML ledger (in this case the running Sandbox).
We’ll leave these processes running to serve requests from our UI.
In a second terminal, navigate to the
create-daml-app/ui folder and run the application:
cd ui
yarn start
This starts the web UI connected to the running Sandbox and JSON API server.
The command should automatically open a window in your default browser at http://localhost:3000.
Once the web UI has been compiled and started, you should see
Compiled successfully! in your terminal.
If it doesn’t, just open that link in a web browser.
(Depending on your firewall settings, you may be asked whether to allow the app to receive network connections. It is safe to accept.)
You should now see the login page for the social network. For simplicity of this app, there is no password or sign-up required.
First enter your name and click Log in.
You should see the main screen with two panels. One for the users you are following and one for your followers. Initially these are both empty as you are not following anyone and you don’t have any followers! Go ahead and start following users by typing their usernames in the text box and clicking on the Follow button in the top panel.
You'll notice that the users you just started following appear in the Following panel. However they do not yet appear in the Network panel. This is either because they have not signed up and are not parties on the ledger, or because they have not yet started following you. This social network is similar to Twitter and Instagram, where by following someone, say Alice, you make yourself visible to her but not vice versa. We will see how we encode this in DAML in the next section.
To make this relationship reciprocal, open a new browser window/tab at. (Having separate windows/tabs allows you to see both you and the screen of the user you are following at the same time.) Once you log in as the user you are following - Alice, you'll notice your name in her network. In fact, Alice can see the entire list of users you are following in the Network panel. This is because this list is part of the user data that became visible when you started following her.
When Alice starts following you, you can see her in your network as well. Just switch to the window where you are logged in as yourself - the network should update automatically.
Play around more with the app at your leisure: create new users and start following more users. Observe when a user becomes visible to others - this will be important to understanding DAML’s privacy model later. When you’re ready, let’s move on to the architecture of our app. | https://docs.daml.com/1.4.0-snapshot.20200729.4851.0.224ab362/getting-started/index.html | 2021-01-16T05:36:44 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.daml.com |
Hello All,
We are running MECM 1910, and I am trying to deploy a Windows 10 1909 OS using a Task Sequence. I have included drivers for multiple device models in the Task Sequence, and the Task Sequence deploys successfully.
**But the drivers are installing after first login. It takes around 10 minutes to install the drivers, and then the device prompts for a restart.**
In the Task Sequence I have already added a Restart task after driver installation, but I am still facing this problem. The drivers should install before the user logs in to the device, but that is not happening.
Please help me to fix this issue. | https://docs.microsoft.com/en-us/answers/questions/136727/sccm-osd-ts-drivers-are-installing-after-first-log.html?sort=oldest | 2021-01-16T06:53:19 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.microsoft.com |
A SIP account (VoIP account) is a user account in a SIP phone network. It allows you to create own virtual-PBX and make/take calls without any geographical limits. All you need is just a computer or a smartphone, a headset and the Internet.
With any Ringostat plan, you can create as many SIP accounts as you need.
Setting the Cloud PBX
First, go to Virtual PBX > SIP-accounts settings and create a SIP account with the following details:
- Name - a login of a sip-account for authentication;
- Password - a password of a sip-account for authentication;
- Internal number - a number used for internal calls between the agents.
Please, note: your password must be at least 8 characters long, with at least one digit, and both lowercase and uppercase letters.
Name of SIP-account always includes the name of the project and begins with it. For example, for project testsite.com the beginning of SIP-account name will be testsitecom_. You can add anything after testsitecom_.
After saving the SIP account credentials, hover the mouse over the information field “?”. It shows the information needed for connecting the SIP account to a softphone application:
- Login - a login of a sip-account for authentication;
- Password - a password of a sip-account for authentication;
- Gateway - a gateway sip-account is registered at;
- Port number - port a sip-account connects through;
- Codecs - a codec sip-account connects with.
The most popular apps for making calls right from your computer or smartphone:
- PhonerLite;
- Zoiper;
- 3CX.
Setting up the SIP-account in your PBX:
In case you are using a firewall for your internet connection settings, please add our servers IP-addresses to the whitelist:
- 138.201.203.93.
- 176.9.24.184
- 185.60.135.92
- 88.204.205.58
- 209.126.106.57
Extra settings for connecting SIP-account
In case you have any issues with registering of your SIP-account and you use NAT, you will need to configure additional STUN-server:
- STUN-server status – on
- STUN-server – stun.ringostat.com
- STUN-server port - 3479
In case you are using Asterisk PBX, here are the tips for configuring your SIP account in it.
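For illustration only, a registration in Asterisk's classic chan_sip configuration (sip.conf) could look like the sketch below. The gateway hostname, account name, and password are placeholders — replace them with the values shown in your SIP account details — and newer Asterisk deployments may use pjsip.conf instead of sip.conf:

[general]
register => testsitecom_agent1:YourPassword@sip.gateway.example   ; placeholder credentials and gateway

[ringostat-trunk]
type=peer
host=sip.gateway.example        ; gateway from your SIP account details
defaultuser=testsitecom_agent1  ; login (SIP account name)
fromuser=testsitecom_agent1
secret=YourPassword             ; SIP account password
context=from-ringostat
qualify=yes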
In case you need any assistance feel free to contact our Support team via chat or through the email [email protected]. | https://docs.ringostat.com/knowledge-base/article/sip-account-settings | 2021-01-16T06:27:28 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.ringostat.com |
About the Centre Position and Cell Size of a Grid
When you launch the Grid Wizard, you are first prompted with the Grid Parameters dialog, in which you can set up the size of the grid, but also its centre position as well as the size of each cell in the grid.
The coordinates used for the centre position and the cell size are in fields, and are based on the field grid used to position objects in your scene. This is a traditional, 12 fields, 4:3 animation grid, with its centre coordinates being (0, 0), and with horizontal coordinates incrementing from left to right, and vertical coordinates incrementing from bottom to top. You can see how a traditional animation grid, when laid out on your scene, corresponds to the grid displayed in the Grid wizard in the following side by side comparison:
To lay out an animation grid over your scene, simply add a Grid node and connect it to your scene's main composite
When you display a 2D point widget created with the Grid Wizard, it will appear somewhere inside the area covered by the grid it was created with. When you move it around, it will be able to move up to the edges of that grid. Hence, by adjusting the centre position and cell size of your grid before you create it, you can make it so the position of the part of your character rig you want to control closely matches the position of the 2D point as you move it.
For example, if we want to create a grid of the following set of poses, we need to create a grid of 6 vertical poses × 3 horizontal poses:
At the default centre position (0, 0) and cell size (2 fields × 2 fields), a grid of 6 × 3 poses would occupy this area of the scene:
An animator would have to move the 2D point widget in this rectangular area to control the direction in which the character is looking. This is not intuitive. What would be preferable is a grid like this:
To obtain such a result, you would first need to figure out the ideal centre position of the grid. In this case, it is the centre of the character's nose which, according to the grid, is in the middle on the horizontal axis, and 6 fields north on the vertical axis, making its exact coordinates (0, 6).
Then, you would need to adjust the cell size to obtain a less rectangular, more square-like grid. In this case, the default cell size was set to 1 horizontal field and 3 vertical fields, or 1 field × 3 fields. Although this makes vertical rectangular cells, since we have 6 columns for 3 rows, the resulting grid is nearly square-shaped.
By calculating and using precise coordinates, you can even create a 2D point master controller that appears to be attached to a specific articulation of your character. For example, let's say you are working with the four arm poses, and you want to create a 2D point widget that will be attached to the wrist:
You could spread those four wrist poses at the corners of a rectangle, grid, like so:
A rectangle can be represented by a simple grid of 2 poses x 2 poses, which has a single cell. The size of ts cell size would determine the size of the entire grid. Hence, you would simple have to figure out the size and centre coordinates and size of this rectangle, in fields, and create an gird of 2 x 2 poses, positioned at the centre of this rectangle, and with a cell size that is the size of this rectangle. In this case, the rectangle is 6 x 8 fields, and its centre is at (2.5, 2.5). | https://docs.toonboom.com/help/harmony-17/premium/master-controller/about-centre-cell-size-grid-wizard.html | 2021-01-16T05:21:30 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['../Resources/Images/HAR/Stage/Breakdown/master-controller-script-grid-parameters-ui.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/MasterController/interpolation-grid-example-scene-grid-vs-dialog-grid.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/MasterController/interpolation-grid-add-grid-node.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/Breakdown/Head-Rotation-Beaver-Grid.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/MasterController/interpolation-grid-example-default-center-pos-cells-size.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/MasterController/interpolation-grid-example-optimized-center-pos-cells-size.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/MasterController/interpolation-grid-example-arm-poses.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/MasterController/interpolation-grid-example-arm-poses-with-grid.png',
None], dtype=object) ] | docs.toonboom.com |
Using importlib_metadata¶
Overview¶
Let’s say you wanted to get the version string for a package you’ve installed using pip. We start by creating a virtual environment and installing something into it:
$ python3 -m venv example
$ source example/bin/activate
(example) $ pip install importlib_metadata
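A quick way to confirm the installation worked — assuming the package name used above — is to query its own version from the interpreter; the exact version string you see will differ:
(example) $ python
>>> from importlib_metadata import version
>>> version('importlib_metadata')
'1.7.0'  # whatever version pip installed for you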
Functional API¶
This package provides the following functionality via its public API.
Entry points¶
The entry_points() function returns a dictionary of all entry points, keyed by group. Entry points are represented by EntryPoint instances; each EntryPoint has .name, .group, and .value attributes and a .load() method to resolve the value. There are also .module, .attr, and .extras attributes for getting the components of the .value attribute:
>>> eps = entry_points()
>>> list(eps)
['console_scripts', 'distutils.commands', 'distutils.setup_keywords', 'egg_info.writers', 'setuptools.installation']
>>> scripts = eps['console_scripts']
>>> wheel = [ep for ep in scripts if ep.name == 'wheel'][0]
Distribution metadata¶
>>> from importlib_metadata import distribution
>>> d = distribution('wheel')
>>> d.metadata['Requires-Python']
'>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*'
By default importlib_metadata installs a finder for distribution packages found on the file system. This finder doesn’t actually find any packages, but it can find the packages’ metadata.
Action and Policy Examples
The examples in this section demonstrate how to configure rewrite to perform various useful tasks. The examples occur in the server room of Example Manufacturing Inc., a mid-sized manufacturing company that uses its Web site to manage a considerable portion of its sales, deliveries, and customer support.
Example Manufacturing has two domains: example.com for its Web site and email to customers, and example.net for its intranet. Customers use the Example Web site to place orders, request quotes, research products, and contact customer service and technical support.
As an important part of Example’s revenue stream, the Web site must respond quickly and keep customer data confidential. Example therefore has several Web servers and uses NetScaler appliances to balance the Web site load and manage traffic to and from its Web servers.
The Example system administrators use the rewrite features to perform the following tasks:
Example 1: Delete old X-Forwarded-For and Client-IP Headers
Example Inc. removes old X-Forwarded-For and Client-IP HTTP headers from incoming requests.
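As a rough sketch only — the action, policy, and binding names are made up here, and the policy expressions should be adapted to your deployment — a NetScaler CLI configuration for this example could look like:
add rewrite action act_del_xff delete_http_header X-Forwarded-For
add rewrite action act_del_clientip delete_http_header Client-IP
add rewrite policy pol_del_xff "HTTP.REQ.HEADER(\"X-Forwarded-For\").EXISTS" act_del_xff
add rewrite policy pol_del_clientip "HTTP.REQ.HEADER(\"Client-IP\").EXISTS" act_del_clientip
bind rewrite global pol_del_xff 100 NEXT -type REQ_DEFAULT
bind rewrite global pol_del_clientip 110 END -type REQ_DEFAULT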
Example 2: Adding a Local Client-IP Header
Example Inc. adds a new, local Client-IP header to incoming requests.
Example 3: Tagging Secure and Insecure Connections
Example Inc. tags incoming requests with a header that indicates whether the connection is a secure connection.
Example 4: Mask the HTTP Server Type
Example Inc. modifies the HTTP Server: header so that unauthorized users and malicious code cannot use that header to determine the HTTP server software it uses.
Example 5: Redirect an External URL to an Internal URL
Example Inc. hides information about the actual names of its Web servers and the configuration of its server room from users, to make URLs on its Web site shorter and easier to remember and to improve security on its site.
Example 6: Migrating Apache Rewrite Module Rules
Example Inc. moved its Apache rewrite rules to a NetScaler appliance, translating the Apache PERL-based script syntax to the NetScaler rewrite rule syntax.
Example 7: Marketing Keyword Redirection
The marketing department at Example Inc. sets up simplified URLs for certain predefined keyword searches on the company’s Web site.
Example 8: Redirect Queries to the Queried Server.
Example Inc. redirects certain query requests to the appropriate server.
Example 9: Home Page Redirection
Example Inc. recently acquired a smaller competitor, and it now redirects requests for the acquired company’s home page to a page on its own Web site.
Select a monitor-based source if you want to build your SLO based on existing or new Datadog monitors. For more information about monitors, see the Monitor documentation. Monitor-based SLOs are useful for a time-based stream of data where you are differentiating time of good behavior vs bad behavior. Using the sum of the good time divided by the sum of total time provides a Service Level Indicator (or SLI).
On the SLO status page, select New SLO +. Then select Monitor.
To start, you need to be using Datadog monitors. To set up a new monitor, go to the monitor creation page and select one of the monitor types that are supported by SLOs (listed below). Search for monitors by name and click on it to add it to the source list.
For example, if you have a Metric Monitor that is configured to alert when user request latency is greater than 250ms, you could set a monitor-based SLO on that monitor. Let’s say you choose an SLO target of 99% over the past 30 days. What this means is that the latency of user requests should be less than 250ms 99% of the time over the past 30 days. To set this up, you would:
Supported Monitor Types:
Example: You might be tracking the uptime of a physical device. You have already configured a metric monitor on host:foo using a custom metric. This monitor might also ping your on-call team if it’s no longer reachable. To avoid burnout you want to track how often this host is down.
An SLO target is comprised of the target percentage and the time window. When you set a target for a monitor-based SLO the target percentage specifies what portion of the time the underlying monitor(s) of the SLO should be in an OK state, while the time window specifies the rolling time period over which the target should be tracked.
Example:
99% of the time requests should have a latency of less than 300ms.
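As an illustration, the same kind of monitor-based SLO can also be created through the Datadog API; in this sketch the monitor ID, the SLO name, and the API/application keys are placeholders, not values from this page:
curl -X POST "https://api.datadoghq.com/api/v1/slo" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}" \
  -d '{
        "name": "Request latency under 300ms",
        "type": "monitor",
        "monitor_ids": [12345678],
        "thresholds": [{"timeframe": "30d", "target": 99}]
      }'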
The overall status can be considered as a percentage of the time where all monitors, or all the calculated groups in a single multi-alert monitor, are in the OK state. It is not the average of the aggregated monitors or the aggregated groups, respectively.
Consider the following example for 3 monitors (this is also applicable to a monitor-based SLO based on a single multi-alert monitor):
This can result in the overall status being lower than the average of the individual statuses.
In certain cases, there is an exception to the status calculation for monitor-based SLOs that are comprised of one grouped Synthetic test. Synthetic tests have optional special alerting conditions that change the behavior of when the test enters the ALERT state and consequently impact the overall uptime:
By changing any of these conditions to something other than their defaults, the overall status for a monitor-based SLO using just that one Synthetic test could appear to be better than the aggregated statuses of the Synthetic test’s individual groups.
For more information on Synthetic test alerting conditions, visit the Synthetic Monitoring documentation.
SLOs based on the metric monitor types have a feature called SLO Replay that will backfill SLO statuses with historical data pulled from the underlying monitors' metrics and query configurations. This means that if you create a new Metric Monitor and set an SLO on that new monitor, rather than having to wait a full 7, 30 or 90 days for the SLO’s status to fill out, SLO Replay will trigger and look at the underlying metric of that monitor and the monitor’s query to get the status sooner. SLO Replay also triggers when the underlying metric monitor’s query is changed (e.g. the threshold is changed) to correct the status based on the new monitor configuration. As a result of SLO Replay recalculating an SLO’s status history, the monitor’s status history and the SLO’s status history may not match after a monitor update.
Note: SLO Replay is not supported for SLOs based on Synthetic tests or Service Checks.
Datadog recommends against using monitors with Alert Recovery Threshold and Warning Recovery Threshold, as they can also affect your SLO calculations and do not allow you to cleanly differentiate between an SLI’s good behavior and bad behavior.
SLO calculations do not take into account when a monitor is resolved manually or as a result of the After x hours automatically resolve this monitor from a triggered state setting. If these are important tools for your workflow, consider cloning your monitor, removing auto-resolve settings and @-notifications, and using the clone for your SLO.
Confirm you are using the preferred SLI type for your use case. Datadog supports monitor-based SLIs and metric-based SLIs as described in the SLO metric documentation.
Additional helpful documentation, links, and articles: | https://docs.datadoghq.com/monitors/service_level_objectives/monitor/ | 2021-01-16T06:27:34 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.datadoghq.com |
Smart2Pay offers 3 environments that you can use to interact with our payment platform.
Demo – you should start in Demo environment where all the available payment methods are activated by default and you can use to test our plugin right away. No activation is required. You can test various payment flows for more than 150 payment methods. The order status does not update in this environment, you need to use Test and Live environments for an end to end testing.
Test – Once you are familiar with payment flows you should start the integration and testing phase using the test platform. You will receive an e-mail from our merchant integration team with more details such as your test MID and instructions on how to generate the test signature and setup the notification URL where you will receive payment status change notifications so your orders are correctly updated once the payment flows are completed.
Live – As soon as the development and testing phases are successfully completed, you will need to follow the same steps for integration on the live platform.
You can always contact our Merchant Integration Team to request additional information about our services at: [email protected]. | https://docs.smart2pay.com/category/smart2pay-plugins/smart2pay-environments/ | 2021-01-16T05:44:59 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.smart2pay.com |
Host Status Window¶
Intended audience: administrators, developers, users
When a host window has been opened, all servers controlled on this host are displayed.
The color of the server defines its state:
These servers are ordered by startup level.
- A right click on a server displays a popup menu to start, stop, test… it.
Note
- It is possible to display “Not Controlled” servers if any.
- These servers are not taken into account to compute host state. | https://tango-controls.readthedocs.io/en/latest/tools-and-extensions/built-in/astor/host_window.html | 2021-01-16T06:20:33 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['../../../_images/host_window.jpg', 'image0'], dtype=object)
array(['../../../_images/host_window2.png', 'image1'], dtype=object)] | tango-controls.readthedocs.io |
API Compatibility
We know how easy it is to use APIs when they're structured to work with your existing code and libraries. SocialOS provides four API formats to provide both flexibility and accessibility:
The Native API provides the most comprehensive functionality, supporting user management and authentication, public and private message streams, contact management, triggers, webhooks, CRM and CMS functionality, and advanced filters, feeds, and analytics.
The Parse-compatible API provides a superset of the functionality of the Parse open-source server, including user management, basic analytics, custom classes, push notifications, cloud functions, and triggers.
The Slack-compatible API provides functionality compatible with the Slack Web, Events, and Conversations APIs. This supports user, team, channel, and content management, webhooks, and bot support.
The Twitter-compatible API provides core support for Twitter-compatible messaging, including public and direct messages, lists, friends, followers, muting and blocking, plus extensions for user management and message channels.
Data Compatibility
Information created via any supported API is automatically compatible with all other APIs. Messages posted via the Twitter-compatible API can be routed to a Slack bot or pushed to a mobile app using a Parse API library. Fields and data structures are mapped by the compatibility service to conform with the standards of each supported API.
Module core.steps.trainer.base_trainer¶
Classes¶
BaseTrainerStep(serving_model_dir: str = None, transform_output: str = None, train_files=None, eval_files=None, **kwargs)
: Base step interface for all Trainer steps. All of your code concerning model training should leverage subclasses of this class.

Constructor for the BaseTrainerStep. All subclasses used for custom training of user machine learning models should implement the `run_fn`, `model_fn` and `input_fn` methods used for control flow, model training and input data preparation, respectively.

Args:
serving_model_dir: Directory indicating where to save the trained model.
transform_output: Output of a preceding transform component.
train_files: String, file pattern of the location of TFRecords for model training. Intended for use in the input_fn.
eval_files: String, file pattern of the location of TFRecords for model evaluation. Intended for use in the input_fn.
log_dir: Logs output directory.
schema: Schema file from a preceding SchemaGen.

### Ancestors (in MRO)
* zenml.core.steps.base_step.BaseStep

### Class variables
`STEP_TYPE`

### Static methods
`model_fn(train_dataset: tensorflow.python.data.ops.dataset_ops.DatasetV2, eval_dataset: tensorflow.python.data.ops.dataset_ops.DatasetV2)`
: Class method defining the training flow of the model. Override this in subclasses to define your own custom training flow.
Args:
train_dataset: tf.data.Dataset containing the training data.
eval_dataset: tf.data.Dataset containing the evaluation data.
Returns:
model: A trained machine learning model.

### Methods
`get_run_fn(self)`

`input_fn(self, file_pattern: List[str], tf_transform_output: tensorflow_transform.output_wrapper.TFTransformOutput)`
: Class method for loading data from TFRecords saved to a location on disk. Override this method in subclasses to define your own custom data preparation flow.
Args:
file_pattern: File pattern matching saved TFRecords on disk.
tf_transform_output: Output of the preceding Transform / Preprocessing component.
Returns:
dataset: A tf.data.Dataset constructed from the input file pattern and transform.

`run_fn(self)`
: Class method defining the control flow of the training process inside the TFX Trainer Component Executor. Override this method in subclasses to define your own custom training flow.
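As a hypothetical sketch (not taken from the ZenML codebase), a custom trainer step built on this interface could look roughly like the following. The model architecture and training loop are placeholders, and in practice run_fn and input_fn would be overridden as well, as described above:

import tensorflow as tf

from zenml.core.steps.trainer.base_trainer import BaseTrainerStep


class MyTrainerStep(BaseTrainerStep):
    """Illustrative subclass that only overrides model_fn."""

    @staticmethod
    def model_fn(train_dataset: tf.data.Dataset, eval_dataset: tf.data.Dataset):
        # Placeholder architecture; replace with your own model.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")
        model.fit(train_dataset, validation_data=eval_dataset, epochs=5)
        return model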
Lists information about the versions of the specified managed policy, including the version that is currently set as the policy's default version.
For more information about managed policies, see Managed Policies and Inline Policies in the IAM User Guide .
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
Synopsis
list-policy-versions --policy-arn <value> [--max-items <value>] [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--generate-cli-skeleton <value>]
--policy-arn (string)
The Amazon Resource Name (ARN) of the IAM policy for which you want the versions.
For more information about ARNs, see Amazon Resource Names (ARNs) and AWS Service Namespaces in the AWS General Reference .
Examples
To list information about the versions of the specified managed policy
This example returns the list of available versions of the policy whose ARN is arn:aws:iam::123456789012:policy/MySamplePolicy:
aws iam list-policy-versions --policy-arn arn:aws:iam::123456789012:policy/MySamplePolicy
Output:
{ "IsTruncated": false, "Versions": [ { "CreateDate": "2015-06-02T23:19:44Z", "VersionId": "v2", "IsDefaultVersion": true }, { "CreateDate": "2015-06-02T22:30:47Z", "VersionId": "v1", "IsDefaultVersion": false } ] }
For more information, see Overview of IAM Policies in the Using IAM guide.
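The same call can be made from the AWS SDK for Python (boto3), shown here purely as an illustration using the same example policy ARN:

import boto3

iam = boto3.client("iam")

response = iam.list_policy_versions(
    PolicyArn="arn:aws:iam::123456789012:policy/MySamplePolicy"
)
for version in response["Versions"]:
    print(version["VersionId"], version["IsDefaultVersion"], version["CreateDate"])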
Versions -> (list)
A list of policy versions.
For more information about managed policy versions, see Versioning for Managed Policies in the IAM User Guide .
(structure)
Contains information about a version of a managed policy.
Document -> (string)
The policy document.
The policy document is returned in the response to the GetPolicyVersion and GetAccountAuthorizationDetails operations. It is not returned in the response to the CreatePolicyVersion or ListPolicyVersions operations.
The policy document returned in this structure is URL-encoded compliant with RFC 3986.
VersionId -> (string)
The identifier for the policy version.
Policy version identifiers always begin with v (always lowercase). When a policy is created, the first policy version is v1 .
IsDefaultVersion -> (boolean)
Specifies whether the policy version is set as the policy's default version.
CreateDate -> (timestamp)
The date and time, in ISO 8601 date-time format, when the policy version was created.
The gallery module provides methods to categorize images into albums and place them on the website. The gallery items are fully integrated with the site search.
The gallery module makes each image stored in an album available in 4 different sizes besides the original size, for each size you can choose if the image is scaled and cropped to be an exact size or if the image is resized to fit within the defined bounds.
Once installed, the gallery module exposes new settings which allow you to set the default sizes of images within new albums; these sizes can later be changed per album.
To set the default sizes go Settings then Gallery select the size to modify e.g. Tiny Image. There are three fields for each image, the width and height of the image and the Crop Image to Size option. The crop option if ticked will scale and crop the images to be the exact width and height given while leaving it unchecked will cause the image to be scaled to fit within the defined width and height.
The gallery module adds three additional fields: gallery.title, gallery.desc and gallery.tags. Before images appear in search results, you need to select which of these fields are indexed by the search module and which are used for tagging image items.
Select Settings then Search then Search Settings and add the gallery fields to the appropriate search fields. Next select the Tag Fields setting and add the gallery.tags field.
For the changes to take effect the content has to be reindexed, select the clearFusionCMS settings group, Rebuild Search and finally Reindex Content. It can take several minutes to rebuild the search data.
The gallery module includes a default style sheet to get you started, just add the following above </head> in your template to include it:
<link href="system/modules/gallery/gallery.css" rel="stylesheet" type="text/css">
While the default CSS provides a useful stating point in most cases you will want to refine the style for your site. | https://docs.clearfusioncms.com/modules/gallery-module/ | 2021-01-16T05:22:58 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.clearfusioncms.com |
DAML-LF JSON Encoding¶
We describe how to decode and encode DAML-LF values as JSON. For each DAML-LF type we explain what JSON inputs we accept (decoding), and what JSON output we produce (encoding).
The output format is parameterized by two flags:
encodeDecimalAsString: boolean
encodeInt64AsString: boolean
The suggested default for both of these flags is false. If the intended recipient is written in JavaScript, however, note that the JavaScript data model will decode these as numbers, discarding data in some cases; encode-as-String avoids this, as mentioned with respect to JSON.parse below.
Note that throughout the document the decoding is type-directed. In other words, the same JSON value can correspond to many DAML-LF values, and the expected DAML-LF type is needed to decide which one.
Decimal¶
Input¶
Decimals can be expressed as JSON numbers or as JSON strings. JSON strings are accepted using the same format that JSON accepts, and are treated as the equivalent JSON number:
-?(?:0|[1-9]\d*)(?:\.\d+)?(?:[eE][+-]?\d+)?
Note that JSON numbers would be enough to represent all Decimals. However, we also accept strings because in many languages (most notably JavaScript) use IEEE Doubles to express JSON numbers, and IEEE Doubles cannot express DAML-LF Decimals correctly. Therefore, we also accept strings so that JavaScript users can use them to specify Decimals that do not fit in IEEE Doubles.
Numbers must be within the bounds of Decimal, [–(10³⁸–1)÷10¹⁰, (10³⁸–1)÷10¹⁰]. Numbers outside those bounds will be rejected. Numbers inside the bounds will always be accepted, using banker’s rounding to fit them within the precision supported by Decimal.
A few valid examples:
42 --> 42
42.0 --> 42
"42" --> 42
9999999999999999999999999999.9999999999 --> 9999999999999999999999999999.9999999999
-42 --> -42
"-42" --> -42
0 --> 0
-0 --> 0
0.30000000000000004 --> 0.3
2e3 --> 2000
A few invalid examples:
" 42 " "blah" 99999999999999999999999999990 +42
Output¶
If encodeDecimalAsString is set, decimals are encoded as strings, using the format -?[0-9]{1,28}(\.[0-9]{1,10})?. If encodeDecimalAsString is not set, they are encoded as JSON numbers, also using the format -?[0-9]{1,28}(\.[0-9]{1,10})?.
Note that the flag encodeDecimalAsString is useful because it lets JavaScript consumers consume Decimals safely with the standard JSON.parse.
Int64¶
Input¶
Int64, much like Decimal, can be represented as JSON numbers and as strings, with the string representation being [+-]?[0-9]+. The numbers must fall within [-9223372036854775808, 9223372036854775807]. Moreover, if represented as JSON numbers, they must have no fractional part.
A few valid examples:
42
"+42"
-42
0
-0
9223372036854775807
"9223372036854775807"
-9223372036854775808
"-9223372036854775808"
A few invalid examples:
42.3
+42
9223372036854775808
-9223372036854775809
"garbage"
" 42 "
Output¶
If encodeInt64AsString is set, Int64s are encoded as strings, using the format -?[0-9]+. If encodeInt64AsString is not set, they are encoded as JSON numbers, also using the format -?[0-9]+.
Note that the flag encodeInt64AsString is useful because it lets JavaScript consumers consume Int64s safely with the standard JSON.parse.
Timestamp¶
Input¶
Timestamps are represented as ISO 8601 strings, rendered using the format yyyy-mm-ddThh:mm:ss.ssssssZ:
1990-11-09T04:30:23.123456Z
9999-12-31T23:59:59.999999Z
Parsing is a little bit more flexible and uses the format
yyyy-mm-ddThh:mm:ss(\.s+)?Z, i.e. it’s OK to omit the microsecond part
partially or entirely, or have more than 6 decimals. Sub-second data beyond
microseconds will be dropped. The UTC timezone designator must be included. The
rationale behind the inclusion of the timezone designator is minimizing the
risk that users pass in local times. Valid examples:
1990-11-09T04:30:23.1234569Z
1990-11-09T04:30:23Z
1990-11-09T04:30:23.123Z
0001-01-01T00:00:00Z
9999-12-31T23:59:59.999999Z
The timestamp must be between the bounds specified by DAML-LF and ISO 8601, [0001-01-01T00:00:00Z, 9999-12-31T23:59:59.999999Z].
JavaScript
> new Date().toISOString()
'2019-06-18T08:59:34.191Z'
Python
>>> datetime.datetime.utcnow().isoformat() + 'Z'
'2019-06-18T08:59:08.392764Z'
Java
import java.time.Instant;

class Main {
    public static void main(String[] args) {
        Instant instant = Instant.now();
        // prints 2019-06-18T09:02:16.652Z
        System.out.println(instant.toString());
    }
}
Output¶
Timestamps are encoded as ISO 8601 strings, rendered using the format
yyyy-mm-ddThh:mm:ss[.ssssss]Z.
The sub-second part will be formatted as follows:
- If no sub-second part is present in the timestamp (i.e. the timestamp represents whole seconds), the sub-second part will be omitted entirely;
- If the sub-second part does not go beyond milliseconds, the sub-second part will be up to milliseconds, padding with trailing 0s if necessary;
- Otherwise, the sub-second part will be up to microseconds, padding with trailing 0s if necessary.
In other words, the encoded timestamp will either have no sub-second part, a sub-second part of length 3, or a sub-second part of length 6.
Unit¶
Represented as empty object {}. Note that in JavaScript {} !== {}; however, null would be ambiguous; for the type Optional Unit, null decodes to None, but {} decodes to Some ().
Additionally, we think that this is the least confusing encoding for Unit since unit is conceptually an empty record. We do not want to imply that Unit is used similarly to null in JavaScript or None in Python.
Date¶
Represented as an ISO 8601 date rendered using the format
yyyy-mm-dd:
2019-06-18
9999-12-31
0001-01-01
The dates must be between the bounds specified by DAML-LF and ISO 8601, [0001-01-01, 9999-99-99].
Record¶
Input¶
Records can be represented in two ways. As objects:
{ f₁: v₁, ..., fₙ: vₙ }
And as arrays:
[ v₁, ..., vₙ ]
Note that DAML-LF record fields are ordered. So if we have
record Foo = {f1: Int64, f2: Bool}
when representing the record as an array the user must specify the fields in order:
[42, true]
The motivation for the array format for records is to allow specifying tuple types closer to what it looks like in DAML. Note that a DAML tuple, i.e. (42, True), will be compiled to a DAML-LF record Tuple2 { _1 = 42, _2 = True }.
GenMap¶
GenMaps are represented as lists of pairs:
[ [k₁, v₁], ..., [kₙ, vₙ] ]
Order does not matter. However, any duplicate keys will cause the map to be treated as invalid.
Optional¶
Input¶
Optionals are encoded using null if the value is None, and with the value itself if it's Some. However, this alone does not let us encode nested optionals unambiguously. Therefore, nested Optionals are encoded using an empty list for None, and a list with one element for Some. Note that after the top-level Optional, all the nested ones must be represented using the list notation.
A few examples, using the form JSON --> DAML-LF : Expected DAML-LF type to make clear what the target DAML-LF type is:
null --> None : Optional Int64
null --> None : Optional (Optional Int64)
42 --> Some 42 : Optional Int64
[] --> Some None : Optional (Optional Int64)
[42] --> Some (Some 42) : Optional (Optional Int64)
[[]] --> Some (Some None) : Optional (Optional (Optional Int64))
[[42]] --> Some (Some (Some 42)) : Optional (Optional (Optional Int64))
...
Finally, if Optional values appear in records, they can be omitted to represent None. Given DAML-LF types
record Depth1 = { foo: Optional Int64 }
record Depth2 = { foo: Optional (Optional Int64) }
We have
{ } --> Depth1 { foo: None } : Depth1
{ } --> Depth2 { foo: None } : Depth2
{ foo: 42 } --> Depth1 { foo: Some 42 } : Depth1
{ foo: [42] } --> Depth2 { foo: Some (Some 42) } : Depth2
{ foo: null } --> Depth1 { foo: None } : Depth1
{ foo: null } --> Depth2 { foo: None } : Depth2
{ foo: [] } --> Depth2 { foo: Some None } : Depth2
Note that the shortcut for records and Optional fields does not apply to Map (which are also represented as objects), since Map relies on absence of key to determine what keys are present in the Map to begin with. Nor does it apply to the [f₁, ..., fₙ] record form; Depth1 None in the array notation must be written as [null].
Type variables may appear in the DAML-LF language, but are always resolved before deciding on a JSON encoding. So, for example, even though Oa doesn't appear to contain a nested Optional, it may contain a nested Optional by virtue of substituting the type variable a:
record Oa a = { foo: Optional a }

{ foo: 42 } --> Oa { foo: Some 42 } : Oa Int
{ } --> Oa { foo: None } : Oa Int
{ foo: [] } --> Oa { foo: Some None } : Oa (Optional Int)
{ foo: [42] } --> Oa { foo: Some (Some 42) } : Oa (Optional Int)
In other words, the correct JSON encoding for any LF value is the one you get when you have eliminated all type variables.
Variant¶
Variants are expressed as
{ tag: constructor, value: argument }
For example, if we have
variant Foo = Bar Int64 | Baz Unit | Quux (Optional Int64)
These are all valid JSON encodings for values of type Foo:
{"tag": "Bar", "value": 42} {"tag": "Baz", "value": {}} {"tag": "Quux", "value": null} {"tag": "Quux", "value": 42}
Note that DAML data types with named fields are compiled by factoring out the record. So for example if we have
data Foo = Bar {f1: Int64, f2: Bool} | Baz
We’ll get in DAML-LF
record Foo.Bar = {f1: Int64, f2: Bool}
variant Foo = Bar Foo.Bar | Baz Unit
and then, from JSON
{"tag": "Bar", "value": {"f1": 42, "f2": true}} {"tag": "Baz", "value": {}}
This can be encoded and used in TypeScript, including exhaustiveness checking; see a type refinement example. | https://docs.daml.com/1.4.0-snapshot.20200729.4851.0.224ab362/json-api/lf-value-specification.html | 2021-01-16T05:27:02 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.daml.com |
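As a hand-written illustration of that idea — this is not the output of the DAML codegen, and the field types are simplified (for instance, Int64 may be represented as a string there) — the Foo variant above could be modelled and consumed in TypeScript like this:

// Simplified mirror of the JSON encoding of `variant Foo`.
type Foo =
  | { tag: "Bar"; value: { f1: number; f2: boolean } }
  | { tag: "Baz"; value: {} };

function describe(foo: Foo): string {
  switch (foo.tag) {
    case "Bar":
      return `Bar with f1=${foo.value.f1} and f2=${foo.value.f2}`;
    case "Baz":
      return "Baz";
    default: {
      // Exhaustiveness check: fails to compile if a constructor is unhandled.
      const impossible: never = foo;
      return impossible;
    }
  }
}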
Basic Product Detail Page Integration

The product detail page of a retail website is where your products shine: customers discover all features of your products and decide, based on the available information, if they want to buy the product, continue their search, or simply abandon the session. The Froomle Personalisation platform can help your products shine even brighter with minimal required effort, as described in this basic example.

In this example we cover a Froomle integration that recommends products viewed by other users who also viewed the current product. This directs users to good candidates if the current product fails to satisfy their needs.

To set up the basic Froomle integration on a product detail page, there are two major steps:
- Requesting recommendations
- Setting up integration tracking

Requesting recommendations

Recommendations are requested from Froomle using the Recommendations API. This Recommendations API lives at. Your Froomle Solution Architect will be in touch concerning your customer_token and environment.

Froomle can identify your users by either device_id or user_id. A device_id is anything that is tied to a specific device or browser, such as a cookie. A user_id is expected to be long-lived and used across devices, e.g. a login ID.

For the basic product detail page use case we would have to create the following recommendation request. We will create a request that fetches 4 recommendations similar to the product shown on the product detail page.

{
  "page_type": "product_detail",
  "device_id": "BD3qoN3ko7URdparQX2vDT4",
  "user_id": "7081599607",
  "context_item": "917096",
  "lists": [
    {
      "list_name": "Similar products",
      "configuration_id": "default",
      "list_size": 4
    }
  ]
}

One very important detail about this request is the context_item key. This represents the unique identifier of the product displayed on the product detail page and is used as context for generating the list of similar products.

Below is an example response:

{
  "user_group": "froomle",
  "version": "Froomle_1",
  "device_id": "BD3qoN3ko7URdparQX2vDT4",
  "user_id": "7081599607",
  "request_id": 1285050,
  "lists": [
    {
      "list_name": "Similar products",
      "configuration_id": "default",
      "list_size": 4,
      "limit": 4,
      "list_key": "biufwgbvilwabi274rbd",
      "items": [
        { "item_id": "item_1", "rank": 1 },
        { "item_id": "item_2", "rank": 2 },
        { "item_id": "item_3", "rank": 3 },
        { "item_id": "item_4", "rank": 4 }
      ]
    }
  ]
}

Integration events

Recommendations also introduce a set of extra integration events that you will have to communicate to Froomle. To link the original set of recommendations that is displayed with the events sent upon interaction, the Recommendations API communicates a unique request_id in its response. You will see this listed as a mandatory parameter for all the recommendation events.

Impression

Every time you display a box of recommendations, it is important for us to know when each of the recommended items displayed in the box was visible to the user. Recommendations are often displayed "below the fold", meaning users have to scroll down on the product detail page before they can actually see them. A second example is a carousel that holds the recommendations: only a subset of the recommendations is visible initially, and when users scroll through the carousel, other recommendations become visible too. An impression event is used for each recommendation to communicate this information to Froomle.
Click on recommendation

Once a user interacts with a recommendation by clicking on a recommended item, you will have to communicate this interaction to Froomle by means of a click_on_recommendation event.
Resources
More Documents
- Build and run automation Run a node on google cloud for free How to mine with grin-miner Monetary policy
Talks
Playlists
Other
- Mimblewimble, Scaling Bitcoin'16 Mimblewimble, SF Bitcoin Developers`16 Mimblewimble, BPASE'17 Mimblewimble & Scriptless Scripts, RWC'18 A View on Grin, HCPP'19, Slides
Forum
- How to store Grin in cold storage? Raspberry Pi 4 - Standalone Grin-Node PoC by Grinnode.live How to open port 3414 (and why) PoW specification Coinbase outputs as regular outputs Use of NRD kernels in payment channels Emission Rate Thread, #2, #3 Transaction Aggregation TX Graph Confidentality Response to Reavealing TX Graph Some Thoughts on Privacy How to open port 3414 (and why) Scheduled PoW upgrades proposal Choice of ASIC Resistant PoW for GPU miners Put later phase-outs on hold proposal All about C31 fade out, the C29 scale and C32 Genesis block message Queries about transaction aggregation Aggregate merkle proofs Unique kernels thread #73 Sending a transaction to more parties than originally intended Reasoning behind block weight limit Hardforks on Grin v5.0.0 and beyond Play attacks and possible mitigations Replay attacks and possible mitigations Grin transactions user interactivity Eliminate finalize step On Igno's absence Being ASIC resistant or not Is there a potential hidden inflation problem Eliminating finalize step Integrated payment proofs and round minimization Pep talk for one sided transactions Dismantling the core team and governance structure
Medium
- Grin's Mythical Fair Launch Grin Money Explained Grin Transactions Explained, Step-by-Step What’s inside a Grin Transaction File? Breaking Mimblewimble’s Privacy Model Factual inaccuracies of “Breaking Mimblewimble’s Privacy Model” Mimblewimble Without the Scary Math Behind Mimblewimble An Introduction to Grin Proof-of-Work
Podcasts
- [Bitcoin Wednesday] Introducing Mimblewimble and Grin @jaspervdm [Unchained] Grin: A More Private, Lighter Bitcoin @lehnberg @yeastplume [The Crypto Show] Mimblewimble with Andrew Poelstra and Peter Wuillie [What Bitcoin Did] Grin's Mimblewimble Implementation @yeastplume [Zero Knowledge] Grin @lehnberg [Captain Crypto Show] Grin @yeastplume [Let's Talk Bitcoin] Privacy with Mimblewimble @yeastplume @andreas @adam.b.levine
Miscellaneous
- [Launchpad] Mimblewimble Mailing List Archive [Reddit] Mimblewimble introduced to r/bitcoin [Youtube] Aantonop Bitcoin Q&A: Mimblewimble and Dandelion [Coindesk] Harry Potter Characters Join Mimblewimble 2016 [Github] Meeting Notes [Reddit] On Grin's Privacy [Launchpad] Scripting observations and Lightning Network implementation [GitHub] Grin difficulty, C29, C31 fade out and C32 [GitHub] Fees in Mining [Google] Replay Attacks Summary [Blog] A Relatively Easy to Understand Primer on ECC | https://docs.grin.mw/wiki/resources/ | 2021-01-16T05:03:21 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.grin.mw |
Get-MsolAccountSku
Updated: July 30, 2015
Applies To: Azure, Office 365, Windows Intune
Note
- The cmdlets were previously known as the Microsoft Online Services Module for Windows PowerShell cmdlets.
Syntax
Get-MsolAccountSku [-TenantId <Guid>] [<CommonParameters>]
Parameters
".
Examples
The output is provided by Microsoft.Online.Administration.AccountSKU. Each AccountSKU returned will include the licenses that have been used.
Example 1
The following command returns a list of SKUs.
Get-MsolAccountSku
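If you also want to see how many of each license are assigned, a common follow-up — using property names exposed by the MSOnline module — is:

Get-MsolAccountSku |
    Select-Object AccountSkuId, ActiveUnits, ConsumedUnits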
Additional Resources
There are several other places you can get more information and help. These include:
Azure Active Directory Forum
Azure AD Community Information Center
Azure Active Directory Community scripts
See Also
Other Resources
Manage Azure Active Directory by using Windows PowerShell | https://docs.microsoft.com/pt-pt/previous-versions/azure/dn194118(v=azure.100) | 2021-01-16T07:08:38 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.microsoft.com |
Quick start guide
Here are a few easy steps you need to follow in order to make your Sydney-powered website look similar to our demo site.
1. Install the theme
Install the theme by going to Appearance > Themes > Add New. Once you're here, either search for the theme or upload the .zip file in case you've downloaded the theme manually;
2. Install the recommended plugins
Install the recommended plugin: Elementor. You will see a notice to install it after you've activated the theme;
3. Import demo content
This step is optional. This is the easiest way to get a quick start with Sydney as it will reproduce our demo site on your own site. Download the demo content from here, then from your admin area go to Tools > Import > WordPress and select the file you've just downloaded.
Please click here to go to the next tutorial. | https://docs.athemes.com/article/70-quick-start-guide | 2021-01-16T04:48:35 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.athemes.com |
Connect Bot agent to a device with a proxy using a script

If your device cannot connect to the Enterprise Control Room because the device proxy setting is configured to use an automatic configuration script, complete the steps in this task to run the script to provide the authentication details. You can perform these steps before or after installing the Bot agent.

Procedure

To add or update the proxy, perform the following steps:

1. Download PSTools (PsExec).
2. Extract the files from the downloaded zip file.
3. Open the Microsoft command prompt in administrator mode.
4. Change to the directory where you extracted the PSTools files.
5. Execute the following command:
   .\psexec -i -s -d cmd
6. In the new command prompt window, execute the following command:
   whoami
   The system returns the following:
   nt authority\system
7. Execute the following command:
   inetcpl.cpl
8. In the Internet Properties, navigate to Connections > LAN settings.
9. Select the Use automatic configuration script option.
10. Provide the address to the proxy auto-configuration (PAC) file. For example:.
11. Click OK.
12. Restart the Bot agent in Windows Services.
13. From the Enterprise Control Room, check the device status and verify that it is connected.
confluent iam acl delete¶
Flags¶
--kafka-cluster-id string    Kafka cluster ID for scope of ACL commands.
--allow                      ACL permission to allow access.
--deny                       ACL permission to restrict access to resource.
--principal string           Principal for this operation with User: or Group: prefix.
--host string                Set host for access.
-h, --help                   Help for iam acl delete.
Global Flags¶
-v, --verbose count Increase verbosity (-v for warn, -vv for info, -vvv for debug, -vvvv for trace). | https://docs.confluent.io/5.4.2/cli/command-reference/confluent-iam/confluent_iam_acl_delete.html | 2021-01-16T05:27:11 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.confluent.io |
Intellicus Java APIs can be used for performing activities like User Management, Report Management, etc.
To use the Java APIs of Intellicus, following jars should be placed in class path of the web application.
- intellica.jar
- log4ij.jar
- xercesImpl.jar
- xmlParserAPIs.jar
These jars are provided with intellicus setup. Following is given the path for jar file:
<Intellicus Install path>APIs
Java Doc
The Java Doc for the APIs is available at the FTP location:
Mandatory Step to Use Java APIs
Initialize Report Client
Configuration
Intellicus client JAVA APIs are configured by a file “ReportClient.properties”.
This file must be present in the JAVA CLASS_PATH variable of the host java application.
During initialization action, the report client component reads the configuration file and keeps the configuration in memory.
Host java application class has to import the following classes, to initialize the Report Client.
import com.intellica.client.init.ReportClient;
import java.io.FileInputStream;
import java.io.BufferedInputStream;
import java.io.InputStream;
The initialization action is to be performed once in the lifetime of the application. The initialization is done according to the configurations set in the ReportClient.properties file.
Init method is used to initialize the Intellica client SDK with the values from the ReportClient.properties file. This properties file contains configurations for Report Engine Interface.
InputStream is= new BufferedInputStream (new FileInputStream (“<ReportClient.properties_AbsolutePath>”));
//Static method.
ReportClient.init (is);
If it is required to create the logs at desired location, set the absolute or relative path in the INTERA_HOME property of the ReportClient.Properties file.
Eg- If ReportClient.properties is placed at location c://client/config/, (i.e. client complete folder is copied down in c:// drive),
InputStream is= new BufferedInputStream(new FileInputStream(“C://client/config/ReportClient.properties”));
com.intellica.client.init.ReportClient.init(is);
then INTERA_HOME property should be set as:
Absolute Path–
//setting absolute path for logs
INTERA_HOME=C://client
Relative Path-
//setting relative path for logs
INTERA_HOME=../../../client
(This path is basically relative to jakarta/bin/, therefore, if we consider that intellicus is installed at loc- c://program files/intellicus/jakarta/bin, then relative path for logs(i.e. C://client/logs/ ) with respect to bin would be ../../../client)
Initialize Requestor User context
Requestor User
A Requestor User is the user, who is requesting any action to the Intellicus system.
A com.intellica.client.common.UserInfo class object represents a user in Intellicus.
The UserInfo class has the following attributes:
Intellicus mode of authentication uses User ID, Password and Org ID as mandatory attributes to authenticate a user and authorize that user’s actions.
A host application that takes over authentication responsibilities can use any of the above attributes to authenticate.
Java Application class has to import the following class to store the login user information and check for Authorization.
import com.intellica.client.common.UserInfo;
The UserInfo object acts as a carrier to all above attributes in such case.
So, in all the use cases discussed below, host application has to create an object of UserInfo class and pass it in all the method calls.
UserInfo requestorUserInfo = new UserInfo(“John”,”john”,”Org1″);
Note: For performing any admin activity like user management at Intellicus, requestor user should be admin user at Intellicus. | https://docs.intellicus.com/documentation/integration-and-management-19-0/integration-manuals-19-0/intellicus-developers-guide-19-0/java-apis-19-0/introduction-to-intellicus-java-apis-19-0/ | 2021-01-16T06:24:51 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.intellicus.com |
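Putting the pieces above together, a minimal bootstrap class might look like the sketch below; the properties file path and the user credentials are placeholders taken from the examples in this section:

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;

import com.intellica.client.init.ReportClient;
import com.intellica.client.common.UserInfo;

public class IntellicusClientBootstrap {
    public static void main(String[] args) throws Exception {
        // Initialize the Report Client once in the lifetime of the application.
        InputStream is = new BufferedInputStream(
                new FileInputStream("C://client/config/ReportClient.properties"));
        ReportClient.init(is);

        // Requestor user context passed with every subsequent API call.
        UserInfo requestorUserInfo = new UserInfo("John", "john", "Org1");
        // ... use requestorUserInfo in the Intellicus API methods you call next.
    }
}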
Index Queries¶
DynamoDB supports two types of indexes: global secondary indexes, and local secondary indexes. Indexes can make accessing your data more efficient, and should be used when appropriate. See the documentation for more information.
Index Settings¶
The Meta class is required with at least the projection class attribute to specify the projection type. For global secondary indexes, the read_capacity_units and write_capacity_units also need to be provided. By default, PynamoDB will use the class attribute name that you provide on the model as the index_name used when making requests to the DynamoDB API. You can override the default name by providing an index_name class attribute in the Meta class of the index.
Global Secondary Indexes¶
Indexes are defined as classes, just like models. Here is a simple index class:
from pynamodb.indexes import GlobalSecondaryIndex, AllProjection
from pynamodb.attributes import NumberAttribute


class ViewIndex(GlobalSecondaryIndex):
    """
    This class represents a global secondary index
    """
    class Meta:
        # index_name is optional, but can be provided to override the default name
        index_name = 'foo-index'
        read_capacity_units = 2
        write_capacity_units = 1
        # All attributes are projected
        projection = AllProjection()

    # This attribute is the hash key for the index
    # Note that this attribute must also exist
    # in the model
    view = NumberAttribute(default=0, hash_key=True)
Global indexes require you to specify the read and write capacity, as we have done in this example. Indexes are said to project attributes from the main table into the index. As such, there are three styles of projection in DynamoDB, and PynamoDB provides three corresponding projection classes.
AllProjection: All attributes are projected.
KeysOnlyProjection: Only the index and primary keys are projected.
IncludeProjection(attributes): Only the specified attributes are projected.
We still need to attach the index to the model in order for us to use it. You define it as a class attribute on the model, as in this example:
from pynamodb.models import Model
from pynamodb.attributes import NumberAttribute, UnicodeAttribute


class TestModel(Model):
    """
    A test model that uses a global secondary index
    """
    class Meta:
        table_name = 'TestModel'

    forum = UnicodeAttribute(hash_key=True)
    thread = UnicodeAttribute(range_key=True)
    view_index = ViewIndex()
    view = NumberAttribute(default=0)
Local Secondary Indexes¶
Local secondary indexes are defined just like global ones, but they inherit from LocalSecondaryIndex instead:
from pynamodb.indexes import LocalSecondaryIndex, AllProjection
from pynamodb.attributes import NumberAttribute, UnicodeAttribute


class ViewIndex(LocalSecondaryIndex):
    """
    This class represents a local secondary index
    """
    class Meta:
        # All attributes are projected
        projection = AllProjection()

    forum = UnicodeAttribute(hash_key=True)
    view = NumberAttribute(range_key=True)
Every local secondary index must meet the following conditions:
- The partition key (hash key) is the same as that of its base table.
- The sort key (range key) consists of exactly one scalar attribute. The range key can be any attribute.
- The sort key (range key) of the base table is projected into the index, where it acts as a non-key attribute.
Querying an index¶
Index queries use the same syntax as model queries. Continuing our example, we can query the view_index global secondary index simply by calling query:
for item in TestModel.view_index.query(1): print("Item queried from index: {0}".format(item))
This example queries items from the table using the global secondary index, called
view_index, using
a hash key value of 1 for the index. This would return all
TestModel items that have a
view attribute
of value 1.
Local secondary index queries have a similar syntax. They require a hash key, and can include conditions on the
range key of the index. Here is an example that queries the index for values of
view greater than zero:
for item in TestModel.view_index.query('foo', TestModel.view > 0): print("Item queried from index: {0}".format(item.view))
Pagination and last evaluated key¶
The query returns a
ResultIterator object that transparently paginates through results.
To stop iterating and allow the caller to continue later on, use the
last_evaluated_key property
of the iterator:
def iterate_over_page(last_evaluated_key = None): results = TestModel.view_index.query('foo', TestModel.view > 0, limit=10, last_evaluated_key=last_evaluated_key) for item in results: ... return results.last_evaluated_key
The
last_evaluated_key is effectively the key attributes of the last iterated item; the next returned items will be the items following it. For index queries, the returned
last_evaluated_key will contain both the table’s hash/range keys and the indexes hash/range keys. This is due to the fact that DynamoDB indexes have no uniqueness constraint, i.e. the same hash/range pair can map to multiple items. For the example above, the
last_evaluated_key will look like:
{ "forum": {"S": "..."}, "thread": {"S": "..."}, "view": {"N": "..."} } | https://pynamodb.readthedocs.io/en/latest/indexes.html | 2021-01-16T06:01:22 | CC-MAIN-2021-04 | 1610703500028.5 | [] | pynamodb.readthedocs.io |
How-to articles, tricks, and solutions about GIT FILE
How to Discard Unstaged Changes in Git
The tutorial provides you with information you need to discard the unstaged changes in the working copy. Find several ways of discarding and get the codes.
How to Revert "git rm -r"
This tutorial provides the information of answering to the question of reverting git rm -r, and differences between reverting git rm -r . and git rm. | https://www.w3docs.com/snippets-tags/git%20file | 2021-01-16T05:13:25 | CC-MAIN-2021-04 | 1610703500028.5 | [] | www.w3docs.com |
What You Need to Check Out When Starting Your Business Form Home
There are many business ideas that we have and we would like to start. One can start to put into action these ideas, at the comfort of their rooms. Although, in case you are not prepared for commercial agencies, starting from your home isn’t bad. You may be anxious about how to start the business. Get to check out on this website and discover more of what you need to check out.
The kind of business that you want to start should be checked out There are some ideas that are more about progressing when started at home, although, other business may not perform well. Get to learn more on this company website, concerning some of the services that can be handled at home.. Check it out!! you cannot be strict on your working services, it is better to find an excellent place to start your business from.
You need to know some of the people, that you will need for your business to progress, view here for more.. | http://docs-prints.com/2020/09/19/doing-the-right-way-15/ | 2021-01-16T05:25:16 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs-prints.com |
What was decided upon? (e.g. what has been updated or changed?) Should this text auto-complete when a user begins typing a search term?
Why was this decided? (e.g. explain why this decision was reached. It may help to explain the way a procedure used to be handled pre-Alma) Our user survey comments expressed strong interest in a search system that was more Google-like. Library staff were of mixed opinion on this option. Dale will look into this option. It is currently off on the ILS UI and the home page. The committee’s recommendation was to do this, if possible.
Who decided this? (e.g. what unit/group) User Interface
When was this decided?
Additional information or notes. | https://docs.library.vanderbilt.edu/2018/10/10/text-in-around-the-search-box/ | 2021-01-16T06:06:41 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.library.vanderbilt.edu |
:.
Note
Azure storage firewall is not supported for cloud shell storage accounts. | https://docs.microsoft.com/en-us/azure/cloud-shell/overview?WT.mc_id=ModInfra-8133-pierrer | 2021-01-16T07:04:42 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.microsoft.com |
How-to articles, tricks, and solutions about DISABLE
How to disable input conditionally in Vue.js
In this short Vue.js tutorial, you will read and learn about a very easy and fast method of disabling the HTML input conditionally in Vue.js components.
How to Disable Links on the Current Page with Pure CSS
Learn how to disable links on the current page using pointer-events and user-select properties. Also, find examples!
How to Disable Text Selection Highlighting Using CSS
Stop being copied by others with just simple and quick technique! Read the snippet and know how to prevent copying from your site disabling text selection highlighting.
How to Disable Text Selection, Copy, Cut, Paste and Right-click on a Web Page
Learn the ways of how to disable text selection highlighting, how to disable copy, cut and paste, how to disable right-click. See example with CSS, JavaScript and jQuery.
How to Disable the Browser Autocomplete and Autofill on HTML Form and Input Fields
Learn how to prevent browsers auto filling the input fields of HTML forms. Use autocomplete="off" to specify that autocomplete is disabled. See examples.
How to Disable the Resizing of the <textarea> Element?
Use the CSS3 resize property with its "none" value to disable the resizing function of the textarea element. Learn the ways of only vertically or horizontally resizing, too.
How to Disable Zoom on a Mobile Web Page With HTML and CSS
One of the most common inconveniences both developers and users face is the zoom on mobile web pages. Well, we’re here to help you fix that problem.
How to Display a Message if JavaScript is Turned Off
If you have a content that will not function without JavaScript, you need to display a message with an explanation of the problem. In this snippet, we are going to have a look at 2 simple methods to display the content when JavaScript is turned off. | https://www.w3docs.com/snippets-tags/disable | 2021-01-16T05:27:38 | CC-MAIN-2021-04 | 1610703500028.5 | [] | www.w3docs.com |
Node API
The API is used to query a node about various information on the blockchain, networks and peers. By default, the API will listen on
localhost:3413. The API is started at the same time as the grin node.
This endpoint requires, by default, basic authentication. The username is
grin.
Node API v2
This API version uses JSON-RPC for its requests. It is split up into a foreign API and an owner API. The documentation for these endpoints is automatically generated:
Basic auth passwords can be found in
.api_secret/
.foreign_api_secret files respectively.
Node API v1
Note: version 1 of the API will be deprecated in v4.0.0 and subsequently removed in v5.0.0. Users of this API are encouraged to upgrade to API v2.
This API uses REST for its requests. To learn about what specific calls can be made read the node API v1 doc.
Basic auth password can be found in
.api_secret | https://docs.grin.mw/wiki/api/node-api/ | 2021-01-16T05:47:20 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.grin.mw |
Category: Acquisitions Moving an Order Posted on May 26, 2020 by Angel Craddock Receiving Shelf Ready books | GOBI Firm Ordered (3350-09) Posted on May 21, 2020May 27, 2020 by Angel Craddock Making “discard” and “route” records in Alma Posted on April 24, 2020April 24, 2020 by Pete Wilson Remove PO Lines and Bibs added in error Posted on February 4, 2020 by Jamen Mcgranahan YBP process of loading records Posted on July 17, 2019 by Jamen Mcgranahan Jobs in Alma that modify PO Lines Posted on June 7, 2019June 7, 2019 by Mary Ellen Wilson Creating an E-Journal Subscription Order Posted on March 13, 2019 by Erin Loree Ordering One-Time E-Books Posted on March 13, 2019April 17, 2019 by Erin Loree Music Blanket Orders Posted on March 13, 2019 by Erin Loree Ordering a Package or Database Collection Posted on March 12, 2019 by Erin Loree Posts navigation Older posts | https://docs.library.vanderbilt.edu/category/alma/acquisitions/ | 2021-01-16T06:49:51 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.library.vanderbilt.edu |
Common section configuration options¶
lemmatizer_base¶
Lemmatizer dictionaries base path. Optional, default is /usr/local/share (as in –datadir switch to ./configure script).
Our lemmatizer implementation Manticore website ().
Example:
lemmatizer_base = /usr/local/share/sphinx/dicts/
progressive_merge¶
Merge Real-Time index chunks during OPTIMIZE operation from smaller to bigger. Progressive merge merger faster and reads/write less data. Enabled by default. If disabled, chunks are merged from first to last created.
json_autoconv_keynames¶
Whether and how to auto-convert key names within JSON attributes. Known value is ‘lowercase’. Optional, default value is unspecified (do not convert anything).
When this directive is set to ‘lowercase’, key names within JSON attributes will be automatically brought to lower case when indexing. This conversion applies to any data source, that is, JSON attributes originating from either SQL or XMLpipe2 sources will all be affected.
Example:
json_autoconv_keynames = lowercase
json_autoconv_numbers¶
Automatically detect and convert possible JSON strings that represent numbers, into numeric attributes. Optional, default value is 0 (do not convert strings into numbers)..
Example:
json_autoconv_numbers = 1
on_json_attr_error¶
What to do if JSON format errors are found. Optional, default value is
ignore_attr (ignore errors). Applies only to
sql_attr_json
attributes.
By default, JSON format errors are ignored (
ignore_attr) and the
indexer tool will just show a warning. Setting this option to
fail_index will rather make indexing fail at the first JSON format
error.
Example:
on_json_attr_error = ignore_attr
plugin_dir¶
Trusted location for the dynamic libraries (UDFs). Optional, default is empty (no location).
Specifies the trusted directory from which the UDF libraries can be loaded.
Example:
plugin_dir = /usr/local/sphinx/lib
icu_data_dir¶
A folder that contains data used by ICU to segment Chinese text. Should only be specified if you’ve built
ICU from sources. If ICU is loaded as a dynamic library (supplied in a package provided by us,
e.g.
libicu_dev), it doesn’t require any external data. This folder must contain a .dat file (e.g.
icudt64l.dat).
Example:
icu_data_dir = /home/myuser/icu_data | https://docs.manticoresearch.com/3.1.0/html/conf_options_reference/common_section_configuration_options.html | 2021-01-16T05:14:55 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.manticoresearch.com |
1 Overview
The
starts-with() function tests whether a string attribute starts with a specific string (case-insensitive) as a sub-string.
2 Example
This query returns all the customers from which the name starts with the string “Jans”:
//Sales.Customer[starts-with(Name, 'Jans')]
Customers with the name “Jansen” will be returned, for example, because the name starts with “Jans.” | https://docs.mendix.com/refguide8/xpath-starts-with | 2021-01-16T06:51:45 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.mendix.com |
Google BigQuery Sink Connector for Confluent Cloud¶
Note
If you are installing the connector locally for Confluent Platform, see Google BigQuery Sink Connector for Confluent Platform.
The Google BigQuery Sink Connector is used to stream data into BigQuery tables. The BigQuery table schema is based upon information in the Apache Kafka®. The internal thread pool defaults to 10 threads.
- The connector supports Avro and schemaless JSON (schema validation is disabled for JSON).
Refer to Cloud connector limitations for additional information.
Important
Preview connectors are not currently supported and are not recommended for production use. For specific connector limitations, see Cloud connector limitations.
Quick Start¶
Use this quick start to get up and running with the Confluent Cloud Google BigQuery Sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to a BigQuery data warehouse.
-
Additionally, make sure that either one of the following credential types is generated to use for the Kafka cluster credentials fields:
- A Confluent Cloud API key and secret. After you have created your cluster, go to Cluster settings > API access > Create Key.
- A Confluent Cloud service account.
Important
A BigQuery table must exist before running this connector. Topic names are mapped to BigQuery table names. When you create the BigQuery table, make sure to enable Partitioning: Partition by ingestion time and define a schema as shown in the example below:
Step 1: Launch your Confluent Cloud cluster.¶
See the Confluent Cloud Quick Start for installation instructions.
Step 4: Set up the connection.¶
Note
Make sure you have all your prerequisites completed.
Complete the following and click Continue.
- Select one or more topics.
- Enter a Connector Name.
- Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.
- Enter your BigQuery credentials. Open the JSON file you downloaded when creating the service account. Copy and paste all file contents into the credentials field.
- Enter your BigQuery project and datasets.
- Select the message format.
- Add the storage account name, account key, and container name.
- Enter the number of tasks in use by the connector. See Confluent Cloud connector limitations for additional task information.
Step 6: Check the connector status.¶
The status for the connector should go from Provisioning to Running.
Step 7: Check the results in BigQuery.¶
- From the Google Cloud Console, go to your BigQuery project.
- Query your datasets and verify that new records are being added.
For additional information about this connector, see Google BigQuery Sink Connector for Confluent Platform. Note that not all Confluent Platform connector features are provided in the Confluent Cloud connector. | https://docs.confluent.io/5.3.1/cloud/connectors/cc-gcp-bigquery-sink.html | 2021-01-16T05:37:34 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['../../_images/ccloud-bigquery-partition-by-ingestion.png',
'../../_images/ccloud-bigquery-partition-by-ingestion.png'],
dtype=object)
array(['../../_images/ccloud-bigquery-status.png',
'../../_images/ccloud-bigquery-status.png'], dtype=object)] | docs.confluent.io |
0001 rfc-process
- Title: rfc-process
- Authors: joltz, yeastplume, lehnberg
- Start date: June 21st, 2019
Summary
The "RFC" (request for comments) process is intended to provide a consistent and controlled path for improvements to be made to Grin.
Motivation
Many changes, including bug fixes and documentation improvements can be implemented and reviewed via the normal GitHub pull request workflow.
Some changes though are "substantial", and could benefit from being put through a more formal design process in order to produce a consensus among Grin community participants and stakeholders.
When this process should be followed
You need to follow this process if you intend to make "substantial" changes to the Grin codebase or governance process. What constitutes a "substantial" change may evolve based on community norms and individual definitions of sub-teams, but may include the following.
- Any semantic or syntactic change to the wallet, node, miner, or underlying crypto libraries that is not a bugfix.
- Major changes in ecosystem content such as the docs, site or explorer.
- Removing Grin features, including those that are feature-gated.
Some changes do not require an RFC:
- Rephrasing, reorganizing, refactoring, or changes that are not visible to Grin's users.
- Additions that strictly improve objective, numerical quality criteria (warning removal, speedup, better platform coverage, more parallelism, trap more errors, etc.)
- Additions only likely to be noticed by other developers-of-grin, invisible to users-of-grin.
If you submit a pull request to implement a new feature without going through the RFC process, it may be closed with a polite request to submit an RFC first.
Team specific guidelines
To be added here once available.
Before creating an RFC
A hastily-proposed RFC can hurt its chances of acceptance. Low quality proposals, proposals for previously-rejected features, or those that don't fit into the near-term roadmap, may be quickly rejected, which can be demotivating for the unprepared contributor. Laying some groundwork ahead of the RFC can make the process smoother.
Although there is no single way to prepare for submitting an RFC, it is generally a good idea to pursue feedback from other project contributors beforehand, to ascertain that the RFC may be desirable; having a consistent impact on the project requires concerted effort toward consensus-building.
Ways to prepare and pave the way for writing and submitting an RFC include discussing the topic or posting "pre-RFCs" to our forum for feedback.
As a rule of thumb, receiving encouraging feedback from long-standing project contributors, and particularly members of the relevant team (if applicable) is a good indication that the RFC is worth pursuing.
Process description
In order to make a "substantial" change to Grin, one must first get an RFC merged into the RFC repo as a markdown file. At that point the RFC is "active" and may be implemented with the goal of eventual inclusion into Grin.
Stages in detail
Submission
- Fork the RFC repo
- Copy
0000-template.mdto
text/0000-my-feature.md(where "my-feature" is descriptive. don't assign an RFC number yet).
- If you include any assets, do so as
/assets/0000-asset-description.xxx
- Write the RFC according to the template instructions.
- Submit a pull request. As a pull request the RFC will receive design feedback from the larger community, and the author should be prepared to revise it in response.
Draft
- Each pull request will be labeled with the most relevant team, which will lead to it being triaged by that team and is assigned a shepherd from this team. The shepherd ensures the RFC progresses through the process and that a decision is reached, but they themselves do not make this decision.
- As the author, you build consensus and integrate feedback. RFCs that have broad support are much more likely to make progress than those that don't receive any comments. They are encouraged to reach out to the RFC shepherd in particular to get help identifying stakeholders and obstacles.
- The relevant team discuss the RFC pull request, as much as possible in the comment thread of the pull request itself. Offline discussion will be summarized on the pull request comment thread.
- RFCs rarely go through this process unchanged, especially as alternatives and drawbacks are shown. As an author.
Final Comment Period (FCP)
- At some point, a member of the assigned team will propose a "motion for final comment period" (FCP), along with a disposition for the RFC (merge, close, or postpone).
- This step is taken when enough of the tradeoffs have been discussed that the team team. Team members use their best judgment in taking this step, and the FCP itself ensures there is ample time and notification for stakeholders to push back if it is made prematurely.
- For RFCs with lengthy discussion, the motion to FCP is usually preceded by a summary comment trying to lay out the current state of the discussion and major tradeoffs/points of disagreement.
- The FCP lasts ten calendar days, so that it is open for at least 5 business days. It is also advertised widely (i.e. in Grin News). draft mode.
Active
- As FCP concludes and there are no objections to accepting the RFC, it gets merged into
/grin-rfcsand becomes "active".
- Before merging, the shepherd:
- updates the RFC to give it an RFC number (which is the same as the number of the initial Pull Request).
- Renames the markdown file accordingly and any accompanied assets.
- If a tracking issue on the repo affected by the RFC has created, it is linked to in the header.
- Once active, the authors may then implement it and submit the feature as a pull request to the relevant repo.
- contributors will take on responsibility for implementing their accepted feature.
- Modifications to "active" RFCs can be done in follow-up pull requests. We strive to write each RFC in a manner that it will reflect the final design of the feature; but the nature of software development means that we cannot expect every merged RFC to actually reflect what the end result will be at the time of implementation.
- In general, once accepted, RFCs should not be substantially changed. Only very minor changes should be submitted as amendments. More substantial changes should be new RFCs, with a note added to the original RFC. Exactly what counts as a "very minor change" is up to the team to decide; check team specific guidelines for more details.
Postponed
- Some RFC pull requests are tagged with the "postponed" label when they are closed (as part of the rejection process).
- An RFC closed with "postponed" is marked as such because we want neither to think about evaluating the proposal nor about implementing the described feature until some time in the future, and we believe that we can afford to wait until then to do so.
- Postponed pull requests may be re-opened when the time is right. We don't have any formal process for that, you should ask members of the relevant team.
- Usually an RFC pull request marked as "postponed" has already passed an informal first round of evaluation, namely the round of "do we think we would ever possibly consider making this change, as outlined in the RFC pull request, or some semi-obvious variation of it." (When the answer to the latter question is "no", then the appropriate response is to close the RFC, not postpone it.)
Closed
- A proposed RFC can be closed at any time before reaching "active" state. This is done by closing the pull request itself. This would happen for example if there is no support in the community for the proposal or if there are other underlying reasons why this is not a change the community wants to make.
Changes to the RFC process
In the spirit of the proposed process itself, a future "substantial" overhaul to the RFC process should be opened as a new RFC rather than making edits to this RFC. Minor changes can be made by opening pull requests against this document.
As the RFC process is something that should be consistent across all teams and the project as a whole, changes to the process fall under Core's remit. As they evaluate proposals to modify the process, they are expected to consult with teams, and other stakeholders using or being affected by the process.
Drawbacks
- May not encourage sufficient community engagement
- May slow down needed features
- May allow some features to be included too quickly
Rationale and alternatives
Alternatively, retain the current informal RFC process. The proposed RFC process is designed to improve over the informal process in the following ways:
- Discourage unactionable or vague RFCs
- Ensure that all serious RFCs are considered equally
- Improve transparency for how new features are added to Grin
- Give confidence to those with a stake in Grin's development that they understand why new features are being merged
- Assist the Grin community with feature and release planning.
As an alternative, we could adopt an even stricter RFC process than the one proposed here. We could for example look to Bitcoin's BIP or Python's PEP process for inspiration.
Prior art
This process draws inspiration extensively from Rust's RFC process, where much credit for the process is due.
Most decentralized cryptocurrency projects have adopted an RFC-like process to manage adding new features.
Bitcoin uses BIPs which are an adaptation of Python's PEPs. These processes are similar to the Rust RFC process which has had success in the Rust community as well as in other cryptocurrency projects like Peercoin.
Unresolved questions
- Does this RFC strike a favorable balance between formality and agility?
- Does this RFC address the issues with the current informal process adequately?
Future possibilities
This proposal was initially based on an RFC process for codebase development. As the process evolves it will have a larger impact in the governance of Grin. This is a relatively new area of exploration as governance processes can have wide ranging impacts on the ecosystem as a whole.
Just as it is important to hone the language to support the development process and life-cycle, it is also important to sharpen the language to support governance processes and life-cycles for the Grin ecosystem. | https://docs.grin.mw/grin-rfcs/text/0001-rfc-process/ | 2021-01-16T05:31:24 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.grin.mw |
several profiles, which includes the ESB profile, Business Process.1.1 for instructions on upgrading to WSO2 EI 6.1.1 from the previous WSO2 EI version. ESB Profile
- Clustering the Business Process Profile
- Clustering the Message Broker Profile
- Clustering the Analytics Profile
Production deployment guidelines
Follow the production deployment guide in the administration guide:.
-.
Working with profile-specific administration tasks
See the following topics for information on administration tasks that are specific to WSO2 EI: | https://docs.wso2.com/display/EI620/Product+Administration | 2021-01-16T06:03:38 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.wso2.com |
Using an External Wireless Access Point¶
Most SOHO-style wireless routers can be used as an access point if a true Access Point (AP) is not available. If pfSense® software replaced an existing wireless router, the old router can still be used to handle the wireless portion of the network.
This type of deployment is popular for wireless because it is easier to keep the access point in a location with better signal and take advantage of more current wireless hardware without relying on driver support in pfSense software. This way a network supporting 802.11ac or later wireless standards may still be used and secured by pfSense software at the edge, even though pfSense software does not yet have support for newer standards.
This technique is also common with wireless equipment running *WRT, Tomato, or other custom firmware for use as dedicated access points rather than edge routers.
Turning a wireless router into an access point¶ software will handle this function for the network, and having two DHCP servers on the same broadcast domain will not function correctly.
Change the LAN IP address¶
A functional, unique, IP address on the access point is required for management purposes.
Change the LAN IP address on the wireless router to an unused IP address in the subnet where it will reside (commonly LAN). If the firewall running pfSense software replaced this wireless router, then the wireless router was probably using the same IP address now assigned to the firewall LAN interface, which conflicts.
Plug in the LAN interface¶
Most wireless routers bridge their wireless network to works well, but offers limited control over the ability of the wireless clients to communicate with internal hosts.
See also
See Choosing Routing or Bridging for details on bridging in this role.
Bridging wireless to an OPT interface¶
To keep wireless and wired networks on the same IP subnet and broadcast domain while also increasing control over wireless clients, add an OPT interface to the firewall for the access point and bridge the OPT interface to the LAN interface.
Warning
Though bridging offers increased control over traffic, it also results in lower performance as all wireless traffic must pass through and be processed by the firewall. Typically, wireless speeds are low enough that this is not a major concern, but as wireless speeds improve the severity of the problem also increases.
This scenario is functionally equivalent to plugging the access point directly into the LAN switch, except pfSense software. | https://docs.netgate.com/pfsense/en/latest/recipes/external-wireless-router.html | 2021-07-23T19:37:34 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.netgate.com |
PictofitCore Web SDK
The Pictofit Web SDK is a JavaScript library which can display products and avatars on the web using WebGL and HTML5. The main component is the
PictofitWebViewer which can be easily added to any website. The configuration of the viewer is based on
.json configuration files. No server-side components are required which makes the integration into existing websites particularly easy.
The following sections explain, based on different use-cases, how to use the framework.
Getting Started
This page describes the required steps to get started with the PictofitCore Web SDK.
Configuration Switching
Example of a .json config list file for auto switching between different configurations. | https://docs.pictofit.com/web-sdk/0.9.4/ | 2021-07-23T20:20:13 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.pictofit.com |
Unity’s Built-In Render Pipeline supports different rendering.
If the GPU on the device running your Project does not support the rendering path that you have selected, Unity automatically uses a lower fidelity rendering path. For example, on a GPU that does not handle Deferred Shading, Unity uses Forward Rendering. at lower fidelity: per-vertex, or per-object.
If your project does not use a large amount of real-time lights, or if lighting fidelity is not important to your project, then this rendering path is likely to be a good choice for your project.
Para más detalles vea la.
Para más detalles vea la Deferred Lighting page.
Legacy Vertex Lit is the rendering path with the lowest lighting fidelity and no support for real-time shadows. It is a subset of the Forward rendering path.
Para más detalles vea la Vertex Lit page | https://docs.unity3d.com/es/2019.4/Manual/RenderingPaths.html | 2021-07-23T19:14:08 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.unity3d.com |
REGISTER TO DOWNLOAD
We appreciate your interest, for more detailed product or manufacturing inquiries, please contact us via telephone +4122 594 30 00 or via email at [email protected].
File information
Name: VAULTIC186
File name: 6654AS_10Sep19_Summary_VIC186.pdf
File size: 0.58 MB
The VaultIC186 is a Secure microcontroller solution designed to secure various systems against counterfeiting, cloning or identity theft. It is a hardware security module that can be used in many applications such as IP protection, access control or hardware protection. | https://docs.wisekey.com/?id=79 | 2021-07-23T18:59:33 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.wisekey.com |
In previous versions, when a user accessed a viewer or printer applet for the first time, he or she might have been prompted about caching the Java files. This prompt appeared only if this option was enabled on the server. If the Java files were cached, they were installed as objects on the client. If the user chose not to cache the files, they may have been cached anyway, depending on how the user's browser was configured.
If the files were installed as objects on the client, users must remove those objects before using this release of EXTRA! Mainframe Server Edition. The procedure for doing this varies, depending on which browser the clients use. | https://docs.attachmate.com/EXTRA/MainframeServerEdition/8.0/documentation/guide_ems/content/upgrade_removecache_pr.htm | 2021-07-23T18:51:19 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.attachmate.com |
The Browser interface has a look and feel that is similar to the latest Web browsers. Although it requires less screen real estate than the Ribbon interface, it provides the same functionality as the Ribbon. It also provides additional ways to access commands and connect to hosts.
In much the same way that you access commands in the Ribbon interface, you can access commands through the Reflection menu or the Quick Access toolbar. The Browser also allows you to access commands by typing them into the search box. For example:
To access the Trace commands, enter T in the search box and then choose from the list of Trace commands.
Similar to the Ribbon, you can connect to a host automatically or from the Reflection menu. In addition, you can connect by entering the type of connection and the host name in the search box.
For example:
To open a telnet connection to a 3270 host named myMainFrame, enter tn3270://myMainFrame
Note
For IBM systems , you can open Telnet sessions for 3270 or 5250 terminals using the the following format:
tn3270://hostName
tn5250://hostName
For Open Systems (VT) you can open Telnet, Secure Shell, or Rlogin sessions using the following format:
telnet://hostName
ssh://hostName
rlogin://hostName | https://docs.attachmate.com/Reflection/2011/r3/help/en/user-html/24775.htm | 2021-07-23T20:14:57 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.attachmate.com |
AG 11-082
RETURN TO:  EXT:

CITY OF FEDERAL WAY LAW DEPARTMENT ROUTING FORM

1. ORIGINATING DEPT/DIV: PUBLIC WORKS
2. ORIGINATING STAFF PERSON:  EXT:
3. DATE REQ. BY:
4. TYPE OF DOCUMENT (CHECK ONE): Contractor Selection Document (e.g., RFB, RFP, RFQ); Public Works Contract; Professional Service Agreement; Small or Limited Public Works Contract; Goods and Service Agreement; Maintenance Agreement; Real Estate Document; Human Services / CDBG; Ordinance; Security Document (e.g., bond related documents); Contract Amendment (AG#); Resolution; Other
5. PROJECT NAME: I-5 South 320 Street Southbound Off Ramp Construction Management
6. NAME OF CONTRACTOR: WSDOT (ADDRESS / TELEPHONE / E-MAIL / FAX / SIGNATURE NAME / TITLE)
COMPLETION DATE:
9. TOTAL COMPENSATION: $ (INCLUDE EXPENSES AND SALES TAX, IF ANY) (IF CALCULATED ON HOURLY LABOR CHARGE - ATTACH SCHEDULES OF EMPLOYEES TITLES AND HOLIDAY RATES); REIMBURSABLE EXPENSE: YES / NO, IF YES, MAXIMUM DOLLAR AMOUNT: $; IS SALES TAX OWED: YES / NO, IF YES, $ PAID BY: CONTRACTOR / CITY; PURCHASING: PLEASE CHARGE TO:
10. DOCUMENT / CONTRACT REVIEW: PROJECT MANAGER; DEPUTY DIRECTOR; LAW DEPT (INITIAL / DATE REVIEWED)
11. COUNCIL APPROVAL (IF APPLICABLE): COMMITTEE APPROVAL DATE:  COUNCIL APPROVAL DATE:
12. CONTRACT SIGNATURE ROUTING: SENT TO VENDOR/CONTRACTOR (DATE SENT); ATTACH: SIGNATURE AUTHORITY, INSURANCE CERTIFICATE, LICENSES, EXHIBITS; LAW DEPT; SIGNATORY (MAYOR); CITY CLERK; ASSIGNED AG #: AG 11-082; SIGNED COPY RETURNED (DATE SENT); RETURN ONE ORIGINAL
COMMENTS:

CITY OF FEDERAL WAY LETTER OF TRANSMITTAL

Date: March 24, 2011
To: WA State Dept of Transportation, Attn: Kathy Eldred, PO Box 330310, Seattle, WA 98133-9710
From: Shawna Upton, Administrative Assistant to Brian Roberts, P.E., Street Systems Project Engineer
RE: I-5 South 320 Street Southbound Off Ramp Construction Management

City of Federal Way • Public Works Department
33325 8th Avenue South • Federal Way, WA 98003
Phone 253-835-2700 • Fax 253-835-2709 • www.cityoffederalway.com

TRANSMITTED AS CHECKED BELOW: For Your Review; As Requested; Please Return; For Your Approval; For Your Action; For Your Information; Under Separate Cover; Other; Via

ITEMS / COPIES / DESCRIPTION

COMMENTS: Please execute the enclosed agreements and return one, fully-executed agreement to the City for our records. Please mail the executed agreement to my attention at: City of Federal Way, Public Works Department, Attn: Shawna Upton, 33325 8th Ave S, Federal Way, WA 98003. If you have further questions, please contact Brian Roberts at 253.835.2723.

cc: Project File; Day File

CONSTRUCTION ADMINISTRATION BY STATE — ACTUAL COST
LOCAL AGENCY ADVERTISEMENT & AWARD OF LOCAL AGENCY PROJECT
I-5 SOUTH 320 STREET SOUTHBOUND

Agreement Number GCA 6680

Local Agency and Address: CITY of FEDERAL WAY, 33325 8th Avenue South, Federal Way, WA 98003-6325.
Northwest Region

Description of Project: Widening the I-5 South 320 Street southbound off ramp to provide additional right and left turn lanes, modifying the existing signal system, providing storm water treatment and detention, and providing sidewalk ramps at the off-ramp intersection that meet the accessibility criteria associated with the Americans with Disabilities Act.

LOCAL AGENCY Project Manager: Brian Roberts, P.E., 33325 8th Avenue South, Federal Way, WA 98003-6325, (253) 835-2723.

STATE Project Manager: Aleta Borschowa, P.E., 6431 Corson Ave S, Seattle, WA 98108, (206) 768-5862, [email protected].

This Agreement is made and entered into between the STATE OF WASHINGTON DEPARTMENT OF TRANSPORTATION (STATE) and the above named Local Agency (LOCAL AGENCY).

WHEREAS, the LOCAL AGENCY is planning the construction of a project as described in the Description of Project above and/or as further described in an attached exhibit, hereinafter the "Project," and

WHEREAS, the Project is partially or entirely within state-owned limited access (L/A) right of way, and

WHEREAS, the construction of the Project could significantly impact the safety, maintenance and operation of the state transportation system, and

WHEREAS, the STATE deems it to be in the STATE's best interest for the STATE to provide construction administration for the Project in an effort to control and minimize impacts to the safety, maintenance and operation of the state transportation system,

NOW, THEREFORE, pursuant to RCW 47.28.140 and/or chapter 39.34 RCW, the above recitals that are incorporated herein as if set forth below, and in consideration of the terms, conditions, and performances contained herein, and the attached Exhibit A, which is incorporated and made a part hereof,

IT IS MUTUALLY AGREED AS FOLLOWS:

1. PURPOSE

1.1 The STATE, on behalf of the LOCAL AGENCY, agrees to perform construction administration for the Project, as further provided herein and pursuant to the attached exhibit. Exhibit A is the Cost Estimate.

2. DESIGN: STATE/LOCAL AGENCY APPROVAL

2.1 The STATE, pursuant to agreement No. GCA 6435, dated March 22, 2010 between the Parties, prepared the plans, specifications, and cost estimate (PS&E) for the Project, and the LOCAL AGENCY has accepted the PS&E pursuant to the terms of GCA 6435.

3. PERMITTING, RIGHT OF WAY, AD AND AWARD

3.1 The LOCAL AGENCY shall be responsible to secure the following for the Project:
(a) State Environmental Policy Act (SEPA) approval;
(b) National Environmental Policy Act (NEPA) approval, if applicable;
(c) All permits; and
(d) Right of way, including temporary construction easements needed to construct the Project, and an executed STATE airspace lease, cooperative agreement or maintenance agreement, pursuant to Section 12.2, if required.

3.2 The LOCAL AGENCY shall advertise the Project for bids, prepare and issue any addenda, and award and execute the Project construction contract. Any Project addenda affecting state-owned L/A right of way must be reviewed and approved by the STATE prior to issuance. The LOCAL AGENCY may request the STATE to respond to bidder inquiries or develop addenda or other documents related to questions from bidders. This request must be in writing.

3.3 The STATE shall respond to LOCAL AGENCY questions and provide information requested by the LOCAL AGENCY's Project Manager. This shall include, but not be limited to, providing addenda to the contract plans, contract special provisions and engineering estimates.
4. CONSTRUCTION ADMINISTRATION

4.1 The STATE agrees to provide construction administration to include design support services for the LOCAL AGENCY's Project construction contract. The executed Project contract plans, addenda, and specifications (hereinafter Contract) are by this reference made a part of this Agreement as if fully attached and incorporated herein. The STATE's Project Manager will provide all necessary services and tools to provide construction administration, including but not limited to: answering questions during advertisement, surveying, inspection, materials testing, and the representation necessary to administer the Contract construction to ensure that the Project is constructed in accordance with the Contract.

4.2 The LOCAL AGENCY may elect to have certain construction administration elements and/or tools provided in whole or in part by its contractor (hereinafter Contractor), if included as a Contract bid item, or by the LOCAL AGENCY. Any construction administration to be performed by the LOCAL AGENCY's Contractor or by the LOCAL AGENCY shall require STATE prior written approval.

4.3 The STATE is authorized to use the Minor Change Contract bid item up to a total item cost of $50,000.00, with no single change exceeding fifteen thousand dollars, per Standard Specifications for Road, Bridge, and Municipal Construction (Standard Specifications) 1-04.4(1) 2010; provided that the STATE is consistent with the intent of the Project Contract and does not include an extension of Contract time. The STATE shall not approve structural or material(s) related changes to the Project without approval of the LOCAL AGENCY, regardless of the cost.

4.4 The LOCAL AGENCY agrees that both formal and informal communication between the LOCAL AGENCY and its Contractor shall be through the STATE's Project Manager. The LOCAL AGENCY shall make the STATE's Project Manager aware by copy or written account of any direct communication affecting the Contract. The STATE's Project Manager shall communicate regularly with the LOCAL AGENCY to keep the LOCAL AGENCY up-to-date on all significant issues affecting the Project.

4.5 The STATE shall develop and execute a communication plan for any phase or changes in phases of the Project Contract that affect the public. The LOCAL AGENCY shall review and approve the elements of the STATE communication plan that affect traffic on South 320th Street. The LOCAL AGENCY shall respond directly to the public's requests and questions at its sole cost.

4.6 The LOCAL AGENCY may also inspect the Project. All contact between the LOCAL AGENCY's inspector(s) and the Contractor shall be only through the STATE's Project Manager or his/her designee.

4.7 The STATE will provide the LOCAL AGENCY with monthly progress reports, which will include details regarding progress of the Contract work, working days, updates to the Contractor's critical path schedule, progress estimates for payments to the Contractor, estimated costs for the STATE's construction administration, Contract changes, and a comparison of planned vs. actual quantities.

4.8 The STATE, at the sole cost to the LOCAL AGENCY, agrees to inspect the landscaping and planting establishment on STATE's right of way for the first year after the landscaping and planting has been completed.

4.9 The STATE will prepare the final construction documentation in conformance with the STATE Construction Manual.
Unless "as-built" plans are to be maintained and provided by the Contractor as part of the Contract, the STATE will maintain one set of plans as the official "as-built" set and make notations in red ink of all plan revisions as required by the STATE's Construction Manual. The STATE will submit one reproducible set of as-built plans to the CITY within six (6) months of final Project acceptance pursuant to Section 7.

4.10 Should for any reason, the LOCAL AGENCY decide not to complete the Project after construction has begun, the STATE, in its sole discretion, shall determine what work must be completed to restore state facilities and/or right of way to a condition and configuration that is safe for public use, operation, and maintenance, and the LOCAL AGENCY agrees that the STATE shall have the authority to direct the Contractor to complete the restoration. The LOCAL AGENCY agrees that all costs associated with Contract termination, including but not limited to engineering, completing state facility and right of way restoration, and Contractor claims, will be the sole responsibility of the LOCAL AGENCY. If the Contractor is not available to restore the state facilities and right of way, the STATE may perform, or contract to perform, the restoration work at LOCAL AGENCY expense. Payment to the STATE shall be pursuant to Section 8. This section shall survive the termination of this Agreement.

4.11 Upon completion of the Project, the STATE shall submit all Project construction records, except the STATE's copy of the "as-built" plans, to the LOCAL AGENCY for retention. The LOCAL AGENCY agrees to maintain these records for not less than three (3) years.

4.12 The LOCAL AGENCY agrees to remit to the STATE any liquidated damages assessed against the Contractor associated with closures to I-5 or to the 320 St. Southbound off ramp. The LOCAL AGENCY shall retain any assessed liquidated damages associated with closures of S 320 St, or due to delay of Project completion.

5. CONTRACT CHANGES

5.1 Changes to the Contract will be documented by change order as defined in the Standard Specifications. The STATE shall prepare all change orders in accordance with the STATE's Construction Manual (M41-01), current edition.

5.2 Required change orders are change orders that involve any or a combination of the following:
5.4 The STATE will develop required change orders, and upon written approval from the LOCAL AGENCY, secure signatures from the Contractor, and submit final required change orders to the LOCAL AGENCY for execution and payment. 5.5 The LOCAL AGENCY authorizes the STATE to initiate, negotiate, document, and direct the Contractor by either verbal or written direction in all matters regarding. required changes described in Section 5.2 which have been approved in writing by the LOCAL AGENCY. 5.6 The STATE reserves the right, when necessary due to emergency or safety threat to the traveling public, as solely determined by the STATE, to direct the Contractor to proceed with work associated with a required change prior to the LOCAL AGENCY's execution of the change order. If time permits, the STATE will provide an opportunity for the LOCAL AGENCY to review the required change before providing direction to the Contractor. 5.7 In the event that the LOCAL AGENCY disagrees with the STATE's determination of a required chan�e, the LOCAL AGENCY may pursue resolution under Section 13.5, Disputes. However, any delays to the Contract due to the LOCAL AGENCY pursuing the Disputes process shall be at LOCAL AGENCY expense. 5.8 The LOCAL AGENCY may request additions or modifications to the Contract through the STATE. These additions or modifications shall be deemed elective chan�e orders. The STATE will direct the Contractor to implement elective change(s), provided that the change(s) comply with the Standard Specifications, Project permits, and state and federal laws, rules, regulations, and design policies. The STATE will develop Page 5 of 12 elective change orders, secure signatures from the Contractor and submit final elective change orders to the LOCAL AGENCY for approval, execution, and payment. 5.9 Changes to structures within state-owned right of way must be reviewed and approved by the STATE Bridge Office and STATE Geotechnical Office before implementation. 5.10 Changes to electrical and intelligent transportation systems within the state- owned right of way must be reviewed and approved by the STATE Region Traffic Office before implementation. 5.11 The STATE will notify the LOCAL AGENCY of errors or omissions in the Contract as soon as reasonably practical. The STATE shall provide the necessary documents (PS&E) that will be incorporated into a change order. 6. PAYMENTS TO CONTRACTOR 6.1 The STATE shall prepare summaries of the amount due to the Contractor from . the LOCAL AGENCY for work performed in accordance with the terms of the Contract (Progress Estimates). The STATE shall submit monthly Progress Estimates to the LOCAL AGENCY for payment by the LOCAL AGENCY to the Contractor. 6.2 The LOCAL AGENCY agrees that it shall be solely responsible for all costs associated with the LOCAL AGENCY's Project. The LOCAL AGENCY further agrees that the STATE shall have no liability or responsibility for payment of any or all Project Contractor or subcontractor costs, including material costs and the costs of required and/or elective change orders, or costs associated with Contractor claims and/or delays attributable to failure of performance by the LOCAL AGENCY. 6.3 The LOCAL AGENCY shall at all times indemnify and hold harmless the STATE from all claims for labor or materials in connection with the Project located on state-owned L/A right of way, and from the cost of defending against such claims, including attorney fees. 
In the event a lien is filed upon the state-owned right of way, the LOCAL AGENCY shall (1) Record a valid Release of Lien; (2) Deposit sufficient cash with the STATE to cover the amount of the claim on the lien in question and authorize payment to the extent of said deposit to any subsequent judgment holder that may arise as a matter of public record from litigation with regard to lien holder claim; or (3)Procure and record a bond which releases the state-owned right of way from the claim of the lien and from any action brought to foreclose the lien. 7. PROJECT ACCEPTANCE 7.1 Prior to acceptance of the Project and the STATE's construction administration, the STATE and the LOCAL AGENCY will perform a joint final inspection of the Project. The LOCAL AGENCY agrees, upon satisfactory completion of the Project by its Contractor and receipt of a"Notice of Physical Completion," as determined by the Page 6 of 12 STATE, to deliver a letter of acceptance of the Project and the STATE's construction administration which sha11 include a release of the STATE from all future claims or demands, except from those resulting from the negligent performance of the STATE's construction administration under this Agreement. 7.2 If a letter of acceptance of the Project is not received by the STATE within sixty (60) calendar days following delivery of a"Notice of Physical Completion" of the Project to the LOCAL AGENCY, the Project and the STATE's construction administration shall be considered accepted by the LOCAL AGENCY and the STATE shall be released from all future claims or demands, except from those resulting from the negligent performance of the STATE's construction administration under this Agreement. 7.3 The LOCAL AGENCY may withhold its acceptance of the Project and the STATE's construction administration by submitting written notification to the STATE within sixty (60) calendar days following "Notice of Physical Completion" of the Project. This notification shall include the reason(s) for withholding the acceptance. The Parties shall then work together to resolve the outstanding issues identified in the LOCAL AGENCY's written notification. Upon resolution of the outstanding issues, the LO�AL AGENCY will promptly deliver the letter of acceptance to the STATE. 8. PAYMENT TO STATE 8.1 The LOCAL AGENCY, in consideration of the faithful performance of the STATE's construction administration and Services provided by the STATE as described in this Agreement, agrees to reimburse the STATE for its actual direct and related indirect costs. A cost estimate for the STATE's construction administration and Services is provided as Exhibit A. 8.2 If the Parties have a reciprocal overhead agreement in place effective as of the date of this Agreement, the STATE's overhead rate will not be charged. In this event, the STATE will only invoice for actual direct salary and direct non-salary costs for the STATE's construction administration and Services. 8.3 The STATE shall submit monthly invoices to the LOCAL AGENCY after construction administration and Services have been performed and a final invoice after acceptance of the Project and STATE's construction administration. The LOCAL AGENCY agrees to make payments within thirty (30) calendar days of receipt of a STATE invoice. These payments are not to be more frequent than one (1) per month. 
If the LOCAL AGENCY objects to all or any portion of any invoice, it shall notify the STATE in writing of the same within fifteen (15) calendar days from the date of receipt and shall pay that portion of the invoice not in dispute. The Parties shall immediately make every effort to settle the disputed portion of the invoice. Page 7 of 12 8.4 A payment for the STATE's construction administration and Services will not constitute agreement as to the appropriateness of any item, and at the time of final invoice, the Parties will resolve any discrepancies. 8.5 1NCREASE IN COST: In the event unforeseen conditions require an increase in cost for the STATE's construction administration and Services by more than twenty-five (25) percent above the cost estimate in Exhibit A, the Parties must negotiate and execute a written amendment to this Agreement addressing said increase prior to the STATE performing any construction administration or Services in excess of said amount. 9. RIGHT OF ENTRY 9.1 The LOCAL AGENCY hereby grants to the STATE, its employees, and authorized agents, a right of entry upon all land in which the LOCAL AGENCY has an interest for the STATE to perform construction administration and Services under this Agreement. 9.2 The STATE hereby grants to the LOCAL AGENCY, its employees, authorized agents, contractors and subcontractors a right of entry upon state-owned right of way for the LOCAL AGENCY to provide inspection and to construct the Project. 9.3 Where applicable, the LOCAL AGENCY hereby grants to the STATE, its employees, and authorized agents, a right of entry upon all land in which the LOCAL AGENCY has an interest for the STATE to operate, maintain and/or reconstruct signal loop detectors and appurtenances for signals belonging to the STATE, if any, that are constructed as part of the Project and located within the LOCAL AGENCY's right of way. The terms of this section shall survive the termination of this Agreement. 10. CLAIMS 10.1 Contractor Claims for Additional Payment: In the event the Contractor makes a claim for additional payment associated with the Project work, the STATE will immediately notify the LOCAL AGENCY of such claim. The STATE shall provide a written recommendation to the LOCAL AGENCY regarding resolution of Contractor claims. The LOCAL AGENCY agrees to defend such claims at its sole cost and expense. The STATE will cooperate with the LOCAL AGENCY in the LOCAL AGENCY's defense of the claim. The LOCAL AGENCY shall reimburse any STATE costs incurred in providing such assistance, including reasonable attorneys' fees, pursuant to Section 8. 10.2 Third Partv Claims for Dam�es Post Project Acceptance: After Project acceptance, in the event of claims for damages or loss attributable to bodily injury, sickness, death, or injury to or destruction of property that occurs because of the Project located on local agency or state-owned right of way, the Party owning the right of way shall defend such claims and hold harmless the other Party, and the other Party shall not be obligated to pay any such claim or the cost of defense. Nothing in this section, Page 8 of 12 however, shall remove from the Parties any responsibilities defined by the current laws of the state of Washington or from any liabilities for damages caused by the Party's own negligent acts or omissions. The provisions of this section shall survive the termination of this Agreement. 11. DAMAGE TO THE PROJECT DURING CONSTRUCTION 11.1. 
The LOCAL AGENCY authorizes the STATE to direct the LOCAL AGENCY's Contractor to repair all third party damage to the Project during construction. 11.2 The LOCAL AGENCY agrees to be responsible for all costs associated with said third party damage and for collecting such costs from the third party. 11.3 The STATE will document the third party damage by required change order and cooperate with the LOCAL AGENCY in identifying, if possible, the third party. The STATE will also separately document and invoice the LOCAL AGENCY for the STATE's costs associated with third party damage. STATE costs shall be reimbursed pursuant to Section 8. 12. OWNERSHIP, OPERATION AND MAINTENANCE 12.1 Upon acceptance of the Project as provided in Section 7, the LOCAL AGENCY shall be the sole owner of that portion of the Project located within the LOCAL AGENCY's right of way, and the LOCAL AGENCY shall be solely responsible for all future operation and maintenance of the Project located within the LOCAL AGENCY's right of way at its sole cost, without expense or cost to the STATE, except for any improvements made pursuant to Section 9.3. 12.2 Upon acceptance of the Project as provided in Section 7, the STATE shall be the sole owner of that portion of the Project located within state-owned right of way, and the STATE sha11 be solely responsible for all future operation and maintenance of the Project located within state-owned right of way at its sole cost, without expense or cost to the LOCAL AGENCY. However, if the LOCAL AGENCY has obtained or is required to obtain an air space lease, cooperative agreement, or maintenance agreement from the STATE to own, operate, or maintain a portion of the Project located within state-owned right of way, the terms of the air space lease, cooperative agreement, or maintenance agreement will control for those specified portions of the Project. 12.3 Section 12 shall survive the termination of this Agreement. Page 9 of 12 13. GENERAL PROVISIONS 13.1 Amendment: This Agreement may be amended or modified only by the mutual agreement of the Parties. Such amendments or modifications shall not be binding unless they are in writing and signed by persons authorized to bind each of the Parties. 13.2 Termination: The LOCAL AGENCY may terminate this Agreement upon written notice to the STATE. The STATE may terminate this Agreement only with the written concurrence of the LOCAL AGENCY. 13.2.1 If this Agreement is terminated prior to the fulfillment of the terms stated herein, the LOCAL AGENCY agrees to reimburse the STATE for the costs the STATE has incurred up to the date of termination, as well as the costs of non- cancelable obligations. 13.2.2 Any termination of this Agreement shall not prejudice any rights or obligations accrued to the Parties prior to termination. 13.2.3 Termination prior to completing the Project within state-owned right of way will terminate the right of the LOCAL AGENCY to complete the Project within state-owned right of way. The Contractor will be directed by the STATE to restore state facilities and right of way in accordance with Section 4. 7. This section shall survive the termination of this Agreement. 13.3 Independent Contractor: The Parties shall be deemed independent contractors for all purposes, and the employees of the Parties or any of their contractors, subcontractors, consultants, and the employees thereof, shall not in any manner be deemed to be employees of the other Party. 
13 .4 Indemnification 13.4.1 Unless a claim falls within the provisions of Section 10.2, the LOCAL AGENCY sha11 protect, defend, indemnify, and hold harmless the STATE and its employees and authorized agents and/or contractors, while acting within the scope of their employment as such, from any and all costs, claims, judgments, and/or awards of damages, arising out of, or in any way resulting from, the LOCAL AGENCY's design, inspection, and construction obligations to be performed pursuant to the provisions of its Contract or as authorized under this Agreement. The LOCAL AGENCY shall not be required to indemnify, defend, or save harmless the STATE if the claim, suit, or action for injuries, death, or damages (both to persons and/or property) is caused by the negligence of the STATE; provided that, if such claims, suits, or actions result from the concurrent negligence of (a) the STATE, its employees or authorized agents and (b) the LOCAL AGENCY, its employees, authorized agents, or contractors, or involves those actions covered by RCW 4.24.115, the indemnity provisions provided Page 10 of 12 herein shall be valid and enforceable only to the extent of the negligence of each Party, its employees or authorized agents and/or contractors. 13.4.2 Unless the claim falls within the provisions of Section 10.2, the STATE shall protect, defend, indemnify, and hold harmless the LOCAL AGENCY and its employees and authorized agents and/or contractors, while acting within the scope of their employment as such, from any and all costs, claims, judgments, and/or awards of damages, arising out of, or in any way resulting from, the STATE's construction administration and Services obligations to be performed pursuant to the provisions of this Agreement. The STATE shall not be required to indemnify, defend, or save harmless the LOCAL AGENCY if the claim, suit, or action for injuries, death, or damages (both to persons and/or property) is caused by the negligence of the LOCAL AGENCY; provided that, if such claims, suits, or actions result from the concurrent negligence of (a) the STATE, its employees or authorized agents and (b) the LOCAL AGENCY, its employees, authorized agents, or contractors, or involves those actions covered by RCW 4.24.115, the indemnity provisions provided herein shall be valid and enforceable only to the extent of the negligence of each Party, its employees or authorized agents andlor contractors. 13.4.3 The LOCAL AGENCY agrees to accept full liability for any facilities the LOCAL AGENCY has provided direction to the STATE to design and/or construct outside state-owned right of way that do not meet STATE standards. 13.4.4 Section 13.4 shall survive the termination of this Agreement. 13.5 Disputes: In the event that a dispute arises under this Agreement, it shall be resolved as follows: The STATE and the LOCAL AGENCY shall each appoint a member to a disputes board, these two members shall select a third board member not affiliated with either Pariy. The three-member board shall conduct a dispute resolution hearing that shall be informal and unrecorded. An attempt at such dispute resolution in compliance with aforesaid process shall be a prerequisite to the filing of any litigation concerning the dispute. The Parties shall equally share in the cost of the third disputes board member; however, each Party shall be responsible for its own costs and fees. 
13.6 Venue: In the event that either Party deems it necessary to institute legal action or proceedings to enforce any right or obligation under this Agreement, the Parties agree that any such action or proceedings shall be brought in Thurston County Superior Court. Further, the Parties agree that each will be solely responsible for payment of its own attorney's fees, witness fees, and costs. 13.7 Audit Records: All financial records, including labor, material and equipment records in support of all STATE costs shall be maintained by the STATE for a period of three (3) years from the date of termination of this Agreement. The LOCAL AGENCY shall have full access to and right to examine said records during normal business hours Page 11 of 12 and as often as it deems necessary, and should the LOCAL AGENCY require copies of any records, it agrees to pay the costs thereof. The Parties agree that the work performed herein is subject to audit by either or both Parties and/or their designated representatives and/or state and federal government. 13.8 Term of Agreement: Unless otherwise provided herein, the term of this Agreement shall commence as of the date this Agreement is executed and shall continue until all of the following are complete: (a) The Project and the STATE's construction administration and Services are accepted by the LOCAL AGENCY pursuant to Section 7; (b) The STATE and LOCAL AGENCY both have a reproducible copy of the final "as-built" plans; (c) All Project records are submitted to the LOCAL AGENCY pursuant to Section 4.8; and (d) All obligations for payment have been met, except for Sections 4.7, 9.3, 10.2, 13.2.3, 13.4 and all of Section 12, which survive the termination of this Agreement. IN WITNESS WHEREOF, the Parties hereto have executed this Agreement as of the Party's date last signed below. Approved as to Form � _ ._.___., ... _ — tricia A. Richardson City Attorney Date: �-� ` l � Appro d as to Form By: Ann E. Salay Assistant Attorney General Date: � - 3 -1( Page 12 of 12 XL3755 320th Ramp Widening Project CE Justification Aleta Borschowa's PEO (412340) Estimated CE Expenses Exhibit A, page 2 of 2 Base FTE $ Base FTE Overtime Overtime FTE Duratioa per 150 hr Level of FTE $ per Level of Staff / Occupation FTE (months) Month Effort Month Effort Cost Engineering Manager 1 4 $10,828.50 0.05 - - $2,165.70 Project Engineer 1 5 $9,892.50 0.20 - - $9,892.50 Assistant PE 1 5 $9,046.50 0.20 - - $9,046.50 Office Engineer 1 8 $7,765.50 0.50 $9,936.00 0.00 $31,062.00 AssistantOfficeEngineer 1 8 $7,125.00 0.50 $9,018.00 0.00 $28,500.00 Project Controller 1 8 $7,765.50 0.20 $9,936.00 0.00 $12,424.80 Senior Secretary 1 8 $4,381.50 0.20 $5,076.00 0.00 $7,010.40 DocumentControl 1 5 $6,544.50 0.30 $8,184.00 0.00 $9,816.75 Submittals Coordinator 1 5 $6,544.50 0.30 $8,184.00 0.00 $9,816.75 Change Order Writer 1 6 $7,125.00 0.50 $9,018.00 0.00 $21,375.00 CADD Engineer 1 4 $7,125.00 0.30 $9,019.00 0.00 $8,550.00 Materials Engineer 1 8 $7,125.00 0.50 $9,018.00 0.00 $28,500.00 Material Testers 1 3 $6,544.50 1.00 $8,184.00 0.25 $25,771.50 Surveying . 
1 3 $7,765.50 0.30 $9,936.00 0.00 $6,988.95 HQ / NWR Materials Support 1 8 $7,765.50 0.20 $8,184.00 0.00 $12,424.80 Structures / HQ Support 1 3 $9,892.50 0.20 $12,837.00 0.00 $5,935.50 Roadway / HQ Support 1 4 $9,892.50 0.20 $12,837.00 0.00 $7,914.00 Geotechnical Support 1 3 $9,892.50 0.20 $12,837.00 0.00 $5,935.50 Utilities Engineers Support 1 2 $7,765.50 0.20 $9,936.00 0.00 $3,106.20 ElectricalInspectors 1 2 $7,765.50 0.20 $9,936.00 0.00 $3,106.20 Electrical Design Support 1 2 $7,765.50 0.20 $9,936.00 0.00 $3,106.20 Civil Design Support - During AD 1 1 $7,765.50 0.39 $9,936.00 0.00 $3,000.20 Civil Design Support - During CN 1 4 $7,765.50 0.19 $9,936.00 0.00 $6,001.18 Construction Traffic Office 1. 4 $7,765.50 0.20 $9,936.00 0.00 $6,212.40 Fabrication Inspection 1 2 $7,765.50 0.50 $9,936.00 0.00 $7,765.50 Chief Inspector 1 5 $7,765.50 0.50 $9,936.00 0.00 $19,413.75 Project Inspector 1 5 $7,125.00 1.00 $9,01$.00 0.25 $46,897.50 LandscapeInspector � 1 12 $7,125.00 0.10 $9,018.00 0.00 $8,550.00 Communications 1 4 $6,544.50 0.10 $8,184.00 0.00 $2,617.80 Enviornmental Compliance Inspector 1 4 $7,�65.50 0.30 $9,936.00 0.00 $9,318.60 Environmental Technical Advisor 1 4 $7,765.50 0.30 $9,936.00 0.00 $9,318.60 Reimbursables (Car, Paper, etc...) 1 8 $1,500.00 1.00 - - $12,000.00 Total $383,544.78 Working Days Months 73 3.5 Assumutions 1.) 73 working days will be included in the contract. This will result in anywhere from 2-8 months of administration depending on the employee's role on the project. Landscape inspectors will need to work at least 12 months per the GCA. 2.) Field staff will need to work approximately 2 hours of overtime each shift. This will account for travel time to / from the office and the contractors hours of work. 3.) Level of effort for the assistant o�ce engineer, materials engineer, and change order writer will be higher. This is due to the fact that they will not be able to use the mainframe computer systems they utilize on WSDOT contracts. Instead, each employee will be required to create and maintain spreadsheets that are specifically for the administration of this job. � . . . , ' t . 1 I � , r GCA 6680 CITY OF FEDERAL WAY I-5/SOUTHBOUND 320 ST OFFRAMP CHANNELIZATION CONSTRUCTION ADMINISTRATION/DESIGN SUPPORT ESTIMATE — Exhibit A page 1 of 2 ESTIMATED COST Budget CN ADMINISTRATION, Incl. Design Support During AD & Construction $383,545 10% Contingencies $ 38,355 SUB TOTAL $421,900 12% Direct Project Support Redistribution Charges $ 50,630 PROJECT TOTAL $472,530 | https://docs.cityoffederalway.com/WebLink/DocView.aspx?id=422661&dbid=0&repo=CityofFederalWay | 2021-07-23T19:27:43 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.cityoffederalway.com |
cupyx.scipy.ndimage.morphological_gradient¶
- cupyx.scipy.ndimage.morphological_gradient(input, size=None, footprint=None, structure=None, output=None, mode='reflect', cval=0.0, origin=0)[source]¶
Multidimensional morphological gradient.
The morphological gradient is calculated as the difference between a dilation and an erosion of the input with a given structuring element.
- Parameters
input (cupy.ndarray) – The input array.
size (tuple of ints) – Shape of a flat and full structuring element used for the morphological gradient. Optional if
footprintor
structureis provided.
footprint (array of ints) – Positions of non-infinite elements of a flat structuring element used for morphological gradient. Non-zero values give the set of neighbors of the center over which opening is chosen.
structure (array of ints) – Structuring element used for the morphological gradient.
structuremay be a non-flat structuring element.
output (cupy.ndarray, dtype or None) – The array in which to place the output.
mode (str) – The array borders are handled according to the given mode (
'reflect',
'constant',
'nearest',
'mirror',
'wrap'). Default is
'reflect'.
cval (scalar) – Value to fill past edges of input if mode is
constant. Default is
0.0.
origin (scalar or tuple of scalar) – The origin parameter controls the placement of the filter, relative to the center of the current element of the input. Default of 0 is equivalent to
(0,)*input.ndim.
- Returns
The morphological gradient of the input.
- Return type
- | https://docs.cupy.dev/en/stable/reference/generated/cupyx.scipy.ndimage.morphological_gradient.html | 2021-07-23T20:35:32 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.cupy.dev |
Why We're Building on Polkadot¶
After extensive research, we decided to build Moonbeam using the Substrate development framework and to deploy Moonbeam as a parachain on the Polkadot network.
Substrate Blockchain Framework¶
Substrate is a good technical fit for Moonbeam. By building on top of this framework, we can leverage the extensive functionality that Substrate includes out-of-the-box, rather than building it ourselves. This includes peer-to-peer networking, consensus mechanisms, governance functionality, an EVM implementation, and more.
Overall, using Substrate will dramatically reduce the time and implementation effort needed to implement Moonbeam. Substrate allows a great degree of customization, which is necessary in order to achieve our Ethereum compatibility goals. And, by using Rust, we benefit from both safety guarantees and performance gains.
Polkadot Network and Ecosystem¶
The Polkadot network is also a good fit for Moonbeam. As a parachain on Polkadot, Moonbeam will be able to directly integrate with — and move tokens between — any other parachains and parathreads on the network.
We can also leverage any of the bridges that are independently built to connect non-Polkadot chains to Polkadot, including bridges to Ethereum. Polkadot’s interoperability model uniquely supports Moonbeam’s cross-chain integration goals and is a key enabling technology to support the Moonbeam vision.
But perhaps just as important as the technical criteria above, we are impressed with the people in the Polkadot ecosystem. This includes individuals at Parity, the Web3 Foundation, and other projects in the ecosystem. We have built many valuable relationships and find the people to be both extremely talented and the kind of people we want to be around. | https://docs.moonbeam.network/overview/why-polkadot/ | 2021-07-23T18:02:35 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.moonbeam.network |
Version: 2.0 | Date: 5th February 2021
We have been recognizing the efforts of the security research community for helping us make WSO2 products safer. To honor all such external contributions, we maintain a reward and acknowledgement program for WSO2 owned software products. This document describes the various aspects of this program:
Products & Services in Scope
At this time, the scope of this program is limited to security vulnerabilities found on the software products developed by WSO2.
This includes the following:
Out of the above listed products, only the latest released version of each product is included for the scope of this program. In addition to that, the release date of the product version should be within 3 years from the date of report.
Any live deployment of a WSO2 product, or a website (e.g. wso2.com) or any other hosting owned by WSO2, would not be included in the scope of this program.
Qualifying Vulnerabilities
Any security issue that has a moderate or higher security impact on the confidentiality, integrity, or availability of a WSO2 product would be included for the scope of the program.
Following are few common issues that we typically consider for rewarding.
- SQL or LDAP Injection
- Cross-site Scripting (XSS)
- Broken authentication and authorization
- Broken session management
- Remote code execution
- OS command execution
- XML External Entity (XXE) or XML Entity Expansion
- Path traversal
- Insecure Direct Object References
- Confidential information leakages (E.g. credentials, PII)
Kindly note that the impact calculation is solely at the discretion of WSO2.
Non-qualifying Vulnerabilities
We review reported security issues case-by-case. Following are common issues that we typically do not consider for rewarding.
- Logout Cross-site Request Forgery (CSRF)
- Missing CSRF token in login forms
- Cross domain referer leakage
- Missing HttpOnly and Secure cookie flags
- SSL/TLS related issues
- Missing HTTP security headers
- Account enumeration
- Brute-force Attacks
- Non-critical Information Leakages (E.g. Server information, stacktraces)
However, we would still consider the issues from the above categories for rewarding based on the security impact.
Rewards and Acknowledgement
To show our appreciation, we provide a reward and an acknowledgement to eligible reporters after the reported issues are fixed and announced to the WSO2 customers and the community users.
Please refer to our Vulnerability Management Process for more details about how we disclose security vulnerabilities.
We will do the following upon reporter's consent:
- Include the reporter's name in the security researcher Acknowledgements web page.
- Provide one of the following prefered by the reporter:
- Amazon gift voucher worth 50 USD (from: Amazon.com / Amazon.ca / Amazon.cn / Amazon.fr / Amazon.de / Amazon.in / Amazon.it / Amazon.co.jp / Amazon.co.uk / Amazon.es / Amazon.com.au)
- PayPal transfer worth 50 USD.
Exceptions & Rules
Following exceptions and rules apply in this program:
- You will qualify for a reward only if you are the first person to responsibly disclose an unknown issue.
- WSO2 has 7 days to provide the first response to the report. It could take up to 90 days to implement a fix based on the severity of the report, and further time might be needed to announce the fix to our customers and community users of all the affected product versions. WSO2 will keep the reporter up to date with the progress of the process.
- Posting details or conversations about the report that violates responsible disclosure, or posting details that reflect negatively on the program and the WSO2 brand, will disqualify from consideration for rewards and credits.
- All security testing must be carried out in a standalone WSO2 product running locally or a hosted deployment owned by the reporter.
- All communications must be conducted through [email protected] email only.
Offering a reward or giving credits has to be entirely at WSO2’s discretion.
Investigating and Reporting Bugs
If you have found a vulnerability, please contact us at [email protected]. If necessary, you can use this PGP key.
A good bug report should include the following information at a minimum:
Vulnerable WSO2 product(s) and their version(s)
List of URL(s) and affected parameter(s)
Describe the browser, OS, and/or app version
Describe the self-assessed impact
Describe the steps to exploit the vulnerability
Any proposed solution
We thank you for helping us keep WSO2 products and services safe ! | https://docs.wso2.com/display/Security/WSO2+Security+Reward+and+Acknowledgement+Program | 2021-07-23T18:32:54 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.wso2.com |
How to create slider page? Created November 23, 2018 Author specia Category Slider Setup For Webstrap Lite In WordPress dashboard go to pages submenu and click on Add New. You can create a Slide using page. In this fields, you can enter page title, page description, thumbnail image and read more button using custom field. Was this article helpful? Yes No | http://docs.speciatheme.com/knowledge-base/how-to-create-slider-page-9/ | 2021-07-23T18:10:40 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.speciatheme.com |
Offline Tools #
When using an unstable network connection, an application must maintain a normal behavior when it is disconnected. Our goal is to provide the right toolkit to handle such situations.
Handling a Network Disconnect #
There are two ways to handle a network disconnect:
- Automatically reconnect to Kuzzle when possible, and enter offline mode in the meantime. This is the default behavior.
- Stop all further communication with Kuzzle and invalidate the current instance and all its children. The application will have to manually reconnect once the network is available. To do so, simply set the
autoReconnectoption to
falsewhen creating the SDK instance.
Offline mode refers to the time between a
disconnected and a
reconnected event (see Events).
Subscriptions #
A subscription opens a permanent pipe between the client and Kuzzle. Whenever a real-time message or a modified document matches a subscription filter, a notification is sent by Kuzzle to the client (for instance, see the Collection.subscribe method).
While in offline mode, the Kuzzle SDK client maintains all subscriptions configurations and, by default, when Kuzzle SDK client reconnects, all subscriptions are renewed. This behavior can be changed by setting the
autoResubscribe to
false, in which case, each subscription will have to be renewed manually using the
Room.renew method.
API Requests #
While in offline mode, API requests can be queued, and then executed once the network connection has been reestablished. By default, there is no request queuing.
- Queue all requests automatically when going offline by setting the
autoQueueoption to
true(see Kuzzle SDK constructor)
- Start and stop queuing manually, by using the startQueuing and stopQueuing methods
The queue itself can be configured using the
queueTTL and
queueMaxSize options.
Filtering Requests to be Queued #
By default, when queuing is first activated, all requests are queued.
However, you can choose to omit certain request by using the
queueFilter property. This property can be set to a function that accepts the request as an input value and returns a boolean result which indicates whether or not the request should be queud.
Additionally, almost all request methods accept a
queuable option, which when set to
false, will cause the request to be discarded if the Kuzzle SDK is disconnected. This option overrides the
queueFilter property.
Handling Network Reconnect #
autoReplayto
truewhen using user authentication should generally be avoided.
When leaving offline-mode, the JWT validity is verified. If it has expired, the token will be removed and a
tokenExpiredevent will be triggered.
If
autoReplayis set, then all pending requests will be automatically played as an anonymous user.
Once a
reconnected event is fired, you may replay the content of the queue with the
playQueue method. Or you can let the Kuzzle SDK replay it automatically upon reconnection by setting the
autoReplay option to
true.
Requests are sent to Kuzzle with a
replayInterval delay between each call.
Any request made while the client is processing the queue will be delayed until the queue is empty. This ensures that all requests are played in the right order.
Taking Control of the Offline Queue #
You can be notified about what's going on in the offline queue, by using the
offlineQueuePush and the
offlineQueuePop events.
The
offlineQueuePush event is fired whenever a request is queued. It will emit an object containing a
query property, describing the queued request, and an optional
cb property containing the corresponding callback, if any.
The
offlineQueuePop event is fired whenever a request has been removed from the queue, either because the queue limits have been reached, or because the request has been replayed. It provides the removed request to its listeners.
The
offlineQueueLoader property of the Kuzzle SDK instance loads requests to the queue, before any previously queued request. It is invoked every time the Kuzzle SDK starts dequeuing requests.
This property must be set with a function that returns an array of objects with the following accessible properties:
- a
queryproperty, containing the request to be replayed
- an optional
cbproperty pointing to the callback to invoke after the completion of the request
Finally, if the provided methods don't give you enough control over the offline queue, you can access and edit the queue directly using the
offlineQueue property.
Automatic Offline-Mode #
You can set the
offlineMode option to
auto when instantiating the Kuzzle SDK instance. This sets the offline mode configuration to the following presets:
autoReconnect=
true
autoQueue=
true
autoReplay=
true
autoResubscribe=
true | https://docs-v2.kuzzle.io/sdk/js/5/essentials/offline-tools/ | 2021-07-23T18:50:39 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs-v2.kuzzle.io |
Getting started with AWS Elemental MediaConnect
This Getting Started tutorial shows you how to use AWS Elemental MediaConnect to create and share flows. The tutorial is based on a scenario where you want to do all of the following:
Ingest a live video stream of an awards show that is taking place in New York City.
Distribute your video to an affiliate in Boston who does not have an AWS account, and wants content sent to their on-premises encoder.
Share your video with an affiliate in Philadelphia who wants to use their AWS account to distribute the video to their three local stations.
Topics
Prerequisites
Before you can use AWS Elemental MediaConnect, you need an AWS account and the appropriate permissions to access, view, and edit MediaConnect components. Complete the steps in Setting up AWS Elemental MediaConnect, and then return to this tutorial.
Step 1: Access AWS Elemental MediaConnect
After you set up your AWS account and create IAM users and roles, you sign in to the console for AWS Elemental MediaConnect.
To access AWS Elemental MediaConnect
Open the MediaConnect console at
.
Step 2: Create a flow
First, you create an AWS Elemental MediaConnect flow to ingest your video from your on-premises encoder into the AWS Cloud. For the purposes of this tutorial, we use the following details:
Flow name: AwardsNYCShow
Source name: AwardsNYCSource
Source protocol: Zixi push
Zixi stream ID: ZixiAwardsNYCFeed
CIDR block sending the content: 10.24.34.0/23
Source encryption: None
To create a flow
On the Flows page, choose Create flow.
In the Details section, for Name, enter
AwardsNYCShow.
For Availability Zone, choose Any.
In the Source section, for Name, enter
AwardsNYCSource.
For Protocol, choose Zixi push. AWS Elemental MediaConnect will populate the value of the ingest port.
For Stream ID, enter
ZixiAwardsNYCFeed.
For Whitelist CIDR, enter
10.24.34.0/23.
Choose Create flow.
Step 3: Add an output
To send content to your affiliate in Boston, you must add an output to your flow. This output will send your video to your Boston affiliate's on-premises encoder. For the purposes of this tutorial, we use the following details:
Output name: AwardsNYCOutput
Output protocol: Zixi push
Zixi stream ID: ZixiAwardsOutput
IP address of the Boston affiliate's on-premises encoder: 198.51.100.11
Output encryption: None
To add an output
On the Flows page, choose the
AwardsNYCShowflow.
Choose the Outputs tab.
Choose Add output.
For Name, enter
AwardsNYCOutput.
For Protocol, choose Zixi push. AWS Elemental MediaConnect populates the value of the port.
For Stream ID, enter
ZixiAwardsOutput.
For Address, enter
198.51.100.0.
Choose Create output.
Step 4: Grant an entitlement
You must grant an entitlement to allow your Philadelphia affiliate to use your content as the source for their AWS Elemental MediaConnect flow. For purposes of this tutorial, we use the following details:
Entitlement name: PhillyTeam
Philadelphia affiliate's AWS account ID: 222233334444
Output encryption: None
To grant an entitlement
Choose the Entitlements tab.
Choose Grant entitlement.
For Name, enter
PhillyTeam.
For Subscriber, enter
222233334444.
Choose Grant entitlement.
Step 5: Share details with your affiliates
Now that you've created your AWS Elemental MediaConnect flow with an output for your Boston affiliate and an entitlement for your Philadelphia affiliate, you need to communicate details about the flow.
Your Boston affiliate will receive the flow on their on-premises encoder. The details of where to send your video stream were provided by your Boston affiliate, and you don't need to provide any other information. After you start your flow, the content will be sent to the IP address that you specified when you created the flow.
Your Philadelphia affiliate must create their own AWS Elemental MediaConnect flow, using your flow as the source. You must provide the following information to your Philadelphia affiliate:
Entitlement ARN: You can find this value on the Entitlement tab of the AwardsNYCShow flow details page.
Region: This is the AWS Region that you created the AwardsNYCShow flow in.
Step 6: Clean up
To avoid extraneous charges, be sure to delete all unnecessary flows. You must stop the flow before it can be deleted.
To stop your flow
On the Flows page, choose the
AwardsNYCShowflow.
The details page for the AwardsNYCShow flow appears.
Choose Stop.
To delete your flow
On the AwardsNYCShow flow details page, choose Delete.
A confirmation message appears.
Choose Delete flow. | https://docs.aws.amazon.com/mediaconnect/latest/ug/getting-started.html | 2021-07-23T18:50:20 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.aws.amazon.com |
.
Here’s a template you can use to structure your article well, and make it as useful as possible to readers. Make sure you set the article status from Pitch to Draft once you receive approval.
As you write your draft, try to understand and follow the style, grammar, and SEO guidelines for the Magazine. Don’t skip this step! The guidelines tell you how to use markup, write better sentences, and get your article ranked well in search engines. Also, if you write a better article, the editors are more likely to publish it sooner. | https://docs.fedoraproject.org/he/fedora-magazine/writing-an-article/ | 2021-07-23T19:14:26 | CC-MAIN-2021-31 | 1627046150000.59 | [array(['../_images/fedora-magazine-workflow-3.jpg',
'fedora magazine workflow 3'], dtype=object)] | docs.fedoraproject.org |
Cameo Safety and Reliability Analyzer
Released on:
New Generic Safety Table for the ISO 26262 Functional Safety Plugin
The ISO 26262 Functional Safety Plugin has been improved by adding a new Generic Safety Table. This table allows you to display custom elements and their properties depending on the selected scope. It will come in handy when you need to see all relevant elements extending the ISO 26262 library (e.g., Typical Automotive Situation, Accident Scenario, Hazardous Event, and Automotive Effect) in one place. The figure below illustrates how to use a Generic Safety table to display Crash Automotive Situation elements derived from an Operational Situation.
An example of a Generic Safety Table created for a custom Operational Situation.
Learn more about a Generic Safety Table >> | https://docs.nomagic.com/display/CSRA190SP4/19.0+LTR+SP4+Version+News | 2021-07-23T19:08:44 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.nomagic.com |
Image service¶
Info
Some of these instructions require use of Openstack command line tools.
Uploading an image¶
Size limit using the web interface¶
While it is possible to upload images using the Horizon web interface there is currently a 2G size limit. For larger images, using the command line tools will work or asking the openstack software to DL the image from an URL you supply.
Image format support¶
Due to how we use Ceph as the image backend store all images in the image service are converted to raw format before they are booted. This implicates that to avoid this from happening EVERY time a new instance is booted images need to be uploaded in raw format only.
- If a large image is uploaded in e.g qcow2 format the at-boot conversion of it will take a long time, making the boot process seem stuck.
- Please use external tooling to convert your images to raw before uploading.
We have plans to mitigate and better work around some of these limitations.
Downloading an image based on an existing instance¶
Downloading an image based on an existing instance currently requires running Openstack cli commands and cannot solely be done through the Horizon webui.
Create a snapshot from an existing instance (either via openstack cli or in the Horizon webui).
Note
Creating a snapshot takes some time since the whole instance disk image will be uploaded to the Glance image service over a network connection. This will be the time to get some coffee.
From the command line, list the all images in the project and verify that the snapshot is visible:
glance image-list
Use the following commmand to download the image to local storage:
glance image-download _uuid_of_previously_created_snapshot_ \ --file _local_filename_to_save_raw_image_to_ \ --progress
Launching a new instance based on an uploaded custom image¶
Uploading a raw image can be done through the command line:
glance image-create \ --file _path_to_local_raw_image_ \ --disk-format=raw \ --name _name_of_image_ \ --property architecture=x86_64 \ --protected False \ --visibility private \ --container-format bare \ --progress
After the upload have finished, verify that the image is visible:
glance image list
The uploaded image can now be used to launch new instances. In the
Horizon webui, navigate to
Compute ->
Images. The uploaded image
should now be visible in the list. Click the
Launch button to the
right of the uploaded image and fill in instance information as usual.
Changing disk type on an image¶
If your disk image comes from an existing installation of another virtualization kit, the OS on the inside might have a limited amount of drivers and specifically might be lacking the virtio drivers which are generally considered the most performing ones.
Booting without correct drivers will mostly end up in some kind of recovery mode in early boot at best, so changing the disk-type of an image is sometimes vital during import.
You will need the image ID, either by looking at the web portal or
running
glance image-list. Then change the bus type to something
you expect will work.
IDE would be the safest for older OSes, but least performant since IDE never can have multiple I/Os in flight so a guest using IDE will never issue them in parallel even if our underlying storage would handle it. The other alternatives include "scsi", "usb" and the default "virtio".
glance image-update \ --property hw_disk_bus=ide \ abcd-defg-12345678-901234-abcd
Migrations could be made with a changed bus type on an imported disk image from another system, then from within the instance you update and/or install virtio drivers and shutdown the instance, then make a new image from the virtio-capable volume and then start a normal virtio-only instance using the second image and lastly delete the first instance.
This can also be used to change the network card type, using the
property
hw_vif_model. Default is
virtio but the list also has
e1000,
ne2k_pci,
pcnet,
rtl8139 where
e1000 emulates an Intel
Pro gigabit ethernet card, and the others are different versions of
old chipsets almost never used nowadays. This is rarely needed. | https://docs.safespring.com/compute/image/ | 2021-07-23T17:52:59 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.safespring.com |
ContentAreaMode Property
ContentAreaMode - This property specifies if the content area element of RadEditor is editable IFRAME or DIV. There are two values available for the property: "Iframe" and "Div". The default value is "Iframe".
These are the main differences between Div and Iframe content area modes:
When "Iframe" mode is used the content area has a separate document, which does not automatically inherit the current page style sheets. In this mode the parent page CSS styles are copied to the document of the Iframe element by RadEditor. This might decrease the loading performance of the control if multiple styles are defined. In "Div" mode all the styles from the parent page are automatically applied to the content.
When "Div" mode is used, RadEditor will not automatically register stylesheets (e.g.,
TableLayoutCssFile) because they can break the page. If you want to use them, you should add
<link>elements to the
<head>of your page manually. When "Iframe" is used, RadEditor adds such stylesheets to the document of the
<iframe>.
You cannot edit a full HTML pagein "Div" mode because the
<html>element cannot be nested inside a
<div>element.
In "Div" mode the content area is part of the current page, which provides better support for https and document.domain cross server scripting.
The "Div" mode offers better compatibility with some screen readers.
The "Div" mode supports faster implementation of AutoResizeHeight functionality, which is based on the built-in resize implementation of the DIV element.
The "Div" content area mode functionality is based on the contentEditable property of the DIV element.All major browsers support this property now, including Firefox 3, Safari 3, Opera 10, Google Chrome, and Internet Explorer (since 5.5), however some older browsers do not.
One major difference between both modes is that in "Iframe" mode the user can insert FORM tags without breaking the page.
Div mode should be used by experienced users only who know and understand the benefits and the potential problems that may occur. | https://docs.telerik.com/devtools/aspnet-ajax/controls/editor/functionality/editor-views-and-modes/contentareamode-property | 2021-07-23T20:26:52 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.telerik.com |
Push Footer to bottom of viewport
With this example, I will show how to create a footer which stays at the bottom of the viewport unless the page content pushes it further.
- We start with a page that contains the normal ingredients:
- Right-click App and from the dialog, choose Blocks, Navigation and a navigation structure of your choice. For this example, we have chosen Brand Menu.
- Right-click Header, choose to add component after, stay with Suggested and add the Main element.
- To add the Footer, right-click Main, select to add after, choose Blocks, Footers and the Footer of your choice.
- Next, we need to push the Footer to the bottom of the viewport. For this, we need to attach a stylesheet to our document. Choose the Styles panel, click the + sign and choose to Attach CSS file. Choose the file and hit Open.
- Then we will add a few style rules to position our Footer. Click on Code view, select the HTML element, Click the + sign and choose our style sheet.
Note: Experienced users can jump to the ADDENDUM at the end of this tutorial.
- Make sure to choose the html selector in your style sheet, not the one that says _reboot.scss or similar. Then add the two style rules as shown.
Note:
overflow-y: scroll; is a legitimate way to prevent the jump between scrollbar and no-scrollbar pages. The problem with this is that the scrollbar is always showing. A better way is to replace this with a hack (back to hacks???), namely
margin-left: calc(100vw - 100%);
- Select the Body element click the + sign and choose your style sheet.
- You will notice that Wappler has chosen the ID as the selector. This is not what we want because other pages will have a different Body ID. To change this, click on style.css
- This will open our style sheet where we can change the selector
- Change #index to body, save the style sheet and close it
- Add the three style rules as per example
- Our last style rule applies to the Main element as follows
- Back in Design view, refresh the document and see that the footer has been pushed to the bottom of the viewport.
- Don’t forget to save your work.
ADDENDUM
If you are of a more adventurous type (read: experienced user), you can enter the style rules directly into the style sheet, thus saving having to go steps 6 through to 13.
- Click on File Manager to open the Project Folder, ensure that you are viewing the local version and double-click the style sheet to open it.
- Add the following style rules to the style sheet:
html { min-height: 100%; margin-left: calc(100vw - 100%); } body { display: flex; flex-direction: column; min-height: 100vh; } main { flex: 1 0 auto; }
- The saved style sheet will look like, where Wappler has added the necessary prefixes. | https://docs.wappler.io/t/push-footer-to-bottom-of-viewport/12929 | 2021-07-23T20:16:51 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.wappler.io |
ELM¶
Created on Mon Oct 27 17:48:33 2014
@author: akusok
- class
hpelm.elm.
ELM(inputs, outputs, classification='', w=None, batch=1000, accelerator=None, precision='double', norm=None, tprint=5)[source]¶
Bases:
object
Interface for training Extreme Learning Machines
Below the ‘matrix’ type means a 2-dimensional Numpy.ndarray.
add_data(X, T)[source]¶
Feed new training data (X,T) to ELM model in batches; does not solve ELM itself.
Helper method that updates intermediate solution parameters HH and HT, which are used for solving ELM later. Updates accumulate, so this method can be called multiple times with different parts of training data. To reset accumulated training data, use ELM.nnet.reset().
For training an ELM use ELM.train() instead.
add_neurons(number, func, W=None, B=None)[source]¶
Adds neurons to ELM model. ELM is created empty, and needs some neurons to work.
Add neurons to an empty ELM model, or add more neurons to a model that already has some.
Random weights W and biases B are generated automatically if not provided explicitly. Maximum number of neurons is limited by the available RAM and computational power, a sensible limit would be 1000 neurons for an average size dataset and 15000 for the largest datasets. ELM becomes slower after 3000 neurons because computational complexity is proportional to a qube of number of neurons.
This method checks and prepares neurons, they are actually stored in solver object.
confusion(T, Y)[source]¶
Computes confusion matrix for classification.
Confusion matrix \(C\) such that element \(C_{i,j}\) equals to the number of observations known to be class \(i\) but predicted to be class \(j\).
error(T, Y)[source]¶
Calculate error of model predictions..
Another option is to use scikit-learn’s performance metrics. Transform Y and T into scikit’s format by
y_true = T.argmax[1],
y_pred = Y.argmax[1].
load(fname)[source]¶
Load ELM model data from a file.
Load requires an
ELMobject, and it uses solver type, precision and batch size from that ELM object.
save(fname)[source]¶
Save ELM model with current parameters.
Model does not save a particular solver, precision batch size. They are obtained from a new ELM when loading the model (so one can switch to another solver, for instance).
Also ranking and max number of neurons are not saved, because they are runtime training info irrelevant after the training completes.
train(X, T, *args, **kwargs)[source]¶
Universal training interface for ELM model with model structure selection.
Model structure selection takes more time and requires all data to fit into memory. Optimal pruning (‘OP’, effectively an L1-regularization) takes the most time but gives the smallest and best performing model. Choosing a classification forces ELM to use classification error in model structure selection, and in error() method output. | https://hpelm.readthedocs.io/en/latest/api/elm.html | 2021-07-23T19:10:19 | CC-MAIN-2021-31 | 1627046150000.59 | [] | hpelm.readthedocs.io |
deleteRole #
Delete the provided role.
There is a small delay between the time a role is deleted and it being reflected in the search layer (usually a couple of seconds). That means that a role that was just deleted may still be returned by the
searchRolesfunction at first.
deleteRole(id, [options], [callback]) #
Options #
Return Value #
Returns the
Security object to allow chaining.
Callback Response #
Returns the id of the rold that has been deleted.
Usage #
// Using callbacks (NodeJS or Web Browser) kuzzle .security .deleteRole('myrole', function(error, result) { }); // Using promises (NodeJS) kuzzle .security .deleteRolePromise('myrole') .then((result) => { });
Callback response
"deleted role identifier" | https://docs-v2.kuzzle.io/sdk/js/5/core-classes/security/delete-role/ | 2021-07-23T18:41:35 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs-v2.kuzzle.io |
Use SageMaker Provided Project Templates
SageMaker provides project templates that create the infrastructure you need to create an MLOps solution. Currently, SageMaker offers the following project templates.
MLOps template for model building and training - This template enables you to build and train machine learning models and register models to the model registry. Use this template when you want a MLOps solution for building and training models. This template provides the following resources:
An AWS CodeCommit repository that contains sample code that creates SageMaker pipeline in python code and shows how to create and update a SageMaker pipeline. This repository also has a Python Jupyter notebook that you can open and run in SageMaker Studio.
A CodePipeline that has source and build steps. The source step points to the CodeCommit repository and the build step gets the code from that repository, creates and/or updates the SageMaker pipeline, starts a pipeline execution, and waits for the pipeline execution to complete.
An Amazon S3 bucket to store artifacts, including CodePipeline and CodeBuild artifacts, and any artifacts generated from the SageMaker pipeline runs.
In a collaborative environment with multiple Studio users working on a same project, we recommend creating this project. After data scientists experiment in SageMaker Studio and check their code into CodeCommit, model building and training happens in the common infrastructure so that there is a central, authoritative location to keep track of the ML Models and artifacts that are ready to go to production.
MLOps template for model deployment - This template deploys machine learning models from the Amazon SageMaker model registry to SageMaker hosted endpoints for real-time inference. Use this template when you have trained models that you want to deploy for inference. This template provides the following resources:
An AWS CodeCommit repository that contains sample code that deploys models to endpoints in staging and production environments.
A CodePipeline that has source, build, deploy to staging, and deploy to production steps. The source step points to the CodeCommit repository, the build step gets the code from that repository, generates AWS CloudFormation stacks to deploy. The deploy to staging and deploy to production steps deploy the AWS CloudFormation stacks to their respective environments. There is a manual approval step between the staging and production build steps, so that a MLOps engineer must approve the model before it is deployed to production.
There is also a programmatic approval step with placeholder tests in the example code in the CodeCommit repository. You can add additional tests to replace the placeholders tests.
An Amazon S3 bucket to store artifacts, including CodePipeline and CodeBuild artifacts, and any artifacts generated from the SageMaker pipeline runs.
This template recognizes changes in the model registry. When a new model version is registered and approved, it automatically triggers a deployment.
MLOps template for model building, training, and deployment - This template enables you to easily build, train, and deploy machine learning models. Use this template when you want a complete MLOps solution from data preparation to model deployment.
This template is a combination of the previous 2 templates, and contains all of the resources provided in those templates. | https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects-templates-sm.html | 2021-07-23T20:35:02 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.aws.amazon.com |
Get the value of a statistical variable at a given place and time
Given a list of DCIDs representing Place objects, a StatisticalVariable, and optionally a date, get the measurements of the specified variable in the specified places with a date (if specified).
General information about this formula
Formula:
=DCGET(dcids, variable, date)
Required arguments:
dcids: A list of
Placenodes, identified by their DCIDs.
variable- The StatisticalVariable whose measurement is sought.
Optional arguments:
date- The date or dates of interest. If this argument is not specified, the API will return the latest observation of the variable.
Assembling the information you will need to use this formula
This endpoint requires the arguments
dcids and
variable. DCIDs are unique node identifiers defined by Data Commons. Your query will need to specify the DCIDs for the parent places of interest. You are also required to specify the statistical variable whose measurement you seek. Statistical variables are the metrics tracked by Data Commons.
You may choose to specify the
date argument. You may specify this argument as a single value, a row, or a column. All dates must be in ISO 8601 format (e.g. 2017, “2017”, “2017-12”) or as a Google sheets date value.
Returns
The value of the variable at those places on the specified date (or on the latest available date, if no date is specified).
NOTE:
It’s best to minimize the number of function calls to
DCGETby using a single call to get a variable for a row/column of places and/or a column/row of times. This is because a spreadsheet will make one call to a Google server per custom function call. If your sheet contains thousands of separate calls to
DCGET, expect it to be slow.
You can find a list of StatisticalVariables with human-readable names here.
Examples
Before trying this method out, make sure to follow the setup directions in the main section for Sheets docs.
Get the total population of Hawaii in 2017.
=DCGET("geoId/15", "Count_Person", 2017)
Get the population of multiple places with a single function call.
Input
Output
Get the population of a single place in multiple years.
Input
Output
Get the median age of multiple places in multiple years.
With places as a column and dates as a row:
Input
Output
With places as a row and dates as a column:
Input
Output
Error outputs
If you provide an invalid DCID, the API returns an error:
If you provide a nonexistent statistical variable, the API returns a blank value:
If you provide an invalidly formatted date, the API returns a blank value:
If you fail to provide all required arguments, you will receive an error:
| https://docs.datacommons.org/api/sheets/get_variable.html | 2021-07-23T17:52:44 | CC-MAIN-2021-31 | 1627046150000.59 | [array(['/assets/images/sheets/sheets_get_variable_input.png', None],
dtype=object)
array(['/assets/images/sheets/sheets_get_variable_output.png', None],
dtype=object)
array(['/assets/images/sheets/sheets_get_variable_one_place_multiple_years_input.png',
None], dtype=object)
array(['/assets/images/sheets/sheets_get_variable_one_place_multiple_years_output.png',
None], dtype=object)
array(['/assets/images/sheets/sheets_get_variable_places_column_years_row_input.png',
None], dtype=object)
array(['/assets/images/sheets/sheets_get_variable_places_column_years_row_output.png',
None], dtype=object)
array(['/assets/images/sheets/sheets_get_variable_places_row_years_column_input.png',
None], dtype=object)
array(['/assets/images/sheets/sheets_get_variable_places_row_years_column_output.png',
None], dtype=object)
array(['/assets/images/sheets/sheets_get_variable_nonexistent_dcid.png',
None], dtype=object)
array(['/assets/images/sheets/sheets_get_variable_nonexistent_statvar.png',
None], dtype=object)
array(['/assets/images/sheets/sheets_get_variable_incorrect_date.png',
None], dtype=object)
array(['/assets/images/sheets/sheets_get_variable_incorrect_args.png',
None], dtype=object) ] | docs.datacommons.org |
Add-ons
- What kind of reporting is available?
- Leaky Paywall - Login Redirect
- Create custom landing pages with the Leaky Paywall Lead In add-on
- Grow subscriptions by blocking incognito browser use
- Sell premium archives with our Premium Archive Access add-on
- What happens if I cancel my Leaky Paywall premium add ons?
- How to manage large Group Accounts using Leaky Paywall Bulk Import
- How to pull subscriber reports
- Leaky Paywall - Slack
- Case study: How to deliver custom content to your subscribers
- How to sell a Premium+ locked down level subscription with unique messaging
- How to redirect users upon login
- Publishers Service Associates <> Leaky Paywall integration | https://docs.zeen101.com/category/119-add-ons/2?sort=popularity | 2021-07-23T19:54:27 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.zeen101.com |
BMC Discovery end-to-end process
The end-to-end process for getting started with BMC Discovery is as follows:
- Logging in to the system
- First steps in securing BMC Discovery
- Performing an initial discovery run
- Installing the Windows proxy manager and proxies
- Examining scan results
- Rescanning with credentials
- Performing a cloud discovery run
- Excluding ranges from discovery
- Scheduling discovery
- Enabling other users
- Where to go from here
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/discovery/121/bmc-discovery-end-to-end-process-951189183.html | 2021-07-23T19:02:10 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.bmc.com |
Getting Started with XenCenter
Exploring the XenCenter workspace
About Citrix Hypervisor Licensing
-
Connecting and Disconnecting Servers
-
Changing Server Properties
Changing the Control Domain Memory
Exporting and Importing a List of Managed Servers
-
-
-
-
Importing and Exporting VMs
-
-
-
-
Failback
- restores VMs and vApps from replicated storage back to a pool on your primary site. Failback occurs when the primary site comes back up after a disaster event. To fail back VMs and vApps to your primary site, use the Disaster Recovery wizard.
Important:
The Disaster Recovery wizard does not control any storage array functionality. Disable duplication (mirroring) of the metadata storage and the storage used by the VMs which are to be restored, break mirroring before you attempt to recover data. This action gives the primary site.
Choose the VMs and vApps that you want to restore. Use the Power state after recovery option to specify whether to start the restored VMs and vApps automatically. Alternatively, you can wait and start the VMs and vApps manually after failback is complete.
Click Next to progress to the next wizard page and begin failback prechecks.
The wizard performs pre-checks before starting failback. For example, the wizard ensures can take some time depending on the number of VMs and vApps you are restoring.
When the failback is complete, click Next to see the summary report.
- Click Finish on the summary report page to close the. | https://docs.citrix.com/en-us/xencenter/current-release/dr-failback.html | 2021-07-23T18:11:28 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.citrix.com |
This functionality is currently in the development stage.
A smart contract must have the entry point `void __apply()`. Alternatively, if you include the header from `x86-64/contracts/contract_base.hpp`, you can use the macro `MAIN(<contract_name> FUNC(<function1_name>) FUNC(<function2_name>))`, which generates the `__apply` function as the entry point. This macro supports a maximum of 10 functions. There are no other requirements for the rest of the smart contract code, though the approach used in the examples is encouraged. Several limitations apply to the C++ language in order to keep the bytecode small and portable.
Several standard classes and functions are not supported:
floating point types and operations
dynamic linked libraries and runtime loading of the dynamic library
C++ and C standard libraries are not supported fully on all the C++ toolchains. For more information about STL support see Using C++ STL library.
RTTI
exceptions
constructors of global variables are not called on the program startup
Several standard classes and functions are provided as a part of the development environment:
string, vector, hashmap
cryptography functions (to be implemented)
classes/functions for accessing chain
classes/functions for accessing persistent storage
Contract parameters can be retrieved with the get_parameters variadic function, which should return false on error. The contract return value can be set using the set_return_values variadic function. These functions can be called directly or through the MAIN macro, as in the example below.
Headers for these classes and functions are located in x86-64/contracts/.
The following smart contract, written in C++, implements a simple program for a custom token.
#include "events.hpp"#include "string.hpp"#include "parameters.hpp"#include "return_value.hpp"#include "contract_base.hpp"#include "db_types.hpp"#include "db_hashmap.hpp"namespace x86_64_contract{EVENT(transfer, string, string, uint64_t);class contract : public contract_base{private:string _name{"Contract"};DB_STRING(_owner);DB_UINT64(_total_supply);DB_UINT64(_max_supply);DB_HASHMAP(db_string, db_uint64, _balances);public:void constructor() override{_owner = get_origin_sender();_total_supply = 0;_max_supply = 1'000'000'000;}std::uint64_t total_supply() const{return _total_supply;}std::uint64_t balance_of(const string& account){if(db_hashmap<db_string, db_string>::npos != _balances.find(account)){return _balances[account];}return 0;}bool mint(std::uint64_t amount){if(get_origin_sender() == _owner && _total_supply + amount <= _max_supply){_total_supply += amount;_balances[_owner] += amount;return true;}return false;}bool transfer(const string& from, const string& to, std::uint64_t amount){if(get_origin_sender() == from && _balances[from] >= amount){_balances[from] -= amount;_balances[to] += amount;EMIT(transfer, from, to, amount);return true;}return false;}};MAIN(contract, FUNC(total_supply) FUNC(balance_of) FUNC(mint) FUNC(transfer))}
This smart contract implements a simple token. Each of the functions declared in MAIN can be invoked from the chain via the call_contract operation, and the result will be available to the user. The constructor is invoked automatically during contract creation.
DB classes allow saving variables in persistent storage, so their values remain accessible during future invocations of the contract. These variables' values can be retrieved via the call_contract_no_changin_state operation. The following types support persistence:
DB_UINT8(var)
DB_UINT16(var)
DB_UINT32(var)
DB_UINT64(var)
DB_INT8(var)
DB_INT16(var)
DB_INT32(var)
DB_INT64(var)
DB_STRING(var)
DB_BOOL(var)
DB_VECTOR(type, var)
DB_HASHMAP(key_type, value_type, var)
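As a minimal illustrative sketch (reusing only the macros and helpers already shown in the example above; the contract and field names here are assumptions, not part of the official samples), a contract that persists a counter and a per-account map could declare and update its state like this:

#include "string.hpp"
#include "contract_base.hpp"
#include "db_types.hpp"
#include "db_hashmap.hpp"

namespace x86_64_contract
{
    class counter_contract : public contract_base
    {
    private:
        // Values declared through DB_* macros are written to persistent storage
        // and survive between contract invocations.
        DB_UINT64(_calls_total);                          // overall call counter
        DB_HASHMAP(db_string, db_uint64, _calls_by_user); // per-sender call counter

    public:
        std::uint64_t bump()
        {
            _calls_total += 1;
            _calls_by_user[get_origin_sender()] += 1;
            return _calls_total;
        }
    };

    MAIN(counter_contract, FUNC(bump))
}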
The events mechanism is supported in x86-64 contracts. An event signature EVENT(event_name, variable_type1, variable_type2) is defined in the contract namespace and can later be emitted in the contract functions. The event will be saved in the contract result:
get_contract_result 1.11.4
[
  1,
  {
    "contract_id": "1.10.28",
    "result": {
      "error": "none",
      "gas_used": 41844,
      "output": "f72e000000000000",
      "logs": [
        {
          "hash": "806a86f5c5dccc502bdc081bb15f6e3f3cda72d005c0e964bce0baa40c06169d",
          "log": "06000000312e322e323606000000312e322e3235a0fd000000000000",
          "id": 0
        }
      ]
    }
  }
]
where
logs - array of log entries, where every log consists of:
hash - the keccak256 hash of the event. This value represents the hash of the signature event_name(bool,string,uint64), written without spaces between the variable types.
log - the hex-encoded log data.
id - the id of the contract that generated the log.
Different types of errors can occur during the execution of the contract. Errors are recorded in the result logs:
get_contract_result 1.11.25
[
  1,
  {
    "contract_id": "1.10.28",
    "result": {
      "error": "memory_invalid_access",
      "gas_used": 41844,
      "output": "f72e000000000000",
      "logs": []
    }
  }
]
where
error - the type of error; none if the contract finished execution successfully.
Errors types:
unknown
contract_error
out_of_gas
log_limit_exceeded
output_limit_exceeded
no_available_memory
invalid_register
unsupported_instruction
unsupported_modrm_sib
unexpected_operation
division_by_zero
memory_invalid_access
zero_size_allocation
operand_invalid_access
not_heap_memory
incorrect_parameters
invalid_chain_call
incorrect_emulator_load
The C++ smart contract should be compiled and linked into an ELF or Mach-O executable using a standard toolchain.
gcc
For optimization, -O1 or -Os can be used instead of -O0; -Os is recommended.
gcc linker flags:
-nostdlib -fno-rtti -fno-exceptions -fno-unwind-tables -fno-pie -static -e __apply -Wl,--gc-sections
clang
-Wno-inline-new-delete -fno-stack-protector
For optimization, -O1 or -Os can be used instead of -O0; -Os is recommended.
clang linker flags:
-nostdlib -fno-rtti -fno-exceptions -fno-unwind-tables -fno-pie -static -e ___apply
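Putting the flags above together, a single-step clang build might look roughly like the following. This is only a sketch: the source and output file names are assumptions, compiler flags beyond those listed above may differ in your environment, and some toolchains may require passing the entry point as -Wl,-e,___apply instead of -e ___apply:

clang -Os -Wno-inline-new-delete -fno-stack-protector \
    -nostdlib -fno-rtti -fno-exceptions -fno-unwind-tables -fno-pie -static \
    -e ___apply contract.cpp -o contract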
The 64 bit ELF or Mach-O executable generated by the C++ linker tool should be repackaged, uploaded and executed using the general flow for the x86-64 smart contracts described in the corresponding sections. | https://docs.echo.org/advanced/x86-64-virtual-machine/c++-smart-contracts | 2021-07-23T19:34:22 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.echo.org |
useradd: /etc/passwd.8: lock file already used
useradd: cannot lock /etc/passwd; try again later.
OpenShift Pipelines Technology Preview (TP) 1.0 is now available on OpenShift Container Platform 4.4. If you are upgrading from an older version of OpenShift Pipelines, you must delete your existing deployments before upgrading to OpenShift Pipelines version 1.0. To delete an existing deployment, you must first delete Custom Resources and then uninstall the OpenShift Pipelines Operator. For more details, see the section on uninstalling OpenShift Pipelines.
Create & Send Emails
Emails, specifically Ad Hoc emails, are what you can send to groups of people who have accounts in your Tilma website and their personal subscription settings are set to receive communications from you.
In the main CMS side navigation, find and click on Emails.
When you open up the Emails tool, you will see three tabs:
- Ad Hoc
Ad Hoc
These are the emails that you create to send about on any subject or for any reason. For example, this could be a weekly email to the entire parish containing a link to download this weeks’ bulletin, or could be an email targeted only to parents of children participating in PREP.
Ad Hoc emails can be prepared and scheduled to be sent at any time.
The Welcome email is the automated email that is sent to individuals when a new Tilma account is created by a parish. This might be if a Tilma admin sets up an account on a parishioner’s behalf, or as a result of making many accounts as a result of importing a CSV of parishioner names, email addresses, and, optionally, giving envelope numbers. Go to Import CSV List for more about importing your list of parishioners.
The content in the body of this email can be modified to suit the needs of each parish.
Ad Hoc
In the Ad Hoc tab, there two sub-tabs:
- Drafts
- Sent
Drafts
In Drafts, you will see a list, or index, of all emails in the process of being drafted and not yet sent. You can click on an existing draft to open it up and continue editing so it can be queued up to be sent.
You can also create a new email by clicking the “ New email” button. This will open up a brand new email draft.
Sent
In Sent, you see all the emails that had previously been sent. You can click on the emails to view the settings and the content.
Each email in both Drafts and Sent has an ellipsis (...) menu. Hovering over this menu will activate the menu, which contains the “Duplicate email” item to create an exact duplicate of the email in its current state.
Create New or Edit Emails
Click the “New email” button to create a new email, or click on an existing draft to edit. You’re presented with two Panels:
- Settings
- Editor
Settings
The Settings panel is where you add all of the various settings for your email, and it’s divided into three sections.
Email Title – this is the name of the email you will see in the index of draft or sent emails. This is just for you to be able to identify each email and is not seen by the public.
To: Subscription List or Filter / Segment This List – this is the choice for the audience your email will go to.
To: Subscription List
The dropdown menu will let you select from the series of preset subscription lists that each parishioner has the choice to subscribe to in the personal profile.
Filter / Segment This List
This is where you can use search queries of People to send emails to people based on account information. This can be really useful for segmenting, or grouping, people based on personal data or other related info.
For example, if you’re sending an invite to married couples to take part in a weekend marriage retreat, you can find all the people in the parish who are married and send the email to them. Or you can find all the parents of PREP-aged children to let them know how to register for the next set of PREP classes with a link to the registration form on your Tilma website.
If the data is in a person’s profile, that can be used to create a query, meaning you can communicate directly to the right people at the right time. Go to Create & Edit People Queries to find out how to create or modify your own queries.
Schedule – Using the date picker, you can choose with day and time in the future to send the email.
Content
Subject Line – this is the subject line that will display in a recipient’s inbox. Write something concise and catchy to guarantee this email will get opened. This article might help.
Preview Text – some email apps, especially on mobile devices -- which is where the vast majority of how emails are read -- will display some preview text. This is where you can give a sneak peak of what the recipient will gain by opening the email.
Custom Banner – this is the banner, or header image, for your email. You can use the default banner and transparent overlay as set in your Site Settings, or you can choose to override the defaults. You can choose to:
- Use Defaults: use the default banner and transparent overlay as set in your Site Settings.
- Upload a new background image and keep the existing transparent overlay, or no overlay at all.
- Upload a new transparent overlay to appear over the default background.
- Upload a new a new background and transparent overlay
Banner Image (Optional)
Choose a photo to be the main banner for this email. We recommend avoiding text in your photo because it could be cropped. We recommend using images that are 3200 pixels wide by 1250 pixels tall. No matter what size you upload, your image will automatically be fitted into the allotted space.
Banner Overlay (Optional)
Choose an image to go on top of the banner image, like a logo or emblem. Anything you upload will not be cropped. We recommend using transparent .png files that are 1600 pixels wide.
Test Email
Send out tests of your email before sending to the chosen recipient list to make sure it looks okay and that links are working properly. You can add up to five email addresses for test recipients, each separated with a comma. Click the “Save & Send Test” button to send the email to the test recipients.
Editor
In the Editor panel is where you add your content for the email itself. Go to Intro to Content Editing for more information on how to use the live page editor.
In the body of the email, you can type or paste in your text, the same as any page or resource.
However, emails have some restrictions on what content can be included. The Quick Insert Tool will only allow you to add:
- Photos – Place images on page, including adding captions and alt text
- Bullet points – create new sets of bullets, both Unordered and Ordered (numbered)
- Horizontal lines – separate sections on a page with a full-width line
- Insert Content Block – insert links to other content in your site, such as Resources articles, in attractive cards
- Button insert – add "Call to Action" buttons in the body and customize the link
Save Draft
If you’re not ready to send your email, but want to save your draft for editing later, you can click the “Save Draft” link in the top-left corner. While your progress is constantly auto-saved at fairly regular intervals, clicking this link will save your work up to the second. After clicking, you will be directed back to the index of draft emails.
Preview
Clicking the Preview button in the top-right corner will display how the email should look and how links work in a recipient’s desktop email app.
Sending Email
Once you’re ready to send your email, click the “Ready to send” button, and that will queue your email to be sent at the day and time you set in the Settings panel.
If you hover your cursor over the little dropdown arrow on the button, you reveal more options:
- Delete – deletes your draft email
- Update & Send Test – saves your progress and sends a test to whatever email address is saved in the Test Email field in Settings. | https://docs.tilmaplatform.com/article/11-create-send-emails | 2021-07-23T19:29:23 | CC-MAIN-2021-31 | 1627046150000.59 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e311a942c7d3a7e9ae6e6f5/images/6026e01c661b720174a6cbd6/file-tFmYdzz8B6.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e311a942c7d3a7e9ae6e6f5/images/6026e0ba0a2dae5b58faf4a8/file-uMyGWe4QT0.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e311a942c7d3a7e9ae6e6f5/images/6026e3548502d1120e9070ec/file-458c6Wz9tI.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e311a942c7d3a7e9ae6e6f5/images/6026e3698502d1120e9070ed/file-9aeeNjSgL5.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e311a942c7d3a7e9ae6e6f5/images/6026e3a1661b720174a6cbef/file-keIJpdPyLO.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e311a942c7d3a7e9ae6e6f5/images/6026e3bd24d2d21e45ed5fcc/file-uGUhWfkchd.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e311a942c7d3a7e9ae6e6f5/images/6026e3e2661b720174a6cbf1/file-GKhS78nrgz.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5e311a942c7d3a7e9ae6e6f5/images/6026e3fb0a2dae5b58faf4c1/file-DsJurM8lRy.png',
None], dtype=object) ] | docs.tilmaplatform.com |
Unity Collaborate constantly monitors the changes team members make to your Project. When a team member saves changes to a Scene or Prefab on their local machine, Collaborate notifies you that the Asset has been changed by displaying an In-Progress badge on the Asset.
To view who is working on the Scene or Prefab, hover over the badge.
For Scenes, the In-Progress badge is visible in two locations in the Unity Editor:
On the file icon in the Project browser.
In the Hierarchy window.
For Prefabs, the In-Progress badge is also visible in two locations in the Unity Editor:
On the file icon in the Project browser.
In the Inspector window.
When a team member publishes their changes, Unity removes the icon and counter. | https://docs.unity3d.com/2019.3/Documentation/Manual/CollaborateInProgress.html | 2021-07-23T20:21:32 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.unity3d.com |
Free Google Firebase Hosting in Wappler!
What is Google Firebase Hosting
We always got many questions about hosting of your web site or app. Specially for new users hosting can be scary in the beginning. There are so many options with different prices and specs, some more technical other less. So it is easy to get confused.
Not any more! Meet Google Firebase Hosting - the free hosting to get you started with Wappler. In just a few clicks you will have your web site or app up and running live - at no additional hosting costs!
Google Firebase Hosting makes it really easy to get started and it has a very generous free tier called Spark Plan that for the most sites will be actually more than enough.
Of course you can’t run Server Side languages as PHP and build complex backends, but as we are moving towards to more client side dynamic web apps and mobile apps - it will just do fine!
Getting started with Firebase Hosting
To get you started with Firebase Hosting, first thing you need to do is just create a Firebase Project
Create a Firebase Project with Google
Just go to and click on the “Get started”, sign in with your Google account and you will see the Google Firebase Console - the online project manager.
Once at the Firebase Console - you can just create a new project:
Enter your project name and pay special attention to the project ID below it! This will become your live web site name with the addition of .web.app to it!
So in this example your website will be my-great-wappler-project.web.app
Do note that you can also add any custom domain to it - so you are not bound to that name but it is a nice way to start. The default name or your custom domain get automatic SSL certificates, so you don’t have to do anything to build secure sites.
In the next step, you can enable Google Analytics if you want, but I will switch it off.
When the project is ready you will see:
And you are done!
You will see the Firebase Project Manager (Console) when you click on continue.
But you don’t have to do anything there for now - you can go to Wappler
Creating a Wappler Project connected to the Google Firebase Hosting
Let’s open Wappler now and go to it’s Project Manager.
Choose there to create a new project and choose one of the default templates to get started:
In the new project settings, just give your project a name (1), choose a folder locally to store it (2), choose the hosting type “Google Firebase Hosting” (3) and specify the Firebase Project ID (4) that we used to create the Firebase Project.
If you don’t remember your project ID - just click on the Manage Firebase Projects to go to the Firebase Manager, where you can find the project ID in the project settings.
The first time your create a Firebase Hosting project, you will be prompted to run a system check if you have NodeJS and to install Firebase Tools if needed. So choose Yes:
When done you will see:
The first thing to do when connecting you Firebase Projects is so also sign in with Google within Wappler, So choose Yes again.
You will be redirected to the web browser so you can choose your Google account and sign in.
After that go back to Wappler and you should see:
Developing and Testing Locally
Firebase Hosting has a nice built-in local server that you can just start:
You will see it running. You can either refresh your Design View or open the link in the browser to test.
You will see the dynamic data rendered once you refresh, and also the calls made to the local web server:
You can easily switch between the Output pane and the Local Web Server pane:
Or click on the “X” to close the Local web server if you don’t need it.
Deploy Your Site Live!
You can do any modifications of the web site your want and then save the pages.
When you are ready to deploy your site to live, just click on the “Deploy” button.
When it is all done, you will see:
You can just click on the link to see your site live in the browser!
Note your site is available under both names project_id.web.app or project_id.firebaseapp.com or later on also under custom domain if you assign it one.
Congratulations, you just deployed your first Wappler Firebase Hosted Site!
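For reference, the Deploy button drives the Firebase Tools CLI that Wappler installed earlier, so the equivalent manual deploy from a terminal would look roughly like this (the project ID is the example one used above; adjust it to your own):

firebase deploy --only hosting --project my-great-wappler-project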
Managing your Firebase Web Site
When you want you can always jump to the Firebase Project Manager Console, to check the settings and usage of your Firebase hosted site. Just go to
Click on your project and then Hosting.
Domain names
You will see here do domain names your site is available on and asiign a custom domain if you want.
You will also see the latest deploys and even revert them if something goes wrong!
Usage
You can also keep an eye on the web site usage to see how many visitors you got and the traffic you have. And see how it fits the free tier.
Database Usage?
As you noticed, Google Firebase Hosting currently supports static HTML files and no server-side PHP code.
But don’t worry - soon we will be adding also Client Side Database and support for Google Firestore / Realtime Database - so you will have your database connections right where you need them to build even more sophisticated web sites and apps! | https://docs.wappler.io/t/free-google-firebase-hosting-in-wappler/17547 | 2021-07-23T18:50:16 | CC-MAIN-2021-31 | 1627046150000.59 | [array(['https://community.wappler.io/uploads/default/original/3X/8/3/83d8c7daa1148f42297a44d69bd90e6f04c748f0.png',
'image'], dtype=object)
array(['https://community.wappler.io/uploads/default/original/3X/1/1/114079328daa361ee15b34d16a9ee94cdfad2642.png',
'image'], dtype=object)
array(['https://community.wappler.io/uploads/default/original/3X/8/3/83fb95cc3f2679d9b2658d4d976b672bad1af3dd.png',
'image'], dtype=object)
array(['https://community.wappler.io/uploads/default/original/3X/7/9/79b2cc620a25876357dd72c653b6af6201e5df19.png',
'image'], dtype=object)
array(['https://community.wappler.io/uploads/default/original/3X/7/a/7a20883d3325fc2333966235961563440b97c3c4.png',
'image'], dtype=object)
array(['https://community.wappler.io/uploads/default/original/3X/0/0/00860645f4c6bbd993a0fa4744b62c48c82f6191.png',
'image'], dtype=object)
array(['https://community.wappler.io/uploads/default/original/3X/c/b/cb37416a8c89ec0b013d43be1a12bac116fd822e.png',
'image'], dtype=object)
array(['https://community.wappler.io/uploads/default/original/3X/6/4/645b1adcc8120c988788a05ae40aace2e4ea6118.png',
'image'], dtype=object) ] | docs.wappler.io |
Webix provides extensive scrolling management tools.
By default, Webix uses native browser scrollbars. In Webix Pro edition custom Webix-made scrollbars are available.
By default, a component features vertical scrolling. It can be modified via the dedicated scroll property that may take the following values:
webix.ui({ view:"list", id:"mylist", scroll:"x", //"y", "xy", "auto", false // ... config });
Some components have the scrollX and scrollY properties that take boolean values to enable/disable the specified scrolling direction. Check the API Reference for details.
With dynamic loading, only part of the stored records are loaded into the component during initialization. Each time you scroll down/up the component, a data request is sent to the server to load the next portion of data.
Read more about dynamic loading.
If the dataset is too long, it seems nice to have the ability to scroll to a definite part automatically with a click of a button.
Here you have several possibilities:
1. Make use of the scrollTo(x,y) method of a component:
The function is called from a scrollview object and takes the horizontal and vertical position of the necessary line into it. If you state this position as the ID of a button, the function will look like this:
{ view:"scrollview", id:"verses", body:{/* content */} }, { view:"button", value:"Imitation of Spenser", id: "130", width:250, click:function(id){ $$("verses").scrollTo(0, id*1); // scrolls to the position at top:130px } }
Scroll position can be calculated during development state with the getScrollState() method that returns an object with X and Y parameters.
For touch devices you can also set the time for the scrollTo() function to perform. Define the scrollSpeed property value in milliseconds:
webix.ui({ view:"list", scrollSpeed:"100ms", ... });
2. Scrolling via focusing on the necessary view within the scrollview.
Here the ID of a button should be connected to the ID of the row in the scrollview:
{ view:"button", value: "Verse 1", id: "1", click:scroll}, { view:"button", value: "Verse 2", id: "2", click:scroll},
Then the showView() method sets focus to the scrollview item with the corresponding ID.
webix.ui({ view:"scrollview", id:"verses", body:{ rows:[ { id:"verse_1", template:"...Some long text"}, { id:"verse_2", template:"...Some long text"} ] }); function scroll(id){ $$("verses").showView("verse_"+id); }
Related sample: Scrollview
3. Make the view show the definite item specified by its ID:
$$("mylist").showItem(5); // the list should be scrollable
Related sample: Horizontal List
Within Datatable scrolling can be done with the following methods:
Scroll state is defined as the combination of a top and left scroll position (how much is the component scrolled from its top and left border). In case you have either a horizontal or vertical scrollbar, the scroll state includes only one value - X or Y respectively.
The current scroll position can be derived with the getScrollState() method:
var scPos = $$("mylist").getScrollState(); // returns data as ({x:30,y:200}) var posx = scPos.x; // 30 var posy = scPos.y; // 200
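For instance, combining the two methods documented on this page, you can remember a scroll position and restore it later (the component id "mylist" is just an assumption for the sketch):

// remember the current position
var state = $$("mylist").getScrollState();   // e.g. {x:0, y:340}

// ... later, after data is reloaded or the view is repainted ...

// jump back to the remembered position
$$("mylist").scrollTo(state.x, state.y);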
The feature is available in Webix Pro edition only.
Webix offers custom scrollbars to replace native browser ones. The advantages are as follows:
Related sample: Custom Scroll
The feature is provided by a separate CustomScroll module that you need to enable before use.
Make sure you wrap everything into the webix.ready() function, that is executed after page loading:
webix.ready(function(){ // enabling CustomScroll if (!webix.env.touch && webix.env.scrollSize) webix.CustomScroll.init(); // your webix app webix.ui({ ... }); }); | https://docs.webix.com/desktop__scroll_control.html | 2021-07-23T19:02:47 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.webix.com |
This tutorial uses the WSO2 API Manager Tooling Plug-in.
This tutorial explains how to map your backend URLs to the pattern that you want in the API Publisher.
Before you begin, note that a mock backend implementation is set up in this tutorial for the purpose of demonstrating the API invocation. If you have a local API Manager setup, save this file in the <APIM_HOME>/repository/deployment/server/synapse-configs/default/api folder to set up the mock backend.
Log in to the API Publisher, design a new API with the following information, click Add and then click Next: Implement >.
The Implement tab opens. Give the information in the table below.
Click Next: Manage > to go to the Manage tab and select the Gold tier.
Create a sequence file named TestSequence.xml with the following content:
<sequence xmlns="http://ws.apache.org/ns/synapse" name="TestSequence">
    <property name="REST_URL_POSTFIX" scope="axis2" action="remove"/>
</sequence>
Click Import Sequence to import the sequence you created above.
- Browse to the TestSequence.xml file you created in step 4.
Your sequence now appears on the APIM perspective. Right-click on the imported sequence and click Commit File to push the changes to the Publisher server.
Log back into the API Publisher, click Edit and go to the Implement tab. Select the Enable Message Mediation check box and engage the In sequence that you created earlier.
TestSequence.xml removes the URL postfix from the backend endpoint, since the URI template of the API's resource is automatically appended to the end of the URL at runtime. Therefore the request URL is modified by adding this sequence to the In flow.
- Save and Publish the API.
You have created an API. Let's subscribe to the API and invoke it.
Log in to the API Store and subscribe to the API.
Click the View Subscriptions button when prompted. The Subscriptions tab opens.
Click the Production Keys tab and click Generate Keys to create an application access token. If you have already generated a token before, click Re-generate to renew the access token.
Click the API Console tab of your API.
Note that the businessId is added in the UI as a parameter. Give a businessId and click Try it out to invoke the API.
Note the response that you get. According to the mock backend used in this tutorial, you get the response Received Request.
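Outside the API Console you can make the same call with any HTTP client. A generic sketch with curl is shown below; the gateway host, API context, version and resource are placeholders for the values of your own API (8243 is the default gateway HTTPS port), and the token is the application access token generated earlier:

curl -k -H "Authorization: Bearer <access-token>" \
     "https://<gateway-host>:8243/<api-context>/<version>/<resource>"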
In this tutorial, you mapped the URL pattern of the APIs in the Publisher with the endpoint URL pattern of a sample backend. | https://docs.wso2.com/display/AM210/Map+the+Parameters+of+your+Backend+URLs+with+the+API+Publisher+URLs | 2021-07-23T20:03:58 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.wso2.com |
List Zsh Source Code (list)¶
list[>] [ location | . | - [ num ]]
list location [num]
List source code.
Without arguments, print lines centered around the current line. If num is given that number of lines is shown.
If this is the first list command issued since the debugger command loop was entered, then the current line is that of the current frame. If a subsequent list command is issued with no intervening frame change, then listing starts at the line after the last one previously shown.
A location is either:
- a number, e.g. 5,
- a filename, colon, and a number, e.g. /etc/profile:5,
- a “.” for the current line number
- a “-” for the lines before the current linenumber
If the location form is used with a subsequent parameter, the parameter is the starting line number is used. When there two numbers are given, the last number value is treated as a stopping line unless it is less than the start line, in which case it is taken to mean the number of lines to list instead.
Wherever a number is expected, it does not need to be a constant – just something that evaluates to a positive integer.
Examples:¶
list 5 # List starting from line 5 list 4+1 # Same as above. list /etc/profile:5 # List starting from line 5 of /etc/profile list /etc/profile 5 # Same as above. list /etc/profile 5 6 # list lines 5 and 6 of /etc/profile list /etc/profile 5 2 # Same as above, since 2 < 5. list profile:5 2 # List two lines starting from line 5 of profile list . # List lines centered from where we currently are stopped list - # List lines previous to those just shown
See also
set listsize, or show listsize, to see or set the number of source-code lines to list.
How AAA works
AAA provides security for a distributed internet environment by allowing any client with the proper credentials to connect securely to protected application servers from anywhere on the Internet. This feature incorporates the three security features of AAA. Authentication enables the NetScaler appliance to verify the client’s credentials, either locally or with a third-party authentication server, and allow only approved users to access protected servers. Authorization enables the ADC to verify which content on a protected server it should allow each user to access. Auditing enables the ADC to keep a record of each user’s activity on a protected server.
The following sections describe how authentication, authorization, and auditing work, including how the appliance interacts with an external authentication server such as LDAP.
Authentication requires that several entities: the client, the NetScaler appliance, the external authentication server if one is used, and the application server, respond to each other when prompted by performing a complex series of tasks in the correct order. If you are using an external authentication server, this process can be broken down into the following fifteen steps.
- The client sends a GET request for a URL on the application server.
- The NetScaler appliance’s traffic management virtual server redirects the request to the application server.
- The application server determines that the client has not been authenticated, and therefore sends an HTTP 200 OK response via the TM vserver to the client. The response contains a hidden script that causes the client to issue a POST request for /cgi/tm.
- The client sends a POST request for /cgi/tm.
- The NetScaler appliance’s authentication virtual server redirects the request to the authentication server.
- The authentication server creates an authentication session, sets and caches a cookie that consists of the initial URL and the domain of the traffic management virtual server, and then sends an HTTP 302 response via the authentication virtual server, redirecting the client to /vpn/index.html.
- The client sends a GET request for /vpn/index.html.
- The authentication virtual server redirects the client to the authentication server login page.
- The client sends a GET request for the login page, enters credentials, and then sends a POST request with the credentials back to the login page.
- The authentication virtual server redirects the POST request to the authentication server.
- If the credentials are correct, the authentication server tells the authentication virtual server to log the client in and redirect the client to the URL that was in the initial GET request.
- The authentication virtual server logs the client in and sends an HTTP 302 response that redirects the client to the initially requested URL.
- The client sends a GET request for their initial URL.
- The traffic management virtual server redirects the GET request to the application server.
- The application server responds via the traffic management virtual server with the initial URL.
If you use local authentication, the process is similar, but the authentication virtual server handles all authentication tasks instead of forwarding connections to an external authentication server. The following figure illustrates the authentication process.
Figure 1. Authentication Process Traffic Flow
When an authenticated client requests a resource, the ADC, before sending the request to the application server, checks the user and group policies associated with the client account, to verify that the client is authorized to access that resource. The ADC handles all authorization on protected application servers. You do not need to do any special configuration of your protected application servers.
AAA traffic management handles password changes for users by using the protocol-specific method for the authentication server. For most protocols, neither the user nor the administrator needs to do anything different than they would without AAA traffic management. Even when an LDAP authentication server is in use, and that server is part of a distributed network of LDAP servers with a single designated domain administration server, password changes are usually handled seamlessly. When an authenticated client of an LDAP server changes his or her password, the client sends a credential modify request to AAA traffic management, which forwards it to the LDAP server. If the user’s LDAP server is also the domain administration server, that server responds appropriately and AAA traffic management then performs the requested password change. Otherwise, the LDAP server sends AAA traffic management an LDAP_REFERRAL response to the domain administration server. AAA traffic management follows the referral to the indicated domain administration server, authenticates to that server, and performs the password change on that server.
When configuring AAA traffic management with an LDAP authentication server, the system administrator must keep the following conditions and limitations in mind:
- AAA traffic management assumes that the domain administration server in the referral accepts the same bind credentials as the original server.
- AAA traffic management only follows LDAP referrals for password change operations. In other cases AAA traffic management refuses to follow the referral.
- AAA traffic management only follows one level of LDAP referrals. If the second LDAP server also returns a referral, AAA traffic management refuses to follow the second referral.
The ADC supports auditing of all states and status information, so you can see the details of what each user did while logged on, in chronological order. To provide this information, the appliance logs each event, as it occurs, either to a designated audit log file on the appliance or to a syslog server. Auditing requires configuring the appliance and any syslog server that you use. | https://docs.citrix.com/en-us/netscaler/11-1/aaa-tm/ns-aaa-how-it-works-con.html | 2021-07-23T18:55:00 | CC-MAIN-2021-31 | 1627046150000.59 | [array(['/en-us/netscaler/media/aaa_authentication-overview.png',
'localized image'], dtype=object) ] | docs.citrix.com |
Last modified: January 19, 2021
Overview
For distributed cPanel accounts, the parent node controls password strength requirements for the account’s main password, as well as any services that run on the parent node. Child nodes control other passwords.
- For accounts with distributed mail, the mail child node controls the Email and Mailing List password strength settings.
Users with shell access can bypass these requirements with the passwd command.
This feature allows you to define minimum strengths for passwords for all of cPanel & WHM’s features that require password authentication. The system rates password strength on a scale of zero to 100, where 100 represents a very strong password. When you set a minimum password strength, the system automatically rounds this value up to the nearest increment of 5.
How to set minimum password strengths
To set the minimum password strengths, perform the following steps:
- To specify the default minimum password strength for features that you set to Default, use the Default Required Password Strength slider or enter a number between 0 and 100 in the appropriate text box. Note: If you use the Default Required Password Strength setting, we recommend that you set its value to 40 or greater.
- To configure a minimum required password strength for a specific feature, use that feature’s slider to specify its minimum password strength, or enter a number between 0 and 100 in the text box.
- Click Save to save your changes.
By default, this requirement only applies to new accounts. To enforce this requirement for existing accounts, you must enable the Password Strength setting in WHM’s Configure Security Policies interface (WHM >> Home >> Security Center >> Configure Security Policies). | https://docs.cpanel.net/whm/security-center/password-strength-configuration/ | 2021-07-23T18:13:53 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.cpanel.net |
Here we speak about simple drag and drop of items within one and the same component as well as between different views and their instances. The information about how to make any app element draggable is to be found in the dedicated article of the documentation.
All the components you'd like to work with must have the drag property, both source and target components. In case of several instances of one and the same component, it would be enough to enable drag-and-drop. For adequate drag-and-drop between different components study external data functionality.
The property has several values to define the drag-and-drop mode:
Related sample: Multi Drag-and-Drop in Tree
The multidrag mode (dragging of several items at a time is possible) is enabled by setting the multi selection ability within the component. You can use one of the following settings:
Treetable Multidragging
webix.ui({ view:"treetable", // treetable config multiselect:true, drag:true });
Related sample: List: Drag-and-Drop of items
In essence, drag-n-drop is a set of sequential events: first you hook the necessary component item, then drag it to the desired position and drop the item releasing the mouse button.
Therefore, the component with draggable items as well as the one with a dropping ability, gets the following events:
They are used to control the drag-n-drop process and customize it on different stages, since any event can trigger any custom function you'd like to associate this event with.
Functions attached to these events have context and native event as arguments.
Native event is a DOM event that happens during drag-and-drop, while context is an object with the following properties:
For instance, the onBeforeDrop can be used to make a copy of a dragged item the moment it's dropped while not changing its place at all:
view:"datatable", drag:true, on:{ onBeforeDrop:function(context, e){ this.getItem(context.target).title = context.source.innerHTML; //copying this.refresh(context.target); return false; //block the default behavior of event (cancels dropping) } }
Related sample: Drag-and-Drop from HTML
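In the same way you can block individual drop targets. The sketch below assumes a list whose items may carry a custom locked flag (that flag is an assumption, not a built-in property); it only uses getItem, context.target and the cancelling behavior of onBeforeDrop shown above:

webix.ui({
    view:"list",
    drag:true,
    on:{
        onBeforeDrop:function(context, e){
            // block dropping onto items flagged as locked
            var target = this.getItem(context.target);
            if (target && target.locked) return false; // cancels the drop
            return true;
        }
    }
    // ... the rest of the list config
});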
More about the possibilities of the drag-n-drop event system.
Drag-and-Drop Events - how to use the drag-n-drop event system for custom drag-n-drop, e.g. copying items with drag-and-drop;
On-Page Drag-n-Drop (Advanced Level) - how to make any Webix view or HTML node on the page draggable and control every aspect of drag-n-drop; | https://docs.webix.com/desktop__dnd.html | 2021-07-23T19:47:26 | CC-MAIN-2021-31 | 1627046150000.59 | [] | docs.webix.com |
Introduction
A ticket represents a specific task that needs to be completed. The CRM module provides a platform to manage and track these tasks electronically.
Example:
Clients can send an request to assist with a new application, to your support email address which is converted into a ticket. The ticket is then assigned to a specific Ticket Group where it can be actioned by any of the agents in the group.
How to create a new Ticket
Click on the CRM Contacts icon and select menu item Tickets
Click on the New Ticket button
Complete the required fields and click on Save
Description of each field: | https://help.i-docs.co.za/support/solutions/articles/11000101603-how-to-create-a-ticket | 2021-07-23T18:05:17 | CC-MAIN-2021-31 | 1627046150000.59 | [array(['https://s3.amazonaws.com/cdn.freshdesk.com/data/helpdesk/attachments/production/11069255405/original/1gllFncHrwhFO8aE3nrKEZrLrS9pG_NBYQ.png?1613729387',
None], dtype=object)
array(['https://s3.amazonaws.com/cdn.freshdesk.com/data/helpdesk/attachments/production/11069255481/original/SEXIIq8ojuFur4zLajVU9IHOpM6dK6ih3w.png?1613729493',
None], dtype=object)
array(['https://s3.amazonaws.com/cdn.freshdesk.com/data/helpdesk/attachments/production/11069255594/original/Vzb52rkBVAZQ4HUZLEHvSXlu4y8xNr-3Yw.png?1613729690',
None], dtype=object) ] | help.i-docs.co.za |
A useful feature is available to classify topics. Classes are not the default topic status automatically provided by Groups Forums.
Administrators can manage topic Classes. To create a new Class go to Topics > Classes.
In the above example 3 classes are added: To do, Doing and Done. These are helpful labels for a support working system.
Moderators and Assignees can use the defined classes.
Topic Moderators are allowed to create and use topic classes:
- edit_topic_classes
- assign_topic_classes
Topic Assignees are allowed to use topic classes:
- assign_topic_classes
| http://docs.itthinx.com/document/groups-forums/classes/ | 2018-09-18T17:24:45 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['http://docs.itthinx.com/wp-content/uploads/2015/04/Screen-Shot-2015-04-01-at-17.35.57.png',
'Screen Shot 2015-04-01 at 17.35.57'], dtype=object)
array(['http://docs.itthinx.com/wp-content/uploads/2015/04/Screen-Shot-2015-04-01-at-19.51.59.png',
'Screen Shot 2015-04-01 at 19.51.59'], dtype=object) ] | docs.itthinx.com |
Scanning¶
To start scanning press the Play button. Also the process can be stopped, paused and resumed.
During the scanning, the progress is shown in the bottom of the scene.
You can navigate in the 3D scene using the following shortcuts:
Upon completion of the scanning process, the object can be saved in File > Save model. The point cloud is saved in ply format.
| https://horus.readthedocs.io/en/release-0.2/source/getting-started/scanning.html | 2018-09-18T17:13:47 | CC-MAIN-2018-39 | 1537267155634.45 | [array(['../../_images/main-window.png', '../../_images/main-window.png'],
dtype=object)
array(['../../_images/scanning.png', '../../_images/scanning.png'],
dtype=object)
array(['../../_images/scan-finished.png',
'../../_images/scan-finished.png'], dtype=object)] | horus.readthedocs.io |
Newsletters are not being sent
This is most likely due to cron not running on your site.
Cron
Please use a tool like WP Crontrol to check if cron is running.
Unless you have an alternative to WordPress Cron (as is the case with WP Engine), you should not have the DISABLE_WP_CRON constant set to true in your wp-config.php.
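For reference, the constant is defined in wp-config.php like this (whether the line exists at all depends on the installation); WP-Cron stays enabled when it is absent or set to false:

<?php
// In wp-config.php — leave this line out entirely, or keep it set to false,
// so that WordPress Cron keeps firing scheduled events such as newsletter sending.
define( 'DISABLE_WP_CRON', false );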
WP Engine
If your site is running on WP Engine, you must ask their support team to enable Cron for you.
See WP-Cron and WordPress Scheduling – The WP Engine alternate cron is a true cron that runs every minute on the minute, checking for and activating scheduled events. This is not enabled on your server by default, so you will have to request to have it enabled via Support. Once it has been enabled, it will ensure that the events are run as scheduled. | http://docs.itthinx.com/document/groups-newsletters/troubleshooting/ | 2018-09-18T18:02:40 | CC-MAIN-2018-39 | 1537267155634.45 | [] | docs.itthinx.com |
Node.ACS Guides Overview
Node.ACS Quickstart
Getting started with Node.ACS.
Node.ACS MVC Framework
Using the Node.ACS MVC Framework.
Standard Node Applications
Using standard Node applications in Node.ACS.
Using ACS APIs from Node
Using ACS applications from your Node application.
Logging
Logging facilities in Node.ACS.
Node.ACS Organization Support
Describes Platform organization support for Node.ACS applications.
Correlating Native Applications with Node.ACS Services
Correlating Native Applications with Node.ACS Services
Command-Line Interface
List an Application's Access Log
Creating Node.ACS projects in Studio
Managing Standalone projects with Studio
Node.ACS Sample Code
Accessing a MongoDB Database from Node.ACS
Node.ACS Release Notes
Changes in the latest releases.
Troubleshooting
Troubleshooting deployment and runtime exceptions | http://docs.appcelerator.com/cloud/latest/?_escaped_fragment_=/guide/node | 2015-08-28T00:09:14 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.appcelerator.com |
Information for "Making a Language Pack for Joomla" Basic information Display titleJ2.5:Making a Language Pack for Joomla Default sort keyMaking a Language Pack for Joomla Page length (in bytes)19,079 Page ID105orDextercowley (Talk | contribs) Date of page creation18:12, 17 August 2010 Latest editorImanickam (Talk | contribs) Date of latest edit02:41, 7 November 2013 Total number of edits78 Total number of distinct authors12 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Page properties Transcluded templates (3)Templates used on this page: Template:JVer (view source) (semi-protected)Template:RightTOC (view source) Template:Version/tutor (view source) Retrieved from ‘’ | https://docs.joomla.org/index.php?title=J2.5:Making_a_Language_Pack_for_Joomla&action=info | 2015-08-28T00:37:06 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Revision history of "JHtmlSliders::end::end/1.6 (content was: "__NOTOC__ =={{JVer|1.6}} JHtmlSliders::end== ===Description=== Close the current pane. {{Description:JHtmlSliders::end}} <span class="editsection" style="font-s..." (and the only contributor was "Doxiki2")) | https://docs.joomla.org/index.php?title=JHtmlSliders::end/1.6&action=history | 2015-08-28T01:54:01 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Local wiki extensions
From Joomla! Documentation
Local wiki templates • Local wiki extensions • Local interwiki links
Extensions are additions to the MediaWiki code that perform special functions.
Additional Parser Functions
The ParserFunctions[2] extension has been added. It. bogous URL and find every article that contains the external link: Special:Linksearch
- ↑ Further information can be found here.
- ↑ Full documentation for these functions at
- ↑ don't take a reference literally: it may well be any URL other than Wikipedia or Webster | https://docs.joomla.org/index.php?title=JDOC:Local_wiki_extensions&oldid=3272 | 2015-08-28T00:57:43 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
::getAddKeySQL/11.1 to API17:JDatabaseImporterMySQL::getAddKeySQL without leaving a redirect (Robot: Moved page)
- 20:29, 27 April 2011 Doxiki2 (Talk | contribs) automatically marked revision 56270 of page JDatabaseImporterMySQL::getAddKeySQL/11.1 patrolled | https://docs.joomla.org/index.php?title=Special:Log&page=JDatabaseImporterMySQL%3A%3AgetAddKeySQL%2F11.1 | 2015-08-28T00:34:16 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Difference between revisions of "Extension Installer/Triggers"
From Joomla! Documentation
< Extension Installer
Revision as of 08:15, 27 February 2009
With the release of Joomla! 1.6, the extension installer now provides triggers for the 'installer' group of plugins to be notified of various events occuring from the result of an extension installation, update or uninstall.
These triggers are: | https://docs.joomla.org/index.php?title=Extension_Installer/Triggers&diff=13307&oldid=10858 | 2015-08-28T00:57:45 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Platform
From Joomla! Documentation
Revision as of 13:33, 29 August 2012 by Tom Hutchison (Talk | contribs)
You are viewing a list of pages and/or additional subcategories under the category called Platform. More information on the Joomla! Platform can be found on the Joomla! Platform Portal page.
Subcategories
This category has only the following subcategory.
S
- [×] Subpackages (2 P)
Pages in category ‘Platform’
The following 9 pages are in this category, out of 9 total. | https://docs.joomla.org/index.php?title=Category:Platform&direction=prev&oldid=72336 | 2015-08-28T01:32:21 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Revision history of "Upgrading and Migrating Joomla"
View logs for this page
There is no edit history for this page.
This page has been deleted. The deletion and move log for the page are provided below for reference.
- 10:09, 29 April 2013 Tom Hutchison (Talk | contribs) deleted page Upgrading and Migrating Joomla (page was incomplete and contained many red links and links for upgrading from unsupported to unsupported versions) | https://docs.joomla.org/index.php?title=Upgrading_and_Migrating_Joomla&action=history | 2015-08-28T00:58:13 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
JTree is a class that allows you to create and walk through object-trees. Used in conjunction with the JNode class.
Contents
Defined in
libraries/joomla/base/tree.php
Methods
Importing
jimport( 'joomla.base.tree' );
Examples
Tree Structures with JTree and JNote
Batch1211 19:52, 22 March 2010 (EDT) Edit comment | https://docs.joomla.org/index.php?title=API16:JTree&oldid=101496 | 2015-08-28T01:45:55 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
JInstallerPackage::uninstall
Description
Custom uninstall method.
public function uninstall ($id)
- Returns boolean True on success
- Defined on line 199 of libraries/joomla/installer/adapters/package.php
- Since
See also
JInstallerPackage::uninstall source code on BitBucket
Class JInstallerPackage
Subpackage Installer
- Other versions of JInstallerPackage::uninstall
User contributed notes
Information for "JURI/base" Basic information Display titleJURI/base Default sort keyJURI/base Page length (in bytes)1,806 Page ID2837 creatorChris Davenport (Talk | contribs) Date of page creation05:25, 17 September 2008 Latest editorMATsxm (Talk | contribs) Date of latest edit08:31, 21 August 2015 Total number of edits14 Total number of distinct authors8 Recent number of edits (within past 30 days)1 Recent number of distinct authors1 Page properties Magic word (1)__NOTOC__ Retrieved from ‘’ | https://docs.joomla.org/index.php?title=JURI/base&action=info | 2015-08-28T00:26:18 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Information for "JDatabase/insertObject" Basic information Display titleJDatabase/insertObject Default sort keyJDatabase/insertObject Page length (in bytes)525 Page ID11outerdt (Talk | contribs) Date of page creation18:02, 20 January 2011 Latest editorWouterdt (Talk | contribs) Date of latest edit18:04, 20 January 2011 Total number of edits4 Total number of distinct authors1 Recent number of edits (within past 30 days)0 Recent number of distinct authors0 Retrieved from ‘’ | https://docs.joomla.org/index.php?title=JDatabase/insertObject&action=info | 2015-08-28T00:53:12 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Revision history of "JLoggerSysLog:: construct"LoggerSysLog:: construct (cleaning up content namespace and removing duplicated API references) | https://docs.joomla.org/index.php?title=JLoggerSysLog::_construct&action=history | 2015-08-28T00:47:29 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
All public logs
Combined display of all available logs of Joomla! Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive).
- 19:34, 28 February 2011 Chris Davenport (Talk | contribs) marked revision 37643 of page Talk:How do you assign a module to specific pages? patrolled | https://docs.joomla.org/index.php?title=Special:Log&page=Talk%3AHow+do+you+assign+a+module+to+specific+pages%3F | 2015-08-28T01:10:02 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Module Development
From Joomla! Documentation
Revision as of 11:48, 30 August 2012 by Tom Hutchison (Talk | contribs)
- How do you list your extension in the extensions site?
Tutorials
List of all articles belonging to the categories "Tutorials" AND "Module Development" | https://docs.joomla.org/index.php?title=Portal:Module_Development&oldid=72889 | 2015-08-28T00:18:59 | CC-MAIN-2015-35 | 1440644060103.8 | [] | docs.joomla.org |
Transform Orientations¶
Reference
- Mode
Object and Edit Modes
- Panel
- Hotkey
Comma
Transform Orientations affect the behavior of Transformations: Location, Rotation, and Scale.
The Transform Orientations panel can be used to manage transform orientations: selecting the active orientation, adding ("+" icon), deleting ("X" icon) and renaming custom orientations.
The default name for these orientations comes from whatever you have selected: if an edge, it will be titled "Edge", and so on for other selection types.
- Name: the name of the custom orientation.
Align to Transform Orientation¶
Reference
- Mode
Object and Edit Modes
Aligns (rotates) the selected objects so that their local orientation matches the active transform orientation in the Transform orientation panel or the Orientation selection in the Transform Adjust Last Operation panels. | https://docs.blender.org/manual/ja/2.82/scene_layout/object/editing/transform/control/orientations.html | 2022-06-25T04:42:58 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['../../../../../_images/scene-layout_object_editing_transform_control_orientations_custom-name.png',
'../../../../../_images/scene-layout_object_editing_transform_control_orientations_custom-name.png'],
dtype=object) ] | docs.blender.org |