content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Custom Webhook Integration allows Saber to make a POST request to any URL.
POST URL
Required
This URL to which you’d like Saber to send a POST request to for each new feedback report.
HTTP Username and HTTP Password
Optional
If the URL specified above is behind HTTP Basic Authentication, enter a username and password in these fields.
Extra fields
Optional
Here you can specify some extra fields to be included in the POST request. Each field should be on a new line, in the format name=value. A common use for this is to specify an API Key for the target URL, eg:
api_key=acb123
All integrations include a test button, which allow you to check the options you’ve specified are working correctly. Clicking the Test Settings button will send a sample feedback report using the data from the form to your Custom Webhook URL.
Clicking test settings does not save the integration, you will still need to click the Save once you are happy with settings. | https://docs.bugmuncher.com/integrations/webhook/ | 2019-04-18T15:17:39 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.bugmuncher.com |
Option Lists: editing
Form Management
- Form Types
- Adding Internal Forms
-
-
- Disabling forms
- Deleting forms
- Form Permissions (who sees what?)
-
-
The Smart Fill feature works by entering the field name (the name="" attribute of the form field) and the URL of the page where it's stored. Then, the script attempts to parse the page and extract the list of field options.
Generally speaking, if you supplied the correct form field name, this should work fine. But in case it cannot locate the field, you might want to try "hacking it" by copying and pasting the field into a blank webpage and linking that that instead. This can at least help identify where the problem lies.
Grouped Options
Form Tools 2.1.x introduced a new option where you can group similar options, e.g. you could have a single Option List split into two groups: "Canadian Provinces" and "US States".
How the grouped Option Lists get displayed depends on the field type it's being used in. For example, a dropdown list would just group them visually in optgroups; radios and checkboxes display them with a heading above each group.
Check out this page for some handy tips about how the user interface works for grouping your options. | https://docs.formtools.org/userdoc/form_management/option_lists_editing/ | 2019-04-18T14:47:05 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.formtools.org |
Software Deployment¶
Software deployment on DGX-2 is based on containers. NVIDIA provides a wide range of prepared Docker containers with a variety of different software. A user can easily download these containers and use them directly on the DGX-2.
The catalog of all container images can be found on NVIDIA site. Supported software includes:
- TensorFlow
- MATLAB
- GROMACS
- Theano
- Caffe2
- LAMMPS
- ParaView
- ...
Running Containers on DGX-2¶
NVIDIA expects usage of Docker as a containerization tool, but Docker is not a suitable solution in a multiuser environment. For this reason, the Singularity container solution is used.
Singularity can be used very similar to Docker, the only change is a rewrite of an image URL address. For example, original command for Docker
docker run -it nvcr.io/nvidia/theano:18.08 should be rewritten to
singularity shell docker://nvcr.io/nvidia/theano:18.08. More about Singularity here.
Info
The
--nv Singularity switch is used by default on DGX-2.
For fast container deployment, all images are cached after first use in lscratch directory. This behavior can be changed by SINGULARITY_CACHEDIR environment variable, but the start time of container will increase significantly. | https://docs.it4i.cz/dgx2/software/ | 2019-04-18T15:43:13 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.it4i.cz |
Though we don't offer a direct mp3 export, you can download the mp4 and then convert the file to an mp3 via a third party application.
Pro tip for iOS users: The latest macOS has a secret feature in quicktime that allows you to export to audio only with one click!
| http://docs.crowdcast.io/features-and-tools/can-i-download-only-the-audio-portion-of-my-event | 2019-04-18T14:51:27 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://downloads.intercomcdn.com/i/o/86477265/68579c91821b17a9bd567fcb/46520594_10213015986214388_3388442434462547968_o.jpg',
None], dtype=object) ] | docs.crowdcast.io |
Live tests¶
Live tests are used to validate configurations built by ARouteServer and to test compliance between expected and real results.
A mix of Python unittest and Docker (and KVM too for OpenBGPD tests) allows to create scenarios where some instances of BGP speakers (the clients) connect to a route server whose configuration has been generated using this tool.
Some built-in tests are included within the project and have been used during the development of the tool; new custom scenarios can be easily built by users and IXP managers to test their own policies.
Example: in a configuration where blackhole filtering is enabled, an instance of a route server client (AS1) is used to announce some tagged prefixes (203.0.113.1/32) and the instances representing other clients (AS2, AS3) are queried to ensure they receive those prefixes with the expected blackhole NEXT_HOP (192.0.2.66).
def test_071_blackholed_prefixes_as_seen_by_enabled_clients(self): for inst in (self.AS2, self.AS3): self.receive_route(inst, "203.0.113.1/32", self.rs, next_hop="192.0.2.66", std_comms=["65535:666"], lrg_comms=[])
Travis CI log file contains the latest built-in live tests results. Since (AFAIK) OpenBGPD can’t be run on Travis CI platform, the full live tests results including those run on OpenBGPD can be found on this file.
Setting up the environment to run live tests¶
- To run live tests, Docker must be present on the system. Some info about its installation can be found on the External programs installation section.
- In order to have instances of the route server and its clients to connect each other, a common network must be used. Live tests are expected to be run on a Docker bridge network with name
arouteserverand subnet
192.0.2.0/24/
2001:db8:1:1::/64. The following command can be used to create this network:
docker network create --ipv6 --subnet=192.0.2.0/24 --subnet=2001:db8:1:1::/64 arouteserver
Route server client instances used in live tests are based on BIRD 1.6.4, as well as the BIRD-based version of the route server used in built-in live tests; the
pierky/bird:1.6.4image is expected to be found on the local Docker repository. Build the Docker image (or pull it from Dockerhub):
# build the image using the Dockerfile # from mkdir ~/dockerfiles cd ~/dockerfiles curl -o Dockerfile.bird -L docker build -t pierky/bird:1.6.4 -f Dockerfile.bird . # or pull it from Dockerhub docker pull pierky/bird:1.6.4
If there is no plan to run tests on the OpenBGPD-based version of the route server, no further settings are needed. To run tests on the OpenBGPD-based version too, the following steps must be done as well.
OpenBGPD live-tests environment¶
To run an instance of OpenBGPD, KVM is needed. Some info about its installation can be found on the External programs installation section.
Setup and install a KVM virtual-machine running one of the supported versions of OpenBSD. This VM will be started and stopped many times during tests: don’t use a production VM.
- By default, the VM name must be
arouteserver_openbgpd60or
arouteserver_openbgpd61or
arouteserver_openbgpd62; this can be changed by setting the
VIRSH_DOMAINNAMEenvironment variable before running the tests.
- The VM must be connected to the same Docker network created above: the commands
ip link showand
ifconfigcan be used to determine the local network name needed when creating the VM:
$ ifconfig br-2d2956ce4b64 Link encap:Ethernet HWaddr 02:42:57:82:bc:91 inet addr:192.0.2.1 Bcast:0.0.0.0 Mask:255.255.255.0 inet6 addr: fe80::42:57ff:fe82:bc91/64 Scope:Link inet6 addr: 2001:db8:1:1::1/64 Scope:Global inet6 addr: fe80::1/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 ...
- In order to run built-in live test scenarios, the VM must be reachable at 192.0.2.2/24 and 2001:db8:1:1::2/64.
On the following example, the virtual disk will be stored in ~/vms, the VM will be reachable by connecting to any IP address of the host via VNC, the installation disk image is expected to be found in the install60.iso file and the network name used is br-2d2956ce4b64:
sudo virsh pool-define-as --name vms_pool --type dir --target ~/vms sudo virsh pool-start vms_pool sudo virt-install \ -n arouteserver_openbgpd60 \ -r 512 \ --vcpus=1 \ --os-variant=openbsd4 \ --accelerate \ -v -c install60.iso \ -w bridge:br-2d2956ce4b64 \ --graphics vnc,listen=0.0.0.0 \ --disk path=~/vms/arouteserver_openbgpd.qcow2,size=5,format=qcow2
Finally, add the current user to the libvirtd group to allow management of the VM:
sudo adduser `id -un` libvirtd
To interact with this VM, the live tests framework will use SSH; by default, the connection will be established using the
rootusername and the local key file
~/.ssh/arouteserver, so the VM must be configured to accept SSH connections using SSH keys:
mkdir /root/.ssh cat << EOF > .ssh/authorized_keys ssh-rsa [public_key_here] arouteserver EOF
The
StrictHostKeyCheckingoption is disabled via command line argument in order to allow to connect to multiple different VMs with the same IP address.
The SSH username and key file path can be changed by setting the
SSH_USERNAMEand
SSH_KEY_PATHenvironment variables before running the tests.
Be sure that the
bgpddaemon will startup automatically at boot and that the
bgpctltool can be executed correctly on the OpenBSD VM:
echo "bgpd_flags=" >> /etc/rc.conf.local chmod 0555 /var/www/bin/bgpctl
How to run built-in live tests¶
To run built-in live tests, the full repository must be cloned locally and the environment must be configured as reported above.
To test both the BIRD- and OpenBGPD-based route servers, run the Python unittest using
nose:
# from within the repository's root nosetests -vs tests/live_tests/
How it works¶
Each directory in
tests/live_tests/scenarios represents a scenario: the route server configuration is stored in the usual
general.yml and
clients.yml files, while other BGP speaker instances (route server clients and their peers) are configured through the
ASxxx.j2 files.
These files are Jinja2 templates and are expanded by the Python code at runtime. Containers’ configuration files are saved in the local
var directory and are used to mount the BGP speaker configuration file (currenly,
/etc/bird/bird.conf for BIRD and
/etc/bgpd.conf for OpenBGPD).
The unittest code sets up a Docker network (with name
arouteserver) used to attach instances and finally brings instances up. Regular Python unittest tests are then performed and can be used to match expectations to real results.
Details about the code behind the live tests can be found in the Live tests code documentation section.
Built-in scenarios¶
Some notes about the built-in scenarios that are provided with the program follow.
How to build custom scenarios¶
A live test scenario skeleton is provided in the
pierky/arouteserver/tests/live_tests/skeleton directory.
It seems to be a complex thing but actually most of the work is already done in the underlying Python classes and prepared in the skeleton.
To configure the route server and its clients, please consider that the Docker network used by the framework is on 192.0.2.0/24 and 2001:db8:1:1::/64 subnets.
Initialize the new scenario into a new directory:
- using the
init-scenariocommand:
arouteserver init-scenario ~/ars_scenarios/myscenario
- manually, by cloning the provided skeleton directory:
mkdir -p ~/ars_scenarios/myscenario cp pierky/arouteserver/tests/live_tests/skeleton/* ~/ars_scenarios/myscenario
Document the scenario, for example in the
README.rstfile: write down which BGP speakers are involved, how they are configured, which prefixes they announce and what the expected result should be with regards of the route server’s configuration and its policies.
Put the
general.yml,
clients.ymland
bogons.ymlconfiguration files you want to test in the new directory.
Configure your scenario and write your test functions in the
base.pyfile.
Declare the BGP speakers you want to use in the
_setup_rs_instance()and
_setup_instances()methods of the base class.
- classmethod
SkeletonScenario.
_setup_instances()
Declare the BGP speaker instances that are used in this scenario.
The
cls.INSTANCESattribute is a list of all the instances that are used in this scenario. It is used to render local Jinja2 templates and to transform them into real BGP speaker configuration files.
The
cls.RS_INSTANCE_CLASSand
cls.CLIENT_INSTANCE_CLASSattributes are set by the derived classes (test_XXX.py) and represent the route server class and the other BGP speakers class respectively.
The first argument is the instance name.
The second argument is the IP address that is used to run the instance. Here, the
cls.DATAdictionary is used to lookup the real IP address to use, which is configured in the derived classes (test_XXX.py).
The third argument is a list of files that are mounted from the local host (where Docker is running) to the container (the BGP speaker). The list is made of pairs in the form
(local_file, container_file). The
cls.build_rs_cfgand
cls.build_other_cfghelper functions allow to render Jinja2 templates and to obtain the path of the local output files.
For the route server, the configuration is built using ARouteServer’s library on the basis of the options given in the YAML files.
For the other BGP speakers, the configuration must be provided in the Jinja2 files within the scenario directory.
Example:
@classmethod def _setup_instances(cls): cls.INSTANCES = [ cls._setup_rs_instance(), cls.CLIENT_INSTANCE_CLASS( "AS1", cls.DATA["AS1_IPAddress"], [ ( cls.build_other_cfg("AS1.j2"), "/etc/bird/bird.conf" ) ] ), ... ]
To ease writing the test functions, set instances names in the
set_instance_variables()method.
SkeletonScenario.
set_instance_variables()
Simply set local attributes for an easier usage later
The argument of
self._get_instance_by_name()must be one of the instance names used in
_setup_instances().
Example:
def set_instance_variables(self): self.AS1 = self._get_instance_by_name("AS1") self.AS2 = self._get_instance_by_name("AS2") self.rs = self._get_instance_by_name("rs")
Write test functions to verify that scenario’s expectations are met.
Some helper functions can be used:
LiveScenario.
session_is_up(inst_a, inst_b)
Test if a BGP session between the two instances is up.
If a BGP session between the two instances is not up, the
TestCase.fail()method is called and the test fails.
Example:
def test_020_sessions_up(self): """{}: sessions are up""" self.session_is_up(self.rs, self.AS1) self.session_is_up(self.rs, self.AS2)
LiveScenario.
receive_route(inst, prefix, other_inst=None, as_path=None, next_hop=None, std_comms=None, lrg_comms=None, ext_comms=None, local_pref=None, filtered=None, only_best=None, reject_reason=None)
Test if the BGP speaker receives the expected route(s).
If no routes matching the given criteria are found, the
TestCase.fail()method is called and the test fails.
Example:
def test_030_rs_receives_AS2_prefix(self): """{}: rs receives AS2 prefix""" self.receive_route(self.rs, self.DATA["AS2_prefix1"], other_inst=self.AS2, as_path="2")
LiveScenario.
log_contains(inst, msg, instances={})
Test if the BGP speaker’s log contains the expected message.
This only works for BGP speaker instances that support message logging: currently only BIRD.
If no log entries are found, the
TestCase.fail()method is called and the test fails.
Example
Given self.rs the instance of the route server, and self.AS1 the instance of one of its clients, the following code expands the “{AS1}” macro using the BGP speaker specific name for the instance self.AS1 and then looks for it within the route server’s log:
self.log_contains(self.rs, "{AS1} bad ASN", {"AS1": self.AS1})
On BIRD, “{AS1}” will be expanded using the “protocol name” that BIRD uses to identify the BGP session with AS1.
Example:
def test_030_rs_rejects_bogon(self): """{}: rs rejects bogon prefix""" self.log_contains(self.rs, "prefix is bogon - REJECTING {}".format( self.DATA["AS2_bogon1"])) self.receive_route(self.rs, self.DATA["AS2_bogon1"], other_inst=self.AS2, as_path="2", filtered=True) # AS1 should not receive the bogon prefix from the route server with six.assertRaisesRegex(self, AssertionError, "Routes not found"): self.receive_route(self.AS1, self.DATA["AS2_bogon1"])
Edit IP version specific and BGP speaker specific classes within the
test_XXX.pyfiles and set the prefix ID / real IP addresses mapping schema.
- class
pierky.arouteserver.tests.live_tests.skeleton.test_bird4.
SkeletonScenario_BIRDIPv4(methodName='runTest')
BGP speaker specific and IP version specific derived class.
This class inherits all the test functions from the base class. Here, only IP version specific attributes are set, such as the prefix IDs / real IP prefixes mapping schema.
The prefix IDs reported within the
DATAdictionary must be used in the parent class’ test functions to reference the real IP addresses/prefixes used in the scenario. Also the other BGP speakers’ configuration templates must use these IDs. For an example plase see the “AS2.j2” file.
The
SHORT_DESCRattribute can be set with a brief description of this scenario.
Example:
class SkeletonScenario_BIRDIPv4(SkeletonScenario): # Leave this to True in order to allow nose to use this class # to run tests. __test__ = True SHORT_DESCR = "Live test, BIRD, skeleton, IPv4" CONFIG_BUILDER_CLASS = BIRDConfigBuilder RS_INSTANCE_CLASS = BIRDInstanceIPv4 CLIENT_INSTANCE_CLASS = BIRDInstanceIPv4 IP_VER = 4 DATA = { "rs_IPAddress": "99.0.2.2", "AS1_IPAddress": "99.0.2.11", "AS2_IPAddress": "99.0.2.22", "AS2_prefix1": "2.0.1.0/24", "AS2_bogon1": "192.168.2.0/24" }
Edit (or add) the template files that, once rendered, will produce the configuration files for the other BGP speakers (route server clients) that are involved in the scenario (the skeleton includes two template files,
AS1.j2and
AS2.j2).
Example:
router id 192.0.2.22; # This is the path where Python classes look for # to search BIRD's log files. log "/var/log/bird.log" all; log syslog all; debug protocols all; protocol device { } # Prefixes announced by this BGP speaker to the route server. # # The Jinja2 'data' variable refers to the class 'DATA' attribute. # # IP prefixes are not configured directly here, only a reference # to their ID is given in order to maintain a single configuration # file that can be used for both the IPv4 and the IPv6 versions # of the scenario. protocol static own_prefixes { route {{ data.AS2_prefix1 }} reject; route {{ data.AS2_bogon1 }} reject; } protocol bgp the_rs { local as 2; neighbor {{ data.rs_IPAddress }} as 999; import all; export all; connect delay time 1; connect retry time 1; }
Run the tests using
nose:
nosetests -vs ~/ars_scenarios/myscenario
Details about the code behind the live tests can be found in the Live tests code documentation section.
Debugging live tests scenarios¶
To debug custom scenarios some utilities are provided:
the
REUSE_INSTANCESenvironment variable can be set when executing nose to avoid Docker instances to be torn down at the end of a run. When this environment variable is set, BGP speaker instances are started only the first time tests are executed, then are left up and running to allow debugging. When tests are executed again, the BGP speakers’ configuration is rebuilt and reloaded. Be careful: this mode can be used only when running tests of the same scenario, otherwise Bad Things (tm) may happen.
Example:
REUSE_INSTANCES=1 nosetests -vs tests/live_tests/scenarios/global/test_bird4.py
once the BGP speaker instances are up (using the
REUSE_INSTANCESenvironment variable seen above), they can be queried using standard Docker commands:
$ # list all the running Docker instances $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 142f88379428 pierky/bird:1.6.3 "bird -c /etc/bird..." 18 minutes ago Up 18 minutes 179/tcp ars_AS101 26a9ec58dcf1 pierky/bird:1.6.3 "bird -c /etc/bird..." 18 minutes ago Up 18 minutes 179/tcp ars_AS2 $ # run 'birdcl show route' on ars_AS101 $ docker exec -it 142f88379428 birdcl show route
Some utilities are provided whitin the
/utilsdirectory to ease these tasks:
# execute the 'show route' command on the route server BIRD Docker instance ./utils/birdcl rs show route # print the log of the route server ./utils/run rs cat /var/log/bird.log
The first argument (“rs” in the examples above) is the name of the instance as set in the
_setup_instances()method.
the
BUILD_ONLYenvironment variable can be set to skip all the tests and only build the involved BGP speakers’ configurations. Docker instances are not started in this mode.
Example:
BUILD_ONLY=1 nosetests -vs tests/live_tests/scenarios/global/test_bird4.py | https://arouteserver.readthedocs.io/en/latest/LIVETESTS.html | 2019-04-18T15:27:43 | CC-MAIN-2019-18 | 1555578517682.16 | [] | arouteserver.readthedocs.io |
Overlays Administration¶
To simplify deployment of overlay networking as much as possible, there is an
esdc-overlay command that will automate for you most of the required operations. Moreover, it permanently stores the data in Danube Cloud‘s configuration database so the compute node configuration can be easily re-applied at any time.
To apply all configuration to all/selected compute nodes,
esdc-overlay runs an Ansible playbook that does all the hard work.
Note
The
esdc-overlay command should be run in the shell on the first compute node (the one that hosts the mgmt01 virtual server).
See also
You need to enable overlays prior to administering them. See How to enable overlays in Danube Cloud.
See also
To better understand to concepts of overlay networking in Danube Cloud have a look at the general overlay documentation.
- What
esdc-overlaycan do:
- Create/modify overlay rules.
- Create/modify firewall configuration.
- List configured overlay rules.
- Create an adminoverlay, create appropriate vNICs, assign adminoverlay IP/MAC addresses.
- Create and apply IPSec configuration.
- Restart appropriate system services if needed.
- Check overlay requirements.
Table of Contents
Creating an overlay rule¶
Usage of the
create subcommand:
esdc-overlay create <overlay_rule_name> [node_list|all] [raw_overlay_rule_string]
Creating an overlay rule on all compute nodes in the Danube Cloud installation can be as simple as running:
esdc-overlay create mynewoverlay
More complex usage can define subset of compute nodes that will host the new overlay rule:
esdc-overlay create localoverlay node01.local,node02.local,node03.local
Or you can create a completely custom overlay rule (all nodes, port 4790, MTU 1300):
esdc-overlay create customers all "-e vxlan -p vxlan/listen_ip=0.0.0.0,vxlan/listen_port=4790 -s files -p files/config=/opt/custom/networking/customers_overlay.json -p mtu=1300"
- Notes:
- If you provide a node list to
esdc-overlay(anything other than
all), only these compute nodes will be touched by Ansible.
- In the node list, you can provide also non-existent node names. This way you can configure future nodes in advance.
Updating an overlay rule¶
Usage of the
update subcommand:
esdc-overlay update [overlay_rule_name] [node_list|all] [raw_overlay_rule_string]
update has the same parameters as
create. It can alter any overlay rule parameters and the change is immediately pushed to all or selected compute nodes.
The
update subcommand can be also run without any parameters. In this mode it will (re)apply the configuration for all overlay rules on all compute nodes. It is very useful either to verify that the configuration is correct or to configure overlays on a newly added compute nodes.
Note
After adding a new compute node, just run the
esdc-overlay update command. It will fully configure overlay networking on the new compute node(s).
Modify a list of nodes that the specified overlay should be configured on:
esdc-overlay update localoverlay node03.local,node04.local,node04.local
Re-apply configuration for myrule overlay rule (Ansible will touch only nodes that the myrule should be on - it will retrieve the actual node list from the configuration database):
esdc-overlay update myrule
Delete an overlay rule¶
Usage of the
delete subcommand:
esdc-overlay delete <overlay_rule_name>
The overlay rule will be first deleted on all compute nodes and then (if successful) removed from the configuration database.
List all configured overlay rules¶
Usage of the
list subcommand:
Create adminoverlay¶
Usage of the
adminoverlay-init subcommand + example:
esdc-overlay adminoverlay-init <adminoverlay_subnet/netmask> [nodename1=ip1,nodename=ip2,...] esdc-overlay adminoverlay-init 10.10.10.0/255.255.255.0 node01.local=10.10.10.11,node02.local=10.10.10.12
This subcommand does the following operations:
- Validate the specified IP subnet.
- Create the adminoverlay overlay rule.
- Generate/assign IP addresses for vNICs on all compute nodes.
- Generate static MAC addresses for vNICs.
- Set the vnc_listen_address if needed.
- Write the configuration into
/usbkey/configon all compute nodes.
- Reload the
network/virtualsystem service to apply new overlay configuration.
- Add ipfilter rules to drop unencrypted VXLAN packets to/from internet.
- Reload the
network/ipfilterservice.
Parameters:
-
adminoverlay_subnet/netmask- a network subnet with a netmask that will be used for the adminoverlay vNICs. The network is more or less equivalent to the admin network (but the admin network is still needed).
-
nodename1=ip1,...- if you want to set specific IP addresses for some/all compute nodes, you can do it here. Unspecified nodes will have an IP address assigned automatically. All IP addresses must be from the
adminoverlay_subnet.
Modify adminoverlay¶
Usage of the
adminoverlay-update subcommand:
esdc-overlay adminoverlay-update [nodename1=ip1,nodename=ip2,...]
This subcommand can modify assigned IP addresses. It will (as all commands except
*-list) immediately run Ansible to apply the configuration.
List adminoverlay info¶
Usage of the
adminoverlay-list subcommand:
esdc-overlay adminoverlay-list[root@node01 ~] esdc-overlay adminoverlay-list Adminoverlay subnet: 10.10.10.0 Adminoverlay netmask: 255.255.255.0 Adminoverlay vxlan_id: 2 Adminoverlay vlan_id: 2 IP MAC NODE 10.10.10.11 00:e5:dc:dc:26:c3 node01.local 10.10.10.12 00:e5:dc:0f:c0:25 node02.local 10.10.10.13 00:e5:dc:0f:c0:42 node03.local
Enable firewall on all compute nodes¶
Usage of the
globally-enable-firewall subcommand + example:
esdc-overlay globally-enable-firewall [allowed_IP_list] esdc-overlay globally-enable-firewall admin_IP1,allowed_IP2,good_subnet/24 esdc-overlay globally-enable-firewall 12.13.14.0/26,100.150.200.128/25,1.2.3.4
By default, running
esdc-overlay with
create or
update subcommands will create firewall rules that prevent sending unencrypted overlay packets over the external0 interface.
The
globally-enable-firewall subcommand will further configure ipfilter on external0 interfaces of all compute nodes to whitelist mode. That means that it will permit connections only from allowed destinations. Note that network interfaces other that external0 will NOT be affected by this change. Virtual servers are also not affected by this operation. This is solely supposed to protect the hypervisors from internet threats.
Allowed destinations are:
- all compute nodes
- sources specified in
allowed_IP_list
This subcommand can be used to update the
allowed_IP_list after the firewall has been enabled.
The subcommand requires confirmation before applying changes on compute nodes. Running the subcommand without parameters can be used to review the actual firewall configuration.
Disable firewall on all compute nodes¶
Usage of the
globally-disable-firewall subcommand:
esdc-overlay globally-disable-firewall
This subcommand will revert the effect of
globally-enable-firewall on all compute nodes. All nodes are switched to blacklist ipfilter mode (allow all except explicitly forbidden).
The ipfilter itself is still active and you can add your own custom rules manually to any compute node by creating/editing a file in
/opt/custom/etc/ipf.d/ directory and running
/opt/custom/etc/rc-pre-network.d/010-ipf-restore.sh refresh. | https://docs.danubecloud.org/user-guide/network/esdc-overlay-cmd.html | 2019-04-18T14:35:21 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.danubecloud.org |
Lesson plans - the ned show
all for ki, inc. producers of the e show all rights reserved free lesson lans resources at resources accelerated reading quiz:
Teks based lesson plan - esc3
Teks based lesson plan north east independent school district subject: 5th grade reading weeks: first nine weeks group 3
ned has a tent - free stories and free
Ned has a tent a collection of stories for level - 21 by clark ness visit for more free stories and ebooks.
Non-food incentives and rewards in the classroom
Treasure chest 1st grade materials: erasers, pencils, bubble bottles, little race cars, rulers, etcetera to use for prizes. most prizes come from the 'dollar store'.
First grade level 36 stories - clarknes
A robber a collection of stories for level - 36 by clark ness visit for more free stories and ebooks. | https://www.docs-archive.net/Free-1st-Grade-Reading-Books.pdf | 2019-04-18T14:16:56 | CC-MAIN-2019-18 | 1555578517682.16 | [] | www.docs-archive.net |
Dashboard v.2
After logging in to ApiOmat, the dashboard opens and you can start building your backend.
The dashboard has a main menu on top where you can open each screen, beginning with the
Module Market : Select ready-to-use modules for your app as well as create and upload native modules
My Modules : Find tools to send push messages or export data if the corresponding module was previously added
Class Editor : Define own data models,
SDK : Download generated SDKs for your selected modules and your own models in your preferred language
Data : Take a look the existing data, copy, export, import and delete it
Admin: Give roles to other customers for your app
Each screen has a help icon in the ribbon bar where you can find more information about the screen.
The navigation menu on the left side will let you switch between your Apps and the available systems. If you want to switch to another app, simply click on the current apps name and select another one from the popup menu.
Depending on your plan, you have between one in the current screen.
Please be aware that each system has its own SDK, and those SDKs will only have access to the data and models deployed in the same system. | http://docs.apiomat.com/32/Dashboard-v.2.html | 2019-04-18T15:18:37 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['images/download/attachments/10717669/dashboard2_mm.png',
'images/download/attachments/10717669/dashboard2_mm.png'],
dtype=object)
array(['images/download/attachments/10717669/menu.png',
'images/download/attachments/10717669/menu.png'], dtype=object)
array(['images/download/attachments/10717669/backenedDialog.png',
'images/download/attachments/10717669/backenedDialog.png'],
dtype=object)
array(['images/download/attachments/10717669/ClassBrowser.png',
'images/download/attachments/10717669/ClassBrowser.png'],
dtype=object) ] | docs.apiomat.com |
Why I don’t like Word to write documentation
Don’t get me wrong – MS Word is a great product for many use cases, but most of them are IMHO outdated (like writing a letter) and writing modern documentation is not a use case where MS Word shines.
Today we live in the information age, but Word is page oriented – Word lets you write documents with pages, page numbers, page header and footer etc.
The table of contents lists the page numbers the content can be found on – linking to the content is not a first class feature.
When I write down the documentation for a project, I don’t want to create pages – I want to dump information. The format in which the information will be published is important, but it is even more important to have the freedom to change the format to – for example – HTML, ePub, Word, PDF and whatever may come in the future and what my stakeholders need.
Yes, Word can be saved as HTML and printed as PDF, but that’s not the same.
BTW: interesting about Word is that it implements many advanced features which seem to be hard to use.
Did you know that Word lets you…
compare two documents?
compose a document from subdocuments?
reference images instead of embedding them?
These are great features, but since we are not used to them, they are hard to handle.
Instead of comparing documents and checking the diff, we use templates with a change history table as part of the document. A proper diff together with version control would be more helpful.
Instead of composing documents from subdocuments, we copy and paste parts from document to document in order to create views for different stakeholders. A habit which would not be tolerated in software development :-)
When we use images in documents, they are embedded. So if we want to make a change, we first have to search for the source file (and hopefully find it), change it and embed it again. Storing only the reference to the image within the document would make this easier.
Unfortunately, composing documents and referencing images result in multifile documents which people are not used to and would break the "please mail me the doc" flow.
So what’s the solution? Stay tuned and I will explain in my next post why I prefer asciidoc over Word… | https://docs-as-co.de/news/why-no-word/ | 2019-04-18T15:35:57 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs-as-co.de |
Intel Debugger¶
IDB is no longer available since Intel Parallel Studio 2015
Debugging Serial Applications¶
The intel debugger version is available, via module intel/13.5.192. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use X display for running the GUI.
$ ml intel/13.5.192 $ ml Java $ idb
The debugger may run in text mode. To debug in text mode, use
$ idbc
To debug on the compute nodes, module intel must be loaded. The GUI on compute nodes may be accessed using the same way as in the GUI section.
Example:
$ qsub -q qexp -l select=1:ncpus=24 -X -I # use 16 threads for Anselm qsub: waiting for job 19654.srv11 to start qsub: job 19654.srv11 ready $ ml intel $ ml Java $ icc -O0 -g myprog.c -o myprog.x $ idb ./myprog.x
In this example, we allocate 1 full compute node, compile program myprog.c with debugging options -O0 -g and run the idb debugger interactively on the myprog.x executable. The GUI access is via X11 port forwarding provided by the PBS workload manager.
Debugging Parallel Applications¶
Intel debugger is capable of debugging multithreaded and MPI parallel programs as well.
Small Number of MPI Ranks¶
For debugging small number of MPI ranks, you may execute and debug each rank in separate xterm terminal (do not forget the X display. Using Intel MPI, this may be done in following way:
$ qsub -q qexp -l select=2:ncpus=24 -X -I qsub: waiting for job 19654.srv11 to start qsub: job 19655.srv11 ready $ ml intel $ mpirun -ppn 1 -hostfile $PBS_NODEFILE --enable-x xterm -e idbc ./mympiprog.x
In this example, we allocate 2 full compute node, run xterm on each node and start idb debugger in command line mode, debugging two ranks of mympiprog.x application. The xterm will pop up for each rank, with idb prompt ready. The example is not limited to use of Intel MPI
Large Number of MPI Ranks¶
Run the idb debugger from within the MPI debug option. This will cause the debugger to bind to all ranks and provide aggregated outputs across the ranks, pausing execution automatically just after startup. You may then set break points and step the execution manually. Using Intel MPI:
$ qsub -q qexp -l select=2:ncpus=24 -X -I qsub: waiting for job 19654.srv11 to start qsub: job 19655.srv11 ready $ ml intel $ mpirun -n 48 -idb ./mympiprog.x
Debugging Multithreaded Application¶
Run the idb debugger in GUI mode. The menu Parallel contains number of tools for debugging multiple threads. One of the most useful tools is the Serialize Execution tool, which serializes execution of concurrent threads for easy orientation and identification of concurrency related bugs.
Further Information¶
Exhaustive manual on IDB features and usage is published at Intel website. | https://docs.it4i.cz/software/intel/intel-suite/intel-debugger/ | 2019-04-18T15:43:09 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.it4i.cz |
Exchange Online Protection Service Description
Obtain information about features and requirements for Exchange Online Protection. Included is a list of plans that provide Exchange Online Protection as well as a comparison of features across those plans. Compare Office 365 for Business plans.
To buy Exchange Online Protection, see Exchange Online Protection.
You can export, save, and print pages in the Office 365 Service Descriptions. Learn how to export multiple pages.
Important
EOP replaces Forefront Online Protection for Exchange (FOPE). All FOPE customers will be transitioned to EOP. EOP delivers the protection and control provided by FOPE, and also includes additional features. For more information about transitioning from FOPE to EOP, go to the Forefront Online Protection for Exchange (FOPE) Transition Center.
What's new in Exchange Online Protection (EOP)
For information about new features in EOP, see What's New in Exchange Online Protection. For a feature comparison between FOPE and EOP, see FOPE vs. EOP Feature Comparison.
Exchange Online Protection (EOP) plans
EOP is available through the following subscription plans:
Exchange Enterprise CAL with Services features.
Note
New features for Exchange Enterprise CAL with Services are deployed at the same time as Exchange Online, not EOP standalone. Be advised that the deployment schedules for EOP standalone and Exchange Online/Exchange Enterprise CAL with Services may be slightly different.
Requirements for Exchange Online Protection (EOP).
Limits
For limits in EOP, see Exchange Online Protection Limits.
Feature availability across Exchange Online Protection (EOP) plans
Each feature is listed below. For more detailed information about EOP features, click the links in the table. When Exchange Online is mentioned, it typically refers to the Office 365 Enterprise service family.
Note
1 Mail users are defined as "Mailboxes," and, along with external mail contacts, can be added, removed, and otherwise managed directly in the Exchange admin center (EAC).
2 No RBAC customization. Admin roles only.
3 Managed domains can be viewed and domain types can be edited in the EAC. All other domain management must be done in the Microsoft 365 admin center.
4 The available flexible criteria and actions differ between EOP and Exchange Online. For a list of available criteria and actions in EOP, see Transport Rule Criteria and Transport Rule Actions. For a list of available criteria and actions in Exchange Online, see Transport Rule Criteria and Transport Rule Actions.
5 EOP auditing reports are a subset of Exchange Online auditing reports that exclude information about mailboxes.
6 DLP policy tips are not available for Exchange Enterprise CAL with Services customers.
7 The default content filter action is to move spam messages to the recipients' Junk Email folder. For this to work with on-premises mailboxes, you must also configure two Exchange Transport rules on your on-premises servers to detect spam headers added by EOP. For more information, see Ensure that Spam is Routed to Each User's Junk Email Folder.
8 This feature is available to Exchange Server 2013 Service Pack 1 (SP1) customers whose mailboxes are being filtered by EOP, and will soon be available to Exchange Online customers.
9 EOP reports are a subset of Exchange Online reports that exclude information about mailboxes.
10 Includes DLP reports.
11 Exchange Enterprise CAL with Services customers should install the workbook by selecting the Exchange Online service rather than the Exchange Online Protection service.
12 Supported for on-premises customers who purchase Azure Information Protection and use Exchange Online Protection to route email through Exchange Online.
13 Scans inbound and outbound messages, but does not scan internal messages sent from a sender in your organization to a recipient in your organization.
14 The available predicates and actions differ between EOP and Exchange Online.
15 Hybrid setup is not available through Hybrid Wizard, but can be set up manually if you have Exchange SP1.
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/office365/servicedescriptions/exchange-online-protection-service-description/exchange-online-protection-service-description | 2019-04-18T15:32:06 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.microsoft.com |
Message-ID: <1966971718.24232.1555598583724.JavaMail.confluence@docs1.parasoft.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_24231_1482156520.1555598583724" ------=_Part_24231_1482156520.1555598583724 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This topic explains configuration options for using HTTP 1.1&nbs= p;with selected supporting tools and provisioning action tools.After select= ing HTTP 1.1 from the Transport drop-down menu within the Transport t= ab of an appropriate tool, the following options display in the left pane o= f the Transport tab:
General page options include:
Router Endpoint: The endpoint is the URL of th= e service endpoint.
Method: Specifies which method is used to proc= esses the request. This field is disabled if the Constrain to = WSDL check box is selected.The method to invoke can be specif= ied as a fixed value, parameterized value, or scripted value.
For det= ails about parameterizing values, see = Parameterizing Tools with Data Source Values, Variables, and Extracted Valu= es.
With fixed values, you can access data source values usi=
ng
${var_name} syntax. You can also use the environm=
ent variables that you have specified. For details about environments, see =
Configuri=
ng Virtual options include:
Security> Client side SSL page options include:=
Security> HTTP Authentication page options incl= ude: This header is constructed based on the Proxy Authentication settings in=
the preferences and whether the server indicated that proxy authentication=
is required. Cookies page options include:
Proxy-Authorization
Cookies
This header is constructed based on the Proxy Authentication settings in= the preferences and whether the server indicated that proxy authentication= is required.
Cookies page options include: | https://docs.parasoft.com/exportword?pageId=33860237 | 2019-04-18T14:43:03 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.parasoft.com |
So, you've created your payment plans and are ready to go but you have no idea where to actually paste PayWhirl's embed code(s)?!?! No problem!
PayWhirl is incredibly flexible in that you can paste the embed code nearly anywhere on your website, but what does that mean exactly?!? Well, most web builders have a toggle where you can switch from the normal graphical interface into a HTML editor. This HTML editor is where you can paste the PayWhirl embed codes.
Some builders may refer to this as switching the WYSIWYG editor into "code mode" before you can paste in your widget code. Most website platforms (Wordpress, Shopify, Joomla, SquareSpace, Wix, Etc.) provide wysiwyg editors to edit your content. Almost all of these editors have a "toggle to code" or html button. It's typically the top left or right icon in a page editor and looks like this --> </>
See this example from Shopify below:
NOTE: You might need to create a new page on your website so you can get to an editor like the one pictured above. For example, all pages in WordPress, Joomla & Shopify have wysiwyg editors. Also, many themes will give you a wysiwyg editor for content on the homepage.
Currently there is currently a limitation of 1 widget/form per page. If you have more then one the second widget is likely to display incorrectly. | https://docs.paywhirl.com/PayWhirl/classic-system-support-v1/frequently-asked-questions-faq/v1-how-do-add-the-widget-or-payment-form-to-my-site | 2019-04-18T14:27:46 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://uploads.intercomcdn.com/i/o/19830171/03f3280c09b9c6b5f47f5105/note.png',
None], dtype=object)
array(['https://uploads.intercomcdn.com/i/o/12669173/046344de610a1a3647f6ca9e/shopify_html.png',
None], dtype=object) ] | docs.paywhirl.com |
Outlines how notifications from the Adyen payments platform work for Point of Sale.
In an integrated setup, the merchant’s back-end can receive real-time payment information in the form of notifications from the Adyen payments platform.
Notifications include new reports on the outcome of transactions and financial reporting, referenced refunds, the creation of recurring contracts, etc. Examples for useful application of notifications are:
- Refund requests: Refunds are processed asynchronously by the Adyen payments platform. A response confirms the receipt of the request. In the rare event a refund can not be processed, contact our POS Support Team and, if possible, the shopper to find an alternative way of refunding the shopper.
- Offline transactions: For offline transactions the
pspReferenceis generated by the Adyen payments platform and sent in a notification. This value should be stored for use in Refunds.
- Reports: Notifications are also sent when a report is generated, allowing you to automate report retrieval.
For more information on Notifications in general see the payment notifications page. | https://docs.adyen.com/developers/point-of-sale/build-your-integration/notifications-for-point-of-sale | 2019-04-18T14:19:03 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.adyen.com |
Add Ticket¶
With the Support module enabled, every user can submit a support ticket in case of a problem. The ticket is sent as a plain text email to the email address set in the SUPPORT_EMAIL setting. The user who submits the ticket may receive a confirmation email depending on the SUPPORT_USER_CONFIRMATION setting.
Note
The Support module can be disabled via the virtual data center settings page. | https://docs.danubecloud.org/user-guide/gui/support/add-ticket.html | 2019-04-18T15:11:12 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['../../_images/add_ticket.png', '../../_images/add_ticket.png'],
dtype=object) ] | docs.danubecloud.org |
We are using a concept called Parenting. When you create a group of transforms, the topmost transform or Scene is called the “parent transform”, and all transforms grouped underneath it are called “child transforms” or “children”. You can also create nested parent-child transforms (called “descendants” of the top-level parent transform).
Here is an example scene hierarchy with multiple transforms with parent-child transforms.
In this example
Click a parent transform’s circled "-" / "+" icon (on the left-hand side of its name) to show or hide its children.
To make any transform the “child” of another, drag and drop the intended child transform onto the intended parent transform in the Scene Tab.
To make any transform in the root of a scene (not child of any transform), drag and drop the intended transform onto the blank area of the Scene Tab. | https://docs.modest3d.com/Designer_Documentation/Scene_Management/Adjust_Scene_Hierarchy | 2019-04-18T15:12:23 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.modest3d.com |
Community Cage
A few VMs are hosted by OSAS in the Community Cage under the OSCI umbrella.
Deployment
These hosts are deployed using Ansible and the rules are public in the Gerrit repository. You also need the Ansible Vault passphrase, which can be found in the infra team encrypted file.
Accessing Hosts
All machines are accessible using the root account directly. The list of admin SSH keys allowed to log in is also managed via Ansible.
Machines in the
OSAS-Public VLAN can be accessed directly, whereas those in
the
OSAS-Internal VLAN require to use a jump host. Ansible is already
configured to use it, but if you need direct SSH access just add the following
option to your ssh call:
-o [email protected]. | https://ovirt-infra-docs.readthedocs.io/en/latest/Community_Cage/Overview/index.html | 2019-04-18T14:39:11 | CC-MAIN-2019-18 | 1555578517682.16 | [] | ovirt-infra-docs.readthedocs.io |
This guide describes how to get started with a device running the Open Lighting Embedded (OLE) software. For developer information, including instructions on how to modify the OLE code, see the OLE Developer Guide.
Since the OLE software is customizable, the version running on a particular device may have different functionality from what is described here. Consult your manufacturer's documentation on which features are enabled on each product.
The examples in this guide are from a Number1 since that was the first and most widely available OLE device. Throughout the rest of this document, the device running OLE is simply referred to as the 'device'.
The device can function as either an RDM Responder or DMX / RDM Controller. On boot, the device will start in Responder mode.
In responder mode, the device can simulate different types of RDM models, for example a moving light or a dimmer. This allows testing of RDM Controller implementations against a known good RDM Responder implementation. Together the different RDM Models implement all PIDs from the E1.20, E1.37-1 & E1.37-2 standards.
Responder mode is standalone, a host PC is not required but can be connected to view the USB Console logs if desired.
In controller mode, the device operates as a USB to DMX / RDM Controller. This mode requires a host computer running the Open Lighting Architecture (OLA). The OLA documentation outlines the requirements for the host computer.
When the device is configured as an output port within OLA it will automatically switch into controller mode.
When operating in controller mode, the OLE device reports detailed timing information about the RDM responses. The timing stats are reported when running the RDM Responder Tests.
The device is powered over the USB port. If the device is connected to a host PC no additional power source is required.
For standalone operation, the device can be powered by a standard USB cell phone charger.
The OLE codebase is licensed under the LGPL.
The unit-testing code & mocks are licenced under the GPL. | http://docs.openlighting.org/ole/manual/latest/ | 2019-04-18T14:59:05 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.openlighting.org |
Requesting a grid certificate using the Digicert SSO Portal
From SNIC Documentation
Caveat
Due to brain damage at Google, you can no longer use Google Chrome/Chromium for getting a Digicert certificate. Firefox, Safari and Internet Explorer still works. We have reports that Microsoft Edge does not work.
Set a master password
When using Firefox, or any browser on Linux/Unix, it is highly recommended to use a Master Password to protect stored logins and passwords.
Instructions for Firefox:
Requesting a eScience (grid) certificate
- Start a suitable web browser (see Caveat above for details):
- Windows:
- Internet Explorer
- Firefox (does not use OS certificate store, obtained certificate is only available to Firefox)
- macOS:
- Safari
- Firefox (does not use OS Keychain, obtained certificate is only available to Firefox)
- Linux/Unix:
- Firefox (obtained certificate is only available to Firefox)
- Go to
- Type the first characters of your university (or similar) and then select the Identity Provider to use for login.
- Login at your home university.
- Select the Grid Premium product.
- Normally, leave the CSR field blank to get a key generated in your browser.
- Press "Request Certificate".
- Your certificate is generated and should be automatically imported into your browser.
Exporting the Digicert certificate
If you need to use the certificate with other programs it needs to be exported to a file and imported where appropriate.
See Exporting a client certificate for detailed instructions on how to export a Digicert certificate from the most popular browsers.
Adding certificate to OS certificate store
Some operating systems have a built in keychain/keystore. If Firefox was used the certificate needs to be imported to keychain/keystore in order to be available for other programs.
Windows: FIXME: Investigate and update instructions accordingly.
Using the certificate with grid tools
To use the Digicert certificates with the ARC grid client they have to be exported from the browser into a file and then converted into a suitable format.
See Preparing a client certificate for detailed instructions on how to prepare an exported certificate for use with grid tools. | http://docs.snic.se/wiki/Requesting_a_grid_certificate_using_the_Digicert_SSO_Portal | 2019-04-18T14:19:46 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.snic.se |
Using Elastic Beanstalk with Amazon Virtual Private Cloudends..
Note
Elastic Beanstalk does not currently support linux proxy settings (HTTP_PROXY, HTTPS_PROXY and NO_PROXY) for configuring a web proxy. Instances in your environment must have access to the Internet directly or through a NAT device.
Important
Instances in your Elastic Beanstalk environment use Network Time Protocol (NTP) to syncronize the system clock. If instances are unable to communicate on UDP port 123, the clock may go out of sync, causing issues with Elastic Beanstalk health reporting. Ensure that your VPC security groups and network ACLs allow outbound UDP traffic on port 123 to avoid these issues.
What VPC Configurations Do I Need?
When you use Amazon VPC with Elastic Beanstalk, you can launch Elastic Beanstalk resources, such as Amazon EC2 instances, in a public or private subnet. The subnets that you require depend on your Elastic Beanstalk application environment type and whether the resources you launch are public or private. The following scenarios discuss sample VPC configurations that you might use for a particular environment.
Topics
Single-instance environments
For single-instance environments, Elastic Beanstalk assigns an Elastic IP address (a static, public IP address) to the instance so that it can communicate directly with the Internet. No additional network interface, such as a network address translator (NAT), is required for a single-instance environment.
If you have a single-instance environment without any associated private resources, such as a back-end Amazon RDS DB instance, create a VPC with one public subnet, and include the instance in that subnet. For more information, see Example: Launching a Single-Instance Environment without Any Associated Private Resources in a VPC.
If you have resources that you don't want public, create a VPC with one public subnet and one private subnet. Add all of your public resources, such as the single Amazon EC2 instance, in the public subnet, and add private resources such as a back-end Amazon RDS DB instance in the private subnet. If you do launch an Amazon RDS DB instance in a VPC, you must create at least two different private subnets that are in different Availability Zones (an Amazon RDS requirement).
Load-balancing, autoscaling environments
For load-balancing, autoscaling environments, you can either create a public and private subnet for your VPC, or use a single public subnet. In the case of a load-balancing, autoscaling environment, with both a public and private subnet, Amazon EC2 instances in the private subnet require Internet connectivity. Consider the following scenarios:
Scenarios
You want your Amazon EC2 instances to have a private IP address
Create a public and private subnet for your VPC in each Availability Zone (an Elastic Beanstalk requirement). Then add your public resources, such as the load balancer and NAT, to the public subnet. Elastic Beanstalk assigns them a unique Elastic IP addresses (a static, public IP address). Launch your Amazon EC2 instances in the private subnet so that Elastic Beanstalk assigns them private IP addresses.
Without a public IP address, an Amazon EC2 instance can't directly communicate with the Internet. Although Amazon EC2 instances in a private subnet can't send outbound traffic by default, neither can they receive unsolicited inbound connections from the Internet.
To enable communication between the private subnet, and the public subnet and the Internet beyond the public subnet, create routing rules that do the following:
Route all inbound traffic to your Amazon EC2 instances through the load balancer.
Route all outbound traffic from your Amazon EC2 instances through the NAT device.
You have resources that are private
If you have associated resources that are private, such as a back-end Amazon RDS DB instance, launch the resources in private subnets.
Note
Amazon RDS requires at least two subnets, each in a separate Availability Zone. For more information, see Example: Launching an Elastic Beanstalk in a VPC with Amazon RDS.
You don't have any private resources
You can create a single public subnet for your VPC. If you want to use a single public subnet, you must choose the Associate Public IP Address option to add the load balancer and your Amazon EC2 instances to the public subnet. Elastic Beanstalk assigns a public IP address to each Amazon EC2 instance, and eliminates the need for a NAT device to allow the instances to communicate with the Internet.
For more information, see Example: Launching a Load-Balancing, Autoscaling Environment with Public and Private Resources in a VPC.
You require direct access to your Amazon EC2 instances in a private subnet
If you require direct access to your Amazon EC2 instances in a private subnet (for example, if you want to use SSH to sign in to an instance), create a bastion host in the public subnet that proxies requests from the Internet. From the Internet, you can connect to your instances by using the bastion host. For more information, see Example: Launching an Elastic Beanstalk Application in a VPC with Bastion Hosts.
Extend your own network into AWS
If you want to extend your own network into the cloud and also directly access the Internet from your VPC, create a VPN gateway. For more information about creating a VPN Gateway, see Scenario 3: VPC with Public and Private Subnets and Hardware VPN Access in the Amazon VPC User Guide. | http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/vpc.html | 2017-09-19T17:27:27 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.aws.amazon.com |
RADOS Gateway¶
RADOS Gateway is an object storage interface built on top of
librados to
provide applications with a RESTful gateway to RADOS clusters. The RADOS Gateway
supports two interfaces:
- S3-compatible: Provides block storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API.
- Swift-compatible: Provides block storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API.
RADOS Gateway is a FastCGI module for interacting with
librados. Since it
provides interfaces compatible with OpenStack Swift and Amazon S3, RADOS Gateway
has its own user management. RADOS Gateway can store data in the same RADOS
cluster used to store data from Ceph FS clients or RADOS block devices.
The S3 and Swift APIs share a common namespace, so you may write data with
one API and retrieve it with the other.
Note
RADOS Gateway does NOT use the CephFS metadata server. | http://docs.ceph.com/docs/bobtail/radosgw/ | 2017-09-19T16:54:07 | CC-MAIN-2017-39 | 1505818685912.14 | [array(['../_images/ditaa-8276196287c26eb902a6cc389aaf4e97773c5d52.png',
None], dtype=object) ] | docs.ceph.com |
Expired patron accounts display with a black box around the patron’s name, a note that the patron is expired, and – when initially retrieved – an alert stating that the “Patron account is EXPIRED.”
Open the patron record in edit mode as described in the section Updating Patron Information.
Navigate to the information field labeled Privilege Expiration Date. Enter a new date in this box. When you place your cursor in the Patron Expiration Date box, a calendar widget will display to help you easily navigate to the desired date.
Select the date using the calendar widget or key the date in manually. Click the Save button. The screen will refresh and the “expired” alerts on the account will be removed. | http://docs.evergreen-ils.org/2.11/_renewing_library_cards.html | 2017-09-19T17:13:18 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.evergreen-ils.org |
Amazon ECS Container Agent Introspection
The Amazon ECS container agent provides an API for gathering details about the container instance that the agent is running on and the associated tasks that are running on that instance. You can use the curl command from within the container instance to query the Amazon ECS container agent (port 51678) and return container instance metadata or task information.
To view container instance metadata log in to your container instance via SSH and run the following command. Metadata includes the container instance ID, the Amazon ECS cluster in which the container instance is registered, and the Amazon ECS container agent version information,
Copy
[ec2-user ~]$
curl
Output:
{ "Cluster": "default", "ContainerInstanceArn": "
<container_instance_ARN>", "Version": "Amazon ECS Agent - v1.14.4 (f94beb4)" }
To view information about all of the tasks that are running on a container instance, log in to your container instance via SSH and run the following command:
Copy
[ec2-user ~]$
curl
Output:
{ "Tasks": [ { "Arn": "arn:aws:ecs:us-east-1:<aws_account_id>:task/example5-58ff-46c9-ae05-543f8example", "DesiredStatus": "RUNNING", "KnownStatus": "RUNNING", "Family": "hello_world", "Version": "8", "Containers": [ { "DockerId": "9581a69a761a557fbfce1d0f6745e4af5b9dbfb86b6b2c5c4df156f1a5932ff1", "DockerName": "ecs-hello_world-8-mysql-fcae8ac8f9f1d89d8301", "Name": "mysql" }, { "DockerId": "bf25c5c5b2d4dba68846c7236e75b6915e1e778d31611e3c6a06831e39814a15", "DockerName": "ecs-hello_world-8-wordpress-e8bfddf9b488dff36c00", "Name": "wordpress" } ] } ] }
You can view information for a particular task that is running on a container instance. To specify a specific task or container, append one of the following to the request:
The task ARN (
?taskarn=)
task_arn
The Docker ID for a container (
?dockerid=)
docker_id
To get task information with a container's Docker ID, log in to your container instance via SSH and run the following command.
Note
Amazon ECS container agents prior to version 1.14.2 require full Docker container IDs for the introspection API, not the short version that is shown with docker ps. You can get the full Docker ID for a container by running the docker ps --no-trunc command on the container instance.
Copy
[ec2-user ~]$
curl
Output:
{ "Arn": "arn:aws:ecs:us-east-1:<aws_account_id>:task/e01d58a8-151b-40e8-bc01-22647b9ecfec", "Containers": [ { "DockerId": "79c796ed2a7f864f485c76f83f3165488097279d296a7c05bd5201a1c69b2920", "DockerName": "ecs-nginx-efs-2-nginx-9ac0808dd0afa495f001", "Name": "nginx" } ], "DesiredStatus": "RUNNING", "Family": "nginx-efs", "KnownStatus": "RUNNING", "Version": "2" } | http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-introspection.html | 2017-09-19T17:28:06 | CC-MAIN-2017-39 | 1505818685912.14 | [] | docs.aws.amazon.com |
JBoss.orgCommunity Documentation
Abstract
The User manual is an in depth manual on all aspects of HornetQ.
HornetQ is an example of Message Oriented Middleware (MoM) For a description of MoMs and other messaging concepts please see the Chapter 4, Messaging Concepts. 6+ runtime, that's everything from Windows desktops to IBM mainframes.
Amazing Git repository is
All release tags are availble. Queue, Topic Queue, Topic 49
stop.bat
To run on Unix/Linux type
./stop.sh
To run on Windows type
stop.bat
Please note that HornetQ requires a Java 6 or later runtime to run..
hornetq-configuration.xml. This is the main HornetQ
configuration file. All the parameters in this file are described in Chapter 49, Configuration Reference. Please see Section 6.9, “The main configuration file.” for more information on this file. basic.core.remoting.impl 48,-jgroups example demonstrates how to form a two
node cluster using JGroups as its underlying topology discovery technique, rather than
the default UDP broadcasting.. example shows you how to configure a
HornetQ server to send and receive Stomp messages via a Stomp 1.1 connection...64, . Although this value is
configured on the server, it is downloaded and used by the client. However,
it can be overridden on the client-side by using the customary
"javax.net.ssl.keyStore" system property.
key-store-password. This is the password for the client
certificate key store on the client. Although this value is configured on
the server, it is downloaded and used by the client. However, it can be
overridden on the client-side by using the customary
"javax.net.ssl.keyStorePassword" system property.(...); closing a ClientSession you left open. Please make sure you close all ClientSessions explicitly before let ting them go out of scope! [Finalizer] 20:14:43,244 WARNING [org.hornetq.core.client.impl.DelegatingSession] The session.40, .40, .15, .44,
true then all calls to send for durable messages on non
transacted sessions will block until the message has reached the server, and a
response has been sent back. The default value is
true.
BlockOnNonDurableSend. If this is set to
true-durable-send and
block-on-non-durable.53, .17, .22, -messages-directory>/data/large-messages</large-messages.30, ).. drop messages and also throw an exception on the client-side when the address is full.
To do this just set the
address-full-policy to
FAIL.42, .api."., FAIL FAIL then further messages will be dropped and an exception will be thrown on the client-side..HDR_SCHEDULED_DELIVERY_TIME).
The specified value must be a positive.51, .31, ..35, “Message Group” for an example which shows how message groups are configured and used with JMS.
See Section 11.1.36,
HornetQ supports two additional modes:
PRE_ACKNOWLEDGE and
INDIVIDUAL_ACKNOWLEDGE
In some cases you can afford to lose messages in event of failure, so it would make sense to acknowledge the message on the server before delivering it to the client.
This extra mode is supported by HornetQ and will call it pre-acknowledge mode.!
This can be configured in the
hornetq-jms.xml file on the
connection factory like this:
<connection-factory <connectors> <connector-ref </connectors> <entries> <entry name="ConnectionFactory"/> </entries> <pre-acknowledge>true</pre-acknowledge> </connection-factory>
Alternatively, to use pre-acknowledgement mode using the JMS API, create a JMS Session
with the
HornetQSession.PRE_ACKNOWLEDGE constant.
// messages will be acknowledge on the server *before* being delivered to the client Session session = connection.createSession(false, HornetQcase for individual acknowledgement would be when you need to have your own scheduling and you don't know when your message processing will be finished. You should prefer having one consumer per thread worker but this is not possible in some circunstances.43, , Clusters)
Chapter 38, Clusters) Chapter 38, Clusters).32, ");.33, ", messageCounter.getDepth(), messageCounter.getDepthDelta());
See Section 11.1.34, . Multiple connectors can be configured by using a comma separated list, i.e. org.hornetq.core.remoting.impl.invm.InVMConnectorFactory,org.hornetq.core.remoting.impl.invm.InVMConnectorFactory.</description> <config-property-name>ConnectorClassName</config-property-name> <config-property-type>java.lang.String</config-property-type> <config-property-value>org.hornetq.core.remoting.impl:
</config-property-value> </config-property>.api.api.api.api.jms.bridge.impl.JNDIDestinationFactory"> <constructor> <parameter> <inject bean="JNDI" /> </parameter> <parameter>/queue/source</parameter> </constructor> </bean> <!-- TargetDestinationFactory describes the Destination used as the target --> <bean name="TargetDestinationFactory" class="org.hornetq.api> <entry> <key>jnp.timeout</key> <value>5000</value> </entry> <entry> <key>jnp.sotimeout</key> <value>5000< durable and non-durable durable and non-durable durable..28, )
ServerLocator instance directly you can also specify the
parameters using the appropriate setter methods on the
ServerLocator.18, way> <ha>true</ha> .
ha. This optional parameter determines whether or not this
bridge should support high availability. True means it will connect to any available
server in a cluster and support failover. The default value is
false.
max-size-bytesto prevent the flow of messages from ceasing..api.core.createMessage clusters allow groups of HornetQ servers to be grouped together in order to share message processing load. Each active node in the cluster is an active HornetQ server which manages its own messages and handles its own connections... connector-name="netty which signifies
that an anonymous port should be used. This parameter is alawys specified in conjunction with
local-bind-address. This is a UDP specific attribute. broadcasting.
The JGroups attributes (
jgroups-file hornetq. attribute. in
the broadcast group that you wish to listen from. This parameter is
mandatory. This is a UDP specific attribute.
group-port. This is the UDP port of the multicast
group. It should match the
group-port).), , = HornetQClient.createServerLocatorWithHA(new DiscoveryGroupConfiguration(groupAddress, groupPort));. The following shows all the available configuration options.
This parameter is mandatory.
max-retry-interval. This is how many times a cluster connection should try
to reconnect if the connection fails, -1 means for ever..
retry-interval-multiplier. This is a multiplier used to increase the
retry-interval after each reconnect attempt, default is 1.
This parameter is optional, the default value is
false.
min-large-message-size. This parameters determines when a
message should be splitted with multiple packages when sent over the cluster.
This parameter is optional and its default is at 100K..
connection-ttl. This is how long a cluster connection should stay alive if it
stops receiving messages from a specific node in the cluster
check-period. The period (in milliseconds) used to check if the cluster connection
has failed to receive pings from another server
call-timeout. When a packet is sent via a cluster connection and is a blocking
call, i.e. for acknowledgements, this is how long it will wait for the reply before throwing an exception
call-failover-timeout. Similar to
call-timeout but used
when a call is made during a failover attempt
confirmation-window-size. The size of the window used for sending confirmations
from the server connected to, default is -1 which is no window.
Alternatively if you would like your cluster connections to use a static list of servers for discovery then you can do it like this.
<cluster-connection <address>jms</address> <connector-ref>netty-connector</connector-ref> <retry-interval>500</retry-interval> <use-duplicate-detection>true</use-duplicate-detection> <forward-when-no-consumers>true</forward-when-no-consumers> <max-hops>1</max-hops> .. HornetQ = Hornet> <forward-when-no-consumers>true</forward-when-no-consumers> HornetQ..
The replicating live and backup pair must be part of a cluster, meaning that even tho you may have a single live/backup it is still regarded as a cluster and must have a cluster connection configured in both the live and tha backup's for the same address. Also all servers must be on the same cluster, and have the same cluster user and password..
To configure the live and backup servers to be a replicating pair, configure
both
hornetq-configuration.xml to have:
<shared-store>false</shared-store> . . . <cluster-connections> <cluster-connection <cluster-user>fish<cluster-user> <cluster-password>ILikeToSwim<cluster-password> ... </cluster-connection> </cluster-connections>
The backup server must be flagged explicitly as a backup.
<backup>true</backup> <connectors> <connector name="nameOfConfiguredLiveServerConnector"> <factory-class> org.hornetq.core.remoting.impl.netty.NettyConnectorFactory </factory-class> <param key="port" value="5445"/> </connector> <!-- a real configuration could have more connectors here --> <connectors>, Clusters in order to force the new live server to shutdown when the old live server
comes back up in
hornetq-configuration.xml configuration file as follows:
<check-for-live-server>true</check-for-live-server>.
HornetQ.
HornetQ Chapter 38, Clusters. Alternatively, the clients can explicitly connect to a specific server and download the current servers and backups see Chapter 38, Clusters.
To enable automatic client failover, the client must be configured to allow non-zero reconnection attempts (as explained in Chapter 34,.
Since the client doesn't learn about the full topology until after the first
connection is made there is a window where it doesn
Hornet Section 11.1.67, “Transaction Failover” and Section 11.1.41, “Non-Transaction Failover With Server Data Replication”.
HornetQ exception..
HornetQ ships with a fully functioning example demonstrating how to do this, please see Section 11.1.67, ..3, ); uses the JBoss Logging framework to do its logging and is configurable via the
logging.properties
file found in the configuration directories. This is configured by Default to log to both the console and to a file.
There are 6 loggers availabe # Console handler configuration handler.CONSOLE=org.jboss.logmanager.handlers.ConsoleHandler handler.CONSOLE.properties=autoFlush handler.CONSOLE.level=FINE handler.CONSOLE.autoFlush=true handler.CONSOLE.formatter=PATTERN # File handler configuration handler.FILE=org.jboss.logmanager.handlers.FileHandler handler.FILE.level=FINE handler.FILE.properties=autoFlush,fileName handler.FILE.autoFlush=true handler.FILE.fileName=hornetq.log handler.FILE.formatter=PATTERN # Formatter pattern configuration formatter.PATTERN=org.jboss.logmanager.formatters.PatternFormatter formatter.PATTERN.properties=pattern formatter.PATTERN.pattern=%d{HH:mm:ss,SSS} %-5p [%c] %s%E%n).-pull-consumers: msg-push-consumers: POST message to. The semantics of this link are described
in Posting Messages.
msg-pull-consumers. This is a URL for
creating consumers that will pull from a queue. The semantics
of this link are described in HornetQ
REST Interface Basics. POST messages to. the URL you post messages to.
Messages are published to a queue or topic by sending a simple HTTP
message to the URL published by the msg-create header. The HTTP message
contains whatever content you want to publish to the HornetQ destination.
Here's an example scenario: 201 Created msg-create-next:/002. This URL may
be uniquely generated for each message and used for duplicate detection. If
you lose the URL within the
msg-create-next header, then
just go back to the queue or topic resource to get the msg-create URL./001/002.
An alternative to this approach is to use the
msg-create-with-id
header. This is not an invokable URL, but a URL template. The idea is that
the client provides the
DUPLICATE_DETECTION_ID and creates
it's're interacting with a topic, creates a temporty '-' charactor is converted
to a '$'.
idle-timeout. For a topic subscription,
idle time in milliseconds in which the consumer connections
will be closed if idle.
delete-when-idle. Boolean value, If
true, a topic subscription will be deleted (even if it is
durable) when an the idle timeout is reached. probabley respone reated. producres.20, .26, “Interceptor” for an example which shows how to use interceptors to add properties to a message on the server.
Stomp is a text-orientated wire protocol that allows Stomp clients to communicate with Stomp Brokers. HornetQ now supports both Stomp 1.0 and Stomp 1 custom durable messages. By default JMS messages are durable. If you don't really need durable messages then set them to be non-durable. Durable messages incur a lot more overhead in persisting them to storage.
Batch many sends or acknowledgements in a single transaction. HornetQ will only require a network round trip on the commit, not on every send or acknowledgement. traffic on the wire. For more information on this, see
Chapter 29, Extra Acknowledge Modes. operating systems like later versions of Linux include TCP auto-tuning and setting TCP buffer sizes manually can prevent auto-tune from working and actually give you worse performance!...
This is the configuration file used by the server side JMS service to load JMS Queues, Topics and Connection Factories.
Continued..
By default all passwords in HornetQ server's configuration files are in plaintexttext value ("bbc").
example 2
<mask-password>true</mask-password> <cluster-password>80cf731af62c290</cluster-password>
This indicates the cluster password is a masked value and Hore cleartext cleartext passord! | http://docs.jboss.org/hornetq/2.3.0.beta3/docs/user-manual/html_single/index.html | 2018-06-18T02:49:47 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.jboss.org |
Lesson 4: Defining Advanced Attribute and Dimension Properties
APPLIES TO:
SQL Server Analysis Services
Azure Analysis Services. is displayed. For more information, see Parent-Child Dimensions and Attributes in Parent-Child Hierarchies.
Automatically Grouping Attribute Members
In this task, you automatically create groupings of attribute members based on the distribution of the members within the attribute hierarchy. For more information, see Group | https://docs.microsoft.com/en-us/sql/analysis-services/lesson-4-defining-advanced-attribute-and-dimension-properties?view=sql-analysis-services-2017 | 2018-06-18T01:58:20 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.microsoft.com |
You can run a workflow to add an XSD schema to a REST host from the plug-in's inventory.
About this task
The XSD schema describes the XML documents that are used as input and output content from Web services. By associating such a schema with a host, you can specify the XML element that is required as an input when you are generating a workflow from a REST operation.
Prerequisites
Verify that you are logged in to the Orchestrator client as an administrator.
Verify that you have a connection to a REST host from the Inventory view.
Procedure
- Click the Workflows view in the Orchestrator client.
- In the workflows hierarchical list, select to navigate to the Add a schema to a REST host workflow.
- Right-click the Add a schema to a REST host workflow and select Start workflow.
- Select the host to which you want to add the XSD schema.
- Select whether to load the schema from URL.
- Click Submit to run the workflow. | https://docs.vmware.com/en/vRealize-Orchestrator/7.0/com.vmware.vrealize.orchestrator-use-plugins.doc/GUID-80F1BADC-623C-4ADE-9D44-D7C93D31D7D3.html | 2018-06-18T02:24:29 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.vmware.com |
cts.elementAttributePairGeospatialQuery( $element-name as xs.QName[], $latitude-attribute-names as xs.QName[], $longitude-attribute-names as xs.QName[], $regions as cts.region[], [$options as String[]], [$weight as Number?] ) as cts.elementAttributePairGeospatialQuery
Returns a query matching elements by name which has specific attributes.
The point value is expressed as the numerical values in the textual content of the named attributes.
The value of the
precision option takes precedence over
that implied by the governing coordinate system name, including the
value of the
coordinate-system option. For example, if the
governing coordinate system is "wgs84/double" and the
precision
option is "float", then the query uses single precision.
The point values and the boundary specifications are given in degrees relative to the WGS84 coordinate system. Southern latitudes and Western longitudes take negative values. Longitudes will be wrapped to the range (-180,+180) and latitudes will be clipped to the range (-90,+90).
If the northern boundary of a box is south of the southern boundary, no points will match. However, longitudes wrap around the globe, so that if the western boundary is east of the eastern boundary (that is, if the value of 'w' is greater than the value of 'e'), documents with test data declareUpdate(); xdmp.documentInsert("/point01.xml", xdmp.unquote( ' <item><point lat="10.5" long="30.0"/></item>' )), xdmp.documentInsert("/point02.xml", xdmp.unquote( ' <item><point lat="15.35" long="35.34"/></item>' )), xdmp.documentInsert("/point03.xml", xdmp.unquote( ' <item><point lat="5.11" long="40.55"/></item>' )); // ****** // Now the following search: cts.search( cts.elementAttributePairGeospatialQuery(xs.QName("point"), xs.QName("lat"), xs.QName("long"), cts.box(10.0, 35.0, 20.0, 40.0))); // => returns the document inserted above: // <item><point lat="15.35" long="35.34"/></item> cts.search( cts.elementAttributePairGeospatialQuery(xs.QName("point"), xs.QName("lat"), xs.QName("long"), cts.box(10.0, 40.0, 20.0, 35.0))); // => returns the document inserted above (wrapping around the Earth): // <item><point lat="10.5" long="30.0"/></item> // ****** // And the following search: cts.search( cts.elementAttributePairGeospatialQuery(xs.QName("point"), xs.QName("lat"), xs.QName("long"), cts.box(20.0, 35.0, 10.0, 40.0))); // => throws an XDMP-BADBOX error (because latitudes do not wrap)
CommentsThe commenting feature on this page is enabled by a third party. Comments posted to this page are publicly visible.
| http://docs.marklogic.com/cts.elementAttributePairGeospatialQuery | 2018-06-18T02:16:07 | CC-MAIN-2018-26 | 1529267859923.59 | [array(['/images/i_speechbubble.png', None], dtype=object)] | docs.marklogic.com |
When troubleshooting issues related to the communication between iSymphony and Asterisk, we may need to get a packet capture to verify the exact information iSymphony is sending to and receiving from Asterisk. This will help us determine whether iSymphony or Asterisk is misbehaving, how it is misbehaving, and what an appropriate solution should be.
Quickstart
The command below will use the most common options. See the Step-by-step guide for full details:
tcpdump -i lo -s 0 -C 50 -W 5 -w isymphony.pcap tcp port 5038
This will create
pcap files that we will need to examine.
When running the command above you may get a "Permission Denied" message, despite the fact that you are running as the root user. To resolve this add the
-Z root parameter after
tcpdump.
Step-by-step guide
We will use a command called
tcpdump to perform the packet capture. If you're already familiar with
tcpdump, you can skip to the end of the guide to see the recommended set of flags, and reference the steps below to clarify any unknown flags.
- Interface: First, determine which interface you'll need to monitor:
- If iSymphony is installed on the same server as Asterisk, this will be the
lointerface.
- If iSymphony is running on a separate server, this will be the interface over which iSymphony connects to Asterisk. In most situations, this will be the
eth0interface.
If you're not sure, the following command will tell you which interface is current being used. Be sure to replace the
<asterisk_hostname>tag with the actual hostname of your Asterisk server.
ip route get $(getent ahosts <asterisk_hostname> | head -1 | awk '{print $1}') | grep -Po '(?<=(dev )).*(?= src)'
- Port: Next, determine what port to monitor:
- If we ask for a packet capture between iSymphony and Asterisk, use port
5038.
- If we ask for a packet capture between an iSymphony server and an iSymphony remote agent, use port
51000.
- Size Constraints: Next, determine an appropriate size limit. This will be dependent on how busy your phone system is, how quickly you can stop the capture after the problem occurs, and how much available disk space you have. We will provide two separate parameters to the packet capture command to control the size limit. First, the
-Cflag will limit the size of one file created by the packet capture. Second, the
-Wflag will limit how many total files (each with the size defined with -C) are retained. Using these two parameters, the
tcpdumpcommand will automatically rotate capture files up to the limit. Here are some general guidelines to pick an appropriate size:
- As a starting point, we typically recommend a 5 file limit, with up to 50MB per file. Thus, -C should be 50, and -W should be 5
- If the problem is easily reproducible, and you can stop the packet capture quickly after the issue occurs, you can probably set the limits lower. Keep in mind, though, that the limits are limits, and if you stop the capture before reaching the limits, it won't use the full size of the limits anyway.
- If the phone system is low-traffic, it will grow the size of the packet capture more slowly, and the limits can be set lower.
- If the phone system is high-traffic, but you can stop the capture quickly after the issue happens. In that case, the capture size can be smaller than otherwise necessary on a high traffic system.
- If the phone system is high-traffic and it may take some time before you can stop the capture after the issue occurs (for example, if it crashes overnight), the limits may need to be set higher, provided sufficient disk space is available.
Run the command: With that information collected, you're ready to run the command below to perform the packet capture. Note that
tcpdumpwill run in the foreground, so you'll need to either run the command in a separate terminal/ssh session, or use a terminal multiplexer like
screento disconnect and reconnect to it as needed.
tcpdump -i <interface> -C <filesize_limit> -W <files_limit> -w isymphony.pcap tcp port <port>
Be sure to replace the
<interface>,
<filesize_limit>,
<files_limit>, and
<port>placeholders with their appropriate values as determined above.
- Stop the command: Once the issue has occurred, press Ctrl+C to stop the capture.
- Upload the files: You will now have several new files in the current directory, named
isymphony.pcap,
isymphony.pcap1, etc. We will need a copy of those files. If they are too large to attach to an email, please upload them to a file host for us to access.
Related articles | http://docs.getisymphony.com/display/ISYMKB/Acquire+a+Packet+Capture | 2018-06-18T01:28:44 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.getisymphony.com |
position = %INSTR(start, string, substring[, position2])
or
xcall INSTR(start, string, substring, position2)
Return value
position
The starting position of the first occurrence of substring within string. (n)
If any of the following conditions are true, the result is 0:
If substring is null but string is not, the result is 1. If position2 is specified, the result is also returned in that variable.
Arguments
start
The position in the string at which the search will begin. (n)
string
The string in which to search for the substring. (a)
substring
The substring for which to search. (a)
position2
(optional) A variable in which to return the starting position of the first occurrence of substring. If INSTR can’t find an exact match of the substring value or matches one of the other conditions described by position, position2 is returned with a value of 0 to indicate that the search failed. (n)
Discussion
Searching from left to right, %INSTR searches for substring in string from start through the end of the string. The length of substring must be less than or equal to the length of string.
To search from right to left and find the last occurrence of the substring, use %RVSTR.
The position returned is relative to the beginning of string.
Examples
This function checks if the entered state abbreviation is in the western part of the United States.
function chk_state a_state ,a record western ,a*, "AZ|CA|CO|ID|MT|NM|NV|OR|UT|WA|WY" proc freturn %instr(1, western, a_state)
endfunction
In the example below, assume your data division looks like this:
record codes ,a30, "help.add.sub.mul.div" loc ,d3
target ,a3, "sub"
The following subroutine returns a loc value of 7, because the first d appears at character position 7 in the string stored in the codes variable.
xcall instr(4, codes, "d", loc)
The subroutine below assigns the value 8 to loc, because when the search begins at the eighth character stored in codes, the first d appears at character position 8. This result differs from the preceding XCALL, because the starting position value causes the search to begin after the d in character position 7.
xcall instr(8, codes, "d", loc)
The following subroutine searches codes for the first occurrence of sub, beginning at character position 1. Loc is set to 10.
xcall instr(1, codes, target, loc)
The example below uses one field for all of the error messages, rather than a separate field or array entry for each message. This saves data space, because the field is the exact size of the error message, with no filler. The processing method uses delimiters that make it easy to add, delete, or change error messages.
subroutine error_msg
err_code ,a ;Error code, including colon .define TTY ,1 ;Terminal channel .define FILEOF ,1 ;End-of-file error # .define ROW ,12 ;Row # to display error message record err_msg ;Capitalized word is err_code ,a*, "Opener:Could not open file\" & "FLESIZ:File full\" & "DUPKEY:Duplicate key exists\" & "KEYSIZ:Incorrect key size\" & "NOTFOUND:Record not found\" & "LOCKED:Record locked--try again\" & "ILLRECNBR:Invalid record number\" & "WRONG:Invalid error code\" & "CNTLC:Internal error--call developer!\" record msg_buffer ret_msg ,a22, "Press <CR> to Continue" input ,a1 ;Dummy input variable start_pos ,d4 ;Start position of error message end_pos ,d4 ;End position of error message len ,d2 ;Length of error message record window width ,d2 ;Width of error msg display col ,d2 ;Column position of error msg proc onerror (FILEOF)done, exit ;Look for valid error code xcall instr(1, err_msg, err_code, start_pos) len = %len(err_code) if (start_pos .eq. 0) then begin len = 6 ;Length of "WRONG:" xcall instr(1, err_msg, "WRONG:", start_pos) end xcall instr(start_pos + len, err_msg, '\', end_pos) decr end_pos ;Back up to end of msg start_pos = start_pos + len width = end_pos - start_pos + 1 ;Error msg width col = %rnd(40.0 - width/2.0)
display(TTY, $scr_pos(ROW, col), & err_msg(start_pos, end_pos),
& $scr_pos(ROW + 1, 28), ret_msg) reads(TTY, input, done) ;Wait for user response done, ;Error displayed offerror return
exit, ;If anything unexpected happens, stop program
offerror xcall exit_sys ;External wrap up stop endsubroutine | http://docs.synergyde.com/lrm/lrmChap9INSTR.htm | 2018-06-18T02:09:26 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.synergyde.com |
CS Time has many features that when selected can enhance the system.
Note: The Features that are available will be dependent on the CS Time System Level and optional modules that have been registered.
CONTENTS
To open the Features options select window, proceed as follows:
Using this option, you can select which classifications you wish to use to classify employees. These are used to group employees together, like employees in the same branch office, or employees in the same department. There can be up to ten of these employee classifications. In a small business you may not need to use any of these classifications, in a large organization you may need all ten.
These options are used to select the following options:
These options enable you to select the options that relate to different types of leave, how leave forms are to be handled and how employee leave is to be accrued. The settings available include:
This option enables you to select colors used on the various leave calendars.
This option enables you to select the different types of hardware that you will be using and also the types of connection methods that will be used to connect the clocks to the system.
This option is used to select which employee human resources information options that can be included when capturing employee information.
These options enable you to select how employee access clockings are to be controlled and which system and hardware options are to be used.
These options are only available on the Enterprise version and are used select options that are normally only used by large companies that have a large number of employees and who work at different sites.
If the Job Costing Module is to be used, this option is used to select which Job Costing categories you wish to use to classify employees.
Note: Only the Classifications that have been selected (activated) for use from the Classification option will be available for Job Classification selection.
First Job Clocking. Before a job costing report is printed CS Time will check for the last job clocking for each employee in order to update their job classifications. This process can take a long time as the system will check through all of an employee's clockings until it finds a job clocking or none at all. To speed up the process, one can specify the date when job costing was implemented. The system will not check further back than this date which will speed up to classifications update.
If the Job Costing Module is to be used, these options are used to select which Job Costing features you wish to use.
This option is used to select if rostering is to be used and when rostering can be performed.
Rostering can be done for Payroll or Daily Shifts. If Daily Shift Rostering is selected you can also select the Only Same Day option that will only allow rostering to be done for the current day or by using Daily Availability that will inform you as to which employees are available or not on particular days.
These options are used if the TNA Web Server is to be used which allows employee information, clockings, leave, daily and payroll hours to be viewed and managed using a standard web browser.
Configure automatic updates for CS Time to minimise disruptions during payroll processing.
Once the required System Features have been selected, click on the Ok button to have the selected features to be available for use.
Permalink:
Viewing Details: | http://docs.tnasoftware.com/Configuration_Module/System_Features | 2018-06-18T02:04:38 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.tnasoftware.com |
-
Cloud Manager can manage agent authentication for you if you are using Automation to manage the agents. With Automation, Cloud Manager creates the users for each agent and configures the agent appropriately. See: Enable Username and Password Authentication for your Cloud Kerberos authentication mechanism you want the agents to use.
Adding an agent as a MongoDB user requires configuring an authentication mechanism. Agents can use any supported authentication mechanism, but all agents must use the same mechanism.
For the purposes of this tutorial, you must ensure your:
- Deployment supports Kerberos authentication and
- Agents use Kerberos authentication.
See Enable Kerberos Authentication for your Cloud Manager Project for how to enable Kerberos authentication. | https://docs.cloudmanager.mongodb.com/tutorial/configure-backup-agent-for-cr/ | 2018-06-18T01:52:39 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.cloudmanager.mongodb.com |
Regenerate a text index for a table You can regenerate a table text index when you change table stop words or display values. Before you beginRole required: admin About this task By default, the system maintains text indexes on a daily schedule. Typically, you only need to manually regenerate a text index when you change these values. You change the list of table-specific stop words. You change the display value of a record such as changing a user or group name. Until you regenerate the index, text searches for old display values will still produce results and searches for the new display value will not show results. Text indexing can be a resource-intensive task that may take a while to complete. You may notice performance degradation or incomplete search results during index generation. To estimate text indexing duration, you can view historical statistics. | https://docs.servicenow.com/bundle/istanbul-platform-administration/page/administer/search-administration/task/t_RegenerateATextIndexForATable.html | 2018-06-18T02:12:43 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.servicenow.com |
. The staging area should be a network directory that can be specified as a UNC path or mounted as a network drive by
The reference machine.
The system where you build the WinPE image.
The virtualization host where you provision the machines.
Ensure that the network has a DHCP server. vRealize Automation cannot provision machines with a WIM image unless DHCP is available.
Identify or create the reference machine in.
Create any custom scripts you want to use to customize provisioned machines and place them in the appropriate work item directory.
If you are using VirtIO for network or storage interfaces, you must ensure that the necessary drivers are included in your WinPE image and WIM image. See Preparing for WIM Provisioning with VirtIO Drivers.
When you create the WinPE image, you must manually insert the vRealize Automation guest agent. See Manually Insert the Guest Agent into a WinPE Image.
Place the WinPE image in the location required by your virtualization platform. If you do not know the location, see your hypervisor documentation.
Gather the following information to include. | https://docs.vmware.com/en/vRealize-Automation/7.3/com.vmware.vra.prepare.use.doc/GUID-640D5559-F519-4048-BB30-FFAA0EB78A80.html | 2018-06-18T01:47:05 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.vmware.com |
To signup for a D-Tools account, you will go to the url d-tools.cloud and you will go to the signup form. You may also be referred to the application from the D-Tools website or other partners websites.
- Enter in your email and click "Next" button
- Enter Full Name, create a password and confirm that password
- From your email, take the verification code and have it added to the verification code area
- Click the verify code button and you will see your email become verified
Note: Make sure you not only send verification code when verifying email, but also click the Verify button after you have entered in the verification code sent to your email.
After you create your user, you will be taken to the account creation process. Here you will go through the process of entering in your Company information, Your Business information, then configure your account. Answering these questions will configure your account so you don't have to do all the setup work other tools have you go through. The more you enter in here, the less you will do later.
Once your processing is done, you will land on an empty dashboard. You will notice a few things right away:
- A guide on what to do first - Create an opportunity, build a quote, and present a proposal.
- In app guidance - there is tool tips to help you get started and learn the software with our help. You can go through the flow or dismiss.
- Intercom - You can always ask D-Tools support questions through intercom. | https://docs.d-tools.cloud/en/articles/2287472-creating-an-account | 2020-10-20T02:18:28 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.d-tools.cloud |
AppBox is a tool for developers to build and deploy Development and In-house applications directly to the devices from your Dropbox account.
Why AppBox?
Installation
1. Using curl
You can install AppBox by running following command in your terminal -
curl -s | bash
2. Manual
If you face any issue using above command then you can manually install AppBox by downloading it from here. After that, unzip
AppBox.app.zip and move
AppBox.app into
/Applications directory.
How to use AppBox
System requirements
Currently, AppBox is only supported to run on macOS 10.10 or later. | https://docs.getappbox.com/ | 2020-10-20T02:18:52 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.getappbox.com |
Using the Date Difference Calculated Column
This -
- Navigate to the "Manage Data -> Data Warehouse" page.
- Navigate to the table on which you want to create this column.
- Click "Create a Column" and configure your column as follows:
- "Select Column Definition Type" => Same Table
- "Select Column Definition Equation" => DATE_DIFF = (Ending DATETIME - Starting DATETIME)
- "Select Ending DATETIME Column" => Choose the ending datetime field, which is typically the event that occurs later
- "Select Starting DATETIME Column" => Choose the starting datetime field, which is typically the event that occurs earlier
- Provide a name to the column and hit "Save".
- The column will be available to use immediately.
As an example, the screenshot below is configured to calculate the "Seconds between order date and customer's creation date":
| https://docs.magento.com/mbi/data-analyst/data-warehouse-mgr/using-date-diff-calc-column-.html | 2020-10-20T03:49:17 | CC-MAIN-2020-45 | 1603107869785.9 | [array(['/mbi/images/date_diff.png', 'date_diff.png'], dtype=object)] | docs.magento.com |
Display compliance violations
Note This feature is not available for certain device types.
On the Self Service Portal, you can display compliance violations of your device.
- In the Sophos Central Self Service portal, click Mobile and then click the relevant device.
- Click the Noncompliant link next to Compliance Status.This link is only available when your device is not compliant. | https://docs.sophos.com/central/Selfservice/help/en-us/esg/Sophos-Mobile-SSP/tasks/SSPViolations.html | 2020-10-20T03:48:22 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.sophos.com |
Difference between revisions of "Game"
From CSLabsWiki
Revision as of 17:25, 5 November 2013
Game is a virtual machine currently temporarily running on Europa (until Juno is rebuilt), that hosts several games and other recreational services listed below.
Hosted on Game
Game Servers
Communication Servers
* Murmur (Mumble) * XMPP (via Prosody) | http://docs.cslabs.clarkson.edu/mediawiki/index.php?title=Game&diff=prev&oldid=5781 | 2020-10-20T04:00:36 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.cslabs.clarkson.edu |
Defining and managing chargeback hierarchies and targets
In TrueSight Capacity Optimization, any entity that uses IT services and incurs IT costs, such as an internal company department or business user, or an external customer of a service provider is considered a target. You can structure targets into a chargeback hierarchy to represent either organizational or logical structures. This is similar to the domain tree displayed in the Workspace folder.
The Chargeback > Hierarchies enables you to define and view targets. These targets are organized in a hierarchical structure that reflects the structure of your organization. The targets in the chargeback hierarchy are charged back for their IT costs. Using Chargeback > Hierarchies, you can select domains and associated sub domains from the Workspace and set them as targets for accounting costs for IT services.
For more information, see:
To view chargeback hierarchies
Select Chargeback > Hierarchies. The Hierarchies page displays a list of all existing chargeback hierarchies that have been configured in TrueSight Capacity Optimization.
The Hierarchies page consists of Hierarchies table with the following columns:
The following figure shows the sample Hierarchies page.
To view details of a hierarchy
- Select Chargeback > Hierarchies.
The Hierarchies page is displayed.
Click name of the required hierarchy.
The hierarchyName page is displayed.
The following figure shows the sample hierarchyName page.
Adding a new chargeback hierarchy
You can add a new chargeback hierarchy that is based on the entities for which account of IT costs is required.
Before you begin
Ensure that the required domain and associated sub domains are available in the Workspace. For more information, see Out-of-the-box ETLs and Creating a domain.
To add a new chargeback hierarchy
- On the Hierarchies page, click Add hierarchy.
The Add hierarchy page is displayed.
Enter the following details for the chargeback hierarchy.
Click Save to save the new hierarchy.
The hierarchy is saved and the hierarchyName page is displayed and the chargeback hierarchy is added in the Workspace > Administration domains >Chargeback view > Business Drivers.
To edit a chargeback hierarchy
- Select Chargeback > Hierarchies.
The Hierarchies page is displayed.
Click name of the required hierarchy.
The hierarchyName page is displayed.
- Click Edit.
The Edit hierarchy page is displayed.
Edit the following details.
Deleting a chargeback hierarchy
You can delete a chargeback hierarchy that is not required for accounting IT costs in your organization.
Before you begin
Ensure that required chargeback hierarchy is not associated with any cost model. If it is associated with any cost model, delete or remove the cost model association before you try to delete the chargeback hierarchy. For more information, see To delete a cost model.
To delete a chargeback hierarchy
- Select Chargeback > Hierarchies.
The Hierarchies page is displayed.
Click name of the required hierarchy.
The hierarchyName page is displayed.
- Click Delete.
Click OK to delete the selected chargeback hierarchy.
The selected chargeback hierarchy is deleted.
Where to go from here
Defining and managing a cost object | https://docs.bmc.com/docs/btco113/defining-and-managing-chargeback-hierarchies-and-targets-775473206.html | 2020-10-20T03:13:15 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.bmc.com |
How to assign readymade size chart templates to products
Just Activate the plugin, you will list ready-made size chart templates. Now, select the default size chart template and assign it to a product or category.
Preview default size: With this preview button you can see the default template view or style.
Custom Template: Ready-made size chart templates can be a clone or modified template and assign to product or category.
Default Template: It is a readymade size template. You can directly apply to the product or category. The default size chart title cannot be changed. If you want to change then you want a clone that size chart.
Assign Default a size chart template in Edit product page
In the product edit page, you get the dropdown box, in which you can select a size chart (default and custom) template.
How it will display on the product detail page.
| https://docs.thedotstore.com/article/241-how-to-assign-readymade-size-chart-templates-to-products | 2020-10-20T02:34:12 | CC-MAIN-2020-45 | 1603107869785.9 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ad9db7c042863075092a2a2/images/5e01f97904286364bc9335b7/file-wdit9Tkse3.png',
'How to assign readymade size chart templates to products'],
dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ad9db7c042863075092a2a2/images/5e01fc4f04286364bc9335c3/file-vtCBBquW0x.png',
'How to assign readymade size chart templates to products'],
dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ad9db7c042863075092a2a2/images/5e01fda204286364bc9335c7/file-Vambp2kTy7.png',
'How to assign readymade size chart templates to products'],
dtype=object) ] | docs.thedotstore.com |
Text Field Options
Colour words
When this field is included in a view, highlight it with a colour if the content is one of a number of words. The most up-to-date values to be highlighted are shown in the administrator interface, but as of the latest update of this page, they are
Text calculations can also be coloured based on the same words - see the calculation options
Use dropdown
Rather than a simple text entry field, display a dropdown of choices from which one can be selected. If there are only a few values, then the user interface may display options in radio button style, but the behaviour is exactly the same.
Options can be provided with the default value setting as above. If no options are specified, then the list of options will be automatically generated from the set of unique values already entered into records. In this way, a set of values will gradually grow as people enter new data.
See also the Use only given values option below to enforce the use of provided values only.
Use tags
Similar to Use dropdown, but rather than only letting the user select one value, multiple values can be chosen at once.
An example use may be selecting countries of origin for an ingredient, where it may come from different countries (perhaps via different suppliers).
The list of available options is specified in exactly the same way as for dropdowns above.
Use only given values
If a dropdown or tags list is used as above, ticking this option enforces the use of one of the provided values.
If it’s un-ticked, then an ‘add new entry’ option will be available for users to manually add values which aren’t yet in the list. Added values will be visible to all.
In some circumstances, it can be useful to leave this un-ticked and let options grow organically, until a certain point. Administrators can then rationalise and ‘hard-code’ values from the most commonly entered. In other situations, it may be best to specify available options right from the start.
Default value
For basic text fields, this simply provides a default value, which the field will have when a record’s created. If not provided, the field will be blank.
Field defaults can also be calculated, with a simple or arbitratily complex calculation. To do that, instead use the Set from previous referenced field option.
Specifying dropdown lists and tags fields
If either Use dropdown or Use tags is selected, the default value option provides a list of options with which to populate the dropdown, or a set of tags to choose from. Enter each option here separated by commas.
If the list starts with a comma, that means the field should be blank by default when a record is created. If not, then the default will be the first value in the comma separated list.
In the dropdown/tags list, the options will be ordered alphabetically, not shown in the order they’re entered in. If you wish to give the values a particular order, you can prefix them with a number or letter followed by a closing bracket like so:
1) cold, 2) warm, 3) hot. The number prefixes won’t show to the user.
Lists of users
Also for dropdown lists and tags fields, agileBase can generate dropdown contents based on a list of users and/or roles in the system. To do that, select
users or
users and roles from the ‘fill with’ selector just below the default value input. The standard user ID format throughout the system is used, ‘Forename Surname (username)’.
Only users/roles which have privileges to view the data in this particular table will be shown in the list. That can keep things manageable if a system has hundreds or more users.
An example use would be assigning an account manager to a customer, selecting from a list of staff members (who are users).
When a user is selected in this way, more options for use are opened up, such as automatically emailing the selected person when there’s a status change or something is overdue for example. agileBase can look up the user’s email address to accomplish this, See email workflows for details.
When using ‘fill with users’, the list of users to appear in the dropdown can further be narrowed down by role if required. To do that, enter the text
_users|role1,role2,role3
i.e.
_users followed by a vertical bar and then a comma separated list of role names. Only users which appear in one or more of the named roles will appear.
Text case
Convert people’s input into the required case as it’s entered - choose from
- lowercase
- UPPERCASE
- Title Case
Size
If ‘short’ is chosen, a standard single line input box is displayed.
If ‘large’ is chosen, a multi-line text entry box is shown. This option also lets people format the text entered, such as using bold, italics or bullet points. For data security reasons, only certain formatting is allowed.
Use as record title
The user interface has the ability to highlight the importance of certain fields by showing their contents in large font at the top of the record data. For example, a Company Name field may be the record title for an organisation record.
This option can be selected for multiple fields in a table. In that case, the first field will be the main title, others will appear as subtitles.
Preventing duplicates
Enabling the ‘use as record title’ option also has the effect of enabling duplicate detection. The system will detect when a value is entered that is close to an existing value in the system. Any ‘close’ values that are found are displayed as links to the relevant records.
This can be useful when e.g. entering company names. Although a field can be marked unique to prevent exact duplicates, that won’t pick up near matches, which this duplicate detection will. For example, ‘The Universoty Of Bristol’ and ‘University Of Bristol (UOB)’ will be detected as close matches, or ‘agileBase’ and ‘agileBase Ltd’.
The system uses trigram matching to detect similar values.
Feedback
Was this page helpful?
Glad to hear it! Please tell us how we can improve.
Sorry to hear that. Please tell us how we can improve. | https://docs.agilebase.co.uk/docs/fields/field-options/text-field-options/ | 2020-10-20T02:26:47 | CC-MAIN-2020-45 | 1603107869785.9 | [array(['/word-colours.png', 'Word Colours'], dtype=object)] | docs.agilebase.co.uk |
Installation Wizard
Proceed through the installation wizard to accept licenses, install and configure Cloudera Runtime, and more.
Upload License File
On the Upload License File page, you can select either the trial version of CDP Data Center or upload a license file:
- Choose one of the following options:
- Upload Cloudera Data Platform License
- Try Cloudera Data Platform for 60 days. The CDP Data Center trial does not require a license file, but the trial expires after 60 days.
- If you choose the CDP Data Center Edition Trial, you can upload a license file at a later time. Read the license agreement and click the checkbox labeled Yes, I accept the Cloudera Standard License Terms and Conditions if you accept the terms and conditions of the license agreement. Then click Continue.
- If you have a license file for CDP Data Center, upload the license file:
- Select Upload Cloudera Data Platform License.
- Click Upload License File.
- Browse to the location of the license file, select the file, and click Open.
- Click Upload.
- Click Continue.
- Click Continue to proceed with the installation.
The Welcome page displays.
Welcome (Add Cluster - Installation)
The Welcome page of the Add Cluster - Installation wizard provides a brief overview of the installation and configuration procedure, as well as some links to relevant documentation.
Click Continue to proceed with the installation.
Cluster Basics
The Cluster Basics page allows you to specify the Cluster Name
For new installations, a Regular Cluster (also called a base cluster) is the only option. You can add a compute cluster after you finish installing the base cluster.
For more information on regular and compute clusters, and data contexts, see Virtual Private Clusters and Cloudera SDX.
Enter a cluster name and click Continue..
If you do not want to enable auto-TLS at this time, click Continue to proceed.
Specify Hosts
- To enable Cloudera Manager to automatically discover hosts on which to install Runtime. For more information, see Parcels.
- Use PackagesA package is a standard binary distribution format that contains compiled code and meta-information such as a package description, version, and dependencies. Packages are installed using your operating system package manager.
- Select the version of Cloudera Runtime or CDH to install. If you do not see the version you want to install:
- Parcels – Click the Parcel Repository & Network Settings link to add the repository URL for your version. If you are using a local Parcel repository, enter its URL as the repository URL.
Repository URLs for CDH 6 parcels are documented in CDH 6 Download InformationRepository URLs for the Cloudera Runtime 7 parcels are documented in Cloudera Runtime Download Information
After adding the repository, click Save Changes and wait a few seconds for the version to appear. If your Cloudera Manager host uses an HTTP proxy, click the Proxy Settings button to configure your proxy.Note that if you have a Cloudera Enterprise license and are using Cloudera Manager 6.3.3 or higher to install a CDH version 6.3.3 or higher, or a Cloudera Runtime version 7.0 or higher using parcels, you do not need to add a username and password or "@" to the parcel repository URL. Cloudera Manager will authenticate to the Cloudera archive using the information in your license key file. Use a link to the repository in the following format:
If you are using a version of CM older than 6.3.3 to install CDH 6.3.3 or higher parcels, you must include the username/password and "@" in the repository URL during installation or when you configure a CDH 6.3.3 or higher parcel repository. After you add the repository, click Save Changes and wait a few seconds for the version to appear. If your Cloudera Manager host uses an HTTP proxy, click the Proxy Settings button to configure your proxy.
- Packages – If you selected Use Packages, and the version you want to install is not listed, you can select Custom Repository to specify a repository that contains the desired version. Repository URLs for CDH 6 version are documented in CDH 6 Download Information,
If you are using a local package repository, enter its URL as the repository URL.
- If you selected Use Parcels, specify any Additional Parcels you want to install.
- Click Continue.
Select JDK
If you installed your own JDK version, such as Oracle JDK 8, in Step 2: Install Java Development Kit, select Manually manage JDK.
To allow Cloudera Manager to automatically install the OpenJDK on cluster hosts, select Install a Cloudera-provided version of OpenJDK.
To install the default OpenJDK that is provided by your operating system, select Install a system-provided version of OpenJDK.
After checking the applicable boxes, click Continue.
Enter Login Credentials
- Select root for the
rootaccount, or select Another user and enter the username for an account that has password-less
sudoprivileges.
-.. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.4/installation/topics/cdpdc-installation-wizard.html | 2020-10-20T03:12:49 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.cloudera.com |
How to Update theme?
Method 01: Manual Update Via Envato Market WordPress Plugin
Please visit this link: to download The Envato Market WordPress Plugin help you can update the Theme & Plugin automatically easier.
Method 02: Manual Update Via WordPress
Step 01: Download Nomos theme. Read our Download The Package Theme to know how to download the package theme.
Note: Before updating the theme, you need to backup all data on your site. If you don’t backup, all modification files will be lost.
Step 02: Go to Appearance > Themes and deactivate Nomos theme. To deactivate, simply switch to a different theme. For example, the default WordPress Twenty Fifteen theme.
Step 03: After deactivating Nomos theme, you can go ahead and delete it. To do this, hover over the theme thumbnail then click “Theme Details”. In the bottom right corner of the window, click the ‘Delete’ button. All your content such as pages, options, images and posts will not be lost or erased by doing this. However, any customizations to the theme’s core files, such as PHP files will be lost. So you need to the child theme to customize the theme.
Step 4: Upload, Install and Active the Nomos theme new version. Read our Install Nomos via WordPress to know how to install Nomos via WordPress.
– Then your theme had updated successfully, your backend will look like this picture below.
Step 5: Don’t forget to update the required plugins
Method 03: Manual Update Via FTP
Step 01: Download Nomos theme. Read our Download The Package Theme to know how to download the package theme.
Step 02: Before updating the theme, you need to backup Active the theme. Read our Install Nomos via FTP to know how to install Nomos via FTP.
Step 4: Don’t forget to update the required plugins | https://docs.famithemes.com/docs/nomos/04-updates/ | 2020-10-20T03:41:33 | CC-MAIN-2020-45 | 1603107869785.9 | [array(['https://docs.famithemes.com/wp-content/uploads/2018/05/fami_4.1_1.png',
None], dtype=object)
array(['https://docs.famithemes.com/wp-content/uploads/2018/05/ecome_2.1__3.png',
None], dtype=object)
array(['https://docs.famithemes.com/wp-content/uploads/2018/05/install-09.png',
None], dtype=object)
array(['https://docs.famithemes.com/wp-content/uploads/2018/05/antive__2.1_5.jpg',
None], dtype=object)
array(['https://docs.famithemes.com/wp-content/uploads/2018/05/Screenshot_6.png',
None], dtype=object)
array(['https://docs.famithemes.com/wp-content/uploads/2018/07/nomos_install1.png',
None], dtype=object)
array(['https://docs.famithemes.com/wp-content/uploads/2018/07/nomos__2.1_61.jpg',
None], dtype=object)
array(['https://docs.famithemes.com/wp-content/uploads/2018/05/antive__2.1_5.jpg',
None], dtype=object) ] | docs.famithemes.com |
Address Book
Customers who keep their address books current can speed through the checkout process. The address book contains the customer’s default billing and shipping addresses, and any additional addresses that they frequently use. Additional address entries are easy to access and maintain from the grid. Each customer’s address book can manage over 3,000 address book entries without impacting performance.
Add a new address
In the sidebar of your customer account, choose Address Book.
On the Address Book page under Additional Address Entries, click Add New Address.
Add New Address
Define the new address item:
Complete the contact and address information.
By default, the customer’s first and last names initially appear in the form.
Select the following checkboxes to indicate how the address is to be used. Select both checkboxes if the same address is used for both billing and shipping.
- Use as my default billing address
- Use as my default shipping address
When complete, click Save Address.
The new address is listed under Additional Address Entries.
Additional Address Entries | https://docs.magento.com/user-guide/v2.3/customers/account-dashboard-address-book.html | 2020-10-20T02:56:16 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.magento.com |
Using the Event Activity Statistics application
This section describes
how to access and use the Event Activity Statistics Application
event types
The Event Activity Statistics application is a facility that monitors and displays MainView AutoOPERATOR event traffic. Use this application when you want to display events available for automation by Rules. You can select any of the events shown on this panel and write a Rule for it.
You can also access events with the windows-mode view, AOEVENTS. See the MainView AutoOPERATOR Basic Automation Guide, Volume 2 for more information.
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/mao82/using-the-event-activity-statistics-application-839193615.html | 2020-10-20T04:11:26 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.bmc.com |
Analytical Applications
This topic describes how to use an ML workspace to create long-running web applications..
Testing applications before you deploy
Before you deploy an application using the steps described here, make sure your application has been thoroughly tested. You can use sessions to develop, test, and debug your applications. You can test web apps by embedding them in sessions as described here: Web Applications Embedded in Sessions.
-.
- Click Create Application.
You can Stop, Restart, or Delete an application from the Applications page.
If you want to make changes to an existing application, click Overview under the application name. Then go to the Settings tab to make any changes and update the application. | https://docs.cloudera.com/machine-learning/1.0/applications/topics/ml-applications.html | 2020-10-20T04:06:37 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.cloudera.com |
Providing the best possible support to our customers is very important to us. Please help us help you by following these guidelines.
If you have a valid service contract and the issue is considered critical, please contact our Call Center and clearly state that this is a critical issue. Your Service Contract User Guide contains the necessary Call Center contact information.
It is important that you write a clear description of the issue. Please include the current version number information. | https://docs.menandmice.com/plugins/viewsource/viewpagesrc.action?pageId=2687148 | 2020-10-20T02:34:14 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.menandmice.com |
How To Customise Browser Extension Calculations
Now you've downloaded PaTMa's Browser Extension Tool, set a free account, you can start to create property prospect lists. With the option of viewing essentials details such the property's price history, estimated yields and comparables as you browse property sites, you can also customise the calculations of the estimations made by the system.
It is essential to understand that the estimated Return On Investment calculated within the Property Tools Browser Extension uses the asking price as the purchase price and uses a median of local comparable rental property rates as the rent. These are the default calculations of PaTMa's Buy-to-Let Profit Calculator.
- By clicking the Settings Icon located in the top right corner of the Estimates section, you can make any additional edits to mortgage rates, max ltv etc..
You will be redirected to the PaTMa website, with the headling Settings where you can manually enter any legal fees, mortgage rate and Max LTV.
Filling In The Form
Legal Fees Enter the fees paid to institutions in order to carry the legal aspect of both buying and selling the property. This amount can be the accumulated cost of paying solicitors and land registry.
Mortgage Rate (%) Enter the percentage rate on a mortgage loan that will be applied to the property.
Mortgage Fees In monetary terms, specify the total fees paid on maintaining the mortgage, arrangement fees, surveying etc.
Max LTV (%) In this field, enter the largest allowable ratio of the loan's size to the monetary value of the property ―i.e the maximum loan-to-value ratio expressed as a percentage.
Required Rent Cover (%) Insert the how much of the rent income is required to cover costs at a profit as a percentage over 100%.
Stress rate (%) Insert the provisional state rate associated with the purchase as it will be included in the calculation of the rent cover.
Note: In the case that this building is to be used as a Second Home, mark the box at the bottom of the page to make it blue. This information will be used in the stamp duty calculations. | https://docs.patma.co.uk/property-tools/how-to/customising/ | 2020-10-20T02:56:29 | CC-MAIN-2020-45 | 1603107869785.9 | [array(['../images/et.png', None], dtype=object)
array(['../images/cu1.png', None], dtype=object)] | docs.patma.co.uk |
Getting Started
swizzin was designed to be an easy to use, modular seedbox solution.
System RequirementsSystem Requirements
Supported Operating SystemsSupported Operating Systems
- Debian 8/9/10
- Ubuntu 16.04 and Ubuntu 18.04
Recommended HardwareRecommended Hardware
- A KVM VPS or bare metal server is recommended
- 2+ CPU cores KVM or Intel Atom c2750
- 4 GB of RAM
- An x86_64 (64-bit) processor is required
With the exception of a 64-bit processor, these are not hard and fast requirements -- you may find that you're able to get away with running on a weaker CPU or less RAM; however, best performance will be had if the applications you're using have ample resource overhead.
InstallationInstallation
Quick StartQuick Start
Make sure you have either
curl or
wget installed. Pick the command of your choice to get started:
wget
bash <(wget -O- -q)
curl
bash <(curl -s) -O- -q)'
SetupSetup
After running the above command, the script will check for updates and install some necessary prerequisites before continuing.
When finished, the installer will ask you a few questions:
- A username for the master user
- A password for the master user
- The packages you would like to install
In text fields, you only need to enter your text and hit
return to enter. To choose packages, from the list, you can navigate with the arrow keys. Press
space to select an entry. When you're satisfied with your selection, press
tab to move the selector to
Ok and then press enter. This will advance the screen.
When you have finished running through the prompts, installation will start. The time it takes will depend on the number of packages you have selected.
Additional Setup QuirksAdditional Setup Quirks
A few items to be aware of as known issues. Most of these have had attempts at working around them, but it's good to be aware of things to avoid:
- Installer appears frozen before any user input (usually on
Installing dependenciesor
Checking repos):
control-cout and
apt update && apt upgradebefore running the installer.
- Capital letters in usernames: capital letters should never be used for usernames
- Usernames which may conflict with a group that already exists: for example, certain images like AWS may have an
admingroup out of the box. If you try to name your user
adminthe install will fail in this case.
Additional HelpAdditional Help
If you're having troubles with any of the items in the documentation, please first consult the Troubleshooting guide. If that is not enough for you, join us in Discord and we will attempt to help you to our best ability.
If you have found a bug or are having an issue, please open an issue on GitHub. | https://docs.swizzin.ltd/installation | 2020-10-20T02:23:35 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.swizzin.ltd |
The sleep state that the rigidbody will initially be in.
Sleeping is an optimisation that is used to temporarily remove an object from physics simulation when it is at rest. This property chooses whether the rigidbody should start off asleep, awake or have sleeping turned off altogether.
See the Physics Overview in the manual for more information about Rigidbody sleeping.
See Also: Sleep, IsSleeping, WakeUp, Rigidbody.Sleep. | https://docs.unity3d.com/kr/2017.2/ScriptReference/Rigidbody2D-sleepMode.html | 2020-10-20T03:32:51 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.unity3d.com |
Data Lifecycle Manager Download Information
Access to Data Lifecycle Manager application for production purposes requires authentication.
To access these applications, you must first have an active subscription agreement along with the required authentication credentials (username and password).
The authentication credentials are provided in an email sent to customer accounts from Cloudera when a new license is issued. If you have an existing license with a Data Lifecycle Manager entitlement, you might not have received an email.
NOTE: If you do not have authentication credentials, contact your Cloudera account representative to receive the same.
To download Data Lifecycle Manager applications, follow these steps:
- From cloudera.com, log into the cloudera.com account associated with the Data Lifecycle Manager license agreement.
- On the Data Lifecycle Manager Download page, click CHOOSE APPLICATION.
- Select from:
DLM-APP
DLM-ENGINE
- Click Download Now.
A table with the list of release packages with associated release numbers, supported operating system, and downloadable link appears.
Click on the applicable link to download the item relevant to your requirement. | https://docs.cloudera.com/HDPDocuments/DLM1/DLM-1.5.1/installation/content/dlm_accessing_dlm_repo.html | 2020-10-20T04:12:44 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.cloudera.com |
Color key value widget
What is a color key value widget?
The color key value widget displays the selected values in different colors and represented in a bar, showing their proportions.
Configuration options
The following table lists the configuration options of this widget:
Example
The below widget is a sample of the average response time of all ISP for the last week.
- Time Range - 1 week
- Field to show - avg (response time)
- Actions - All | https://docs.devo.com/confluence/ndt/dashboards/working-with-dashboard-widgets/color-key-value-widget | 2020-10-20T03:31:47 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.devo.com |
Adding Discount Rules for Checkout
There are two ways of starting towards the process of adding discount rules for checkout –
1. After opening the plugin dashboard, go to Discount Rules for Checkout section and click on the ‘Add discount Rules for checkout’ button, present just above the table of existing rules.
2. From any tab within the plugin dashboard, click on the ‘Add Discount Rules for Checkout’ tab.
Following any of the above-listed method, you will reach this page:
Adding Fee Configuration Details to your Discount Rule for Checkout
Fee Configuration, for any of the conditional discount rules you’re creating, is the section where you fill the basic details related to WooCommerce discounts, such as name, value, type, duration, status, and related details. To fill these details, pay attention to this part of the page you’ve opened.
Now, let us explain the significance and usage of each of these fields, visible in fee configuration section – Now, let us explain the significance and usage of each of these fields, visible in the fee configuration section.
Discount Rule Title for Checkout
As you’ll have multiple rules within the same WooCommerce plugin dashboard, it is critical to distinguish each of them with diverse names. Doing so will also help you in making use of your rules and modifying them later on. So, add an understandable and memorable name for your discount rule in this field.
Select Discount Type
Though this discount plugin, you can add 2 types of discounts for your e-commerce store.
1. Fixed
For the currency-based fixed-price discount, you will have to select ‘fixed’ as your discount type. For example, to enable a discount of $5, or $2.5, choose this option.
2. Percentage
Discount Value
If ticked, it will add two more form fields for you.
i. Calculate Quantity Based On
You can choose to increment the discount value as per the cart quantity or product quantity. For this, select the relevant option using this dropdown.
ii. Fee per Additional Quantity ($)
Start Date
End Date
Just like the ‘Start Date’ field, the ‘end date’ field is also optional and can be used whenever required. So, if you are running a sale or discount offer for a limited time through these rules, add an end date for your rule. After this, your rules won’t work for the store once the given date passes.
Status
You can enable or disable your created conditional discount rules using the toggle button. | https://docs.thedotstore.com/article/326-adding-discount-rules-for-checkout | 2020-10-20T03:21:42 | CC-MAIN-2020-45 | 1603107869785.9 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ad9db7c042863075092a2a2/images/5e2977ac04286364bc944b67/file-gvAyQDsNcR.png',
'Adding Discount Rules for Checkout'], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ad9db7c042863075092a2a2/images/5e2977042c7d3a7e9ae6a346/file-FSz57wWqRH.png',
'Adding Discount Rules for Checkout'], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ad9db7c042863075092a2a2/images/5e297ace2c7d3a7e9ae6a36d/file-Xa05xi238N.png',
'Adding Discount Rules for Checkout'], dtype=object) ] | docs.thedotstore.com |
Now you can click 1 button only from your Oberlo Product Page to get reviews directly from AliExpress without the need of filling product URL as usual.
From the Dashboard page, just go to Import Reviews => Scroll down and move to the 'Import reviews from Oberlo' sector
Setup easily as the instruction below:
Step 1: Add this extension to your chrome:
Step 2: Access your Oberlo Product Page and start importing reviews for all products as instructed below:
1.Click the 'Import Reviews' button on the top-right of your Oberlo Product Page.
** Please note that the Import Reviews button may not show due to network issue, just reload your browser and try again
2. Select the suitable filters to import the reviews
** Please note that this importing process will take a little time.
Rating: Set a limitation to import review with only specific ratings (3 stars and up / 4 stars and up / 5 stars).
Select Country: The app imports review from all countries by default, but if you only want to import reviews from a specific country, please select an option.
Only import reviews with photos: This option is blank by default. Click this if you only need reviews with at least a photo attachment
Only import reviews with content: This option is blank by default. Click this if you want to filter out reviews that don’t have text.
If you have any troubles, please contact us via [email protected] and we are always willing to support you! | https://reviews-docs.smartifyapps.com/faq/how-can-i-get-reviews-directly-from-oberlo | 2020-10-20T02:47:26 | CC-MAIN-2020-45 | 1603107869785.9 | [] | reviews-docs.smartifyapps.com |
Creating and using a Holiday list allows ensures that your organizations designated holidays are excluded from task scheduling and resource capacity calculations in PM Central.
When using a Holiday list keep in mind:
Only one holiday list can be used per PM Central Portfolio
The holiday list must be created from a SharePoint Calendar list
Holidays need to be created as Yearly events
If you use a holiday list there are several Web Part in PM Central that will need to be configured to reference the list: | https://docs.bamboosolutions.com/document/pmc_holiday_list/ | 2020-10-20T02:31:48 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.bamboosolutions.com |
Overview
All the examples in this document are based on
.netframework.
Development environment
Resources
Installation
Download SDK
Use command line to get from Github
Download SDK manually
Get the latest SDK from Github: gt3-dotnet-sdk (in
.zip format).
Import SDK
To import SDK, please firstly open the
.sln file in SDK with VS.
Then, add the following code.
Configure key pair and modify the request
Configure key pair
Get your key pair from GeeTest Dashboard. The key pair consists of a public key(captcha ID) and a private key (KEY). Then, configure the key pair through the following path.
Modify the request (optional)
API example
Captcha initialization
Initiate captcha via
API1, get
challenge and set the
status
Notice:
statusindicates captcha initialization. Status=1 refers to successful initialization, status=0 refers to downtime. Please store the
status, since it will be needed in secondary verification. In the demo above, session has been used to store
status.
Secondary verification (API2), including uptime and downtime
How to simulate the Failback mode? Please fill in an incorrect string (e.g. 123456789) for the captcha ID. Then, it will enter the Failback mode.
Run demo
To run the demo, please open the Solution Explorer and right click the demo
To view the demo, please visit in your browser.
Troubleshooting: secondary verification failure
- SDK internal logic errors. Please check: a) whether
sessionis stored and read successfully, b) whether the code could successfully process has been passed correctly.
- Please provide
challengeto our service team. They could help you to check the log. | https://docs.geetest.com/captcha/deploy/server/csharp | 2020-10-20T02:51:25 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.geetest.com |
Excluding Amazon S3 Storage Classes
If you're running AWS Glue ETL jobs that read files or partitions from Amazon Simple Storage Service (Amazon S3), you can exclude some Amazon S3 storage class types.
The following storage classes are available in Amazon S3:
STANDARD— For general-purpose storage of frequently accessed data.
INTELLIGENT_TIERING— For data with unknown or changing access patterns.
STANDARD_IAand
ONEZONE_IA— For long-lived, but less frequently accessed data.
GLACIER,
DEEP_ARCHIVE, and
REDUCED_REDUNDANCY— For long-term archive and digital preservation.
For more information, see Amazon S3 Storage Classes in the Amazon S3 Developer Guide.
The examples in this section show how to exclude the
GLACIER and
DEEP_ARCHIVE storage classes. These classes allow you to list files, but they
won't let you read the files unless they are restored. (For more information, see
Restoring
Archived Objects in the Amazon S3 Developer Guide.)
By using storage class exclusions, you can ensure that your AWS Glue jobs will work
on tables
that have partitions across these storage class tiers. Without exclusions, jobs that
read data
from these tiers fail with the following error:
AmazonS3Exception: The operation is
not valid for the object's storage class.
There are different ways that you can filter Amazon S3 storage classes in AWS Glue.
Topics
Excluding Amazon S3 Storage Classes When Creating a Dynamic Frame
To exclude Amazon S3 storage classes while creating a dynamic frame, use
excludeStorageClasses in
additionalOptions. AWS Glue automatically
uses its own Amazon S3
Lister implementation to list and exclude files corresponding
to the specified storage classes.
The following Python and
Scala examples show how to exclude the
GLACIER and
DEEP_ARCHIVE
storage classes when creating a dynamic frame.
Python example:
glueContext.create_dynamic_frame.from_catalog( database = "my_database", tableName = "my_table_name", redshift_tmp_dir = "", transformation_ctx = "my_transformation_context", additional_options = { "excludeStorageClasses" : ["GLACIER", "DEEP_ARCHIVE"] } )
Scala example:
val* *df = glueContext.getCatalogSource( nameSpace, tableName, "", "my_transformation_context", additionalOptions = JsonOptions( Map("excludeStorageClasses" -> List("GLACIER", "DEEP_ARCHIVE")) ) ).getDynamicFrame()
Excluding Amazon S3 Storage Classes on a Data Catalog Table
You can specify storage class exclusions to be used by an AWS Glue ETL job as a table
parameter
in the AWS Glue Data Catalog. You can include this parameter in the
CreateTable operation
using the AWS Command Line Interface (AWS CLI) or programmatically using the API.
For more information, see
Table Structure and CreateTable.
You can also specify excluded storage classes on the AWS Glue console.
To exclude Amazon S3 storage classes (console)
Sign in to the AWS Management Console and open the AWS Glue console at
.
In the navigation pane on the left, choose Tables.
Choose the table name in the list, and then choose Edit table.
In Table properties, add
excludeStorageClassesas a key and
[\"GLACIER\",\"DEEP_ARCHIVE\"]as a value.
Choose Apply. | https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-storage-classes.html | 2020-10-20T04:20:17 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.aws.amazon.com |
NiFi patches
This release provides Apache NFi 1.11.4 and these additional Apache patches.
- NIFI-7103 – PutAzureDataLakeStorageGen2 processor to provide native support for Azure Data lake Storage Gen 2 Storage
- NIFI-7173
- NIFI-7221 – Add support for protocol v2 and v3 with Schema Registry
- NIFI-7257 – Add Hadoop-based DBCPConnectionPool
NIFI-7259 – DeleteAzureDataLakeStorage processor to provide native delete support for Azure Data lake Gen 2 Storage
- NIFI-7269 – Solrj in nifi-solr-nar needs upgrade
- NIFI-7278 – Kafka_consumer_2.0 sasl.mechanism SCRAM-SHA-512 not allowed
- NIFI-7279 –.
- NIFI-7281 – Using ListenTCPRecord and CsvReader produces an exception
- NIFI-7286 – ListenTCPRecord does not release port when stopping or terminating while running
- NIFI-7287 – Prometheus NAR missing dependency on SSL classes
- NIFI-7294 – Flows with SolrProcessor configured to use SSLContextService are failing
- NiFi-7314 – HandleHttpRequest should stop Jetty in OnUnscheduled instead of OnStopped
NiFi-7345 – Multiple entity is created in Atlas for one Hive table if table name contains uppercase characters
- – ScriptedReportingTask gives a NoClassDefFoundError: org/apache/nifi/metrics/jvm/JmxJvmMetrics
- SHA-512 | https://docs.cloudera.com/cdf-datahub/7.2.2/release-notes/topics/cdf-datahub-nifi-patches.html | 2020-10-20T02:37:08 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.cloudera.com |
Master Master:
alerts_rate_across_accumulo16s
total_alerts_rate_across_accumulo16s
Some metrics, such as
alerts_rate, apply to nearly every metric context. Others only apply to a
certain service or role. | https://docs.cloudera.com/cloudera-manager/7.2.2/reference/topics/cm_metrics_master.html | 2020-10-20T04:03:46 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.cloudera.com |
Examining graphs
How to examine graphs.
Graphs have characteristics that can be examined with a variety of commands.
Procedure
- A list of all graphs can be retrieved with the following command:
system.graphs()In Studio and Gremlin console, a list is retrieved, although the presentation is different. Here is a Gremlin console result:
==> food ==> test
- A list of all graph and their attributes can be retrieved as well:
system.list()
==>Name: food_cql | Engine: Core | Replication: {replication_factor=1, class=org.apache.cassandra.locator.SimpleStrategy} ==>Name: food_classic | Engine: Classic | Replication: {class=org.apache.cassandra.locator.NetworkTopologyStrategy, SearchGraphAnalytics=1}
This result shows two different graphs, one with a Core engine and one with a Classic engine. The first listed graph was created without replication settings and defaulted to a replication factor of 1 and a
SimpleStrategy.
- To examine a particular graph, use the describe command:
system.graph('food').describe()
==>system.graph('food').ifNotExists().withReplication("{'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'SearchGraphAnalytics': '1'}").andDurableWrites(true).create() | https://docs.datastax.com/en/dse/6.8/dse-dev/datastax_enterprise/graph/using/examineGraph.html | 2020-10-20T02:35:39 | CC-MAIN-2020-45 | 1603107869785.9 | [] | docs.datastax.com |
_
-¶
Remember that TYPO3 is GPL software and at the same moment when | https://docs.typo3.org/m/typo3/reference-coreapi/10.4/en-us/ExtensionArchitecture/ExtensionKey/Index.html | 2020-10-20T03:59:17 | CC-MAIN-2020-45 | 1603107869785.9 | [array(['../../_images/RegisterExtensionKey.png',
'The extension key registration form'], dtype=object)] | docs.typo3.org |
Changelog for cjdns
crashey is for development, master is rolling release
A few words about Version.h
Current state: 5fa245c - remove check code (Tue Feb 3 23:25:10 2015 +0100)
v15 -- January 2015
crashey since: 74e7b71 - Configurator should attempt a ping before beginning to setup the core (might fix startup errors for some people) (Fri Jan 23 23:45:04 2015 +0100)
master since: 97161a5 - shitfuck missed a line (Thu Jan 29 19:25:39 2015 +0100)
- The configurator now tries to ping the core process before starting to configure it. This might fix a possible race condition during cjdroute startup.
- A bug with ETHInterface auto-peering has been fixed.
- A segfault in peerStats has been fixed.
- The
-Ocflag for build-time optimization has been fixed. It can now be set as
-O2to optimize for performance, or
-Osto optimize for disk space usage.
- We now try to remember and use the last known route to a node.
- Short-form IPv6 addresses are now supported.
- The tools in
contrib/nodejs/tools/have moved to
tools/.
- The sessionStats tool has been added; ping and traceroute tools can now resolve DNS domains.
- The search tool has been added, and DHT searches are now available in the Admin API as
SearchRunner_search().
- The ping and search tools now allow a
-coption for setting the number searches or pings.
- The Admin API functions
NodeStore_nodeForAddr()and
NodeStore_getLink()no longer require authentication.
v14 -- January 2015
crashey since: 670b047 - Fixed bug in getPeers which caused it to return the same peers every time (Sun Jan 18 17:13:53 2015 +0100)
master since: 601b6cd - Oops, lets bump the version while we're at it (Fri Jan 23 07:47:05 2015 +0100)
- The Hidden Peers bug has been fixed; it manifested in flapping peerings because of dropped pings on the switch layer.
- A bug in NodeStore_getPeers() has been fixed, which caused it to always return the same peers.
ETHInterfacecan now bind to
allnetwork interfaces, and auto-peer on all of them, being more resistant against interface going down or up during runtime.
ETHInterfacenow also brings the respective interfaces up when starting.
- A crash related to the bit alignment of switch ping error responses has been fixed.
- The ping and pingAll tools have been improved, pingAll now performs both router and switch pings.
InterfaceControllerhas been rewritten to allow for easier development of custom interfaces.
- Documentation for debugging using traffic analysis has been added.
v13 -- January 2015
crashey since: bb06b63 - Added 2 new command line tools, traceroute and cjdnslog (Thu Jan 1 17:10:39 2015 +0100)
master since: 185fe28 - Nodes trying to ping themselves causing crashes (Fri Jan 2 09:37:32 2015 +0100)
- Nodes running v11 or below are not supported any longer. They can still establish peering to every other node (also v13), but from v13 on, their traffic won't be switched any longer. They also won't make it into v13 nodes' routing tables.
- The ETHInterface wire protocol now includes the payload length. A few ethernet adapters don't strip the checksum which is appended to the packet by the sender, and thus confuse the decrypter.
NodeStore_getBest()no longer takes DHT k-buckets into accounts -- the respective code has been commented out. From the code comments:
The network is small enough that a per-bucket lookup is inefficient Basically, the first bucket is likely to route through an "edge" node In theory, it scales better if the network is large.
- The Admin API function
InterfaceController_peerStats()now includes the peer's
protocolVersion, and doesn't require authentication any longer.
cjdroute --genconfnow has an
--ethswitch which enables the ETHInterface and auto-peering.
- There is now a script which adds peering passwords to both the config file and the running process, avoiding the otherwise neccessary restart:
contrib/bash/peers.sh user [email protected] <user's ipv6>
- Minor Fixes for Android
- It's now possible to cross-compile for ARM, on an OSX host.
- Documentation, and scripts in
contrib/have been improved. | https://docs.meshwith.me/cjdns/changelog.html | 2017-03-23T00:13:20 | CC-MAIN-2017-13 | 1490218186530.52 | [] | docs.meshwith.me |
How to Connect to and Authenticate Oneself to the CVS server ************************************************************ Connection and authentication occurs before the CVS protocol itself is started. There are several ways to connect. server. kserver the kerberized client and server should be changed to use port 2401 (see below), and send a different string in place of `BEGIN AUTH REQUEST' to identify the authentication method in use. However, noone has yet gotten around to implementing this. pserver The password authenticated server listens on a port (in the current implementation, by having inetd call "cvs pserver") which defaults to 2401 (this port is officially registered). The client connects, sends the string `BEGIN AUTH REQUEST', a linefeed, the cvs root, a linefeed, the username, a linefeed, the password trivially encoded (see scramble.c in the cvs sources), a linefeed, the string `END AUTH REQUEST', and a linefeed. The client must send the identical string for cvs root both here and later in the `Root' request of the cvs protocol itself. Servers are encouraged to enforce this restriction. The server responds with `I LOVE YOU' and a linefeed if the authentication is successful or `I HATE YOU' and a linefeed if the authentication fails. After receiving `I LOVE YOU', the client proceeds with the cvs protocol.. future possibilities There are a nearly unlimited number of ways to connect and authenticate. One might want to allow access based on IP address (similar to the usual rsh protocol but with different/no restrictions on ports < 1024), to adopt mechanisms such as the General Security Service (GSS) API `BEGIN AUTH REQUEST'. | http://docs.freebsd.org/info/cvsclient/cvsclient.info.Connection_and_Authentication.html | 2009-07-03T23:32:55 | crawl-002 | crawl-002-012 | [] | docs.freebsd.org |
RULE
RULE blocks are triggered from an in-game EVENT. When an EVENT triggers, this block will check if its CONDITION has been met and then execute all of the ACTIONS.
In the following example, the CONDITION is checking when a Player earns a kill, whether their team has reached the target score, Then, the ACTIONS execute, which in this case, is ending the gamemode for the Player team's favor.
<block type="ruleBlock"> <mutation isOnGoingEvent="false"></mutation> <field name="EVENTTYPE">OnPlayerEarnedKill</field> <statement name="CONDITIONS"> <block type="conditionBlock"> <value name="CONDITION"> <block type="Equals"> <value name="VALUE-0"> <block type="GetGamemodeScore"> <value name="VALUE-0"> <block type="GetTeamId"> <value name="VALUE-0"> <block type="EventPlayer"></block> </value> </block> </value> </block> </value> <value name="VALUE-1"> <block type="GetTargetScore" /> </value> </block> </value> </block> </statement> <statement name="ACTIONS"> <block type="EndRound"> <value name="VALUE-0"> <block type="GetTeamId"> <value name="VALUE-0"> <block type="EventPlayer"></block> </value> </block> </value> </block> </statement> </block>
Types of RULE Blocks Events
Ongoing
Ongoing EVENT types continually check if its CONDITION has become True. If so, the ACTIONS will be executed once. In order for the EVENT to execute again, the CONDITION must become False before becoming True again. Ongoing EVENT types exist within the Global, Player, and Team context. Within the Player and Team contexts, payload value blocks, such as EventPlayer and EventTeam, can be used to refer to the specific Player or Team within the EVENT. Note: In FFA, Ongoing Team will not execute at all.
OnPlayerDied
This will trigger whenever a Player dies. Payloads: EventPlayer (Victim), EventOtherPlayer (Killer)
OnPlayerDeployed
This will trigger whenever a Player deploys. Payloads: EventPlayer (Deployed Player)
OnPlayerJoinGame
This will trigger when a Player joins the game. Payloads: EventPlayer (Joined Player)
OnPlayerLeaveGame
This will trigger when any player leaves the game.
OnPlayerEarnedKill
This will trigger when a Player earns a kill against another Player. Payloads: EventPlayer (Killer), EventOtherPlayer (Victim)
OnGameModeEnding
This will trigger when the gamemode ends.
OnMandown
This will trigger when a Player is forced into the mandown state. Payloads: EventPlayer (Downed Player)
OnRevived
This will trigger when a Player is revived by another Player. Payloads: EventPlayer (Revived Player), EventOtherPlayer (Reviver Player)
OnTimeLimitReached
This will trigger when the gamemode time limit has been reached.
OnGameModeStarted
This will trigger at the start of the gamemode.
OnPlayerIrreversiblyDead
This will trigger when the Player dies and returns to the deploy screen. Payloads: EventPlayer (Dead Player) | https://docs.bfportal.gg/docs/blocks/Rule | 2022-08-07T18:20:19 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.bfportal.gg |
Guarded fabric and shielded VMs overview
Applies to: Windows Server 2022, Windows Server 2019, Windows Server 2016
Overview of the guarded fabric
Virtualization security is a major investment area in Hyper-V. In addition to protecting hosts or other virtual machines from a virtual machine running malicious software, we also need to protect virtual machines from a compromised host. This is a fundamental danger for every virtualization platform today, whether it's Hyper-V, VMware or any other. Quite simply, if a virtual machine gets out of an organization (either maliciously or accidentally), that virtual machine can be run on any other system. Protecting high value assets in your organization, such as domain controllers, sensitive file servers, and HR systems, is a top priority.
To help protect against compromised virtualization fabric, Windows Server 2016 Hyper-V introduced shielded VMs. A shielded VM is a generation 2 VM (supported on Windows Server 2012 and later) that has a virtual TPM, is encrypted using BitLocker, and can run only on healthy and approved hosts in the fabric. Shielded VMs and guarded fabric enable cloud service providers or enterprise private cloud administrators to provide a more secure environment for tenant VMs.
A guarded fabric consists of:
- 1 Host Guardian Service (HGS) (typically, a cluster of 3 nodes)
- 1 or more guarded hosts
- A set of shielded virtual machines. The diagram below shows how the Host Guardian Service uses attestation to ensure that only known, valid hosts can start the shielded VMs, and key protection to securely release the keys for shielded VMs.
When a tenant creates shielded VMs that run on a guarded fabric, the Hyper-V hosts and the shielded VMs themselves are protected by the HGS. The HGS provides two distinct services: attestation and key protection. The Attestation service ensures only trusted Hyper-V hosts can run shielded VMs while the Key Protection Service provides the keys necessary to power them on and to live migrate them to other guarded hosts.
Video: Introduction to shielded virtual machines
Attestation modes in the Guarded Fabric solution
The HGS supports different attestation modes for a guarded fabric:
- TPM-trusted attestation (hardware-based)
- Host key attestation (based on asymmetric key pairs)
TPM-trusted attestation is recommended because it offers stronger assurances, as explained in the following table, but it requires that your Hyper-V hosts have TPM 2.0. If you currently do not have TPM 2.0 or any TPM, you can use host key attestation. If you decide to move to TPM-trusted attestation when you acquire new hardware, you can switch the attestation mode on the Host Guardian Service with little or no interruption to your fabric.
Another mode named Admin-trusted attestation is deprecated beginning with Windows Server 2019. This mode was based on guarded host membership in a designated Active Directory Domain Services (AD DS) security group. Host key attestation provide similar host identification and is easier to set up.
Assurances provided by the Host Guardian Service
HGS, together with the methods for creating shielded VMs, help provide the following assurances.
What is shielding data and why is it necessary?
A shielding data file (also called a provisioning data file or PDK file) is an encrypted file that a tenant or VM owner creates to protect important VM configuration information, such as the administrator password, RDP and other identity-related certificates, domain-join credentials, and so on. A fabric administrator uses the shielding data file when creating a shielded VM, but is unable to view or use the information contained in the file.
Among others, a shielding data files contain secrets such as:
- Administrator credentials
- An answer file (unattend.xml)
- A security policy that determines whether VMs created using this shielding data are configured as shielded or encryption supported
- Remember, VMs configured as shielded are protected from fabric admins whereas encryption supported VMs are not
- An RDP certificate to secure remote desktop communication with the VM
- A volume signature catalog that contains a list of trusted, signed template-disk signatures that a new VM is allowed to be created from
- A Key Protector (or KP) that defines which guarded fabrics a shielded VM is authorized to run on
The shielding data file (PDK file) provides assurances that the VM will be created in the way the tenant intended. For example, when the tenant places an answer file (unattend.xml) in the shielding data file and delivers it to the hosting provider, the hosting provider cannot view or make changes to that answer file. Similarly, the hosting provider cannot substitute a different VHDX when creating the shielded VM, because the shielding data file contains the signatures of the trusted disks that shielded VMs can be created from.
The following figure shows the shielding data file and related configuration elements.
What are the types of virtual machines that a guarded fabric can run?
Guarded fabrics are capable of running VMs in one of three possible ways:
- A normal VM offering no protections above and beyond previous versions of Hyper-V
- An encryption-supported VM whose protections can be configured by a fabric admin
- A shielded VM whose protections are all switched on and cannot be disabled by a fabric admin
Encryption-supported VMs are intended for use where the fabric administrators are fully trusted. For example, an enterprise might deploy a guarded fabric in order to ensure VM disks are encrypted at-rest for compliance purposes. Fabric administrators can continue to use convenient management features, such VM console connections, PowerShell Direct, and other day-to-day management and troubleshooting tools.
Shielded VMs are intended for use in fabrics where the data and state of the VM must be protected from both fabric administrators and untrusted software that might be running on the Hyper-V hosts. For example, shielded VMs will never permit a VM console connection whereas a fabric administrator can turn this protection on or off for encryption supported VMs.
The following table summarizes the differences between encryption-supported and shielded VMs.
1 Traditional debuggers that attach directly to a process, such as WinDbg.exe, are blocked for shielded VMs because the VM's worker process (VMWP.exe) is a protected process light (PPL). Alternative debugging techniques, such as those used by LiveKd.exe, are not blocked. Unlike shielded VMs, the worker process for encryption supported VMs does not run as a PPL so traditional debuggers like WinDbg.exe will continue to function normally.
Both shielded VMs and encryption-supported VMs continue to support commonplace fabric management capabilities, such as Live Migration, Hyper-V replica, VM checkpoints, and so on.
The Host Guardian Service in action: How a shielded VM is powered on
VM01 is powered on. Before a guarded host can power on a shielded VM, it must first be affirmatively attested that it is healthy. To prove it is healthy, it must present a certificate of health to the Key Protection service (KPS). The certificate of health is obtained through the attestation process.
Host requests attestation. The guarded host requests attestation. The mode of attestation is dictated by the Host Guardian Service:
TPM-trusted attestation: Hyper-V host sends information that includes:
TPM-identifying information (its endorsement key)
Information about processes that were started during the most recent boot sequence (the TCG log)
Information about the Code Integrity (CI) policy that was applied on the host.
Attestation happens when the host starts and every 8 hours thereafter. If for some reason a host doesn't have an attestation certificate when a VM tries to start, this also triggers attestation.
Host key attestation: Hyper-V host sends the public half of the key pair. HGS validates the host key is registered.
Admin-trusted attestation: Hyper-V host sends a Kerberos ticket, which identifies the security groups that the host is in. HGS validates that the host belongs to a security group that was configured earlier by the trusted HGS admin.
Attestation succeeds (or fails). The attestation mode determines which checks are needed to successfully attest the host is healthy. With TPM-trusted attestation, the host's TPM identity, boot measurements, and code integrity policy are validated. With host key attestation, only registration of the host key is validated.
Attestation certificate sent to host. Assuming attestation was successful, a health certificate is sent to the host and the host is considered "guarded" (authorized to run shielded VMs). The host uses the health certificate to authorize the Key Protection Service to securely release the keys needed to work with shielded VMs
Host requests VM key. Guarded host do not have the keys needed to power on a shielded VM (VM01 in this case). To obtain the necessary keys, the guarded host must provide the following to KPS:
- The current health certificate
- An encrypted secret (a Key Protector or KP) that contains the keys necessary to power on VM01. The secret is encrypted using other keys that only KPS knows.
Release of key. KPS examines the health certificate to determine its validity. The certificate must not have expired and KPS must trust the attestation service that issued it.
Key is returned to host. If the health certificate is valid, KPS attempts to decrypt the secret and securely return the keys needed to power on the VM. Note that the keys are encrypted to the guarded host's VBS.
Host powers on VM01.
Guarded fabric and shielded VM glossary
Additional References
- Guarded fabric and shielded VMs
- Blog: Datacenter and Private Cloud Security Blog
- Video: Introduction to Shielded Virtual Machines
- Video: Dive into Shielded VMs with Windows Server 2016 Hyper-V
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/windows-server/security/guarded-fabric-shielded-vm/guarded-fabric-and-shielded-vms | 2022-08-07T19:26:24 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['../media/guarded-fabric-shielded-vm/guarded-host-overview-diagram.png',
'Guarded host fabric'], dtype=object)
array(['../media/guarded-fabric-shielded-vm/shielded-vms-shielding-data-file.png',
'Illustration that shows the shielding data file and related configuration elements.'],
dtype=object)
array(['../media/guarded-fabric-shielded-vm/shielded-vms-how-a-shielded-vm-is-powered-on.png',
'Shielding data file'], dtype=object) ] | docs.microsoft.com |
For local communication, self-signed certificates and a private trust store are usually sufficient for securing communication. Indeed, several nodes can share the same certificate, as long as we ensure that our trust configuration is not tampered with.
To build a self-signed certificate chain, begin by creating a certificate configuration file like this:
[ req ] default_bits = 4096 default_keyfile = <hostname>.key distinguished_name = req_distinguished_name req_extensions = v3_req prompt = no [ req_distinguished_name ] C = <country code> ST = <state> L = <locality/city> O = <domain> OU = <organization, usually domain> CN= <hostname>.<domain> emailAddress = <email>
Substitute the values in <> with whichever suits your organization. For this example, let’s call our host
db, and our domain
foo.bar, and create a file called
db.cfg:
[ req ] default_bits = 4096 default_keyfile = db.key distinguished_name = req_distinguished_name req_extensions = v3_req prompt = no [ req_distinguished_name ] C = SE ST = Stockholm L = Stockholm O = foo.bar OU = foo.bar CN= db.foo.bar emailAddress = [email protected]
Note
Please note that each new signed certificate should have different “CN=” in “req_distinguished_name” section. Otherwise it won’t pass openssl verify check.
Then, begin by generating a self-signing certificate authority key:
openssl genrsa -out cadb.key 4096
And using this, a certificate signing authority:
openssl req -x509 -new -nodes -key cadb.key -days 3650 -config db.cfg -out cadb.pem
Now, generate a private key for our certificate:
openssl genrsa -out db.key 4096
And from this, a signing request:
openssl req -new -key db.key -out db.csr -config db.cfg
Then we can finally create and sign our certificate:
openssl x509 -req -in db.csr -CA cadb.pem -CAkey cadb.key -CAcreateserial -out db.crt -days 365
As a result, we should now have:
db.key - PEM format key that will be used by the database node.
db.crt - PEM format certificate for the db.key signed by the cadb.pem and used by database node.
cadb.pem - PEM format signing identity that can be used as a trust store. Use it to sign client certificates that will connect to the database nodes.
Place the files in a directory of your choice and make sure you set permissions so your Scylla instance can read them. Then update the server/client configuration to reference them.
When restarting Scylla with the new configuration, you should see the following messages in the log:
When node-to-node encryption is active:
Starting Encrypted Messaging Service on SSL port 7001
When client to node encryption is active:
Enabling encrypted CQL connections between client and server
Encryption: Encryption: Data in Transit Node to Node. | https://docs.scylladb.com/stable/operating-scylla/security/generate-certificate.html | 2022-08-07T19:36:49 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.scylladb.com |
traceroute <host> interface ).
- interface value
- Specifies the interface that the device must use for traceroute requests.
-.
- first-ttl value
- Specifies the first time-to-live value. Defaults to 1.
- gateway address
- Routes the request through a specified gateway.
- icmp-echo
- Uses ICMP echo for the traceroute probe.
- icmp-extensions
- Shows ICMP extensions (rfc4884). The general form is CLASS/TYPE: followed by a hexadecimal dump.
- through interface dp0p1s2.
vyatta@vyatta# traceroute google.com interface dp0p1s2 | https://docs.vyatta.com/en/supported-platforms/vrouter/configuration-vrouter/ip-routing/basic-routing/forwarding-and-routing-commands/traceroute-host-interface-value | 2022-08-07T18:23:55 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.vyatta.com |
NJ Top Doctor NJ Top Docs Traditional Package with Plaque
NJ Top Docs Traditional Package with Plaque
$795.00
Description
You will receive:
- A custom, personalized, clickable webpage on NJTopDocs.com for 1 full year (See imagery) showcasing your education, training, awards, photo, practice information, testimonials, etc
- 1 (one) 11” x 14” personalized wooden plaque (ready to hang) as selected
- Highlighted Listing in the print & digital edition of the next Top Doctor & Top Dentists Healthy Living Magazine issue “Approved Directory”
- 4 social media posts done by us about you on one of our social media network accounts (ie: Facebook, Instagram, LinkedIn, Twitter, etc)
- Inclusion in a group press release announcing your approval and featured on our blog.
- Use of all logos
Please select from the dropdown to see individual product imagery. | https://usatopdocs.com/doctor-award/nj-top-docs-custom-online-webpage-with-plaque-package/ | 2022-08-07T19:53:23 | CC-MAIN-2022-33 | 1659882570692.22 | [] | usatopdocs.com |
Cycle Websockets API
Overview
The Cycle Websockets API offers real-time data updates. WebSockets is a bidirectional protocol offering fastest real-time data, helping users build real-time applications. All channels offered require authentication supported through our REST API and the authentication endpoints will match the endpoint of the websocket the user desires.
General Points
- All messages are JSON encoded.
- Token's required to be passed as a query parameter can be obtained by using the corresponding REST endpoint.
- Timestamps should not be considered unique and not be considered as aliases for transaction IDs. Also, the granularity of timestamps is not representative of transaction rates.
Authentication
The API client must request an authentication "token" via the associated REST endpoint. The token should be used within 15 minutes of creation. The token does not expire once a connection is made but a new token will need to be obtained if the connection is dropped and the current token has expired.
Authentication Return Struct Example
{
"data": {
"token": "OG4BTTP3V_nBh92ETm9qGOAbcCvPTS-M9DE_0UWs2-oCd7WnGuhNxVq2jsEbc7dWP9YXnBsXkabno4OlarGxRbfhumpOlllYr6wTu8TLaml1OWCoOzhk30x4U7Jnp3zm1hhiKEKNZFbDBbkbnk65EHUdPAPdskXlGmBccCGBjMCxnA_gbY9BdDRNiM9WSN_v"
}
}
Some endpoints will return more than just the token, be sure the check the REST API documentation for the full struct for each endpoint. | https://api-docs.cycle.io/docs/intro/ | 2022-08-07T18:36:09 | CC-MAIN-2022-33 | 1659882570692.22 | [] | api-docs.cycle.io |
Add and Configure SRM
Learn how to add SRM to your cluster.
- SRM to your cluster. The configuration examples on this page are simple examples that are meant to demonstrate the type of information that you have to enter. For comprehensive deployment and configuration examples, see Deployment recommendations and Configuration examples.
streamsrepmgris added to the Kafka Super users property.
- cluster aliases:
- Find the Streams Replication Manager Cluster alias. property.
- Add a comma delimited list of cluster aliases. For example:
primary, secondaryCluster aliases are arbitrary names defined by the user. Aliases specified here are used in other configuration properties and with the
srm-controltool to refer to the clusters added for replication.
- Specify cluster connection information:
- Find the streams.replication.manager's replication configs property.
- Click the add button and add new lines for each cluster alias you have specified in the Streams Replication Manager Cluster alias. property
- Add connection information for your clusters. For example:
primary.bootstrap.servers=primary_host1:9092,primary_host2:9092,primary_host3:9092 secondary.bootstrap.servers=secondary_host1:9092,secondary_host2:9092,secondary_host3:9092
Each cluster has to be added to a new line. If a cluster has multiple hosts, add them to the same line but delimit them with commas.
- Add and enable replications:
- Find the streams.replication.manager's replication configs property.
- Click the add button and add new lines for each unique replication you want to add and enable.
- Add and enable your replications. For example:
primary->secondary.enabled=true secondary->primary.enabled=true
- to. When this property is left empty (default) the driver will read from and write to all clusters added to SRMs configuration. When this property is set, the driver will collect data from all clusters, but will only write to the clusters specified in this property. This property becomes essential when you have an advanced deployment as it allows you to distribute replication workloads.
- allowlist. | https://docs.cloudera.com/csp/2.0.1/deployment/topics/csp-add-srm.html | 2022-08-07T19:51:56 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.cloudera.com |
Overview
In this guide, we will walk through deploying a Delphix Engine, starting with configuring Source, Staging (aka Validated Sync), and Target database environments. We will then create a dSource, and provision a VDB.
Deploy OVA on VMWare
Use the Delphix-supplied OVA file to install the Delphix Engine. The OVA file is configured with many of the minimum system requirements and deploys to one 300GB hard disk with 8 vCPUs and 64GB RAM. The underlying storage for the install is assumed to be redundant SAN storage.
- Download the OVA file from. You will need a support login from your sales team or a welcome letter.
Navigate to "Virtual Appliance" and download the appropriate OVA. If you are unsure, use the HWv8_Standard type.
Login using the vSphere client to the vSphere server (or vCenter Server) where you want to install the Delphix Engine.
In the vSphere Client, click File.
Select Deploy OVA Template.
Browse to the OVA file.
Click Next.
Select a hostname for the Delphix Engine.
This hostname will also be used in configuring the Delphix Engine network.
Select the data center where the Delphix Engine will be located.
Select the cluster and the ESX host.
Select one (1) datastore for the Delphix OS. This datastore can be thin-provisioned and must have enough free space to accommodate the 300GB comprising the Delphix operating system.
Select four (4) or more datastores for Database Storage for the Delphix Engine. The Delphix Engine will stripe all of the Database Storage across these VMDKs, so for optimal I/O performance each VMDK must be equal in size and be configured Thick Provisioned - Lazy Zeroed. Additionally, these VMDKs should be distributed as evenly as possible across all four SCSI I/O controllers, as described in KB045 Reconfiguring Controllers.. For more information, see Optimal Network Architecture for the Delphix Engine.
Click Finish.
The installation will begin and the Delphix Engine will be created in the location you specified.
Setup Network Access to the Delphix Engine
- for the username and sysadmin for
Configure the hostname. If you are using DHCP, you can skip this step.
delphix network setup update *> set hostname=<hostname>Use the same hostname you command
Setting Up the Delphix Engine
Once you setup the network access for Delphix Engine, navigate to the Delphix Engine URL in your browser for server setup.
The setup procedure uses a wizard process to take you through eight configuration screens:
Administrators
System Time
Network
Storage
Serviceability
Authentication
Summary
- Connect to the Delphix Engine at HTTP://<Delphix Engine>/login/ index.html#serverSetup.
The Delphix Setup application will launch when you connect to the server.
Enter your sysadmin login credentials, which initially defaults to the username sysadmin, with the initial default password of sysadmin. On the first login, you will be prompted to change the initial default password.
- Click Next.
Administrators
The Delphix Engine supports two types of administrators:
- System Administrator (sysadmin) - this is the engine system administrator. The sysadmin password is defined here.
- Delphix Administrator (delphix_admin) - this is typically a DBA who will administer all the data managed by the engine.
System Time
Choose your option to setup system time in this section.
For a Quick Start, simply set the time and your timezone. You can change this later.
Network
The initial out-of-the-box network configuration in the OVA file is set to use a single VMXNET3 network adapter.
You have already configured this in the initial configuration. Delphix supports more advanced configurations, but you can enable those later.
Storage
You should see the data storage VMDKs or RDMs you created during the OVA installation. Click Next to configure these for data storage.
Serviceability
Choose your options to configure serviceability settings.
For a Quick Start, accept the defaults. You can change this later.
If the Delphix Engine has access to the external Internet (either directly or through a web proxy), then you can auto-register the Delphix Engine:
- Enter your Support Username and Support Password.
- Click Register.
If external connectivity is not immediately available, you must perform manual registration.
- Copy the Delphix Engine registration code in one of two ways:
- Manually highlight the registration code and copy it to the clipboard. Or,
- Click Copy Registration Code to Clipboard.
- Transfer the Delphix Engine's registration code to a workstation with access to the external network Internet. For example, you could e-mail the registration code to an externally accessible email account.
- On a machine with access to the external Internet, please use your browser to navigate to the Delphix Registration Portal at.
- Login with your Delphix support credentials (username and password).
- Paste the Registration Code.
- Click Register.
Summary
The final summary tab will enable you to review your configurations for System Time, Network, Storage, Serviceability, and Authentication.
- Click the Back button to go back and to change the configuration for any of these server settings.
- If you are ready to proceed, then click Submit.
- Click Yes to confirm that you want to save the configuration.
- Click Setup to acknowledge the successful configuration.
- There will be a wait of several minutes as the Delphix Engine completes configuration.
Source and Target Environment Requirements
Each DB2 Source host (master) must meet these requirements:
- IBM DB2 installed and an instance created on the machine
- HADR settings for each database to be used with the standby server should be preset before the linking process begins
Requirements for DB2 Staging and Target Hosts and Instances
The staging environment that the Delphix Engine uses must have access to an existing full backup of the source database on disk to create the first full copy. Delphix recommends using compressed backups as that will reduce storage needs and speed up ingest.
The staging and target DB2 instances that you wish to use must already exist on the host and contain no existing databases.
The available instances on each host can be verified by going to the databases tab for the environment in question.
Additional Environment Requirements
- There must be an operating system user (delphix_os) with these privileges:
Ability to login to the target environment via SSH
Ability to run mount, umount, mkdir, and rmdir as a super-user. If the target host is an AIX system, permission to run the nfso command as a super-user. See Sudo Privilege Requirements for DB2 Environments for further explanation of the commands and Sudo File Configuration Examples for DB2 Environments for examples of the /etc/sudoers file on different operating systems.
There must be a directory on the staging and target environment where you can install the Delphix Engine Toolkit – for example, /var/opt/delphix/toolkit .
- The delphix_os user must own the directory.
The directory must have permissions -rwxrwx--- (0770), but you can also use more permissive settings.
If delphix os user and instance users (responsible for DE operations such as linking and provisioning) are not sharing any common group, then toolkit directory must have -rwxrwxrwx (0777) permissions.
- The delphix_os user must have read and execute permissions on each directory in the path leading to the toolkit directory. For example, when the toolkit is stored in /var/opt/delphix/toolkit, the permissions on /var, /var/opt, and /var/opt/delphix should allow read and execute for "others," such as -rwxr-xr-x.
- The directory should have 1.5GB of available storage: 400MB for the toolkit and 400MB for the set of logs generated by each DB2 instance that runs on the host.
- In DB2 Toolkit: toolkit directory space will be used as the base location for the mount point.
The Delphix Engine must be able to initiate an SSH connection to the target environment
NFS client services must be running on the target environment
Instance User Requirements
The instance owner of each instance you wish to use within a staging or a target host must be added as an environment user within the Delphix engine. See Managing DB2 Users and Instance Owners.
For HADR synced dSources the staging instance owner must be able to "read" the ingested database contents as Delphix will check the validity of the database by querying tables before each dSource snapshot.
Database Container Requirements
All DB2 database containers types are fully supported with the exception of DB2 raw containers. NOTE: If a container is added or deleted, the dSource will have to be resynced.
Instance level configuration values such as the bufferpool value will need to be managed by the customer independent of Delphix. The instances used for staging and target environments must be compatible with the source DB2 instance. The Delphix DB2 DB Level toolkit supports managing dSources with database level granularity.
Sudo Privilege Requirements for DB2 Environments
This topic describes the rationale behind specific sudo privilege requirements for virtualizing the DB2 Databases.
It is required to specify the NOPASSWD qualifier within the "sudo" configuration file, as shown here: Sudo File Configuration Examples for DB2 Environments. This ensures that the "sudo" command does not demand the entry of a password, even for the "display permissions" (i.e. "sudo -l") command.
Delphix issues "sudo -l" in some scripts to detect if the operating system user has the correct sudo privileges. If it is unable to execute this command, some actions may fail and Delphix will raise an alert suggesting it does not have the correct sudo permissions. Restricting the execution of "sudo -l" by setting “listpw=always” in the “/etc/sudoers” file when the Delphix operating system user is configured to use public key authentication will cause the Delphix operating system user to be prompted for a password which will fail certain Delphix actions. Use a less restrictive setting for listpw than "always" when the Delphix operating system user is using public key authentication.
Adding a DB2 Environment
Prerequisites
Make sure that the staging environment in question meets the requirements described in Requirements for DB2 Hosts and Databases.
Procedure
Login to the Delphix Management application.
Click Manage.
Select Environments.
Next, to Environments, click the Actions (...) menu and select Add Environment.
In the Add Environment dialog, select Unix/Linux in the menu.
Select Standalone Host.
Click Next.
Enter a Name for the environment.
Enter the Host IP address or hostname.
Enter the SSH port.
The default value is 22.
Enter an OS Username for the environment.
For more information about the environment user requirements, see Requirements for DB2 Hosts and Databases.
Select a Login Type.
For Password, enter the password associated with the user in step 9.
Using Public Key Authentication:
If you want to use public key encryption for logging into your environment:
Select Public Key for the Login Type.
Click View Public Key.
Copy the public key that is displayed, and append it to the end of your ~/.ssh/authorized_keys file. If this file does not exist, you will need to create it.
Run chmod 600 authorized_keys to enable read and write privileges for your user.
Run chmod 755 ~ to make your home directory writable only by your user.
The public key needs to be added only once per user and per environment.
You can also add public key authentication to an environment user's profile by using the command line interface, as explained in the topic CLI Cookbook: Setting Up SSH Key Authentication for UNIX Environment Users.
For Password Login, click Verify Credentials to test the username and password.
Enter a Toolkit Path (make sure toolkit path does not have spaces).
For more information about the toolkit directory requirements, see Requirements for DB2 Hosts and Databases.
Click Submit.
As the new environment is added, you will see two jobs running in the Delphix platform Job History, one to Create and Discover an environment, and another to Create an environment. When the jobs are complete, you will see the new environment added to the list in the Environments tab. If you do not see it, click the Refresh icon in your browser.
Managing DB Instances
View Instances
Login to the Delphix Management application with Delphix Admin credentials.
Click Manage.
Select Environments.
In the Environments panel, click on the name of the environment to you want to refresh.
Select the Databases tab to see a list of all DB2 instances found in the environment.
Linking a Data Source (dSource)
Prerequisites
Be sure that the source and staging instances meet the host requirements and the databases meet the container requirements described in Requirements for DB2 Hosts and Databases.
Source Database Preparation
Instance Owner Permissions
Delphix uses the DB2 instance owner account on the dSource for many things, including verifying the data inside the databases. For ingesting database on the staging server with different instance we need permissions on the source database to do restore on the staging server. For example in the source if we have an instance named auto1051 and database name delphix and if we want to create a dSource on the auto1052 instance on staging server then you must explicitly grant DBADM and SECADM to the dSource instance auto1052 on the source instance using the following steps:
Connect to the source databases as the source instance owner.
connect to <DB_NAME> user <INSTANCE_OWNER>
Issue database grant command
grant DBADM, SECADM on database to user <DSOURCE_INSTANCE_OWNER>
Repeat step 2 for every database to be included in the dSource, on the corresponding source database.
Determine if your dSource will be a non-HADR instance, an HADR single standby instance, or an HADR multiple standby instance. Non-HADR dSources can only be updated via a full dSource resync from a newer backup file
Non-HADR Database
See "Instance Owner Permissions" section above.
HADR Single Standby Database
All items in Non-HADR Database section above.
The following database configuration settings must be set:
update db cfg for <DB_NAME> using HADR_LOCAL_HOST <PRIMARY_IP> HADR_LOCAL_SVC <PRIMARY_PORT > immediate
update db cfg for <DB_NAME> using HADR_REMOTE_HOST <STANDBY_IP> HADR_REMOTE_SVC <STANDBY_PORT> immediate
update db cfg for <DB_NAME> using HADR_REMOTE_INST <STANDBY_INSTANCE_NAME> immediate
update db cfg for <DB_NAME> using HADR_SYNCMODE SUPERASYNC immediate
If database configuration parameter LOGINDEXBUILD is set to OFF, do the following:
update db cfg for <DB_NAME> using LOGINDEXBUILD ON
Force off all connections to the database and reactivate the database
If database configuration parameter LOGARCHMETH1 is set to OFF, do the following:
update db cfg for <DB_NAME> using LOGARCHMETH1 XXXX (must be a valid log archiving method)
Take an offline backup
If LOGARCHMETH1 points to a third-party backup server (i.e. TSM or Netbackup) define LOGARCHMETH2 to disk
update db cfg for <DB_NAME> using LOGARCHMETH2 DISK:<full path to archive log directory>
Log files in the directory must be available from the time of the backup until the restore has successfully completed on the dSource.
db2 start hadr on db <DB_NAME> as primary by force
Take a full online backup as defined in the "Backup Source Database" section below.
Record the following information, as it must be entered on the Delphix Engine while creating the dSource.
HADR Primary hostname
HADR Primary SVC
HADR Standby SVC (auxiliary standby port)
HADR Multiple Standby Databases
This assumes a single standby database HADR setup already exists. The existing standby will be referred to as the main standby. The new delphix standby will be referred to as the auxiliary standby.
The following database configuration settings must be set on the primary database:
update db cfg for <DB_NAME> using HADR_SYNCMODE <SYNC MODE> immediate – set whichever sync mode you wish to use on your main standby.
update db cfg for <DB_NAME> using HADR_TARGET_LIST "<MAIN_STANDBY_IP:MAIN_STANDBY_PORT|AUXILIARY_STANDBY_IP:AUXILIARY_STANDBY_PORT>" immediate
You may have up to two auxiliary standbys defined separated by a '|'; one of which must be the delphix dSource.
stop hadr on db <DB_NAME>
start hadr on db <DB_NAME> as primary by force
Take a full online backup as defined in the "Backup Source Database" section below. While this backup is running, you may continue with step 5.
The following database configuration settings must be set on the existing main standby database:
update db cfg for <DB_NAME> using HADR_SYNCMODE <same mode as defined in 1.a above.> – It must be the same value used for primary database.
update db cfg for <DB_NAME> using HADR_TARGET_LIST "<PRIMARY_IP:PRIMARY_PORT|MAIN_STANDBY_IP:MAIN_STANDBY_PORT>"
stop hadr on db <DB_NAME>
start hadr on db <DB_NAME> as standby
Record the following information, as it must be entered on the Delphix Engine while creating the dSource (the auxiliary standby database):
HADR Primary hostname
HADR Primary SVC
HADR Standby SVC (auxiliary standby port)
HADR_TARGET_LIST <PRIMARY_IP:PRIMARY_PORT|MAIN_STANDBY_IP:MAIN_STANDBY_PORT>
Backup Source Database
Source Database with Raw DEVICE type Storage
Several users use raw device-based tablespaces for source DB2 databases. To leverage these environments with Delphix, Delphix has built a workflow using DB2s native tools that allow Delphix to discover and convert a raw device-based tablespace into an automatic storage-based tablespace during ingestion. Once the data is ingested into staging, customers will be able to provision VDBs of the automatic storage-based database.
In order to complete the linking process, the Standby dSource must have access to a full backup of the source DB2 databases on disk. This should be a compressed online DB2 backup and must be accessible to the dSource instance owner on disk. Delphix is currently not setup to accept DB2 backups taken using third-party sources such as Netbackup or TSM.Both HADR and Non-HADR backups must also include logs.
Example backup command: db2 backup database <DB_NAME> online compress include logs
Best Practices for Taking a Backup
The following best practices can help improve backup and restore performance:
Compression should be enabled
Following parameters should be optimally configured:
Utility Heap Size (UTIL_HEAP_SZ)
No. of CPUs
No. of Table Spaces
Extent Size
Page Size
Parallelism & Buffer configuration may be used to improve the backup performance. Parameters that should be configured are :
Parallelism
Buffer Size
No. of Buffers
More information about backup best practices is available in IBM Knowledge Center
Procedure
- Login to the Delphix Management Application using Delphix Admin credentials or as the owner of the database from which you want to provision the dSource.
- On the Databases tab of Environment Management screen, add a source config against discovered staging instance.
Then, click Manage.
Select Datasets.
Click the Plus (+) icon and select Add dSource, you’ll get a list of available source configs using which you can go for dsource creation.
In the Add dSource wizard, select the required source configuration.
If you are working with an HADR setup, please leave the HADR checkbox checked.
The database name is mandatory and must be unique for a given instance. This is the name that the database was on the instance it was restored from.
Enter the complete Backup Path where the database backup file resides. If no value is entered, the default value used is the instance home directory. If there are multiple backup files for a database on the backup path, the most current one will be used.
Enter the Log Archive Method1 you wish to use for the database. If no value is entered, the default value used is DISK:/mountpoint/dbname/arch.
Optionally, users can set the database configuration parameters during the linking operation in the Config Settings section.
If the dSource is to use HADR please enter the following fields. If it will not use HADR skip ahead to step 13. For more information about HADR please view Linking a dSource from a DB2 Database: An Overview.
a. Enter a fully qualified HADR Primary Hostname. This is a required field for HADR and must match the value set for HADR_LOCAL_HOST on the master.
b. Enter the port or /etc/services name for the HADR Primary SVC. This is a required field for HADR and uses the value set for HADR_LOCAL_SVC on the master.
c. Enter the port or /etc/services name for the HADR Standby SVC. This is a required field for HADR and uses the value set for HADR_REMOTE_SVC on the master.
Click Next.
Select a dSource Name and Database Group for the dSource.
Click Next.
You will get Data Management section where you need to specify staging environment and user which will be used for dsource creation.
Set the Staging Environment to be the same as the dSource host.
Select the Staging Environment User to be the same as the instance owner of the dSource instance.
Changing the Environment UserIf you need to change or add an environment user for the dSource instance, see Managing DB2 Users and Instance Owners.
Then, click Next and you’ll get Policies section. Set the desired Snapsync Policy for the dSource. For more information on policies see Advanced Data Management Settings for DB2 dSources.
Click Next.
Specify any desired pre- and post-scripts. For details on pre- and post-scripts, refer to Customizing DB2 Management with Hook Operations.
Click Next.
Review the dSource Configuration and Data Management information in summary section.
Click Submit.
The Delphix Engine will initiate two jobs to create the dSource: DB_Link and DB_Sync. You can monitor these jobs by clicking Active Jobs in the top menu bar, or by selecting System > Event Viewer. When the jobs have completed successfully, the database icon will change to a dSource icon on the Environments > Host > Databases screen, and the dSource will also appear in the list of Datasets under its assigned group.
The dSource Configuration Screen
After you have created a dSource, the dSource Configuration tab allows you to view information about it and make modifications to its policies and permissions. In the Datasets panel, select the dSource you wish to examine. You can now choose the configuration tab to see information such as the Source files, Data Management configuration and Hook Operations. For more information, see Advanced Data Management Settings for DB2 dSources.
Provisioning a Virtual Database (VDB)
Prerequisites
- You will need to have linked a dSource from a staging instance, as described in Linking a DB2 dSource, or have created a VDB from which you want to provision another VDB
- You should have set up the DB2 target environment with necessary requirements as described in Requirements for DB2 Hosts and Databases
- Make sure you have the required Instance Owner permissions on the target instance and environment as described in Managing DB2 Users and Instance Owners
- The method for Database Permissions for Provisioned DB2 VDBs is decided before the provisioning
You can take a new snapshot of the dSource by clicking the Camera icon on the dSource card. Once the snapshot is complete you can provision a new VDB from it.
Procedure
- Login to the Delphix Admin application.
- Click Manage.
- Select Datasets.
- Select a dSource.
- Select a snapshot from which you want to provision.
- Click Provision VDB icon to open Provision VDB wizard.
- Select a target environment from the left pane.
- Select an Installation to use from the dropdown list of available DB2 instances on that environment.
- Set the Environment User to be the Instance Owner. Note: The picking of instance owner is only possible if you have multiple environment users set on that host.
- Provide VDB Name as database name as parameter.
- Optionally, set the database configuration parameters for the VDB.
- Click Next.
- Select a Target Group for the VDB.
Click the green Plus icon to add a new group, if necessary.
- Select a Snapshot Policy for the VDB.
- Click Next.
Specify any desired hook operations. For details, on-hook operations, refer to Customizing DB2 Management with Hook Operations.
- Click Next.
- Review the Provisioning Configuration and Data Management information.
- Click Submit.
When provisioning starts, you can review the progress of the job in the Databases panel, or in the Job History panel of the Dashboard. When provisioning is complete, the VDB will be included in the group you designated and listed in the Databases panel. If you select the VDB in the Databases panel and click the Open icon, you can view its card, which contains information about the database and its Data Management settings.
Once the VDB provisioning has successfully completed, if the source and target instance ids are not the same, you may want to grant secadm and dbadm on the database to the target instance id. Refer to Database Permissions for Provisioned DB2 VDBs for more information.. | https://docs.delphix.com/docs525/quick-start-guides/quick-start-guide-for-db2 | 2022-08-07T19:19:43 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.delphix.com |
How to Designate Your Boosting to Other Pools¶
As introduced in the the Rewarding and Boosting System, in addition to the basic rewards, boosted rewards are incentives for miners staking PHX or cPHX in the contracts, meaning users can stake PHX or cPHX in the boosting contracts to enhance the basic mining rewards.
The mining contract gives stakers greater flexibility, that users can adjust the pools they intend to boost in, even after staking.
Here is a short, hands-on guide on how to designate your boosting effects of mining to other pools.
Step 1¶
Following the same steps in the reward boosting guide, please navigate to the 'earn' UI and connect your wallet.
Step 2¶
Choose the pool you want to move your boosting effect to. In this example, I intend to move the boosting from the Matic Pool to the USDC Pool.
Press the 'boost' button at the end.
Step 3¶
Choose the 3rd one, 'designate boosting to pools', and press the 'next' button.
Step 4¶
On the following page, please double confirm the pools you are staking in and moving out of. Then press 'next'.
Step 5¶
Please input the amount of vePHX you would like to move.
You will see all the details after re-designating the boosting effect in the next page.
Press confirm to complete the transaction through your wallet.
Congratulations, you have successfully designated the boosting effect to other pools.
If you need more information, do not forget to follow us on our social media channels. | https://docs.phx.finance/howto/changeboost/ | 2022-08-07T19:39:10 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['https://z3.ax1x.com/2021/09/01/h0G6r8.png', None], dtype=object)
array(['https://z3.ax1x.com/2021/09/01/h0JkIH.png', None], dtype=object)
array(['https://z3.ax1x.com/2021/09/01/h0Y80O.png', None], dtype=object)
array(['https://z3.ax1x.com/2021/09/01/h0YO3R.png', None], dtype=object)] | docs.phx.finance |
Introduction
The Job Labor Report consolidates and summarizes metrics related to job labor within a given date range (based on the visits' scheduled date).
Specifically, this report compares and contrasts estimated labor for a particular job and the actual labor performed. The difference between estimated and actual is reflected in the remaining column.
From there, Job Labor efficiency is calculated by [ Actual / Estimated = Efficiency ]
Filters on the report allow users to select the desired Job Labor data; these include Date Range, Operations, Sales Reps, and Assignees. Moreover, the search bar towards the top right of the table will allow you to search by Job Reference Number, Job Name, Customer.
Table of Contents
I. Job Labor Report
- Navigation
II. Job Labor Report Features
- Filters
- Columns
- Values
- Rows
- Additional Features
- Update or Remove a Report
I. Accessing the Job Labor Report
Navigation
Navigate to:
Reports >
Labor >
Job Labor Report
II. Job Labor Report Features
Filters
To begin utilizing the report data, first we suggest selecting the desired Date Range. By default, this report shows the totals of labor line items based on the visit's scheduled date.
Once you have a date range selected, you have the option to filter based on: Operation, Sales Rep, or Assignees.
*Note: When you filter by Assignee, the results may show partial information if all visits on a job are not assigned to the same assignee.
Columns
The data included in the Job Labor Report can be controlled by selecting which Columns to display. To access this:
1. Click the Columns button on the far left.
2.<<
Values
- Estimated - the summation of labor items on a job. (NOTE: if a job does not include any items of type 'labor', the job will not be included in the report since it will not have an estimated quantity.)
- Actual - the summation of labor lines reported on job completion.
- Remaining - the difference between the estimated and actual labor for the job. If estimated labor is less than actual labor, this number will be positive and show how many hours one could stay on the job and remain profitable. If the estimated labor is greater than the actual labor, the number will be negative and show the number of hours labor has exceeded the estimate.
- Efficiency - a ratio calculated as estimated labor-to-actual labor. If the ratio is greater than or equal to 100%, the job is deemed to have been accurately estimated and efficiently handled.
Rows
In the Job Labor Report, you have the ability to expand the row by selecting the "+" to the left of the table. Upon expanding the job, you will be shown additional information about the items and labor as it relates to the job.
To minimize the row, all you need to do is select the "-" and the row will collapse back to normal
Additional Features
- Search Bar: The search bar in the top right of the table will allow you to search by the following: Reference Number, Job Name, Customer
Please note: you cannot search by estimated, actual, remaining, or efficiency metrics.
- Export CSV: Finally, you also have the option to export the data table to CSV to further manipulate the information. Once you select the "Export CSV" button in the top right, only the data depicted on screen will be exported.
- Save Report: You have the ability to save the Job Labor Report and the filters that you have activated by selecting the "Save Report" button in the top right of the screen.
You will be given the option to create a name for your report; afterwards, the report will be stored in the "Saved Reports" section of your Reports Catalog.
Update or Remove a Report
1. Navigate to your Saved Reports
2. To remove the report select Removed Saved Report
3. To update a saved report, make your changes and select Update Report
Please sign in to leave a comment. | https://docs.singleops.com/hc/en-us/articles/360003884033-Job-Labor-Report | 2022-08-07T19:32:38 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['/hc/article_attachments/360087315333/JLR2.png', 'JLR2.png'],
dtype=object)
array(['/hc/article_attachments/360086129134/JLR3.png', 'JLR3.png'],
dtype=object)
array(['/hc/article_attachments/360087317413/JLR4.png', 'JLR4.png'],
dtype=object)
array(['/hc/article_attachments/360087317893/JLR5.png', 'JLR5.png'],
dtype=object)
array(['/hc/article_attachments/360087318393/JLR6.png', 'JLR6.png'],
dtype=object) ] | docs.singleops.com |
My Stream Is Not Updated (Grace)
For Flow-Flow Social Stream plugin, please follow this article.
Install WP Crontrol and check if
ffi_load_cache task is present in tasks list on Tools/Crontrol page (find it in WP admin sidebar menu).
If you see that its next same tasks which can be an issue that prevents Flow-Flow updating task from functioning OK. Check for tasks with the same name in a list of tasks, for example, using Crontrol plugin.
- Sometimes you can reach your server capacity, and it can’t handle the current amount of streams. Try to switch plugin to WP wrapper for DB operations (if you have 1.4+ plugin version). Check here how to enable it.
If it’s scheduling is not OK like the date in the past, please ask support from your hosting provider. Tell them WP Cron is not working correctly.
It's not here
If you don’t see tasks you need to create custom interval in Settings/Cron Schedules with next fields:
Then go back to Tools/Crontrol and create a task
ffi_load_cache with empty arguments field and Once Minute schedule.
Previous steps didn't help
Please open the Wordpress configuration file
wp-config.php WP wrapper for Cron, and you can try to use server-side Cron.
Please add this line to
wp-config.php file; it's specific for Grace:
define('FF_USE_WP_CRON', false);
and then add to server Cron jobs this command:
/usr/bin/curl -L -o /dev/null
replace 127.0.0.1 with your site URL. | https://docs.social-streams.com/article/121-my-stream-is-not-updated | 2022-08-07T19:21:54 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['http://social-streams.com/img/docs/wpcron/wpcron2.png', None],
dtype=object) ] | docs.social-streams.com |
Connectors¶
Connectors are modules which connect opsdroid to an external event source. This could be a chat client such as Slack or Matrix, or another source of events such as webhook endpoints. If an event is triggered by something outside opsdroid it happens in a connector.
Using two of the same connector type¶
If you need, you can use two of the same connector by adding the
module parameter in your configuration. For example, if you wish to use two Slack connectors pointing to different workspaces, you can do such with:
connectors: slack: bot-token: "xoxb-abdcefghi-12345" slack-two: bot-token: "xoxb-12345-abdcefghi" module: opsdroid.connector.slack
You can then select one connector or the other by using opsdroid’s method
get_connector(). For example:
# Use 'slack-two' connector slack_two = opsdroid.get_connector("slack-two") | https://docs.opsdroid.dev/en/stable/connectors/index.html | 2022-08-07T19:33:54 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.opsdroid.dev |
FedEx Service
To activate the FedEx service, click the [Toggle] icon.
Then you have to add your established FedEx account and fill in all required details.
Note!
If you do not have the FedEx account yet, you have to create it first: click [Apply Here] to go to the FedEx home page and apply for the FedEx account.
Before you start specifying details, you have to tick [Yes, I agree] to accept the FedEx Web Services End User License Agreement. Otherwise, you will not be able to add the FedEx account.
You may look through the full text of the agreement clicking [View agreement here].
After that, you have to specify the information at the "Add Your FedEx Account" page.
First, you have to enter data in the "Account Number"* and "Account Nickname"* fields.
Afterward, you have to click the [Dropdown] icon and pick one drop off type from the list.
Next, you have to specify details in the "Contact First Name"*, "Contact Last Name"*, "Company"*, and "Email"* boxes.
You have to fill in the address details in the "Address 1"* and "Address" fields.
Note!
Only "Address 1"* is obligatory to be filled in, the "Address" is optional.
Fill in the "City"* and "Zip Code"* text-fields.
To set the country/territory(*) and state/province(*), click the [Dropdown] icon and choose the needed from the list.
Also, you have to specify your phone number (*).
To finish, click the [Save Account] button at the bottom of the page or [Cancel] to abolish.
If the FedEx account is activated, you get a notification, and FedEx will appear in your shipping carriers list.
To edit the FedEx account settings, click the [Settings] icon.
On the "Update Your FedEx Account" page you can change account nickname(*), drop off type, and SmartPost settings. The only information you can not edit is the account number.
To edit account nickname(*) , delete current nickname and enter the new one in the appropriate text-field.
To change the "Drop Off Type" box, click the [Dropdown] icon and choose the needed type from the list.
To activate or deactivate "SmartPost", click the [Toggle] icon.
Note!
If you switch on "SmartPost", you have to specify "SmartPost Hub" and "SmartPost Endorsement": click the [Dropdown] icon and choose the requisite hub and endorsement.
To confirm changes, click [Save Account] at the bottom of the page or [Cancel] to annul changes.
If the FedEx account is activated, you get a notification, and FedEx will appear in your shipping carriers list.
To delete the FedEx account, click the [Delete] icon. | https://docs.sellerskills.com/app-setting/shipping-carriers/fedex-service | 2022-08-07T19:58:10 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.sellerskills.com |
The following guidelines are recommended best practices for maintaining the partitioning ranges of a partitioned table.
As with any processes, you must tune the best practices listed below to suit your individual processes, query workloads, and performance criteria.
- Have a well-defined, preferably automated, process for scheduling and issuing ALTER TABLE requests to add and drop partitioning ranges.
- Collect and maintain a current, monthly PARTITION statistics report which tracks the frequency of ALTER TABLE requests for adding and dropping partitions.
- Run and compare periodic EXPLAIN reports. This report is useful for determining the effects of your current partitioning on the performance of individual queries from your standard workloads.
- Based on your collected PARTITION statistics and scheduling process, define enough future ranges to minimize the frequency of ALTER TABLE requests for adding and dropping partitions. You should keep the number of “future” ranges at any one time to less than 10 percent of the total number of defined partitions.A future range is a range over date set of values which has not yet occurred at the time the partition is added. It is used to minimize the frequency of ALTER TABLE requests for adding and dropping partitions.
- To ensure the ALTER TABLE request does not fail as a result of infrequency, at least monthly make the necessary range drops and additions.
- Ensure that you drop any old partitions no longer needed, especially if queries in the workloads accessing the row-partitioned table do frequent primary index accesses and joins, and the complete partitioning column set is not included in the primary index definition. | https://docs.teradata.com/r/Teradata-VantageTM-SQL-Data-Definition-Language-Detailed-Topics/March-2019/ALTER-TABLE/ALTER-TABLE-MODIFY-Option/Best-Practices-for-Adding-and-Dropping-Partitioning-Ranges-from-a-Partitioning-Expression | 2022-08-07T19:58:46 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.teradata.com |
Contents:.
- supports a wide variety of numerical, statistical, and other function types. For a list of available transforms and functions, see Wrangle Language.
-.
- All: Select all columns in the dataset.
-..
This page has no comments. | https://docs.trifacta.com/display/HOME/Transform+Builder | 2022-08-07T19:27:30 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.trifacta.com |
protocols rip redistribute ospf
Redistributes (OSPF) routes into RIP routing tables.
OSPF routes that are redistributed into RIP are assigned a routing metric of 1. By default, no route map is applied to redistributed OSPF routes.
- metric
- Optional. A routing metric. The metric ranges from 1 through 16. The default metric is 1.
- map-name
- Optional. A route map.
Configuration mode
protocols { rip { redistribute { ospf { metric metric route-map map-name } } } }
Use the set form of this command to set the routing metric for OSPF routes being redistributed into RIP, or to specify a route map to be applied to redistributed OSPF routes.
Use the delete form of this command to remove OSPF route redistribution configuration.
Use the show form of this command to display OSPF route redistribution configuration. | https://docs.vyatta.com/en/supported-platforms/vrouter/configuration-vrouter/ip-routing/rip/route-redistribution-commands/protocols-rip-redistribute-ospf | 2022-08-07T19:17:02 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.vyatta.com |
Troubleshooting Launcher and Kubernetes:
Check Log Files#
RStudio Workbench generates log files for each component, including the RStudio Server Pro service, the Launcher service, the Kubernetes plugin, and individual sessions.
You can inspect the content of these log files for any relevant messages to help you determine the underlying cause of the error or issue that you are experiencing.
Server log files#
View the contents of the following log files on the server:
RStudio Workbench service:
/var/lib/rstudio-server/monitor/log/rstudio-server.log
Launcher service:
/var/lib/rstudio-launcher/rstudio-launcher.log
Kubernetes plugin:
/var/lib/rstudio-launcher/Kubernetes/rstudio-kubernetes-launcher.log
Session log files#
If you are able to start remote sessions in Kubernetes, you can also view the logs for individual sessions using the following steps:
From the RStudio Workbench home page, click + New Session:
Start a new session in Kubernetes (you may wish to clear Join session when ready):
Click the Info button next to the running session:
In the Session Info dialog box, select the Launcher Diagnostics tab:
In the Launcher Diagnostics tab, click the Details link:
The resulting page includes the logs from the remote session in Kubernetes:
Diagnostics report#
You can use the following command to generate a diagnostics report that includes logs for RStudio Workbench and Launcher as well as additional information about your system.
$ sudo rstudio-server run-diagnostics
The output of this command will show a location on disk where you can view the contents of the diagnostics report.
Restart services and test#
After reviewing the logs then identifying and correcting any issues, you can 2 - Verify Installation. | https://docs.rstudio.com/troubleshooting/launcher-kubernetes/log-files/ | 2022-08-07T19:11:45 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['/images/troubleshooting/launcher-kubernetes/session-logs-5.png',
'RStudio Workbench Home Page - Session Logs on RStudio Session Page'],
dtype=object) ] | docs.rstudio.com |
In MyNetworkmap you can import actor data from external sources. The data must be available in CSV format, a common export format that is supported by many programs.
The following example shows you how to import a list of companies and company characteristics from LibreOffice Calc to MyNetworkmap.
Open LibreOffice Calc and add a company name, location and number of employees per line:
Please, save the CSV file.
Switch to MyNetworkmap to the module "Actor import":
Select the export file, and then click Upload CSV.
You will then see the contents of the CSV file in the "CSV data" area. Now select "Ignore first row in the CSV data" and enter a semicolon as separator:
Then please click on "continue...". You will now see a table with the actor data. Each column stands for an actor attribute.
In this example, two attributes have been created (name and employee). (You can read about how to create attributes in another tutorial). These attributes are selected for the actor characteristics:
Then click on "Create new actors with the attribute values".
The import is now complete and the four actors have been created with the corresponding attributes.
You can now visualize these actors e.g. on a network map.
Open the "Network map" module and create a new network map.
Then open the left side menu "Actors" and click on "Reload list". The imported actors will then be displayed:
If no actors are displayed, please make sure that the attribute "name" is set in the menu area "Visualization" > "Actor label".
You can now drag and drop the actors onto the network card. Alternatively, you can insert all actors at once into the network card using the "Add all actors" button.
Afterwards you can visualize the number of employees by the actor symbol size. Click on "Actor size" in the side menu "Visualization" and select the attribute "employee" in the dialog and then click on "Save":
The companies, represented by circle symbols, are shown in different sizes. The symbol size corresponds to the number of employees.
| https://docs.kronenwett-adolphs.com/import_actors | 2022-08-07T19:57:14 | CC-MAIN-2022-33 | 1659882570692.22 | [array(['https://doc.kronenwett-adolphs.com/sites/default/files/inline-images/tabelle_employees.png',
None], dtype=object)
array(['/sites/default/files/inline-images/actor_import.png',
'Akteure importieren'], dtype=object)
array(['/sites/default/files/inline-images/actor_import_2.png',
'Akteursdaten importieren'], dtype=object)
array(['/sites/default/files/inline-images/actor_import_3.png',
'Akteursdaten importieren'], dtype=object)
array(['/sites/default/files/inline-images/actor_import_4.png',
'Akteure importieren'], dtype=object)
array(['/sites/default/files/inline-images/actor_import_nwk_list.png',
'Importierte Akteure in Netzwerkkarte laden'], dtype=object)
array(['/sites/default/files/inline-images/employees.png',
'actor size and number of employees'], dtype=object)] | docs.kronenwett-adolphs.com |
RAM, virtual memory, pagefile, and memory management in Windows
Applies to: Windows 7 Service Pack 1, Windows Server 2012 R2
Original KB number: 2160852
Summary
This article contains basic information about the virtual memory implementation in 32-bit versions of Windows..):
Pagefile
RAM not used. On these systems, it may serve no useful purpose to maintain a large pagefile. On the other hand, if disk space is plentiful, maintaining a large pagefile (for example, 1.5 times the installed RAM) does not cause a problem, and this also eliminates the need to worry over how large to make it.
Performance, architectural limits, and RAM).
References
Address Windowing Extensions | https://docs.microsoft.com/nb-NO/troubleshoot/windows-server/performance/ram-virtual-memory-pagefile-management | 2022-08-07T20:51:28 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.microsoft.com |
Cheat Sheet: Integrate IoT Security with Panorama Managed Prisma Access
Last Updated:
Thu Jul 28 15:47:57 PDT 2022
Table of Contents
Search the Table of Contents
How BGP Advertises Mobile User IP Address Pools for Service Connections and Remote Network Connections
Configure Secure Inbound Access for Remote Network Sites for Locations that Allocate Bandwidth by Location | https://docs.paloaltonetworks.com/prisma/prisma-access/3-0/prisma-access-panorama-admin/prisma-access-overview/prisma-access-licenses/iot-summary-steps | 2022-08-07T19:16:21 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.paloaltonetworks.com |
Performing file operations on configuration files and directories
How to perform file operations on configuration files and directories and the related commands.
The vRouter supports several general file-operation commands that are optimized for working with image and configuration files. They are the show file, copy file, and delete file commands. These commands are documented in Using the CLI.
These commands are optimized for configuration files and directories because. | https://docs.vyatta.com/en/supported-platforms/vnf/configuration-vnf/system-and-services/basic-system-configuration/working-with-configuration/managing-system-configuration/performing-file-operations-on-configuration-files-and-directories | 2022-08-07T19:10:55 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.vyatta.com |
Cyotek Gif Animator Help
Send Feedback
Gif Animator
0.0
Editing palettes
We're no longer updating this content regularly.
Recommended Version
Introducing Gif Animator
Getting Started
Working with animations
Working with frames
Editing images
Editing palettes
Editing a frame's palette
Dialog Reference
Using the command line client
Customizing the application
Advanced
Support and online resources
Acknowledgements
Editing palettes
Editing a frame's image
Optimizing an image
Viewing the number of colors in an image
Viewing the global palette
View the local palette of a frame
Mirroring or flipping images
Viewing the original image
Resizing an image | https://docs.cyotek.com/cyogifan/0.0/aboutpaletteediting.html | 2022-08-07T19:08:52 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.cyotek.com |
Add duplicate copy protection for the table or user-defined join index. Adding fallback creates and stores a duplicate copy of the table.
An ALTER TABLE request that changes a table with block-level compression and also adds FALLBACK does not pass the block-level compression characteristics of the primary table for the newly created fallback table by default. Vantage assigns block-level compression to a newly created fallback table, depending on the request. If the request changes the row definition, adds fallback, and also specifies the BlockCompression query band, then the fallback table:
- Has block-level compression if the value for BlockCompression is set to YES.
- Does not have block-level compression if the value for BlockCompression is set to NO.
See “SET QUERY_BAND" in Teradata Vantage™ - SQL Data Definition Language Detailed Topics , B035-1184 .
If the request does not specify the BlockCompression query band, the fallback table defaults to the system-wide compression characteristics defined by the compression columns of the DBS Control record. See Teradata Vantage™ - Database Utilities , B035-1102 .
When a hardware read error occurs, the file system reads the fallback copy of the data and reconstructs the rows in memory on their home AMP. Read From Fallback applies to:
-.
For a system-defined join index, the FALLBACK modification you make on a base table also applies to any system-defined join indexes defined on that table. You alter the FALLBACK for a system-defined join index directly.
- PROTECTION
- Optional default keyword.
- NO
- Fallback protection for the table is removed. Removing fallback deletes the existing duplicate copy.You cannot use the NO FALLBACK option and the NO FALLBACK default on platforms optimized for fallback. | https://docs.teradata.com/r/Teradata-VantageTM-SQL-Data-Definition-Language-Syntax-and-Examples/September-2020/Table-Statements/ALTER-TABLE/ALTER-TABLE-Syntax-Elements/Table-Options/FALLBACK | 2022-08-07T19:15:07 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.teradata.com |
How to process Affirm transactions in Magento Magento: Magento to auto-capture by setting Payment Action to Authorize and Capture.
To capture an order:
1. In Magento, find the order.
2. In the top-right, click Invoice.
1. In the Invoice Total section, set Amount to Capture Online.
2. Click Submit Invoice.
Void
If you haven't yet captured a charge, you can cancel its authorization by voiding it. Voiding a charge is irreversible and Affirm is unable to reinstate those funds.
1. In Magento, find the order.
2. In the top-right, click Void.
3. Click OK.
Refund
If you've already captured a charge, you can reverse it and refund the amount to the customer's Affirm account. You can only process a refund within 120 days of capturing the charge.. Click Refund
Do not click Refund Offline
Partial refund
If you've already captured a charge, you can reverse part of the charge and refund the specified amount to the customer's Affirm account. You can only process a partial refund within 120 days of capturing the charge. Partially. In the Qty to Refund column, enter the values for the number of products included in the partial refund.
6. Click Update Qty's if you edited the Qty to Refund column.
7. Click Refund (note: do not click Refund Offline).
Do not click Refund Offline
Voting Infrastructure SOP
The live voting instance can be found at
and the staging instance at
The code base can be found at
Contents
Contact Information
- Owner
Fedora Infrastructure Team
#fedora-admin, elections
- Servers
elections0{1,2}, elections01.stg, db02
- Purpose
Provides a system for voting on Fedora matters
Creating a new election
Creating the elections
Go to "Admin" in the menu at the top, select "Create new election".
Complete the election form:
- Alias
A short name for the election. It is the name that will be used in the templates.
Example: FESCo2014
- Summary
A simple name that will be used in the URLs and as in the links in the application
Example: FESCo elections 2014
- Description
A short description about the elections that will be displayed above the choices in the voting page
- Type
Allow setting the types of elections (more on that below)
- Maxium Range/Votes
Allow setting options for some election type (more on that below)
- URL
A URL pointing to more information about the election
Example: the wiki page presenting the election
- Start Date
The Start of the elections (UTC)
- End Date
The Close of the elections (UTC)
- Number Elected
The number of seats that will be selected among the candidates after the election
- Candidates are FAS users?
Checkbox allowing integration between FAS account and their names retrieved from FAS.
- Embargo results
If this is set then it will require manual intervention to release the results of the election
- Legal voters groups
Used to restrict the votes to one or more FAS groups.
- Admin groups
Give admin rights on that election to one or more FAS groups
Adding Candidates
The list of all the elections can be found at
voting/admin/
Click on the election of interest and select "Add a candidate".
Each candidate is added with a name and an URL. The name can be his/her FAS username (interesting if the checkbox that candidates are FAS users has been checked when creating the calendar) or something else.
The URL can be a reference to the wiki page where they nominated themselves.
This will add extra candidates to the available list.
Modifying an Election
Changing the details of an Election
The list of all the elections can be found at
/voting/admin/
After finding the right election, click on it to have the overview and select "Edit election" under the description.
Edit a candidate
On the election overview page found via
/voting/admin/ (and clicking
on the election of interest), next to each candidate is an
edit button allowing the admins to edit the information
relative to the candidate.
Removing a candidate
On the election overview page found via
/voting/admin/ (and clicking
on the election of interest), next to each candidate is an
x button allowing the admins to remove the candidate
from the election.
Results
Admins have early access to the results of the elections (regardless of the embargo status).
The list of the closed elections can be found at
/voting/archives.
Find there the election of interest and click on the "Results" link in the last column of the table. This will show you the Results page included who was elected based on the number of seats elected entered when creating the election.
You may use this information to send out the results email.
Legacy
Other things you might need to query
The current election software doesn’t retrieve all of the information that we like to include in our results emails. So we have to query the database for the extra information. You can use something like this to retrieve the total number of voters for the election:
SELECT e.id, e.shortdesc, COUNT(distinct v.voter) FROM elections AS e LEFT JOIN votes AS v ON e.id=v.election_id WHERE e.shortdesc in ('FAmSCo - February 2014') GROUP BY e.id, e.shortdesc;
You may also want to include the vote tally per candidate for convenience when the FPL emails the election results:
SELECT e.id, e.shortdesc, c.name, c.novotes FROM elections AS e LEFT JOIN fvotecount AS c ON e.id=c.election_id WHERE e.shortdesc in ('FAmSCo - February 2014', 'FESCo - February 2014') ; | https://docs.fedoraproject.org/ca/infra/sysadmin_guide/voting/ | 2022-08-07T18:23:38 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.fedoraproject.org |
Implementation
LeanXcale database can be configured to start in secure mode in order to provide safe access for users and administrators. This mode provides authentication and authorization methods to guarantee that users are actually who they claim to be and can only access resources they are allowed to.
Regarding authentication, LeanXcale database integrates an LDAP server to provide user/password authentication. It can also be configured to use an existing external LDAP server instead of the packaged one.
For authorization, LeanXcale has a permission based authorization mechanism which guarantees that only users with specific permissions can read or modify database resources. | https://docs.leanxcale.com/leanxcale/1.7/security/implementation/intro.html | 2022-08-07T18:32:27 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.leanxcale.com |
Deploy monitoring in Skype for Business Server
Summary: Learn how to deploy monitoring in Skype for Business Server.
Before performing these tasks, review Plan for monitoring in Skype for Business Server.
You will typically implement monitoring services within your topology by completing the following two steps:
Enabling monitoring at the same time you set up a new Skype for Business Server pool, and associating that pool with the back-end databases used to store the data gathered by the monitoring service. These back-end databases can be created using Microsoft SQL Server 2008 R2, Microsoft SQL Server 2012, Microsoft SQL Server 2014, or Microsoft SQL Server 2019.
Note
If monitoring has been enabled for a pool you can disable the process of collecting monitoring data without having to change your topology: Skype for Business Server provides a way for you to disable (and then later re-enable) Call Detail Recording (CDR) or Quality of Experience (QoE) data collection. For more information, see the Configuring Call Detail Recording and Quality of Experience Settings section of this document.
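For example, the same cmdlets used later in this topic to turn collection on can be run with $False to pause data collection, and with $True to resume it, without modifying the topology:

Set-CsCdrConfiguration -Identity "global" -EnableCDR $False

Set-CsQoEConfiguration -Identity "global" -EnableQoE $False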
One other important enhancement to monitoring in Skype for Business Server is the fact that Skype for Business Server relies on unified data collection agents that are automatically installed and activated on each Front End server.
Note
Ensure that the SQL Server Agent Service Startup Type is Automatic and the SQL Server Agent Service is running for the SQL Instance which is holding the Monitoring databases, so that the Default Monitoring SQL Server Maintenance Jobs can run on their scheduled basis under the control of the SQL Server Agent Service.
This documentation walks you through the process of installing and configuring monitoring and Monitoring Reports for Skype for Business Server.
Deployment checklist for monitoring
Although monitoring is already installed and activated on each Front End server, there are still several steps that you must undertake before you can actually begin to collect monitoring data for Skype for Business Server. These steps are outlined in the following checklist:
Enable monitoring
Although the unified data collection agents are automatically installed and activated on each Front End server, that does not mean that you will automatically begin to collect monitoring data the moment you finish installing Skype for Business Server. Call detail recording and Quality of Experience data collection must first be enabled, which can be done with the following Windows PowerShell commands.
To enable call detail recording:
Set-CsCdrConfiguration -Identity "global" -EnableCDR $True
To enable Quality of Experience data collection:
Set-CsQoEConfiguration -Identity "global" -EnableQoE $True
See also
Plan for monitoring in Skype for Business Server | https://docs.microsoft.com/en-us/skypeforbusiness/deploy/deploy-monitoring/deploy-monitoring?redirectedfrom=MSDN | 2022-08-07T18:56:16 | CC-MAIN-2022-33 | 1659882570692.22 | [] | docs.microsoft.com |
maestral.database.orm
A basic object relational mapper for SQLite.
This is a very simple ORM implementation which contains only functionality needed by Maestral. Many operations will still require explicit SQL statements. This module is no alternative to fully featured ORMs such as sqlalchemy but may be useful when system memory is constrained.
Module Contents
- class maestral.database.orm.NoDefault[source]
Class to denote the absence of a default value.
This is distinct from
Nonewhich may be a valid default.
- class maestral.database.orm.Column(sql_type, nullable=True, unique=False, primary_key=False, index=False, default=None)[source]
Bases:
Generic[
T]
Represents a column in a database table.
- Parameters
type – Column type in database table. Python types which don’t have SQLite equivalents, such as
enum.Enum, will be converted appropriately.
nullable (bool) – When set to
False, will cause the “NOT NULL” phrase to be added when generating the column.
unique (bool) – If
True, sets a unique constraint on the column.
primary_key (bool) – If
True, marks this column as a primary key column. Currently, only a single primary key column is supported.
index (bool) – If
True, create an index on this column.
default (T | type[NoDefault] | None) – Default value for the column. Set to
NoDefaultif no default value should be used. Note than None / NULL is a valid default for an SQLite column.
sql_type (maestral.database.types.SqlType) –
- render_constraints(self)[source]
Returns a string with constraints for the SQLite column definition.
- Return type
-
- render_properties(self)[source]
Returns a string with properties for the SQLite column definition.
- Return type
-
- py_to_sql(self, value)[source]
Converts a Python value to a value which can be stored in the database column.
- Parameters
value (T) – Native Python value.
- Returns
Converted Python value to store in database. Will only return str, int, float or None.
- Return type
SQLSafeType
- class maestral.database.orm.Manager(db, model)[source]
Bases:
Generic[
M]
A data mapper interface for a table model.
Creates the table as defined in the model if it doesn't already exist. Keeps a cache of weak references to all retrieved and created rows to speed up queries. The cache should be cleared manually if changes were made to the table from outside this manager.
- Parameters
db (maestral.database.core.Database) – Database to use.
model (type[M]) – Model for database table.
- delete(self, query)[source]
- Parameters
query (maestral.database.query.Query) –
- Return type
None
- select(self, query)[source]
- Parameters
query (maestral.database.query.Query) –
- Return type
-
- select_iter(self, query, size=1000)[source]
- Parameters
query (maestral.database.query.Query) –
-
- Return type
Generator[list[M], Any, None]
- select_sql(self, sql, *args)[source]
Performs the given SQL query and converts any returned rows to model objects.
- delete_primary_key(self, primary_key)[source]
Delete a model object / row from database by primary key.
- Parameters
primary_key (Any) – Primary key for row.
- Return type
None
- get(self, primary_key)[source]
Gets a model object from database by its primary key. This will return a cached value if available and None if no row with the primary key exists.
- Parameters
primary_key (Any) – Primary key for row.
- Returns
Model object representing the row.
- Return type
M | None
- has(self, primary_key)[source]
Checks if a model object exists in database by its primary key
- Parameters
primary_key (Any) – The primary key.
- Returns
Whether the corresponding row exists in the table.
- Return type
-
- save(self, obj)[source]
Saves a model object to the database table. If the primary key is None, a new primary key will be generated by SQLite on inserting the row. This key will be retrieved and stored in the primary key property of the object.
- Parameters
obj (M) – Model object to save.
- Returns
Saved model object.
- Return type
M
- update(self, obj)[source]
Updates the database table from a model object.
- Parameters
obj (M) – The object to update.
- Return type
None
- class maestral.database.orm.Model(**kwargs)[source]
Abstract object model to represent an SQL table.
Instances of this class are model objects which correspond to rows in the database table.
To define a table, subclass
Modeland define class properties as
Column. Override the
__tablename__attribute with the SQLite table name to use. The
__columns__attribute will be populated automatically for you. | https://maestral.readthedocs.io/en/stable/autoapi/maestral/database/orm/index.html | 2022-08-07T18:47:22 | CC-MAIN-2022-33 | 1659882570692.22 | [] | maestral.readthedocs.io |
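As a rough illustration of how Model, Column and Manager fit together, here is a minimal sketch. The column type helpers (SqlInt, SqlString) and the Database constructor arguments are assumptions for illustration only; check maestral.database.types and maestral.database.core for the actual names and signatures.

from maestral.database.core import Database
from maestral.database.orm import Column, Manager, Model
from maestral.database.types import SqlInt, SqlString  # assumed type helper names

class User(Model):
    __tablename__ = "users"

    id = Column(SqlInt(), primary_key=True)
    name = Column(SqlString(), index=True)

db = Database("example.db")    # assumed constructor signature
users = Manager(db, User)      # creates the "users" table if it does not exist

user = users.save(User(id=None, name="alice"))  # primary key is filled in by SQLite
assert users.has(user.id)
print(users.get(user.id))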
When working offline, the Ready to Download Page is disabled - the application must be able to connect to the DAZ 3D store in order to retrieve the list of available downloads for an account and for this page to serve a purpose. When working online, this page provides a list of all products and/or updates that are ready for download, for the current account.
Any products that Install Manager already has a record of downloading and/or installing, for the current account, are not displayed here - unless an update to that product exists on the store, in which case the update will be displayed on this page. Products that have already been downloaded or installed are displayed on the Ready to Install Page, or the Installed Page, respectively. If an update to a product has been downloaded but not installed and a newer update becomes available from the store, the update will be displayed on this page rather than on the Ready to Install/update can be viewed by clicking the information_button, between the Product Name and the Status columns. Clicking this button will open the system default URL handler (usually a web browser) to the corresponding page in the Read Me > Product Index on this site.
When it comes to downloading the products/updates listed on this page, you have a couple of options. You can download each product/update individually by pressing the download_button in the Status column, on the right. Or, if you would like to download a batch of products/updates, you can queue those products for download by checking the small box on the left side of the Product Name column. Checking or unchecking the box next to the “Product Updates” and/or “Products” parent items will cause all of their respective child items to follow in kind. These two items provide 3 distinct checked states: checked, partially checked and unchecked.
When one or more of these boxes are checked, the start_queue_button, below the list, becomes enabled. Pressing the button causes processing of the queue to begin and the checked products/updates to start downloading. At this point, all of the checked products/updates will become unchecked and the Start Queue button will become disabled. Once there are products/updates in the queue (signified by the product/update displaying a download_progress_indicator and a cancel_download_button in place of the Download Button and product_options_button), the clear_queue_button becomes enabled, which when pressed clears the queue of all downloads/updates that are not already in the queue to be added to the end of the queue, the checked products/updates to become unchecked and the button to return to its previous state.
When the Download Progress Indicator for a given product/update is full and the download is complete, the product is moved from this page to the Ready to Install Page. If you have the install_after_download_option at the bottom of the list checked, the product/update will install without any further interaction required from you. In this case, the product is moved from the list on this page to the list on the Ready to Install Page and then finally to the list on the Installed Page. If you use this option you may also want to be sure that the “Install To” paths, displayed in the installation_details at the bottom of the page when the show_details_option is checked, are set to the desired locations prior to beginning the download(s).
When a product is downloaded, the package (*.zip) it was downloaded in is placed in a predefined folder on your hard drive. You can view and/or change this location in the Settings Window, at the top of the downloads_page.
Below is a list of interface elements that the Ready to Download Page can exist within:
Below is a list of interface elements that exist within the Ready to Download Page: | http://docs.daz3d.com/doku.php/public/software/install_manager/referenceguide/interface/ready_to_download_page/start | 2017-10-17T00:14:19 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.daz3d.com |
GlideRecord - addNotNullQuery(String fieldName)
Adds a filter to return records where the specified field is not null. You can set the glide.invalid_query.returns_no_rows system property to true to have queries with invalid encoded queries return no records.
Table 1. Parameters: fieldName (String) - The field name.
Table 2. Returns: QueryCondition - QueryCondition of records where the parameter field is not null.
Scoped equivalent: To use the addNotNullQuery() method in a scoped application, use the corresponding scoped method: Scoped GlideRecord - addNotNullQuery(String fieldName).
Example:
var target = new GlideRecord('incident');
target.addNotNullQuery('short_description');
target.query(); // Issue the query to the database to get all records
while (target.next()) {
    // add code here to process the incident record
}
Communicating with Serial Devices
A universal asynchronous receiver/transmitter (UART) is a device used for serial communication between two devices. The Omega comes with two UART devices:
UART0, and
UART1.
UART0 is largely used for outputting the Omega’s command line, and
UART1 is free to communicate with other devices.
This article will cover:
- what the UART does
- where it is on the hardware
- using the UART on the Omega
- via the command line
- via the
screencommand
- using Python
Only two devices can communicate with each other per UART connection. This is different from other communication protocols, such as I2C or SPI, where there may be 3, 10, or many more devices connected to the same data lines.
What is a UART?
A UART is used for serial communication between devices. UART has no master-slave architecture and allows you to set the transmissions speeds of your data. The transmission speed is known as the baud rate, and it represents the time spent holding each bit high or low.
If you’ve connected to the Omega via serial before you’ll remember that we set the baud rate to 115200 bits-per-second, meaning that the time spent holding each bit high or low is 1/115200bps or 8.6µs per bit.
The UART on the Omega formats the data using the 8N1 configuration, in which there are 8 data bits, No parity bit, and one stop bit.
Connecting UART Devices
The UART uses the TX line to transmit data, and RX to receive data. When communicating with other devices, the TX on device A will send data to the RX on device B and vice versa.
To set up a serial line connection between two devices:
- Connect device A’s
TXline to device B’s
RXline.
- Connect device A’s
RXline to device B’s
TXline.
- Connect the two devices’
GNDlines together.
The Omega & UART
UART interactions on the Omega2 are done using the virtual device files
/dev/ttyS0 and
/dev/ttyS1. This is made possible with
sysfs, a pseudo-file system that holds information about the Omega’s hardware in files, and lets the user control the hardware by editing the files.
On the Hardware
The UART pins are highlighted on the Omega2, Expansion Headers, and Breadboard Dock below.
Pins
12 and
13 are used for
UART0. These are primarily used for the command line on the Omega.
The
UART1 uses pins
45 and
46. These are labelled as
TX1 and
RX1 to signify that they are used for
UART1.
Expansion Dock & Power Dock
On the Expansion Dock and Power Dock, only
UART1 is broken out, shown below:
On the Expansion Dock,
UART0 is connected to the on-board USB-to-Serial chip, providing direct access to the Omega’s command prompt with a USB connection.
Mini Dock
There are no GPIO headers on the Mini Dock so neither of the UARTs are available on the Mini Dock. However, just on like the Expansion Dock,
UART0 is connected to the on-board USB-to-Serial chip, providing direct access to the Omega’s command prompt with a USB connection.
Arduino Dock 2
Both
UART0 and
UART1 are available on the Arduino Dock 2:
The Arduino Dock connects the on-board microcontroller’s serial port with
UART1 on the Omega, allowing direct communication between the microcontroller and the Omega. This is useful since it offers users the freedom to fully define and design the communication between the two devices.
Breadboard Dock
Both
UART0 and
UART1 are available on the Breadboard Dock:
IMPORTANT: The TX+/- and RX+/- pins are used for the Ethernet Expansion. Be careful not to connect your serial lines to these pins!
Using the Command Line
We’ll be using some command line tools to write to (send) and read from (receive) data from
/dev/ttyS1 just as if it were any other file.
Sending Data
To send data to
UART1, simply
echo to
/dev/ttyS1 like so:
echo "my message" > /dev/ttyS1
This command will not display any text on the screen when entered, as you are simply writing to a file.
Receiving Data
To read data from
UART1, simply run
cat on it like so:
cat /dev/ttyS1 # waits for input data
This command will wait for and print any data received by the Omega until you exit the program (
Ctrl-C).
Send and Print Received Data
The above commands don’t do anything useful if you don’t have any serial devices connected. However, you can simulate real serial communication by having the Omega talk to itself!
Simply connect the Omegas’s
RX and
TX pins together as shown below; the
GND connection is shared between the pins already.
Now open two separate command line sessions on your Omega. It’s easiest to connect via SSH in two separate terminals from your computer.
- In one terminal, run
cat /dev/ttyS1to start reading the serial port.
- In the other, run
echo "hello world!" > /dev/ttyS1to write a message to the serial port.
You will see your message appear in the terminal running
cat. And that’s the basics of serial communication on the Omega!
Using the
screen Command
The above method is a great way to introduce using UART, but it’s not all that practical. By using the
screen command, we can actually send commands to other Omegas or connected devices.
Installing Screen
You’ll need to start by installing
screen using the Omega’s package manager
opkg. We’ll start by updating our list of packages:
opkg update
Now we’ll install screen:
opkg install screen
And now you’re ready to use screen with the UART!
Running
screen
To use the UART with
screen enter the following command:
screen /dev/ttyS1 <BAUD RATE>
The terminal will go blank, and the command works the following way:
- Any keys or letters you type are immediately sent to the UART (ie. to the device connected to it)
- The terminal will immediately display any data received from the UART (ie. from the device connected to it)
To test this out using just 1 Omega, login to 2 separate SSH sessions as with the previous section. In both sessions, run the following example command:
screen /dev/ttyS1 9600
Both terminals will now go blank, waiting for your input.
Now start typing
hello world! in the first terminal, and the words will start to appear in the second!
This can be also done with 2 Omegas by connecting their
TX1,
RX1, and
GND pins as described in Connecting UART Devices.
Working With
screen Sessions
- To detach from the
screensession and leave it running so you can come back to it later, type
Ctrl-athen
d.
- For detailed information on how to work with attaching and reattaching, see the Screen User’s Manual.
- To kill (end) a session, type
Ctrl-athen
k.
Detaching from a
screen process does not end it, ie. it is still running. If you start and detach from several
screen processes, these will begin to tie up your Omega’s memory.
To kill all
screen processes, copy and paste the following command:
for pid in $(ps | grep "screen" | awk '{print $1}'); do kill -9 $pid; done
This big command essentially does the following:
- Get the list of running processes using
ps
- Pipe (send) it to the
grep command and search for process names containing the word
screen
grep is a powerful utility to analyze text using regular expressions.
- Pipe those processes’ information to
awk, which outputs their process IDs
awk is another powerful utility that can perform complex operations on text.
- kill each of the process IDs corresponding to
screen
Using Python
You can use Python in order to communicate serially via the UART. The module to accomplish this is
PySerial which can be installed using
opkg.
Installing the Module
You’ll need to have Python or Python-light installed in order to continue. If you’ve installed the full version of Python you will already have PySerial.
You can read our guide to installing and using Python on the Omega for more information.
First update
opkg:
opkg update
And then install
python-pyserial:
opkg install python-pyserial
You’ll now be able to use PySerial!
Using PySerial
You can use PySerial by connecting a device to your Omega’s UART1. For more on the usage of PySerial you can read the PySerial documentation. | https://docs.onion.io/omega2-docs/uart1.html | 2017-10-17T00:00:05 | CC-MAIN-2017-43 | 1508187820487.5 | [array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/uart-data-frame.png',
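As a minimal sketch (the baud rate, message and timeout are arbitrary examples), the following script opens UART1, sends a line, and waits for a reply:

import serial

# Open UART1 at 9600 baud with a 1 second read timeout
uart = serial.Serial('/dev/ttyS1', 9600, timeout=1)

uart.write(b'hello world!\n')   # send bytes to the connected device
reply = uart.readline()         # read until a newline or the timeout
print(reply.decode('utf-8', errors='replace'))

uart.close()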
'uart data frame'], dtype=object)
array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/uart-tx-rx-cross.png',
'cross tx rx'], dtype=object)
array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/uart-pins-omega2.jpg',
'pinout'], dtype=object)
array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/uart-pins-exp-dock.jpg',
'uart-exp-power-dock'], dtype=object)
array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/uart-pins-arduino-dock.jpg',
'uart-arduino-dock'], dtype=object)
array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/uart-pins-breadboard-dock.jpg',
'uart-breadboard-dock'], dtype=object)
array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/uart-omega-jumpered.jpg',
'connect-rx-tx'], dtype=object)
array(['https://raw.githubusercontent.com/OnionIoT/Onion-Docs/master/Omega2/Documentation/Doing-Stuff/img/uart-echo-cat.png',
'two-terminals'], dtype=object) ] | docs.onion.io |
The Select tool is used to select drawing objects in the Drawing or Camera view and to apply basic transformations, such as repositioning, rotating or scaling, using the different handles of the bounding box.
To select with the Select tool:
To deform or reposition selected drawing objects:
Some of the transformations such as rotation, scale and flip, are done using the position of the pivot point as the central point. By default, this pivot point is located in the centre of your selection. You can temporarily reposition this pivot point for a transformation using the Select tool.
To temporarily reposition the pivot point:
The pivot point appears in the middle of your selection.
This becomes the new position of the pivot point for the current transformation and will remain there until you make a new selection. | https://docs.toonboom.com/help/toon-boom-studio/Content/TBS/User_Guide/004_Drawing/008_H1_Selecting.html | 2017-10-16T23:52:54 | CC-MAIN-2017-43 | 1508187820487.5 | [] | docs.toonboom.com |
Thanks!
You!
Just after two months of the huge 0.9 release, here is Boo 0.9.1 - bringing more new features and bug fixes.
Highlights of this release are:
- Macro definition arguments [BOO-1146] - macro definitions can define typed arguments as with any method definition.
- Nested macros extensions [BOO-1140] - nested macros no longer have to be defined within their parent macro block.
- Omitted expression for member references [BOO-1150] - `.foo' is now equivalent to `self.foo' by default. This behavior can easily be changed by a macro or compiler step.
- TypeSystem refactoring - brings cleaner API and faster compilation (-30% time)
Take note that from now on strong versioning is used on Boo releases; this release's assemblies are versioned `2.0.9.1'.
Contributors to this release: Cedric Vivier, Daniel Grunwald, JB Evain, Rodrigo B. De Oliveira.
Read the changelog for the complete list of improvements.
Download it now and have fun!
It's been a long time but the biggest release ever of Boo is right here now!
Huge improvements all over the board, as you can read in the full changelog; its chief weapons are:
- Generator macros [BOO-1077] - macros are no longer limited to returning a single statement or block and instead are able to yield an indefinite number of nodes:
- Nestable macros [BOO-1120] - macro definitions can be nested to allow for context sensitive keywords
- Pattern matching [BOO-1106] - simple but powerful object pattern matching with the match/case/otherwise macros
- Strict mode [BOO-1115] - strict mode changes a few compiler rules: default visibility for members is private, method parameter types and return types must be explicitly declared, among other things
- Support for SilverLight profile [BOO-1117] - and Vladimir Lazunin kicked it off with a Tetris example.
You can read examples on these 0.9 new features on Rodrigo's blog.
This release is brought to you by Avishay Lavie, Cedric Vivier, Daniel Grunwald, Marcus Griep and Rodrigo B. De Oliveira.
Download it now and have fun!
Join the mailing-list for questions and latest updates about Boo development.
The JRuby team is pleased to announce the release of JRuby 0.9.2.
Download:
Ola Bini gets accolades for completely writing an openssl clone in Java from
scratch! We are not worthy
About Environments
Puppet supports two ways of setting up environments: directory environments and config file environments. Note that these are mutually exclusive — enabling one will completely disable the other.
Directory environments are easier to use and will eventually replace config file environments completely.
Assigning Nodes to Environments
You can assign agent nodes to environments using either the agent’s config file or an external node classifier (ENC).
For details, see the page on assigning nodes to environments.
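As a sketch of the config-file route (the environment name here is arbitrary), the agent's puppet.conf would contain something like:

# puppet.conf on the agent node
[agent]
  environment = testing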
Referencing the Environment in Manifests
In Puppet manifests, you can get the name of the current environment by using the
$environment variable, which is set by the Puppet master.
About Directory Environments
Directory environments let you add a new environment by simply adding a new directory of config data. Here’s what you’ll need to know to start using them:
Puppet Must Be Configured to Use Them
Since directory environments are a major change to how Puppet loads code and serves configurations, they aren’t enabled by default yet. (They will become the only way to manage environments in Puppet 4.0.)
To start using directory environments, do the following:
- Configure the Puppet master to use directory environments (a minimal puppet.conf sketch follows this list)
- Create your environments
- Assign nodes to their environments
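As a reference point, enabling directory environments is mostly a matter of setting environmentpath on the Puppet master; a minimal puppet.conf sketch (the directory shown is just the conventional location) looks like this:

# puppet.conf on the Puppet master
[main]
  environmentpath = $confdir/environments

# Each environment is then a subdirectory of that path, containing its own
# manifests/ and modules/ directories, e.g. $confdir/environments/production/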
Unconfigured Environments Aren’t Allowed
If a node is assigned to an environment which doesn’t exist — that is, there is no directory of that name in any of the
environmentpath directories — the Puppet master will fail compilation of its catalog.
They Disable Config File Environments
If directory environments are enabled, they will completely disable config file environments. This means:
- Puppet will always ignore the
manifest,
modulepath, and
config_versionsettings in puppet.conf.
- Puppet will always ignore any environment config sections in puppet.conf.
Instead, the effective site manifest and modulepath will always come from the active environment.
About Config File Environments
To use config file environments, see the reference page on config file environments.. | https://docs.puppetlabs.com/puppet/latest/reference/environments.html | 2015-02-27T04:01:17 | CC-MAIN-2015-11 | 1424936460472.17 | [] | docs.puppetlabs.com |
This plugin connects SonarQube to Redmine issue and project management tool in various ways.
SonarQube retrieves the number of open issues associated to a project from Redmine. It then reports on the total number of issues and distribution by priority.
SonarQube retrieves the number of open issues associated to a project from Redmine. It then reports on the total number of issues and distribution by developers.
This feature allows you to create a review (on a violation) that will generate a Redmine Issue on your configured Redmine installation.
When logged in, you should find the "Link to Redmine" action available on any violation:
You can enter any comment and after you press "Link to Redmine", a new review comment is added on the violation: you can see the link to the newly-created Redmine issue.
And the corresponding Redmine Issue looks like:
Log in to your Redmine installation with administration rights
Go to the "My Account" page ( /my/account ) and create a new API Access key on the right panel of your screen.
Copy the API Access key to use it in plugin configuration
At Global level, go to Settings -> Redmine and set Redmine's URL and API Access key you copied from previous step
At Project level, go to Configuration -> Redmine Configuration Page | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=230396317 | 2015-02-27T04:12:41 | CC-MAIN-2015-11 | 1424936460472.17 | [] | docs.codehaus.org |
Quick Start¶
This tutorial will walk you through creating a simple project that uses PHPBench as a dependency.
Create your project¶
Create a directory for the tutorial:
$ mkdir phpbench-tutorial
$ cd phpbench-tutorial
And create the following Composer file within it:
{ "name": "acme/phpbench-test", "require-dev": { "phpbench/phpbench": "^1.0" }, "autoload": { "psr-4": { "Acme\\": "src/" } }, "autoload-dev": { "psr-4": { "Acme\\Tests\\": "tests/" } }, "minimum-stability": "dev", "prefer-stable": true }
Now perform a Composer install:
$ composer install
PHPBench should now be installed. Please create the following directories:
$ mkdir -p tests/Benchmark
$ mkdir src
Before you start¶
You will need some code to benchmark, create the following class:
// src/TimeConsumer.php
namespace Acme;

class TimeConsumer
{
    public function consume()
    {
        usleep(100);
    }
}
PHPBench configuration¶
In order for PHPBench to be able to autoload files from your library, you
should specify the path to your bootstrap file (i.e.
vendor/autoload.php).
This can be done in the PHPBench configuration.
Create a
phpbench.json file in the projects root directory:
{ "$schema":"./vendor/phpbench/phpbench/phpbench.schema.json", "runner.bootstrap": "vendor/autoload.php" }
Above we also added the optional
$schema which should enable auto-completion and
validation in your IDE.
Note
PHPBench does not require a bootstrap (or a configuration file for that matter). You may omit it if you do not need autoloading, or you want to include files manually.
Warning
Some PHP extensions such as Xdebug will affect the performance of your benchmark subjects and you may want to disable them, see Disabling the PHP INI file.
Create a Benchmark¶
In order to benchmark your code you will need to execute that code within
a method of a benchmarking class. By default the class name must
have the
Bench suffix and each benchmark method must be prefixed
with
bench.
Create the following benchmark class:
// tests/Benchmark/TimeConsumerBench.php
namespace Acme\Tests\Benchmark;

use Acme\TimeConsumer;

class TimeConsumerBench
{
    public function benchConsume()
    {
        $consumer = new TimeConsumer();
        $consumer->consume();
    }
}
Now you can execute the benchmark as follows:
$ ./vendor/bin/phpbench run tests/Benchmark --report=default
And you should see some output similar to the following:
PHPBench @git_tag@ running benchmarks...
with configuration file: /home/daniel/www/phpbench/phpbench-tutorial/phpbench.json
with PHP version 7.4.14, xdebug ❌, opcache ❌

\Acme\Tests\Benchmark\TimeConsumerBench

    benchConsume............................I0 - Mo185.000μs (±0.00%)

Subjects: 1, Assertions: 0, Failures: 0, Errors: 0
+------+--------------+--------------+-----+------+----------+-----------+--------------+----------------+
| iter | benchmark    | subject      | set | revs | mem_peak | time_avg  | comp_z_value | comp_deviation |
+------+--------------+--------------+-----+------+----------+-----------+--------------+----------------+
| 0    | benchConsume | benchConsume | 0   | 1    | 653,528b | 185.000μs | +0.00σ       | +0.00%         |
+------+--------------+--------------+-----+------+----------+-----------+--------------+----------------+
The code was only executed once (as indicated by the
revs column). To
achieve a better measurement increase the revolutions:
// ...
class TimeConsumerBench
{
    /**
     * @Revs(1000)
     */
    public function benchConsume()
    {
        // ...
    }
}
Revolutions in PHPBench represent the number of times that the code is executed consecutively within a single measurement.
Currently we only execute the benchmark subject a single time, to build
confidence in the result increase the number of iterations
using the
@Iterations annotation:
// ...
class TimeConsumerBench
{
    /**
     * @Revs(1000)
     * @Iterations(5)
     */
    public function benchConsume()
    {
        // ...
    }
}

Revolutions and iterations can also be overridden on the command line or set globally in the configuration.
At this point it would be better for you to use the aggregate report rather than default:
$ php vendor/bin/phpbench run tests/Benchmark/TimeConsumerBench.php --report=aggregate
Increase Stability¶
Stability can be inferred from rstdev (relative standard deviation), with 0% being the best; anything above 2% should be treated as suspicious.
To increase stability you can use the @RetryThreshold to automatically repeat the iterations until the diff (the percentage difference from the lowest measurement) fits within a given threshold:
Note
You can see the diff value for each iteration in the default report.
$ php vendor/bin/phpbench run tests/Benchmark/TimeConsumerBench.php --report=aggregate --retry-threshold=5
Warning
Depending on system stability, the lower the
retry-threshold the
longer it will take to resolve a stable set of results.
Customize Reports¶
PHPBench allows you to customize reports on the command line:
$ php vendor/bin/phpbench run tests/Benchmark/TimeConsumerBench.php --report='{"extends": "aggregate", "cols": ["subject", "mode"]}'
Above we configure a new report which extends the aggregate report that we have already used, but we use only the
subject and
mode columns. A full list of all the options for the
default reports can be found in the Report Generators reference.
Configuration¶
To finish off, add the path and new report to the configuration file:
{ "runner.path": "tests/Benchmark", "report.generators": { "consumation_of_time": { "extends": "default", "title": "The Consumation of Time", "description": "Benchmark how long it takes to consume time", "cols": [ "subject", "mode" ] } } }.
Summary¶
In this tutorial you learnt to
- Configure PHPBench for a project
- Create a benchmarking class
- Use revolutions and iterations to more accurately profile your code
- Increase stability with the retry threshold
- Use reports
- Compare against previous benchmarks with Regression Testing (see the command sketch below)
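For that last point, PHPBench can store a run under a tag and compare later runs against it. The flag names below are those of PHPBench 1.x; check phpbench run --help for your version:

$ ./vendor/bin/phpbench run tests/Benchmark --report=aggregate --tag=original
$ ./vendor/bin/phpbench run tests/Benchmark --report=aggregate --ref=original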