CPU HCL Contributing Feel free to test different processors and report your results via email, by submitting a Pull Request to the Dasharo documentation repository, or by using the Dasharo issues repository. If you have already reported your results and later change your hardware configuration, we would appreciate an additional HCL report. HCL list The CPU Hardware Compatibility List presents the CPUs tested and verified by the community to work with Dasharo. The following list does not include CPUs tested and verified in the 3mdeb laboratory - that information can be found in the Hardware Matrix documentation. Legend: * Processor name - the full name of the processor, including vendor, brand and CPU number. * Core name - CPU core codename. * CPU base speed - base CPU speed. * CPU boost speed - boosted CPU speed. * Wattage - processor wattage declared by the manufacturer. * GPU - information about the embedded graphics processing unit. * Results - link to measurement results. The processor name, core name and speed can be read from the OS (on Linux systems with the lscpu command, on Windows systems from the System Information menu). The rest of the information can be read from the CPU package or documentation.
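As a quick illustration of how the legend fields can be collected on Linux, the sketch below pulls the processor name and speeds out of lscpu and sysfs; the exact field labels (for example "CPU max MHz") and the presence of the cpufreq entries vary between systems, so treat the patterns and paths as assumptions rather than part of the Dasharo HCL procedure.
# Print the fields typically needed for an HCL report (labels may differ per lscpu version)
lscpu | grep -E 'Model name|CPU max MHz|CPU min MHz'
# Base/boost speeds can also be read from sysfs on most systems (values are in kHz)
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq \
    /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq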
https://docs.dasharo.com/variants/msi_z690/cpu-hcl/
2022-09-25T07:40:44
CC-MAIN-2022-40
1664030334515.14
[]
docs.dasharo.com
Create Local Archive Volume By default, Azure Blob Storage is used as the remote archive volume to store backups. However, if you want to create a local archive volume, you can follow the instructions given in this section. If you store your backups locally, you need to create a local archive volume. Archive volumes have a unique format called SDFS (Storage Distributed File System). Archive volumes have a built-in compression mechanism: incoming data is automatically compressed, and outgoing data is automatically decompressed. SDFS volumes are formatted during creation. The following are the supported SDFS block sizes: - max. 16 TB for 64 KiB (default) - max. 24 TB for 96 KiB - max. 32 TB for 128 KiB - max. 64 TB for 256 KiB - max. 128 TB for 512 KiB You can connect to an SDFS volume using any HTTP(S) or FTP(S) client. Follow the steps below to create a local archive volume: - Log in to EXAoperation with an administrator account. - In EXAoperation, go to Services > EXAStorage and click Add Volume. - Enter the properties for the new volume, and set the Volume Type to Archive. - Click Add to create the volume. The volume is added to EXAStorage.
https://docs.exasol.com/db/6.2/administration/azure/manage_storage/create_local_archive_volume.htm
2022-09-25T09:21:36
CC-MAIN-2022-40
1664030334515.14
[array(['../../../resource/images/administration/storage/add volume - default screen.png', 'Add Volume'], dtype=object) array(['../../../resource/images/administration/storage/createlocalarchive_examplestorage.png', 'Create Local Archive Volume'], dtype=object) ]
docs.exasol.com
SRS Protocol¶ What causes SIG_INVALID messages in SRS?¶ Common causes are: The request included Unicode/UTF-8 characters and these were not processed correctly while handing the request off to your GPG libraries for signing, which often results in an invalid signature for your request. The GPG key you signed the request with is not valid in the environment you are trying to use (for example, you have separate test/prod keys and you attempted to use your prod key against the test environment). You're attempting to use your key against the SRS Test system before the Friday refresh process has completed. Your GPG key may have expired. What causes LOCK_ERROR messages in SRS?¶ What causes INSECURE_UPDATE messages in SRS?¶ How do you blank out or clear a field in SRS XML? (i.e. removing fax or address2)¶ How do I generate a PGP key for use with SRS?¶ We recommend using the GnuPG tool to generate a key. Note Make sure all the following commands are executed as the user that will be running the command line client. To generate a key, type: gpg --gen-key Follow the instructions the gpg application gives you: choose an 'RSA and RSA' type key, a keysize of '4096', and '0' expiry (unless you have reason to choose non-default settings). When referring to the key in later commands, use the 'Real Name', 'Email Address', or both that you entered for the key. If you are using the SRS XML client, or you want to verify the signatures sent with responses by the registry, then you must import the registry's public key to your keyring. To do this, type: gpg --import reg.key You will have to specify the path to the key file if you're executing 'gpg' in a directory other than the one containing the key file. Note The minimum PGP key size we allow on new RSA keys is 2048-bit, and InternetNZ suggests the use of 4096-bit RSA keys. In order to work with SRS, the key needs to be confirmed by InternetNZ as having been added to SRS. To do that, the key should be exported and sent to [email protected] Where can I find the Registry PGP Keys?¶ The registry public keys can be found on the Registry PGP Keys page.
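As a consolidated sketch of the key-handling steps above, assuming GnuPG is installed; the output file name and the "Your Name" identifier are placeholders for the details you entered when generating the key, not values mandated by InternetNZ.
# Generate a new RSA/RSA key pair interactively (choose 4096-bit and no expiry, per the FAQ)
gpg --gen-key
# List keys to confirm the name/email the key was created under
gpg --list-keys
# Export your public key (ASCII-armoured) so it can be sent to the registry for confirmation
gpg --armor --export "Your Name" > srs-public-key.asc
# Import the registry's public key into your keyring
gpg --import reg.key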
https://docs.internetnz.nz/legacy/faq/srs/
2022-09-25T09:11:41
CC-MAIN-2022-40
1664030334515.14
[]
docs.internetnz.nz
security saml-sp modify Modify SAML service provider authentication Availability: This command is available to cluster administrators at the admin privilege level. Description The security saml-sp modify command modifies the Security Assertion Markup Language (SAML) Service Provider (SP) configuration for single sign-on authentication. This command is used to enable or disable an existing SAML SP by setting the -is-enabled parameter to true or false respectively. This command will check the validity of the current SAML SP configuration before enabling the SP. Also, it is necessary to use this command with the -is-enabled false parameter prior to deleting an existing SAML SP configuration. SAML SP can only be disabled in this way by a password authenticated console application user or from a SAML authenticated command interface. The delete command must be used if the SAML configuration settings are to be changed, as only the is-enabled parameter can be modified. Parameters [-is-enabled {true|false}] - SAML Service Provider Enabled Use this parameter to enable or disable the SAML SP. Examples The following example enables SAML SP: cluster1::> security saml-sp modify -is-enabled true cluster1::>
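A minimal sketch of the disable-before-delete sequence described above; it assumes the companion security saml-sp delete command documented elsewhere in the ONTAP CLI reference, and must be run by a password authenticated console user or from a SAML authenticated interface.
cluster1::> security saml-sp modify -is-enabled false
cluster1::> security saml-sp delete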
https://docs.netapp.com/us-en/ontap-cli-93/security-saml-sp-modify.html
2022-09-25T08:19:07
CC-MAIN-2022-40
1664030334515.14
[]
docs.netapp.com
Monitor SAP HANA database clone operations You can monitor the progress of SnapCenter clone operations by using the Jobs page. You might want to check the progress of an operation to determine when it is complete or if there is an issue. About this task.
https://docs.netapp.com/us-en/snapcenter-46/protect-hana/task_monitor_hana_database_clone_operations.html
2022-09-25T08:05:43
CC-MAIN-2022-40
1664030334515.14
[]
docs.netapp.com
Chapter 1: Install and Configuration¶ Throughout this chapter you will need to be the root user or you will need to be able to sudo to root. Install EPEL and OpenZFS Repositories¶ LXD requires the EPEL (Extra Packages for Enterprise Linux) repository, which is easy to install using: dnf install epel-release Once installed, check for updates: dnf upgrade If there were any kernel updates during the upgrade process, reboot the server. OpenZFS Repository for 8.6 and 9.0¶ Install the OpenZFS repository with: dnf install(rpm --eval "%{dist}").noarch.rpm We also need the GPG key, so use this command to get that: gpg --import --import-options show-only /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux Install snapd, dkms, vim, and kernel-devel¶ LXD must be installed from a snap for Rocky Linux. For this reason, we need to install snapd (and a few other useful programs) with: dnf install snapd dkms vim kernel-devel And now enable and start snapd: systemctl enable snapd And then run: systemctl start snapd Reboot the server before continuing here. Install LXD¶ Installing LXD requires the use of the snap command. At this point, we are just installing it, we are not doing the set up: snap install lxd Install OpenZFS¶ dnf install zfs Environment Set up¶ Most server kernel settings are not sufficient to run a large number of containers. If we assume from the beginning that we will be using our server in production, then we need to make these changes up front to avoid errors such as "Too many open files" from occurring. Luckily, tweaking the settings for LXD is easy with a few file modifications and a reboot. Modifying limits.conf¶ The first file we need to modify is the limits.conf file. This file is self-documented, so look at the explanations in the file as to what this file does. To make our modifications type: vi /etc/security/limits.conf This entire file is remarked/commented out and, at the bottom, shows the current default settings. In the blank space above the end of file marker (#End of file) we need to add our custom settings. The end of the file will look like this when you are done: # Modifications made for LXD * soft nofile 1048576 * hard nofile 1048576 root soft nofile 1048576 root hard nofile 1048576 * soft memlock unlimited * hard memlock unlimited Save your changes and exit. ( SHIFT:wq! for vi) Modifying sysctl.conf With 90-lxd.override.conf¶ With systemd, we can make changes to our system's overall configuration and kernel options without modifying the main configuration file. Instead, we'll put our settings in a separate file that will simply override the particular settings we need. To make these kernel changes, we are going to create a file called 90-lxd-override.conf in /etc/sysctl.d. To do this type: vi /etc/sysctl.d/90-lxd-override.conf Place the following content in that file. 
Note that if you are wondering what we are doing here, the file content below is self-documenting: ## The following changes have been made for LXD ## # fs.inotify.max_queued_events specifies an upper limit on the number of events that can be queued to the corresponding inotify instance - (default is 16384) fs.inotify.max_queued_events = 1048576 # fs.inotify.max_user_instances This specifies an upper limit on the number of inotify instances that can be created per real user ID - (default value is 128) fs.inotify.max_user_instances = 1048576 # fs.inotify.max_user_watches specifies an upper limit on the number of watches that can be created per real user ID - (default is 8192) fs.inotify.max_user_watches = 1048576 # vm.max_map_count contains the maximum number of memory map areas a process may have. Memory map areas are used as a side-effect of calling malloc, directly by mmap and mprotect, and also when loading shared libraries - (default is 65530) vm.max_map_count = 262144 # kernel.dmesg_restrict denies container access to the messages in the kernel ring buffer. Please note that this also will deny access to non-root users on the host system - (default is 0) kernel.dmesg_restrict = 1 # This is the maximum number of entries in ARP table (IPv4). You should increase this if you create over 1024 containers. net.ipv4.neigh.default.gc_thresh3 = 8192 # This is the maximum number of entries in ARP table (IPv6). You should increase this if you plan to create over 1024 containers. Not needed if not using IPv6, but... net.ipv6.neigh.default.gc_thresh3 = 8192 # This is a limit on the size of eBPF JIT allocations which is usually set to PAGE_SIZE * 40000. net.core.bpf_jit_limit = 3000000000 # This is the maximum number of keys a non-root user can use, should be higher than the number of containers kernel.keys.maxkeys = 2000 # This is the maximum size of the keyring non-root users can use kernel.keys.maxbytes = 2000000 # This is the maximum number of concurrent async I/O operations. You might need to increase it further if you have a lot of workloads that use the AIO subsystem (e.g. MySQL) fs.aio-max-nr = 524288 Save your changes and exit. At this point you should reboot the server. Checking sysctl.conf Values¶ Once the reboot has been completed, log back in to the server. We need to spot check that our override file has actually done the job. This is easy to do. There's no need to check every setting unless you want to, but checking a few will verify that the settings have been changed. This is done with the sysctl command: sysctl net.core.bpf_jit_limit Which should show you: net.core.bpf_jit_limit = 3000000000 Do the same with a few other settings in the override file (above) to verify that changes have been made. Author: Steven Spencer Contributors: Ezequiel Bruni
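As an additional spot check after the reboot, a small sketch; the expected values simply mirror the override and limits files created above.
# Confirm a couple more of the overridden kernel settings
sysctl fs.inotify.max_user_instances vm.max_map_count
# Expected output based on 90-lxd-override.conf:
# fs.inotify.max_user_instances = 1048576
# vm.max_map_count = 262144
# Confirm the new open-file limit from limits.conf (run as a regular user)
ulimit -n
# 1048576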
https://docs.rockylinux.org/pt/books/lxd_server/01-install/
2022-09-25T08:13:33
CC-MAIN-2022-40
1664030334515.14
[]
docs.rockylinux.org
Can't import the XML data The import depends on the speed of your internet connection and on the speed of your server. If you receive errors or the import is incomplete, you probably have a hosting issue. The best approach is to contact your hosting provider and ask them to increase the memory limit, the upload file size, etc. This is not a real WordPress problem, and it's not related to the theme or to the plugins installed. It simply happens because of limits in your hosting environment. Alternatively, don't check the option "Download and import file attachments" during the import; in that case, however, the images will not be imported.
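If you want to see the limits currently in effect before contacting your host, a quick sketch is below; note that the PHP CLI can use a different php.ini than your web server, so treat this as a rough check only.
php -i | grep -E 'memory_limit|upload_max_filesize|post_max_size|max_execution_time'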
https://docs.wphotelier.com/article/51-cant-import-the-xml-data
2022-09-25T07:06:06
CC-MAIN-2022-40
1664030334515.14
[]
docs.wphotelier.com
Moisture origin and stable isotope characteristics of precipitation in southeast Siberia Kostrova, Svetlana S.; Meyer, Hanno; Fernandoy, Francisco; Werner, Martin; Tarasov, Pavel E., 2019: Moisture origin and stable isotope characteristics of precipitation in southeast Siberia. In: Hydrological Processes, Band 34, 1: 51 - 67, DOI: 10.1002/hyp.13571. The paper presents oxygen and hydrogen isotopes of 284 precipitation event samples systematically collected in Irkutsk, in the Baikal region (southeast Siberia), between June 2011 and April 2017. This is the first high‐resolution dataset of stable isotopes of precipitation from this poorly studied region of continental Asia, which has a high potential for isotope‐based palaeoclimate research. The dataset revealed distinct seasonal variations: relatively high δ18O (up to −4‰) and δD (up to −40‰) values characterize summer air masses, and lighter isotope composition (−41‰ for δ18O and −322‰ for δD) is characteristic of winter precipitation. Our results show that air temperature mainly affects the isotope composition of precipitation, and no significant correlations were obtained for precipitation amount and relative humidity. A new temperature dependence was established for weighted mean monthly precipitation: +0.50‰/°C (r2 = 0.83; p < 0.01; n = 55) for δ18O and +3.8‰/°C (r2 = 0.83; p < 0.01; n = 55) for δD. Secondary fractionation processes (e.g., contribution of recycled moisture) were identified mainly in summer from low d excess. Backward trajectories assessed with the Hybrid Single‐Particle Lagrangian Integrated Trajectory (HYSPLIT) model indicate that precipitation with the lowest mean δ18O and δD values reaches Irkutsk in winter, related to moisture transport from the Arctic. Precipitation originating from the west/southwest with the heaviest mean isotope composition reaches Irkutsk in summer, thus representing moisture transport across Eurasia. Generally, moisture transport from the west, that is, the Atlantic Ocean, predominates throughout the year. A comparison of our new isotope dataset with simulation results using the European Centre/Hamburg version 5 (ECHAM5)‐wiso climate model reveals a good agreement of variations in δ18O (r2 = 0.87; p < 0.01; n = 55) and air temperature (r2 = 0.99; p < 0.01; n = 71). However, the ECHAM5‐wiso model fails to capture observed variations in d excess (r2 = 0.14; p < 0.01; n = 55). This disagreement can be partly explained by a model deficit in capturing regional hydrological processes associated with secondary moisture supply in summer. A long‐term precipitation stable isotope dataset covering every month of the year at least 3 times has been obtained for Irkutsk in southeast Siberia. Distinct seasonal variations are mainly due to changes in air temperature. A new reliable and representative temperature dependency was established for weighted mean monthly precipitation: +0.50‰/°C for δ18O and +3.8‰/°C for δD. Subjects: atmospheric circulation; Baikal region; d excess; ECHAM5‐wiso climate model; HYSPLIT model; stable oxygen and hydrogen isotopes of precipitation. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
https://e-docs.geo-leo.de/handle/11858/8474
2022-09-25T08:19:40
CC-MAIN-2022-40
1664030334515.14
[]
e-docs.geo-leo.de
Make files available in your tool's working directory This page will introduce a situation in which you may want to configure the standard procedure on the CGC for storing files. Default handling of input and output files Each. This default handling of files has several implications, which it is often convenient to override: - A tool cannot, in general, write to its input files since they are not in the tool's working directory. If you need your tool to write to its input files, you can copy them to the tool's working directory. - A tool cannot in general report one of its input files as an output file. If you need your tool to pass through an input file as an output (without modifying the file), you can create a symbolic link from the input file to the tool's working directory. -. Stage Input is a setting on the Tool Editor that enables you to copy an input file to a tool's working directory, or to create a symbolic link from the file to the tool's working directory. See the documentation on stage input for more details. This. The. On this page - Default handling of input and output files - Make the indexing tool output available in your tool's current working directory - Configure the tool to unpack the TAR archive Make the indexing tool output available in your tool's current working directory - On the Inputs tab of the Tool Editor click the + button to add an input port. - Set the ID of the input to e.g. reference. You can use another value as the ID (the field allows only alphanumeric characters and underscore), but note that you will need to modify the Javascript expression below to match the ID you have entered. - Set the value in the Type field to File. - In the Label field set the value which will be displayed in the visual interface. - Set the value in the File Types field to .TAR. - Under Stage Input select Link. - Click Save. This will create a symbolic link in your tool's working directory to the archive containing the index files. The procedure above can be adapted to create a symbolic link from other input files into a tool's working directory. To adapt the procedure, make sure to replace .TAR in step 5 with the extension of the input file(s) you are using. Once you have made the archive file available in your tool's working directory, configure your tool to unpack it. Configure the tool to unpack the TAR archive The following procedure explains how to configure your tool to unpack the input TAR archive. - Navigate to the Apps tab in your project. - Click the pencil icon next to the tool you want to configure. - Navigate to the General tab in the Tool Editor. - Click + in the Base Command section. If the field(s) in the Base Command section have already been populated, copy the content of each field to the first blank field below it, until the very first field in the section becomes blank. - Click </> next to the first field. - Paste the following code: { var index_files_bundle = $job.inputs.reference.path.split('/').slice(-1) return 'tar -xf ' + index_files_bundle + ' ; ' } The first line of the expression retrieves the name of the archive file using the $job object. The second line appends the retrieved file name to the command that will unpack the archive file. This Javascript expression assumes that the ID of the input port that takes the TAR archive is reference. Please make sure to replace referencein the above code with the ID value of your tool's input port that takes the archive file. - Click Save. - Click Save in the top-right corner of the Tool Editor. 
Your tool is now configured to unpack a TAR archive it receives as its input.
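To make the effect of the stage-input setting and the Javascript expression concrete, here is a purely illustrative sketch of what happens in the tool's working directory at runtime; the archive name hg38_bwa_index.tar and the input path are hypothetical, not part of the CGC documentation.
# Illustrative only: what "Stage Input: Link" plus the dynamic base command amount to at runtime
ln -s /path/to/inputs/hg38_bwa_index.tar .   # symbolic link created by the Link stage-input setting
tar -xf hg38_bwa_index.tar ;                 # prepended by the Javascript expression on the reference port
# ...followed by the rest of the tool's own base command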
https://docs.cancergenomicscloud.org/docs/make-files-available-in-your-tools-working-directory
2022-09-25T08:10:07
CC-MAIN-2022-40
1664030334515.14
[]
docs.cancergenomicscloud.org
Regexp Query A regexp query finds documents containing terms that match the specified regular expression. Please note that the regex query is a non-analytic query, meaning it won’t perform any text analysis on the query text. { "regexp": "inter.+", "field": "reviews.content" } A demonstration of a regexp query using the Java SDK can be found in Searching from the SDK.
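If you want to try the query outside an SDK, the Search service also accepts the same JSON over its REST endpoint. The host, credentials, and index name below are placeholders, and the default Search REST port 8094 is assumed; adjust all of them to your deployment.
curl -u Administrator:password -X POST \
  -H 'Content-Type: application/json' \
  http://localhost:8094/api/index/travel-sample-index/query \
  -d '{"query": {"regexp": "inter.+", "field": "reviews.content"}}'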
https://docs.couchbase.com/server/current/fts/fts-supported-queries-regexp.html
2022-09-25T08:00:02
CC-MAIN-2022-40
1664030334515.14
[]
docs.couchbase.com
Database Visibility System Requirements This page describes the hardware and software requirements for using Database Visibility. Hardware Requirements Hardware requirements vary depending on database activity. If your database activity increases, you may need to adjust your hardware configuration. The machine running the Database Agent should meet the following hardware requirements: - 1 GB of heap space and an additional 512 MB of heap space for each monitored database instance. For less busy databases, you may reduce the heap space to 256 MB per monitored database instance. - 2 GHz or higher CPU. Database Instance: A database instance can be a node in an Oracle RAC, MongoDB, or Couchbase cluster, a standalone collector, or a sub-collector. This table shows sample calculations for heap space allocation: AppDynamics Controller Sizing Requirements The Controller database should meet the following hardware requirements: - 500 MB of disk space per collector per day - 500 MB of disk space for the Events Service per day. By default, the Events Service retains data for 10 days. See Controller System Requirements. Note: The Database Agent requires the Events Service. Start the Events Service before you start the Database Agent. Software Requirements - The Database Agent runs on a Java Virtual Machine. You must have Java >= 1.8. - The operating systems Linux and Windows are supported. Network Requirements - The machine on which the database is running, or the machine you want to monitor, must be accessible from the machine where the Database Agent is installed and running. This machine must have a network connection, internet or intranet. - If your databases are behind a firewall, you must configure the firewall to permit the machine running the Database Agent program access to the databases. The database listener port (and optionally the SSH or WMI port) must be open. - The network bandwidth used between the agent and the controller is approximately 300 KB per minute per collector for a large database with 200 clients using 50 schemas, processing about 10,000 queries a minute. The actual numbers depend on the type of database server, the number of individual schemas on the server, and the number of unique queries executed daily, and therefore vary.
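As a worked example of the heap-space guideline above: monitoring 4 busy database instances needs roughly 1 GB + 4 x 512 MB = 3 GB of heap. The numbers are illustrative, the jar path is a placeholder, and -Xmx is the standard JVM option rather than anything specific to AppDynamics.
# Illustrative only: 1 GB base + 4 instances x 512 MB = 3 GB heap for the agent JVM
java -Xmx3g -jar <database-agent-jar>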
https://docs.appdynamics.com/appd/22.x/22.6/en/database-visibility/database-visibility-system-requirements
2022-09-25T09:18:38
CC-MAIN-2022-40
1664030334515.14
[]
docs.appdynamics.com
toctoc Running CagesRunning Cages How to run your deployed Cages. Server-SideServer-Side Automatically run your Cage code by passing the Cage name and the payload into the evervault.run() function of your Evervault SDK. Or, send an HTTPS POST request with a JSON payload and API-Key header to - Node.js - Python - Ruby javascript// `encryptedData` must be an Objectconst result = await evervault.run('YOUR-CAGE-NAME', encryptedData); Run your application to see the result. Return the result to the client, or forward it to a third party API via an HTTP request. All outbound HTTP requests are logged, and are shown in your team's Dashboard. The Node.js SDK is pre-initialized in all Cages as the globally-scoped evervault object. This allows you to encrypt the result, and store it in your database. Client-SideClient-Side Run your cage client-side by creating a run token using one of our server-side SDKs and then sending an HTTPS POST request from your client. For security, run tokens are single use, have a five minute time to live and are only valid when the same payload is included in both token creation and when running the cage. - Node.js - Python - Ruby javascript// `payloadForCageRun` must be an Objectconst token = await evervault.createRunToken('YOUR-CAGE-NAME', payloadForCage); Send an HTTPS POST request to with the JSON payload and the run token included in the authorization header. curl --request POST \--url<CAGE-NAME> \--header 'Authorization: Bearer <RUN-TOKEN>' \--header 'Content-Type: application/json' \--data <PAYLOAD-FOR-CAGE> Cage IP WhitelistCage IP Whitelist You can restrict your Cage to only run when invoked from specified IP addresses or CIDR blocks. If your Cage is invoked from a IP which isn’t included in your whitelist, you will receive a 403 HTTP status code with the Evervault error header: x-evervault-error-code: forbidden-ip-error. You can configure the IP Whitelist from the Evervault dashboard by going to Dashboard -> Cage -> IP Whitelist
https://docs.evervault.com/concepts/cages/running-cages
2022-09-25T08:02:58
CC-MAIN-2022-40
1664030334515.14
[]
docs.evervault.com
Autopilot Install and Setup Installing Autopilot Prerequisites Prometheus Autopilot requires a running Prometheus instance in your cluster. If you don’t have Prometheus configured in your cluster, refer to the Prometheus and Grafana documentation to set it up. Once you have it installed, find the Prometheus service endpoint in your cluster. Depending on how you installed Prometheus, the precise steps to find this may vary. In most clusters, you can find a service named prometheus: kubectl get service -n kube-system prometheus NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE prometheus LoadBalancer 10.0.201.44 52.175.223.52 9090:30613/TCP 11d In the example above, the prometheus service (port 9090) becomes the Prometheus endpoint. Portworx uses this endpoint in the Autopilot Configuration section. Why? prometheus is the name of the Kubernetes service for Prometheus in the kube-system namespace. Since Autopilot also runs as a pod in the kube-system namespace, it can access Prometheus using its Kubernetes service name and port. Configuring the ConfigMap Set the url value in the following ConfigMap to your Prometheus service endpoint, if it is different. Once set, apply this ConfigMap in your cluster: apiVersion: v1 kind: ConfigMap metadata: name: autopilot-config namespace: kube-system data: config.yaml: |- providers: - name: default type: prometheus params: url: min_poll_interval: 2 This ConfigMap serves as the configuration for Autopilot. Installing Autopilot To install Autopilot, fetch the Autopilot manifest from the Portworx spec generator by clicking here and apply it in your cluster. Autopilot with PX-Security If you’re installing Autopilot with PX-Security using the Operator, you must modify the StorageCluster yaml. Add the following PX_SHARED_SECRET env var to the autopilot section: autopilot: ... env: - name: PX_SHARED_SECRET valueFrom: secretKeyRef: key: apps-secret name: px-system-secrets Upgrading Autopilot To upgrade Autopilot, change the image tag in the deployment with the kubectl set image command. The following example upgrades Autopilot to the 1.3.0 version: kubectl set image deployment.v1.apps/autopilot -n kube-system autopilot=portworx/autopilot:1.3.0 deployment.apps/autopilot image updated This example assumes Autopilot is installed in the kube-system namespace. Change the namespace according to where it’s installed in your cluster.
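A short sketch of applying and checking the ConfigMap above; the autopilot-config.yaml file name is just a local placeholder for wherever you saved the spec.
# Save the ConfigMap above as autopilot-config.yaml, then apply it
kubectl apply -f autopilot-config.yaml
# Confirm it landed in the kube-system namespace
kubectl -n kube-system get configmap autopilot-config -o yaml
# Double-check the Prometheus service name and port the url value should point at
kubectl -n kube-system get service prometheus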
https://docs.portworx.com/operations/operate-kubernetes/autopilot/how-to-use/install-autopilot/
2022-09-25T08:01:36
CC-MAIN-2022-40
1664030334515.14
[]
docs.portworx.com
Overview of Profiles A profile is a collection of all the data associated with an entity. An entity is a customer, supplier, or employee - the people and organizations that form the core of your business. In order to maintain seamless business operations, you must make sure that all entity details are available in your system. Reltio Identity 360 enables you to create and maintain your entities using profiles. These profiles can be created in the following ways: - Loading profiles You can use the Data Loader application in Console to load profiles into Reltio Identity 360. For more information, see Getting Started with Reltio Identity 360. - Creating a new profile You can create new profiles in Reltio Identity 360 using the Profile page in Hub. This page includes various attributes and facets, such as Address, Identifiers, Contact, Relationship, and Privacy Preferences. For more information, see Creating Profiles in Reltio Identity 360.
https://docs.reltio.com/identity360/identity360profiles.html
2022-09-25T08:10:37
CC-MAIN-2022-40
1664030334515.14
[]
docs.reltio.com
Trendmaster Auto Charting combines many charting functions into one tool that automatically plots all of the important data that any trader would want directly onto their chart. This is not only a time-saving tool in general but effectively removes any guesswork regarding finding and plotting important levels. This tool is also a great “Confluence” tool with other Trendmaster Indicators as multiple information points converge to show key levels for price reversal. This Tool is designed to work on EVERY trading pair listed on Tradingview to effectively show key levels automatically. Some of the Tools in this Indicator are: Auto S/R (Support/Resistance) Levels, Auto Volume Blocks, Swing Points, Candle Identifier, High Timeframe Opens, Auto Fibonacci Retracements, Auto Trendlines, Price Projection - “Experimental”, Daily, Weekly, Monthly Ranges, and Global Market Sessions - New York, London, Asia.
https://docs.trendmaster.com/auto-charting-tools/overview
2022-09-25T09:01:14
CC-MAIN-2022-40
1664030334515.14
[]
docs.trendmaster.com
Product History All changes within a product repository are logged by the system and can be viewed in a product history. The product history is used to: Log changes in the repository you are currently working in (master or channel repository) Log changes in a syndication source directory This can be the master repository of the current organization or a repository of the parent organization. For details about possible sources for syndication, see Product Syndication. Changes are tracked when a product is modified: In Intershop Commerce Management During product syndication and synchronization processes During product batch processes The product history provides information about added products, updated products, and deleted products. For updated products, update details are provided, such as changed attributes (including old and new attribute values), added or deleted attributes, changes to attachments, product links, content relations, product variations, or product bundles. Each product history entry includes the user and the modification date. Preferences can be set to activate the product history feature and to configure how long changes are preserved in the product history. The product history can be filtered by various criteria, such as user or date.
https://docs.intershop.com/icm/latest/olh/icm/en/catalogs_products/concept_product_history.html
2022-09-25T08:50:07
CC-MAIN-2022-40
1664030334515.14
[]
docs.intershop.com
Adding Refund Information For returned products (see Adding Return Confirmation) for which money is refunded manually, you can add refund information. To do so: - Search for the order. Use the preset Business query and the query Not refunded returns. For details, see Searching Orders. - In the result list, click the order number you want to edit. This opens the order detail page. - Open the Returns tab. This lists the returns registered for this order. - In the Action column, click the icon. This opens the refund information dialog. Note: The icon is only available for returns with the refund status open. Figure 1. Add refund information - Specify the refund details as necessary, then click Save. Otherwise, click Cancel to discard your settings.
https://docs.intershop.com/iom/3.7/olh/omt/en/topics/managing_orders/task_adding_refund_information.html
2022-09-25T07:59:52
CC-MAIN-2022-40
1664030334515.14
[]
docs.intershop.com
26. Trademark¶ The Digital Rebar name and mark are maintained by RackN; RackN requests that vendors obtain permission for commercial uses. 26.1. Name¶ Digital Rebar usage options are as follows: The project name is “Digital Rebar” as two words, both capitalized Acceptable alternatives: DigitalRebar (avoid in written text in favor of Digital Rebar) rebar.digital (the “.” is required in this format) dR or DR (do not use Dr) Internally within the project, it is acceptable to just say “Rebar” 26.2. Logos¶ It is acceptable to use the Digital Rebar logos when referencing the project or workloads that leverage the project. Large Icon: Small Icon 26.3. Mascot¶ Cloud Native Metal Bear
https://docs.rackn.io/en/latest/Trademark.html
2022-09-25T07:38:24
CC-MAIN-2022-40
1664030334515.14
[array(['_images/digital_rebar_small.png', '_images/digital_rebar_small.png'], dtype=object)]
docs.rackn.io
Working with the Rules Builder You can use the Rules builder to identify the attributes and set the relevant conditions that you want to include as part of your match rule. You can search and select the attributes to include in the match rule. By default, the Rules builder shows the list of recommended attributes. The attributes displayed in the Recommended attributes list are based upon the most frequently used attributes in the match rules. To select from the broader list that includes all the attributes, click the VIEW ALL ATTRIBUTES link. As you select the required attributes, these are listed in the Rules builder and the BUILD MATCH RULES button becomes active. Click the BUILD MATCH RULES button to use the Rules builder’s capabilities. Based on the selected attributes, your first Set contains a row for each comparator against each attribute along with an extra row. As an example, the Phone/Number, Address/City, and Name attributes were selected as seen in the screen-shot shown above. Now, you can specify the match rule criterion based on each of these attributes. In addition, you can also add another set that contains any other attributes that you want to match upon. Working with rows in a Set Let us understand how to create a match rule. When you hover-over (mouse-over) a row, the options to Add, Duplicate or Delete a row are displayed. The Delete row option is available when you click the More options icon as shown below. - Select the operator - This could be And, And not, Or, Or not - Select the comparator - Select the attribute - Select the value of the attribute, if needed - Specify the Row settings - Ignore in token, Select match token, Select comparator class and/or select cleanser Multiple attributes within a set can have any of the four operators. The attributes can be specified with an And, And not, Or, Or not condition. Select the Comparator that you want to set for each attribute that you have already selected. For example, select the Fuzzy comparator for the Name attribute if you want the two slightly similar names being compared to be counted as potential matches. The list of comparators is shown below. For more details about these comparators/match rule operators, please see Comparison Operators. For example, you may want to specify that Name equals John, in that case, your row entry would look like the one shown below. Click the Row settings drop-down to see the available options in a dialog box. You can choose to ignore the selected attribute in match rule tokenization by turning On the Ignore in token option. For more details, see ignoreInToken. Click the Select match token drop-down to select the match token applicable to the selected attribute. You can choose from 15 Match tokens. For more details, see Match Token Generation. Click the Select comparator class drop-down to select a comparator class applicable to the selected attribute. You can choose from 16 Comparator classes. For more details, see Comparator Classes. Click the Select cleanser drop-down to select the cleanser applicable to the selected attribute. For more information, see Name Dictionary Cleanser. After you make the required selections in the Row settings dialog box, you can click anywhere outside the dialog box to apply your selection and close the dialog box. If you want to add more details to your match rule, you can either add a new row to the same set, or, add a new set. 
Working with multiple Sets While working with complex match rules, you may need to add more than one set of match instructions. In that case, you can easily add or duplicate a set. You can also delete a set of instructions, if it is not needed. To use the various options at the Set level, click the More options icon for the set as shown below. After you have completed setting a match rule, the Rules builder would look like the image shown below. The Match queries section displays the conditions that you have set for the selected attributes, including the selections made as part of the row settings. Click SAVE to create your match rule. The newly created match rule is listed in the Match Rules section on the Entity details page for the Entity type which you had selected before creating the match rule. You can edit an existing match rule either in the Rules Builder or the Advanced Editor. While creating a match rule, you can only select the Match Rule Type as either Suspect for review or Automatic merge. However, while editing a match rule, even the Relevance-based match rules and/or user-defined (custom) match rules are displayed in the Advanced Editor or Rules Builder, as appropriate. You can switch to the Advanced Editor view to work with the match rules in the JSON format.
https://docs.reltio.com/datamodeler/workingwithrulesbuilder.html
2022-09-25T08:46:28
CC-MAIN-2022-40
1664030334515.14
[array(['../images/ruleconfigurator/rc_rulesbuilder_1.png', None], dtype=object) array(['../images/ruleconfigurator/rc_rulesbuilder_2.png', None], dtype=object) array(['../images/ruleconfigurator/rc_rulesbuilder_3.png', None], dtype=object) array(['../images/ruleconfigurator/rc_rulesbuilder_4.png', None], dtype=object) array(['../images/ruleconfigurator/rc_rowoptions.png', None], dtype=object) array(['../images/ruleconfigurator/rc_row_operators.png', None], dtype=object) array(['../images/ruleconfigurator/rc_comparatorlist.png', None], dtype=object) array(['../images/ruleconfigurator/rc_namejohn.png', None], dtype=object) array(['../images/ruleconfigurator/rc_row_settings.png', None], dtype=object) array(['../images/ruleconfigurator/rc_set_level.png', None], dtype=object) array(['../images/ruleconfigurator/rc_first_rb.png', None], dtype=object)]
docs.reltio.com
How to show/hide text? An element on a page can be toggled open or closed with just a click. Modern web browsers support the HTML5 markup language, allowing use of the <details> and <summary> elements to create a disclosure widget in which information is visible only when the widget is toggled into an ‘open’ state. Preparation: In order for the HTML code to validate correctly when using these HTML5 elements, the DOCTYPE must be set to <!DOCTYPE html> <html <?php echo HTML_PARAMS; ?>> This code is in the file includes/templates/YOURTEMPLATE/common/html_header.php Modern Zen Cart templates have this already, but older templates may still be based on the HTML/XHTML standards. HTML5 is designed to be backwards compatible, so changing the DOCTYPE on these templates should have no ill effect. Implementation: The disclosure widget can be added to any page such as a Define page, EZ-Page, category description, product description … Example code <details> <summary>What is Zen Cart?</summary> <p> ... </p> </details> The <summary> element should be the first child element of the <details> element. By default the <summary> ... </summary> heading is all that is displayed, with the <p> ... </p> content being hidden. A standard arrow icon is shown next to the heading indicating that it needs to be clicked on. Try it out by copying the above code to one of your pages. Customization: Although less feature-rich than traditional JavaScript methods of show/hide, these elements can still be customized. Changing the icon, formatting the elements and changing the default state can all be achieved by edits to the HTML code or CSS stylesheet.
https://docs.zen-cart.com/user/template/disclosure_widget/
2022-09-25T07:18:30
CC-MAIN-2022-40
1664030334515.14
[]
docs.zen-cart.com
Frequently Asked Questions¶ Where to report bugs or improvements?¶ Please visit our GitHub project to report bugs or request improvements. Slack channel¶ Visit us on Slack in the channel #t3g-ext-blog Contributions¶ Any contributor is welcome to join our team. All you need is a GitHub account. If you already have an account, visit the GitHub project. Clone / git repo¶ The git repository is public and you can clone it like any other git repository: git clone
https://docs.typo3.org/p/t3g/blog/9.1/en-us/FAQ/Index.html
2022-09-25T07:28:07
CC-MAIN-2022-40
1664030334515.14
[]
docs.typo3.org
This section provides a list of messages that you may encounter with the use of the LifeKeeper MQ Recovery Kit. Where appropriate, it provides an additional explanation of the cause of an error and the necessary action to resolve the error condition. Because the MQ Recovery Kit relies on other LifeKeeper components to drive the creation and extension of hierarchies, messages from these other components may also be encountered.
https://docs.us.sios.com/spslinux/9.6.2/en/topic/mq-error-messages
2022-09-25T07:17:01
CC-MAIN-2022-40
1664030334515.14
[]
docs.us.sios.com
🏰 Amman - The Capital of Jordan¶ With a population of 4 million people, Amman is the largest city in Jordan, a country of 10 million people in total. Amman is considered one of the oldest cities in the Middle East. When you land at Queen Alia airport and check yourself into one of the many global hotels around Amman, you should first head to the “Albalad” area, which means “Downtown”. Despite the growing urban area that has brought many changes, much remains of its old character; it is considered one of the oldest living areas in the history of Amman, originally inhabited around 6500 B.C., where a mix of old buildings collides with modern vibes. You should definitely start your day with breakfast at Hashem restaurant in Albalad, where you get to taste some of the best falafel in the country. Afterwards, with a full tummy, you can burn those calories touring the old shopping centers of Albalad, where countless goods and memorabilia can be found. Once you're done shopping, choose one of the many restaurants that serve local cuisine in Albalad, followed by sheesha and kunafa from the oldest kunafa place in Jordan, “Habiba”. You also get to visit the Amman Citadel, a historic site in the center of downtown and one of the sites most visited by tourists. And by the way, Amman is one of the oldest inhabited cities in the world, with evidence of settlement from 7250 B.C.! Now it is considered one of the most modernized cities in the Arab world. As night falls, you will surely want to experience the nightlife in Jordan, and within walking distance you can head to Rainbow Street, which lies within what we call “The 1st circle” area. Rainbow Street is a very old area filled with cafes, bars and restaurants. You might want a quiet time in La Calle bar, or to party in a livelier way, in which case you should definitely head to Scrabs bar. Amman's growing nightlife scene is shaped by Jordan's young population; it is very easy to find nightclubs, music bars, and shisha lounges, which are changing the city's old image as the conservative capital. Amman has become one of the most liberal and westernized cities of the Arab world. Also close to the Rainbow Street area is a place we call “Lwaibdeh”, which is considered the number one place where expats prefer to live, for its western and, at the same time, eastern vibes that come together to make a very attractive scene in Jordan; it is also considered the art hub of Amman. From the 1st circle to the 8th circle, all the familiar modern urban scenes can be found along the way, ending at the airport road, which is less than an hour away from the Dead Sea. Generally, the most important areas in Amman are named after hills; in Arabic, a hill is called a “Jabal”. Amman was initially built on 7 hills but now spans over 19. Jordan has been receiving refugees for the last century and has become the home of many people from Palestine, Syria, Iraq, Yemen, etc., which is one of the reasons Amman has become one of the largest capitals in the Middle East. Because of its geographical location, people often think that Jordan is not a stable and safe country, but once you set foot in Amman, you will notice a very different picture of this Middle Eastern country.
https://slurp.readthedocs.io/en/latest/amman.html
2022-09-25T09:14:14
CC-MAIN-2022-40
1664030334515.14
[]
slurp.readthedocs.io
Intermediate return works in a similar manner to Return, with one big difference: an Intermediate return does not end the execution; instead, it allows the process to continue executing. An Intermediate return only does something when the Process is triggered by an HTTP Trigger. It allows a result to be given back to the caller before a time-consuming process begins. Intermediate returns are drawn as an alternative execution path and can only be attached to a Task, Call Subprocess or Code element. While it's possible to have multiple Intermediate returns in a Process, the intermediate result will only be returned to the caller for the first Intermediate return encountered. Example usage of Intermediate return.
https://docs.frends.com/en/articles/5270902-intermediate-return
2022-09-25T07:23:51
CC-MAIN-2022-40
1664030334515.14
[array(['https://downloads.intercomcdn.com/i/o/70880503/013e8e4a3d291b4ba2174c7e/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/70879720/d51d21183f4edcceff32e365/image.png', None], dtype=object) ]
docs.frends.com
In previous sections we covered configuration of the network. Now we are going to create the first instance. As discussed in Computing Resources Used in this Tutorial, we need two disks. This section also describes how to create the second disk. - Go to the “EC2” service. It may be necessary to enter “EC2” in the top search box of the AWS Console to select EC2. - Click “Launch Instance”. - Click “Operating System”. Enter the name of the operating system and select the specific Machine Image. Before selecting the operating system and its version, please review the system requirements for both LifeKeeper and the application you are going to protect. - Click “Instance Size”. This tutorial uses t2.micro since it is defined as the minimum system requirement for evaluation of LifeKeeper and it may qualify for Free Tier usage. Again, it may be necessary to select a larger instance size depending on the application to be protected. Once the instance size has been chosen, click “Next: Configure Instance Details”. - In the “Configure Instance Details” wizard, ensure that the following parameters are used: - VPC: LK-VPC - Subnet: LK-subnet-1 - Network Interface: Please enter 10.20.1.10 for the first instance (node-a). Once these values are confirmed, click “Next: Add Storage”. - The wizard has already created the first storage device (volume). Click “Add New Volume” and create a new disk. Per the minimum requirements previously discussed in this guide, an 8GiB disk should be sufficient. However, this may differ depending on the volume of the data to be protected (replicated). Once the selection is confirmed, click “Next: Add Tags”. - Add a tag with key “Name” and value “Node-A” to make it easier to identify the instance from the AWS EC2 Console. Once the “Name” tag is defined, click “Next: Configure Security Group”. - Select the Security Group previously created in an earlier section (LK-SG). - Review and confirm all selections, then click “Launch”. - In order to access the newly created Linux EC2 instance, we will connect via ssh. AWS uses a key pair to authenticate user ssh sessions. Create a name for the key pair and download it to your local system. The private key will be needed to access the EC2 instance. Click “Launch Instances”. - The “Launch Status” page appears. Select the instance ID to view its details. - Once the instance is created, we need to change the network configuration. As the active node changes from time to time and we are using a Virtual IP address, we should disable the source/destination check on the network interface. To do this, select the instance, then select Actions > Networking > Change source/destination check. - On the “Source / destination check” page, check “Stop” and save the change. Now the instance is created and ready for us to connect. - The details of the instance may be reviewed on this page. Select “Connect” at the top of the page to see instructions on how to connect. - Here we can view the instructions that explain how to connect to the instance. On a Windows client, please refer to Setup X Window client software on Microsoft Windows for details.
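As a sketch of the ssh step described above: the key file name is whatever you chose when creating the key pair, the public IP or DNS name comes from the instance details page, and the login user depends on the selected Machine Image (for example, ec2-user on Amazon Linux or RHEL images).
# Restrict permissions on the downloaded private key, then connect
chmod 400 node-a-key.pem
ssh -i node-a-key.pem ec2-user@<public-ip-or-dns-of-node-a>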
https://docs.us.sios.com/spslinux/9.6.2/en/topic/creating-the-first-ec2-instance
2022-09-25T08:04:19
CC-MAIN-2022-40
1664030334515.14
[]
docs.us.sios.com
v5.6.1¶ Made treatment of CCITT image photometry ignore BlackIs1, since this seems more consistent with other programs. v5.6.0¶ Improved support for extracting the contents of inline images. Marked some “always should have been private” functions as deprecated with removal planned for v6, mainly in pikepdf.models.image. Fixed all Python documentation style inconsistencies. v5.5.0¶ Fixed undefined behavior on creating NameTree on direct object. Thanks @willangley. Fixed sdist with coverage build. Added support for specifying QPDF’s library build directory, for compatibility with QPDF’s transition to cmake. QPDF_* environment variables will modify build paths even when CFLAGS is defined. Fixed rare case where GIL was not held while discarding a certain exception. Now using cibuildwheel 2.9.0. Many typo fixes. Thanks @PabloAlexis611. v5.4.2¶ Fixed Pages.__eq__ not returning NotImplemented when it ought to. Fixed possible problems with NameTree and NumberTree.__eq__ operators. Changed to SPDX license headers throughout. v5.4.1¶ Chores. Fixed ReadTheDocs build, updated versions, fixed a test warning, improved coverage, modernized type annotations. v5.4.0¶ New feature: pikepdf.Job bindings to QPDFJob API. New feature: pikepdf.NumberTree to support manipulation of number trees, mainly for applying custom page labels. Many improvements to pikepdf.NameTree, including the ability to instantiate a new name tree. Several memory leaks were fixed. Rebuilt against pybind11 2.10.0. v5.3.2¶ Build system requires changed to setuptools-scm 7.0.5, which includes a fix to an issue where the pikepdf source distribution reported a version of “0.0” when installed. v5.3.1¶ Fixed issue with parsing inline images, causing loss of data after inline images were encountered in a content stream. The issue only affects content streams parsed with parse_content_stream; saved PDFs were not affected. #299 Build system requires changed to setuptools-scm 7.0.3, and setuptools-scm-git-archive is no longer required. v5.3.0¶ Binary wheels for Linux aarch64 are now being rolled automatically. 🎉 Refactor JBIG2 handling to make JBIG2 decoders more testable and pluggable. Fixed some typing issues around ObjectHelper. Exposed some pikepdf settings that were attached to the private _qpdf module in a new pikepdf.settings module. v5.2.0¶ Avoid a few versions of setuptools_scm that were found to cause build issues. #359 Improved an unhelpful error message when attempting to save a file with invalid encryption settings. #341 Added a workaround for XMP metadata blocks that are missing the expected namespace tag. #349 Minor improvements to code coverage, type checking, and removed some deprecated private methods. v5.1.5¶ Fixed removal of the necessary package packaging, which is needed for import. v5.1.4¶ Reorganized release notes so they are better presented in Sphinx documentation. Remove all upper bound version constraints. Replace documentation package sphinx-panels with sphinx-design. Downstream maintainers will need to adjust this in documentation. Removed use of deprecated pkg_resources and replaced with importlib (and, where necessary for backward compatibility, importlib_metadata). Fixed some broken links in the documentation and READMEs. v5.1.3¶ v5.1.2¶ v5.1.1¶ v5.1.0¶ Rebuild against QPDF 10.6.3. Improvements to Makefile for Apple Silicon wheels. v5.0.1¶ Fixed issue where Pdf.check() would report a failure if JBIG2 decoder was not installed and the PDF contains JBIG2 content.
v5.0.0¶ Some errors and inconsistencies in the “pdfdoc” encoding provided by pikepdf have been corrected, in conjunction with fixes in libqpdf. libqpdf 10.6.2 is required. Previously, looking up the number of a page, given the page, required a linear search of all pages. We now use a newer QPDF API that allows quicker lookups.
https://pikepdf.readthedocs.io/en/latest/releasenotes/version5.html
2022-09-25T08:00:02
CC-MAIN-2022-40
1664030334515.14
[]
pikepdf.readthedocs.io
Install Portworx on OpenShift on vSphere This article provides instructions for installing Portworx on OpenShift running on vSphere. To accomplish this, you must: - Install the Portworx Operator using the Red Hat OperatorHub - Deploy Portworx using the Operator - Verify your installation Once you’ve successfully installed and verified your Portworx installation, you’re ready to start using Portworx. To get started after installation, you may want to perform two common tasks: - Create a PersistentVolumeClaim - Set up cluster monitoring Prerequsites - Your cluster must be running OpenShift 4 or higher. - You must have an OpenShift cluster deployed on infrastructure that meets the minimum requirements for Portworx. - Ensure that any underlying nodes used for Portworx in OCP have Secure Boot disabled. Install the Portworx Operator Before you can install Portworx on your OpenShift cluster, you must first install the Portworx Operator. Perform the following steps to prepare your OpenShift cluster by installing the: Deploy Grant permissions Portworx requires by creating a secret with user credentials: Create a secret using the following template. Retrieve the credentials from your own environment and specify them under the datasection: apiVersion: v1 kind: Secret metadata: name: px-vsphere-secret namespace: kube-system type: Opaque data: VSPHERE_USER: <your-vcenter-server-user> VSPHERE_PASSWORD: <your-vcenter-server-password> VSPHERE_USER: to find your vSphere user, enter the following command: echo '<vcenter-server-user>' | base64 VSPHERE_PASSWORD: to find your vSphere password, enter the following command: echo '<vcenter-server-password>' | base64 Once you’ve updated the template with your user and password, apply the spec: oc apply -f <your-spec-name> Ensure ports 17001-17020 on worker nodes are reachable from the control plane node and other worker nodes. If you’re running a Portworx Essentials cluster, then create the following secret with your Essential Entitlement ID: oc -n kube-system create secret generic px-essential \ --from-literal=px-essen-user-id=YOUR_ESSENTIAL_ENTITLEMENT_ID \ --from-literal=px-osb-endpoint='' Generate the StorageCluster spec To install Portworx with OpenShift, you must generate a StorageCluster spec that you will deploy in your cluster. - Navigate to the Portworx spec generator. Select Portworx Enterprise from the product catalog: On the Product Line page, Select Portworx Enterprise and click Continue to start the spec generator: On the Basic tab, Select Use the Portworx Operator and select the Portworx version you want. Choose Built-in ETCD if you have no external ETCD cluster: Select the Next button to continue. On the Storage tab: - At the Select your environment dialog, select the Cloud radio button. - At the Select cloud platform dialog, select vSphere. - At the bottom pane, enter your vCenter endpoint, vCenter datastore prefix, and the Kubernetes Secret Name you created in step 1 of the Grant the required cloud permissions section: Select the Next button to continue. On the Network tab, keep the default values and select the Next button to continue: On the Customize tab, select the Openshift 4+ radio buttom from the Are you running either of these? dialog box. If you’re using a proxy, you can add your details to the Environment Variables section: If you’re using a private container registry, enter your registry location, registry secret, and specify an image pull policy under the Registry and Image Settings: Select the Finish button to continue. 
Save and download the spec for future reference: Apply the StorageCluster spec You can apply the StorageCluster spec in one of two ways: - Using the OpenShift UI - Using the CLI Apply the spec using the OpenShift UI Within the Portworx Operator page, select the operator Portworx Enterprise Select Create StorageCluster to create a StorageCluster object. The spec displayed here represents a very basic default spec. Copy the spec you created with the spec generator and paste it over the default spec in the YAML view and select the Create button: Verify that Portworx has deployed successfully by navigating to the Storage Cluster tab of the Installed Operators page: Once Portworx has fully deployed, the status will show as Online: Apply the spec using the CLI If you’re not using the OpenShift console, you can create the StorageCluster object using the oc command: Apply the generated specs to your cluster with the oc apply command: oc apply -f px-spec.yaml Using the oc get pods command, monitor the Portworx deployment process. Wait until all Portworx pods show as ready: oc get pods -o wide -n kube-system -l name=portworx Verify that Portworx is deployed by checking its status with the following commands: PX_POD=$(oc get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}') oc exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status Verify your Portworx installation Once you’ve installed Portworx, you can perform the following tasks to verify that Portworx has installed correctly. Verify if all pods are running Enter the following oc get pods command to list and filter the results for Portworx pods: oc get pods -n kube-system -o wide | grep -e portworx -e px portworx-api-774c2 1/1 Running 0 2m55s 192.168.121.196 username-k8s1-node0 <none> <none> portworx-api-t4lf9 1/1 Running 0 2m55s 192.168.121.99 username-k8s1-node1 <none> <none> portworx-kvdb-94bpk 1/1 Running 0 4s 192.168.121.196 username-k8s1-node0 <none> <none> portworx-operator-58967ddd6d-kmz6c 1/1 Running 0 4m1s 10.244.1.99 username-k8s1-node0 <none> <none> prometheus-px-prometheus-0 2/2 Running 0 2m41s 10.244.1.105 username-k8s1-node0 <none> <none> px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-9gs79 2/2 Running 0 2m55s 192.168.121.196 username-k8s1-node0 <none> <none> px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx 1/2 Running 0 2m55s 192.168.121.99 username-k8s1-node1 <none> <none> px-csi-ext-868fcb9fc6-54bmc 4/4 Running 0 3m5s 10.244.1.103 username-k8s1-node0 <none> <none> px-csi-ext-868fcb9fc6-8tk79 4/4 Running 0 3m5s 10.244.1.102 username-k8s1-node0 <none> <none> px-csi-ext-868fcb9fc6-vbqzk 4/4 Running 0 3m5s 10.244.3.107 username-k8s1-node1 <none> <none> px-prometheus-operator-59b98b5897-9nwfv 1/1 Running 0 3m3s 10.244.1.104 username-k8s1-node0 <none> <none> Note the name of one of your px-cluster pods. You’ll run pxctl commands from these pods in following steps. Verify Portworx cluster status You can find the status of the Portworx cluster by running pxctl status commands from a pod. 
Enter the following oc exec command, status Defaulted container "portworx" out of: portworx, csi-node-driver-registrar Status: PX is operational Telemetry: Disabled or Unhealthy Metering: Disabled or Unhealthy License: Trial (expires in 31 days) Node ID: 788bf810-57c4-4df1-9a5a-70c31d0f478e IP: 192.168.121.99 Local Storage Pool: 1 pool POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION 0 HIGH raid0 3.0 TiB 10 GiB Online default default Local Storage Devices: 3 devices Device Path Media Type Size Last-Scan 0:1 /dev/vdb STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC 0:2 /dev/vdc STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC 0:3 /dev/vdd STORAGE_MEDIUM_MAGNETIC 1.0 TiB 14 Jul 22 22:03 UTC * Internal kvdb on this node is sharing this storage device /dev/vdc to store its data. total - 3.0 TiB Cache Devices: * No cache devices Cluster Summary Cluster ID: px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d Cluster UUID: 33a82fe9-d93b-435b-943e-6f3fd5522eae Scheduler: kubernetes Nodes: 2 node(s) with storage (2 online) IP ID SchedulerNodeName Auth StorageNode Used Capacity Status StorageStatus Version Kernel OS 192.168.121.196 f6d87392-81f4-459a-b3d4-fad8c65b8edc username-k8s1-node0 Disabled Yes 10 GiB 3.0 TiB Online Up 2.11.0-81faacc 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core) 192.168.121.99 788bf810-57c4-4df1-9a5a-70c31d0f478e username-k8s1-node1 Disabled Yes 10 GiB 3.0 TiB Online Up (This node) 2.11.0-81faacc 3.10.0-1127.el7.x86_64 CentOS Linux 7 (Core) Global Storage Pool Total Used : 20 GiB Total Capacity : 6.0 TiB The Portworx status will display PX is operational if your cluster is running as intended. Verify pxctl cluster provision status Find the storage cluster, the status should show as Online: oc -n kube-system get storagecluster NAME CLUSTER UUID STATUS VERSION AGE px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d 33a82fe9-d93b-435b-943e-6f3fd5522eae Online 2.11.0 10m Find the storage nodes, the statuses should show as Online: oc -n kube-system get storagenodes NAME ID STATUS VERSION AGE username-k8s1-node0 f6d87392-81f4-459a-b3d4-fad8c65b8edc Online 2.11.0-81faacc 11m username-k8s1-node1 788bf810-57c4-4df1-9a5a-70c31d0f478e Online 2.11.0-81faacc 11m Verify the Portworx cluster provision status . Enter the following oc execcommand, cluster provision-status Defaulted container "portworx" out of: portworx, csi-node-driver-registrar NODE NODE STATUS POOL POOL STATUS IO_PRIORITY SIZE AVAILABLE USED PROVISIONED ZONE REGION RACK 788bf810-57c4-4df1-9a5a-70c31d0f478e Up 0 ( 96e7ff01-fcff-4715-b61b-4d74ecc7e159 ) Online HIGH 3.0 TiB 3.0 TiB 10 GiB 0 B default default default f6d87392-81f4-459a-b3d4-fad8c65b8edc Up 0 ( e06386e7-b769-4ce0-b674-97e4359e57c0 ) Online HIGH 3.0 TiB 3.0 TiB 10 GiB 0 B default default default Create your first StorageClass and PVC For your apps to use persistent volumes powered by Portworx, you must create a StorageClass that references Portworx as the provisioner. Once you’ve defined a StorageClass, you can create PersistentVolumeClaims (PVCs) that reference this StorageClass. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation. Perform the steps in this topic to create and associate StorageClass and PVC objects in your cluster. Create a StorageClass Create a StorageClass spec using the following spec and save it as sc-1.yaml. 
This StorageClass uses CSI: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <your-storageclass-name> provisioner: pxd.portworx.com parameters: repl: "1" Apply the spec using the following oc apply command to create the StorageClass: oc apply -f sc-1.yaml storageclass.storage.k8s.io/example-storageclass created Create a PVC Create a PVC based on your defined StorageClass and save the file: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <your-pvc-name> spec: storageClassName: <your-storageclass-name> accessModes: - ReadWriteOnce resources: requests: storage: 2Gi Run the oc apply command to create a PVC: oc apply -f <your-pvc-name>.yaml persistentvolumeclaim/example-pvc created Verify your StorageClass and PVC Enter the following oc get storageclass command, specifying the name of the StorageClass you created in the steps above: oc get storageclass <your-storageclass-name> NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE example-storageclass pxd.portworx.com Delete Immediate false 24m oc will return details about your StorageClass if it was created correctly. Verify the configuration details appear as you intended. Enter the oc get pvc command; if this is the only StorageClass and PVC you've created, you should see only one entry in the output: oc get pvc <your-pvc-name> NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE example-pvc Bound pvc-dce346e8-ff02-4dfb-935c-2377767c8ce0 2Gi RWO example-storageclass 3m7s oc will return details about your PVC if it was created correctly. Verify the configuration details appear as you intended.
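To confirm that the new PVC can actually be consumed, you can mount it into a throwaway pod. The following spec is an illustrative sketch rather than part of the official procedure — the pod name, image, and mount path are placeholder assumptions, and the claimName must match the PVC you created above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pvc-consumer        # hypothetical name for this test pod
spec:
  containers:
    - name: app
      image: busybox                # any image that can write to the mounted path
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: px-vol
          mountPath: /data
  volumes:
    - name: px-vol
      persistentVolumeClaim:
        claimName: example-pvc      # must match the PVC created above
```

Apply it with `oc apply -f <your-pod-spec>.yaml` and wait for the pod to reach the Running state with `oc get pod example-pvc-consumer`. Once it is running, the Portworx volume backing the PVC has been dynamically provisioned and attached, which gives you a quick end-to-end check of the StorageClass, the PVC, and the Portworx cluster itself.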
https://docs.portworx.com/install-portworx/openshift/openshift-vsphere/
2022-09-25T07:41:46
CC-MAIN-2022-40
1664030334515.14
[]
docs.portworx.com
Getting Started Getting Started with SLWSTK6101 Embedded Software The Blue Gecko Bluetooth Wireless Starter Kit (WSTK) helps evaluate Silicon Labs' Blue Gecko Bluetooth modules and get started with software development. The kits come in different versions with different module radio boards. See for details on current configurations. To get started with the WSTK, download Simplicity Studio 5 (SSv5) and the Bluetooth SDK v3.x and work on the pre-built demos to get some experience. The Bluetooth SDK comes with some prebuilt demos that can be flashed to your EFR32 device and tested using a Smartphone. Here we show how to test three prebuilt demos: - NCP Empty demo - iBeacon demo - Health Thermometer demo Prepare the WSTK Connect a Bluetooth Module Radio Board to the WSTK Main Board as shown in the following figure. Connect the WSTK to a PC using the Main Board USB connector. Turn the Power switch to "AEM" position. Note: At this stage, you might be prompted to install the drivers for the WSTK Main Board but you can skip this for now. Check that the blue USB Connection Indicator LED turns on or starts blinking. Check that the Main Board LCD display turns on and displays a Silicon Labs logo. Before starting to test the demo application note the following parts on the WSTK Main Board: - Temperature & Humidity Sensor - PB0 button - LED0 Kit parts Flash the Demos With your device connected as described above, open SSv5. Select your device in the Debug Adapters view. In the Launcher windows, select Example Projects & Demos tab. Click RUN on the demo of choice. Test the Bluetooth Demos Using a Smartphone Testing the NCP Empty Demo - Load the NCP Empty demo on the target. Open Simplicity Studio with a WSTK and radio board connected and select the corresponding debug adapter. - On the OVERVIEW tab, under "General Information", select the Gecko SDK Suite if it is not selected. On the Example Projects & Demos tab, select the NCP Empty demo and click RUN. This flashes the demo to your device, but it does not start advertising automatically. - At this point, NCP Commander can be used to send BGAPI commands to the kit, using UART. Connections, advertising and other standard Bluetooth Low Energy operation can be controlled via this tool. NCP commander can be found in the Simplicity Studio 5 Tool menu: Launch NCP Commander and then establish the connection to the kit. If everything works correctly, you should see the result of the “sl_bt_system_get_identity_address” command displayed in green: In the ‘Advertise’ menu, click on the + button and then Start to start advertising. On the central side (smartphone), install the EFR Connect app from the App Store, and open it. To find your advertising device, tap the Develop tab, and tap Bluetooth Browser. This shows all advertising devices nearby. Connect to your device by tapping Connect next to "Silabs Example”. Its GATT database is automatically discovered and displayed. Tap any service to list its characteristics and tap any characteristic to read its value. Testing the iBeacon Demo Bluetooth beacons are unconnectable advertisements that help you locate a device, determine your own position, or get minimal information about an asset the beaconing device is attached to. After flashing the iBeacon demo to your device, you can find the beacon signal with the Bluetooth Browser of EFR Connect app. Start EFR Connect and select Bluetooth Browser. To filter beacons, tap the Filter button and select the beacon types to display. 
The app will provide basic information about the beacon, such as RSSI, which can help determine the distance of the beacon. Tap on the beacon to get more information about the data it provides. Beaconing Demo Testing the Health Thermometer Demo While the NCP Empty demo implements a minimal GATT database with basic static information such as device name, the Health Thermometer demo extends this database with live temperature measurements. After flashing the Health Thermometer demo to your device, open EFR Connect app, select the Demo tab, and select Health Thermometer. Find your device advertising as Thermometer Example in the device list and click on it to connect. The mobile phone app automatically finds the Temperature measurement characteristic of the device, reads its value periodically, and displays the value on the screen of the phone. Try touching the temperature sensor (located on the WSTK as you can see in section Prepare the WSTK). You should be able to see the temperature changing. Health Thermometer Demo Getting Started with BGM220 Explorer Kit The BGM220 Explorer Kit (part number: BGM220-EK4314A) is focused on rapid prototyping and IoT concept creation around Silicon Labs BGM220P module. Kit Overview The kit features USB interface, on-board J-Link debugger, one user LED/button and support for hardware add-on boards via a mikroBus socket, and a qwiic connector. Explorer Kit The hardware add-on support allows developers to create and prototype applications using a virtually endless combination of off-the-shelf boards from mikroE, sparkfun, AdaFruit, and Seeed Studios. The boards from Seeed Studios feature a connector, which is pin compatible with the qwiic connector but mechanically incompatible and it requires an adaption cable or board. Testing the Bluetooth Demos The Bluetooth SDK comes with pre-built demos that can be directly flashed into this kit and tested using a smartphone running the EFR Connect mobile app (Android, iOS): - SoC iBeacon - NCP Empty The iBeacon can be tested with EFR Connect as documented in the above getting started section. NCP Empty can be tested with NCP Commander, which can be launched via the Tools dialog in Simplicity Studio 5. GitHub Examples Silicon Labs applications_examples GitHub repository contains additional examples that can run on the BGM220 Explorer Kit. Some of them leverage 3rd party add-on boards and they are typically found in the bluetooth_applications repository. Porting Code from mikroSDK and Arduino If using a mikroE click board, ready made examples are on your specific mikroE click board Web page that typically reside in mikroE's libstock and/or GitHub. Those examples are using the mikroSDK, which provides abstraction to the supported hardware kits provided by mikroE. If using a board from sparkfun, Adafruit, or Seeed Studios, they typically have examples for the Arduino IDE, which run on some of their own controller boards that are supported by the Arduino platform. Those examples will not run out of the box on the BGM220 Explorer Kit, but with a small amount of effort they can be easily ported by using the guide below, which maps mikroSDK and Arduino APIs for UART/SPI/I2C/GPIO functionality into the Silicon Labs platform equivalents. Whether porting from mikroSDK or Arduino, EMLIB and Platform Drivers/Services contain the most useful Silicon Labs APIs. 
For corresponding documentation, see below: EMLIB - a low level peripheral driver library for all Silicon labs EFM32 and EFR32 device families Platform Drivers - a higher level driver layer built on top of EMLIB. EMDRV abstracts some aspects of peripheral initialization and use but is more limited in peripheral coverage than EMLIB. Additionally, Silicon Labs' Peripheral Examples on GitHub are a good resource for simple demonstration of peripheral control using EMLIB. mikroSDK Porting Guide The mikroE ecosystem of click boards from MikroElektronika are typically supported by a collection of driver modules and example code built upon the mikroSDK. These click boards feature a mikroBUS connector for connection to a host board and can include connections for power (3.3 V, 5 V, GND), communications (UART, SPI, and/or I2C), and assorted other functions (GPIO, PWM, INT). The mikroSDK is a framework that provides abstraction for the communication and GPIO functions of the mikroE click boards by wrapping vendor-specific functions in a common API framework to accomplish these tasks that is portable across a wide range of host devices. The microSDK is therefore ported to a new device via the assignment of function pointers and other device-specific configuration options. At this time, there is no official mikroSDK port for Silicon Labs devices. Note: The goal of this configuration guide is not to instruct the user on how to port the mikroSDK to the BGM220 or other Silicon Labs devices, but instead to introduce the user to the spectrum of Silicon Labs' native APIs and how to use these APIs instead of the mikroSDK. There is currently a wide selection of mikroE click accessory boards to facilitate product development with devices such as sensors, display/LEDs, storage, interface, and HMI. Some of these boards are shown below. mikroE click boards Using the mikroBUS-compatible socket included on the Explorer Kit BGM220, these boards can be used with the BGM220 as the host controller via the pin functions connecting the mikroBUS socket to the BGM220. When porting mikroE click examples to the Silicon Labs platform, it is important to understand that interaction between the host controller and the click board is accomplished using a subset of UART, I2C, SPI, GPIO, analog, PWM, or interrupt. Pins for each function are allocated between the BGM220 and the mikroBUS connector, as shown below. mikroBUS Socket Note: Knowledge of the pin mapping shown above combined with use of Silicon Labs' native APIs, as shown below, enable a user to port existing click examples to the Explorer Kit BGM220 by substituting Silicon Labs API calls for mikroSDK and click driver API calls. The following sections cover four main categories of API-enabled interactions between a host device (BGM220P in this case) and a click board: GPIO, SPI, I2C, and UART. Note that the functions and structures presented here correspond to elements of the core mikroHAL and mikroBUS APIs of the mikroSDK and their closest Silicon Labs counterparts. Many click boards have additional libraries and driver files that integrate with these layers to create the mikroE project framework. However, these core elements should help guide users porting to the Silicon Labs platform. GPIO The mikroSDK uses a complex system of function pointers and data structures to initialize a low level GPIO driver layer with initialization, "get," and "set" function pointers for each GPIO pin on the mikroBUS header. 
This structure is inherited by higher-level driver layers, which use "get" and "set" functions in higher-level wrapper functions. By contrast, the Silicon Labs APIs for GPIO control, or em_gpio, are straightforward and easy to understand. mikroSDK GPIO API. Note: More em_gpio functions are available in EMLIB. For more information,see the GPIO API Documentation. SPI As with the mikroSDK GPIO API, the mikroSDK SPI framework relies on vendor-specific function pointers assigned in a configuration layer to provide an interface for SPI communications. peripheral, and configuration of these pins is handled by the SPIDRV_Init() function. Note: More EMLIB and SPIDRV functions are available than shown here. See em_usart and and SPIDRV API documentation for more information. I2C As with other mikroSDK modules, the mikroSDK I2C framework relies on vendor-specific function pointers assigned in a configuration layer to provide an interface for I2C communications. Silicon Labs offers the EMLIB I2C driver em_i2c for firmware control of the I2C interface, which differs slightly in approach from the mikroSDK framework. The em_i2c firmware interface relies on the configuration of the I2C peripheral block and desired transfer parameters using the I2C_Init() and I2C_TransferInit() functions. Management of the I2C transfer state machine In a similar fashion as other mikroSDK API modules, the mikroSDK UART framework relies on vendor-specific function pointers assigned in a configuration layer to provide an interface for UART communications. Silicon Labs offers the low-level EMLIB UART (asynchronous USART) and higher-level EMDRV driver UARTDRV APIs for UART communication, provided as source code.. Configuration of these pins is handled by the UARTDRV_InitUart() or UARTDRV_InitLeuart() function. Note: The EMLIB em_usart and EMDRV UARTDRV APIs provide additional functionality and support a wider feature set of the USART peripheral than is described in this porting guide. See the em_usart and UARTDRV API documentation for more information. Arduino Porting Guide Many open-source examples, including those designed for use with expansion boards from sparkfun, Adafruit, or Seeed Studios, use the Arduino API. This section provides a basic mapping of some of the Arduino API functions for serial communications and GPIO handling onto suggested or possible replacement calls from the Silicon Labs EMLIB and EMDRV libraries. Although the Arduino API contains many submodules for different tasks, this guide focuses on Silicon Labs API replacements for GPIO control (Arduino Digital IO API), SPI communications (Arduino SPI API), I2C communications (Arduino Wire API), and UART communications (Arduino Serial and SoftwareSerial APIs). GPIO (Arduino Digital IO API) The Silicon Labs APIs for GPIO control, or em_gpio, provide straightforward and easy to understand functions for GPIO initialization and control. Arduino Digital I/O API. Note: More em_gpio functions are available in EMLIB than shown here. See the em_gpio API documentation for more information. SPI (Arduino SPI API). Configuration of these pins is handled by the SPIDRV_Init() function. Note: More EMLIB and SPIDRV functions are available than shown here. See the em_usart and and SPIDRV API documentation for more information. I2C (Arduino Wire API) Silicon Labs offers the EMLIB I2C driver (em_i2c) for firmware control of the I2C interface, which differs slightly in approach from the Arduino Wire API. 
The em_i2c firmware interface relies on the configuration of the I2C peripheral block and desired transfer parameters using the I2C_Init() and I2C_TransferInit() functions. Management of the I2C transfer state machine is then (Arduino Serial and SoftwareSerial APIs) Silicon Labs offers the low-level EMLIB UART (asynchronous USART) and higher-level EMDRV driver UARTDRV APIs for UART communication, provided as source code. Additionally, Silicon Labs offers a higher-level driver called RetargetIo that re-targets some standard IO functions such as printf and may be useful for replacement of some Arduino Serial and SoftwareSerial functions., and configuration of these pins is handled by the UARTDRV_InitUart() or UARTDRV_InitLeuart() function. Similarly, when using RetargetIo functions, USART and GPIO initialization is handled by RETARGET_SerialInit(). Note that the RetargetIo library is Silicon Labs board-specific library, which relies on board support configuration files to properly configure GPIO and peripherals used in communications. Note: The EMLIB em_usart, EMDRV UARTDRV, and RetargetIo APIs provide additional functionality and support a wider feature set of the USART peripheral than described in this porting guide. See the em_usart, UARTDRV, and RetargetIo API documentation for more information. Arduino Serial API porting: Arduino SoftwareSerial API porting: Starting Application Development Developing a Bluetooth application consists of defining the GATT database structure and the event handlers for events such as connection_opened, connection_closed, and so on. The most common starting point for application development is the SOC-Empty example. This project contains a simple GATT database (including the Generic Access service, Device Information service, and OTA service) and a while loop that handles some events raised by the stack. You can extend both the GATT database and the event handlers of this example according to your needs. Note: Beginning with Bluetooth SDK version 2.7.0.0, all devices must be loaded with the Gecko Bootloader as well as the application. While you are getting started, the easiest way to do this is to load any of the precompiled demo images, which come with the bootloader configured as part of the image. When you flash your application, it overwrites the demo application but the bootloader remains. Subsequently, you may wish to build your own bootloader, as described in UG266: Silicon Labs Gecko Bootloader User's Guide. The first bootloader loaded on a clean device should always be the combined bootloader. New Project creation is done through three dialogs: - Target, SDK, and Toolchain - Examples - Configuration An indicator at the top of the dialog shows you where you are. You can start a project from different locations in the Launcher Perspective, as described in the Simplicity Studio 5 User’s Guide. Start from the File menu because that takes you through all three of the above dialogs. - Select New >> Silicon Labs Project Wizard. - Review your SDK and toolchain. To use IAR instead of GCC, change it here. After you have created a project, it is difficult to switch toolchains. Click NEXT. - On the Example Project Selection dialog, filter on Bluetooth and select SoC Empty. Click NEXT. - On the Project Configuration dialog, rename your project. Note that if you change any linked resource, it is changed for any other project that references it. While you are getting started, the default choice to include project files but link to the SDK is best. - Click FINISH. 
GATT Database Every Bluetooth connection has a GATT client and a GATT server. The server holds a GATT database, which is a collection of Characteristics that can be read and written by the client. The Characteristics are grouped into Services. The group of Services determines a Bluetooth Profile. If implementing a GATT server (typically on the peripheral device), define a GATT database structure. This structure can't be modified during runtime, so it has to be designed in advance. Clients (typically the central device) can also have a GATT database, even if no device will query it, so you can keep the default database structure in your code. The GATT Configurator is a simple-to-use tool to help you build your own GATT database. A list of project Profiles/Services/Characteristics/Descriptors is shown on the left and details about the selected item is shown on the right. An options menu is provided above the Profiles list. The GATT Configurator automatically appears after creating the project to help create your own GATT database with a few clicks. Note that a Simplicity IDE perspective control is now included in the upper right of the screen. You can create your own database at this point, or return to it later either by double-clicking the gatt_configuration.btconf file under your project in Project Explorer, or through the Project Configurator's Advanced > GATT Configurator component. For more information, see section UG438: GATT Configurator User’s Guide for Bluetooth SDK v3.x . To add a custom service, click the Profile (Custom BLE GATT), and then click Add (1). To add a custom characteristic, select a service and then click Add (1). To add a predefined service/characteristic click Add Predefined (6). To learn more about the configurator see UG438: GATT Configurator User’s Guide for Bluetooth SDK v3.x. You can find a detailed description of any Profile/Service/Characteristic/Descriptor on. Characteristics are generally complex structures of fields. To know what fields a characteristic has, see. A reference for each characteristic is generated and defined in gatt_db.h. You can use this references in your code to read / write the values of the characteristics in the local GATT database with sl_bt_gatt_server_read_attribute_value() / sl_bt_gatt_server_write_attribute_value() commands. Bluetooth Event Handlers Open app.c by double-clicking it in Project Explorer. You will find the Bluetooth event handlers in sl_bt_on_event(). You can extend this list with further event handlers. The full list of events and stack commands is in the API Reference. main.c To learn more about Bluetooth application development, see UG434: Silicon Labs Bluetooth ® C Application Developer's Guide for SDK v3.x. If you are developing an NCP application, see AN1259: Using the v3.x Silicon Labs Bluetooth® Stack in Network CoProcessor Mode. Component Configuration Bluetooth SDK v3.x projects are based on a Gecko Platform component-based architecture. Software features and functions can be installed and configured through Simplicity Studio’s Project Configurator. When you install a component, the installation process will do the following: - Copy the corresponding SDK files from the SDK folder into the project folder. - Copy all the dependencies of the given component into the project folder. - Add new include directories to the project settings. - Copy the configurations files into the /config folder. - Modify the corresponding auto-generated files to integrate the component into the application. 
Additionally, "init" type software components will implement the initialization code for a given component, using their corresponding configuration file as input. Some software components, such as OTA DFU, will fully integrate into the application to perform a specific task without any additional code, while other components provide an API for use in the application. To see the component library, click the Components installed in the project are checked (1), and can be uninstalled. Configurable components are indicated by a gear symbol (2). Click Configure to open the Component Editor and see a configurable component’s parameters. As you change component configurations, your changes are automatically saved and project files are automatically generated. You can see generation progress in the lower right corner of the Simplicity IDE. Wait until generation is complete before building the application image. Building and Flashing To build and debug your project click Debug ( ) in the upper left corner of the Simplicity IDE perspective. It will build and download your project, and open up the Debug perspective. Click Play ( ) to start running you project on the device. Enabling Field Updates Deploying new firmware for devices in the field can be done by UART DFU (Device Firmware Update) or, for SoC applications, OTA DFU (Over-the-Air Device Firmware Update). For more information about each of these methods, see AN1086: Using the Gecko Bootloader with the Silicon Labs Bluetooth Applications.
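To make the event-driven structure described in the Bluetooth Event Handlers section above a little more concrete, the sketch below shows the shape of a minimal sl_bt_on_event() handler like the one generated in app.c of the SoC Empty example. The event IDs shown are part of the Bluetooth SDK v3.x API; the advertising calls are deliberately left as comments because their exact names differ slightly between 3.x releases, so treat this as an outline to adapt rather than drop-in code.

```c
#include "sl_bluetooth.h"

// Skeletal Bluetooth event handler: extend each case with application logic.
void sl_bt_on_event(sl_bt_msg_t *evt)
{
  switch (SL_BT_MSG_ID(evt->header)) {
    case sl_bt_evt_system_boot_id:
      // The stack is up and ready: create an advertising set and start
      // advertising here using the sl_bt_advertiser_* API (check the API
      // Reference for the exact calls in your SDK version).
      break;

    case sl_bt_evt_connection_opened_id:
      // A central has connected: save
      // evt->data.evt_connection_opened.connection if the handle is needed
      // later, for example for GATT server notifications.
      break;

    case sl_bt_evt_connection_closed_id:
      // The connection dropped: applications typically restart advertising.
      break;

    default:
      // Ignore events the application does not handle.
      break;
  }
}
```

Every stack event funnels through this single switch, which is why extending the SoC Empty example is mostly a matter of adding new case labels for the events listed in the API Reference.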
https://docs.silabs.com/bluetooth/3.3/general/getting-started
2022-09-25T07:21:23
CC-MAIN-2022-40
1664030334515.14
[array(['/resources/bluetooth/general/getting-started/images/fig4-3-1.png', None], dtype=object) array(['/resources/bluetooth/general/getting-started/images/ncp-commander.png', None], dtype=object) array(['/resources/bluetooth/general/getting-started/images/fig5-new-project.png', None], dtype=object) array(['/resources/bluetooth/general/getting-started/images/fig5-new-project-wizard.png', None], dtype=object) array(['/resources/bluetooth/general/getting-started/images/fig6-1-gatt-configrator.png', None], dtype=object) array(['/resources/bluetooth/general/getting-started/images/fig5-2-1.png', None], dtype=object) array(['/resources/bluetooth/general/getting-started/images/fig5-2-2.png', None], dtype=object) array(['/resources/bluetooth/general/getting-started/images/fig5-2-3.png', None], dtype=object) array(['/resources/bluetooth/general/getting-started/images/fig5-2-4.png', None], dtype=object) ]
docs.silabs.com
🇹🇷 Turkey¶ Turkey, officially known as the Republic of Turkey, is the country that is the bridge between Europe and Asia. It shares borders with 8 different countries and 3 seas. It consists of 7 geographical regions and 81 cities. The region in which the country is situated is called Anatolia and is accepted as a peninsula. Anatolia has been a strategically important area throughout history. Turkish people came to Anatolia in 1071 first and since then they are living there. Never mind, let’s focus on the country again. The country has a population of over 80 million and Kurds are the largest minority with a population of over 16 million. Since the beginning of the Syrian Civil War, Turkey has been a shelter to Syrians and there are over 2 million Syrian refugees in Turkey. Even though there are ethnicities other than Turkish in the country, every Turkey citizen is accepted Turkish by law. That does not mean Turkey forces people to forget about their roots. The law has been put into order to ensure that every citizen is treated equally. The country’s official language is Turkish. Kurdish is the second most spoken language in the country, but it is not accepted as an official language, there are people who support this idea, though. Even though, due to being the most populous city in Turkey, Istanbul is thought to be the capital of the country, the country’s capital is Ankara since the law about the country’s capital was accepted in 1923 by the new parliament of the new country. If we are to talk about a country, we should talk about its economy. Turkey has a GDP of $794 billion in total which is the 20th biggest one in the world but when you consider the population (GDP per capita) Turkey is 67th in the world. Also, the purchasing power of the Turkish population is the 45th best one in the world. One of the biggest industries in Turkey is tourism. It has been so beneficial to the economy of the country that tourism has been called a “smokeless industry”. Turkey ranked sixth in the world in terms of the number of international tourist arrivals, with over 50 million foreign tourists visiting the country. Another important point about Turkey is that there are 204 universities in the country. Some think this number is unnecessarily high, but the government continues to establish new universities each year. Every city should have at least a university as their main idea regarding this issue, and they have achieved this goal, but they have not stopped yet. There are 840 thousand university students in the country in total and this number is so high that graduating from a university is not important anymore. Health is thought to be one of the areas where Turkey is good, but the numbers are saying this is not true. In 2018, the total expenditure on health as a share of GDP was the lowest among OECD countries at %6.3 of GDP. Also, Turkey has one of the highest rates of obesity in the world (%29.5). Since the majority of Turkey’s population is Muslim, there is a common bias about Turkey that is it is an Islamic country, but it is not. Since its foundation, the Republic of Turkey is a secular state with no official state religion. The first article of the Turkish constitution says that the country’s regime is a republic, and the next article says that Turkey is a secular, social, democratic, nationalist, state of law and it respects human rights. 
Even though there have always been governments against some of these articles, including the current one, and they have taken some actions to executes their aims, they were not successful, until this one. Turkey was founded by the old soldiers of the Ottoman Empire. Mustafa Kemal who was later given the surname of “Atatürk” (father of the Turks) was the leader of the independence movement of which aim was to defend Anatolia against enemies in the time following the First World War. They refused the conditions of the Treaty of Sevres and fought for years. Finally, they were able to establish Turkey with its today’s borders except for Hatay which joined Turkey on its will after it gained its independence from France. Turkey is considered to be a European country thanks to the Treaty of Lausanne with which Turkey’s independence and sovereignty were accepted. But it is not a member of the European Union. Turkey has applied to become a member of the EU and some of the legislation and the laws have been adapted to those of the EU but there were a few points on which the two sides were not able to agree. So, the process has not been completed and the Turkish government is not trying to join the EU anymore. If you are talking about a country, you should mention its culture. So, let’s dive into Turkish culture. As we have mentioned above, there is more than one nation in Turkey, and this was also the case in the Ottoman Empire, the predecessor of Turkey. Therefore, the culture of the country is also a mix of different cultures. Like its geographical location, Turkey’s culture is also a bridge between Europe and the Middle East. The cultures in the area were affected so much by each other that there are some disputed meals about their origins. For example, “yoğurt”, a globally known food, is claimed by Greek people. They have been so successful in defending their claims that it is known as Greek yogurt almost everywhere. There are so many meals like yogurt, but we will not further discuss that matter in this article. Turkey is a good country despite all its cons. I hope one day you can visit this country and enjoy the Turkish breakfast. (Be cautious, you can be addicted to it.) If you want to read further about the country, you may use these resources as we did:
https://slurp.readthedocs.io/en/latest/turkey.html
2022-09-25T08:39:23
CC-MAIN-2022-40
1664030334515.14
[]
slurp.readthedocs.io
Visual Basic Guide. If you don't already have Visual Basic, you can acquire a version of Visual Studio that includes Visual Basic for free from the Visual Studio site. In This Section Getting Started Helps you begin working by listing what is new and what is available in various editions of the product. Programming Concepts Presents the language concepts that are most useful to Visual Basic programmers. Framework Class Library Provides entry to the library of classes, interfaces, and value types that are included in the Microsoft .NET Framework SDK.
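If you are completely new to the language, a minimal console program looks like the following. This generic example is included here only for orientation and is not drawn from the guide itself.

```vb
Module HelloWorld
    Sub Main()
        ' Print a greeting, then wait for a key press so the window stays open.
        Console.WriteLine("Hello from Visual Basic!")
        Console.ReadKey()
    End Sub
End Module
```

Compile and run it as a console application project in Visual Studio.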
https://docs.microsoft.com/en-gb/dotnet/visual-basic/
2018-01-16T11:45:33
CC-MAIN-2018-05
1516084886416.17
[]
docs.microsoft.com
You may need to change the associativity of dimensions in several circumstances, including adding associativity to dimensions created in previous releases. Typical situations include the following: Reassociate Dimensions to Different Objects With DIMREASSOCIATE, you can select one or more dimensions and step through the extension-line origin points of each dimension. For each extension-line origin point, you can specify a new association point on a geometric object. Association points determine the attachment of extension lines to locations on geometric objects. When you use the DIMREASSOCIATE command, a marker is displayed that indicates whether each successive extension-line origin point of the dimension is associative or nonassociative. A square with an X in it means that the point is associated with a location on an object, while an X without the square means that the point is not associated with an object. Use an object snap to specify the new association for the extension-line origin point, or press ENTER to skip to the next extension-line origin point. Change Nonassociative Dimensions to Associative You can change all the nonassociative dimensions in a drawing to associative. Use QSELECT to select all nonassociative dimensions, and then use DIMREASSOCIATE to step through the dimensions, associating each one with locations on geometric objects. Change Associative Dimensions to Nonassociative You can change all associative dimensions in a drawing to nonassociative dimensions. Use QSELECT to select all associative dimensions, and then use DIMDISASSOCIATE to convert them into nonassociative dimensions.
http://docs.autodesk.com/ACD/2010/ENU/AutoCAD%202010%20User%20Documentation/files/WS1a9193826455f5ffa23ce210c4a30acaf-612c.htm
2016-10-21T11:12:58
CC-MAIN-2016-44
1476988717963.49
[]
docs.autodesk.com
The purpose of this page is to describe how Listeners are used within Javascript within MapBuilder More advanced applications will likely want to add functionality by performing specific actions in response to user input. There are a few methods provided in MapBuilder to customize your application. An important aspect of the MVC design pattern is that it is event driven. Almost everything that happens over the course of the application execution is the result of an event being triggered on a model, which then calls listener functions registered with that model. This allows the models, widgets and tools to remain independant of each other and makes MapBuilder very modular - objects can be added and removed from the configuration without affecting the other objects. There is a global "config" object that can be referenced from anywhere. You can retrieve a reference to all other objects by using the config.objects property. For example, a model with an ID of "model_x" would be referenced as config.objects.model_x, or config.objects["model_x"]. The following functions are available for carrying out user actions: config.loadModel(modelId,docUrl) This method will load the model with the specified modelId from the docUrl provided. This will also trigger a "loadModel" event for that model. model.setParam(param,value) This method updates a parameter and call all interested listeners. For example, you can set an area of interest on a Context document by setting the "aoi" event as the param argument. Any listeners registered to listen for the "aoi" event would then be called and the value is passed to the listener functions as an argument. The value can be any Javascript object, e.g. a string, integer, array, or object. A list of the more common events that are built in to MapBuilder is listed below. model.setXpath(model,xpath,value,refresh) Updates the value of a node in the model's XML document and optionally triggers a "refresh" event for this model (based on the boolean refresh argument value). The xml node to be updated is specified by the xpath argument. button action property Buttons in the configuration file accept an <action> property which specifies an object method to be called when a button is selected. The way to use these functions is to set them as the HREF element of an anchor tag: or as an event handler (onclick, onmouseover, etc.) on any HTML element that allows it: Events The primary events that models, widgets and tools listen for are listed below. When calling model.setParam(param,value), the event name is the "param" argument, and the "value" argument is the JavaScript object listed. Models will accept any event name as the param so custom widgets can define their own event types that only that custom widget will listen for. Next> Back to the configuration page
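The inline examples that originally followed the anchor-tag and event-handler sentences above did not survive in this page, but the patterns look roughly like the snippet below. The model ID, document URL, and event name are placeholders for illustration — use the IDs and events defined in your own MapBuilder configuration, and remember that the value passed to setParam can be any JavaScript object your listeners expect.

```html
<!-- Calling a MapBuilder method from the HREF of an anchor tag -->
<a href="javascript:config.loadModel('model_x','/context/demo.xml')">load demo context</a>

<!-- Calling a method from an event handler on any HTML element that allows it -->
<input type="button" value="notify listeners"
       onclick="config.objects.model_x.setParam('myCustomEvent', 'someValue')" />
```

In both cases the call simply triggers the corresponding event on the model, and any listeners registered for that event name are invoked with the supplied value.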
http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=37358&selectedPageVersions=11&selectedPageVersions=10
2014-03-07T12:30:28
CC-MAIN-2014-10
1393999642517
[]
docs.codehaus.org
Unit Testing for webMethods is a unique package designed to help webMethods 4.6 & 6 customers achieve a higher standard of quality and reliability with their webMethods services by allowing them to use automated tools for developing and testing their services. Unit Testing for webMethods is based upon popular industry unit testing techniques and does NOT require developers to write complex Java code or modify the code to be tested. Unit Testing for webMethods improves the quality and robustness of your integration. Downloads Current Downloads Core Package - Unit Testing 6.4.0 ( download , Release Notes , request trial key ) Test Packages Previous Versions Core Package - WmUnit 6.3.2 ( download , Release Notes ) - WmUnit 6.3.1 ( download , Release Notes ) - WmUnit 6.3.0 ( download , Release Notes ) - Older Versions Test Packages - WmUnitTests 6.3.2 ( download) - WmPublicTests 6.3.2 ( download) - WmUnitTests 6.3.1 ( download) - WmPublicTests 6.3.1 ( download) Documentation - Core Documentation - HOWTO - HOWTO - assertEquals with arrays correctly - HOWTO - Checking flat files - HOWTO - Check that a service throws an exception - HOWTO - Compare two documents or records - HOWTO - Create a null as a service input - HOWTO - Extract values from the pipeline dynamically - HOWTO - Gauge test activity on a Project - HOWTO - Install WmUnit - HOWTO - Reduce the duplicated code in test services - HOWTO - Return meaningful error messages - HOWTO - Schedule a Test Group to Run - HOWTO - Structure your test packages - HOWTO - Techniques for using existing data - HOWTO - Test adapter services with transaction problems - HOWTO - Test services which need to be executed as a different user - HOWTO - Test XML - HOWTO - Upgrade WmUnit - HOWTO - Use WmUnit on a non-graphics enabled server environment - HOWTO - Write a java WmUnit Test service - HOWTO - Write a test case - Tutorials - WmUnitPatterns News Available from the WmUnit Repository at What's new in this release? New concept of "Test Group" which is a set of packages and test services which are to be executed as a whole. UI enhancements, new assert services, configuration ability and test coverage metrics. New Features - Test Groups functionality - Allow all the WmUnit UI to be accessible for Developer users - Configurable "test package list" parameters (e.g. white/blacklist of words) - Configuration section in webUI - Upload licence key facility - Test coverage for packages - Documentation page with links to the repository - New assert services: - assertFileContainsString service - assertFileDoesNotExist service - accumulateError service - assert accumulator services: - assertEquals - assertListsEqual - assertStringContains - assertFileContainsString - assertInvokeFailed - Store the username of the test runner with the history Improvements - Improve web UI look and feel - Navigation quick links with explanation on about page - More graceful and informative error handling of invalid licences - No frames or simple browser navigation Any feedback or comments on this release, please email [email protected] New in the WmUnit Space: - WmUnitPattern - Test Input Data Repository - HOWTO - Test services which need to be executed as a different user - HOWTO - Test adapter services with transaction problems - HOWTO - Reduce the duplicated code in test services Tidied up a bit of content recently, removed the discussion area. We've now got a community of 380 users accessing the WmUnit and WmFAQ knowledge bases. As always, any feedback welcome. 
Added a new WmUnitPattern WmUnitPattern - Execute Assert Wrapper a refactoring of existing code that I've been finding common. This and the other WmUnitPatterns can be found here: regards, Nathan CustomWare is sponsoring the webMethods BIF () in both Sydney and Singapore - come and see us! The online community for the CustomWare repository is still growing and has now surpassed 200 users! It's extremely satisfying to see the number of interested people continuing to make use of this public resource for webMethods and WmUnit related material. As always we welcome feedback and participation: [email protected] for any WmUnit related feedback and queries. For a complete list of news, please see the News Archive
https://docs.servicerocket.com/display/WMUNIT/Home
2014-03-07T12:24:43
CC-MAIN-2014-10
1393999642517
[]
docs.servicerocket.com
The embedder should be used for all client use in 2.1 to protect any clients from internal changes in Maven. People will generally want to use Maven code to: - Execute Maven - Read/Write POMs - Artifact Resolution In the short term, these should be provided via the embedder to give a unified facade for client code allowing us to restructure the internals of Maven. Right now anyone attempting to use the maven-artifact, or maven-project have an incredibly difficult time because the APIs are frankly terrible. There will always be cases where people want the functionality of the components but in the short term I think the only option is promoting the Embedder with documentation and work diligently to improve the APIs of the core components. Given the activity in the core there is no way that the refactoring of the APIs of maven-artifact, and maven-project can be resolved in the timeframe of 2.1. I think the only API we can reasonably commit to is the Embedder API. People are free to use the components but they use them at their own risk. I think it is reasonable that if we provide for the majority of use cases using the Embedder then I think we will be doing a service. The difference in size in the artifacts required for artifact resolution versus the full embedder is not something that is a great concern to most consumers. I think the plan should be to flesh out the embedder API to a usable state and commit to maintaining compatibility. It will allow us to decouple from the core and vary independently while we figure out the exact internals. Provided we maintain the Embedder API and clearly state this is the only supported externalized form for consumption then we are free to pursue cleaning up the core to point where later on in the 2.1 lifespan we have the components in a form we can safely tell people to use. This.
http://docs.codehaus.org/pages/viewpage.action?pageId=79858
2014-03-07T12:30:00
CC-MAIN-2014-10
1393999642517
[]
docs.codehaus.org
Turn on or turn off Key Describer mode The BlackBerry Screen Reader provides a feature called Key Describer mode, which helps you learn and remember the layout of the keyboard and keys on your BlackBerry smartphone. - To turn on Key Describer mode, press the right convenience key twice and press any button or key on your smartphone. The BlackBerry Screen Reader will echo the key pressed and help you create a map of the key layout from memory. - To turn off Key Describer mode, press the right convenience key again. The Key Describer mode will turn off automatically if you do not press any key within 10 seconds of turning it on.
http://docs.blackberry.com/en/smartphone_users/deliverables/47681/mfl1334699646334.jsp
2014-03-07T12:27:08
CC-MAIN-2014-10
1393999642517
[]
docs.blackberry.com
GumTree Workbench GumTree Server GumTree Data Browser GumTree Runtime Additional Resources Archived Update Site GumTree Developer Guide GumTree API Javadoc
http://docs.codehaus.org/pages/viewpage.action?pageId=227049533
2014-03-07T12:29:53
CC-MAIN-2014-10
1393999642517
[array(['/download/attachments/192512068/workbench_48x48.png?version=1&modificationDate=1307419174115&api=v2', None], dtype=object) array(['/download/attachments/192512068/server_48x48.png?version=1&modificationDate=1307419348033&api=v2', None], dtype=object) array(['/download/attachments/192512068/data_browser_48x48.png?version=1&modificationDate=1307419348045&api=v2', None], dtype=object) array(['/download/attachments/192512068/runtime_48x48.png?version=1&modificationDate=1307419348016&api=v2', None], dtype=object) ]
docs.codehaus.org
This is an iframe, to view it upgrade your browser or enable iframe display. Prev Chapter 8. Installing using anaconda 8.1. The Text Mode Installation Program User Interface 8.1.1. Using the Keyboard to Navigate 8.2. The Graphical Installation Program User Interface 8.2.1. Screenshots during installation 8.2.2. A Note about Virtual Consoles 8.3. Installation Method 8.3.1. Installing from DVD 8.3.2. Installing from a Hard Drive 8.3.3. Performing a Network Installation 8.3.4. Installing via NFS 8.3.5. Installing via FTP or HTTP 8.4. Verifying Media 8.5. Language Selection 8.6. Keyboard Configuration 8.7. Storage Devices 8.7.1. The Storage Devices Selection Screen 8.8. Setting the Hostname 8.8.1. Edit Network Connections 8.9. Time Zone Configuration 8.10. Set the Root Password 8.11. Assign Storage Devices 8.12. Initializing the Hard Disk 8.13. Upgrading an Existing System 8.13.1. The Upgrade Dialog 8.13.2. Upgrading Using the Installer 8.13.3. Upgrading Boot Loader Configuration 8.14. Disk Partitioning Setup 8.15. Encrypt Partitions 8.16. Creating a Custom Layout or Modifying the Default Layout 8.16.1. Create Storage 8.16.2. Adding Partitions 8.16.3. Create Software RAID 8.16.4. Create LVM Logical Volume 8.16.5. Recommended Partitioning Scheme 8.17. Write changes to disk 8.18. x86, AMD64, and Intel 64 Boot Loader Configuration 8.18.1. Advanced Boot Loader Configuration 8.18.2. Rescue Mode 8.18.3. Alternative Boot Loaders 8.19. Package Group Selection 8.19.1. Installing from Additional Repositories 8.19.2. Customizing the Software Selection 8.20. Installing Packages 8.21. Installation Complete This chapter describes an installation using the graphical user interface of anaconda . 8.1. The Text Mode Installation Program User Interface Important — Graphical installation recommended We recommed that you install Fedora using the graphical interface. If you are installing Fedora on a system that lacks a graphical display, consider performing the installation over a VNC connection – see Chapter 13, option – refer to Chapter 10, Boot Options. Important — Graphical Interface on the Installed System Installing in text mode does not prevent you from using a graphical interface on your system once it is installed. Apart from the graphical installer, anaconda also includes a text-based installer that includes most of the on-screen widgets commonly found on graphical user interfaces. Figure 8.1, “Installation Program Widgets as seen in URL Setup ” and Figure 8.2, “Installation Program Widgets as seen in Choose a Language ” illustrate widgets that appear on screens during the installation process. Installation Program Widgets as seen in URL Setup Figure 8.1. Installation Program Widgets as seen in URL Setup Installation Program Widgets as seen in Choose a Language Figure 8.2. Installation Program Widgets as seen in Choose a Language If one of the following situations occurs, the installation program uses text mode: The installation system fails to identify the display hardware on your computer You choose the text mode installation from the boot menu While text mode installations are not explicitly documented, those using the text mode installation program can easily follow the GUI installation instructions. However, because text mode presents you with a simpler, more streamlined installation process, certain options that are available in graphical mode are not also available in text mode. 
These differences are noted in the description of the installation process in this guide, and include: configuring advanced storage methods such as LVM, RAID, FCoE, zFCP, and iSCSI. customizing the partition layout customizing the bootloader layout selecting packages during installation configuring the installed system with Firstboot If you choose to install Fedora in text mode, you can still configure your system to use a graphical interface after installation. Refer to Section 17.3, "Switching to a Graphical Login" for instructions. OK button. Figure 8.2, "Installation Program Widgets as seen in Choose a Language", shows the cursor on the Edit OK button. Warning Unless a dialog box is waiting for your input, do not press any keys during the installation process (doing so may result in unpredictable behavior).
http://docs.fedoraproject.org/en-US/Fedora/15/html/Installation_Guide/ch-guimode-x86.html
2014-03-07T12:24:39
CC-MAIN-2014-10
1393999642517
[]
docs.fedoraproject.org
DSE Search integrates Apache Solr™ 6.0.1 to manage search indexes with a persistent store. The benefits of running enterprise search functions through DataStax Enterprise and DSE Search include: - Using CQL, DSE Search supports partial document updates that enable you to modify existing information while maintaining a lower transaction cost. - Supports indexing and querying of advanced data types, including tuples and user-defined types (UDTs). - Supports all Solr tools and APIs, with several specific unsupported features. Solr resources Resources for more information on using Open Source Solr (OSS): - Apache Solr documentation - Solr Tutorial on Apache Lucene site - Comma-Separated-Values (CSV) file importer - JSON importer - Solr cell project, including a tool for importing data from PDFs
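As a rough illustration of how these capabilities surface through CQL, a search-enabled table can be queried with the solr_query pseudo-column while ordinary CQL writes keep the index in sync. The keyspace, table, and field names below are invented for the example, and the exact query syntax supported depends on your DSE version, so treat this as a sketch and confirm against the DSE Search query documentation.

```sql
-- Search the DSE Search index from CQL using the solr_query column
SELECT id, title
FROM demo.articles
WHERE solr_query = 'title:cassandra AND body:search';

-- A partial document update issued as a normal CQL write; the search index
-- entry for this row is updated without rewriting the whole document
UPDATE demo.articles SET title = 'Updated title' WHERE id = 42;
```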
https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/search/searchAbout.html
2021-01-16T03:26:24
CC-MAIN-2021-04
1610703499999.6
[]
docs.datastax.com
[−][src]Crate pui_arena A set of very efficient, and very customizable arenas that can elide bounds checks wherever possible. This crate is heavily inspired by crates like slotmap and slab. pui-arena provides a set of collections that allow you to insert and delete items in at least amortized O(1), access elements in O(1). It also provides the tools required to avoid the ABA problem. You can think of the collections in pui-arena as a HashMap/ BTreeMap where the arena manages the keys, and provides a very efficient way to access elements. Why use pui-arena over alternatives pui-arena allows you to minimize overhead wherever possible, and fully customize the arenas. This allows you to use an api like slab or slotmap based on how you use the api. (There are also newtypes featured-gated by the features slab and slotmap that implement a similar interface to those two crates). If you use the pui/ scoped feature, then you can also eliminate bounds checks entirely, which can be a huge performance save in performance sensitive regions. pui-arena also provides a more features than competitors, such as a vacant entry api for versioned arenas, and drain_filter for all arenas. Choosing sparse, hop, or dense If you want fast insertion/deletion/acccess and don't care about iteration speed, use sparse. If you want fast iteration speed above all else, use dense If you want reasonable iteration speed and also fast access/delete, or if denseis to memory heavy, use hop You can read about the details of how each works in the corrosponding module docs Performance characteristics Speed all of the collections in pui-arena allow you to - insert elements in amortized O(1) - delete/access elements in O(1) - guarantee that keys never get invalidated unless removeis called Memory For each Arena<T, _, V> where V: Version, the memory overhead is as follows: sparse Arena- size_of(V) + max(size_of(T), size_of(usize))per slot hop Arena- size_of(V) + max(size_of(T), 3 * size_of(usize))per slot dense Arena- size_of(V) + size_of(usize)per slot, and size_of(usize) + size_of(T)per value Implementation Details The core of this crate is the the Version trait, the ArenaKey trait, and the BuildArenaKey trait. Version specifies the behavior of the arenas. pui-arena provides three implementations, see Version for more details: DefaultVersion - Ensures that all keys produced by insertare unique - backed by a u32, so it may waste space for small values - technically if items are inserted/removed many times, slots will be "leaked", and iteraton performance may degrade but, this is unlikely, unless the same slot is reused over 2 billion times TinyVersion- - Ensures that all keys produced by insertare unique - backed by a u8, if items are inserted/removed many times, slots will be "leaked", and iteraton performance may degrade Unversioned- - Keys produced by insertare not guartneed to be unique - slots will never be "leaked" ArenaKey specifies the behavior of keys into arenas. pui-arena provides a number of implementations. See ArenaKey for details. usize- allows accessing a given slot directly, with no regard for it's version - Note: when I say "with no regard for it's version", it still checks the version to see if the slot is occupied, but it has no means of checking if a slot a value was re-inserted into the same slot Key<K, _>- allows accessing a slot specified by K, and checks the generation of the slot before providing a value. 
Kcan be one of the other keys listed here (except for ScopedKey) TrustedIndex- allows accessing a given slot directly, with no regard for it's version - elides bounds checks, but is unsafe to construct - This one should be used with care, if at all. It is better to use the puifeature and use pui_vec::Idinstead. It is safe, and also guartnees bound check elision ScopedKey<'_, _>- only allows access into scoped arenas (otherwise identical to Key) enabled with the pui feature pui_vec::Id- allows accessing a given slot directly, with no regard for it's version - elides bounds checks BuildArenaKey specifies how arenas should create keys, all implementors of ArenaKey provided by this crate also implement BuildArenaKey except for TrustedIndex. Custom arenas You can newtype arenas with the newtype macro, or the features: slab, slotmap, or scoped. slab- provides a similar api to the slabcrate - uses usizekeys, and Unversionedslots slotmap- provides a similar api to the slabcrate - uses Key<usize>keys, and DefaultVersionslots scoped- provides newtyped arenas that use pui_core::scopedto elide bounds checks - uses scoped::ScopedKey<'_, _>keys, and is generic over the version newtype- creates a set of newtyped arenas with the module structure of base - These arenas elide bounds checks, in favor of id checks, which are cheaper, and depending on your backing id, can be no check at all! (see pui_core::scalar_allocatordetails) // Because the backing id type is `()`, there are no bounds checks when using // this arena! pui_arena::newtype! { struct MyCustomArena; } let my_sparse_arena = sparse::Arena::new(); let my_dense_arena = dense::Arena::new(); let my_hop_arena = hop::Arena::new(); Becomes something like pui_core::scalar_allocator! { struct MyCustomArena; } mod sparse { pub(super) Arena(pub(super) pui_arena::base::sparse::Arena<...>); /// more type aliases here } mod dense { pub(super) Arena(pub(super) pui_arena::base::dense::Arena<...>); /// more type aliases here } mod hop { pub(super) Arena(pub(super) pui_arena::base::hop::Arena<...>); /// more type aliases here } let my_sparse_arena = sparse::Arena::new(); let my_dense_arena = dense::Arena::new(); let my_hop_arena = hop::Arena::new(); Where each Arena newtype has a simplified api, and better error messages.
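To make the API sketch above concrete, here is a small usage example for the sparse arena. It is a sketch under assumptions: the text above confirms `new`, `insert`, and `remove`, but the exact return types and the `get` accessor shown here are assumed and should be checked against the crate's rustdoc.

```rust
use pui_arena::base::sparse::Arena;

fn main() {
    // A sparse arena over &str values, using the default key and version types.
    let mut arena = Arena::new();

    // `insert` hands back a key; the key stays valid until `remove` is called on it.
    let alpha = arena.insert("alpha");
    let beta = arena.insert("beta");

    // O(1) access through the key (assumed to return Option<&T>).
    assert_eq!(arena.get(alpha), Some(&"alpha"));

    // Removing one slot leaves the other keys untouched.
    arena.remove(beta);
    assert!(arena.get(beta).is_none());
    assert_eq!(arena.get(alpha), Some(&"alpha"));
}
```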
https://docs.rs/pui-arena/0.5.1/pui_arena/
2021-01-16T02:39:17
CC-MAIN-2021-04
1610703499999.6
[]
docs.rs
The Tilemap Renderer component renders the TilemapA GameObject that allows you to quickly create 2D levels using tiles and a grid overlay.. Unity creates Tilemaps with the Tilemap Renderer attached by default. The Tilemap Renderer can: The Render Mode affects how the Tilemap SpritesA 2D graphic objects. If you are used to working in 3D, Sprites are essentially just standard textures but there are special techniques for combining and managing sprite textures for efficiency and convenience during development. More info See in Glossary are sorted when rendered. Chunk Mode is the default rendering mode of the Tilemap Renderer: When set to Chunk Mode, the Tilemap Renderer handles Sprites on a Tilemap in batches and renders them together. They are treated as a single sort item when sorted in the 2D Transparent Queue. This reduces the number of draw calls to improve overall performance, however other Renderers cannot be rendered in between any portion of the Tilemap which prevents other rendered Sprites being able to interweave with the Tilemap Sprites. In Chunk Mode, the Tilemap Renderer is not able to sort Tiles from multiple textures individually and does not render the Tile Sprites consistently (see example below). Pack all the individual Sprites that make up the Tilemap into a single Sprite Atlas to solve this issue. To do this: Create a Sprite AtlasA texture that is composed of several smaller textures. Also referred to as a texture atlas, image sprite, sprite sheet or packed texture. More info See in Glossary from the Assets menu (go to: Atlas > Create > Sprite Atlas). Add the Sprites to the Sprite Atlas by dragging them to the Objects for Packing list in the Atlas’ InspectorA Unity window that displays information about the currently selected GameObject, asset or project settings, allowing you to inspect and edit the values. More info See in Glossary window. Click Pack Preview. Unity packs the Sprites into the Sprite Atlas during Play mode, and correctly sorts and renders them. This is only visible in the Editor during Play mode. In Individual Mode, the Tilemap Renderer sorts and renders the Sprites on a Tilemap with consideration of other Renderers in the Scene, such as the Sprite RenderersA component that lets you display images as Sprites for use in both 2D and 3D scenes. More info See in Glossary and Mesh RenderersA mesh component that takes the geometry from the Mesh Filter and renders it at the position defined by the object’s Transform component. More info See in Glossary. Use this mode if other Renderers interact with Sprites and objects on the Tilemap. In this mode, the Tilemap Renderer sorts Sprites based on their position on the Tilemap and the sorting properties set in the Tilemap Renderer. For example, this allows a character Sprite to go in-between obstacle Sprites (see example below). Using the same example in Chunk Mode, character Sprites may get hidden behind ground sprites: Using Individual Mode may reduce performance as there is more overhead when renderingThe process of drawing graphics to the screen (or to a render texture). By default, the main camera in Unity renders its view to the screen. More info See in Glossary each Sprite individually on the Tilemap. To correctly sort and render Tile Sprites on an Isometric Z as Y Tilemap, the Transparency Sort Axis must be set to a Custom Axis. First set the Renderer Mode to ‘Individual Mode’ and go to to Edit > Settings > Graphics. Set Transparency Sort Mode to Custom Axis, and set its Y-value to –0.26. 
Refer to the page on Creating an Isometric Tilemap for more information about the Transparency Sort Axis settings.
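If you prefer to apply these settings from a script instead of the Editor menus, the C# sketch below uses the standard TilemapRenderer.Mode and GraphicsSettings APIs. The (0, 1, -0.26) sort axis is an assumption based on the usual Z as Y setup; take the exact values from the Isometric Tilemap page referenced above.

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Tilemaps;

public class IsometricRenderSetup : MonoBehaviour
{
    [SerializeField] private TilemapRenderer tilemapRenderer;

    private void Awake()
    {
        // Sort tiles individually so other Sprite Renderers can interleave with them.
        tilemapRenderer.mode = TilemapRenderer.Mode.Individual;

        // Script equivalent of the Graphics settings described above.
        GraphicsSettings.transparencySortMode = TransparencySortMode.CustomAxis;
        GraphicsSettings.transparencySortAxis = new Vector3(0f, 1f, -0.26f);
    }
}
```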
https://docs.unity3d.com/2020.2/Documentation/Manual/Tilemap-Isometric-RenderModes.html
2021-01-16T04:11:12
CC-MAIN-2021-04
1610703499999.6
[]
docs.unity3d.com
If you cannot log in to your Liker ID with your social logins, please try the following steps to reset your password and see if that helps. Go to and make sure that you are logged out. You may go to the menu at the top right-hand corner and find the "Logout" option. Log in again; on the login box, select "Reset Password". Input your registered email address and press "Send". It should be the email address used during registration. If it doesn't work, e.g. "Can't find the email address or phone number" appears, please try the following: Use the same email address that you registered your social media profile with (Facebook, Google, Twitter); If your email address contains a dot ".", please delete the dot and try again, e.g. use [email protected] instead of [email protected]. Receive the password reset email in your mailbox and click the link in the email to reset your password. Input your new password twice, and click "Reset Password". You have successfully reset your password; go back to Liker Land and log in with your Liker ID. On the login box, use your email and password to log in. Please do not select the social logins above. After logging in, go to , click on Authcore → Security settings → Social logins to reset your Google, Facebook, Twitter logins. If the problem persists, please click on the green bubble at the lower right-hand corner on Like.co and our Customer Service will help.
https://docs.like.co/user-guide/liker-id/reset-password
2021-01-16T03:49:25
CC-MAIN-2021-04
1610703499999.6
[]
docs.like.co
May 02, 2015 Originally, on Wednesday 24 August, 2005: rdflib and SPARQL by Michel Pelletier: piece,. Subsquently, on 10 Oct 2005: SPARQL in RDFLib (Version 2.1) by Ivan Herman. — Intro Later still, on May 19 2006: SPARQL BisonGen Parser Checked in to RDFLib blog post by Chimezie I just checked in the most recent version of what had been an experimental, generated (see:) parser for the full SPARQL syntax, I had been working on to hook up with sparql-p. It parses a SPARQL query into a set of Python objects representing the components of the grammar: The parses itself is a Python/C extension, so the setup.py had to be modified in order to compile it into a Python module. I also checked in a test harness that’s meant to work with the DAWG test cases: I’m currently stuck on this test case, but working through it: The test harness only checks for parsing, it doesn’t evaluate the parsed query against the corresponding set of test data, but can be easily be extended to do so. I’m not sure about the state of those test cases, some have been ‘accepted’ and some haven’t. first Our integrated version of sparql-p is outdated as there is a more recent version that Ivan has been working on with some improvements we should consider integrating And later yet, on Sun, 01 Apr 2007 SPARQL Algebra, Reductions, Forms and Mappings for Implementations a post to public-sparql-dev by Chimezie I’ve been gearing up to an attempt at implementing the Compositional SPARQL semantics expressed in both the ‘Semantics of SPARQL’ and ‘Semantics and Complexity of SPARQL’ papers with the goal of reusing existing sparql-p which already implements much of the evaluation semantics. Some intermediate goals are were neccessary for the first attempt at such a design [1]: - Incorporate rewrite rules outlined in the current DAWG SPARQL WD - Incorporate reduction to Disjunctive Normal Form outlined in Semantics and Complexity of SPARQL - Formalize a mapping from the DAWG algebra notation to that outlined in Semantics of SPARQL - Formalize a mapping from the compositional semantics to sparql-p methods In attempting to formalize the above mappings I noticed some interesting parallels that I thought you and Ivan might be interested in (given the amount independent, effort that was put into both the formal semantics and the implementations). In particular The proposed disjunctive normal form of SPARQL patterns coincides directly with the ‘query’ API of sparql-p [2] which essentially implements evaluation of SPARQL patterns of the form:(P1 UNION P2 UNION .... UNION PN) OPT A) OPT B) ... OPT C) I.e., DNF extended with OPTIONAL patterns. In addition, I had suggested [3] to the DAWG that they consider formalizing a function symbol which relates a set of triples to the IRIs of the graphs in which they are contained. As Richard Newman points out, this is implemented [4] by most RDF stores and in RDFLib in particular by the ConjunctiveGraph.contexts method:contexts((s,p,o)) -> {uri1,uri2,...} I had asked their thoughts on performance impact on evaluating GRAPH patterns declaratively instead of imperatively (the way they are defined in both the DAWG semantics and the Jorge P. et. al papers) and I’m curious on your thoughts on this as well. Finally, an attempt at a formal mapping from DAWG algebra evaluation operators to the operators outlined in the Jorge P.et. 
al papers is below:merge(μ1,μ2) = μ1 ∪ μ2 Join(Omega1,Omega2) = Filter(R,Omega1 ⋉ Omega2) Filter(R,Omega) = [[(P FILTER R)]](D,G) Diff(Omega1,Omega2,R) = (Omega1 \ Omega2) ∪ {μ | μ in Omega1 ⋉ Omega2 and *not* μ |= R} Union(Omega1,Omega2) = Omega1 ∪ Omega2
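For readers who want to try the pattern shapes discussed in these posts, current rdflib ships SPARQL support out of the box, so UNION/OPTIONAL queries can be run directly against a Graph. The sketch below is illustrative only; the file name and FOAF data are placeholders, not part of the original posts.

```python
from rdflib import Graph

# Load some RDF data; "data.ttl" is a placeholder file name.
g = Graph()
g.parse("data.ttl", format="turtle")

# A query mixing UNION and OPTIONAL, the pattern forms analysed above.
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?mbox WHERE {
    { ?person foaf:name ?name }
    UNION
    { ?person foaf:givenName ?name }
    OPTIONAL { ?person foaf:mbox ?mbox }
}
"""

for row in g.query(query):
    print(row.name, row.mbox)
```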
http://rdfextras.readthedocs.io/en/latest/sparql/index.html
2018-06-18T05:14:33
CC-MAIN-2018-26
1529267860089.11
[]
rdfextras.readthedocs.io
Maven Ant Plugin This page provides a space for users to contribute examples, errata, tips and other useful information about the Maven Ant: <project> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-ant-plugin</artifactId> <version>2.1</version> <configuration> <!-- your example configuration here --> </configuration> </plugin> </plugins> </build> </project> How do I ...? You need to ...
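One starter example for this page: the plugin's main job is generating Ant build files from an existing Maven project. Assuming the plugin version configured above, the ant:ant goal writes a build.xml (plus supporting files) next to the POM; check the goal list of your plugin version if this has changed.

```sh
# Generate Ant build files from the current Maven project
mvn ant:ant

# List the targets in the generated build.xml and drive the build with Ant directly
ant -projecthelp
```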
http://docs.codehaus.org/display/MAVENUSER/Ant+Plugin
2009-07-04T14:48:50
crawl-002
crawl-002-005
[]
docs.codehaus.org
Set an administrator password The password must be at least four characters in length. A member can type the group password that you set for the group to become an administrator. Before you begin: To perform this task, you must be an administrator of the group.
http://docs.blackberry.com/en/smartphone_users/deliverables/13195/Set_an_administrator_password_825841_11.jsp
2014-04-16T08:08:19
CC-MAIN-2014-15
1397609521558.37
[]
docs.blackberry.com
We removed our free Sandbox April 25th. You can read more on our blog. How it Works¶ Background¶ Now that you have deployed a simple app, and a more involved app with a database, we can review the different steps happening behind a "dotcloud push". That should help you gain a deeper understanding of how the dotCloud platform works, and what’s behind our “zero-downtime pushes” feature. The Code Store¶ We started by adding a dotCloud Build File to our code and using the client to push the code to dotCloud. When you use the client to push code, the client will choose the best upload method for your app. For example, if you have a Git or Mercurial repository, the upload will be like a repository push. If you do not use a supported Version Control System, the client will upload your code directly comparing your local code with any previously uploaded code so that only changes are uploaded. Note You can learn more about the way the CLI chooses the best upload method and how to override it in the corresponding guide. Our internal name for the code store is just “uploader”. The Builder¶ Once new code has been uploaded, we start looking for your dotcloud.yml Build File, and we deploy the stack of services it describes on our dedicated build cluster. Each service in your stack is built accordingly to a specific set of rules (e.g: a Python service will run "pip" to install dependencies whereas a NodeJS service will run "npm"). In addition to these predefined rules, you can setup hooks to be executed before or after the build. Finally your application is packaged and stored for the deployment phase. You can see the builder in action in the "dotcloud push" output: […] ---> Building the application... [www] Build started for revision rsync-1339191773365 (clean build) [www] I am snapshotsworker_02/bob-2, and I will be your builder today. [www] Build completed successfully. Compiled image size is 427KB ---> Application build is done […] The Deployer¶ If your application built successfully, then the platform deploys your stack on the Sandbox, Live or Enterprise cluster depending on its flavor. It starts by initializing a full new stack of service, while the current one (if it’s not your first push on this application) are still running and serving traffic. When the new stack is initialized, the platform retrieves the application package from the builder and install it. Finally the postinstall hook is run on each service and the new stack is launched. Once this new version of the application is running and ready to accept requests, we seamlessly switch the traffic to it (unless you use a data directory, in that case a short downtime will happen while move it to the new service): […] ---> Initializing new services... (This may take a few minutes) ---> Using default scaling for service www (1 instance(s)). [] Initializing... [] Service initialized ---> All services have been initialized. Deploying code... [] Deploying build revision rsync-1339191773365... [] Running postinstall script... [] Launching... [] Waiting for the instance to become responsive... [] Re-routing traffic to the new build... [] Successfully deployed build revision rsync-1339191773365 ---> Deploy finished ---> Application fully deployed […] The Stack Runtime¶ During its lifetime, your application is continually monitored, and services automatically restarted when error conditions occur. 
Each service is independent of the others; and, more importantly, services never interact directly with the core platform, except when you deploy (push) new code. That means that your services won’t be impacted when we have to perform maintenance operations on the dotCloud API. Likewise, if you experience slow response times or errors with the dotCloud CLI or website, your websites will not be affected since they are decoupled from those components. You should now have a much better idea of how to dotCloud works. At this point, you may want to: - learn more about the dotcloud.yml Build File; - see some more complex examples in our tutorials;
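For reference, the dotcloud.yml Build File that drives the build and deploy phases described above looked roughly like this for a web-plus-database stack. Service names, types, and the approot key below are illustrative, not taken from this page.

```yaml
# dotcloud.yml - declares the stack that "dotcloud push" builds and deploys
www:
  type: python      # built on the build cluster (pip installs dependencies)
  approot: web      # illustrative sub-directory holding the app code
db:
  type: postgresql  # data service initialized alongside the stack
```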
http://docs.dotcloud.com/0.4/firststeps/how-it-works/
2014-04-16T07:13:36
CC-MAIN-2014-15
1397609521558.37
[]
docs.dotcloud.com
ResourcePendingMaintenanceActions Describes the pending maintenance actions for a resource. Contents - PendingMaintenanceActionDetails.PendingMaintenanceAction.N A list that provides details about the pending maintenance actions for the resource. Type: Array of PendingMaintenanceAction objects Required: No - ResourceIdentifier The ARN of the resource that has pending maintenance actions. Type: String Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
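As an illustration of how this object is consumed outside this reference page, the boto3 call below returns a list of these structures; the region and print logic are arbitrary choices.

```python
import boto3

# The RDS client exposes DescribePendingMaintenanceActions directly.
rds = boto3.client("rds", region_name="us-east-1")
response = rds.describe_pending_maintenance_actions()

# Each entry corresponds to one ResourcePendingMaintenanceActions object.
for resource in response["PendingMaintenanceActions"]:
    print("Resource:", resource["ResourceIdentifier"])
    for action in resource["PendingMaintenanceActionDetails"]:
        print("  Action:", action["Action"], "-", action.get("Description", ""))
```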
https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ResourcePendingMaintenanceActions.html
2020-11-23T20:26:34
CC-MAIN-2020-50
1606141164142.1
[]
docs.aws.amazon.com
C/C++ DataStax Enterprise Driver A driver built specifically for the DataStax Enterprise (DSE). It builds on the DataStax C/C++ driver for Apache Cassandra and includes specific features for DSE. This software can be used solely with DataStax Enterprise. See the License section below. Getting the Driver Binary versions of the driver, available for multiple operating systems and multiple architectures, can be obtained from our download server. Features - DSE authentication - Plaintext/DSE - LDAP - GSSAPI (Kerberos) - DSE geospatial types - DSE graph integration - DSE proxy authentication and execution - DSE DateRange Compatibility The DataStax C/C++ driver currently supports DataStax Enterprise 4.8+ and its graph integration only supports DataStax Enterprise 5.0+. Disclaimer: DataStax products do not support big-endian systems. Documentation Getting Help - JIRA: (Assign “Component/s” field set to “DSE”) - Mailing List: - DataStax Academy via Slack: Feedback Requested Help us focus our efforts! Provide your input on the DSE C/C++ Driver Platform and Runtime Survey (we kept it short). Examples Examples for using the driver can be found in our examples repository. A Simple Example License
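To accompany the "A Simple Example" heading above, here is a minimal connect-and-query sketch using the driver's C API. The cass_* calls are the standard DataStax C/C++ driver functions; the contact point and query are placeholders, and error handling is trimmed for brevity.

```c
#include <dse.h>
#include <stdio.h>

int main(void) {
  /* Configure the cluster and create a session. */
  CassCluster* cluster = cass_cluster_new();
  CassSession* session = cass_session_new();
  cass_cluster_set_contact_points(cluster, "127.0.0.1");

  CassFuture* connect_future = cass_session_connect(session, cluster);
  if (cass_future_error_code(connect_future) == CASS_OK) {
    /* Run a simple query against the system tables. */
    CassStatement* statement =
        cass_statement_new("SELECT release_version FROM system.local", 0);
    CassFuture* result_future = cass_session_execute(session, statement);

    if (cass_future_error_code(result_future) == CASS_OK) {
      const CassResult* result = cass_future_get_result(result_future);
      const CassRow* row = cass_result_first_row(result);
      const char* version;
      size_t length;
      cass_value_get_string(cass_row_get_column(row, 0), &version, &length);
      printf("release_version: %.*s\n", (int)length, version);
      cass_result_free(result);
    }

    cass_statement_free(statement);
    cass_future_free(result_future);
  }

  cass_future_free(connect_future);
  cass_session_free(session);
  cass_cluster_free(cluster);
  return 0;
}
```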
https://docs.datastax.com/en/developer/cpp-dse-driver/1.7/
2020-11-23T20:08:20
CC-MAIN-2020-50
1606141164142.1
[]
docs.datastax.com
Best practices for DataStax drivers These rules and recommendations improve performance and minimize resource utilization in applications that use DataStax drivers. Use a single session object per application Create and reuse a single session for the entire lifetime of an application. Sessions are expensive to create because they initialize and maintain connection pools to every node in a cluster. A single driver session can handle thousands of queries concurrently. Use a single driver session to execute all the queries in an application. Using a single session per cluster allows the drivers to coalesce queries destined for the same node, which can significantly reduce system call overhead. Use a single cluster object per physical cluster Run queries asynchronously for higher throughput Use the driver's asynchronous APIs to achieve maximum throughput. The asynchronous APIs provide execution methods that return immediately without blocking the application’s progress, allowing a single application thread to run many queries concurrently. Asynchronous execution methods return future objects that can be used by the application to obtain query results and errors if they occur. Running many queries concurrently allows applications to optimize their query processing, improves the driver’s ability to coalesce query requests, and maximizes use of server-side resources. Use prepared statements for frequently run queries Prepare queries that are used more than once. Preparing queries allows the server and driver to reduce the amount of processing and network data required to run a query. For prepared statements, the server parses the query once and it is then cached for the lifetime of an application. The server also avoids sending response metadata after the initial prepare step, which reduces the data sent over the network and the corresponding client side processing. Explicitly set the local datacenter when using a datacenter-aware load balancing policy When using a datacenter-aware load balancing policy, your application should explicitly set the local datacenter instead of allowing the drivers to infer the local datacenter from the contact points. If the driver chooses the wrong local datacenter, it increases cross-datacenter traffic, which often has higher latency and is more monetarily expensive than intra-datacenter traffic. Setting the local datacenter explicitly eliminates the chance that the driver will choose the wrong local datacenter. When configuring a driver connection, it is easy to include contact points in remote datacenters or invalid datacenters. For example, an application might include contact points for an internal datacenter used during testing. Explicitly setting the local datacenter avoids these types of errors. Avoid delete workloads and writing nulls In DSE and Cassandra, a tombstone is a marker that indicates that table data is logically deleted. DSE and Cassandra store updates to tables in immutable SSTable files to maintain throughput and avoid reading stale data. Deleted data, time-to-live (TTL) data, and null values will create tombstones, which allows the database to reconcile the logically deleted data with new queries across the cluster. While tombstones are a necessary byproduct of a distributed database, limiting the number of tombstones and avoiding tombstone creation increases database and application performance.
Deletes can often be avoided through data modeling techniques. Nulls can be avoided with proper query construction. For more details on tombstones, see this article on DataStax Academy. Heavy deletes and nulls use extra disk space and decrease performance on reads. Tombstones can cause warnings and log errors. For example, in the following schema: CREATE TABLE test_ks.my_table_compound_key ( primary_key text, clustering_key text, regular_col text, PRIMARY KEY (primary_key, clustering_key) ) This query results in no tombstones for regular_col: INSERT INTO my_table_compound_key (primary_key, clustering_key) VALUES ('pk1', 'ck1'); However this query results in a tombstone for regular_col: INSERT INTO my_table_compound_key (primary_key, clustering_key, regular_col) VALUES ('pk1', 'ck1', null);
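To tie several of these recommendations together (single long-lived session, explicit local datacenter, prepared statements, asynchronous execution), here is a sketch using the Python driver. Contact points, datacenter name, keyspace, and table are placeholders.

```python
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

# One Cluster object per physical cluster, with the local datacenter set explicitly.
profile = ExecutionProfile(
    load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy(local_dc="dc1"))
)
cluster = Cluster(["10.0.0.1"], execution_profiles={EXEC_PROFILE_DEFAULT: profile})

# One Session per application, reused for every query.
session = cluster.connect("my_keyspace")

# Prepare once, execute many times.
insert = session.prepare("INSERT INTO users (id, name) VALUES (?, ?)")

# Run queries asynchronously for higher throughput.
futures = [session.execute_async(insert, (i, f"user-{i}")) for i in range(100)]
for future in futures:
    future.result()  # surfaces any errors

cluster.shutdown()
```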
https://docs.datastax.com/en/devapp/doc/devapp/driversBestPractices.html
2020-11-23T20:05:10
CC-MAIN-2020-50
1606141164142.1
[array(['images/driversAsyncQueries.png', 'Asynchronous queries'], dtype=object) ]
docs.datastax.com
View the insights into your data, through the ThoughtSpot Pinboard. Explore the Pinboard that your administrator recommended to you in ThoughtSpot’s business user onboarding. Notice who created this Pinboard, and who uses it. You can follow this Pinboard, so you get notifications for major changes to the data it represents. With Pinboards, you can drill down to a chart or table’s underlying data for a deeper understanding, or follow an AI-guided exploration of a visualization or headline metric using Answer Explorer. You can filter on the Pinboard’s data, or on an individual visualization’s data. Watch this video on Pinboards: Next steps Next, exit the onboarding experience by clicking Go to homepage, and start gaining actionable insights by searching your data in ThoughtSpot.
https://cloud-docs.thoughtspot.com/admin/ts-cloud/business-user-pinboard-view.html
2020-11-23T19:41:52
CC-MAIN-2020-50
1606141164142.1
[]
cloud-docs.thoughtspot.com
Historic data is stored in a raw unprocessed format with an optional quality code and annotation. Display, edit and export data in variety of tabular and chart formats. Apply historic aggregates to process the raw data and apply specific calculations or use processing and logic for advanced statistical analysis and decision making. Use the eagle.io HTTP API to automate importing or extracting data. Data editing can be performed in both historic table and historic chart formats. Eagle.io supports the acquisition and storage of up to 20000 records per Data Source per day. Exceeding the limit will trigger an Overload Alarm on the Source. Refer to Historic Data Limits for more information.
https://docs.eagle.io/en/latest/topics/historic_data/index.html
2020-11-23T18:31:42
CC-MAIN-2020-50
1606141164142.1
[]
docs.eagle.io
Let's start with creating the plugin. Head over to the directory <shopware root>/custom/plugins. The plugin's directory must be named after the plugin, so in this scenario SwagBundleExample is used throughout the full tutorial. Each plugin is defined by a composer.json file, which contains a plugin's name, version, requirements and many more meta-information. Those of you familiar with composer might have figured out what's going on here already: Each plugin you write can be used by composer just like any other composer package - thus every property mentioned here can be used in your plugin's composer.json as well. Create this file inside your new directory SwagBundleExample and head over to the next step. So what do you need in your plugin's meta information? Each composer package comes with a technical name as its unique identifier if you were to publish your plugin using composer. Note: Don't worry - just because your plugin is basically a composer package, it won't be published because of that. That's still up to you, you'll easily be able to do so though. { "name": "swag/bundle-example" } The naming pattern is the very same like the one recommended by composer: It consists of vendor name and project name, separated by / The vendor name can also be your vendor prefix, Shopware for example uses swag here. The project name should be separated by a -, often referred to as kebab-case. So, what else would you need in general? A description, a version, the used license model and probably the author. After having a look at the composer schema once more, your composer.json could look like this: { "name": "swag/bundle-example", "description": "Bundle example", "version": "v1.0.0", "license": "MIT", "authors": [ { "name": "shopware AG", "role": "Manufacturer" } ] } All of those values being used in the example are mostly used by composer. Yet, there are plenty more values, that are required by Shopware 6, so let's have a look at them as well. First of all you can define a type, which has to be shopware-platform-plugin here. { ... "type": "shopware-platform-plugin" } Your plugin won't be considered to be a valid plugin if you do not set this value. The next value would be the autoload property, which works exactly like described on the documentation linked above. In short: You're defining your plugin's location + namespace in there. This allows you to structure your plugin code the way you want. Since, as mentioned earlier in this tutorial, every plugin is also a composer package, we want it to look like most other composer packages do. Their directory naming is mostly lowercase and most of them store their main code into a src directory, just like our Shopware platform code itself. While you're free to structure your plugin in whichever way you want, we recommend you to do it this way. { ... "autoload": { "psr-4": { "Swag\\BundleExample\\": "src/" } }, } Also required is the related namespace you want to use in your plugin. Usually you'd want it to look something like this: YourVendorPrefix\YourPluginName Last but not least is the extra property, which can fit ANY value. Shopware 6 is using it for fetching a few more meta information, such as a label and a plugin-icon path. Another important value is the fully qualified class name (later referred to as 'FQCN') of your plugin's base class, so Shopware 6 knows where to look for your plugin's base class. 
This is necessary, since due to your freedom to setup your plugin structure yourself, Shopware 6 also has no clue where your plugin's base class could be. { ... "extra": { "shopware-plugin-class": "Swag\\BundleExample\\BundleExample", "copyright": "(c) by shopware AG", "label": { "de-DE": "Beispiel für Shopware", "en-GB": "Example for Shopware" } } } Here's what the final composer.json looks like once all values described were set. { "name": "swag/bundle-example", "description": "Bundle example", "version": "v1.0.0", "license": "MIT", "authors": [ { "name": "shopware AG", "role": "Manufacturer" } ], "type": "shopware-platform-plugin", "autoload": { "psr-4": { "Swag\\BundleExample\\": "src/" } }, "extra": { "shopware-plugin-class": "Swag\\BundleExample\\BundleExample", "copyright": "(c) by shopware AG", "label": { "de-DE": "Beispiel für Shopware", "en-GB": "Example for Shopware" } } } In order to get a fully functional plugin running, we still need the plugin's base class. As you probably noticed from the composer.json, our main source is going to be in a src directory with the namespace Swag\BundleExample. So that's also where the plugin's base class will be at, so create a new file named after your plugin in the <plugin root>/src directory. In this example, it will be named BundleExample: <?php declare(strict_types=1); namespace Swag\BundleExample; use Shopware\Core\Framework\Plugin; class BundleExample extends Plugin { } Your plugin base class always has to extend from Shopware\Core\Framework\Plugin in order to work properly. The namespace and class name are set as defined in the composer.json. That's it for now, the plugin would already be recognized by Shopware 6 and is installable. Now it's time to check if everything was done correctly until this point. First you have to refresh the plugins. ./bin/console plugin:refresh Try to install your new plugin in the Pluginmanager in the Administration. You can find the Pluginmanager under "Settings" > "System" > "Plugins". If you're more into using the CLI, you can also execute the following command from inside your development template root. ./bin/console plugin:install --activate --clearCache BundleExample If everything was done right, it should install without any issues. Head over to the next step to create new database tables for your plugin using migrations.
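Although the empty base class above is already installable, most plugins end up overriding lifecycle methods. As a hedged preview (check the context classes against the Shopware 6 version you target), an install hook would look roughly like this:

```php
<?php declare(strict_types=1);

namespace Swag\BundleExample;

use Shopware\Core\Framework\Plugin;
use Shopware\Core\Framework\Plugin\Context\InstallContext;

class BundleExample extends Plugin
{
    public function install(InstallContext $installContext): void
    {
        parent::install($installContext);

        // Plugin-specific setup, e.g. creating default configuration, goes here.
    }
}
```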
https://docs.shopware.com/en/shopware-platform-dev-en/how-to/indepth-guide-bundle/setup
2020-11-23T19:25:58
CC-MAIN-2020-50
1606141164142.1
[]
docs.shopware.com
Programmable Boards and Modules This information pertains to Tibbo's programmable IoT boards (EM1001 and EM2001) and embedded modules (EM500, EM510, EM1000, EM1202, EM1206, EM2000, and WM2000). Updating these devices via a serial port can be tricky, as they don't have proper RS232 ports, but only "TTL/CMOS-level" UART(s). There is an easy solution for Tibbo modules that have not been embedded in a host device: Such modules can be upgraded using their evaluation (EV) boards. We offer EV boards for every model of our programmable modules. Unfortunately, there are no EV boards for our programmable boards. Manual wiring Below is a step-by-step guide to wiring a board or module for a serial upgrade. Only two lines are required: TX and RX. Since Tibbo boards and modules have TTL/CMOS-level UARTs, an RS232 transceiver (MAX232 or similar) is necessary to connect the device's TX and RX lines to your PC's COM port (or a USB-to-serial adapter). To begin the update, boot into the Monitor/Loader by powering on the device while pulling the MD line LOW. For example, you can connect a push button between the MD line and the ground. You will also need a power source that provides regulated 3.3V power. The EM1001, EM2001, EM2000, and WM2000 have built-in green, red, and yellow status LEDs. For other devices, you can connect the LEDs externally. The above diagram illustrates how to wire the EM1000 module for a serial upgrade, but a similar arrangement can be used for other boards and modules.
https://docs.tibbo.com/phm/ml_xmodem_boards_and_modules
2020-11-23T19:38:22
CC-MAIN-2020-50
1606141164142.1
[]
docs.tibbo.com
Remove a device from the watchlist You can remove devices that are on the watchlist from the Analysis Priorities page. - At the top of the page in the Advanced Analysis Watchlist section, click View the Watchlist. The Watchlist page appears and displays all the devices on the watchlist. - To remove devices from the watchlist, complete the following steps: - Select the checkbox next to the device name. - Click Remove Devices. - Click Save.
https://docs.extrahop.com/current/watchlist-remove/
2021-10-16T02:10:33
CC-MAIN-2021-43
1634323583408.93
[]
docs.extrahop.com
API Builder 4.x Save PDF Selected topic Selected topic and subtopics All content Go to Amplify Platform getting started and onboarding roadmap API Builder Getting Started Guide Prerequisites You should have NPM (recommended minimum v6.14.13), and Node.js (recommended minimum v14.17.0 LTS) installed. API Builder v4 maintains compatibility with a minimum of Node.js v8.9, however, using a version older than v14.17.0 is not recommended. See the Node.js support policy for more information. Minimum requirements These are the absolute minimum requirements for running an API Builder service. Memory and disk usage may fluctuate over time and between releases. Recommended system specs should be significantly higher to account for additional plugins, inbound requests and custom service logic. Development Production HDD 110MB 80MB RAM 45MB 40MB Getting started This section describes installing the Axway command line interface (CLI) and the API Builder CLI. The API Builder CLI is installed using the Axway CLI. We first describe how to install the Axway CLI, and then the API Builder CLI. Install the Axway CLI globally Refer to the Axway CLI documentation for more details. Install Axway CLI npm install -g axway Verify the Axway CLI installation by running the following command. axway pm list If you run into issues getting the Axway CLI working, see the Axway CLI troubleshooting guide. You may need to check that NPM and Node.js were installed correctly. Install the API Builder CLI The API Builder CLI is used to create new API Builder projects and plugins. Refer to API Builder CLI documentation for more details. Install API Builder axway pm install @axway/amplify-api-builder-cli Create a new API Builder project Once API Builder CLI is installed, you can use it to create a new project. In the following example, the CLI will create and initialize a project called myproject in the ./myproject directory. Initialize a new project axway builder init myproject Then, start the API Builder project. Run project cd myproject npm start Using version control We recommend using version control for tracking changes. One popular version control solution is git. API Builder projects come with a .gitignore file which is used to ignore files and folders that should be downloaded or created as part of an install or build. These files and folders should not be versioned or distributed when sharing a project, as they can sometimes include sensitive information, or take up a large file size compared to the rest of the project (for example 150MB vs 30kb). When cloning a project from elsewhere, you should ensure that all project dependencies, including API Builder itself, are installed before you can start the service. From the project directory run: Install dependencies npm ci If you have issues running this command, it may be because the project is missing a package-lock.json file which specifies an exact dependency tree for reproducible installs. If this is the case, then run npm install instead. Updating API Builder New versions of API Builder are released every 2 weeks, often containing important fixes and features. Each API Builder project depends on it's own version of API Builder which allows you to update your services individually. To update API Builder run the following command from your project directory. This will also update any additional dependencies in your project. 
Update dependencies npm update If you're going to be creating more API Builder projects, it's important to use the latest API Builder CLI so that your new projects use the latest template. Update Axway CLI and API Builder CLI npm install -g axway axway pm update For more information about fixes and features in new versions, see API Builder Release Notes. From time to time there will be changes to new projects and plugins. To keep your service in sync with these changes, see API Builder Updates. Introduction to the UI Once your project is running, point your browser to to access the API Builder user interface (UI) console. Upon reviewing the API Builder console, you can navigate to the following items. Summary Your application's admin home page. API Doc & Test Auto-generated documentation about your API endpoints. Provides help for the client application to access your application. Flows Lists flows that are part of your service and lets you manage them. Models Interface to help you build models. A model represents data stored from another source. Configuration Lists configuration files that you can modify and save within a browser. Credentials Lists the currently configured credentials. Plugins Lists available and installed plugins to extend the core functionality of API Builder, and that can be used to connect to different data sources and services or enhance the Flow editor. View Documentation Links to the Axway documentation for API Builder. Sidebar toggle Toggles the width of the sidebar. To quickly navigate to the Summary tab, click on the Axway icon or click on API Builder. Advanced startup You can choose which configuration values you want to be configurable from the environment by explicitly setting them in your conf/default.js using process.env. For example, to make the log level configurable, you could do: Example environmental configuration variables // Log level of the main logger logLevel: process.env.LOG_LEVEL || 'debug', This allows you to create containers for your application that can be configured when the container is started. The PORT is already environmentalized, so if you wish to launch API Builder on a different port, you can set PORT as an environment variable. For example, on Unix: Change port via env // The port for the UI $ PORT=8000 npm start However, we recommend that you do not change the environmentalized port configuration in conf/default.js as this value is used when using Docker containers. Environmentalization guide explains how to quickly set values to the environment variables referred to in the configuration files so they can be used during the development of the service. Further reading Once you are familiar with startup and the UI, be sure to read the Best Practices guide as it will help guide your next phase of development. Related Links
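Building on the logLevel example above, any other configuration value can be environmentalized the same way. The key below (myBackendUrl) is made up purely for illustration; only add keys that your own flows or endpoints actually read.

```js
// conf/default.js (excerpt)
module.exports = {
  // Log level of the main logger, overridable via LOG_LEVEL
  logLevel: process.env.LOG_LEVEL || 'debug',

  // Hypothetical custom setting used by your own service logic,
  // overridable via MY_BACKEND_URL when the container starts
  myBackendUrl: process.env.MY_BACKEND_URL || 'http://localhost:8081'
};
```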
https://docs.axway.com/bundle/API_Builder_4x_allOS_en/page/api_builder_getting_started_guide.html
2021-10-16T03:33:46
CC-MAIN-2021-43
1634323583408.93
[]
docs.axway.com
Prerequisites for Intune functionality Prerequisites for Intune functionality. Chocolatey Configuration 📝 NOTE As the Intune commands are in preview, ensure you enable the allowPreviewFeaturesfeature by using the command: choco feature enable --name=allowPreviewFeatures The Intune commands need to know what your Intune tenant is, and there are two options: - Specify the tenant each time you push a package using the --sourceswitch; - Store the tenant information in the Chocolatey configuration using the command choco config set --name=intuneTenantGUID --value=<INTUNE TENANT GUID>. The GUID is available on the Azure AD Application page. Chocolatey Packages To push packages for the first time, you will need chocolatey and chocolatey.extension to be in the same directory as the Chocolatey package you want to push. If you don't already have these downloaded, you can download it to the current directory with the command: choco download chocolatey chocolatey.extension --internalize Additionally, it is recommended to have already installed the Chocolatey packages intunewinapputil and azcopy10 before converting or pushing other Chocolatey packages. If these do not exist, Chocolatey will try to install them from available sources.
https://docs.chocolatey.org/en-us/licensed-extension/intune/prerequisites
2021-10-16T03:00:27
CC-MAIN-2021-43
1634323583408.93
[]
docs.chocolatey.org
Pie layered chart Overview The layered pie chart is a pie chart with concentric layers to show successive levels of data. What data do I need for this widget? The option to create this chart will be disabled unless your query contains at least one column with numeric values. Furthermore, to show meaningful content on the chart, you must group your data. Creating a pie layered chart Go to Data Search and open the required table. Perform the required operations to get the data you want to use in the chart. Click the gear icon on the toolbar and select Charts → Diagrams → Pie Layered Chart. Click and drag the column headers to the corresponding fields. This diagram requires you to specify at least two fields: You can drag more than one column to the Signals field and each of them will be represented as a new outer layer. - The pie layered chart is displayed. Working with pie layered charts Hover over a segment on the chart to see the values of the assigned fields and the percentage of the total they represent. Click on a segment to analyze it in detail, thus hiding all internal layers. A new version of the chart is generated consisting of the chosen segment and the corresponding outer-layer segments. Click the inner circle to show the full chart again. Query example You can recreate the example shown in the picture above with the data from the following query and mapping the fields as follows: from demo.ecommerce.data group every 5m by statusCode, bytesTransferred every 5m In case you want another example with more than one argument to add, here is another query to create it: from demo.ecommerce.data group every 5m by method, statusCode, bytesTransferred every 5m
https://docs.devo.com/confluence/ndt/v7.1.1/searching-data/working-in-the-search-window/generate-charts/pie-layered-chart
2021-10-16T02:11:21
CC-MAIN-2021-43
1634323583408.93
[]
docs.devo.com
What’s New in Office 2010 and SharePoint 2010 (Technical Preview post)
https://docs.microsoft.com/en-us/archive/blogs/erikaehrli/whats-new-in-office-2010-and-sharepoint-2010-technical-preview-post
2021-10-16T04:03:01
CC-MAIN-2021-43
1634323583408.93
[]
docs.microsoft.com
Click the edit icon on the right side of the screen. Click Advanced Type..
https://docs.trendmicro.com/en-us/enterprise/deep-discovery-director-(consolidated-mode)-35-online-help/detections/affected-hosts/affected-hosts-advan/about-affected-hosts/editing-an-affected-.aspx
2021-10-16T03:54:25
CC-MAIN-2021-43
1634323583408.93
[]
docs.trendmicro.com
Review details about the selected user's account. Figure: Workspace User Details Page Group membership Any group assignments are listed in this section. For more information on groups, see Configure Users and Groups. Roles The workspace roles assigned to the user are listed. For more information, see Workspace Roles Page. Privileges In this section, you can review the maximal set of privileges that are assigned to the user. - Privileges are additive. - For more information, see Privileges and Roles Reference. User Details Information on the current status and recent activity of the user. If the user has any platform roles, they are listed here. These roles can be enabled or disabled when you edit the user. For more information, see Workspace Users Page.
https://docs.trifacta.com/display/r076/Workspace+User+Details+Page?reload=true
2021-10-16T03:13:25
CC-MAIN-2021-43
1634323583408.93
[]
docs.trifacta.com
Data address regulatory compliance by ensuring that healthcare organizations have instant and uninterrupted access to the data they need to provide quality patient care. Treating Patients with Answers, not. For more info, watch UTHealth Case Study, read customer stories from JPS Health Network, CGH Medical Center, and New York Presbyterian Hospital.
http://docs.virtualinstruments.com/healthcare-solutions/linkedin/
2018-10-15T17:50:09
CC-MAIN-2018-43
1539583509336.11
[]
docs.virtualinstruments.com
Modules Introduction Antares has been designed to help developers deliver modular and scalable applications. To fulfill that goal, Antares functionality is separated into packages called modules. Modules are like mini-applications within the system; they can be treated as the blocks from which the whole application is built. An additional benefit of a modular architecture is that code written once can be reused. Modular applications Modular applications require a slightly different approach than classic, non-modular ones: Modules must keep a similar form. Data flow between the modules must be controlled by dedicated interfaces, implemented on the main system engine. The key is to design the module so it can be used in multiple projects, not only a particular one, so you and other contributors avoid code duplication and reinventing the wheel. In the long term, this approach will make your life and others' easier. Every module in Antares can handle the following aspects: - Navigation control (breadcrumbs, menus, placeholders, panes, etc.) - provides browsing between the module views and other modules. - Views - the presentation layer, responsible for delivering the graphical user interface (GUI). - Actions - working with data, classic CRUD (create, read, update, delete). - Data binding - a separate data layer that maintains independence between views and the database. Antares modules In Antares, modules are divided into two groups: Core modules This group includes modules responsible for delivering functionality within the application core - the heart of Antares. These modules are required for every Antares environment. Currently there are 5 core modules: - Automation - used to execute cyclic operations based on the Laravel task scheduler. - Acl - designed to manage users' access to resources. - Logger - responsible for gathering logs coming from different parts of the system. - Notifications - used to send notifications to end users. - Translations - language and translations manager. Additional modules These extend application functionality that is not part of the main Antares branch and are not required. You may want to use them or not, depending on the project type. Please note: The Antares Module structure follows the Laravel package standard with very slight improvements. If you know how to make a package for Laravel, it will be easy for you to build modules for Antares. Making your own module If you'd like to make your own Antares Module, we suggest following one of these paths: - Read the Module Development documentation articles. Start with the Module Base. - Follow the step-by-step tutorial of building a Sample Module.
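Because Antares modules follow the Laravel package standard, the minimal skeleton of a module centers on a service provider. The sketch below is plain Laravel, not Antares-specific API; the vendor, namespace, and paths are illustrative assumptions.

```php
<?php

namespace Vendor\SampleModule;

use Illuminate\Support\ServiceProvider;

// Hypothetical provider for an example "SampleModule" package.
class SampleModuleServiceProvider extends ServiceProvider
{
    // Boot routes, views, and translations once all providers are registered.
    public function boot()
    {
        $this->loadRoutesFrom(__DIR__ . '/../routes/web.php');
        $this->loadViewsFrom(__DIR__ . '/../resources/views', 'sample-module');
        $this->loadTranslationsFrom(__DIR__ . '/../resources/lang', 'sample-module');
    }
}
```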
http://www.docs.antaresproject.io/php-framework/0.9.2/antares_concepts/modules/index.html%23antares-modules
2018-10-15T16:51:40
CC-MAIN-2018-43
1539583509336.11
[]
www.docs.antaresproject.io
Syntax General rules of Perl 6 syntax Perl 6 borrows many concepts from human language. Which is not surprising, considering it was designed by a linguist. It reuses common elements in different contexts, has the notion of nouns (terms) and verbs (operators), is context-sensitive (in the every day sense, not necessarily in the Computer Science interpretation), so a symbol can have a different meaning depending on whether a noun or a verb is expected. It is also self-clocking, so that the parser can detect most of the common errors and give good error messages. Lexical conventions Perl 6 code is Unicode text. Current implementations support UTF-8 as the input encoding. See also Unicode versus ASCII symbols. Free form Perl 6 code is also free-form, in the sense that you are mostly free to chose the amount of whitespace you use, though in some cases, the presence or absence of whitespace carries meaning. So you can write if True or if True or if True or even if True though you can't leave out any of the remaining whitespace. Unspace In many places where the compiler would not allow a space you can use any amount of whitespace, as long as it is quoted with a backslash. Unspaces in tokens are not supported. Newlines that are unspaced still count when the compiler produces line numbers. Use cases for unspace are separation of postfix operators and routine argument lists. sub alignment(+) ;sub long-name-alignment(+) ;alignment\ (1,2,3,4).say;long-name-alignment(3,5)\ .say;say Inf+Inf\i; In this case, our intention was to make the . of both statements, as well as the parentheses, align, so we precede the whitespace used for padding with a \. Separating statements with semicolons A Perl 6 program is a list of statements, separated by semicolons ;. say "Hello";say "world"; A semicolon after the final statement (or after the final statement inside a block) is optional. say "Hello";say "world" if Truesay "world" Implied separator rule (for statements ending in blocks) Complete statements ending in bare blocks can omit the trailing semicolon, if no additional statements on the same line follow the block's closing curly brace }. This is called the "implied separator rule". For example, you don't need to write a semicolon after an if statement block as seen above, and below. if Truesay "world"; However, semicolons are required to separate a block from trailing statements in the same line. if True ; say "world";# ^^^ this ; is required This implied statement separator rule applies in other ways, besides control statements, that could end with a bare block. For example, in combination with the colon : syntax for method calls. my = <Foo Bar Baz>;my = .map: # OUTPUT: [FOO BAR BAZ] For a series of blocks that are part of the same if/ elsif/ else (or similar) construct, the implied separator rule only applies at the end of the last block of that series. These three are equivalent: if True else ; say "world";# ^^^ this ; is required if True else # <- implied statement separatorsay "world"; if True # still in the middle of an if/else statementelse # <- no semicolon required because it ends in a block# without trailing statements in the same linesay "world"; Comments are parts of the program text which are only intended for human readers; the Perl 6 compilers do not evaluate them as program text. Comments count as whitespace in places where the absence or presence of whitespace disambiguates possible parses. 
Single-line comments The most common form of comments in Perl 6 starts with a single hash character # and goes until the end of the line. if > 250 Multi-line / embedded comments Multi-line and embedded comments start with a hash character, followed by a backtick, and then some opening bracketing character, and end with the matching closing bracketing character. The content can not only span multiple lines, but can also be embedded inline. if #`( why would I ever write an inline comment here? ) True These comments can extend multiple lines #`[And this is how a multi would work.That says why we do what we do below.]say "No more"; Brackets inside the comment can be nested, so in #`{ a { b } c }, the comment goes until the very end of the string. You may also use more complex brackets/braces, such as #`{{ double-curly-brace }}, which might help disambiguate from nested brackets/braces. You can embed these comments in expressions, as long as you don't insert them in the middle of keywords or identifiers. Pod comments Pod syntax can be used for multi-line comments say "this is code";=begin commentHere are severallinesof comment=end commentsay 'code again'; identifiers, extended identifiers, string». Statements and expressions Perl 6 programs are made of lists of statements. A special case of a statement is an expression, which returns a value. For example if True { say 42 } is syntactically a statement, but not an expression, whereas 1 + 2 is an expression (and thus also a statement). The do prefix turns statements into expressions. So while my = if True ; # Syntax error! is an error, my = do if True ; assigns the return value of the if statement (here 42) to the variable $x. Terms Terms are the basic nouns that, optionally together with operators, can form expressions. Examples for terms are variables ( $x), barewords such as type names ( Int), literals ( 42), declarations ( sub f() { }) and calls ( f()). For example, in the expression 2 * $salary, 2 and $salary are two terms (an integer literal and a variable). Variables Variables typically start with a special character called the sigil, and are followed by an identifier. Variables must be declared before you can use them. # declaration:my = 21;# usage:say * 2; See the documentation on variables for more details. Barewords (constants, type names) Pre-declared identifiers can be terms on their own. Those are typically type names or constants, but also the term self which refers to an object that a method was called on (see objects), and sigilless variables: say Int; # OUTPUT: «(Int)␤»# ^^^ type name (built in)constant answer = 42;say answer;# ^^^^^^ constantsay Foo.type-name; # OUTPUT: «Foo␤»# ^^^ type name Packages and qualified names Named entities, such as variables, constants, classes, modules or subs, are part of a namespace. Nested parts of a name use :: to separate the hierarchy. Some examples: # simple identifiers::Bar::baz # compound identifiers separated by ::::()::baz # compound identifiers that perform interpolationsFoo::Bar::bob(23) # function invocation given qualified name See the documentation on packages for more details. Literals A literal is a representation of a constant value in source code. Perl 6 has literals for several built-in types, like strings, several numeric types, pairs and more. String literals String literals are surrounded by quotes: say 'a string literal';say "a string literal\nthat interprets escape sequences"; See quoting for many more options, including the escaping quoting q. 
Perl 6 uses the standard escape characters in literals: \a \b \t \n \f \r \e, with the same meaning as the ASCII escape codes, specified in the design document. say "🔔\a";# OUTPUT: «🔔␇␤» Number literals Number literals are generally specified in base ten,; they don't carry any semantic information; the following literals all evaluate to the same number: 10000001_000_00010_00000100_00_00 Int literals Integers default to signed base-10, but you can use other bases. For details, see Int. # actually not a and not 6.123 * 10 ** (5i) Pair literals Pairs are made of a key and a value, and there are two basic forms for constructing them: key => 'value' and :key('value'). Arrow pairs Arrow pairs can have an expression or an identifier on the left-hand side: identifier => 42"identifier" => 42('a' ~ 'b') => 1 Adverbial pairs (colon pairs) Short forms without explicit values: my = 42;: # same as thing => $thing:thing # same as thing => True:!thing # same as thing => False The variable form also works with other sigils, like :&callback or :@elements. Long forms with explicit values: :thing() # same as thing => $value:thing<quoted list> # same as thing => <quoted list>:thing['some', 'values'] # same as thing => ['some', 'values']:thing # same as thing => { a => 'b' } Array literals A pair of square brackets can surround an expression to form an itemized Array literal; typically there is a comma-delimited list inside: say ['a', 'b', 42].join(' '); # OUTPUT: «a b 42␤»# ^^^^^^^^^^^^^^ Array constructor If the constructor is given a single Iterable, it'll clone and flatten it. If you want an Array with just 1 element that is that Iterable, ensure to use a comma after it: my = 1, 2;say [].perl; # OUTPUT: «[1, 2]␤»say [,].perl; # OUTPUT: «[[1, 2],]␤» The Array constructor does not flatten other types of contents. Use the Slip prefix operator ( |) to flatten the needed items: my = 1, 2;say [, 3, 4].perl; # OUTPUT: «[[1, 2], 3, 4]␤»say [|, 3, 4].perl; # OUTPUT: «[1, 2, 3, 4]␤» Hash literals A leading associative sigil and pair of parenthesis %( ) can surround a List of Pairs to form a Hash literal; typically there is a comma-delimited List of Pairs inside. If a non-pair is used, it is assumed to be a key and the next element is the value. Most often this is used with simple arrow pairs. say %( a => 3, b => 23, :foo, :dog<cat>, "french", "fries" );# OUTPUT: «a => 3, b => 23, dog => cat, foo => True, french => fries␤»say %(a => 73, foo => "fish").keys.join(" "); # OUTPUT: «a foo␤»# ^^^^^^^^^^^^^^^^^^^^^^^^^ Hash constructor When assigning to a % sigiled variable on the left-hand side, the sigil and parenthesis surrounding the right-hand side Pairs are optional. my = fred => 23, jean => 87, ann => 4; By default, keys in %( ) are forced to strings. To compose a hash with non-string keys, use curly brace delimiters with a colon prefix :{ } : my = :; Note that with objects as keys, you cannot access non-string keys as strings: say :<0>; # OUTPUT: «(Any)␤»say :; # OUTPUT: «42␤» Regex literals A Regex is declared with slashes like /foo/. Note that this // syntax is shorthand for the full rx// syntax. /foo/ # Short versionrx/foo/ # Longer versionQ :regex /foo/ # Even longer versionmy $r = /foo/; # Regexes can be assigned to variables Signature literals Signatures can be used standalone for pattern matching, in addition to the typical usage in sub and block declarations. A standalone signature is declared starting with a colon: say "match!" if 5, "fish" ~~ :(Int, Str); # OUTPUT: «match!␤»my = :(Int , Str);say "match!" 
if (5, "fish") ~~ ; # OUTPUT: «match!␤»given "foo", 42 See the Signatures documentation for more about signatures. Declarations). Subroutine declaration # The signature is optionalsub foosub say-hello() You can also assign subroutines to variables. my = sub # Un-named submy = -> # Lambda style syntax. The & sigil indicates the variable holds a functionmy = -> # Functions can also be put into scalars Package, Module, Class, Role, and Grammar declaration There are several types of package, each declared with a keyword, a name, some optional traits, and a body of subroutines, methods, or rules. You can declare a unit package without explicit curly braces. This must be at the start of the file (preceded only by comments or use statements), and the rest of the file will be taken as being the body of the package. unit ;# ... stuff goes here instead of in {}'s Multi-dispatch declaration See also Multi-dispatch. Subroutines can be declared with multiple signatures. multi sub foo()multi sub foo() Inside of a class, you can also declare multi-dispatch methods. multi method greetmulti method greet(Str ) Subroutine calls Subroutines are created with the keyword sub followed by an optional name, an optional signature and a code block. Subroutines are lexically scoped, so if a name is specified at the declaration time, the same name can be used in the lexical scope to invoke the subroutine. A subroutine is an instance of type Sub and can be assigned to any container. foo; # Invoke the function foo with no argumentsfoo(); # Invoke the function foo with no arguments(); # Invoke &f, which contains a function.(); # Same as above, needed to make the following workmy = (, , );>>.(); # hyper method call operator When declared within a class, a subroutine is named "method": methods are subroutines invoked against an object (i.e., a class instance). Within a method the special variable self contains the object instance (see Methods). # Method invocation. Object (instance) is $person, method is set-name-age.set-name-age('jane', 98); # Most common way.set-name-age: 'jane', 98; # Precedence dropset-name-age(: 'jane', 98); # Invocant markerset-name-age : 'jane', 98; # Indirect invocation For more information see functions. Precedence drop In the case of method invocation (i.e., when invoking a subroutine against a class instance) it is possible to apply the precedence drop, identified by a colon : just after the method name and before the argument list. The argument list takes precedence over the method call, that on the other hand "drops" its precedence. In order to better understand consider the following simple example (extra spaces have been added just to align method calls): my = 'Foo Fighters';say .substr( 0, 3 ) .substr( 0, 1 ); # Fsay .substr: 0, 3 .substr( 0, 1 ); # Foo In the second method call the rightmost substr is applied to "3" and not to the result of the leftmost substr, which on the other hand yields precedence to the rightmost one. Operators See Operators for lots of details. Operators are functions with a more symbol heavy and composable syntax. Like other functions, operators can be multi-dispatch to allow for context-specific usage. There are five types (arrangements) for operators, each taking either one or two arguments. 
++ # prefix, operator comes before single input5 + 3 # infix, operator is between two inputs++ # postfix, operator is after single input<the blue sky> # circumfix, operator surrounds single input<bar> # postcircumfix, operator comes after first input and surrounds second Metaoperators Operators can be composed. A common example of this is combining an infix (binary) operator with assignment. You can combine assignment with any binary operator. += 5 # Adds 5 to $x, same as $x = $x + 5min= 3 # Sets $x to the smaller of $x and 3, same as $x = $x min 3.= child # Equivalent to $x = $x.child Wrap an infix operator in [ ] to create a new reduction operator that works on a single list of inputs, resulting in a single value. say [+] <1 2 3 4 5>; # OUTPUT: «15␤»(((1 + 2) + 3) + 4) + 5 # equivalent expanded version Wrap an infix operator in « » (or the ASCII equivalent ) to create a new hyper operator that works pairwise on two lists. say <1 2 3> «+» <4 5 6> # OUTPUT: «(5 7 9)␤» The direction of the arrows indicates what to do when the lists are not the same size. «+« # Result is the size of @b, elements from @a will be re-used»+» # Result is the size of @a, elements from @b will be re-used«+» # Result is the size of the biggest input, the smaller one is re-used»+« # Exception if @a and @b are different sizes You can also wrap a unary operator with a hyper operator. say -« <1 2 3> # OUTPUT: «(-1 -2 -3)␤»
https://docs.perl6.org/language/syntax
2018-10-15T17:06:42
CC-MAIN-2018-43
1539583509336.11
[]
docs.perl6.org
Reset New Visitor Days¶ This is a Setting that your System Admin can add to your database if you want to change the number of days someone can go between visits to the same class/organization before you consider them to be a new guest again. This applies to those who are not members of the organization. The default is 180 days. Purpose of this Setting¶ The reason we have this setting is totally related to ministry - ministering to a new guest vs. ministering to a return guest. If a person waits a long time between visits, we should probably employ the same strategy for him when he makes a return visit as we would for a First Time Guest. Therefore, we need to reset his Attend Type back to New Guest. TouchPoint handles that automatically. The first time someone attends a meeting for an organization and is not a member, he is assigned an Attend Type of New Guest. The next time he visits (and for other subsequent visits), the Attend Type will be Recent Guest. However, if a person does not visit that class/organization for more than 180 days, the next time he visits, he will be considered a First Time Guest again. In other words, the system will reset their Attend Type back to New Guest if they do not visit with the number of days specified. You can add the setting to your database and enter the number of days you want someone to go between visits before you change his Attend Type back to New Guest if you want the number of days to be something other than 180. See also Create the Setting¶ The system admin for your church must create this setting. - Step 1 - Go to Administration > Setup > Settings. - Step 2 - Click the green +Add Setting button and enter this text: ResetNewVisitorDays. - Step 3 - Enter the desired number of days. See also 1st Time Attenders Report This report will not be affected if you change the default by adding this setting. This setting is for visits to the same class/organization. So, a person will not appear on the 1st Time Attender report just because we change his Attend Type within an organization to New Guest. Guests will display on the 1st Time Attender Report only once. See also First, Second, or Third Time Attender Reports to read more about the 1st, 2nd, 3rd Time Attender Reports. More about the Attend Type for Guests¶ Changing the First Meeting Date on an organization will reset the entire list of guests for that org. We suggest that you do that for orgs involved in Promotion (that is, those orgs that are based on age/grade), in order to start fresh with guests for the new year. See also
http://docs.touchpointsoftware.com/Organizations/ResetNewVisitorDays.html
2018-10-15T17:09:41
CC-MAIN-2018-43
1539583509336.11
[]
docs.touchpointsoftware.com
This IDC Customer Case Study and Interview provides an overview of IDC’s discussion with Simon Close, the well-known IT consultant who was responsible for managing the SAN and overall IT infrastructure for British supermarket chain Morrison’s, Lloyds Banking Group and HBOS, to understand about the implementation and use of Virtual Instruments’ infrastructure performance management solutions and how they accelerated problem resolution times, prevented outages and cut costs from deploying the IPM platform. ‘IDC believes that infrastructure monitoring and analytics solutions that provide a unified, integrated view of workload requirements, cross-tier dependencies, and root cause of performance problems are important enablers of today’s hybrid IT architectures.’ Archana Venkatraman, Research Manager, Storage and Data Centre, IDC ‘VirtualWisdom has saved our lives across all three organizations on a number of occasions in identifying the root cause and identifying the bad citizen and has given us the lifeline to save the infrastructure.’ Simon Close, IT Consultant.
http://docs.virtualinstruments.com/idc-storage-newsletter/
2018-10-15T17:06:37
CC-MAIN-2018-43
1539583509336.11
[]
docs.virtualinstruments.com
Combines the items in an array into a single string using the argument as a separator.

Input

{% assign beatles = "John, Paul, George, Ringo" | split: ", " %}
{{ beatles | join: " and " }}

Output

John and Paul and George and Ringo

© 2005, 2006 Tobias Luetke. Licensed under the MIT License.
http://docs.w3cub.com/liquid/filters/join/
2018-10-15T16:38:32
CC-MAIN-2018-43
1539583509336.11
[]
docs.w3cub.com
You can determine whether your virtual disk is in thick or thin format. If you have thin provisioned disks, you can change them to thick by selecting Flat pre-initialized disk provisioning. You change thick provisioned disks to thin by selecting Allocate and commit space on demand. Procedure - Right-click a virtual machine in the inventory and select Edit Settings. - On the Virtual Hardware tab, expand Hard disk. The disk type is displayed in the Disk Provisioning field. - Click OK. What to do next If your virtual disk is in the thin format, you can inflate it to its full size using the vSphere Web Client.
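If you prefer to check disk provisioning from a script instead of the vSphere Web Client, a rough pyVmomi sketch such as the one below can report it. pyVmomi itself, the host name, the credentials, and the VM name are assumptions for illustration and are not part of this procedure.

# Minimal pyVmomi sketch: reports thin vs. thick provisioning for each virtual disk of one VM.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')  # placeholder credentials
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next((v for v in view.view if v.name == 'MyVM'), None)  # placeholder VM name
    if vm:
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                thin = getattr(dev.backing, 'thinProvisioned', None)  # None for backings without the flag
                print(dev.deviceInfo.label, 'thin' if thin else 'thick')
finally:
    Disconnect(si)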
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-1971C213-318B-4987-A449-2CD27FC2FC07.html
2018-10-15T18:04:40
CC-MAIN-2018-43
1539583509336.11
[]
docs.vmware.com
Columns gives you access to the properties of the columns in the tree view.

Member of Tree View (PRIM_TRVW)

Data Type - Collection of PRIM_TVCL - Defines the data class and behavior of a column in a tree view

The Columns property provides access to the collection of columns parented to the control. Columns cannot be added or removed from a tree view at runtime.

All Component Classes

Technical Reference

February 18 V14SP2
https://docs.lansa.com/14/en/lansa016/prim_trvw_columns.htm
2018-10-15T16:40:28
CC-MAIN-2018-43
1539583509336.11
[]
docs.lansa.com
Changes to OneDrive sync client deployment in Office Click-to-Run Starting in October 2017, we're changing how the previous OneDrive for Business sync client installs for enterprise customers using Office 365 ProPlus or Office 2016 with Click-to-Run. The previous OneDrive for Business sync client (Groove.exe) will no longer be installed by default with Office 2016 Click-to-Run. If your organization provides an Office deployment configuration file to Setup.exe, you'll need to update your file to exclude Groove from the install. When not in use or running, the previous OneDrive for Business sync client (Groove.exe) will be uninstalled, unless: (a) Groove is already configured to sync one or more SharePoint Online or SharePoint Server libraries or (b) a "PreventUninstall" registry key is present on the computer. These changes won't affect Office 365 customers who are already using the new OneDrive sync client (OneDrive.exe) to sync OneDrive and SharePoint Online files. Neither will these changes affect enterprises who have deployed Office with the traditional Windows Installer-based (MSI) method. (when your organization doesn't subscribe to an Office 365 Business plan). Which version of OneDrive am I using? Ensuring Groove.exe is no longer installed When these changes roll out, the previous OneDrive for Business sync client (Groove.exe) will no longer be installed by default when a user installs Office 2016 via Click-to-Run. If your organization provides an Office deployment configuration file to Setup.exe, you'll need to update your file to exclude Groove from the install. To exclude Groove in your deployment, add this to your config file: <Product ID="O365ProPlusRetail" > <Language ID="en-us" /> <ExcludeApp ID="Groove" /> </Product> For more information about configuration options, see Configuration options for the Office 2016 Deployment Tool. To override the default behavior and make sure the previous OneDrive for Business sync client installs and stays installed, you'll need to provide a config file that doesn't exclude Groove.exe. Also, you'll need to set the "PreventUninstall" registry key on all computers where you need Groove.exe installed, so that the process doesn't uninstall Groove.exe. Uninstalling Groove when not in use On Office upgrade, the installer runs on each computer to detect whether Groove.exe is currently in use or the "PreventUninstall" registry key is set. If either Groove.exe is in use or the registry key is set, Groove.exe is left in place. Otherwise, if Groove.exe isn't in use and the registry key isn't set, Groove.exe gets uninstalled automatically on that computer. Registry key to prevent uninstallation [HKLM\SOFTWARE\Microsoft\Office\Groove] "PreventUninstall"=dword:00000001 Who these changes affect and when These changes will affect organizations who have deployed the previous OneDrive for Business sync client to sync on-premises SharePoint libraries, or libraries that have Information Rights Management enabled on them. The following table shows more detail about which Office installations are affected by these changes and when. All these changes are Office client-level changes rolled out across clients, and are not turned on organization by organization. For more information about Office channels, see Overview of update channels for Office 365 ProPlus. 
Unless you need Groove.exe for some of your scenarios (for example, syncing on-premises SharePoint files), we strongly recommend leaving the new defaults in place and excluding Groove.exe from Office 2016 installations. Related Topics Learn more about the Sync button update on SharePoint sites
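A quick way to confirm on a given machine whether the "PreventUninstall" value from the registry key above is set is a few lines of Python with the standard winreg module. This is only an illustration; checking with reg.exe or PowerShell works just as well, and note that a 32-bit process on 64-bit Windows may see the redirected WOW6432Node view instead.

# Checks the PreventUninstall value under the key documented above.
import winreg

try:
    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Office\Groove")
    value, _ = winreg.QueryValueEx(key, "PreventUninstall")
    print("PreventUninstall =", value)   # 1 means Groove.exe will not be uninstalled
    winreg.CloseKey(key)
except FileNotFoundError:
    print("Key or value not set; Groove.exe may be uninstalled when not in use")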
https://docs.microsoft.com/en-us/onedrive/exclude-or-uninstall-previous-sync-client?redirectSourcePath=%252fzh-hk%252farticle%252f%2525E8%2525AE%25258A%2525E6%25259B%2525B4-office-%2525E9%25259A%2525A8%2525E4%2525B8%2525AD%2525E7%25259A%252584-onedrive-%2525E5%252590%25258C%2525E6%2525AD%2525A5%2525E8%252599%252595%2525E7%252590%252586%2525E7%252594%2525A8%2525E6%252588%2525B6%2525E7%2525AB%2525AF%2525E9%252583%2525A8%2525E7%2525BD%2525B2-3eff17b9-c709-462f-946c-17719af68aca
2018-10-15T18:03:19
CC-MAIN-2018-43
1539583509336.11
[]
docs.microsoft.com
How to: Target a Version of the .NET Framework Note This article applies to Visual Studio 2015. If you're looking for Visual Studio 2017 documentation, use the version selector at the top left. We recommend upgrading to Visual Studio 2017. Download it here. This document describes how to target a version of the .NET Framework when you create a project and how to change the targeted version in an existing Visual Basic, Visual C#, or Visual F# project. Important For information about how to change the target version for C++ projects, see How to: Modify the Target Framework and Platform Toolset. In this topic Targeting a version when you create a project When you create a project, the version of the .NET Framework that you target determines which templates you can use. Note In Express editions of Visual Studio, you must create the project first, and then you can change the target, as Changing the target version describes later in this topic. To target a version when you create a project On the menu bar, choose File, New, Project. In the list at the top of the New Project dialog box, choose the version of the .NET Framework that you want your project to target. Note Typically, only one version of the .NET Framework is installed with Visual Studio. If you want to target another version, you must first make sure that it's installed. See Visual Studio Multi-Targeting Overview. In the list of installed templates, choose the type of project that you want to create, name the project, and then choose the OK button. The list of templates shows only those projects that are supported by the version of the .NET Framework that you chose. Changing the target version You can change the targeted version of the .NET Framework in a Visual Basic, Visual C#, or Visual F# project by following this procedure. To change the targeted version In Solution Explorer, open the shortcut menu for the project that you want to change, and then choose Properties. Important For information about how to change the target version for C++ projects, see How to: Modify the Target Framework and Platform Toolset. In the left column of the properties window, choose the Application tab. Note After you create a Windows Store app, you can't change the targeted version of either Windows or the .NET Framework. In the Target Framework list, choose the version that you want. In the verification dialog box that appears, choose the Yes button. The project unloads. When it reloads, it targets the .NET Framework version that you just chose. Note If your code contains references to a different version of the .NET Framework than the one that you targeted, error messages may appear when you compile or run the code. To resolve these errors, you must modify the references. See Troubleshooting .NET Framework Targeting Errors. See Also Visual Studio Multi-Targeting Overview .NET Framework Multi-Targeting for ASP.NET Web Projects Troubleshooting .NET Framework Targeting Errors Application Page, Project Designer (C#) Application Page, Project Designer (Visual Basic) Configuring Projects How to: Modify the Target Framework and Platform Toolset
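Behind the dialog, the chosen version ends up in the project file. If you just want to see, or script a check of, what an old-style C# or Visual Basic project targets without opening Visual Studio, a sketch like the following works; the TargetFrameworkVersion element and the MSBuild 2003 namespace are standard for classic project files, but treat the file name and the rest of the snippet as illustrative.

# Reads the TargetFrameworkVersion element from a classic .csproj/.vbproj file.
import xml.etree.ElementTree as ET

ns = {"msb": "http://schemas.microsoft.com/developer/msbuild/2003"}
tree = ET.parse("MyApp.csproj")            # placeholder project file name
elem = tree.find(".//msb:TargetFrameworkVersion", ns)
print(elem.text if elem is not None else "No TargetFrameworkVersion element found")  # e.g. 'v4.5.2'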
https://docs.microsoft.com/en-us/visualstudio/ide/how-to-target-a-version-of-the-dotnet-framework?view=vs-2015
2018-10-15T17:52:11
CC-MAIN-2018-43
1539583509336.11
[]
docs.microsoft.com
Chapter 24: Traditional Digital Animation Traditional digital animation, also known as hand-drawn animation, is the process of drawing every frame of an animation. This is very different from using cut-out puppets, where body parts drawn at different angles are stored in a bank and swapped. These drawings do not use motion paths; everything is hand drawn. With the advent of digital animation, much has changed in the way that animation is done. However, there are still tried and true animation tools and techniques that every animator may still find useful. Most of these techniques are derived from the classic method of traditional animation. The following sections will be covered in this chapter:
https://docs.toonboom.com/help/toon-boom-studio-81/Content/TBS/User_Guide/008_Animation/000_CT_Tradi_Animation.html
2018-10-15T16:56:16
CC-MAIN-2018-43
1539583509336.11
[array(['../../../Resources/Images/TBS/User_Guide/handDrawn.jpg', None], dtype=object) ]
docs.toonboom.com
imp.release_lock()
Release the interpreter's import lock. On platforms without threads, this function does nothing.

The following functions are conveniences for handling PEP 3147 byte-compiled file paths. New in version 3.2.

imp.cache_from_source(path, debug_override=None)
Return the PEP 3147 path to the byte-compiled file associated with the source path. The returned path will end in .pyc when __debug__ is True or .pyo for an optimized Python (i.e. __debug__ is False). By passing in True or False for debug_override you can override the system's value for __debug__ for extension selection. path need not exist.

imp.get_tag()
Return the PEP 3147 magic tag string matching this version of Python's magic number, as returned by get_magic().
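A short usage sketch of the byte-compiled path helpers described above (Python 3.2); the values shown in the comments are representative only and depend on your interpreter.

import imp

print(imp.get_magic())                                          # 4-byte magic number for this interpreter
print(imp.get_tag())                                            # magic tag, e.g. 'cpython-32'
print(imp.cache_from_source('spam.py'))                         # e.g. '__pycache__/spam.cpython-32.pyc'
print(imp.cache_from_source('spam.py', debug_override=False))   # forces the optimized '.pyo' variant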
http://docs.python.org/release/3.2.2/library/imp.html
2013-05-18T11:11:46
CC-MAIN-2013-20
1368696382360
[]
docs.python.org
No Directory groups to display. Please check the configuration. Description This message appears when the BlackBerry Directory Sync Tool cannot connect to Microsoft Active Directory using the information that you specified. Possible solution Perform any of the following actions: - Verify that the directory settings that you specified are correct. - Verify that the Search Path DN that you specified is a valid path. From left to right, the path should specify the general organizational units (OU) to the specific domain components (DC) (for example, OU=Groups,DC=sample,DC=net). - If you selected Automatic in the Server Discovery drop-down list, verify that the Windows account that you are currently using has read permissions for Microsoft Active Directory.
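If you want to sanity-check the Search Path DN outside the BlackBerry Directory Sync Tool, a quick LDAP query against the same base DN can help. The sketch below uses the Python ldap3 package, which is not part of the BlackBerry tooling; the host, account, and DN values are placeholders to replace with your own.

# Hypothetical values throughout; substitute your domain controller, account and search path DN.
from ldap3 import Server, Connection, SUBTREE

server = Server('dc01.sample.net')
conn = Connection(server, user='SAMPLE\\svc-besadmin', password='secret', auto_bind=True)
ok = conn.search(search_base='OU=Groups,DC=sample,DC=net',
                 search_filter='(objectClass=group)',
                 search_scope=SUBTREE,
                 attributes=['cn'])
print(ok, len(conn.entries))  # False or zero entries suggests a bad DN or missing read permissions
conn.unbind()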
http://docs.blackberry.com/en/admin/deliverables/49277/1565566.jsp
2013-05-18T11:33:37
CC-MAIN-2013-20
1368696382360
[]
docs.blackberry.com
For the next version of Gant we want to introduce project tool support a la Maven. Here you can find a discussion in the groovy user mailing list about this topic. Basically we want to implement three abstractions:
- Build lifecycle
- Multiproject builds
- Dependency handling
I have submitted a framework (see attached Gant04.zip) for lifecycle handling based on an idea Jochen has brought up. The framework is good enough to get an idea and to discuss it; it is by no means complete. There is no support for multiproject builds yet and only the simplest support for dependency handling. There is no integration with the current Gant yet.
Lifecycle
(My) Requirements for the lifecycle handling:
- Easy customization of the lifecycle (which includes an easy reuse of an existing implementation of a phase).
- Support of multiple lifecycles - even for the standard cases multiple lifecycles make sense (Jar Lifecycle, War lifecycle, EAR lifecycle, ...)
- Reuse of a customized lifecycle (in particular helpful for multiproject builds)
- Customization should also be possible by removing phases or creating new ones. Of course you can squeeze in any custom behavior by customizing existing phases, but this might not be expressive.
We have been discussing two alternative approaches for lifecycle handling. One is based on Gant targets, where the lifecycle is established by the dependencies of the targets. The other is to have a lifecycle class with methods implementing a phase of the lifecycle. I have implemented the latter approach.
Examples of build scripts
This is enough to compile, test and create a jar of our test project. This is one way of customizing the lifecycle. It is a non-reusable customization applied directly in the build script. If there is a closure named like a lifecycle phase, this closure gets called instead of the lifecycle method. The other way of using a non-standard lifecycle is: creating a new lifecycle from scratch, inheriting from an existing lifecycle, etc.
One implementation detail is how to define an order for the methods of a lifecycle class. One possibility would be that the lifecycle method calls its predecessor directly. This has two disadvantages:
- You can't customize an existing lifecycle class by adding a new phase.
- You can't call a lifecycle method in isolation. I have no use case yet where this is a problem, but I'm pretty sure they will come up.
My approach is to use a list which defines the lifecycle phases. Each name in the list corresponds to a lifecycle method name. If the build is started via gant compile, for example, all lifecycle methods are called that correspond to entries in this list up to the compile phase.
Using the framework
Right now the framework is not capable of being used from the command line. The only users right now are the integration tests, which call the main method of the Gant04 class and pass it the basedir and the buildscript path. It might be a good idea to have a first look at /Gant04/src/test/integTests/org/codehaus/groovy/graven/BuildTest.groovy to best understand the framework.
Conclusions
The framework is a prototype. There are many things that can be improved. The main question is whether we want to implement the lifecycle handling in such a manner or not. I personally think this is a good approach, and I think it complements what Gant offers right now. In a build there are always lifecycle-related actions and non-lifecycle-related actions. For the latter the current Gant approach is a good way to implement them.
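The list-driven dispatch described above (running every lifecycle method up to and including the requested phase, with same-named closures overriding a phase) is easy to illustrate. The following Python sketch is only an illustration of that idea, not code from the attached Gant04 prototype.

# Illustration of list-driven lifecycle dispatch: the phase order lives in a list,
# and invoking a phase runs every method up to and including it.
class JarLifecycle:
    phases = ['init', 'compile', 'test', 'jar']

    def init(self):    print('preparing directories')
    def compile(self): print('compiling sources')
    def test(self):    print('running tests')
    def jar(self):     print('packaging jar')

    def run(self, target, overrides=None):
        overrides = overrides or {}          # callables named like a phase replace that phase
        for phase in self.phases[: self.phases.index(target) + 1]:
            overrides.get(phase, getattr(self, phase))()

JarLifecycle().run('test')                                             # runs init, compile, test
JarLifecycle().run('jar', {'test': lambda: print('skipping tests')})   # customized phase, then jar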
http://docs.codehaus.org/pages/diffpages.action?pageId=24248331&originalId=228172022
2013-05-18T11:14:17
CC-MAIN-2013-20
1368696382360
[]
docs.codehaus.org
California Public Utilities Commission
505 Van Ness Ave., San Francisco
FOR IMMEDIATE RELEASE
PRESS RELEASE
Media Contact: Terrie Prosper, 415.703.1366, [email protected]
CPUC ANNOUNCES AVAILABILITY OF REBATES FOR MULTI-FAMILY AND COMMERCIAL SOLAR WATER HEATING SYSTEMS
SAN FRANCISCO, Oct. 11, 2010 - The California Public Utilities Commission (CPUC) today announced that cash rebates of up to $500,000 are available for the installation of solar water heating systems on multi-family residences, businesses, and industrial facilities. On October 8, 2010, the California Solar Initiative (CSI)-Thermal Program began accepting applicants from the multi-family/commercial sector. The CSI-Thermal Program began accepting applications from single-family residents on May 1, 2010, but the program was until now unavailable to the multi-family/commercial sector while the CPUC and Program Administrators worked on technical implementation details. Incentives of up to $500,000 are now available to multi-family/commercial applicants who meet the application criteria, including completion of a one-day training course and installation of equipment that has been certified by the Solar Rating and Certification Corporation. Applicants may apply for rebates online.
###
http://docs.cpuc.ca.gov/PUBLISHED/NEWS_RELEASE/124578.htm
2013-05-18T11:12:34
CC-MAIN-2013-20
1368696382360
[]
docs.cpuc.ca.gov
Deploy and install a SharePoint-hosted SharePoint Add-in This is the second in a series of articles about the basics of developing SharePoint-hosted SharePoint Add-ins. You should first be familiar with the topic SharePoint Add-ins and the overview article in this series: Note If you have been working through this series about SharePoint-hosted add-ins, you have a Visual Studio solution that you can use to continue with this topic. You can also download the repository at SharePoint_SP-hosted_Add-Ins_Tutorials and open the BeforeColumns.sln file. You'll find it a lot easier to develop SharePoint-hosted SharePoint Add-ins if you are familiar with how users deploy and install your add-ins. So, in this article, we'll take a brief break from coding to create and use an add-in catalog, and then install the add-in you've been working on. Create an add-in catalog Sign in to your Office 365 subscription as an administrator. Select the add-in launcher icon, and then select the Admin tile. Figure 1. Office 365 add-in launcher In the Admin Center, expand the Admin centers node in the task pane, and then select SharePoint. In the SharePoint Admin Center, select apps in the task pane. On the apps page, select App Catalog. (If there is already an add-in catalog site collection in the subscription, it opens and you are finished. You cannot create more than one add-in catalog in a subscription.) On the App Catalog Site page, select OK to accept the default option and create a new app catalog site. In the Create App Catalog Site Collection dialog, specify the title and website address of your app catalog site. We recommend that you include "catalog" in the title and URL to make it memorable and distinguishable in the SharePoint Admin Center. Specify a Time Zone and set yourself as the Administrator. Set the Storage Quota to the lowest possible value (currently 110, but that can change), because the packages you upload to this site collection are very small. Set the Server Resource Quota to 0 (zero), and then select OK. (The server resource quota is related to throttling poorly performing sandboxed solutions, but you won't be installing any sandboxed solutions on your add-in catalog site.) As the site collection is being created, SharePoint takes you back to the SharePoint Admin Center. After a few minutes, you'll see that the collection has been created. Package the add-in and upload it to the catalog Open the Visual Studio solution, right-click the project node in Solution Explorer, and then select Publish. In the Publish pane, select Package the add-in. The add-in is packaged and saved as an *.appfile in the solution's \bin\debug\web.publish\1.0.0.0 folder. Open your add-in catalog site in a browser, and then select SharePoint Add-ins in the navigation bar. The SharePoint Add-ins catalog is a standard SharePoint asset library. Upload the add-in package to it using any of the methods of uploading files to SharePoint libraries.. The Site Contents page automatically opens and the add-in appears with a notation that it is installing. After it installs, users can select the tile to run the add-in. Remove the add-in To continue enhancing the same SharePoint Add-in in Visual Studio (see Next steps), remove the add-in with these steps: In the Site Contents page, move the cursor over the add-in so that the callout button ... appears. Select the callout button, and then select REMOVE on the callout. Navigate back to your add-in catalog site and select SharePoint Add-ins in the navigation bar. 
Highlight the add-in and select manage on the task bar just above the list, and then select Delete on the manage menu. Next steps We strongly recommend that you continue with this series about SharePoint-hosted add-ins before you go on to the more advanced topics. Next, we get back to coding in Add custom columns to a SharePoint-hosted SharePoint Add-in.
https://docs.microsoft.com/en-us/sharepoint/dev/sp-add-ins/deploy-and-install-a-sharepoint-hosted-sharepoint-add-in
2019-04-18T19:16:02
CC-MAIN-2019-18
1555578526228.27
[]
docs.microsoft.com
DeregisterInstance
Deletes the Amazon Route 53 DNS records and health check, if any, that AWS Cloud Map created for the specified instance.
Example
DeregisterInstance Example
Sample Request
POST / HTTP/1.1
host:servicediscovery.us-west-2.amazonaws.com
x-amz-date:20181118T211816Z
authorization: AWS4-HMAC-SHA256 Credential=AKIAIIO2CIV3EXAMPLE/20181118/us-west-2/servicediscovery/aws4_request, SignedHeaders=content-length;content-type;host;user-agent;x-amz-date;x-amz-target, Signature=[calculated-signature]
x-amz-target:Route53AutoNaming_v20170314.DeregisterInstance
Sample Response
{
   "OperationId":"httpvoqozuhfet5kzxoxg-a-response-example"
}
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
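For callers using one of the AWS SDKs instead of hand-signed HTTP requests, the same call is a single method invocation. The sketch below uses Python and boto3; the service ID and instance ID are placeholders.

import boto3

# Placeholder IDs; use the ServiceId and InstanceId you registered the instance with.
client = boto3.client('servicediscovery', region_name='us-west-2')
response = client.deregister_instance(ServiceId='srv-example', InstanceId='i-example')
print(response['OperationId'])  # poll GetOperation with this ID to confirm the deregistration completed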
https://docs.aws.amazon.com/cloud-map/latest/api/API_DeregisterInstance.html
2019-04-18T19:03:52
CC-MAIN-2019-18
1555578526228.27
[]
docs.aws.amazon.com
WAN Multi-Master Replication Gateway Overview This WAN Gateway example includes PU folders with config files for a Multi-Master topology that includes 3 sites: DE , RU , US. Each site have an independent cluster and a Gateway. You will find folders for the following PUs: - wan-gateway-DE - Deployed into DE zone, using the DE lookup group and a lookup service listening on port 4366. - wan-gateway-RU - Deployed into RU zone, using the RU lookup group and a lookup service listening on port 4166. - wan-gateway-US - Deployed into US zone, using the US lookup group and a lookup service listening on port 4266. - wan-space-DE - Deployed into DE zone, using the DE lookup group and a lookup service listening on port 4366. - wan-space-RU - Deployed into RU zone, using the RU lookup group and a lookup service listening on port 4166. - wan-space-US - Deployed into US zone, using the US lookup group and a lookup service listening on port 4266. The internal architecture of the setup includes a clustered space and a Gateway, where each Gateway includes a Delegator and a Sink: Installing the Example Step 1. Download the WAN_Gateway_example.zip. It includes two folders: deploy and scripts. View on GitHub Step 2. Please extract the file and and copy the content of the deploy folder into \gigaspaces-xap-premium-14.0-ga\deploy folder. It should looks like this: Directory of D:\gigaspaces-xap-premium-14.0-ga\deploy 09/11/2011 04:41 AM <DIR> . 09/11/2011 04:41 AM <DIR> .. 07/05/2011 03:08 PM <DIR> templates 09/11/2011 04:44 AM <DIR> wan-gateway-DE 09/11/2011 04:44 AM <DIR> wan-gateway-RU 09/11/2011 04:43 AM <DIR> wan-gateway-US 09/11/2011 04:43 AM <DIR> wan-space-DE 09/11/2011 05:15 AM <DIR> wan-space-RU 09/11/2011 04:42 AM <DIR> wan-space-US Step 3. Please move into the scripts folder and edit the setExampleEnv.bat/sh to include correct values for NIC_ADDR as the machine IP and GS_HOME to have GigaSpaces root folder location. Running the Example You will find within the scripts folder running scripts to start Grid Service Agent for each site and a deploy script for all sites. This will allow you to run the entire setup on one machine to test. Here are the steps to run the example: - Run startAgent-DE.bat/shor to start DE site. - Run startAgent-RU.bat/shto start RU site. - Run startAgent-US.bat/shto start US site. - Run deployAll.bat/shfile to deploy all the PUs listed above. Viewing the Clusters - Start the \gigaspaces-xap-premium-14.0-ga\bin\GS-UI.bat/sh. - Once you deployed make sure you enable the relevant groups within the GS-UI: You should check all Groups: You should see this: Once deployed successfully you should see this: Testing the WAN Gateway Replication You can test the setup by using the benchmark utility comes with the GS-UI. Move the one of the Clusters Benchmark icon and click the Start Button: You will see all spaces Object Count across all clusters by clicking the Spaces icon on the Space Browser Tab. 
You should see identical number of objects (5000) for all members: You can remove objects from each space cluster by selecting the Take operation and click Start: You will see the Object Count changing having zero object count for each space: Replication Throughput Capacity The total TP a gateway can push out into remote sites depends on: - Network speed - Partition count - Partition activity Distribution - Partition TP - Replication Frequency - Replication packet size - Network bandwidth - Replication Meta data size The total TP will be: Total TP = (Partition TP X Partitions count X Distribution X Network Speed)+ Replication Meta data size / Replication Frequency If we have 10 IMDG partitions, each sending 5000 objects/sec 1K size to the GW with a replication frequency of 10 replication cycles per/sec (100 ms delay between each replication cycle , i.e. 1000 operations per batch) with even distribution (1) and network speed between the sites is 10 requests/sec (i.e. 100 ms latency) the Total TP we will have is: (10 X 5000 X 1 X 10) / 10 = 50,000 objects per second. = 50M per second The above assumes the network bandwidth is larger than 50M. WAN Gateway Replication Benchmark With the following benchmark we have 2 sites; one located in the US East coast EC2 region and another one located within EC2 EU Ireland region. The latency between the sites is 95 ms and the maximum bandwidth measured is 12MByte/sec. A client application located within the US East coast EC2 region running multiple threads perform continuous write operation to a clustered space in the same region. The space cluster within the US East coast EC2 region has a WAN Gateway configured , replicating data to a clustered space running within the EC2 EU Ireland region via a Gateway running within the same region. Blue line - The amount of data generated at the source site (EC2 EU East coast region) by the client application. Green line - The amount of consumed bandwidth is measured at the target site (EC2 EU Ireland region). Red line - The network bandwidth. Up to 16 client threads at the client application, the utilized bandwidth at the target site is increasing. Once the maximum bandwidth has been consumed, no matter how many client threads will be writing data to the source space, the target site bandwidth consumption will stay the same. We do see some difference between the amount of data generated and replicated at the source site and the amount of bandwidth consumed at the target site. This difference caused due-to the overhead associated with the replicated data over the WAN and the network latency. For each replicated packet some meta data is added. It includes info about the order of the packet, its source location, etc.
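The worked example in the Replication Throughput Capacity section above is plain arithmetic, so it is easy to re-run with your own numbers. The short script below mirrors that calculation: it divides the product by the replication frequency as the worked example does, and it ignores the replication metadata overhead mentioned in the text.

# Mirrors the worked example: 10 partitions x 5000 obj/sec x even distribution x 10 network req/sec,
# spread over 10 replication cycles per second, with 1 KB objects.
partitions = 10
partition_tp = 5000        # objects/sec per partition
distribution = 1           # even activity distribution
network_speed = 10         # requests/sec (about 100 ms latency)
replication_freq = 10      # replication cycles/sec
object_size_kb = 1

total_objects_per_sec = (partition_tp * partitions * distribution * network_speed) / replication_freq
print(total_objects_per_sec)                           # 50000.0 objects/sec
print(total_objects_per_sec * object_size_kb / 1024)   # ~48.8 MB/sec, rounded to ~50 MB/sec in the text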
https://docs.gigaspaces.com/sbp/wan-replication-gateway.html
2019-04-18T19:06:29
CC-MAIN-2019-18
1555578526228.27
[array(['/attachment_files/sbp/wan_example1.jpg', 'wan_example1.jpg'], dtype=object) array(['/attachment_files/sbp/wan_example3.jpg', 'wan_example3.jpg'], dtype=object) array(['/attachment_files/sbp/wan_example4.jpg', 'wan_example4.jpg'], dtype=object) array(['/attachment_files/sbp/wan_bench1.jpg', 'wan_bench1.jpg'], dtype=object) array(['/attachment_files/sbp/wan_bench2.jpg', 'wan_bench2.jpg'], dtype=object) ]
docs.gigaspaces.com
Important: #85560 - Location of XLF labels changed¶ See Issue #85560 Description¶ Downloaded files for XLF language files are usually stored within typo3conf/l10n. When the environment variable TYPO3_PATH_ROOT is set, which is common for all composer-based installations, the XLF language files are now found outside the document root, available under var/labels/. The Environment API Environment::getLabelsPath() resolves the correct full location path prefix.
https://docs.typo3.org/typo3cms/extensions/core/Changelog/9.5/Important-85560-LocationOfXLFLabelsChanged.html
2019-04-18T19:34:50
CC-MAIN-2019-18
1555578526228.27
[]
docs.typo3.org
v1 In This Article API Reference The following make up the primary API of Stratigility. Middleware Zend\Stratigility\MiddlewarePipe is the primary application interface, and has been discussed previously. Its API is: namespace Zend\Stratigility; use Interop\Http\Middleware\DelegateInterface; use Interop\Http\Middleware\MiddlewareInterface as InteropMiddlewareInterface; use Interop\Http\Middleware\ServerMiddlewareInterface; use Psr\Http\Message\ResponseInterface; use Psr\Http\Message\ServerRequestInterface; class MiddlewarePipe implements MiddlewareInterface, ServerMiddlewareInterface { public function pipe( string|callable|InteropMiddlewareInterface|ServerRequestInterface $path, callable|InteropMiddlewareInterface|ServerRequestInterface $middleware = null ); public function __invoke( ServerRequestInterface, ResponseInterface $response, callable $out = null ) : executing the MiddlewarePipe via its __invoke() method, if $out is not provided, an instance of Zend\Stratigility\FinalHandler will be created and used in the event that the pipe stack is exhausted ( MiddlewarePipe passes the $response instance it receives to FinalHandler as well, so that the latter can determine if the response it receives is new). $out is no longer optional Starting in version 1.3.0, we now raise a deprecation notice if no argument is passed for the $outargument to __invoke(); starting in version 2.0.0, the argument will be required. Always pass a Nextinstance, a Zend\Stratigility\NoopFinalHandlerinstance, or a custom callback; we no longer recommend the FinalHandlerimplementation. When using __invoke(), the callable $out argument should use the signature: use Psr\Http\Message\ResponseInterface; use Psr\Http\Message\ServerRequestInterface; function ( ServerRequestInterface $request, ResponseInterface $response ) : ResponseInterface Within Stratigility, Zend\Stratigility\Next provides such an implementation. Starting in version 1.3.0, MiddlewarePipe also implements the http-interop ServerMiddlewareInterface, and thus provides a process() method. This method requires a ServerRequestInterface instance and an Starting in version 1.3.0, you can. Additionally, if an error condition has occurred, you may pass an optional third argument, $err, representing the error condition. class Next { public function __invoke( Psr\Http\Message\ServerRequestInterface $request, Psr\Http\Message\ResponseInterface $response ) : Psr\Http\Message\ResponseInterface; } You should always either capture or return the return value of $next() when calling it in your application. The expected return value is a response instance, but if it is not, you may want to return the response provided to you. $err argument Technically, Next::__invoke()accepts a third, optional argument, $err. However, as of version 1.3.0, this argument is deprecated, and usage will raise a deprecation notice during runtime. We will be removing the argument entirely starting with version 2.0.0. - Since 1.3.0.. use Interop\Http\Middleware\DelegateInterface; use Zend\Diactoros\Response; ', ]); return $response; } - Deprecated as of 1.3.0; please use exceptions and a error handling middleware such as the ErrorHandler to handle error conditions in your application instead. To raise an error condition, pass a non-null value as the third argument to function ($request, $response, $next) { try { // try some operation... 
} catch (Exception $e) { return $next($request, $response, $e); // Next registered error middleware will be invoked } } FinalHandler - Deprecated starting with 1.3.0. Use Zend\Stratigility\NoopFinalHandleror a custom handler guaranteed to return a response instead. Zend\Stratigility\FinalHandler is a default implementation of middleware to execute when the stack exhausts itself. It expects three arguments when invoked: a request instance, a response instance, and an error condition (or null for no error). It returns a response. FinalHandler allows two optional arguments during instantiation $options, an array of options with which to configure itself. These options currently include: env, the application environment. If set to "production", no stack traces will be provided. onerror, a callable to execute if an error is passed when FinalHandleris invoked. The callable is invoked with the error (which will be nullin the absence of an error), the request, and the response, in that order. Psr\Http\Message\ResponseInterface $response; if passed, it will compare the response passed during invocation against this instance; if they are different, it will return the response from the invocation, as this indicates that one or more middleware provided a new response instance. Internally, FinalHandler does the following on invocation: - If $erroris non- null, it creates an error response from the response provided at invocation, ensuring a 400 or 500 series response is returned. - If the response at invocation matches the response provided at instantiation, it returns it without further changes. This is an indication that some middleware at some point in the execution chain called $next()with a new response instance. - If the response at invocation does not match the response provided at instantiation, or if no response was provided at instantiation, it creates a 404 response, as the assumption is that no middleware was capable of handling the request. HTTP Messages Zend\Stratigility\Http\Request - Deprecated in 1.3.0; to be removed in 2.0.0. Zend\Stratigility\Http\Request acts as a decorator for a Psr\Http\Message\ServerRequestInterface instance. The primary reason is to allow composing middleware such that you always have access to the original request instance. As an example, consider the following: $app1 = new Middleware(); $app1->pipe('/foo', $fooCallback); $app2 = new Middleware(); $app2->pipe('/root', $app1); $server = Server::createServer($app2 /* ... */); In the above, if the URI of the original incoming request is /root/foo, what $fooCallback will receive is a URI with a past consisting of only /foo. This practice ensures that middleware can be nested safely and resolve regardless of the nesting level. If you want access to the full URI — for instance, to construct a fully qualified URI to your current middleware — Zend\Stratigility\Http\Request contains a method, getOriginalRequest(), which will always return the original request provided to the application: function ($request, $response, $next) { $location = $request->getOriginalRequest()->getUri()->getPath() . '/[:id]'; $response = $response->setHeader('Location', $location); $response = $response->setStatus(302); return $response; } Zend\Stratigility\Http\Response - Deprecated in 1.3.0; to be removed in 2.0.0. 
Zend\Stratigility\Http\Response acts as a decorator for a Psr\Http\Message\ResponseInterface instance, and also implements Zend\Stratigility\Http\ResponseInterface, which provides the following convenience methods: write(), which proxies to the write()method of the composed response stream. end(), which marks the response as complete; it can take an optional argument, which, when provided, will be passed to the write()method. Once end()has been called, the response is immutable and will throw an exception if a state mutating method like withHeaderis called. isComplete()indicates whether or not end()has been called. Additionally, it provides access to the original response created by the server via the method getOriginalResponse(). v2 migration chapter for more details. Middleware Decorators Starting in version 1.3.0, we offer the ability to work with http-interop middleware. Internally, if a response prototype is composed in the MiddlewarePipe, callable middleware piped to the MiddlewarePipe will be wrapped in one of these decorators. Interop\Http\Middleware\DelegateInterface implementation, Zend\Stratigility\Delegate\CallableDelegateDecorator. This class can be used to wrap a callable $next instance for use in passing to a ServerMiddlewareInterface::process() method as a delegate; the primary use case is adapting functor middleware to work as http-interop middleware. As an example: use Interop\Http\Middleware\DelegateInterface; use Interop\Http\Middleware!
https://docs.zendframework.com/zend-stratigility/v1/api/
2019-04-18T18:43:02
CC-MAIN-2019-18
1555578526228.27
[]
docs.zendframework.com
Contents Sub-campaigns Once you've configured a campaign, you can run a sub-campaign from it. When you run a sub-campaign, its settings will default to the settings you defined when you set up the campaign. You can, however, edit settings as you set up the sub-campaign. Create and send an outbound sub-campaign Before you create and send a new sub-campaign, you must: - Define an agent group. In some applications, agent groups are pre-defined, based on a set of rules pertinent to the outreach. - Create the campaign under which you want to run the sub-campaign. - Import the contact list(s) you need for the sub-campaign. - Define a script. To create and send an outbound sub-campaign: - On the Campaigns tabbed page, click New Outbound from the Actions drop down next to the name of the campaign under which you want to create the new sub-campaign, or click New -> Outbound Sub-campaign from the menu bar. The New Outbound Sub-campaign page appears, populated with options from the campaign's default strategy. - Select a contact list from the Contact List dropdown. - Give the sub-campaign a name. If you do not enter a name for the sub-campaign, the system uses the contact list name. - Leave the default campaign settings or edit as needed. Keep clicking Next to view or edit the settings or click Send Sub-campaign. The sub-campaign will now appear under the Campaign on the main Campaigns page. Refer to the Campaigns page if you need help configuring campaign settings. Sub-campaign Actions in the Actions menu associated with each sub-campaign, you can choose from the following actions: - Resume - Resumes a paused sub-campaign. - Pause - Pauses a sub-campaign, which can later be resumed. - Stop - Stops a sub-campaign, which cannot be resumed. - Manage - Enables you to view and manage key features related to your sub-campaign. See Manage a Sub-campaign below. - Settings - Displays all sub-campaign settings that were configured when the campaign and/or sub-campaign were created. Manage a sub-campaign Many of the sub-campaign settings can be adjusted on the Manage Sub-campaign page. From the Campaigns list, select the sub-campaign to manage and select Manage from the Actions menu. The Manage Sub-campaign/Passes page shows a snapshot of the sub-campaign. Beneath it, there are four tabs, where you can view and edit important sub-campaign information: Profile The Profile page presents a breakdown of the sub-campaign across all the time zones in the contact list. As an example, calling in the Eastern U.S. time zone may show the status as Done while calling in the Central U.S. time zone shows a status of Pending. This is likely because the start time for the Central time zone has not yet been reached, in which case the sub-campaign status is "Paused" or "Pending" because no attempts are currently being made. In the upper half of the Profile page, a pie chart represents contact results by time zone (each time zone is shown in a different shade of blue). If any contact attempts have been made, you can toggle between the Attempt Projection chart and the Attempt History chart. Attempts are broken down by the following: - This Hour - The number of attempts expected to be made before the next hour starts. For example, if it is 9:43 AM, this is the number of attempts the system can make before 10:00 AM. - Next Hour - The number of attempts expected to be made in the next hour. For example, if it is 9:43 AM, this is the number of attempts the system can make between 10:00 AM and 11:00 AM. 
When 10:00 AM arrives, the value in this column is moved into the "this hour" column. - Future - The number of attempts remaining to be made for the current pass after the next hour. For example, if it is 9:43 AM, this is the number of attempts that will probably be made after 11:00 AM. - Attempt Projection - This chart displays the number of attempts that will be made in each time zone. The chart appears only when an outbound sub-campaign is in a “Running” state. If the sub-campaign is in a “Pending/Paused” state, the chart does not appear and “No Data” is shown in its place. - Attempt History - This chart shows all of the contact attempts made during each hour the sub-campaign was running where there was activity. The contacts are categorized by Not Delivered and Delivered. If no contact attempts have been made, the chart does not appear and “No Data” appears in its place. - Attempt Results - This chart shows the contact results for the sub-campaign. The Attempt Results chart appears only for an outbound sub-campaign that has made contact attempts. If no contact attempts have been made, the chart does not appear and “No Data” is shown in its place. - All successful statuses are shown in a shade of green. All unsuccessful statuses are shown in a shade of blue. IMPORTANT! For large sub-campaigns, some contact attempts may be shown in the Next Hour and Future columns even though they are in a time zone currently being called. This is because the system analyzes the call rate and determines how many attempts can be made during the current hour. If there are 60,000 attempts to be made in the current time zone, and the system can only make 45,000 of those attempts in the current hour, 15,000 of them will be moved into the Next Hour time frame. The time configured between retry attempts can also affect these values. Due to the time configured between retry attempts, retries for the current time zone will sometimes be included in the Next Hour value and possibly the Future value. In the bottom half of the Profile screen, contact results are displayed in a table, as follows: - Time zone - shows attempts by each time zone within the sub-campaign. - Pass - shows the type of pass for the sub-campaign. - Start / End - The start time and end time of the pass. - Status - The status is shown as an icon, whether running, pending, stopped, etc. for each pass. - List Size - The number of unique contacts in the active list. - Delivered - The number of successful contact attempts. - Remaining - The number of contacts left to process. - Failed - The number of contacts with a current failure status. - Filtered - The number of contacts filtered from the list. - Done - The total number of contact attempts made. Activity Log The Activity Log tab shows a log of actions taken, either by users or by the system, related to the sub-campaign. By default, only those events for which an action was taken are displayed. You can then print or send, via email, the activity log. You can also toggle between Show All and Hide No Action Taken events. AutoManage AutoManage Rules enable a campaign to automatically take certain actions in real time, based on campaign performance. See the AutoManage page for more information. Details The Details tab shows all details related to the sub-campaign and groups the details by contact center information and sub-campaign information, as described below. 
Contact Center Information The following information displays: - Agent Group - This is the name of the agent group assigned to the sub-campaign. Click the name to display the Edit Agent Group page, on which you can change the agent group schedule for the current day or a future day. Statistics for the sub-campaign are presented in a table and show the activity of the previous 15 minutes and also for the total duration of the sub-campaign thus far. If calls are not in progress, the Last 15m column does not appear. - Direct Connected Success - The number of successful Direct Connects. - Direct Connected Failure - The number of Direct Connects that have failed; in other words, calls in which the customer could not be connected to the contact center. When an attempt fails (due to a Busy, No Answer, or No DC Lines Available) more attempts are made. If still unsuccessful, the failed connection attempts are identified as Direct Connected Failure and, depending upon your script, plays an alternate message such as "Due to the overwhelming response, we are unable to connect you to an operator at this time. Please listen to the following message for more information." - Ring Time - The average ring time (in seconds) for the successful Direct Connects. - Hold Time - The average hold time (in seconds) after the call has been transferred to the contact center but before the call was connected to an agent. The hold time figures are available only if the script includes the Whisper feature. - Connect Time - The average connect time (in seconds) for the successful Direct Connects. Connect time, also referred to as “talk time,” excludes any hold time, although hold time is included as part of the connect time in Summary and Detail reports. Sub-campaign Information In this section, you can view and change the system priority, which assigns priority delivery, such as Highest, Normal, or Lowest, to a sub-campaign.If one sub-campaign has a higher priority than another sub-campaign, the high priority sub-campaign is given precedence to available phone lines. Unless there is a special circumstance that warrants a high or low priority, it’s best to leave this option as the default Normal priority. There may be circumstances in which you will want to change priority, either during sub-campaign setup or as a sub-campaign is in progress. You can also select the Generate Sub-campaign reports after each pass completes to send a Sub-campaign Summary report to your account email address when each call pass completes. Feedback Comment on this article:
https://docs.genesys.com/Documentation/OCS/latest/OCShelp/SubCampaigns
2019-04-18T19:17:02
CC-MAIN-2019-18
1555578526228.27
[]
docs.genesys.com
Mobile navigation and configuration
Manage incidents, collaborate with your team, respond to approval requests, access the knowledge base, and receive push notifications all on the go with your mobile device. Depending on your device, go to the iTunes store or the Google Play store and search for ServiceNow to download the native mobile app. Explore Upgrade to london Domain separation in the Mobile application Set up Configure authentication Customize mobile lists Configure Connect chat for mobile Use Get started with a ServiceNow instance on your mobile device Log in on a mobile device Use the application navigator Mobile lists Mobile favorites Navigate the mobile service catalog Develop Mobile client scripting Mobile UI Actions Script items in a mobile list Developer training Developer documentation Troubleshoot and get help Ask or answer questions in the community Search the HI knowledge base for known error articles Contact ServiceNow Support
https://docs.servicenow.com/bundle/london-mobile/page/administer/tablet-mobile-ui/concept/mobile-experience.html
2019-04-18T18:50:26
CC-MAIN-2019-18
1555578526228.27
[]
docs.servicenow.com
The BAM mediator is deprecated. You can use the Publish Event Mediator to perform similar functionality. The BAM mediator captures data events from the ESB profile mediation sequences and sends them to the WSO2 Business Activity Monitor server via its Thrift API. This API uses a binary protocol and enables fast data transmission between the EI and BAM server. For more information on BAM, see the BAM documentation. <bam xmlns=""> <serverProfile name="string"> <streamConfig name="string" version="string"></streamConfig> </serverProfile> </bam> Before you can configure a BAM mediator, you must configure a BAM server profile and stream. Parameters that can be configured for the BAM mediator are as follows.
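For illustration only, here is a minimal sketch of what a configured BAM mediator could look like inside a sequence, assuming a BAM server profile named bam_profile and a stream named esb_data_stream have already been created (the profile name, stream name, version, and the Synapse namespace shown are assumptions, not taken from this page):

<!-- Hypothetical configuration: publishes mediation events to stream
     "esb_data_stream" (version 1.0.0) defined under server profile
     "bam_profile". All names are placeholders. -->
<bam xmlns="http://ws.apache.org/ns/synapse">
    <serverProfile name="bam_profile">
        <streamConfig name="esb_data_stream" version="1.0.0"/>
    </serverProfile>
</bam>

In this pattern the server profile holds the connection details for the BAM server, the stream configuration names the event stream to publish to, and the mediator itself only references both by name.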
https://docs.wso2.com/pages/viewpage.action?pageId=85376588
2019-04-18T18:21:16
CC-MAIN-2019-18
1555578526228.27
[]
docs.wso2.com
If you enjoyed playing this, then you can find similar games in the category. Yes this is the question that you will ask from me before downloading this file. The graphics and fast paced combat still are relevant. It's one of those easy to learn, hard to master games. It was released for Arcades in March 1997, and for the PlayStation in March - September 1998. I live in a small city, but there is no shortage of professional here because they are many great pro and those are just awesome. What another significant change in motion is jumping is softer, not allowing fighters to jump to extreme heights which was present in previous games , but keep climbing sensible realistic heights. Another big change in the movement was that the jumps were mitigated and the fighters could no longer jump to extreme heights as was the case in the previous games , but to make jumps at reasonable and realistic heights. Tekken 3 is a single title from the many , and offered for this console. Tekken 3 is a small fighting video game that was developed and published by Namco Studios. Watch that video and if you still face any problem, then you can ask inside comments section, we will try to help you solve you problem. Lazy Namco didn't bother to optmise it for 50hz tvs and it plays like your walking through dry sand. Installing this file is very easy, but you have to understand it carefully otherwise you will not be able to install this file.. Alot of modes including the Versus Multiplayer and the 2 New Special Modes Tekken Force and Tekken Ball. We hope your game is working 100% fine because it is our first priority to upload only working and tested games. Your task is very simple, you need to beat the enemy until he falls unconscious.. This made air combat more controllable, and to make greater use of avoidance tricks, like jumping today has become a global movement to avoid flying over the earth moves. It was also the last installment of the series for the PlayStation. They are the most dangerous players that I have ever seen. Tekken 3 is still widely considered one of the greatest games of its genre, and of all time. It's a classic download it asap. We test every single game before uploading but but if you encountered some error like Runtime Errors or Missing dll files or others errors during installation than you must need read this to fix it. Definetly the best Fighter on the Playstation 1. In this product there are two modes, Single Player and Multiplayer, you can play online at anytime you want. Lots of Unlockables ranging from Movie Endings to extra Characters. Other than that, the improved engine allowed a quick recovery of the decline, more escapes tackles and stunned, better juggling like many old movements have changed the parameters that enable them to connect to stressful situations complex, not link to previous games and further launches new combined entity. To lure it out of hiding will take the greatest fighting contest the world has ever seen. Game Features: Alot of characters to choose from. Click the link below to Download. Sometime, I go to the Game club to watch their games even sometime, they bet too because they spent their whole life in this field so, they made it a source to earn the money and I have really impressed by the way of their thinking because they knew that they have the talent to win from anyone so, they spend their whole day there even I have the complete list of the Top players of Tekken 3 in Pakistan.. It still stands up well today. 
Developers, Publishers, Release Dates and Genres Introduction In each and every installment there are some developers, directors and publishers who make and publish these installments, so there are also some developers, directors, publishers, release dates and genres. I'm more of a Virtua Fighter fan but this impressed me to no end. The original Arcade version of the game was released in 2005 for the PlayStation 2 as part of Tekken 5's Arcade History mode. System Requirements Of Tekken 3 Game We have added two section and there is the comparison between the exact one and the provided one so, we have added recommended + minimum requirements, then you will be able to understand it more clearly. Nothing beats the satisfaction humiliating youre friends in this game. This third installment of the Tekken series showcases the same core fighting system and concept as its predecessors but features a more detailed graphics, new moves and combos, a more balanced character roster, and so much more. It was released in arcades in March 1997 and for the PlayStation in 1998. Tekken 3 — Many of us like this kind of game like fighting game, and probably everyone who played a lot of fighting games knows this great game. My personal opinion is that this is the best fighting franchise in the business. Whereas the element of depth had been largely insignificant in previous Tekken games aside from some characters having unique sidesteps and dodging maneuvers. It is the 3rd installment in this series that is available to download free from this website, this website always provide working games, so you can easily get it Today. To enter the background or exit, lightly press the joystick or touch the Controller button in the console version in the appropriate direction. Tekken 3 Pc Game Free Download is widely regarded as one of the best games of its kind, and all the time. Tekken 3 game is from the various on the site, and there are more games like this, including Tekken Advance, Tekken 2 and Mario Kart 64. No installation required You Will Also Love to Check: Tekken 3 Pc Game Free Download Password: thepcgames. Tekken 3 Game is Working or Not? Tekken 3 retains the same central combat system and concept as its predecessors, but brings many improvements, including much more detailed graphics and animations, fifteen new characters, more up-to-date music and a faster and smoother game. You have to check it before installing in your computer or you have to download it according to the operating system that you currently have. Tekken 3 maintains the same core fighting system and concept as its predecessors. This made air combat more controllable and made more use of avoiding evasive maneuvers since jumping was no longer a universal evasion movement that overflew all the movement of the ground. Check out the Latest Version of Tekken Game Other Search Terms Tekken 3 is the 3rd part of Tekken game series. A game rivalled by the likes of Street fighter, and at the time, the king of fighters series. Best Screenshots of This Game How To Download This Game? A variety of heroes and the sea of blood will be approved by all fans of fighting games. Some are fighting for revenge, some for honor, Ultimately, all are fighting for their lives and the fate of all mankind. The original arcade version of the game was released in 2005 for the PlayStation 2 as part of the arcade story mode. All as usual, no abstruse plot, but only fights, blood and a sea of violence.
http://reckon-docs.com.au/windows/tekken-3.html
2019-04-18T19:04:10
CC-MAIN-2019-18
1555578526228.27
[]
reckon-docs.com.au
Create a WMI Event Alert This topic describes how to create a SQL Server Agent alert that is raised when a specific SQL Server event occurs that is monitored by the WMI Provider for Server Events in SQL Server 2017, by using SQL Server Management Studio or Transact-SQL. For information about using the WMI Provider to monitor SQL Server events, see WMI Provider for Server Events Classes and Properties. Before You Begin Only WMI namespaces on the computer that runs SQL Server Agent are supported. Security Permissions By default, only members of the sysadmin fixed server role can execute sp_add_alert. Using SQL Server Management Studio Using Transact-SQL
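The step-by-step procedures for both tools did not survive extraction here. As a Transact-SQL sketch only (the alert name, job name, and event class below are hypothetical, not from the original topic), a WMI event alert can be created with msdb.dbo.sp_add_alert by supplying a WMI namespace and a WQL query:

-- Hypothetical example: raise an alert whenever a deadlock graph event
-- occurs on the default instance, and run an existing Agent job in response.
-- The namespace targets the default instance (MSSQLSERVER); adjust it for a
-- named instance. Alert and job names are placeholders.
USE msdb;
GO
EXEC dbo.sp_add_alert
    @name = N'WMI deadlock alert',
    @wmi_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
    @wmi_query = N'SELECT * FROM DEADLOCK_GRAPH',
    @job_name = N'Capture deadlock details';
GO

The @wmi_namespace and @wmi_query arguments are what distinguish a WMI event alert from a performance-condition or error-number alert; per the note above, the namespace must refer to the computer that runs SQL Server Agent.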
https://docs.microsoft.com/en-us/sql/ssms/agent/create-a-wmi-event-alert?view=sql-server-2017
2019-04-18T18:30:56
CC-MAIN-2019-18
1555578526228.27
[]
docs.microsoft.com
BioThings Studio tutorial¶ This tutorial will guide you through BioThings Studio, a pre-configured environment used to build and administer BioThings API. This guide will show how to convert a simple flat file to a fully operational BioThings API, with as minimal effort as possible. Note You may also want to read the developer’s guide for more detailed informations. Note The following tutorial is only valid for BioThings Studio release 0.1.e. Check all available releases for more. What you’ll learn¶ Through this guide, you’ll learn: - how to obtain a Docker image to run your favorite API - how to run that image inside a Docker container and how to access the BioThings Studio application - how to integrate a new data source by defining a data plugin - how to define a build configuration and create data releases - how to create a simple, fully operational BioThings API serving the integrated data Prerequisites¶ Using BioThings Studio requires a Docker server up and running, some basic knowledge about commands to run and use containers. Images have been tested on Docker >=17. Using AWS cloud, you can use our public AMI biothings_demo_docker ( ami-44865e3c in Oregon region) with Docker pre-configured and ready for studio deployment. Instance type depends on the size of data you want to integrate and parsers’ performances. For this tutorial, we recommend using instance type with at least 4GiB RAM, such as t2.medium. AMI comes with an extra 30GiB EBS volume, which is more than enough for the scope of this tutorial. Alternately, you can install your own Docker server (on recent Ubuntu systems, sudo apt-get install docker.io is usually enough). You may need to point Docker images directory to a specific hard drive to get enough space, using -g option: # /mnt/docker points to a hard drive with enough disk space sudo echo 'DOCKER_OPTS="-g /mnt/docker"' >> /etc/default/docker # restart to make this change active sudo service docker restart Downloading and running BioThings Studio¶ BioThings Studio is available as a Docker image that you can download following this link using your favorite web browser or wget: $ wget Typing docker ps should return all running containers, or at least an empty list as in the following example. Depending on the systems and configuration, you may have to add sudo in front of this command to access Docker server. $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES Once downloaded, the image can be loaded into the server: $ docker image load < biothings_studio_latest.docker $ docker image list REPOSITORY TAG IMAGE ID CREATED SIZE biothings_studio 0.1e 742a8c502280 2 months ago 1.81 GB Notice the value for TAG, we’ll need it to run the container (here, 0.1e) We will map and expose those ports to the host server using option -p so we can access BioThings services without having to enter the container: $ docker run --name studio -p 8080:8080 -p 7022:7022 -p 7080:7080 -p 9200:9200 -p 27017:27017 -p 8000:8000 -p 9000:9000 -d biothings_studio:0.1e Note we need to add the release number after the image name: biothings_studio:0.1e. Should you use another release (including unstable releases, tagged as master) you would need to adjust this parameter accordingly. Note Biothings Studio and the Hub are not designed to be publicly accessible. Those ports should not be exposed. When accessing the Studio and any of these ports, SSH tunneling can be used to safely access the services from outside. 
Ex: ssh -L 7080:localhost:7080 -L 8080:localhost:8080 user@mydockerserver will expose the web application and the REST API ports to your computer, so you can access the webapp using and the API using. See for more We can follow the starting sequence using docker logs command: $ docker logs -f studio Waiting for mongo tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN - * Starting Elasticsearch Server ... now run webapp not interactive Please refer Filesystem overview and Services check for more details about Studio’s internals. By default, the studio will auto-update its source code to the latest available and install all required dependencies. This behavior can be skipped by adding no-update at the end of the command line. We can now access BioThings Studio using the dedicated web application (see webapp overview). Creating an API from a flat file¶ In this section we’ll dive in more details on using the BioThings Studio and Hub. We will be integrating a simple flat file as a new datasource within the Hub, declare a build configuration using that datasource, create a build from that configuration, then a data release and finally instantiate a new API service and use it to query our data. Input data, parser and data plugin¶ For this tutorial, we will integrate data from the Cancer Genome Interpreter (CGI). This datasource is used in MyVariant.info, one of the most used BioThings APIs. The input file is available here:. The parser itself is not the main topic of this tutorial, the full code for the parser can be found here, in MyVariant’s github repository. From a single flat file, it produces JSON documents looking like this: { "_id": "chr9:g.133747570A>G", "cgi": { "association": "Resistant", "cdna": "c.877A>G", "drug": "Imatinib (BCR-ABL inhibitor 1st gen&KIT inhibitor)", "evidence_level": "European LeukemiaNet guidelines", "gene": "ABL1", "primary_tumor_type": "Chronic myeloid leukemia", "protein_change": "ABL1:I293V", "region": "inside_[cds_in_exon_5]", "source": "PMID:21562040", "transcript": "ENST00000318560" } } Note The _id key is mandatory and represents a unique identifier for this document. The type must a string. The _id key is used when data from multiple datasources are merged together, that process is done according to its value (all documents sharing the same _id from different datasources will be merged together). We can easily create a new datasource and integrate data using BioThings Studio, by declaring a data plugin. Such plugin is defined by: - a folder containing a manifest.json file, where the parser and the input file location are declared - all necessary files supporting the declarations in the manifest, such as a python file containing the parsing function for instance. This folder must be located in the plugins directory (by default /data/biothings_studio/plugins, where the Hub monitors changes and reloads itself accordingly to register data plugins. Another way to declare such plugin is to register a github repository, containing everything useful for the datasource. This is what we’ll do in the following section. Note whether the plugin comes from a github repository or directly found in the plugins directory doesn’t really matter. In the end, the code will be found that same plugins directory, whether it comes from a git clone command while registeting the github URL or whether it comes from folder and files manually created in that location. 
It’s however easier, when developing a plugin, to directly work on local files first so we don’t have to regurlarly update the plugin code ( git pull) from the webapp, to fetch the latest code. That said, since the plugin is already defined in github in our case, we’ll use the github repo registration method. The corresponding data plugin repository can be found at. The manifest file looks like this: { "version": "0.2", "__metadata__" : { "license_url" : "", "licence" : "CC BY-NC 4.0", "url" : "" }, "dumper" : { "data_url" : "", "uncompress" : true, }, "uploader" : { "parser" : "parser:load_data", "on_duplicates" : "ignore" } } - the dumper section declares where the input file is, using data_url key. Since the input file is a ZIP file, we first need to uncompress the archive, using uncompress : true. - the uploader section tells the Hub how to upload JSON documents to MongoDB. parser has a special format, module_name:function_name. Here, the parsing function is named load_data and can be found in parser.py module. ‘on_duplicates’ : ‘ignore’ tells the Hub to ignore any duplicated records (documents with same _id). For more information about the other fields, please refer to the plugin specification. Let’s register that data plugin using the Studio. First, copy the repository URL: Moving back to the Studio, click on the tab, then icon, this will open a side bar on the left. Click on New data plugin, you will be asked to enter the github URL. Click “OK” to register the data plugin. Interpreting the manifest coming with the plugin, BioThings Hub has automatically created for us: - a dumper using HTTP protocol, pointing to the remote file on the CGI website. When downloading (or dumping) the data source, the dumper will automatically check whether the remote file is more recent than the one we may have locally, and decide whether a new version should be downloaded. - and an uploader to which it “attached” the parsing function. This uploader will fetch JSON documents from the parser and store those in MongoDB. At this point, the Hub has detected a change in the datasource code, as the new data plugin source code has been pulled from github locally inside the container. In order to take this new plugin into account, the Hub needs to restart to load the code. The webapp should detect that reload and should ask whether we want to reconnect, which we’ll do! Upon registration, the new data source appears: is used to trigger the dumper and (if necessary) download remote data will trigger the uploader (note it’s automatically triggered if a new version of the data is available) can be used to “inspect” the data, more of that later Let’s open the datasource by clicking on its title to have more information. Dumper and Uploader tabs are rather empty since none of these steps have been launched yet. The Plugin tab though shows information about the actual source code pulled from the github repository. As shown, we’re currently at the HEAD version of the plugin, but if needed, we could freeze the version by specifiying a git commit hash or a branch/tag name. Without further waiting, let’s trigger a dump to integrate this new datasource. Either go to Dump tab and click on or click on to go back to the sources list and click on at the bottom of the datasource. The dumper is triggered, and after few seconds, the uploader is automatically triggered. Commands can be listed by clicking at the top the page. 
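To make the parser/uploader relationship concrete, here is a deliberately simplified sketch of what a load_data function in parser.py can look like. It is not the actual MyVariant CGI parser referenced above; the input filename, column names, and the hgvs_id shortcut are assumptions made purely for illustration:

# parser.py -- simplified, hypothetical sketch of a data plugin parser.
# The real CGI parser (linked earlier) builds the HGVS-based _id from
# genomic coordinates; here we pretend a precomputed column exists.
import csv
import os

def load_data(data_folder):
    # The uploader calls this with the folder where the dumper stored
    # (and uncompressed) the downloaded file; the filename is assumed.
    infile = os.path.join(data_folder, "cgi_biomarkers_per_variant.tsv")
    with open(infile, "r", encoding="utf-8") as handle:
        reader = csv.DictReader(handle, delimiter="\t")
        for row in reader:
            _id = row.get("hgvs_id")          # assumed column name
            if not _id:
                continue
            yield {
                "_id": _id,
                "cgi": {
                    "gene": row.get("Gene"),
                    "drug": row.get("Drug"),
                    "association": row.get("Association"),
                },
            }

The only hard requirements visible from the manifest are that the module/function pair matches the parser declaration (parser:load_data) and that each yielded document carries a string _id, since that key drives merging later on.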
So far we’ve run 3 commands to register the plugin, dump the data and upload the JSON documents to MongoDB. All succeeded. We also have new notifications as shown by the red number on the right. Let’s have a quick look: Going back to the source’s details, we can see the Dumper has been populated. We now know the release number, the data folder, when was the last download, how long it tooks to download the file, etc… Same for the Uploader tab, we now have 323 documents uploaded to MongoDB. Inspecting the data¶ Now that we have integrated a new datasource, we can move forward. Ultimately, data will be sent to ElasticSearch, an indexing engine. In order to do so, we need to tell ElasticSearch how the data is structured and which fields should be indexed (and which should not). This step consists of creating a “mapping”, describing the data in ElasticSearch terminology. This can be a tedious process as we would need to dig into some tough technical details and manually write this mapping. Fortunately, we can ask BioThings Studio to inspect the data and suggest a mapping for it. In order to do so, click on Mapping tab, then click on . We’re asked where the Hub can find the data to inspect. Since we successfully uploaded the data, we now have a Mongo collection so we can directly use this. Click on “OK” to let the Hub work and generate a mapping for us. Since the collection is very small, inspection is fast, you should have a mapping generated within few seconds. For each field highlighted in blue, you can decide whether you want the field to be searchable or not, and whether the field should be searched by default when querying the API. Let’s click on “gene” field and make it searched by default. Indeed, by checking the “Search by default” checkbox, we will be able to search for instance gene “ABL1” with /query?q=ABL1 instead of /query?q=cgi.gene:ABL1. After this modification, you should see at the top of the mapping, let’s save our changes clicking on . Also, before moving forwared, we want to make sure the mapping is valid, let’s click on . You should see this success message: Note “Validate on test” means Hub will send the mapping to ElasticSearch by creating a temporary, empty index to make sure the mapping syntax and content are valid. It’s immediately deleted after validation (wheter successful or not). Also, “test” is the name of an environment, by default, and without further manual configuration, this is the only development environment available in the Studio, pointing to embedded ElasticSearch server. Everything looks fine, one last step is to “commit” the mapping, meaning we’re ok to use this mapping as the official, registered mapping, the one that will actually be used by ElasticSearch. Indeed the left side of the page is about inspected mapping, we can re-launch the inspection as many time as we want, without impacting active/registered mapping (this is usefull when the data structure changes). Click on then “OK”, and you now should see the final, registered mapping on the right: Defining and creating a build¶ Once we have integrated data and a valid ElasticSeach mapping, we can move forward by creating a build configuration. A build configuration tells the Hub which datasources should be merged together, and how. Click on then and finally, click on . - enter a name for this configuration. 
We’re going to have only one configuration created through this tutorial so it doesn’t matter, let’s make it “default” - the document type represents the kind of documents stored in the merged collection. It gives its name to the annotate API endpoint (eg. /variant). This source is about variant, so “variant” it is… - open the dropdown list and select the sources you want to be part of the merge. We only have one, “mvcgi” - in root sources, we can declare which sources are allowed to create new documents in the merged collection, that is merge documents from a datasource, but only if corresponding documents exist in the merged collection. It’s usefull if data from a specific source relates to data on another source (it only makes sense to merge that relating data if the data itself is present). If root sources are declared, Hub will first merge them, then the others. In our case, we can leave it empty (no root sources specified, all sources can create documents in the merged collection) - the other fields are for advanced usage and are out-of-topic for this tutorial Click “OK” and open the menu again, you should see the new configuration available in the list. Click on it and create a new build. You can give a specific name for that build, or let the Hub generate one for you. Click “OK”, after few seconds, you should see the new build displayed on the page. Open it by clicking on its name. You can explore the tabs for more information about it (sources involved, build times, etc…). The “Release” tab is the one we’re going to use next. Creating a data release¶ If not there yet, open the new created build and go the “Release” tab. This is the place where we can create new data releases. Click on . Since we only have one build available, we can’t generate an incremental release so we’ll have to select full this time. Click “OK” to launch the process. Note Should there be a new build available (coming from the same configuration), and should there be data differences, we could generate an incremental release. In this case, Hub would compute a diff between previous and new builds and generate diff files (using JSON diff format). Incremental releases are usually smaller than full releases, usually take less time to deploy (appying diff data) unless diff content is too big (there’s a threshold between using an incremental and a full release, depending on the hardware and the data, because applying a diff requires to first fetch the document from ElasticSearch, patch it, and then save it back) Hub will directly index the data on its locally installed ElasticSearch server ( test environment). After few seconds, a new full release is created. We can easily access ElasticSearch server using the application Cerebro which comes pre-configured with the studio. Let’s access it through (assuming ports 9200 and 9000 have properly been mapped, as mentioned earlier). Cerebro provides an easy to manager ElasticSearch and check/query indices. Click on the pre-configured server named BioThings Studio. Clicking on an index gives access to different information, such as the mapping, which also contains metadata (sources involved in the build, releases, counts, etc…) Generating a BioThings API¶ At this stage, a new index containing our data has been created on ElasticSearch, it is now time for final step. 
Click on then and finally To turn on this API instance, just click on , you should then see a label on the top right corner, meaning the API can be accessed: Note When running, queries such /metadata and /query?q=* are provided as examples. They contain a hostname set by Docker though (it’s the Docker instance hostname), which probably means nothing outside of Docker’s context. In order to use the API you may need to replace this hostname by the one actually used to access the Docker instance. Accessing the API¶ Assuming API is accessible through, we can easily query it with curl for instance. The endpoint /metadata gives information about the datasources and build date: $ curl localhost:8000/metadata { "build_date": "2018-06-05T18:32:23.604840", "build_version": "20180605", "src": { "mvcgi": { "stats": { "mvcgi": 323 }, "version": "2018-04-24" } }, "src_version": { "mvcgi": "2018-04-24" }, "stats": {} Let’s query the data using a gene name (results truncated): $ curl localhost:8000/query?q=ABL1 { "max_score": 2.5267246, "took": 24, "total": 93, "hits": [ { "_id": "chr9:g.133748283C>T", "_score": 2.5267246, "cgi": [ { "association": "Responsive", "cdna": "c.944C>T", "drug": "Ponatinib (BCR-ABL inhibitor 3rd gen&Pan-TK inhibitor)", "evidence_level": "NCCN guidelines", "gene": "ABL1", "primary_tumor_type": "Chronic myeloid leukemia", "protein_change": "ABL1:T315I", "region": "inside_[cds_in_exon_6]", "source": "PMID:21562040", "transcript": "ENST00000318560" }, { "association": "Resistant", "cdna": "c.944C>T", "drug": "Bosutinib (BCR-ABL inhibitor 3rd gen)", "evidence_level": "European LeukemiaNet guidelines", "gene": "ABL1", "primary_tumor_type": "Chronic myeloid leukemia", "protein_change": "ABL1:T315I", "region": "inside_[cds_in_exon_6]", "source": "PMID:21562040", "transcript": "ENST00000318560" }, ... Note we don’t have to specify cgi.gene, the field in which the value “ABL1” should be searched, because we explicitely asked ElasticSearch to search that field by default (see fieldbydefault) Finally, we can fetch a variant by its ID: $ curl "localhost:8000/variant/chr19:g.4110584A>T" { "_id": "chr19:g.4110584A>T", "_version": 1, "cgi": [ { "association": "Resistant", "cdna": "c.373T>A", "drug": "BRAF inhibitors", "evidence_level": "Pre-clinical", "gene": "MAP2K2", "primary_tumor_type": "Cutaneous melanoma", "protein_change": "MAP2K2:C125S", "region": "inside_[cds_in_exon_3]", "source": "PMID:24265153", "transcript": "ENST00000262948" }, { "association": "Resistant", "cdna": "c.373T>A", "drug": "MEK inhibitors", "evidence_level": "Pre-clinical", "gene": "MAP2K2", "primary_tumor_type": "Cutaneous melanoma", "protein_change": "MAP2K2:C125S", "region": "inside_[cds_in_exon_3]", "source": "PMID:24265153", "transcript": "ENST00000262948" } ] } Conclusions¶ We’ve been able to easily convert a remote flat file to a fully operational BioThings API: - by defining a data plugin, we told the BioThings Hub where the remote data was and what the parser function was - BioThings Hub then generated a dumper to download data locally on the server - It also generated an uploader to run the parser and store resulting JSON documents - We defined a build configuration to include the newly integrated datasource and then trigger a new build - Data was indexed internally on local ElasticSearch by creating a full release - Then we created a BioThings API instance pointing to that new index The final step would then be to deploy that API as a cluster on a cloud. 
This last step is currently under development, stay tuned! Troubleshooting¶ We test and make sure, as much as we can, that the BioThings Studio image is up-to-date and running properly. But things can still go wrong… First make sure all services are running. Enter the container and type netstat -tnlp, you should see services running on ports (see usual running `services`_). If services running on ports 7080 or 7022 aren’t running, it means the Hub has not started. If you just started the instance, wait a little more as services may take a while before they’re fully started and ready. If after ~1 min, you still don’t see the Hub running, log to user biothings and check the starting sequence. Note Hub is running in a tmux session, under user biothings # sudo su - biothings $ tmux a # recall tmux session $ python bin/hub.py DEBUG:asyncio:Using selector: EpollSelector INFO:root:Hub DB backend: {'uri': 'mongodb://localhost:27017', 'module': 'biothings.utils.mongo'} INFO:root:Hub database: biothings_src DEBUG:hub:Last launched command ID: 14 INFO:root:Found sources: [] INFO:hub:Loading data plugin '' (type: github) DEBUG:hub:Creating new GithubAssistant instance DEBUG:hub:Loading manifest: {'dumper': {'data_url': '', 'uncompress': True}, 'uploader': {'ignore_duplicates': False, 'parser': 'parser:load_data'}, 'version': '0.1'} INFO:indexmanager:{} INFO:indexmanager:{'test': {'max_retries': 10, 'retry_on_timeout': True, 'es_host': 'localhost:9200', 'timeout': 300}} DEBUG:hub:for managers [<SourceManager [0 registered]: []>, <AssistantManager [1 registered]: ['github']>] INFO:root:route: ['GET'] /job_manager => <class 'biothings.hub.api.job_manager_handler'> INFO:root:route: ['GET'] /command/([\w\.]+)? => <class 'biothings.hub.api.command_handler'> ... INFO:root:route: ['GET'] /api/list => <class 'biothings.hub.api.api/list_handler'> INFO:root:route: ['PUT'] /restart => <class 'biothings.hub.api.restart_handler'> INFO:root:route: ['GET'] /status => <class 'biothings.hub.api.status_handler'> DEBUG:tornado.general:sockjs.tornado will use json module INFO:hub:Monitoring source code in, ['/home/biothings/biothings_studio/hub/dataload/sources', '/home/biothings/biothings_studio/plugins']: ['/home/biothings/biothings_studio/hub/dataload/sources', '/home/biothings/biothings_studio/plugins'] You should see something looking like this above. If not, you should see the actual error, and depending on the error, you may be able to fix it (not enough disk space, etc…). BioThings Hub can be started again using python bin/hub.py from within the application directory (in our case, /home/biothings/biothings_studio) Note Press Control-B then D to dettach the tmux session and let the Hub running in background. By default, logs are available in /home/biothings/biothings_studio/data/logs. Finally, you can report issues and request for help, by joining Biothings Google Groups ()
http://docs.biothings.io/en/latest/doc/studio_tutorial.html
2019-04-18T18:31:31
CC-MAIN-2019-18
1555578526228.27
[array(['../_images/githuburl.png', '../_images/githuburl.png'], dtype=object) array(['../_images/registerdp.png', '../_images/registerdp.png'], dtype=object) array(['../_images/hub_restarting.png', '../_images/hub_restarting.png'], dtype=object) array(['../_images/listdp.png', '../_images/listdp.png'], dtype=object) array(['../_images/plugintab.png', '../_images/plugintab.png'], dtype=object) array(['../_images/allcommands.png', '../_images/allcommands.png'], dtype=object) array(['../_images/allnotifs.png', '../_images/allnotifs.png'], dtype=object) array(['../_images/dumptab.png', '../_images/dumptab.png'], dtype=object) array(['../_images/uploadtab.png', '../_images/uploadtab.png'], dtype=object) array(['../_images/inspectmenu.png', '../_images/inspectmenu.png'], dtype=object) array(['../_images/inspected.png', '../_images/inspected.png'], dtype=object) array(['../_images/genefield.png', '../_images/genefield.png'], dtype=object) array(['../_images/validated.png', '../_images/validated.png'], dtype=object) array(['../_images/registered.png', '../_images/registered.png'], dtype=object) array(['../_images/buildconfform.png', '../_images/buildconfform.png'], dtype=object) array(['../_images/buildconflist.png', '../_images/buildconflist.png'], dtype=object) array(['../_images/newbuild.png', '../_images/newbuild.png'], dtype=object) array(['../_images/builddone.png', '../_images/builddone.png'], dtype=object) array(['../_images/newreleaseform.png', '../_images/newreleaseform.png'], dtype=object) array(['../_images/newfullrelease.png', '../_images/newfullrelease.png'], dtype=object) array(['../_images/cerebro_connect.png', '../_images/cerebro_connect.png'], dtype=object) array(['../_images/cerebro_index.png', '../_images/cerebro_index.png'], dtype=object) array(['../_images/newapi.png', 'newapi'], dtype=object) array(['../_images/apilist.png', '../_images/apilist.png'], dtype=object) array(['../_images/apirunning.png', '../_images/apirunning.png'], dtype=object) ]
docs.biothings.io
Mark a playbook as completed Applies to Dynamics 365 for Customer Engagement apps version 9.x When you complete all the activities created for a playbook, you must mark the playbook as completed. This helps you to know if the playbook was successful or not. To mark a playbook as completed, go to the record you launched the playbook from (calling record). In the playbook record, on the command bar, select Complete as, and then select one of the following results: Successful Not Successful Partially Successful Not Required Note A system administrator or customizer can add custom values to this field. See also Launch a playbook to carry out activities consistently Track playbook activities
https://docs.microsoft.com/en-gb/dynamics365/customer-engagement/sales-enterprise/mark-playbook-completed
2019-04-18T18:26:30
CC-MAIN-2019-18
1555578526228.27
[]
docs.microsoft.com
Cookbook Generating custom links in middleware and request handlers In most cases, you can rely on the ResourceGenerator to generate self relational links, and, in the case of paginated collections, pagination links. What if you want to generate other links to include in your resources, though? The ResourceGenerator provides access to the metadata map, hydrators, and link generator via getter methods: getMetadataMap() getHydrators() getLinkGenerator() We can thus use these in order to generate custom links as needed. Creating a custom link to include in a resource In our first scenario, we'll create a "search" link for a resource. We'll assume that you have composed a ResourceGenerator instance in your middleware, and assigned it to the $resourceGenerator property. The link we want to generate will look something like /api/books?query={searchParms}, and map to a route named books. $searchLink = $this->resourceGenerator ->getLinkGenerator() ->templatedFromRoute( 'search', $request, 'books', [], ['query' => '{searchTerms}'] ); You could then compose it in your resource: $resource = $resource->withLink($searchLink); Adding metadata for generated links In our second scenario, we'll consider a collection endpoint. It might include a per_page query string argument, to allow defining how many results to return per page, a sort argument, and a query argument indicating the search string. We know these at runtime, but not at the time we create our configuration, so we need to inject them after we have our metadata created, but before we generate our resource, so that the pagination links are correctly generated. $queryParams = $request->getQueryParams(); $query = $queryParams['query'] ?? ''; $perPage = $queryParams['per_page'] ?? 25; $sort = $queryParams['sort'] ?? ''; $metadataMap = $this->resourceGenerator->getMetadataMap(); $metadata = $metadataMap->get(BookCollection::class); $metadataQuery = $origMetadataQuery = $metadata->getQueryStringArguments(); if ('' !== $query) { $metadataQuery = array_merge($metadataQuery, ['query' => $query]); } if ('' !== $perPage) { $metadataQuery = array_merge($metadataQuery, ['per_page' => $perPage]); } if ('' !== $sort) { $metadataQuery = array_merge($metadataQuery, ['sort' => $sort]); } $metadata->setQueryStringArguments($metadataQuery); // ... $resource = $this->resourceGenerator->fromObject($books, $request); // Reset query string arguments $metadata->setQueryStringArguments($origMetadataQuery); This will lead to links with URIs such as /api/books?query=Adams&per_page=5&sort=DESC&page=4. Found a mistake or want to contribute to the documentation? Edit this page on GitHub!
https://docs.zendframework.com/zend-expressive-hal/cookbook/generating-custom-links-in-middleware/
2019-04-18T18:53:53
CC-MAIN-2019-18
1555578526228.27
[]
docs.zendframework.com
Remove-DnsServerZone Syntax Remove-DnsServerZone [-Name] <String> [-AsJob] [-CimSession <CimSession[]>] [-ComputerName <String>] [-Force] [-PassThru] [-ThrottleLimit <Int32>] [-Confirm] [-WhatIf] Description The Remove-DnsServerZone cmdlet removes a zone from a Domain Name System (DNS) server. You can use this cmdlet to remove any type of zone: primary, secondary, stub, or conditional forwarder. Examples Example 1: Remove a zone PS C:\> Remove-DnsServerZone "western.contoso.com" -PassThru -Verbose This command removes a zone named western.contoso.com. The command uses the PassThru parameter to create output. Required Parameters -Name Specifies the name of a zone. The cmdlet removes this zone. -Force Removes a zone without prompting you for confirmation. By default, the cmdlet prompts you for confirmation before it proceeds.
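A further illustrative run (the zone and server names below are placeholders, not from the original examples) combines the documented -ComputerName and -Force parameters to remove a zone from a remote DNS server without a confirmation prompt:

PS C:\> Remove-DnsServerZone -Name "override.contoso.com" -ComputerName "DnsServer02" -Force -PassThru -Verbose

Because -Force is specified, the cmdlet skips the confirmation prompt it would otherwise show; -PassThru makes it emit the removed zone object so the result can be inspected or logged.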
https://docs.microsoft.com/en-us/powershell/module/dnsserver/remove-dnsserverzone?view=winserver2012-ps
2018-01-16T09:53:54
CC-MAIN-2018-05
1516084886397.2
[]
docs.microsoft.com
Posts shortcode [su_posts] shortcode is intended for displaying posts, pages, and custom post types. You can display posts from a specific category or by a specific tag. You can also choose custom taxonomies and select the number of displayed posts. This shortcode uses the WP_Query class. The full list of the shortcode's parameters can be found under the plugin page at: Dashboard - Shortcodes - Available shortcodes - Posts. Pagination Unfortunately, pagination is currently not available in the shortcode. This function will be added in the next versions. Plugin version at the moment of writing this: 4.9.9. Built-in templates The [su_posts] shortcode allows you to use various templates for the display of posts. You can use several templates built into the plugin; the template parameter takes a relative path to the template file from the plugin folder or the folder of your theme. Shortcode example: [su_posts template="templates/default-loop.php"] In this example, the search for the template will be made in the following locations (in the specified order): - /wp-content/themes/child-theme/templates/default-loop.php - /wp-content/themes/parent-theme/templates/default-loop.php - /wp-content/plugins/shortcodes-ultimate/templates/default-loop.php Template editing Do not edit templates in the plugin folder, since all your changes will be deleted at the next plugin update. To change one of the built-in templates, you should copy it to the folder of your theme first. For convenience, you can copy the whole "templates" folder from the plugin folder to the folder of your theme. The folder with templates may have a name other than "templates". Resulting paths to template files should look like: /wp-content/themes/your-theme/templates/ Now you can edit the imported templates. As mentioned above, the plugin will search for the template file in the theme folder first. Creation of new templates
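As an illustration of calling a template copied into the theme, a shortcode might look like the following (the template filename is hypothetical, and the posts_per_page attribute is assumed from the full parameter list referenced above rather than confirmed by this page):

[su_posts template="templates/my-custom-loop.php" posts_per_page="5"]

Here the plugin would first look for templates/my-custom-loop.php in the child and parent theme folders before falling back to the plugin folder, following the search order shown earlier.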
http://docs.getshortcodes.com/article/43-posts
2018-01-16T09:29:37
CC-MAIN-2018-05
1516084886397.2
[]
docs.getshortcodes.com
Obtaining Software Bundles for Upgrade Before upgrading Teradata components deployed in a virtual or public cloud environment, you must complete this procedure and redeploy the system if you are doing either a major or minor upgrade. Since the operating system bundle includes both important security updates and the latest version of PUT, you must separately upgrade the operating system whenever upgrading other Teradata components deployed in a virtual or public cloud environment, with the exception of Server Management. - Log in to the Teradata Support portal. If you have multiple Teradata Support accounts, log in to the account associated with your Azure deployment. - Download the latest bundles for your operating system, if applicable, and each additional component to be upgraded. For instructions, search for KAP1A730A on the Teradata Support portal.
https://azure.docs.teradata.com/gyc1508357868094.html
2018-01-16T09:13:10
CC-MAIN-2018-05
1516084886397.2
[]
azure.docs.teradata.com
Authenticate search request Kibana Elasticsearch basic authentication is used for authentication. Valid certificate sentinl: settings: authentication: enabled: true username: 'elastic' password: 'password' cert: selfsigned: false pem: '/path/to/pem/key' Self-signed certificate sentinl: settings: authentication: enabled: true username: 'elastic' password: 'password' cert: selfsigned: true Siren Platform Authenticate Siren Alert using single user - default sentinl from Access Control app. For example, default investigate.yml. + # Access Control configuration investigate_access_control: enabled: true cookie: password: "12345678123456781234567812345678" admin_role: kibiadmin sentinl: elasticsearch: username: sentinl password: password ... Siren Platform or Kibana It is possible to create multiple user credentials and assign these credentials to watchers, one credential per watcher, thereby authenticating each watcher separately. It is called impersonation. - Create credentials in Search Guard or X-Pack and assign the permissions you need. You need one user for Sentinl and one user per watcher. Set Siren Alert authentication. sentinl: settings: authentication: enabled: true impersonate: true username: 'elastic' password: 'password' sha: '6859a748bc07b49ae761f5734db66848' cert: selfsigned: true - Set password as clear text in passwordproperty. The password can be put in encrypted form instead. Set password hash in shaproperty, now you can remove passwordoption. - Use sentinl/scripts/encryptPassword.jsscript to obtain the hash. Edit the value of the plainTextPasswordvariable, replacing adminwith your password. Copy the generated hash and paste as the shavalue. Also, you can change password hashing complexity by setting options inside encryption. Node.js crypto library is used to hash and unhash user password. - Set watcher authentication. encryptPassword.js username impersonate: true.
https://docs.siren.io/10.0.4/platform/en/siren-alert/authentication/authenticate-search-request.html
2021-09-17T01:13:04
CC-MAIN-2021-39
1631780053918.46
[array(['../../image/15c9e1f5bd7b06.png', 'Watcher authentication'], dtype=object) ]
docs.siren.io
Overview Version Third Party Applications and Versions Supported Platforms and Third Party Applications for Version 7.3 New Features and Enhancements Work Manager - Added ability to specify an Equipment Operator on a work order - - Enable work order editing - Smart sync process – reduced sync time by 80% - Active directory integration - Type ahead activity search when creating work orders - Delta enabled syncing for faster time to sync - Persisted user id at login - Added ability to hide cost day cards per customer - Unit of measure displayed for all quantity entries - Updated the work date field to be an optional entry on location day cards - Add location to a day card by selecting an asset on the map - Option on map to locate and show a user's current location - Create and edit work orders - View current location on map Maintenance Manager - Refreshed user interface for creating a work order from the Daily Work Report and Plan Matrix windows - Refreshed user interface and experience of the work order history window - Multi-activity job field configurable in mobile app - Added a new work order history window - Improved the work order creation wizard to simplify the user experience Structures Inspector - New Inspection Teams screen - allows an inspection coordinator to create a team of inspectors that can be assigned bridge structures for inspection - Enhanced Inspection Manager screen - allows an inspection coordinator to assign new inspection candidates to an inspection team Structures Analyst - In addition to optimization analysis on bridge NBI components, bridge analysis has been enhanced to support optimization analysis of AASHTO elements (or an aggregation of elements) by applying probabilistic deterioration models to a distribution of the four condition states GIS Explorer - Added security settings to the catalog for better content management - Support for creating and displaying a layer from an Image Service - New 'Between' operand for date fields when filtering - Ability to update (save) an existing map and its layers after changes have been made to the styling or other component Jasper Reports - Added support for Jasper server v6.4 as a future replacement for the embedded Jasper reports functionality - Note: Embedded Jasper reports functionality will remain functional for the next couple of application versions to allow for full migration of existing reports to Jasper server Dropped/Replaced Features - This version drops support for Esri Roads & Highways version 10.4.1 and below - This version drops support for Tomcat 7 - This version drops support for Android 4.4 (KitKat) - This version drops support for iOS 9 Other Improvements and Bug Fixes - Added: Support for ESRI Roads & Highways V10.6 - Added: Support for PostgreSQL as a database server OS option for deploying the application - Added: Ability to edit point and linear features on a map bound to a data window - Added: Ability to define up to four fixed columns (these will remain visible, when scroll horizontally) in a data window display - Added: Search box with type ahead for tree list items in Reports filter - Added: Shortlist for bridge inspections now shows inspections assigned to a user's team(s) - Added: Support for ESRI Roads & Highways V10.5.1 - Added: High Contrast mode display for color blind users, this applies a grey highlight over white text on grid views - Updated: The calendar widget was updated to simplify its GIS Explorer, where an error is returned when attempt to export a layer as a shape file - Fixed: 
Issue in GIS Explorer with IE 11, where an administrator could not edit a folder in the catalog - Fixed: Issue in the work manager mobile app, where the cursor is placed at the beginning of a field when attempting to edit the contents of a field - Fixed: Issue in materials management with Firefox, where an error is displayed when attempting to issue a material to a vehicle or a crew - Fixed: Issue in fleet management, where an error is displayed when attempting to complete a repair order with a costs daycard - Fixed: Issue in the Work Manager app, where an inactive labour resource was available for selection when creating or updating day cards - Fixed: Issue in maintenance management, where a selected work order on the Daily Log screen was not retained when saving changes made to day cards - Fixed: Performance issue in fleet & equipment management, where an attempt to save updates in the equipment fueling screen took over 20 seconds - Fixed: Performance issue in resources management, where an attempt to save a new transaction in the materials inventory management screen sometimes resulted in a hangup of the application - Fixed: Issue in bridge inspections, where media previously attached to an inspection was removed when additional media attachments were added from an FTP location - Fixed: Issue in bridge inspections, where the second line of text in a field could not be edited when the application is being used in right to left
https://docs.agileassets.com/display/PD10/7.3+Release+Notes
2021-09-17T00:11:41
CC-MAIN-2021-39
1631780053918.46
[]
docs.agileassets.com
Contents - Overview: Understanding the Concur Expense (Best Practice & Enhanced) – NetSuite integration template - Install the Concur Expense (Best Practice & Enhanced) – NetSuite integration templates - Configure and run flows in the Concur Expense (Best Practice & Enhanced) – NetSuite integration templates Two related Quickstart integration templates are available for automating data updates between SAP Concur Solutions and Oracle NetSuite. Choose the specific template for your Concur Expense account level: - Concur Expense (Best Practice) – NetSuite - Concur Expense (Enhanced) – SAP Concur NetSuite..
https://docs.celigo.com/hc/en-us/articles/360046746992-Understanding-the-Concur-Expense-Best-Practice-Enhanced-NetSuite-Quickstart-integration-templates
2021-09-17T01:20:12
CC-MAIN-2021-39
1631780053918.46
[]
docs.celigo.com
aws_alb resource Use the aws_alb InSpec audit resource to test properties of a single AWS Application Load Balancer (ALB). Syntax Ensure that an aws_alb exists describe aws_alb('arn:aws:elasticloadbalancing') do it { should exist } end describe aws_alb(load_balancer_arn: 'arn:aws:elasticloadbalancing') do it { should exist } end Parameters load_balancer_arn (required) This resource accepts a single parameter, the ALB ARN which uniquely identifies the ALB. This can be passed either as a string or as a load_balancer_arn: 'value' key-value entry in a hash. See also the AWS documentation on Elastic Load Balancing. Properties Examples Test that an ALB has its availability zones configured correctly describe aws_alb('arn::alb') do its('zone_names.count') { should be > 1 } its('zone_names') { should include 'us-east-2a' } its('zone_names') { should include 'us-east-2b' } end Matchers exist describe aws_alb('AnExistingALB') do it { should exist } end describe aws_alb('ANonExistentALB') do it { should_not exist } end AWS Permissions Your Principal will need the elasticloadbalancing:DescribeLoadBalancers action set to Allow. You can find detailed documentation at Authentication and Access Control for Your Load Balancers
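The Properties heading above lost its table during extraction. As an assumption-flagged illustration only (the property names load_balancer_name and vpc_id are not confirmed by this page), property checks typically follow the same its(...) pattern as the zone_names example:

describe aws_alb('arn:aws:elasticloadbalancing') do
  # Property names below are assumed for illustration
  its('load_balancer_name') { should eq 'my-alb' }
  its('vpc_id')             { should_not be_empty }
end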
https://docs.chef.io/inspec/resources/aws_alb/
2021-09-17T01:15:41
CC-MAIN-2021-39
1631780053918.46
[]
docs.chef.io
Create a New Tax Office To create a new tax office 1. On the Codejig ERP Main menu, click the VAT module, and then select Tax office. A listing page of the Tax office directory opens. 2. On the listing page, click + Add new. You are taken to a form page of the generic Company directory we use to define our business partners, such as customers, vendors, banks, etc. Here you enter information about the tax office. Since it is the tax office you are adding and not any other type of business partners, the page is adjusted specifically for entering information about tax authorities and certain fields are completed by default. For example, company type is identified as Institution (not as Customer or Vendor) and among Institution company types Tax agency is selected. 3. Provide general information about the tax office: name and a registration number. 4. Specify VAT accounts that accumulate difference between the output and input VAT to be paid to the tax office. 5. Indicate contact and payment details of the tax office. 6. Click Save when finished. The generic Company directory adjusted for defining tax offices comprises 7 sections: - General area - Accounting tab - Contacts tab - Contact persons tab - Bank accounts tab - Settings tab - List of transactions tab To be able to save the tax office, you have to fill in the required fields in all sections of the document which are marked with an asterisk (*). More information Tax Office: Details Section Create a New Company Update Companies Delete Companies
https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427395992
2021-09-17T00:18:37
CC-MAIN-2021-39
1631780053918.46
[]
docs.codejig.com
A region is a physical location in the world where DigitalOcean has a datacenter that can run your App Platform app. You can always see the available regions to choose from when creating a new app, or by keeping an eye on the “Regional Availability” section of the App Platform documentation’s home page.
https://docs.digitalocean.com/products/app-platform/concepts/region/
2021-09-17T00:36:32
CC-MAIN-2021-39
1631780053918.46
[]
docs.digitalocean.com
The DigitalOcean API lets you manage DigitalOcean resources programmatically using conventional HTTP requests. All the functionality available in the DigitalOcean Control Panel is also available through the API. You can use the API to deploy, delete, and manage apps on App Platform. doctl is a command-line interface for the DigitalOcean API and supports many of the same actions. doctl supports managing apps from the command line. See the doctl documentation or use doctl apps --help for more information.
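For a quick sense of what app management with doctl looks like, a few representative commands are sketched below (the spec filename and app ID are placeholders; check doctl apps --help for the authoritative list of subcommands and flags):

# List all App Platform apps on the account
doctl apps list

# Create an app from a local app spec file (filename is a placeholder)
doctl apps create --spec app.yaml

# Show details for a single app by its ID (placeholder value)
doctl apps get <app-id>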
https://docs.digitalocean.com/products/app-platform/references/
2021-09-17T00:59:04
CC-MAIN-2021-39
1631780053918.46
[]
docs.digitalocean.com
Date: Wed, 10 Jan 2018 15:29:02 +0000 From: Katie Sadowske <[email protected]> To: "[email protected]" <[email protected]> Subject: Microsoft Azure Users List Message-ID: <MA1PR0101MB181501222AE24A8800AEED6985110@MA1PR0101MB1815.INDPRD01.PROD.OUTLOOK.COM> Next in thread | Raw E-Mail | Index | Archive | Help Hello Good Day, I would like to know if you are interested in acquiring Microsoft Azure Use= rs List. Information fields: Names, Title, Email, Phone, Company Name, Company URL, = Company physical address, SIC Code, Industry, Company Size (Revenue and Emp= loyee). Let me know if you are interested and I will get back to you with the count= s and price list Regards, Katie Sadowske:) Data Consultant If you don't wish to receive emails from us reply back stati= ng "REMOVE" Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=405303+0+archive/2018/freebsd-questions/20180114.freebsd-questions
2021-09-17T00:42:29
CC-MAIN-2021-39
1631780053918.46
[]
docs.freebsd.org
Download the LayoutHub Kit Extension Click on the Extensions tab on the left sidebar Click on the three dots on the top right bar Select Install from VSIX. Select the file you downloaded at step 1 To use the extension you must have a LayoutHub teamwork account. To get your account, please contact your leader. Command + SHIFT + P for macOS or Ctrl + SHIFT + P for Windows or Linux Select Startup LayoutHub Kit Select Login Enter your teamwork username (email) and press Enter Enter your Teamwork password and press Enter. If successful, it will show the selection like the screenshot below Command + SHIFT + P for macOS or Ctrl + SHIFT + P for Windows or Linux Select Startup LayoutHub Kit Select Download pages Tick the pages you want to download and press Enter Command + SHIFT + P for macOS or Ctrl + SHIFT + P for Windows or Linux Select Startup LayoutHub Kit Select Watch your changes and start coding. All your changes will show on the Output tab. LayoutHub Kit on Visual Studio Code v1.0.0 If you are facing any problems when using the extension, please contact us via email: [email protected]
https://docs.layouthub.com/development/tools/layouthub-kit-vscode
2021-09-17T00:17:40
CC-MAIN-2021-39
1631780053918.46
[]
docs.layouthub.com
Working with Particle Sets¶ There are Sets, Subsets, Particles, Children and Parents…¶ The fundamental data structures used in AMUSE are particle sets. Based on attributes of the elements in the sets (particles), selections can be made using the selection method which return subsets. These subsets are views or scopes on the set and do not hold values of their own. It is also possible to add structure to the set by defining parent-child relationships between particles in a set. These structures exist only in the set and are a property of it and have no meaning with respect to the particles outside the set. Selecting stars in a plummer model¶ In this tutorial we generate a set of stars, a plummer model, and two subsets of stars. The first subset contains the stars within a certain radius from the center of mass of the plummer system and the second subset contains all the other stars. The plummer module generates a particle set of the form datamodel.Stars(N). Imports¶ We start with some AMUSE imports in a python shell: The core module contains the set objects, and the units and nbody_system are needed for the correct assignment of properties to the particles, which need to be quantities. >>> from amuse import datamodel >>> from amuse.units import nbody_system >>> from amuse.units import units >>> from amuse.ic.plummer import new_plummer_sphere >>> import numpy Model¶ Note A quick way to look at the sets we are going to make is by using gnuplot. If you have gnuplot, you can install the gnuplot-py package to control gnuplot directly from your script. To install gnuplot-py, open a shell and do: easy_install gnuplot-py Let’s generate a plummer model consisting of 1000 stars, >>> convert_nbody = nbody_system.nbody_to_si(100.0 | units.MSun, 1 | units.parsec) >>> plummer = new_plummer_sphere(1000, convert_nbody) We can work on the new plummer particle set, but we want to keep this set unchanged for now. So, we copy all data to a working set: >>> stars = plummer.copy() Note To look at the stars in gnuplot do: >>> plotter = Gnuplot.Gnuplot() >>> plotter.splot(plummer.position.value_in(units.parsec)) Selection¶ At this stage we select the subsets based on the distance of the individual stars with respect to the center of mass, being 1 parsec in this example. 
Selection

At this stage we select the subsets based on the distance of the individual stars from the center of mass, using 1 parsec as the threshold in this example. We need the center of mass, of course:

>>> center_of_mass = stars.center_of_mass()

and the selection of the sets:

>>> innersphere = stars.select(lambda r: (center_of_mass-r).length()<1.0 | units.parsec,["position"])
>>> outersphere = stars.select(lambda r: (center_of_mass-r).length()>=1.0 | units.parsec,["position"])

Note: To look at the stars in gnuplot, do:

>>> plotter = Gnuplot.Gnuplot()
>>> plotter.splot(outersphere.position.value_in(units.parsec), innersphere.position.value_in(units.parsec),)

We can achieve the same result in another way, using the fact that outersphere is the difference of the stars set and the innersphere set:

>>> outersphere_alt = stars.difference(innersphere)

or using the particle subtraction '-' operator:

>>> outersphere_alt2 = stars - innersphere

The selections are all subsets, as we can verify:

>>> outersphere
<amuse.datamodel.particles.ParticlesSubset object at ...>

Set operations

The results should be the same, but let's check:

>>> hopefully_empty = outersphere.difference(outersphere_alt)
>>> hopefully_empty.is_empty()
True
>>> len(outersphere - outersphere_alt2)==0
True

From our selection criteria we would expect to have selected all stars; to check this we can do something like:

>>> len(innersphere)+len(outersphere) == 1000
True
>>> len(innersphere)+len(outersphere_alt) == 1000
True
>>> len(innersphere)+len(outersphere_alt2) == 1000
True

The union of the innersphere and outersphere sets should give the stars set; we can check:

>>> like_stars = innersphere.union(outersphere)
>>> stars.difference(like_stars).is_empty()
True
>>> (innersphere + outersphere_alt2 - stars).is_empty()
True

Iteration

We can iterate over sets, and we put that to use here to check whether we really selected the right stars for the outersphere:

>>> should_not_be_there_stars = 0
>>> for star in outersphere:
...     if (center_of_mass-star.position).length()<1.0|units.parsec:
...         should_not_be_there_stars += 1
>>> should_not_be_there_stars
0
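Looping over a large set in pure Python is convenient but comparatively slow. As a sketch, not part of the original doctest session, the same consistency check can be phrased as a selection on the subset itself, reusing only the select() call introduced above; this assumes subsets accept select() just like full sets:

# stars of the outersphere that would also satisfy the innersphere criterion;
# if the two selections are consistent, this subset is empty
misplaced = outersphere.select(
    lambda r: (center_of_mass - r).length() < 1.0 | units.parsec,
    ["position"])
print(len(misplaced))  # expected: 0, matching the loop above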
Using codes

Channels

Imagine we evolve the stars in some MPI-bound legacy code and want to know what happened to the stars in our subsets. Each query for attributes on the code's particle set invokes an MPI call, which is inefficient if we have many queries. Copying the entire set to the model, however, also costs only one MPI call, after which we can query the model set at will without MPI overhead. Copying the data from one existing set to another set or subset can be done via channels.

First we define a simple dummy legacy-code object with some typical gravitational dynamics (gd) interface methods:

>>> class DummyLegacyCode(object):
...     def __init__(self):
...         self.particles = datamodel.Particles()
...     def add_particles(self, new_particles):
...         self.particles.add_particles(new_particles)
...     def update_particles(self, particles):
...         self.particles.copy_values_of_attributes_to(['x', 'y', 'z', 'mass'], particles)
...     def evolve(self):
...         self.particles.position *= 1.1

We instantiate the code and use it to evolve (expand) our Plummer model, and we add a channel to our innersphere subset to track the changes:

>>> code = DummyLegacyCode()
>>> channel_to_innersphere = code.particles.new_channel_to(innersphere)
>>> code.add_particles(stars)
>>> code.evolve()
>>> r_inner_init = innersphere.position
>>> r_outer_init = outersphere.position
>>> channel_to_innersphere.copy()
>>> r_inner_fini = innersphere.position
>>> r_outer_fini = outersphere.position

Checking the changes by looking at the positions (all positions in the code changed after the evolve), we see that only the innersphere particles have been updated in our model:

>>> numpy.all(r_inner_init[0]==r_inner_fini[0])
False
>>> numpy.all(r_outer_init[0]==r_outer_fini[0])
True

If we want to update all particles in our model, we can use:

>>> code.update_particles(stars)

and check:

>>> r_outer_fini = outersphere.position
>>> numpy.all(r_outer_init[0] == r_outer_fini[0])
False

Particle Hierarchies

Let us suppose that the zeroth star, stars[0], has a child and a grandchild star that do not belong to the Plummer model:

>>> child_star = datamodel.Particle()
>>> grandchild_star = datamodel.Particle()
>>> child_star.mass = 0.001|units.MSun
>>> child_star.position = [0,0,0]|units.AU
>>> child_star.velocity = [0,0,0]|units.AUd
>>> grandchild_star.mass = 0.0001|units.MSun
>>> grandchild_star.position = [0,0.1,0]|units.AU
>>> grandchild_star.velocity = [0,0,0]|units.AUd

We can add them as child and grandchild to the set of Plummer stars, but first we have to add them to the set as regular stars:

>>> child_star_in_set = stars.add_particle(child_star)
>>> grandchild_star_in_set = stars.add_particle(grandchild_star)

Now we can define the hierarchy:

>>> stars[0].add_child(child_star_in_set)
>>> child_star_in_set.add_child(grandchild_star_in_set)

The descendants of star 0 form a subset (note that the API spells the method descendents):

>>> stars[0].descendents()
<amuse.datamodel.ParticlesSubset object at ...>
>>> stars[0].children().mass.value_in(units.MSun)
array([ 0.001])
>>> stars[0].descendents().mass
quantity<[1.98892e+27, 1.98892e+26] kg>
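As a small aside, not part of the original doctest session, the hierarchy subsets combine with ordinary attribute arithmetic, so the mass tied up in star 0 and its descendants can be totalled directly; summing the mass attribute of a subset is assumed to behave like any other quantity reduction:

# mass of star 0 plus the mass of its child and grandchild (0.0011 MSun extra)
family_mass = stars[0].mass + stars[0].descendents().mass.sum()
print(family_mass.value_in(units.MSun))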
Methods to retrieve physical properties of the particles set

Particle sets have a number of functions for calculating physical properties that apply to the set as a whole. Although some of these might also be implemented in the legacy codes, using them via particle sets guarantees that they apply to all particles when multiple legacy codes are used. Furthermore, the particle set functions provide a uniform way of doing the calculations. Speed might be the downside.

amuse.datamodel.particle_attributes.center_of_mass(particles)

Returns the center of mass of the particles set. The center of mass is defined as the average of the positions of the particles, weighted by their masses.

>>> from amuse.datamodel import Particles
>>> particles = Particles(2)
>>> particles.x = [-1.0, 1.0] | units.m
>>> particles.y = [0.0, 0.0] | units.m
>>> particles.z = [0.0, 0.0] | units.m
>>> particles.mass = [1.0, 1.0] | units.kg
>>> particles.center_of_mass()
quantity<[0.0, 0.0, 0.0] m>

amuse.datamodel.particle_attributes.center_of_mass_velocity(particles)

Returns the center of mass velocity of the particles set. The center of mass velocity is defined as the average of the velocities of the particles, weighted by their masses.

>>> from amuse.datamodel import Particles
>>> particles = Particles(2)
>>> particles.vx = [-1.0, 1.0] | units.ms
>>> particles.vy = [0.0, 0.0] | units.ms
>>> particles.vz = [0.0, 0.0] | units.ms
>>> particles.mass = [1.0, 1.0] | units.kg
>>> particles.center_of_mass_velocity()
quantity<[0.0, 0.0, 0.0] m * s**-1>

amuse.datamodel.particle_attributes.kinetic_energy(particles)

Returns the total kinetic energy of the particles in the particles set.

>>> from amuse.datamodel import Particles
>>> particles = Particles(2)
>>> particles.vx = [-1.0, 1.0] | units.ms
>>> particles.vy = [0.0, 0.0] | units.ms
>>> particles.vz = [0.0, 0.0] | units.ms
>>> particles.mass = [1.0, 1.0] | units.kg
>>> particles.kinetic_energy()
quantity<1.0 m**2 * kg * s**-2>

amuse.datamodel.particle_attributes.potential_energy(particles, smoothing_length_squared=quantity<zero>, G=quantity<6.67428e-11 m**3 * kg**-1 * s**-2>)

Returns the total potential energy of the particles in the particles set.

>>> from amuse.datamodel import Particles
>>> particles = Particles(2)
>>> particles.x = [0.0, 1.0] | units.m
>>> particles.y = [0.0, 0.0] | units.m
>>> particles.z = [0.0, 0.0] | units.m
>>> particles.mass = [1.0, 1.0] | units.kg
>>> particles.potential_energy()
quantity<-6.67428e-11 m**2 * kg * s**-2>

amuse.datamodel.particle_attributes.particle_specific_kinetic_energy(set, particle)

Returns the specific kinetic energy of the particle.

>>> from amuse.datamodel import Particles
>>> particles = Particles(2)
>>> particles.vx = [0.0, 1.0] | units.ms
>>> particles.vy = [0.0, 0.0] | units.ms
>>> particles.vz = [0.0, 0.0] | units.ms
>>> particles.mass = [1.0, 1.0] | units.kg
>>> particles[1].specific_kinetic_energy()
quantity<0.5 m**2 * s**-2>

amuse.datamodel.particle_attributes.particle_potential(set, particle, smoothing_length_squared=quantity<zero>, G=quantity<6.67428e-11 m**3 * kg**-1 * s**-2>)

Returns the potential at the position of the particle.

>>> from amuse.datamodel import Particles
>>> particles = Particles(2)
>>> particles.x = [0.0, 1.0] | units.m
>>> particles.y = [0.0, 0.0] | units.m
>>> particles.z = [0.0, 0.0] | units.m
>>> particles.mass = [1.0, 1.0] | units.kg
>>> particles[1].potential()
quantity<-6.67428e-11 m**2 * s**-2>
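The energy diagnostics above combine naturally. As a final sketch, not part of the original documentation, the virial ratio of the untouched plummer set generated at the start of this tutorial can be checked with the two functions documented above (the stars working set is less suitable here, since the dummy code expanded its positions and two extra particles were added); for a virialized Plummer realization the ratio should come out close to 0.5:

# virial ratio -T/U; approximately 0.5 for a model in virial equilibrium
T = plummer.kinetic_energy()
U = plummer.potential_energy()
print(-T / U)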
https://amuse.readthedocs.io/en/latest/tutorial/particle_sets.html