Dataset columns:
content: string (lengths 0 to 557k)
url: string (lengths 16 to 1.78k)
timestamp: timestamp[ms]
dump: string (lengths 9 to 15)
segment: string (lengths 13 to 17)
image_urls: string (lengths 2 to 55.5k)
netloc: string (lengths 7 to 77)
UDP Trigger The UDP Trigger will trigger a workflow whenever the Edge Compute device receives a UDP datagram on the configured port. Configuration The UDP Trigger has one piece of configuration: the port to listen on. When a workflow with this trigger is deployed to an Edge Compute device, the Edge Agent will open a UDP server on this port. When that UDP server receives a datagram, the workflow will trigger. Payload The payload will include the triggering datagram in the data field. In the general case, a UDP workflow payload will look like the following:
{
  "applicationId": <id of the current application>,
  "applicationName": <name of the current application>,
  "data": {
    "sourcePort": <source port of incoming datagram>,
    "sourceAddress": <source address of incoming datagram>,
    "message": <contents of the incoming datagram>
  },
  "time": <time the command arrived>,
  "triggerId": <udp port>,
  "triggerType": "udp"
}
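For a quick test of a deployed workflow, you can fire the trigger by sending a datagram to the device from another machine. A minimal sketch in Python; the device address and port shown here are placeholders for your own values:

```python
import socket

EDGE_DEVICE_ADDRESS = "192.168.1.50"  # placeholder: address of the Edge Compute device
UDP_PORT = 8888                       # placeholder: the port configured on the UDP Trigger

# Send a single datagram; its bytes will arrive in the payload's data.message field.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    sock.sendto(b"hello from a test client", (EDGE_DEVICE_ADDRESS, UDP_PORT))
finally:
    sock.close()
```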
http://docs.prerelease.losant.com/workflows/triggers/udp/
2019-06-15T23:51:05
CC-MAIN-2019-26
1560627997501.61
[array(['/images/workflows/triggers/udp-trigger.png', 'UDP Trigger UDP Trigger'], dtype=object) ]
docs.prerelease.losant.com
A number of U-Boot features need to be enabled for Mender to work correctly. For example, the bootcmd variable should be changed to: bootcmd=run mender_setup; run mmcboot

mender_uboot_root: This is an environment variable that contains the description of the device currently set to boot. Whenever a U-Boot command is issued that needs to access the current boot partition, this variable should be referenced. For example, if you have a script, loadimage, that loads the kernel from the file system, using mmc as the device and a ${bootpart} variable reference as the partition to load from: loadimage=load mmc ${bootpart} ${loadaddr} ${bootdir}/${bootfile} it should be changed into: loadimage=load ${mender_uboot_root} ${loadaddr} ${bootdir}/${bootfile} Note that mmc is included in the ${mender_uboot_root} string. Similarly, a boot argument line such as: bootargs=console=${console},${baudrate} root=${mmcroot} should be changed to reference ${mender_kernel_root} instead of ${mmcroot}.

altbootcmd: If not currently in use, this step can be skipped. If altbootcmd is being used, one first needs to disable Mender's built-in altbootcmd. To do this, the MENDER_NO_DEFAULT_ALTBOOTCMD define should be added to the board configuration header in U-Boot (inside include/configs). Then, at the beginning of altbootcmd, the call run mender_altbootcmd should be added. Like mender_setup, this will not perform any boot steps, but it may modify and potentially save the environment. Afterwards the mender_uboot_root and mender_kernel_root variables will refer to the correct partitions, taking into account the potential rollback that may happen because altbootcmd was called. After the desired alternate boot steps have been performed, one can either call bootcmd to perform a normal boot using the new partitions, or one can perform a different type of boot sequence and refer to the Mender variables directly.

mender_try_to_recover: It is recommended to add a call to this boot script right after the normal, disk based boot command for the board. Note that it should be added in case a network boot has been attempted or the device has been rebooted through other means.

mender_uboot_boot, mender_uboot_if, mender_uboot_dev: These variables are not required by Mender, but can be used in the board boot code if needed. With Mender, the kernel is loaded from the rootfs partition, not from the boot partition. This is in order to make a complete upgrade possible, including the kernel. Kernel and device tree locations should therefore be changed to: uimage=boot/uImage fdt_file=boot/uImage.dtb Because the kernel and associated files are loaded from a rootfs partition, in the majority of cases it will be an ext4 or ext3 partition. If the existing boot code for the board uses the fatload command to load the kernel and/or any associated files, it will need to be changed, since the rootfs is usually not a FAT partition. We recommend that it is replaced simply with load, since it will work in both cases, but it can also be replaced with either ext2load or ext4load if desired.

In the bitbake recipe for u-boot, BOOTENV_SIZE should be set to the same value that CONFIG_ENV_SIZE is set to in the board specific C header for U-Boot (inside u-boot/include/configs). Which value exactly is board specific; the important thing is that they are the same. The same value should be provided for both u-boot and u-boot-fw-utils, and the easiest way to achieve this is by putting them in a common file and then referring to it.
For example, in u-boot/include/configs/myboard.h: #define CONFIG_ENV_SIZE 0x20000 and in recipes-bsp/u-boot/u-boot-common-my-board.inc: BOOTENV_SIZE = "0x20000" and in both recipes-bsp/u-boot/u-boot_%.bbappend and recipes-bsp/u-boot/u-boot-fw-utils_%.bbappend: include u-boot-common-my-board.inc. If a u-boot-fw-utils recipe doesn't already exist, the most straightforward approach is to start with the meta/recipes-bsp/u-boot/u-boot-fw-utils_*.bb recipe found in Yocto, and then make the necessary changes in the same fashion as done for the main u-boot recipe. An alternative approach is to port the forked U-Boot recipe, u-boot-my-fork_*.bb, to a u-boot-fw-utils-my-fork_*.bb recipe. We have provided a practical example for this. The two recipes should use identical source code (e.g. the same patches applied). Like with u-boot, u-boot-fw-utils should contain sections to define what it provides, and that it is the preferred provider. In the recipe, there should be: PROVIDES += "u-boot-fw-utils" RPROVIDES_${PN} += "u-boot-fw-utils" And in the machine section of the board: PREFERRED_PROVIDER_u-boot-fw-utils = "u-boot-fw-utils-my-fork" PREFERRED_RPROVIDER_u-boot-fw-utils = "u-boot-fw-utils-my-fork" u-boot-fw-utils-mender.inc needs to be included using require, in the same fashion as u-boot-mender.inc for u-boot.
https://docs.mender.io/2.0/devices/yocto-project/bootloader-support/u-boot/manual-u-boot-integration
2019-06-15T22:40:07
CC-MAIN-2019-26
1560627997501.61
[]
docs.mender.io
tns error-reporting <Command> Description: Configures anonymous error reporting for the NativeScript CLI. All data gathered is used strictly for improving the product and will never be used to identify or contact you. Arguments: <Command> extends the error-reporting command. You can set the following values for this attribute: status - Shows the current configuration for anonymous error reporting for the NativeScript CLI. enable - Enables anonymous error reporting. disable - Disables anonymous error reporting.
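If you want to check or flip this setting from a build script rather than by hand, a small sketch could simply shell out to the same commands (this assumes the tns executable is on your PATH):

```python
import subprocess

def error_reporting(action: str = "status") -> str:
    """Run `tns error-reporting <action>` and return the CLI output.

    action is one of "status", "enable", or "disable".
    """
    result = subprocess.run(
        ["tns", "error-reporting", action],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(error_reporting("status"))
```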
https://docs.nativescript.org/tooling/docs-cli/general/error-reporting
2019-06-15T23:24:00
CC-MAIN-2019-26
1560627997501.61
[]
docs.nativescript.org
Add an application menu to a category. There are two methods for adding an application menu to a category. Before you begin: Role required: admin. About this task Note: Menu categories are deprecated in UI16. The following procedure has no effect in UI16. Procedure: Use one of the following methods to add an application menu to a menu category.
Option 1 - In the menu category record, add the application menu to the Application Menus related list: Navigate to System Definition > Menu Categories. Open a menu category. In the Application Menus related list, click Edit. Use the slushbucket to add the application menu to the category. Click Save.
Option 2 - In the application menu record, enter the menu category in the Category field: Navigate to System Definition > Application Menus. Open an application menu. In the Category field, select the menu category. Click Submit.
Related Tasks: Navigate directly to a table; Create or modify a menu list; Enable or disable an application menu or module; Create an application menu; Create a module.
https://docs.servicenow.com/bundle/london-platform-user-interface/page/administer/navigation-and-ui/task/t_AddAnApplicationMenuToACategory.html
2019-06-15T23:11:20
CC-MAIN-2019-26
1560627997501.61
[]
docs.servicenow.com
🔗How CDAP Pipelines Work A "behind-the-scenes" look at CDAP Pipelines CDAP Pipelines is a capability of CDAP and combines a user interface with back-end services to enable the building, deploying, and managing of data pipelines. It has no dependencies outside of CDAP, and all pipelines run within a Hadoop cluster. 🔗Architecture CDAP pipelines allow users to build complex data pipelines, either simple ETL (extract-transform-load) or more complicated Data Pipelines on Hadoop. Data pipelines—unlike the linear ETL pipelines—are often not linear in nature and require more complex transformations, including forks and joins at the record and feed level. They can be configured to perform various functions at different times, including machine-learning algorithms and custom processing. Pipelines need to support the creation of complex processing workloads that are repeatable, highly available, and easily maintainable. 🔗Logical versus Physical Pipelines Within CDAP, there is the concept of logical and physical pipelines, converted by a planner, and then run in an execution environment. A logical pipeline is the view of the pipeline as seen in the CDAP Studio and the CDAP UI. It is the view composed of sources, sinks, and other plugins, and does not show the underlying technology used to actually manifest and run the pipeline. This view of a pipeline focuses on the functional requirements of the pipeline, rather than the physical runtime. It’s closer to the inherent nature of processing as viewed by a user. This view isolates it from the volatile physical pipeline, which can be operated in different runtime environments. A physical pipeline is the manifestation of a logical pipeline as a CDAP application, which is a collection of programs and services that read and write through the data abstraction layer in CDAP. Physical view elements are those elements that actually run during the execution of a data pipeline on a Hadoop cluster. They execute the MapReduce Programs, Spark, Spark Streaming, Tigon, Workflows, and so on. The physical pipeline view is based on the particular underlying technologies used and, as such, can be changed dynamically. A planner is responsible for converting the logical pipeline to a physical pipeline. The planner analyzes the logical view of the pipeline and converts it to a physical execution plan, performing optimizations, and bundling functions into one or more jobs. 🔗Execution Environment The execution environment is the actual runtime environment where all the components of the data pipeline are executed on the Hadoop cluster by CDAP. MapReduce, Spark, Spark Streaming, and Tigon are part of this environment, which allows the execution of the data pipeline. The planner maps the logical pipeline to a physical pipeline using the environment runtimes available. 🔗Functional Components These are the different functional components that are utilized within CDAP pipelines: 🔗Application An application is a standardized container framework for defining all services. It is responsible for managing the lifecycle of programs and datasets within an application. Each CDAP pipeline is converted into a CDAP application, and deployed and managed independently. 🔗Application Template An application template is a user-defined, reusable, reconfigurable pattern of an application. It is parameterized by a configuration that can be reconfigured upon deployment. 
It provides a generic version of an application which can be repurposed, instead of requiring the ongoing creation of specialized applications. The re-configurability and modularization of the application is exposed through plugins. CDAP provides its own, system-defined application templates, though new user-defined ones can be added that can use the DAG interface of the CDAP Studio. The application templates are configured using the CDAP Studio and deployed as applications into a Hadoop cluster. Application templates consist of a definition of its different components—processing, workflow, and dataset—in the form of a configuration. Once a configuration is passed to the template, a CDAP application is constructed by combining the necessary pieces to form an executable pipeline. An application template consists of: - A definition of the different processing supported by the template. These can include MapReduce, Service, Spark, Spark Streaming, Tigon, Worker, and Workflow. In the case of a CDAP Pipeline, it (currently) can include MapReduce, Spark, Tigon, Worker, and Workflow. - A planner is optional; however, CDAP includes a planner that translates a logical pipeline into a physical pipeline and pieces together all of the processing components supported by the template. 🔗Plugin A plugin is a customizable module, exposed and used by an application template. It simplifies adding new features or extending the capability of an application. Plugin implementations are based on interfaces exposed by the application templates. Currently, CDAP pipeline application templates expose Source, Transform, and Sink interfaces, which have multiple implementations. Future Application Templates will expose more plugins such as Compute, Arbitrary MR, and Spark in addition to those mentioned above. 🔗Artifact An artifact is a versioned packaging format used to aggregate applications, datasets, and plugins along with associated metadata. It is a JAR (Java Archive) containing Java classes and resources. 🔗CDAP Studio CDAP Studio is a visual development environment for building data pipelines on Hadoop. It has a click-and-drag interface for building and configuring data pipelines. It also supports the ability to develop, run, automate, and operate pipelines from within the CDAP UI. The pipeline interface integrates with the CDAP interface, allowing drill-down debugging of pipelines and can build metrics dashboards to closely monitor pipelines through CDAP. The CDAP Studio integrates with other capabilities such as Cask Tracker. 🔗Testing and Automation Framework An end-to-end JUnit framework (written in Java) is available in CDAP that allows developers to test their application templates and plugins during development. It is built as a modular framework that allows for the testing of individual components. It runs in-memory in CDAP, as the abstracting to in-memory structures makes for easier debugging (shorter stack traces). The tests can be integrated with continuous integration (CI) tools such as Bamboo, Jenkins, and TeamCity. 🔗Implementation of CDAP Pipelines CDAP pipelines are built as a CDAP capability, with three major components: - CDAP Studio, the visual editor, running in a browser - Application Templates, packaged as artifacts, either system- or user-defined - Plugins, extensions to the application templates, in a variety of different types and implementations The CDAP Studio interfaces with CDAP using RESTful APIs. 
The application templates—ETL Batch, Data Pipeline Batch, and ETL Real-time—are available by default from within the CDAP Studio. Additional application templates, such as Data Pipeline Real-time and Spark Streaming, are being added in upcoming releases. The ETL Batch and ETL Real-time application templates expose three plugin types: source, transform, and sink. The Data Pipeline Batch application template exposes three additional plugin types: aggregate, compute, and model. Additional plugin types can be created and will be added in upcoming releases. There are many different plugins that implement each of these types available "out-of-the-box" in CDAP. New plugins can be implemented using the public APIs exposed by the application templates. When an application template or a plugin is deployed within CDAP, it is referred to as an artifact. CDAP provides capabilities to manage the different versions of both the application templates and the plugins. 🔗Building of a Pipeline Here is how the CDAP Studio works with CDAP to build a pipeline, beginning with a user creating a new pipeline in the CDAP Studio. First, the components of the CDAP Studio: User Selects an Application Template A user building a pipeline within the CDAP Studio will select a pipeline type, which is essentially picking an application template. They will pick one of ETL Batch, ETL Real-time, or Data Pipeline. Other application templates such as Spark Streaming will be available in the future. Retrieve the Plugins types supported by the selected Application Template Once a user has selected an application template, the Studio makes a request to CDAP for the different plugin types supported by the application template. In the case of the ETL Batch pipeline, CDAP will return Source, Transform, and Sink as plugin types. This allows the Studio to construct the selection drawer in the left sidebar of the UI. Retrieve the Plugin definitions for each Plugin type CDAP Studio then makes a request to CDAP for each plugin type, requesting all plugin implementations available for each plugin type. User Builds the CDAP Pipeline The user then uses the Studio's canvas to create a pipeline with the available plugins. Validation of the CDAP Pipeline The user can request at any point that the pipeline be validated. This request is translated into a RESTful API call to CDAP, which is then passed to the application template, which validates whether the pipeline is valid. Application Template Configuration Generation As the user is building a pipeline, the Studio is building a JSON configuration that, when completed, will be passed to the application template to configure and create an application that is deployed to the cluster. Converting a logical into a physical Pipeline and registering the Application When the user publishes the pipeline, the configuration generated by the Studio is passed to the application template as part of the creation of the Application. The application template takes the configuration, passes it through a planner to create a physical layout, appropriately generates an application specification and registers the specification with CDAP as an application. Managing the physical Pipeline Once the application is registered with CDAP, the pipeline is ready to be started. If it was scheduled, the schedule is ready to be enabled. The CDAP UI then uses the CDAP RESTful APIs to manage the pipeline's lifecycle. The pipeline can be managed from CDAP through the CDAP UI, by using the CDAP CLI, or by using the RESTful APIs. 
Monitoring the physical Pipeline As CDAP pipelines are run as CDAP applications, their logs and metrics are aggregated by the CDAP system and available using RESTful APIs.
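Since the CDAP UI, CLI, and other clients all go through the same RESTful APIs, a deployed pipeline application can also be listed programmatically. A minimal sketch, assuming a CDAP instance reachable at the host and port below (both placeholders) and the default namespace; the endpoint path follows the CDAP v3 lifecycle API:

```python
import requests

# Placeholders: adjust host, port, and namespace for your CDAP installation.
CDAP_ROUTER = "http://cdap-host.example.com:11015"
NAMESPACE = "default"

# List applications deployed in the namespace; published pipelines show up here.
resp = requests.get(f"{CDAP_ROUTER}/v3/namespaces/{NAMESPACE}/apps")
resp.raise_for_status()

for app in resp.json():
    print(app.get("name"), "-", app.get("artifact", {}).get("name"))
```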
https://docs.cask.co/cdap/4.1.1/en/developers-manual/pipelines/how-cdap-pipelines-work.html
2018-01-16T13:12:13
CC-MAIN-2018-05
1516084886436.25
[]
docs.cask.co
ApSIC Xbench 3.0 is available in 32-bit and 64-bit editions. The 32-bit edition can be installed on 32-bit and 64-bit Windows machines. The 64-bit edition can only be installed on 64-bit machines. ApSIC Xbench is supported on Windows XP, Windows Vista, Windows 7, Windows 8, and Windows 10. It is also supported on Windows 2003, Windows 2008, Windows 2012 and Windows 2016. To install ApSIC Xbench, please do the following: Run the installation executable (for example, Setup.Xbench.x64.3.0.1080.exe). The installer welcome screen will appear. Click Next. The license window appears. Please read the license information carefully to ensure you accept its terms. If the terms of the license are acceptable to you, please click I Agree. If they are not acceptable, please click Cancel. Change the destination folder if necessary and click Install to continue. Files are copied to the selected destination and the following window appears. Click Finish to close the window and start using ApSIC Xbench. A link to the ApSIC Xbench executable is installed on the Start->Programs->ApSIC Tools->Xbench path, together with the documentation. To uninstall ApSIC Xbench, please do the following: You can perform an unattended installation of ApSIC Xbench using the /S switch from the command line. c:\>Setup.Xbench.x64.3.0.exe /S If you wish to specify a different installation directory, you can do so using the /D switch. c:\>Setup.Xbench.x64.3.0.exe /S /D=[install_directory_path] Please note that the /S and /D switches are case-sensitive!
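If you are rolling the installer out across many machines, the unattended switches above can be driven from a provisioning script. A minimal sketch; the installer path and target directory are placeholders:

```python
import subprocess

INSTALLER = r"C:\Downloads\Setup.Xbench.x64.3.0.exe"  # placeholder path to the downloaded installer
TARGET_DIR = r"C:\Tools\Xbench"                       # placeholder install directory (no quotes around /D)

# /S runs the installer silently; /D sets the destination and must be the last argument.
# Both switches are case-sensitive.
subprocess.run([INSTALLER, "/S", f"/D={TARGET_DIR}"], check=True)
```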
https://docs.xbench.net/user-guide/install-uninstall/
2018-01-16T13:13:31
CC-MAIN-2018-05
1516084886436.25
[]
docs.xbench.net
With custom metrics, you can report metric timeslice data from your application code and see it alongside default metrics and data in New Relic. Create custom metrics to record arbitrary performance data via an API call, such as: - Timing data - Computer resource data - Subscription or purchasing data Then, use the New Relic Insights metric explorer to search your custom metrics and create customized dashboards for them. Name custom metrics Start all custom metric names with Custom/; for example, Custom/MyMetric/My_label. The Custom/ prefix is required for all custom metrics. Any custom metric names that do not start with Custom/ are subject to all other grouping rules. They may not be visible in Insights, or they may not appear as expected in the New Relic UI. A custom metric name consists of the prefix Custom/, the category or class name, and a method or label, each separated with a slash. Implement custom metrics Implementing custom metrics requires API calls. The exact details of the API call vary by agent.
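As the text notes, the exact API call varies by agent. As one concrete illustration, here is a hedged sketch using the Python agent's record_custom_metric call; the metric name, value, and config file path are made-up examples:

```python
import newrelic.agent

# The agent is normally initialized from your newrelic.ini at application startup.
newrelic.agent.initialize("newrelic.ini")

@newrelic.agent.background_task()
def record_checkout_timing(duration_seconds: float) -> None:
    # The name must start with "Custom/": prefix, category, then a label, separated by slashes.
    newrelic.agent.record_custom_metric("Custom/Checkout/Duration", duration_seconds)

record_checkout_timing(1.25)

# Flush pending data before a short-lived script exits.
newrelic.agent.shutdown_agent(timeout=10)
```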
https://docs.newrelic.com/docs/agents/manage-apm-agents/agent-data/collect-custom-metrics
2020-07-02T16:57:44
CC-MAIN-2020-29
1593655879532.0
[array(['http://newrelicdev.prod.acquia-sites.com/sites/default/files/thumbnails/image/custom-metric-syntax.png', 'custom-metric-syntax.png custom-metric-syntax.png'], dtype=object) ]
docs.newrelic.com
TLS troubleshooting Validate your elasticsearch.yml The Elasticsearch configuration is in yaml format, and so is the Search Guard configuration. A quick way of checking the validity of any yml file is to use the Yaml Lint web service: just copy and paste the content of your yaml file there and check for any errors. Note: You can of course use it to also validate any other Search Guard configuration file. Viewing the contents of your Key- and Truststore In order to view information about the certificates stored in your keystore or truststore, use the keytool command like: keytool -list -v -keystore keystore.jks The keytool will prompt for the password of the keystore and list all entries with detailed information. For example you can use this output to check for the correctness of the SAN and EKU settings. If you would rather work with a GUI, we recommend KeyStore Explorer: KeyStore Explorer is an open source GUI replacement for the Java command-line utilities keytool and jarsigner. KeyStore Explorer presents their functionality, and more, via an intuitive graphical user interface. You can use it to examine the contents of locally stored files, but you can also retrieve and inspect certificates from a server (or Elasticsearch cluster) directly. Viewing the contents of PEM certificates The content of PEM certificates can either be displayed by using OpenSSL or by the diagnose function of the Search Guard TLS tool. OpenSSL: openssl x509 -in node1.pem -text -noout TLS diagnose tool: ./sgtlsdiag.sh -ca root-ca.pem -crt node1.pem The TLS diagnose tool will also check the validity of the certificate chain. Checking the main attributes of a certificate The main attributes of an entry in the keystore look like: Alias name: node-0 Entry type: PrivateKeyEntry Certificate chain length: 3 Certificate[1]: Owner: CN=node-0.example.com, OU=SSL, O=Test, L=Test, C=DE Issuer: CN=Example Com Inc. Signing CA, OU=Example Com Inc. Signing CA, O=Example Com Inc., DC=example, DC=com Checking the configured alias If you have multiple entries in the keystore and you are using aliases to refer to them, make sure that the configured alias in elasticsearch.yml matches the alias name in the keystore. In the example above, you’d need to set: searchguard.ssl.transport.keystore_alias: node-0 If there is only one entry in the keystore, you do not need to configure an alias. Checking the type of the certificate The relevant certificate types for Search Guard are: - PrivateKeyEntry - This type of entry holds a cryptographic PrivateKey, which is optionally stored in a protected format to prevent unauthorized access. It is also accompanied by a certificate chain for the corresponding public key. - trustedCertEntry - This type of entry contains a single public key Certificate belonging to another party. It is called a trusted certificate because the keystore owner trusts that the public key in the certificate indeed belongs to the identity identified by the subject (owner) of the certificate. For Client-, Admin- and Node certificates the type is PrivateKeyEntry. Root and intermediate certificates have type TrustedCertificateEntry. Checking the DN of the certificate If you are unsure what the DN of your certificate looks like, check the Owner field of keytool output. 
In the example above the DN is: Owner: CN=node-0.example.com, OU=SSL, O=Test, L=Test, C=DE The corresponding configuration in elasticsearch.yml is: searchguard.nodes_dn: - 'CN=node-0.example.com,OU=SSL,O=Test,L=Test,C=DE' Checking for special characters in DNs Search Guard uses the String Representation of Distinguished Names (RFC1779) when validating node certificates. If parts of your DN contain special characters, for example a comma, make sure it is escaped properly in your configuration, for example: searchguard.nodes_dn: - 'CN=node-0.example.com,OU=SSL,O=My\, Test,L=Test,C=DE' Omit whitespaces between the individual parts of the DN. Instead of: searchguard.nodes_dn: - 'CN=node-0.example.com, OU=SSL,O=My\, Test, L=Test, C=DE' use: searchguard.nodes_dn: - 'CN=node-0.example.com,OU=SSL,O=My\, Test,L=Test,C=DE' Checking the IP Addresses of the certificate Sometimes the IP address contained in your certificate is not the one communicating with the cluster. This can happen if your node has multiple interfaces, or is running dual stack (IPv6 + IPv4). When this happens, you would see the following in the node’s elasticsearch log: SSL Problem Received fatal alert: certificate_unknown javax.net.ssl.SSLException: Received fatal alert: certificate_unknown Here the node communicates using the IPv4 address 10.0.0.42, but it’s the IPv6 address 2001:db8:0:1:1.2.3.4 that is contained in the certificate. Validating the certificate chain TLS certificates are organized in a certificate chain. You can check with keytool that the certificate chain is correct by inspecting the owner and the issuer of each certificate. If you used the demo installation script that ships with Search Guard, the chain looks like: Node certificate: Owner: CN=node-0.example.com, OU=SSL, O=Test, L=Test, C=DE Issuer: CN=Example Com Inc. Signing CA, OU=Example Com Inc. Signing CA, O=Example Com Inc., DC=example, DC=com Intermediate / signing certificate: The node certificate was signed by the intermediate certificate. The intermediate certificate was signed by the root certificate. The root certificate was signed by itself, hence the name root or self-signed certificate. If you’re using separate key- and truststore files, your root CA can most likely be found in the truststore. As a rule of thumb: - The keystore contains the client or node certificate with its private key, and all intermediate certificates - The truststore contains the root certificate Checking the OID and EKU fields of node certificates Node certificates need to have both serverAuth and clientAuth set in the extended key usage field: #3: ObjectId: 2.5.29.37 Criticality=false ExtendedKeyUsages [ serverAuth clientAuth ] TLS versions Search Guard disables TLSv1 by default, because it is outdated, insecure and vulnerable. If you need to use TLSv1 and you know what you are doing, you can re-enable it in elasticsearch.yml via the searchguard.ssl.transport.enabled_protocols and searchguard.ssl.http.enabled_protocols settings. On startup, Search Guard will print out the available ciphers for the REST- and transport layer, and you may see an info message like: [INFO ][c.f.s.s.DefaultSearchGuardKeyStore] AES-256 not supported, max key length for AES is 128 bit. That is not an issue, it just limits possible encryption strength. To enable AES 256 install 'Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files' Search Guard will still work and fall back to weaker cipher suites. 
Search Guard will also print out all available cipher suites on startup: [INFO ][c.f.s.s.DefaultSearchGuardKeyStore] sslTransportClientProvider: JDK [TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, ...] Fixing curl curl is not always curl when it comes to SSL/TLS. Sometimes its behaviour depends on which SSL/TLS implementation your curl version was compiled against. There are a few, including OpenSSL, GnuTLS, and NSS. In our experience, NSS and GnuTLS are very often problematic, so we assume (in our docs and scripts) that your curl version is compiled against OpenSSL. You can check your curl version with curl -V This will print out something like: curl 7.54.0 (x86_64-apple-darwin17.0) libcurl/7.54.0 OpenSSL/1.0.2g zlib/1.2.11 nghttp2/1.24.0 To make curl work best with Search Guard we recommend you use a curl binary with - Version 7.50 or later - Compiled against OpenSSL 1.0.2 or later - Protocol needs to include https - Features should include GSS-API, Kerberos, SPNEGO, and NTLM if you want to use Kerberos If curl is compiled against NSS you may need to explicitly use ./ for relative or / for absolute paths, like curl --insecure --cert ./chain.pem --key ./kirk.key.pem "<API Endpoint>" We have already seen a lot of issues; here are the most relevant ones: - Is it possible to configure elasticsearch/search guard to not make curl try to look for NSS certs? - Unable to Use REST API - enabling kerberos Encrypted PKCS#8 key does not work When you experience a File does not contain valid private key or data isn't an object ID exception while using encrypted PKCS#8 keys (pem keys with a password) you might have hit a JDK bug where “SunJCE support of password-based encryption scheme 2 params (PBES2) is not working”. There is also a related issue in the elastic GitHub repo. The solution is to use the v1 encryption scheme. With openssl use the -v1 flag in your command like openssl pkcs8 -in key.pem -topk8 -out enckey.pem -v1 PBE-SHA1-3DES. Please also refer to Search Guard issue #524 and OpenSSL pkcs8 documentation for more details.
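Alongside keytool and openssl, you can also pull the certificate a node actually presents on its TLS port and inspect it offline. A minimal sketch using only the Python standard library; the host and port are placeholders for your Elasticsearch node:

```python
import ssl

HOST = "node-0.example.com"   # placeholder: your Elasticsearch node
PORT = 9200                   # placeholder: HTTPS (REST) or transport port

# Fetch the certificate the server presents, without validating it.
pem = ssl.get_server_certificate((HOST, PORT))

with open("node-cert.pem", "w") as fh:
    fh.write(pem)

# Inspect the saved certificate with the tools shown above, e.g.:
#   openssl x509 -in node-cert.pem -text -noout
print(pem)
```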
https://docs.search-guard.com/7.x-40/troubleshooting-tls
2020-07-02T14:52:52
CC-MAIN-2020-29
1593655879532.0
[]
docs.search-guard.com
The Change Programme Roadmap shows clear transition, benefits, and objectives of your change programme. Map out your workstreams and KPIs – all on 1 sheet. A 1-year strategic plan – your complete IT Roadmap Template in one Powerpoint pack:- Governance, Strategy, Change, and Stakeholder Engagement. This template clearly shows allocation to each workstream and aligns it to project plans and activities. Project Resources & Budget Roadmap since 2008. Our most popular Visio Roadmap format, first started in 2005! With 5 Workstreams, Activities, Milestones, Risks – the best “at a glance” planning format. Published : March 7th, 2013 | SKU: BDUK-14.
https://business-docs.co.uk/downloads/company-roadmap-template/
2020-07-02T15:33:22
CC-MAIN-2020-29
1593655879532.0
[]
business-docs.co.uk
Key management for static key encryption In AWS Elemental MediaConnect, you can use static key encryption to secure content in sources, outputs, and entitlements. To use this method, you store an encryption key as a secret in AWS Secrets Manager, and you give AWS Elemental MediaConnect permission to access the secret. Secrets Manager keeps your encryption key secure, allowing it to be accessed only by entities that you specify in an AWS Identity and Access Management (IAM) policy. With static key encryption, all participants (the owner of the source, the flow, and any outputs or entitlements) need the encryption key. If the content is shared using an entitlement, both AWS account owners must store the encryption key in AWS Secrets Manager. For more information, see Setting up static key encryption.
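As an illustration of the Secrets Manager half of this setup, the sketch below stores a static key as a secret with boto3. The secret name, region, and key value are placeholders, and you would still need to grant MediaConnect access to the secret via an IAM role as described in the linked setup topic:

```python
import boto3

# Placeholders: choose your own secret name, region, and securely generated key.
SECRET_NAME = "mediaconnect/static-key/my-flow"
STATIC_KEY_HEX = "0123456789abcdef0123456789abcdef"  # example value only, not a real key

client = boto3.client("secretsmanager", region_name="us-east-1")

response = client.create_secret(
    Name=SECRET_NAME,
    Description="Static key for AWS Elemental MediaConnect encryption",
    SecretString=STATIC_KEY_HEX,
)

# The secret ARN is what you reference when configuring encryption on the
# source, output, or entitlement, together with an IAM role MediaConnect can assume.
print(response["ARN"])
```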
https://docs.aws.amazon.com/mediaconnect/latest/ug/encryption-static-key-key-management.html
2020-07-02T17:12:55
CC-MAIN-2020-29
1593655879532.0
[]
docs.aws.amazon.com
Availability report Overview Availability reports show availability for host groups, service groups, hosts, or services, during a selected time period. You can also use them to show trends — a graphic view of the status of a host or service during a selected time period. You create availability reports using menu entry Report > Availability > Create Availability Report. Create an availability report When you are in the availability report creation page, you can switch to the availability report page by clicking the Switch to SLA report link. To create an availability report: - Select the monitoring object from the Report type drop-down list. - Choose the objects to show in the report. - Select the report settings, as described in the table below. - Click Show report. The availability report The availability report displays the amount of time each object has spent in the specified states. It also includes a summary of all objects and pie charts, if specified for the report. You can perform the following from the report: - Click an object to get a more detailed report. - Any of the general report functions, such as save, edit, or export the report. For guidance, see Reports. Once you have saved the report, you can also schedule the report. For guidance, see Schedule reports.
https://docs.itrsgroup.com/docs/op5-monitor/8.1.0/topics/report/availability-report.html
2020-07-02T16:42:14
CC-MAIN-2020-29
1593655879532.0
[array(['../../Resources/Images/report/availability-report.png', None], dtype=object) ]
docs.itrsgroup.com
Activate and configure limit concurrent sessions plugin You can activate the Limit Concurrent Sessions plugin (com.glide.limit.concurrent.sessions) if you have the admin role. Before you begin: Role required: admin. Procedure: Navigate to System Definition > Plugins. Find and click the Limit Concurrent Sessions plugin. On the System Plugin form, review the plugin details and then click the Activate/Upgrade related link. Click Activate. To enable this feature and set a maximum limit of concurrent sessions, go to the Plugin Files tab, find the following properties, and change the setting values. glide.authenticate.limit.concurrent.interactive.sessions: You can enable the ability to limit concurrent sessions by setting the value to True. By default, this property is set to False, which means there is no limit on the number of interactive sessions a user can have active. Note: To disable this feature, set this property back to False. glide.authenticate.max.concurrent.interactive.sessions: You can set the maximum number of concurrent active interactive sessions a user can have on the instance across all nodes. (Optional) You can also amend the following properties, if necessary. glide.authenticate.session.types.to.limit.concurrency: This property limits session types. By default, only the web browser sessions have a limit. Session types include: Web Browser (1), Mobile Browser (2), ServiceNow Mobile App (3), Non-interactive (10). You can configure and set the value to '1' for web browser, '2' for mobile browser, or '1,2' for both. Note: Only web and mobile browser sessions can have a limit. There are no limits for sessions that originate from the ServiceNow mobile app or non-interactive sessions. glide.authenticate.limit.concurrent.sessions.across.all.nodes: This property restricts the limit of concurrent sessions per node instead of restricting them across all nodes of a ServiceNow instance. By default, the value is set to true, which limits user sessions across all nodes. If the property is set to false, only the sessions on that node and not the ones on the other nodes are subject to the limit. Click Update to have the settings take effect. What to do next: Set a concurrent session limit by user or role. Related topics: List of plugins (Madrid)
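The procedure above uses the Plugin Files tab. If you prefer to script the property change, for example from a provisioning pipeline, the same sys_properties records can be read and updated through the standard Table API. This is only a sketch, not the documented procedure: the instance URL and credentials are placeholders, and ACLs on sys_properties must permit the change.

```python
import requests

INSTANCE = "https://your-instance.service-now.com"  # placeholder
AUTH = ("admin", "password")                        # placeholder credentials
HEADERS = {"Accept": "application/json", "Content-Type": "application/json"}

PROPERTY = "glide.authenticate.limit.concurrent.interactive.sessions"

# Look up the property record by name.
resp = requests.get(
    f"{INSTANCE}/api/now/table/sys_properties",
    params={"sysparm_query": f"name={PROPERTY}", "sysparm_fields": "sys_id,name,value"},
    auth=AUTH, headers=HEADERS,
)
resp.raise_for_status()
record = resp.json()["result"][0]

# Enable the concurrent-session limit by setting the property to true.
update = requests.patch(
    f"{INSTANCE}/api/now/table/sys_properties/{record['sys_id']}",
    json={"value": "true"}, auth=AUTH, headers=HEADERS,
)
update.raise_for_status()
```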
https://docs.servicenow.com/bundle/madrid-platform-administration/page/integrate/authentication/task/limit-concurrent-sessions-plugin.html
2020-07-02T16:49:36
CC-MAIN-2020-29
1593655879532.0
[]
docs.servicenow.com
Simply enter the table to sync to. It will then show in the sync block on the right. How fast do you need the records to be updated? How it works: at each interval, it will scan your base for records that you updated in the last x minutes. These are the fields (columns) that you want to sync. What if the column name is not the same in base A and in base B? You need to put ':' in your field name, like this: nameA:nameB. Can it sync linked records? Yes, you need to put '/' in your field name, like this: [field name]/[linked table name]/[Primary field]. Please read this: there are different sync options that will surely fit your use case. By view: we will sync a specific view. If you keep it empty, we will scan all the records. (Note that from the API standpoint, a view saves your filter, but it doesn't save which fields are hidden, so you still need to list the fields to sync.) Push/pull: in the case where you are syncing many bases with the same settings, you would want to check the push/pull mode. It allows you to set things up only once and is really seamless. The primary field is a unique value that identifies each record. Sometimes your primary field is computed, so to make things easier, we take care of it with the "key" field. How to test and sync your settings manually to see what's going on. There are a few limitations to be aware of, and most of your issues will be solved by paying attention to these.
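To make the field-name syntax above concrete, here is a small illustrative parser for the ':' and '/' notations. It is only a sketch of how the strings are structured, not the service's actual implementation:

```python
def parse_field(spec: str) -> dict:
    """Interpret one field entry from the sync settings.

    "Name"                        -> same column name in both bases
    "nameA:nameB"                 -> column nameA in base A maps to nameB in base B
    "field/linked table/Primary"  -> linked-record field, resolved through the
                                     linked table's primary field
    """
    if "/" in spec:
        field, linked_table, primary_field = spec.split("/", 2)
        return {"kind": "linked", "field": field,
                "linked_table": linked_table, "primary_field": primary_field}
    source, _, target = spec.partition(":")
    return {"kind": "plain", "source": source, "target": target or source}


print(parse_field("Status"))
print(parse_field("Full Name:Name"))
print(parse_field("Projects/Projects table/Project ID"))
```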
https://docs.syncbases.com/settings
2020-07-02T16:45:08
CC-MAIN-2020-29
1593655879532.0
[]
docs.syncbases.com
Review system requirements Review system requirements and firewall rules Review Armor services and support information Learn how to complete the onboarding process and create a virtual machine Learn how to complete the onboarding process and install the Armor Anywhere agent (CORE Defense) Infrastructure: Create, configure, and maintain your virtual machine Compliance: Configure your vulnerability scanning options Support: Contact Armor Support Account: Manage your AMP account
https://docs.armor.com/plugins/viewsource/viewpagesrc.action?pageId=11632647
2020-07-02T16:31:22
CC-MAIN-2020-29
1593655879532.0
[]
docs.armor.com
Access to this feature depends on your subscription level. Requires Infrastructure Pro. New Relic Infrastructure integrations include an integration for reporting Amazon CloudFront service data to New Relic products. This document explains how to activate this integration and describes the data that can be reported. Features Amazon CloudFront is an Amazon web service that speeds up the distribution of your web content. With the New Relic CloudFront integration, you can track CloudFront data in New Relic Infrastructure, including error rates, request counts, and uploaded/downloaded bytes. You can track your CloudFront configuration and see how configuration changes impact performance. And, with New Relic Insights, you can create custom queries of your CloudFront integration data and custom charts. Activate integration To enable this integration, follow standard procedures to Connect AWS services to Infrastructure. Configuration and polling You can change the polling frequency and filter data using configuration options. Default polling information for the Amazon CloudFront integration: - New Relic polling interval: 5 minutes - Amazon CloudWatch data interval: 1 minute, with up to 1 minute delay because CloudFront sometimes reports partial data If you're using Lambda@Edge to customize content that CloudFront delivers in order to execute Lambda functions in AWS locations closer to your clients, you can enable the Collect Lambda@Edge data filter to get Lambda execution location metadata. Find and use data To find your integration data in Infrastructure, go to infrastructure.newrelic.com > Integrations > Amazon Web Services and select one of the CloudFront integration links. In New Relic Insights, data is attached to the LoadBalancerSample event type, with a provider value of CloudFrontDistribution. For more on how to use your data, see Understand and use integration data. Metric data The following data is collected for CloudFront Web distributions. Data is not available for RTMP distributions. Inventory data CloudFront configuration options are reported as inventory data. For more on inventory data and how to find it and use it, see Understand integration data.
https://docs.newrelic.com/docs/integrations/amazon-integrations/aws-integrations-list/aws-cloudfront-monitoring-integration
2020-07-02T16:45:01
CC-MAIN-2020-29
1593655879532.0
[]
docs.newrelic.com
You must meet certain requirements before installing and cabling the disk shelves. See the Mini-SAS HD SAS optical cable rules section and the Installation and Setup Instructions (ISI) that came with your new system. The ISI addresses system setup and configuration for your new system. You use the ISI in conjunction with this procedure to install and cable the disk shelves. ISIs are also available on the NetApp Support Site. AFF and FAS Documentation Center
https://docs.netapp.com/platstor/topic/com.netapp.doc.hw-ds-sas3-icg/GUID-3BD2BB25-A867-44E3-BB42-9E917CD074D8.html
2020-07-02T14:38:50
CC-MAIN-2020-29
1593655879532.0
[]
docs.netapp.com
#include <opencv2/img_hash/average_hash.hpp>: Calculates img_hash::AverageHash in one call.
#include <opencv2/img_hash/block_mean_hash.hpp>: Computes block mean hash of the input image.
#include <opencv2/img_hash/color_moment_hash.hpp>: Computes color moment hash of the input; the algorithm comes from the paper "Perceptual Hashing for Color Images Using Invariant Moments".
#include <opencv2/img_hash/marr_hildreth_hash.hpp>: Computes average hash value of the input image.
#include <opencv2/img_hash/phash.hpp>: Computes pHash value of the input image.
#include <opencv2/img_hash/radial_variance_hash.hpp>: Computes radial variance hash of the input image.
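These functions are also exposed through the Python bindings (in the opencv-contrib-python package) as cv2.img_hash.*. A small comparison sketch, with placeholder file names:

```python
import cv2
import numpy as np

# Placeholder file names; any two images readable by cv2.imread will do.
img_a = cv2.imread("image_a.png")
img_b = cv2.imread("image_b.png")

# Compute 8-byte perceptual hashes for both images.
hash_a = cv2.img_hash.pHash(img_a)
hash_b = cv2.img_hash.pHash(img_b)

# Hamming distance between the hashes: smaller means more similar images.
distance = int(np.count_nonzero(np.unpackbits(np.bitwise_xor(hash_a, hash_b))))
print(f"pHash Hamming distance: {distance}")
```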
https://docs.opencv.org/3.4/d4/d93/group__img__hash.html
2020-07-02T16:20:04
CC-MAIN-2020-29
1593655879532.0
[]
docs.opencv.org
Numeric and Mathematical Modules¶ The modules described in this chapter provide numeric and math-related functions and data types. The numbers module defines an abstract hierarchy of numeric types. The math and cmath modules contain various mathematical functions for floating-point and complex numbers. The decimal module supports exact representations of decimal numbers, using arbitrary precision arithmetic. The following modules are documented in this chapter: numbers— Numeric abstract base classes math— Mathematical functions cmath— Mathematical functions for complex numbers decimal— Decimal fixed point and floating point arithmetic fractions— Rational numbers random— Generate pseudo-random numbers statistics— Mathematical statistics functions
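A short example of the trade-offs mentioned above: binary floats accumulate representation error, while decimal and fractions keep exact values, and statistics provides the basic descriptive functions.

```python
from decimal import Decimal
from fractions import Fraction
import statistics

print(0.1 + 0.2)                        # 0.30000000000000004 (binary float rounding)
print(Decimal("0.1") + Decimal("0.2"))  # 0.3 exactly
print(Fraction(1, 3) + Fraction(1, 6))  # 1/2 exactly

print(statistics.mean([2, 4, 4, 4, 5, 5, 7, 9]))   # 5
print(statistics.stdev([2, 4, 4, 4, 5, 5, 7, 9]))  # sample standard deviation
```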
https://docs.python.org/3/library/numeric.html
2020-07-02T16:28:06
CC-MAIN-2020-29
1593655879532.0
[]
docs.python.org
Crate nom_derive nom-derive Overview nom-derive is a custom derive attribute, to derive nom parsers automatically from the structure definition. It is not meant to replace nom, but to provide a quick and easy way to generate parsers for structures, especially for simple structures. This crate aims at simplifying common cases. In some cases, writing the parser manually will remain more efficient. - API documentation - Documentation of the Nom attribute. This is the main documentation for this crate, with all possible options and many examples. Feedback welcome! #[derive(Nom)] This crate exposes a single custom-derive macro Nom which implements parse for the struct it is applied to. The goal of this project is that: - derive(Nom) should be enough for you to derive nom parsers for simple structures easily, without having to write it manually - it allows overriding any parsing method by your own - it allows using generated parsing functions along with handwritten parsers and combining them without effort - it remains as fast as nom nom-derive adds declarative parsing to nom. It also allows mixing with procedural parsing easily, making writing parsers for byte-encoded formats very easy. For example: use nom_derive::Nom; #[derive(Nom)] struct S { a: u32, b: u16, c: u16 } This adds a static method parse to S, with the following signature: impl S { pub fn parse(i: &[u8]) -> nom::IResult<&[u8], S>; } To parse input, just call let res = S::parse(input);. For extensive documentation of all attributes and examples, see the Nom derive attribute documentation. Many examples are provided, and more can be found in the project tests. Debug tips - If the generated parser does not compile, add #[nom(DebugDerive)] to the structure. It will dump the generated parser to stderr. - If the generated parser fails at runtime, try adding #[nom(Debug)] to the structure or to fields. It wraps subparsers in dbg_dmp and will print the field name and input to stderr if the parser fails.
https://docs.rs/nom-derive/0.6.2/nom_derive/
2020-07-02T15:51:03
CC-MAIN-2020-29
1593655879532.0
[]
docs.rs
Material Materials are used in conjunction with Mesh Renderers, Particle Systems and other rendering components used in Unity. They play an essential part in defining how your object is displayed. Properties The properties that a Material’s inspector displays are determined by the Shader that the Material uses. A shader is a specialised kind of graphical program that determines how texture and lighting information are combined to generate the pixels of the rendered object onscreen. See the manual section about Shaders for in-depth information about how they are used in a Unity project.
https://docs.unity3d.com/ru/2017.4/Manual/class-Material.html
2020-07-02T15:37:40
CC-MAIN-2020-29
1593655879532.0
[]
docs.unity3d.com
Operating Environment Overview The Gateway Operating environment top-level section contains settings that affect the Gateway as a whole, and do not belong to any other section. Almost all of the settings in this section are optional. Operation The only mandatory setting in this section is operatingEnvironment > gatewayName. This name is displayed to all users that connect to the Gateway, and is also used in name lookup and database logging. It is strongly recommended that this name be unique for each Gateway on a particular site. The Gateway listen ports can also be set in the operating environment (operatingEnvironment > listenPorts). The listen ports are used by components connecting to the gateway, such as Active Console or Webslinger, to request monitoring data for display to users. Note: This does not include Netprobe connections, as configuration for these is contained within Probes. If not configured, the gateway listen port defaults to port 7039 for the insecure channel and 7038 for the secure channel. Data quality options Settings in Operating environment control the data quality algorithm that Gateways use to maintain a consistent level of service under excessive load. This algorithm runs throughout the lifetime of a gateway (unless operatingEnvironment > dataQueues > disableChecks is set) and operates as follows: - A gateway monitors dataview updates to determine if the oldest pending update becomes stale (as controlled by the operatingEnvironment > dataQueues > maxDataAgeMs). If this occurs, a probe's connection is dropped to reduce the gateway's load and restore timely data processing. - The gateway determines which connection to drop based upon the CPU usage spent processing the incoming data over the previous minute. The connection with the highest load is then suspended for a period (see operatingEnvironment > dataQueues > connectionSuspensionDuration) before the gateway reconnects. - Once a connection has been suspended, no further suspensions will occur until a grace period (see operatingEnvironment > dataQueues > suspendGracePeriod) has elapsed, allowing time to evaluate the effect of the suspension on the quality of the data. - Setup changes represent a special case where the data age metrics may spike in the gateway. During setup application, no incoming data from netprobes is processed, leading to a backlog of updates to be applied. To avoid unnecessary netprobe suspensions, the algorithm is disabled during setup changes and for operatingEnvironment > dataQueues > setupGracePeriod seconds afterwards. Probe suspension may additionally be controlled by the suspend probe and unsuspend probe commands. See the commands appendix for details. For more information regarding Data quality, see Data Quality User Guide. Memory protection Settings for memory protection are found in the Data Quality tab. When data quality is disabled, or in extreme situations when it cannot suspend sufficient probes to prevent the gateway becoming overloaded, the gateway throttles the reading of TCP data to prevent the backlogged data-queues from unbounded growth. This is necessary but less preferable than a managed data-quality suspension because, if it continues without recovery, netprobe connections either flow-control or time out and netprobes are dropped at random. There are two threshold levels: - Low-priority threshold (operatingEnvironment > dataQueues > memoryProtection > lowPriorityThresholdMB). - High-priority threshold (operatingEnvironment > dataQueues > memoryProtection > highPriorityThresholdMB). 
When the low-priority threshold is breached, the gateway throttles reads from all importing (netprobe) connections but remains responsive to downstream components, such as Active Console. In the unlikely event that this fails to prevent memory growth and the high-priority threshold is breached, the gateway throttles reads from all connections and becomes unresponsive until it recovers. The default for the low-priority threshold is 250 MB. This is calculated to be enough to buffer 77 seconds (see operatingEnvironment > dataQueues > maxDataAgeMs) worth of data on a high bandwidth gateway (approximately 30,000 cell updates per second). This is in order to give the data-quality algorithm time to step in and save the situation before the threshold is reached. Note: These thresholds govern memory usage by unprocessed EMF messages only. The gateway memory footprint as a whole is typically far more influenced by other factors and could potentially exceed these thresholds without issue. It is unusual for unprocessed EMF messages to account for more than a few megabytes in a normally operating gateway. Conflation Settings for conflation are found in the Data Quality tab. Conflation is an optional and less drastic method of coping with an overloaded gateway than a data quality suspension. When the data queues (containing incoming sampler updates from netprobes) become backlogged due to the gateway being unable to process them as fast as they arrive, conflation allows the gateway to discard out-of-date cell updates and only process and publish the latest cell values. As this could potentially result in the gateway discarding important updates, or missing short-lived events, conflation is disabled by default and should only be used with care. Rapid cell updates When a Netprobe has published several updates to the same cell before the gateway has processed the first update: - Update cell from 1 to 2 - Update cell from 2 to 3 - Update cell from 3 to 4 - Update cell from 4 to 5 With conflation active, the gateway only publishes the latest value: - Update cell from 1 to 5 Updates to a recently created row When a netprobe updates values in a recently created row before the gateway has processed the create: - Create row newRow with three cells: 100,200,300. - Update first cell in newRow from 100 to 111. - Update second cell in newRow from 200 to 222. With conflation active, the gateway adds the row to the dataview with the latest values: - Create row newRow with three cells: 111,222,300. Updates to a row that is then removed When a netprobe updates values in a row and then removes the row before the gateway has processed the updates: - Update first cell in row1 from 100 to 111. - Update second cell in row1 from 200 to 222. - Remove row row1. With conflation active, the gateway discards the updates and only processes the row-removal: - Remove row row1. Short lived rows When a netprobe creates a row and then removes it again before the gateway has processed the create: - Create row newRow with four cells: 100,200,300,400. - Update first cell in newRow from 100 to 111. - Remove row newRow. With conflation active, the gateway conflates away the update as normal but does not conflate away the entire row: - Create row newRow with four cells: 111,200,300,400. - Remove row newRow. Potential Issues Conflation can prevent a gateway from becoming overloaded, and ensure that published values are always up-to-date, but there are a number of potential issues which you should be aware of. 
Lost Spikes A dataview cell that updates from 32% to 34% to 33% is unlikely to cause issues by having the intermediate update conflated away, but one that updates from 32% to 99% to 33% may miss an important spike. Similarly, a cell that goes from OK to ERROR to OK again could cause an alert to be missed if conflation is enabled. This might also affect compute-engine rules that use statistical functions such as maximum or minimum. Rate Function The rate function triggers off the time an update is processed, rather than the time it is generated, and therefore its general performance is likely to be improved by conflation. Note: Spikes in the rate-of-change in a cell may be conflated away. sampleTime and logNetprobeSampleTimeForDataItems If logNetprobeSampleTimeForDataItems is configured, cell updates may be logged with sample-times later than the times at which they were produced. This is because the sample-time is published by the netprobe along with the sample-data and will be conflated to the latest value. E.g. A series of updates produced at twenty second intervals by the netprobe: - Update cell1 @ 09:25:02 - Update cell2 @ 09:25:22 - Update cell3 @ 09:25:42 Might be conflated into a single update with the latest sample-time: - Update cell1, cell2, and cell3 @ 09:25:42 The updates to cell1 and cell2 may be logged with this later sample-time. Similarly, rules that reference the sample-time may only see the later value. Configuration Basic tab These settings are found under the Basic tab. operatingEnvironment > gatewayName A short name identifying the Gateway. When using database logging functionality, this name is also logged to the database, and is used to identify records for the Gateway. Mandatory: Yes When the <<timestamp>> FATAL: Gateway Mandatory 'gatewayName' has not been specified. [/gateway/operatingEnvironment] error message appears, this means that the Gateway name setting of the configuration files with the highest priority has no value. To resolve this: - Look for any enabled Gateway name settings in the Gateway Setup Editor, and check the main and include files. - Click Includes > Priority field, and increase the priority of the configuration file (main or include files) that has a Gateway name setting with a value. - Click Save current document to apply the changes. operatingEnvironment > licensingGroup Group that the Gateway requests licences from on the Licence Daemon. operatingEnvironment > listenPorts The gateway listen ports for incoming connections. - If operatingEnvironment > listenPorts > secure is set and operatingEnvironment > listenPorts > insecure is not set, then the Gateway only listens on a secure port. - If operatingEnvironment > listenPorts > secure is not set then the gateway only listens on an insecure port. - If both operatingEnvironment > listenPorts > secure and operatingEnvironment > listenPorts > insecure are set, the gateway listens on two ports. See Secure Communications for more details. The listen port can also be specified as a command-line argument to gateway. If this is done, then the command-line value is used for the lifetime of the gateway process - it cannot be overridden or altered by editing the gateway setup file. An example of using this command-line option is shown below: gateway2 -port <12345> operatingEnvironment > listenPorts > secure This specifies that the gateway should listen securely. In order to listen securely, an SSL certificate needs to be provided using either the -ssl-certificate or -ssl-certificate-key command line option. 
By default if configured to be secure, the gateway will listen on port 7038. This can be overridden by using the child setting operatingEnvironment > listenPorts > secure > listenPort.
operatingEnvironment > listenPorts > secure > listenPort
This value overrides the default secure listenPort. Specify an integer in the range 1-65535.
operatingEnvironment > listenPorts > insecure
This specifies that the gateway should listen insecurely. By default if configured to allow insecure connections, the gateway will listen on port 7039. This can be overridden by using the child setting operatingEnvironment > listenPorts > insecure > listenPort.
operatingEnvironment > listenPorts > insecure > listenPort
This value overrides the default insecure listenPort. Specify an integer in the range 1-65535.
operatingEnvironment > var
List of user environment variable definitions. See User Variables and Environments for details on how to configure environment variables.
Advanced tab
These settings are found under the Advanced tab.
operatingEnvironment > maxLogFileSizeMb
Maximum size in megabytes of the Gateway log file before it rolls that log file over. Valid values are 1-2047 inclusive for 32-bit Gateways.
10
operatingEnvironment > logArchiveScript
The name of a batch file or shell script that should be executed when the log file is rolled over.
Note: Using operatingEnvironment > logArchiveScript overrides LOG_ARCHIVE_SCRIPT (if set).
operatingEnvironment > Log time format
The time format used to record timestamps in the log file. Choose from:
- (Default) Iso-8601: 2019-09-25 09:18:28.871-0400
- Iso-8601-utc: 2019-09-25 13:18:28.871Z
- Legacy: <Wed Sep 25 09:18:28>
Mandatory: No
Note: The log time format can also be set using an environment variable. For more information, see Specify the log file time formatting in Gateway Log File.
operatingEnvironment > timezone
This sets the TZ environment variable which determines the time zone the Gateway runs in. This allows a Gateway in one country to monitor Netprobes in another country whilst keeping the time zones the same. The time zone can be specified in any format compatible with the Unix TZ environment variable:
- (Recommended) IANA time zone name, such as America/New_York or Europe/London
- (Not recommended) POSIX time zone specification, such as EST+5EDT,M3.2.0/2,M11.1.0/2
For more information, see the POSIX manual page for the tzset command. The Gateway attempts to validate your choice of time zone against the TZ environment variable. If TZ is not set, the Gateway uses the local time of its host machine.
operatingEnvironment > timezoneabbreviation
A list of time zone abbreviations and their default time zone regions. This is used to override the time zone abbreviations when parsing dates in rules and when parsing information from dataviews for standard formatting.
operatingEnvironment > internalQueueSizeLimit
Controls the maximum length of the internal update queue. Updates to data-items (e.g. a severity change as the result of running a rule) are placed in the queue temporarily between data updates. The default maximum limit should be adequate for normal gateway operation. If a pair (or more) of rules are configured such that an update caused by rule A makes rule B fire and update, then this can cause the internal queue to fill faster than it is processed. If the queue is completely filled an error message is logged, and gateway performance is likely to be affected.
The solution is to write rules A and B to be more selective, so that they do not fire each other. Certain compute engine rules (typically involving wildcarded paths) can also fill the processing queue during gateway startup. The queue limit can be increased to prevent warning messages if required, however this should only be done if it is known that this situation is a "one off". 4000<![CDATA[ ]]> operatingEnvironment > numRuleEvaluationThreads Specifies the maximum number of rule evaluation threads the Gateway can run. These threads are used to execute rules on data changes, and can be enabled if rule execution is becoming a bottleneck on a busy gateway. It is recommended that this is not set too high as doubling the number of threads does not double throughput. It should not be set higher than the number of CPU cores available. To set the number of rule evaluation threads used the Gateway will determine the number of available processors, using the taskset command on systems or the psrset command on systems, and evaluate the numRuleEvaluationThreads variable. The lower value will be used to specify the number of threads used by the Gateway to execute rules. The number of rule evaluation threads used is recorded in the gateway log. If the available number of processors is changed while the gateway is running then the number of threads to use is re-evaluated at the next setup change. For more detailed information about the optimum value to use, see Gateway Performance Tuning. A hard limit can also be placed on the number of rule threads by setting the environment variable MAX_RULE_THREADS to a positive integer. This will override the value specified in the Gateway Setup Editor. 0(no threads used) operatingEnvironment > historyFiles The maximum number of history files that the gateway is allowed to create when receiving set-up changes. Valid values are 0 -9999 inclusive. To suppress history file creation altogether, set this to zero. 10<![CDATA[ ]]> operatingEnvironment > dataDirectory Allows you to specify where temporary files which the gateway may produce while running should be stored. If not set, files are stored in the current working directory. If the directory specified already contains any of these temporary files, they are over-written. The data directory must have read, write, and execute permissions as it needs to be able to read, write and search within it. operatingEnvironment > duplicateRowAlerts When duplicate rows in a single dataview are detected, gateway alerts the user of this fact as it indicates a configuration error. These alerts can be adjusted using this setting. TICKER_AND_STATUS operatingEnvironment > insecurePasswordLevel In a number of places throughout the Gateway configuration, passwords have to be specified. Examples of this might be plugins that require logins to systems to retrieve the data they need, the gateway's connection to the database, or the configuration of users. In most of these places it is possible to enter the password in a number of different formats (depending on context), from a cleartext format, to more secure formats such as AES (for two way), and crypt (for one way). While it may be useful to use a cleartext format in a UAT or testing environment, you may prefer to ensure that a secure format is used when in a production environment. This setting helps locate these, by causing each insecure password to generate an issue at the specified level. This is shown when validating or saving the setup. 
Note: We have deemed standard encoded passwords (std) to be insecure since they are encoded rather than encrypted, and these will be flagged in the same way as cleartext passwords. The setting has the following effects: None— no checks are performed on the security of the passwords and no issues are reported. Critical— the setup cannot be saved and the Gateway cannot be started with any insecure passwords present. With Warning or Error set, the ability to save the setup with insecure passwords present depends on if the -max-severity command line parameter is set. See the Gateway Installation Guide. The Gateway data reports the level of this setting. None<![CDATA[ ]]> operatingEnvironment > allowComputeEngine Specifies whether the Gateway compute engine feature is available to add additional data to existing dataviews. It is allowed by default, but administrators and users can use this setting to disallow compute engine features. See Compute Engine. operatingEnvironment > writeStatsToFile The "write stats to file" section contains settings controlling how load monitoring statistics are written out from the gateway. These statistics can then be read by the Gateway load. Also see the Gateway Performance Tuning for more information. operatingEnvironment > writeStatsToFile > enablePeriodicWrite Specifies whether to write data to file periodically. If false, statistics are only written when the "write statistics" command is executed. true<![CDATA[ ]]> Connections tab These settings are found under the Connections tab. operatingEnvironment > heartbeatInterval Number of seconds before a Gateway sends a heartbeat message to a connected component if it does not receive any communication from the component. Gateway expects a reply within the number of seconds specified by the connectWait setting. If the reply is not received within this time, the connection is terminated and re-established. The valid range for the heartbeat interval is 20-300 seconds inclusive. 75(seconds) operatingEnvironment > connectWait Time in seconds to wait for a connection to Netprobe to be established. That is, the maximum duration the gateway waits after sending the initial TCP SYN segment for a SYN/ACK reply from the Netprobe. The valid range is 1-300 seconds inclusive. 30(seconds) operatingEnvironment > dnsCacheExpiryTime Time in minutes that the Gateway caches the result of resolving a hostname to an IP address. Valid values are 0-2880 inclusive. If set to 0, hostnames are cached indefinitely. 720(12 hours) operatingEnvironment > clientConnectionRequirements > minimumComponentVersion > minimumForAllComponents This instructs the Gateway to reject connections from every component with versions older than the specified version. You can specify the minimum version using the: - Version number. For example, GA4.7.0, or GA2011.2.1. - Version number with the build date. For example, GA4.7.0-180529. operatingEnvironment > clientConnectionRequirements > minimumComponentVersion > components > component > name Name of a Geneos component type. The drop-down list has the following options: - Active Console - Gateway - Licence Daemon - Netprobe - Web Dashboard - Webslinger operatingEnvironment > clientConnectionRequirements > minimumComponentVersion > components > component > version The minimum version of the component selected in operatingEnvironment > clientConnectionRequirements > minimumComponentVersion > components > component > name that the Gateway accepts connections from. 
You can specify the minimum version using the: - Version number. For example, GA4.7.0, or GA2011.2.1. - Version number with the build date. For example, GA4.7.0-180529. operatingEnvironment > clientConnectionRequirements > requireCertificates This allows the gateway to require certificates when connections are made to the gateway for certain connection types. This can be enabled/disabled for each supported connection type. The following connection types are supported: - Netprobe: Incoming connections from netprobes (this will include Floating Netprobes and Self-announcing Netprobes). - Importing Gateways: Incoming Importing Gateway connections. - Importing Gateways: Incoming Gateway connections from Importing Gateways (to which this gateway exports data). - Secondary Gateways: Incoming Gateway connections from the secondary Gateway in a Hot Standby configuration. operatingEnvironment > httpConnectionRequirements This group of settings allow HTTP requests made to the Gateway (e.g. from a web browser) to be restricted. operatingEnvironment > httpConnectionRequirements > internalData The internal data web pages provide low level information about various parts of the system. They may be requested by ITRS support when debugging issues. They do not form part of the normal operation of the Gateway so can safely be restricted to Geneos administrators. The internal data pages available, and the information available on each page, can vary by version. However, the connection requirements cover all internal data pages, so will secure any pages added in future versions. operatingEnvironment > httpConnectionRequirements > internalData > acceptHosts This allows the internal data web pages, used for debugging issues, to be viewed only from particular locations. The available settings are: All— allow access from any host. Local— allow access only from the local loopback interface where the Gateway is running (127.0.0.1). None— prevent access completely. Specific— a list of locations may be entered. Each item in the list can be specified as a hostname (if a reverse DNS entry is available for the remote host) or as an IP address. The source of any HTTP requests must match at least one item in the list otherwise they are rejected. If no items are specified, access is prevented completely. The remote hostname and IP address are written to the Gateway log file, along with the URL requested, for any attempts that are blocked. This can be useful to see if the Gateway host is able to access a reverse DNS entry for the remote host and therefore what would need to be added to the 'specific' list for the request to be accepted. If a hostname is not available then the IP address is seen instead of the name in the log file, so will appear twice. local<![CDATA[ ]]> operatingEnvironment > DNS > maxAcceptableDNSLookupTime The maximum time in seconds that the Gateway is allowed to perform a reverse DNS lookup. If this time is exceeded, reverse DNS lookups are disabled for the IP address for the number of units of time specified in operatingEnvironment > DNS > DNSReverseLookupDisableTime > value. For non-Gateway components, this setting defaults to the time specified in the environment variable $HR_TIMEOUT. 1 operatingEnvironment > DNS > DNSReverseLookupDisableTime > value Number of units of time that reverse DNS lookups are disabled for after exceeding operatingEnvironment > DNS > maxAcceptableDNSLookupTime. 
The unit of time is specified using operatingEnvironment > DNS > DNSReverseLookupDisableTime > units Once the time has elapsed, reverse DNS lookups are re-enabled for the IP address. For non-Gateway components, DNSReverseLookupDisableTime can be specified using the environment variable $HR_REVERSE_LOOKUP_DISABLE_TIME. 5 operatingEnvironment > DNS > DNSReverseLookupDisableTime > units The unit of time used to determine how long DNS lookups are disabled for after exceeding operatingEnvironment > DNS > maxAcceptableDNSLookupTime. There are two options: minutes seconds minutes Debug tab These settings are found under the Debug tab. operatingEnvironment > debug A list of gateway debug settings. These settings are only intended for debugging error conditions and should be enabled with care. Caution: Use of these setting is likely to adversely impact the performance of the Gateway and should only be enabled when debugging a particular configuration and in coordination with ITRS support staff. Data Quality tab These settings are found under the Data Quality tab. operatingEnvironment > dataQueues > disableChecks Enable this setting to disable the data quality checking algorithm. false(algorithm is run) operatingEnvironment > dataQueues > maxDataAgeMs Time in milliseconds of the maximum acceptable age for a pending update to a dataview. The limit is inclusive, so an update must be older than the set value to cause a connection to be suspended. For more details on how this setting is used, see Data quality options. Note: The default value is set to approximate the behaviour of Gateway versions prior to the introduction of the data quality feature. 77000(77 seconds) operatingEnvironment > dataQueues > connectionSuspensionDuration Time in seconds that a connection (to a Netprobe) is suspended before the gateway reconnects. For more details on how this setting is used, see Data quality options. 300(5 minutes) operatingEnvironment > dataQueues > suspendGracePeriod Time in seconds specifying how long the gateway waits after suspending a connection before allowing further connections to be suspended. For more details on how this setting is used, see Data quality options. 60(1 minute) <![CDATA[ ]]> operatingEnvironment > dataQueues > setupGracePeriod Time in seconds specifying how long the gateway suspends the data quality algorithm for after a setup change. For more details on how this setting is used, see Data quality options. 60(1 minute) operatingEnvironment > dataQueues > memoryProtection Allows overriding the default data-queue memory protection settings. For more details on how this setting is used, see Memory protection. operatingEnvironment > dataQueues > memoryProtection > lowPriorityThresholdMB Threshold size in MB for backlogged EMF messages at which the gateway throttles read-data from low-priority connections. Low priority connections are importing EMF connections (Netprobe connections only). All other gateway connections continue to operate normally. For more details on how this setting is used, see Memory protection. 500<![CDATA[ ]]> operatingEnvironment > dataQueues > memoryProtection > highPriorityThresholdMB Threshold size in MB for backlogged EMF messages at which the gateway throttles read-data from all connections. In practice it is very unlikely even for a heavily overloaded gateway to hit this threshold, as the low-priority threshold is hit first. For more details on how this setting is used, see Memory protection. 
750
operatingEnvironment > conflation
Settings to control conflation of incoming monitoring data. For more details on how this setting is used, see Conflation.
operatingEnvironment > conflation > enabled
Whether conflation is enabled. Conflation can significantly aid an overloaded gateway and ensure that all published data is as up-to-date as possible. However, it does this by discarding out-of-date cell updates and should not be enabled if this is unacceptable. For more details on how this setting is used, see Conflation.
operatingEnvironment > conflation > strategy
Specify the strategy for controlling gateway conflation, so that conflation is only enabled when required and no updates are unnecessarily discarded.
operatingEnvironment > conflation > strategy > maxDataAgeThreshold
Under this strategy the gateway does not enable conflation unless the maximum age of backlogged updates (as displayed by the Probe data) exceeds a certain threshold. Conflation works best when it is preventing stale data from building up rather than clearing large backlogs (not only does it have fewer backlogged messages to process, but it minimises the number of updates conflated away), so the threshold should not be set too high. An ideal value for the threshold is the minimum samplers > sampler > sampleInterval used in the setup. For this reason it defaults to the default sampleInterval of twenty seconds.
operatingEnvironment > conflation > strategy > maxDataAgeThreshold > threshold
Time in milliseconds for the threshold maxDataAge above which conflation is enabled. An ideal value for this setting is the minimum samplers > sampler > sampleInterval used in the setup.
20000
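To make the conflation behaviour described above concrete, the following is a minimal Python sketch of the core idea: keep only the newest pending value per cell while discarding pending updates for removed rows. It is not Gateway code; the event format is invented for illustration, and the real implementation additionally folds updates into pending row creates and preserves strict ordering.

from collections import OrderedDict

def conflate(backlog):
    # backlog items: ("update", row, cell, value) or ("remove", row)
    latest = OrderedDict()      # (row, cell) -> newest pending value
    removals = []               # row removals are processed, not conflated away

    for event in backlog:
        if event[0] == "update":
            _, row, cell, value = event
            latest[(row, cell)] = value                 # a newer value replaces an older one
        elif event[0] == "remove":
            _, row = event
            for key in [k for k in latest if k[0] == row]:
                del latest[key]                         # drop pending updates to the removed row
            removals.append(event)

    return [("update", r, c, v) for (r, c), v in latest.items()] + removals

# The "rapid cell updates" example above collapses to a single publish:
backlog = [("update", "row1", "cpu", 2),
           ("update", "row1", "cpu", 3),
           ("update", "row1", "cpu", 4),
           ("update", "row1", "cpu", 5)]
print(conflate(backlog))   # [('update', 'row1', 'cpu', 5)]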
https://docs.itrsgroup.com/docs/geneos/5.0.0/Gateway_Reference_Guide/gateway_operating_environment.htm
2020-07-02T16:46:33
CC-MAIN-2020-29
1593655879532.0
[]
docs.itrsgroup.com
The). Download and further details @
https://docs.microsoft.com/en-us/archive/blogs/nawar/the-exchange-activesync-admin-easadmin
2020-07-02T14:51:22
CC-MAIN-2020-29
1593655879532.0
[]
docs.microsoft.com
Interview with a Wiki Ninja, MVP, Partner, MCC, & Expert in .NET & Azure - Gaurav Kumar Arora Welcome to another Interview with a Wiki Ninja! This week's interview is with... Gaurav Kumar Arora Gaurav is an MVP, Microsoft Partner, and MCC! He has written 9 fantastic articles! Example articles: - Azure:Create and Deploy ASP.NET WEBAPI to Azure and Manage using Azure API Management - Guru Award Winner - Cruds in NancyFx using ASP.Net and FluentNHibernate Using Repository Pattern - ASP.NET Web API: Discussing route constraints and creating custom constraints - Consume ASP.NET WEB API via Windows Phone using RestSharp - C#: How to check whether API server is up or down - Guru Award Winner - Simply c# Func - Guru Award Winner MSDN/TechNet Statistics: - 10 Wiki Articles - 59 Wiki Edits - 17 Achievement Awards - Multiple Guru Medals in Azure and Visual C#! Let's get to the interview! ====================== First of all thanks to Mr. Ed Price and all Wiki Team for this honor. Who are you, where are you, and what do you do? What are your specialty technologies? Hi everyone, this is Gaurav Kumar Arora from India. I’m currently located in New Delhi (Capital and heart of India). I live with loving family (Mom and my lovely wife). Science and Technology is in my blood. When I was in 6th standard, I created a prototype model of an electronic city (model for warehouses and factories) for my school Science & Technology fest (in year 1987-88). This model was based on solar energy and alerts for critical situations eg. If fire in a factory or material is out of stock. On Vacassion in Rani Khet, Uttranchal, India After completion of my Software Engineering Diploma, I started my carrier with my favorite language C/C++ at O.S.E. Ltd., India in year-1998. I am always interested in Research & Development and believe in ‘Learn & Share’ methodologies. I always thankful to team OSE, they have sponsored my higher education and I completed many courses viz. AI Certs and M.Phill (Comp. Sc.) and currently, pursuing Ph.D. (Comp. Sc.). After that I moved to Pyramid IT Consulting (P) Ltd. And these days, I am mentoring 2-startups (as a consultant Architect) to speed-up their development for Azure IoT stuffs. In 1999, I started a small group (named it as Muhalla Techies), where we exchange our knowledge on various technologies/languages including C/C++ , Microsoft Technologies (especially MS Windows). I thanksful to Microsoft Team who created this history page: Speaking in conference Local group - Noida, India I believe in smart-hard-works and I am a positive thinker. As per my view, there is nothing impossible in the world, you just need hard work and determination. These days, I am also working on my dream project (aka Mango in clouds), its still in drafts and working to release few white papers in near future. Speaking Local group - Noida, India Apart from writing, speaking, conducting events of Technology, I usually tried to spend my time with my family. I love nature and whenever get free time, I spend few moments to enjoy chirping of words. What are your big projects right now? I am looking for any certifications related to Machine Learning and Artificial Intelligence, I would love to attend any lab/event on Neuron Networks. I am associated with few local groups, where I speak and mentoring on various technologies (especially ASP.Net, Azure and Software & Architecture). During my association with Hyderabad Techies (a Microsoft User Group), we have created records for delivering nonstop 48 hrs. and 100 hrs. Webinars. 
I was doing Silverlight Applications, Games stuff as a WebMaster of SilverlightClub (this website has been acquired by Microsoft). I am working on subject ASP.Net WebHooks and ASP.Net Core for my upcoming books. Here are glimpse from my stuffs: Yeah! I am on TechNet Wiki J - , where I love to write, creative posts. I do write blog at: , where I writes about Microsoft Technologies I am a mentor at IndiaMentor where I provide free mentoring to Students and Devs to develop their technical and Industrial skills. I am also a Curator at Docs.com (aka MS Curah). I am a Site-Coordinator of DotNetSpider – a free online Tech community. I love to do coding for ASP.NET WehBooks and always thankful to Henrik Frystyk Nielsen for his supporting words for my work. Recently I got the biggest award anyone could ever receive, namely the title of Microsoft Most Valuable Professional, this award is due to all my colleagues and community supporters who helped me to serve for the community. What is TechNet Wiki for? Who is it for? In my view, Technet Wiki is a platform to showcase our writing and innovative skills. It is for all Professionals, teachers, authors, writers, students and all who want to share knowledge. Its also a knowledge bank where we can learn and write good contents and programs. What do you do with TechNet Wiki, and how does that fit into the rest of your job? Apart from my stuff I do for community activities, I read the Azure blog at and try to make myself updated with new Azure technologies and stuffs. I like write-ups on Security not only related to Azure but all Microsoft Stack (viz. Asp.Net, C# etc.). As I said earlier, I believe in learning so, I am still a learner and always try to learn new things and TechNet Wiki is the place which provides me such platform to fulfill my needs. What is it about TechNet Wiki that interests you? Its always my pleasure to read new articles in TechNet Wiki, there are great authors who regularly contributed on their expertise subject and I love their articles. TechNet wiki is a repository of quality articles, which guide in right direction for specific subject. What are your favorite Wiki articles you’ve contributed? It’s a tricky one for me. All my TechNet Wiki articles are favorite for me. The most liked and recommended articles as per readers is: Azure:Create and Deploy ASP.NET WEBAPI to Azure and Manage using Azure API Management This article selected for TechNet Guru award and published in TechNet Wiki Magazine (Flipboard) October 2015 Edition. What are your top 5 favorite Wiki articles? The list is long, but here are five: How to Configure Windows Failover Cluster in Azure for AlwaysOn Availability Groups ASP.NET 5 - Connect to SQL Azure Getting Started: Windows Workflow Foundation and ASP.NET ASP.Net Web Greeting Card Tool ASP.NET MVC 5 – Entity Framework 6, CRUD Operations on Visual Studio 2015 There are few more articles, I always admire to read (not Wiki articles): Patterns Determine Decisions (microwaves are EVIL!!!!) Introducing Microsoft ASP.NET WebHooks Preview Do you have any comments for product groups about TechNet Wiki? TechNet Wiki team is doing great stuff. It’s a team work to make product presence globally by quality articles. If I got a chance I would like to add a separate category for ‘ASP.Net WebHooks’ currently its in preview and I am expecting a great future for this. There would be a separate sections for Azure IoT stuffs and Azure scaling (for quality articles). Do you have any tips for new Wiki authors? 
For a new author, it’s a place to start. I would say to new authors do not hesitate to select any subject and start with a descriptive but meaningful articles. One more, todays many new authors are participating and writing articles for more points, it’s a suggestion – please submit your article without any interest. Discuss your articles within your group and other authors. Ask others to read articles and take all comments as a compliment. There is always a scope of improvement so, learn and write quality articles. Do not forget to follow Technet Wiki article rules while you’re submitting your articles. =================================== Thank you to Gaurav for all your fantastic articles and community contributions! I really appreciate the high level of quality that you bring to your articles. Everyone, please join me in thanking Gaurav for all he does for the Microsoft .NET community! Jump on in! The Wiki is warm! - Ninja Ed
https://docs.microsoft.com/en-us/archive/blogs/wikininjas/interview-with-a-wiki-ninja-mvp-partner-mcc-expert-in-net-azure-gaurav-kumar-arora
2020-07-02T17:08:32
CC-MAIN-2020-29
1593655879532.0
[array(['https://i1.social.s-msft.com/profile/u/avatar.jpg?displayname=gaurav%20kumar%20arora&size=extralarge&version=0b3f761e-e781-4086-9c6b-b900b2190ff7', "Gaurav Kumar Arora's avatar Gaurav Kumar Arora's avatar"], dtype=object) ]
docs.microsoft.com
Storage. Click either of the summary charts to see details by host: space used, free, and total in gigabytes and as a percentage of the total. Click a host name to display a list of directories on the host file system where partitions containing one or more Greenplum Database data directories are mounted. Hover over the directory name to see a list of the Greenplum data directories the partition contains. Note: Newly added data directories (a newly created tablespace, for example) are not immediately updated, but will be refreshed within four hours. GP Segments Usage History The GP Segments Usage History panel presents a chart of percentage of disk space in use for the time period set by the control in the panel header. Hover over the chart to see the percentage disk in use by all Greenplum Database segments at any given point in time. GP Masters Usage History The GP Masters Usage History panel presents a chart of percentage of disk space in use by the master and standby masters for the time period set by the control in the panel header. Hover over the chart to see the percentage disk in use at any given point in time.
https://gpcc.docs.pivotal.io/490/topics/ui/storage-status.html
2020-07-02T14:44:43
CC-MAIN-2020-29
1593655879532.0
[]
gpcc.docs.pivotal.io
Virtualization - setGuestCustomizationConfiguration Virtualization - setGuestCustomizationConfiguration Description : This command uses an XML file to set operating system-wide configuration settings for virtual guest packages (VGPs) that are based on discovered templates. For example, you can use this command to specify that the Windows Server 2008 VGP contains a user name of myUser, an administrator password of myPassword and a license key of xyz, along with other settings. This command is used in combination with the createVGTemplateEnrollmentJobForServer command and the createVGTemplateEnrollmentJobForServerGroup command.. When these commands discover templates and create VGPs that are based on the templates, they use the configuration settings you established with the setGuestCustomizationConfiguration command to complete the VGPs. For information on obtaining and using the values in the XML file, see the introductory chapter, Virtualization Concepts. Return type : Boolean Command Input : Example The following example shows how to specify configuration settings for operating systems. Script Virtualization setGuestCustomizationConfiguration "//myHost/c/virtConfiguration.xml"
https://docs.bmc.com/docs/blcli/86/virtualization-setguestcustomizationconfiguration-481025133.html
2019-10-14T07:16:17
CC-MAIN-2019-43
1570986649232.14
[]
docs.bmc.com
API stands for Application Programming Interface, which allows you to interact with the server without using its graphical user interface. It is a set of programming instructions, protocols and standards for accessing a web-based software application. It is a software-to-software interface, not one designed for end users.
For example, imagine that you are booking a flight. You open the website of the airline of your choice to check availability and the cost of a flight for your chosen dates. To do that, the site has to access the airline's database. For you it is a click of a mouse and a couple of seconds of browsing, but under the covers a great deal of work happens to get you that data. The site uses an API to reach the database, retrieve the information you require and then present it in the interface you are looking at. Many more operations are performed using APIs, but since all of that happens behind the curtains, it may seem less obvious and more difficult than it is.
An API is used for building new software that uses data from, or works jointly with, the chosen program (whose API you are using). In this way, code allows two different applications (DSX and your trading robot, for example) to communicate with each other without going through the usual interface. Your robot can, for instance, analyse the order book at DSX and decide whether you should buy or sell some currency. You can find the full DSX API documentation at that link
When to use API?
Our clients use the API when building robots or accessing their accounts remotely through self-developed terminals.
Where to create API keys?
API keys (private and public) can be created in your personal account at DSX. Please open the Settings section () for that.
Is there any limit on the number of requests?
If you make more than 60 requests to Trading API methods (/tapi/) per minute using the same pair of API keys, you will receive an error on each new request during the same minute.
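Our clients typically call the Trading API from a small script or robot. The sketch below shows, in Python, the general shape of an authenticated request made with a public/private key pair. Treat it as an assumption-laden illustration: the endpoint, method name, parameter names and signing scheme (HMAC of the request body with the private key) are placeholders to be checked against the full API documentation.

import hashlib
import hmac
import time
import urllib.parse
import urllib.request

PUBLIC_KEY = "your-public-key"       # created in the Settings section at DSX
PRIVATE_KEY = b"your-private-key"    # keep this one secret

def trading_api_call(method, **params):
    # Build the request body with a method name and an ever-increasing nonce.
    params.update({"method": method, "nonce": int(time.time())})
    body = urllib.parse.urlencode(params).encode()
    # Assumed signing scheme: HMAC-SHA512 of the body with the private key.
    signature = hmac.new(PRIVATE_KEY, body, hashlib.sha512).hexdigest()
    request = urllib.request.Request(
        "https://dsx.uk/tapi/",                      # placeholder endpoint
        data=body,
        headers={"Key": PUBLIC_KEY, "Sign": signature})
    with urllib.request.urlopen(request) as response:
        return response.read()

# Remember the rate limit: at most 60 /tapi/ calls per minute per key pair.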
https://docs.dsx.uk/en/en/articles/2069138-what-is-api-and-when-do-you-need-it
2019-10-14T05:47:38
CC-MAIN-2019-43
1570986649232.14
[]
docs.dsx.uk
ASP.NET Troubleshooting You are probably here because your application doesn't log errors to elmah.io, even though you installed the integration. Before contacting support, there are some things you can try out yourself. - Make sure that you are referencing one of the following NuGet packages: Elmah.Io, Elmah.Io.AspNet, Elmah.Io.Mvc or Elmah.Io.WebApi. - Make sure that the Elmah.Io.Client NuGet package is installed and that the major version matches that of Elmah.Io, Elmah.Io.AspNet, Elmah.Io.Mvcor Elmah.Io.WebApi. - Make sure that your project reference the following assemblies: Elmah, Elmah.Io, and Elmah.Io.Client. - Make sure that your web.configfile contains valid config as described here. You can validate your web.configfile using this Web.config Validator. When installing the Elmah.IoNuGet package, config is automatically added to your web.configfile, as long as your Visual Studio allows for running PowerShell scripts as part of the installation. To check if you have the correct execution policy, go to the Package Manager Console and verify that the result of the follow statement is RemoteSigned: Get-ExecutionPolicy - Make sure that your server has an outgoing internet connection and that it can communicate with api.elmah.ioon port 443. Most of our integrations support setting up an HTTP proxy if your server doesn't allow outgoing traffic. - Make sure that you didn't enable any Ignore filters or set up any Rules with an ignore action on the log in question. - Make sure that you don't have any code catching all exceptions happening in your system and ignoring them (could be a logging filter or similar). - If you are using custom errors, make sure to configure it correctly. For more details, check out the following posts: Web.config customErrors element with ASP.NET explained and Demystifying ASP.NET MVC 5 Error Pages and Error Logging. Common exceptions and how to fix them Here you will a list of common exceptions and how to solve them. TypeLoadException Exception [TypeLoadException: Inheritance security rules violated by type: 'System.Net.Http.WebRequestHandler'. Derived types must either match the security accessibility of the base type or be less accessible.] Microsoft.Rest.ServiceClient`1.CreateRootHandler() +0 Microsoft.Rest.ServiceClient`1..ctor(DelegatingHandler[] handlers) +59 Elmah.Io.Client.ElmahioAPI..ctor(DelegatingHandler[] handlers) +96 Elmah.Io.Client.ElmahioAPI..ctor(ServiceClientCredentials credentials, DelegatingHandler[] handlers) +70 Elmah.Io.Client.ElmahioAPI.Create(String apiKey, ElmahIoOptions options) +146 Elmah.Io.Client.ElmahioAPI.Create(String apiKey) +91 Elmah.Io.ErrorLog..ctor(IDictionary config) +109 Solution This is most likely caused by a problem with the System.Net.Http NuGet package. Make sure to upgrade to the newest version ( 4.3.4 as of writing this). The default template for creating a new web application, installs version 4.3.0 which is seriously flawed.
https://docs.elmah.io/asp-net-troubleshooting/
2019-10-14T06:31:14
CC-MAIN-2019-43
1570986649232.14
[]
docs.elmah.io
Constructs the image pyramid which can be passed to calcOpticalFlowPyrLK.
Computes a dense optical flow using Gunnar Farneback's algorithm. The function finds an optical flow for each prev pixel using the [55] algorithm so that \[\texttt{prev} (y,x) \sim \texttt{next} ( y + \texttt{flow} (y,x)[1], x + \texttt{flow} (y,x)[0])\]
Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids. The function implements a sparse iterative version of the Lucas-Kanade optical flow in pyramids. See [22]. The function is parallelized with the TBB library.
Finds an object center, size, and orientation. See the OpenCV sample camshiftdemo.c that tracks colored objects.
Computes an optimal affine transformation between two 2D point sets. The function finds an optimal affine transform [A|b] (a 2 x 3 floating-point matrix) that best approximates the affine transformation between:
- Two point sets
- Two raster images. In this case, the function first finds some features in the src image and finds the corresponding features in the dst image. After that, the problem is reduced to the first case.
In case of point sets, the problem is formulated as follows: you need to find a 2x2 matrix A and a 2x1 vector b so that the sum of squared distances \[\sum _i \| \texttt{dst}[i] - A \cdot \texttt{src}[i] - b \| ^2\] is minimized.
Finds the geometric transform (warp) between two images in terms of the ECC criterion [53]. The function estimates the optimum transformation (warpMatrix) with respect to the ECC criterion ([53]).
Finds an object on a back projection image.
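As an illustration of how these functions fit together in practice, the snippet below uses the OpenCV Python bindings to track sparse features between two frames with the pyramidal Lucas-Kanade method and to compute a dense Farneback flow field. The frame file names are placeholders.

import cv2

prev = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)    # placeholder frames
next_ = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Sparse flow: pick good corners in the first frame, then track them.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01, minDistance=7)
p1, status, err = cv2.calcOpticalFlowPyrLK(
    prev, next_, p0, None,
    winSize=(21, 21), maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
tracked = p1[status.flatten() == 1]      # keep only successfully tracked points

# Dense flow: a (dx, dy) displacement vector for every pixel of prev.
flow = cv2.calcOpticalFlowFarneback(prev, next_, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
print(tracked.shape, flow.shape)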
https://docs.opencv.org/4.0.0/dc/d6b/group__video__track.html
2019-10-14T06:13:23
CC-MAIN-2019-43
1570986649232.14
[]
docs.opencv.org
About sending messages with Campaign
Once you have defined the target and created the content of a message, you need to test it and approve it before sending it to the main target. To do this:
- Preview your delivery by using a test profile.
- Schedule the sending: define when to send the message.
- Prepare the send: this step allows you to move on to analyzing and preparing the messages to send. The message preparation analyzes the target, the personalization and the validity of the message. Errors detected during this step must be corrected before being able to proceed further. You can launch the message preparation as many times as needed. You can set global cross-channel fatigue rules that will automatically exclude oversolicited profiles from campaigns. See Fatigue rules.
- Test the send: this step allows you to approve the message by sending proofs.
- Check the delivery rendering: make sure that your message will be displayed in an optimal way on a variety of web clients, webmails and devices (highly recommended).
- Send the message: once the message is ready, you can start the sending. Access logs and reports are then available to monitor the message delivery and measure the success of your campaign. Adobe Campaign also provides an email alerting system to keep track of delivery successes or failures.
Related topics:
https://docs.adobe.com/content/help/en/campaign-standard/using/testing-and-sending/about-sending-messages-with-campaign.html
2019-10-14T07:07:46
CC-MAIN-2019-43
1570986649232.14
[]
docs.adobe.com
Project configuration Introduction A Dataform project is primarily configured through the dataform.json file that is created at the top level of your project directory. In addition, package.json is used to control NPM dependency versions, including the current Dataform version. dataform.json This file contains information about the project. These settings, such as the warehouse type, default schema names, and so on, are used to compile final SQL. The following is an example of the dataform.json file for a BigQuery project: { "warehouse": "bigquery", "defaultSchema": "dataform", "assertionsSchema": "dataform_assertions", "gcloudProjectId": "my-project-id" } Changing default schema names: { ... "defaultSchema": "mytables", ... } Assertions are created inside a different schema as specified by the assertionsSchema property. package.json. Updating Dataform to the latest version All Dataform projects depend on the @dataform/core NPM package. If you are developing your project locally and would like to upgrade your Dataform version, run the following command: npm update @dataform/core If you use the dataform command line tool, you may also wish to upgrade your globally installed Dataform version: npm update -g @dataform/cli
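For reference, a minimal package.json for a Dataform project usually just pins the @dataform/core dependency. The version number below is purely illustrative; use whichever version your project currently targets.

{
  "dependencies": {
    "@dataform/core": "1.4.8"
  }
}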
https://docs.dataform.co/guides/configuration/
2019-10-14T05:46:04
CC-MAIN-2019-43
1570986649232.14
[]
docs.dataform.co
CustomValidator.ValidateEmptyText Property
Definition
Gets or sets a Boolean value indicating whether empty text should be validated.
public: property bool ValidateEmptyText { bool get(); void set(bool value); };
[System.Web.UI.Themeable(false)] public bool ValidateEmptyText { get; set; }
member this.ValidateEmptyText : bool with get, set
Public Property ValidateEmptyText As Boolean
Property Value
Remarks
Each validator can be associated with a targeted control. In previous versions of the .NET Framework, if the targeted control had an empty string value, such as a Text property having a value of String.Empty, the validator (except for the RequiredFieldValidator validator) would not evaluate the targeted control and would simply return that the validation passed. The ValidateEmptyText property is new for the .NET Framework version 2.0. If ValidateEmptyText is set to true, the validator evaluates the control's value (using the criteria specified to the CustomValidator control) and returns the validation results. This property allows developers to evaluate the results of a CustomValidator control regardless of the value of the targeted control. This property cannot be set by themes or style sheet themes. For more information, see ThemeableAttribute and ASP.NET Themes and Skins.
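As a sketch of how this property is typically used (the control IDs and handler below are illustrative, not taken from this reference page): setting ValidateEmptyText to true makes the ServerValidate handler run even when the validated TextBox is empty, so the custom logic can decide whether an empty value is acceptable.

<asp:TextBox
<asp:CustomValidator
    OnServerValidate="CodeValidator_ServerValidate"
    ErrorMessage="Please enter a 6-character code." />

protected void CodeValidator_ServerValidate(object source, ServerValidateEventArgs args)
{
    // With ValidateEmptyText="True" this handler also fires for empty input,
    // so args.Value may be an empty string here.
    args.IsValid = args.Value.Length == 6;
}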
https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.webcontrols.customvalidator.validateemptytext?view=netframework-4.8
2019-10-14T06:52:18
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
Integrating Dynamics NAV and Microsoft Office Dynamics NAV includes several features that work with Microsoft Office products, including Excel, Word, OneNote, Outlook, and SharePoint. Some of the features require only that Office is installed on or accessible from the devices that are running the Dynamics NAV clients, whereas other features require additional configuration. Depending on the feature, some configuration tasks are performed on the Dynamics NAV deployment environment, such as configuring the Microsoft Dynamics NAV Server instance. These tasks are typically done by the system or IT administrator. Other tasks are performed on the application from the Dynamics NAV clients, such as configuring user accounts. These tasks are typically done by the business application administrator. The following table describes the available features: See Also Configuring Microsoft Dynamics NAV Server Feedback
https://docs.microsoft.com/en-us/dynamics-nav/integrating%20dynamics%20nav%20and%20office
2019-10-14T06:50:58
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
Getting started for developers Thank you for either choosing PlayFab as your platform for back-end services and LiveOps, or evaluating the PlayFab offering. Welcome! The purpose of this section is to shed some light on how PlayFab helps you - as a developer - to build your game using PlayFab. PlayFab offers you a wide range of services, and has over 280 different APIs that you can leverage to make your game. This can be a lot to absorb in one session, so we'll take it slow. Let's start with the first steps of making your game - and then move into the more advanced features as we go. Note Before you can make your first API call, you need to create a PlayFab Developer account. Making your first API call with PlayFab We have SDKs for all major game engines and languages. Choose an environment from the links provided below, and follow it to the appropriate quickstart guide. These quickstarts walk you through installing your environment, creating a new test project, and making your first API call. Pick your SDK: - Unity - Unreal - HTML5 (Javascript) - Flash (ActionScript3) - C# - Cocos2d-x - Xamarin - Node - Java - Defold (Lua) - Corona (Lua) - Windows - C++ - Linux - C++ - Xbox - C++ Updating your login mechanism The first step in adding PlayFab to any game is always logging in the player. Logging in the player returns a security token that is needed for all other API calls. The quickstart guides utilize a test TitleId - but from now on, you should be using your own. Create a Title in Game Manager, and update your environment with your own TitleId. Obtaining your TitleId A TitleId is obtained from the Game Manager. If you haven't already, register for a free PlayFab developer account, then log into the Game Manager. Once you have logged in, select Settings. The TitleId for your game should already be present in the field below the Name column. The SDK guide that you followed in the first step should have included instructions on how to enter your Title ID. Login and account basics Now that you're set up to make API calls, the starting point for any PlayFab integration is authentication. You have to authenticate your player to make further API calls. PlayFab offers a number of methods to authenticate and link your players. Here are some resources that will help you with the initial authentication of your player: - Login basics and Best Practices – Check this tutorial first to learn about the best practices to use various authentication methods in your game. - Authentication Service Helper – Learn how this service can save you valuable time by leveraging building best practices in this authentication service for each SDK. - Authentication quickstart – Use this guide to understand the basics of authentication calls into PlayFab. - Account Linking tutorial - Learn about linking and unlinking different types of accounts to a single player profile. Next steps Every game is fairly different, so you will have a unique set of features that you must build every time. It is important to know and understand how to map those features onto PlayFab. This generally starts with the configuration of your game. You will want to store variables in PlayFab, and pull them down on to game clients. But these are not the only types of configurations that you'll want to make. 
Some of the different ways that PlayFab maps onto a game are shown below, giving you the opportunity to find the combination of tools that is just right for your game:
- Title Data – Map variables containing data on PlayFab to data structures in your game clients.
- Entity Objects (aka: Player Data) – Store and retrieve data on a per-player basis.
- Catalogs (Items) - Very useful for storing configuration data about your Items and potentially being able to sell them as virtual goods.
- Groups – Groups are generally used for things like guilds or clans. Groups are arbitrary and have members, roles and other guild-like features.
PlayFab advanced
Mapping your game on top of PlayFab is a great start. But there is more power to be harnessed in PlayFab that can help your LiveOps team create better engagement, retention and monetization mechanics. A majority of these features leverage PlayStream, an event system that drives real-time events. This enables you to perform actions on player behaviors. Actions can occur in a number of ways - either via segmentation, or direct rules that are applied to specific events. Actions might result in a CloudScript being run. Our CloudScript is JavaScript code that lives on a remote server, and you can execute it either from a rule, or directly from a game client. For more information, check out these resources to get you started with Cloud Scripting and Automation on PlayFab:
- Automation – A hub for information on CloudScript, Scheduled Tasks, PlayStream and Action & Rules.
- CloudScript quickstart – Get up and running quickly with your first CloudScript call.
Tip: To leverage rules in the automation system, write custom events in your game which will create a PlayStream event.
Get to know PlayFab features
There is much more you can do with PlayFab. Check out each of our feature areas in the links provided below to find the right feature set for your game:
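If you are working outside the SDKs listed earlier, the login step is just an HTTPS call to the Client API. The Python sketch below shows the general shape of a LoginWithCustomID request; the title ID and custom ID are placeholders, and the exact endpoint and field names should be verified against the PlayFab API reference.

import json
import urllib.request

TITLE_ID = "ABCD"          # placeholder: your own TitleId from Game Manager
CUSTOM_ID = "test-player"  # placeholder: any stable per-player identifier

def login_with_custom_id():
    url = "https://" + TITLE_ID.lower() + ".playfabapi.com/Client/LoginWithCustomID"
    body = json.dumps({
        "TitleId": TITLE_ID,
        "CustomId": CUSTOM_ID,
        "CreateAccount": True,       # create the player on first login
    }).encode()
    request = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    # The returned session ticket / entity token authorizes subsequent API calls.
    return result

session = login_with_custom_id()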
https://docs.microsoft.com/it-it/gaming/playfab/personas/developer
2019-10-14T06:17:27
CC-MAIN-2019-43
1570986649232.14
[array(['images/game-manager-settings-secret-keys.png', 'Game Manager - Settings - Secret Keys - Title ID'], dtype=object) array(['images/liveops-config.png', 'Configuration and Events'], dtype=object) ]
docs.microsoft.com
Adding your TagniFi key The TagniFi Excel add-in requires a key to update models. Follow these steps to add your key in Excel: - Go to My Account and copy your TagniFi key. - Open Excel - Select the TagniFi menu - Select My Key - Paste your TagniFi key into the text box - Select Save
https://docs.tagnifi.com/article/40-adding-your-tagnifi-key
2019-10-14T06:56:59
CC-MAIN-2019-43
1570986649232.14
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/553f4c08e4b0eb143c62b6aa/images/558ea1eee4b01a224b42f12a/file-y6YoLvYP9Z.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/553f4c08e4b0eb143c62b6aa/images/558ea297e4b027e1978eb64e/file-E0acFPQL8G.png', None], dtype=object) ]
docs.tagnifi.com
Category: Core
Helper to manage UndoRedo in the editor or custom tools. It works by storing calls to functions in both 'do' and 'undo' lists. Common behavior is to create an action, then add do/undo calls to functions or property changes, then commit the action.
enum MergeMode
Set a property with a custom value.
Add a 'do' reference that will be erased if the 'do' history is lost. This is useful mostly for new nodes created for the 'do' call. Do not use for resources.
Undo setting of a property with a custom value.
Add an 'undo' reference that will be erased if the 'undo' history is lost. This is useful mostly for nodes removed with the 'do' call (not the 'undo' call!).
Clear the undo/redo history and associated references.
Commit the action. All 'do' methods/properties are called/set when this function is called.
Create a new action. After this is called, do all your calls to add_do_method, add_undo_method, add_do_property and add_undo_property.
Get the name of the current action.
Get the version. Each time a new action is committed, the version number of the UndoRedo is increased automatically. This is useful mostly to check if something changed from a saved version.
© 2014–2018 Juan Linietsky, Ariel Manzur, Godot Engine contributors Licensed under the MIT License.
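For reference, a typical GDScript use of the pattern this class describes (create an action, register do/undo calls and property changes, then commit) looks roughly like the following; the node, property and helper method names are purely illustrative.

var undo_redo = UndoRedo.new()

func move_node(node, new_position):
    undo_redo.create_action("Move node")
    # Register the 'do' and 'undo' values for the same property.
    undo_redo.add_do_property(node, "position", new_position)
    undo_redo.add_undo_property(node, "position", node.position)
    # Optional extra work to run on each do/undo, e.g. refreshing a panel.
    undo_redo.add_do_method(self, "_refresh")
    undo_redo.add_undo_method(self, "_refresh")
    undo_redo.commit_action()

func _refresh():
    pass  # illustrative helper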
https://docs.w3cub.com/godot~3.0/classes/class_undoredo/
2019-10-14T05:25:14
CC-MAIN-2019-43
1570986649232.14
[]
docs.w3cub.com
Customer specific (form) Applies To: Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2, Microsoft Dynamics AX 2012 Feature Pack, Microsoft Dynamics AX 2012 Click Product information management > Common > Released products. Select the item you want to work with. On the Action Pane, on the Manage Inventory tab, in the Batch attributes group, click Customer specific. Use this form to view and maintain batch attributes and batch attribute groups for batch-controlled items that are associated with specific customers. Before you can add a batch attribute item or batch attribute group to a customer, you must first add the batch attributes to the item and item group. Tasks that use this form Add a batch attribute to an item for a customer Add a batch attribute group to an item for a customer Navigating the form The following table provides descriptions for the controls in this form. Fields See also Assign batch attributes to a potency item Announcements: To see known issues and recent fixes, use Issue search in Microsoft Dynamics Lifecycle Services (LCS).
https://docs.microsoft.com/en-us/dynamicsax-2012/customer-specific-form
2019-10-14T06:41:09
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
Building Block: Performance Monitors and Request Throttles
Applies to: SharePoint Foundation 2010
Microsoft SharePoint Foundation has an extensible system for monitoring Windows Server 2008 performance counters and for throttling HTTP requests when those counters indicate that a worker process is too busy to handle all the requests that it is receiving.
Object Model for Performance Monitors and Request Throttles
Most of the classes and members you can use to extend the system are located in the Microsoft.SharePoint.Utilities namespace. The most important classes include the following:
SPHttpThrottleSettings An object of this type provides management and configuration settings for performance monitoring and HTTP request throttling. There is one such object for each Web application.
SPSystemPerformanceCounterMonitor An object of this type monitors the value of a specific Windows Server 2008 performance counter.
SPBucketHealthScoreCalculator A health-score calculator that computes a score for a specific performance value based on the bucket of values into which the value falls. A "bucket" is a subrange of possible values. The health of a worker process, as determined by the health scores of its monitors, controls when the process enters throttling mode and begins blocking certain classes of HTTP requests.
SPRequestThrottleClassifier An object that defines a class of HTTP requests and specifies whether matching requests are throttled when the server is busy, throttled when the server has been continuously busy for at least 60 seconds, or not throttled at all.
The collections of registered monitors and request classifiers are persisted as the HttpThrottleSettings property of the SPWebApplication class.
Areas Related to Performance Monitors and Request Throttles
Building Block: Health Rules
More Information about Development with Performance Monitors and Request Throttles
Detailed information about development with the SharePoint Foundation system of performance monitors and HTTP request throttles is located in the Request Throttling section of this SDK.
https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/ff407263%28v%3Doffice.14%29
2019-10-14T06:12:52
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
DeleteDBClusterParameterGroup
Deletes a specified cluster parameter group. The cluster parameter group to be deleted can't be associated with any clusters.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- DBClusterParameterGroupName The name of the cluster parameter group. Constraints: Must be the name of an existing cluster parameter group. You can't delete a default cluster parameter group. Cannot be associated with any clusters. Type: String Required: Yes
Errors
For information about the errors that are common to all actions, see Common Errors.
- DBParameterGroupNotFound DBParameterGroupName doesn't refer to an existing parameter group. HTTP Status Code: 404
- InvalidDBParameterGroupState The parameter group is in use, or it is in a state that is not valid. If you are trying to delete the parameter group, you can't delete it when the parameter group is in this state. HTTP Status Code: 400
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following:
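For example, with the AWS SDK for Python (Boto3), the call looks like the following; the region and parameter group name are placeholders.

import boto3
from botocore.exceptions import ClientError

client = boto3.client("docdb", region_name="us-east-1")   # region is illustrative

try:
    client.delete_db_cluster_parameter_group(
        DBClusterParameterGroupName="my-custom-param-group"   # placeholder name
    )
except ClientError as error:
    # Expect DBParameterGroupNotFound or InvalidDBParameterGroupState on failure.
    print(error.response["Error"]["Code"], error.response["Error"]["Message"])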
https://docs.aws.amazon.com/documentdb/latest/developerguide/API_DeleteDBClusterParameterGroup.html
2019-10-14T06:08:42
CC-MAIN-2019-43
1570986649232.14
[]
docs.aws.amazon.com
For Each...Next statement
Repeats a group of statements for each element in an array or collection.
Syntax
For Each element In group
[ statements ]
[ Exit For ]
[ statements ]
Next [ element ]
The For...Each...Next statement syntax has these parts:
Remarks
The For…Each block is entered if there is at least one element in group. After the loop has been entered, all the statements in the loop are executed for the first element in group. If there are more elements in group, the statements in the loop continue to execute for each element. When there are no more elements in group, the loop is exited and execution continues with the statement following the Next statement.
Any number of Exit For statements may be placed anywhere in the loop as an alternative way to exit. Exit For is often used after evaluating some condition (for example, If...Then) and transfers control to the statement immediately following Next. If you omit element in a Next statement, execution continues as if element is included. If a Next statement is encountered before its corresponding For statement, an error occurs.
You can't use the For...Each...Next statement with an array of user-defined types because a Variant can't contain a user-defined type.
Example
This example uses the For Each...Next statement to search the Text property of all elements in a collection for the existence of the string "Hello". In the example, MyObject is a text-related object and is an element of the collection MyCollection. Both are generic names used for illustration purposes only.
Dim Found, MyObject, MyCollection
Found = False ' Initialize variable.
For Each MyObject In MyCollection ' Iterate through each element.
    If MyObject.Text = "Hello" Then ' If Text equals "Hello".
        Found = True ' Set Found to True.
        Exit For ' Exit loop.
    End If
Next
See also
Support and feedback
Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
https://docs.microsoft.com/en-us/office/vba/Language/Reference/User-Interface-Help/for-eachnext-statement
2019-10-14T06:01:51
CC-MAIN-2019-43
1570986649232.14
[]
docs.microsoft.com
When the Splunk Enterprise software evaluates configuration files, the files in the $SPLUNK_HOME/etc/slave-apps/[_cluster|<app-name>]/local subdirectories have the highest precedence. For information on configuration file precedence, see Configuration file precedence in the Admin Manual... When the push is successful, the peers use the new set of configurations, now located in their local $SPLUNK_HOME/etc/slave-apps. Leave the files in $SPLUNK_HOME/etc/slave-apps. For more information on how rolling restart works, see Use rolling restart. To set searchable rolling restart as the default mode for rolling restarts triggered by a bundle push, see Use searchable rolling restart with configuration bundle push. When the process is complete, the peers use the new set of configurations. (Optional) Validate the bundle. Use the CLI to view the status of the bundle push. To see how the cluster bundle push is proceeding, use the splunk show cluster-bundle-status command. A rolling restart of the peers occurs when the push includes any of the indexes.conf changes described in Determine which indexes.conf changes require restart, or when you delete an existing app from the configuration bundle. For more information, see When to restart Splunk Enterprise after a configuration file change in the Admin Manual. Use searchable rolling restart with configuration bundle push Searchable rolling restart lets you perform a rolling restart of peer nodes with minimal interruption of in-progress searches. You can set searchable rolling restart in server.conf as the default mode for all rolling restarts triggered by a configuration bundle push. For instructions, see Set searchable rolling restart as default mode for bundle push. For more information, see Perform a rolling restart of an indexer cluster. Roll back the configuration bundle You can roll back the configuration bundle to the previous version. This action allows you to recover from a misconfigured bundle. The rollback action toggles the most recently applied configuration bundle on the peers with the previously applied bundle. You cannot roll back beyond the previous bundle. For example, say that the peers have an active configuration bundle "A" and you apply a configuration bundle "B", which then becomes the new active bundle. If you discover problems with B, you can roll back to bundle A, and the peers will then use A as their active bundle. If you roll back a second time, the peers will return again to bundle B. If you roll back a third time, the bundles will return again to A, and so on. The rollback action always toggles the two most recent bundles. To roll back the configuration bundle, run this command from the master node: splunk rollback cluster-bundle As with splunk apply cluster-bundle, this command initiates a rolling restart of the peer nodes, when necessary. You can use the splunk show cluster-bundle-status command to determine the current active bundle. You can use the cluster/master/info endpoint to get information about the current active and previous active bundles. If the master-apps folder gets corrupted, resulting in rollback failure, a message specifying the failure and the workaround appears on the master node dashboard, as well as in splunkd.log. To remediate, follow the instructions in the message. This includes removing the $SPLUNK_HOME/etc/master-apps.dirty marker file, which indicates failure, and manually copying over the active bundle, as specified in the message. Then fix the errors and run splunk apply cluster-bundle from the master.
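As a rough illustration of the cluster/master/info endpoint mentioned above, here is a sketch that queries it over the management port with the requests library. The host name, port 8089, credentials, and certificate handling are assumptions to adapt to your own deployment, and the exact field names in the response should be checked against what your version returns.

```python
import requests

# Assumed values; replace with your master node's host and admin credentials.
MASTER = "https://master.example.com:8089"
AUTH = ("admin", "changeme")

# Ask for JSON instead of the default Atom/XML output.
resp = requests.get(
    f"{MASTER}/services/cluster/master/info",
    params={"output_mode": "json"},
    auth=AUTH,
    verify=False,  # only for test instances with self-signed certificates
)
resp.raise_for_status()

content = resp.json()["entry"][0]["content"]
# The endpoint reports details about the bundles on the master, including
# the active bundle; .get() is used because field names can vary by version.
print(content.get("active_bundle"))
print(content.get("previous_active_bundle"))
```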
https://docs.splunk.com/Documentation/Splunk/7.3.8/Indexer/Updatepeerconfigurations
2021-02-25T08:14:26
CC-MAIN-2021-10
1614178350846.9
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Creating Keyframes in the Function View You can add keyframes directly on the graph using the Function view. - In the Timeline view, open the drawing or layer's parameters by clicking the Expand button or pressing Alt + F. - Double-click on the parameter layer name to open the Function editor window, or click once on the parameter layer to display it in the Function view's left sidebar. If you are using the Function view, you must click on the function name from the sidebar list to view it in the graph display region. - In the Function editor, do one of the following: - In the graph section, click on the frame where you want to make changes. - In the Frame field, enter the frame number. In the graph display area, the red playhead moves to that frame number. - In the Function editor, click the Add Keyframe button. - Click on the newly created keyframe and drag it up to increase the value of the function or down to decrease the value. Depending on the selected function, this could increase or decrease the width of the object (scale_x) or change the object's vertical position (position_y). Pull on the handles to create non-linear transition speeds between keyframes. - If you do not like the changes you just made, select and delete the new keyframe by pressing Del or clicking the Delete Keyframe button. You can delete an existing keyframe by using the same process.
https://docs.toonboom.com/help/harmony-17/essentials/motion-path/create-keyframe-function-view.html
2021-02-25T07:51:09
CC-MAIN-2021-10
1614178350846.9
[array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Stage/Paths/HAR11/HAR11_animationPaths_changeKF.png', None], dtype=object) ]
docs.toonboom.com
How to Deploy the RMS Client Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 If you are using Microsoft Windows XP or Microsoft Windows 2000, the Rights Management Services (RMS) client must be installed before you can use any RMS features, such as Information Rights Management in Microsoft® Office System 2003 and the Rights Management Add-on for Internet Explorer. The RMS client is built into Windows Vista® and Windows® 7. Many organizations choose to control the deployment of the client software in their organization. Either Systems Management Server (SMS) or Group Policy can be used to deploy the RMS with Service Pack 2 (SP2) client. Before you begin your deployment, see to download the RMS client. Important The RMS client has been integrated into Windows Vista. Therefore, a separate installation is no longer required. Extracting the Installation Files After you download the WindowsRightsManagementServicesSP2-KB917275-Client-ENU.exe file, you must extract the Microsoft® Windows® Installer files from the executable package. You can use the following command at a command prompt to do this: WindowsRightsManagementServicesSP2-KB917275-Client-ENU.exe /x <path> where <path> is the target directory in which you want to place the extracted files. Running this command extracts the following files to the target directory you specified: Bootstrap.exe This is a wrapper file that is used by the executable file to install the other files included. It is not used when installing the RMS with SP2 client by using SMS or Group Policy. MSDrmClient.msi This is the installation file for the RMS with SP2 client. This installation uninstalls any previous version of the RMS client on a computer. This program should be installed on client computers first. RMClientBackCompat.msi This is the installation file that identifies the new RMS with SP2 client to RMS-enabled applications (such as Microsoft Office Professional 2003 or 2007 Microsoft Office System) that are dependent on the previous version of the RMS client so that the RMS with SP2 client can be used instead. This program should be installed on client computers after the MSDrmClient.msi has been successfully installed. Note Whichever installation method you choose to implement, ensure that both Windows Installer files are successfully installed. If an error occurs that prevents installation of MSDrmClient.msi, do not install RMClientBackCompat.msi. Deploy RMS Client by Using an Unattended Installation Extracting the files to install the Windows Installer files is optional. You can also deploy the RMS client by using an unattended installation method. You can use the following command at a command prompt to do this:. Note Because this is an unattended installation, the installer does not inform you when it is complete. Unattended installations are usually run in a batch or script file. Deploy RMS Client Using SMS To deploy the RMS client by using SMS Open the SMS Administrator console. Expand the site database you want to use. In the left pane, right-click Packages, choose New, and then click Package From Definition. Create packages from the MSDRMClient.msi and the RMClientBackCompat.msi files. The packages should have the following properties: General: For Command line, type the following: msiexec.exe /q ALLUSERS=2 /m MSIDGHOG /i "<file_name>.msi" Note MSIDGHOG is a random value. Replace <file_name> with the name of the Windows Installer file that this package will be installing. 
For Run, select the Hidden option. For After running, select the No action required option. For Category, select the Administrative Software option. Requirements: For Estimated disk space, type 445 KB. For Maximum allowed run time, select Unknown. Select the This program can run on any platform check box. Environment: For Program can run, select the Whether or not a user is logged on option. For Run mode, select the Run with administrative rights option. For Drive mode, select the Runs with UNC name option. Advanced: Clear the Run another program first check box. Clear the Suppress program notification check box under When the program is assigned to a computer. Clear the Disable this program on computers where it is advertised check box. Set the Access Accounts and Distribution Points as appropriate for your organization. Create an advertisement to the appropriate collection. It is recommended that you use the Per-system unattended program in an SMS deployment. Schedule this advertisement according to the needs of your organization. Deploy RMS Client by using Group Policy You can use the Software Installation and Maintenance feature of Group Policy to deploy the RMS client on target computers. Group Policy is the recommended method for actively managing the deployment of the RMS clients for small- to medium-sized organizations or ones who are not already using a corporate update management solution such as Systems Management Server 2003. When you use Group Policy to distribute a program, you can assign the program to computers. The program is installed when the computer starts and is available to all users who log on to the computer. For more information about Group Policy, see Designing a Group Policy Infrastructure (). This procedure assumes you are using the Group Policy Management Console (GPMC). To download GPMC, see Group Policy Management Console with Service Pack 1 (). The following procedure provides a quick guide for administrators unfamiliar with Group Policy–based distribution of software. You can modify these steps as necessary to meet the needs of your organization. To deploy the RMS client by using Group Policy On a domain controller, open the Active Directory Users and Computers Microsoft Management Console (MMC) snap-in. Create a new organizational unit (OU) or select an existing OU. If you created a new OU, add the computers on which you want to install the RMS Client. Right-click the OU, and then choose Properties. Select the Group Policy tab. Click New to create a new Group Policy object (GPO). Click Edit to edit the new GPO. In the console tree, expand Computer Configuration, Software Settings, and then select Software installation. Right-click in the details pane, click New, and then click Package. Provide a path to the MSDRMclient.msi file on a network shared folder that the client computers can access. Click OK to assign the package. Repeat steps 5 through 10 to create a GPO that installs the RMClientBackCompat.msi file. Important You cannot use Group Policy to distribute the RMClientBackCompat.msi file if the domain controller is running Windows Server 2008 or Windows Server 2008 R2. Note These steps are provided only as guidance for users that are not experienced in using Group Policy. If you are an experienced Group Policy administrator, you can follow your own operational procedures to distribute the MSDrmClient.msi package. 
In addition, these steps are for a domain controller running Windows Server 2003 — the process and terminology might be different on a Windows 2000 domain. Upgrading from a Previous Version It is possible to use an unattended installation method within a script that detects whether the RMS with SP2 client is installed. If the RMS with SP2 client is not installed, the script either upgrades an existing older client or installs the RMS with SP2 client; otherwise, it reports that no installation is required. The script is as follows:

Set objShell = Wscript.CreateObject("Wscript.Shell")
Set objWindowsInstaller = Wscript.CreateObject("WindowsInstaller.Installer")
Set colProducts = objWindowsInstaller.Products

For Each product In colProducts
    strProductName = objWindowsInstaller.ProductInfo(product, "ProductName")
    If strProductName = "Windows Rights Management Client with Service Pack 2" Then
        strInstallFlag = "False"
        Exit For
    Else
        strInstallFlag = "True"
    End If
Next

If strInstallFlag = "True" Then
    objShell.run "WindowsRightsManagementServicesSP2-KB917275-Client-ENU.exe -override 1 /I MsDrmClient.msi REBOOT=ReallySuppress /q -override 2 /I RmClientBackCompat.msi REBOOT=ReallySuppress /q "
Else
    wscript.echo "No installation required"
End If

Note This script does not work with Windows Vista because the RMS client is built into the operating system.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-rights-management-services-rms/cc747749(v=ws.10)
2018-05-20T12:51:55
CC-MAIN-2018-22
1526794863410.22
[]
docs.microsoft.com
This chapter provides guidelines and warnings for preparing for changes expected in a future release, such as announcements of deprecated interfaces. You are not required to make changes related to the topics in this section at this time, but you should plan to do so in the future. In MarkLogic 9, the Packaging API has been deprecated. It will be removed from the product in MarkLogic 10. The following table lists the deprecated endpoints and the new alternative. The XQuery library modules in the info and infodev namespaces are deprecated as of MarkLogic 9 and will be removed from the product in a future release. For example, the functions info:ticket and infodev:ticket-create are deprecated. The search:parse XQuery function and the search.parse Server-Side JavaScript function can return an annotated cts:query if you pass in "cts:annotated-query" as the output format parameter value. As of MarkLogic 9, use of "cts:annotated-query" is deprecated. Support for this format will be removed in a future release. If you currently use the annotated query output as an intermediate step in a transformation, you should use the structured query ("search:query") output format instead. Runtime modification of queries is a primary use case for structured query. For more details, see Searching Using Structured Queries in the Search Developer's Guide. If you currently use the annotated query output format to recover the original query text using search:unparse, you should cache the original query text yourself. Customizing the string query grammar through the Search API grammar query option is now deprecated. Support for this feature will be removed in a future release. If your application currently relies on a Search API grammar customization, you should consider alternatives such as the following: The -tolerate_errors option of the mlcp import command is deprecated (and ignored) as of MarkLogic 9.0-2. The option will be removed in a future release. Mlcp now always tolerates errors. The XQuery prolog option xdmp:transaction-mode is deprecated as of MarkLogic 9.0-2. Use the xdmp:commit and xdmp:update prolog options instead. Note that the new prolog options differ from xdmp:transaction-mode in that they affect only the transaction created after their declaration, whereas xdmp:transaction-mode settings persist across an entire session. The following table illustrates the correspondence between the old and new option settings. Note that the default values for xdmp:commit and xdmp:update are both 'auto', so you do not need to set this value explicitly in most cases. For more details, see xdmp:update and xdmp:commit in the XQuery and XSLT Reference Guide. The transaction-mode option of the xdmp:eval XQuery function and the xdmp.eval JavaScript function is deprecated as of MarkLogic 9.0-2. Use the commit and update options instead. For more details, see the function reference documentation for xdmp:eval (XQuery) and xdmp.eval (JavaScript). This option deprecation (and the alternative option settings) applies to the following functions: The following table illustrates the correspondence between the old and new option settings: Use of Session.setTransactionMode to specify commit semantics and transaction type is deprecated as of MarkLogic 9.0-2. This function will be removed in a future version. Use the new Session.setAutoCommit and Session.setUpdate methods instead. The following table illustrates how to replace calls to setTransactionMode with equivalent calls to setAutoCommit and setUpdate.
release 9.0-5 and will be removed from the product in a future release.
http://docs.marklogic.com/guide/relnotes/chap5
2018-05-20T11:52:49
CC-MAIN-2018-22
1526794863410.22
[]
docs.marklogic.com
FAQs I'm getting Unable to find path information errors. In order for sublime-jekyll to create new posts for your static site, it must know where to put them. There are 2 required settings that must be set before you use this package: jekyll_posts_path and jekyll_drafts_path. If those are not set in either your User settings file or your Project settings file, sublime-jekyll will fail with a MissingPathException. What happened to all the syntax files? Syntax files in Sublime Text suck - period. They were becoming really difficult to manage and debug, and in my opinion they weren't all that good anyway. I have moved them to a separate repository where folks can feel free to push pull requests for any bugs or fixes. I don't plan on maintaining this repository with proactive updates (outside of community pull requests). If you want my recommendation for a syntax package, install Markdown Extended or MarkdownEditing - both are very good and well maintained. Where do I put my Project settings? When you create a new project in Sublime Text, you are asked to save a file with a suffix of .sublime-project. By default, that file has some minimal settings, and allows you to control things about your specific project (project documentation). To add Project specific settings for sublime-jekyll, you can just add your Jekyll settings under the "settings" key in your .sublime-project file.

{
    "folders": [
        {
            "follow_symlinks": true,
            "path": "/Users/username/site/"
        }
    ],
    "settings": {
        "Jekyll": {
            "jekyll_posts_path": "/Users/username/site/_posts",
            "jekyll_drafts_path": "/Users/username/site/_drafts"
        }
    }
}

How do I log a bug? Bugs suck - and I'm sorry you had to find one. I'm typically pretty responsive to fixing them if you help me gather as much information as possible. First, enable debug mode for sublime-jekyll by setting the jekyll_debug setting to true, and restart Sublime Text. Next, try to reproduce the bug again. If it still happens, open up the Sublime console (Ctrl + ` or View > Show Console) and copy the Jekyll-specific debugging output (it should have a Jekyll or Jekyll Utility prefix). Check the list of open issues on the GitHub issue tracker for similar problems with other users. If you find one, add your name to it. If no issues exist, open a new one being sure to include the following information: - A summary or description of the specific issue - Your version of Sublime Text (2 or 3, as well as build) - Your operating system (Windows, OS X, Linux) - The debug output of the Sublime console Lastly, be open to us asking some questions about your bug as we attempt to reproduce and squash it!
http://sublime-jekyll.readthedocs.io/en/latest/faq.html
2018-05-20T12:04:39
CC-MAIN-2018-22
1526794863410.22
[]
sublime-jekyll.readthedocs.io
Checking NuGet package vulnerabilities with OWASP SafeNuGet Note: This method of scanning vulnerabilities is outdated. Check out our integrated vulnerability report for a better way of analyzing potential vulnerabilities. Use of libraries with known vulnerabilities can be an issue for software and components you create: check the excellent whitepaper "The Unfortunate Reality of Insecure Libraries". In the OWASP Top 10 2013, consuming vulnerable packages is listed under A9 Using Known Vulnerable Components. Automatic checking for known vulnerabilities can be done: OWASP has released a NuGet package which is able to check known vulnerabilities in other NuGet packages. The SafeNuGet package contains an MSBuild task which will warn you about consuming such packages. Installing SafeNuGet into a project Installing SafeNuGet into a project is as easy as installing any other NuGet package: Install-Package SafeNuGet This will add a .targets file to all projects in the open solution, adding a check for possibly vulnerable packages during build. How are potentially vulnerable packages shown? A repository with vulnerable packages and the reason for that can be found on the SafeNuGet GitHub project. When running a build which references vulnerable NuGet packages, the warnings list will contain some information about this as well as a link with some explanation: When a library referencing a potentially unsafe package is built using MyGet Build Services, a warning will also be displayed in the build log: Can my build fail when such packages are consumed? It would be great if the build would fail entirely when such a package is found. This can be done.
https://docs.myget.org/docs/how-to/checking-nuget-package-vulnerabilities-with-owasp-safenuget
2018-05-20T11:48:52
CC-MAIN-2018-22
1526794863410.22
[array(['Images/owasp-warning.png', 'OWASP SafeNuGet'], dtype=object) array(['Images/build-services-owasp.png', 'MyGet Build Services using OWASP SafeNuGet'], dtype=object)]
docs.myget.org
glReadPixels — read a block of pixels from the frame buffer. format Specifies the format of the pixel data. The following symbolic values are accepted: GL_ALPHA, GL_RGB, and GL_RGBA. type Specifies the data type of the pixel data. Must be one of GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT_5_6_5, GL_UNSIGNED_SHORT_4_4_4_4, or GL_UNSIGNED_SHORT_5_5_5_1. data Returns the pixel data. glReadPixels returns values from each pixel with lower left corner at (x + i, y + j) for 0 <= i < width and 0 <= j < height. This pixel is said to be the ith pixel in the jth row. Pixels are returned in row order from the lowest to the highest row, left to right in each row. format specifies the format of the returned pixel values. The final values are clamped to the range [0, 1]. Finally, the components are converted to the proper format, as specified by type. When type is GL_UNSIGNED_BYTE, each component is multiplied by 2^8 - 1. When type is GL_UNSIGNED_SHORT_5_6_5, GL_UNSIGNED_SHORT_4_4_4_4, or GL_UNSIGNED_SHORT_5_5_5_1, each component is multiplied by 2^N - 1, where N is the number of bits in the bitfield. Return values are placed in memory as follows. If format is GL_ALPHA, a single value is returned and the data for the ith pixel in the jth row is placed in location j * width + i. GL_INVALID_ENUM is generated if format or type is not an accepted value. GL_INVALID_OPERATION is generated if format and type are neither GL_RGBA and GL_UNSIGNED_BYTE, respectively, nor the format/type pair queried from GL_IMPLEMENTATION_COLOR_READ_FORMAT and GL_IMPLEMENTATION_COLOR_READ_TYPE. GL_INVALID_FRAMEBUFFER_OPERATION is generated if the currently bound framebuffer is not framebuffer complete (i.e. the return value from glCheckFramebufferStatus is not GL_FRAMEBUFFER_COMPLETE). glGet with argument GL_IMPLEMENTATION_COLOR_READ_FORMAT or GL_IMPLEMENTATION_COLOR_READ_TYPE glGet with argument GL_PACK_ALIGNMENT glCheckFramebufferStatus, glPixelStorei Copyright © 1991-2006 Silicon Graphics, Inc. This document is licensed under the SGI Free Software B License. For details, see.
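To make the packing and format/type choices concrete, here is a small sketch using the PyOpenGL binding, which is an assumption on my part (this page documents the C API, but the binding exposes the same glReadPixels entry point). It assumes a current GL context already exists and that a 256 x 256 region starting at the lower-left corner is read back as RGBA bytes.

```python
from OpenGL.GL import (
    GL_PACK_ALIGNMENT, GL_RGBA, GL_UNSIGNED_BYTE,
    glPixelStorei, glReadPixels,
)

WIDTH, HEIGHT = 256, 256  # assumed size of the region to read back

# Pack rows tightly; the default pack alignment of 4 would pad rows whose
# byte length is not a multiple of 4.
glPixelStorei(GL_PACK_ALIGNMENT, 1)

# GL_RGBA with GL_UNSIGNED_BYTE is always an accepted combination; other
# pairs must match GL_IMPLEMENTATION_COLOR_READ_FORMAT/_TYPE.
pixels = glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE)

# Four values (R, G, B, A) per pixel, returned row by row from the lowest
# row to the highest.
data = bytes(pixels)
print(len(data))  # expect WIDTH * HEIGHT * 4
```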
http://docs.gl/es2/glReadPixels
2018-05-20T11:57:35
CC-MAIN-2018-22
1526794863410.22
[]
docs.gl
The OpenClinica data model is designed to mirror the structure and nomenclature of the CDISC ODM standard as closely as possible. Key tables in the physical schema represent studies, study subjects, CRFs, items, item data, and other objects, with the relationships between them modelled as foreign keys. The data model follows an 'Entity Attribute Value', or EAV, approach, where data values are saved as individual records in a 'long & skinny' table (item_data) with the entity name and attributes (metadata and other properties) related to the value through foreign key relationships [1]. The diagram above is a 'cheat sheet' version of the OpenClinica logical model [2], showing key tables in the schema and their relationships. Note that shorthand abbreviations are used rather than the full table names, so for example 'IG' is used in place of 'item_group'. The arrows represent foreign keys, pointing toward the primary keys. The circled stars mark repeating objects (those with an 'ordinal' column). The lines through IGM and IFM indicate that they are ternary: each of their instances describes a 1:1 relation of a CRFV and an Item. For a more comprehensive diagram of the current physical data model, see. From here you can also use the tabs at the top of the page to navigate to more detailed technical views of the database objects. Alternatively, a technical report on the OpenClinica 3.1 Database Model can be downloaded as a PDF here. Here's a mapping[3] of how key tables in the data model map to CDISC ODM, first for study metadata: and next, for clinical data that is part of a study: In principle, the OpenClinica data model is designed to closely mirror the structure and nomenclature of CDISC ODM. In practice there are deviations, either where the logical design of OpenClinica is different from ODM or where the physical implementation of the data model is different from ODM's XML structure. While we try to avoid both types of deviations, they are sometimes unavoidable. There are also some deviations that are simply legacy artifacts, without any really good reason for departing from the standard. Where these deviations do exist, we look for ways to refactor the database schema to ensure better harmonization with ODM, since we believe this makes OpenClinica more consistent, more easily understood, and easier to develop for. [1] - For more on EAV data models, see Wikipedia and Nadkarni, et al, Organization of heterogeneous scientific data using the EAV/CR representation. Journal of the American Medical Informatics Association 1999 Nov-Dec;6(6):478-93. Abstract. [2] - Many thanks to Marco van Zwetselaar of Kilimanjaro Clinical Research Institute for contributing this diagram on the OpenClinica users mailing list. [3] - See the OpenClinica Blog for more detail on how the data model maps to ODM.
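To give a feel for what the EAV layout described above means in practice, here is a hypothetical sketch that pulls the 'long & skinny' item_data rows for one event CRF together with their item metadata and pivots them into a wide record. It assumes a PostgreSQL OpenClinica database reachable with psycopg2, and the specific column names used in the join (item_id, value, ordinal, event_crf_id, name) are assumptions inferred from the description above rather than a verified schema listing.

```python
import psycopg2

# Placeholder connection details.
conn = psycopg2.connect("dbname=openclinica user=clinica password=secret host=localhost")

EVENT_CRF_ID = 42  # hypothetical event CRF of interest

with conn, conn.cursor() as cur:
    # Values live in item_data; their metadata (the item name) is reached
    # through the item_data -> item foreign key, as in the EAV model above.
    cur.execute(
        """
        SELECT i.name, d.value, d.ordinal
        FROM item_data d
        JOIN item i ON i.item_id = d.item_id
        WHERE d.event_crf_id = %s
        ORDER BY i.name, d.ordinal
        """,
        (EVENT_CRF_ID,),
    )
    # Pivot the long-and-skinny rows into one wide dict per ordinal
    # (the ordinal distinguishes rows of repeating item groups).
    record = {}
    for name, value, ordinal in cur.fetchall():
        record.setdefault(ordinal, {})[name] = value

print(record)
```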
http://docs.openclinica.com/3.1/technical-documents/openclinica-3.1-database-model
2018-05-20T11:52:19
CC-MAIN-2018-22
1526794863410.22
[]
docs.openclinica.com
sp_generatefilters (Transact-SQL) Creates filters on foreign key tables when a specified table is replicated. This stored procedure is executed at the Publisher on the publication database. Transact-SQL Syntax Conventions Syntax sp_generatefilters [ @publication = ] 'publication' Arguments [ @publication = ] 'publication' Is the name of the publication to be filtered. publication is sysname, with no default. Return Code Values 0 (success) or 1 (failure) Remarks sp_generatefilters is used in merge replication. Permissions Only members of the sysadmin fixed server role or the db_owner fixed database role can execute sp_generatefilters.
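For illustration, here is a minimal sketch of invoking the procedure from Python with pyodbc. The driver name, server, database, credentials, and publication name are all placeholders, and it assumes you connect to the publication database as a member of sysadmin or db_owner.

```python
import pyodbc

# Placeholder connection details for the Publisher's publication database.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=publisher.example.com;DATABASE=SalesPublicationDB;"
    "UID=repl_admin;PWD=secret"
)

cur = conn.cursor()
# Generates filters on foreign key tables for the named merge publication.
cur.execute("EXEC sp_generatefilters @publication = ?", "SalesPublication")
conn.commit()

cur.close()
conn.close()
```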
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms173525(v=sql.100)
2018-05-20T12:34:13
CC-MAIN-2018-22
1526794863410.22
[]
docs.microsoft.com
Create a group Set up groups and assign the necessary roles and users. The users in the group inherit the roles of the group, so you do not have to assign roles to each user separately. Before you begin Role required: admin About this task There are a few good practices when creating groups: Create one group for administrators and assign the admin role to this group only. Create as many groups as needed in your organization. For example, create a staff group for each geographic location or function, such as building maintenance or building security. Assign the necessary users to those groups, and then assign the staff role to those groups. Procedure Navigate to User Administration > Groups. Click New. Fill in the fields on the form, as appropriate. See the field descriptions for an explanation of each field.
https://docs.servicenow.com/bundle/kingston-customer-service-management/page/product/planning-and-policy/task/t_CreateAGroup.html
2018-05-20T12:13:10
CC-MAIN-2018-22
1526794863410.22
[]
docs.servicenow.com
After you create a template, you can clone it to create another template. Templates are master copies of virtual machines that let you create ready-for-use virtual machines. You can make changes to the template, such as installing additional software in the guest operating system, while preserving the state of the original template. Prerequisites Verify that you have the following privileges: on the source template. on the folder where the template is created. on all datastores where the template is created.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-B30319D9-80F0-4780-9CBF-A43CE9F977A1.html
2018-05-20T12:03:15
CC-MAIN-2018-22
1526794863410.22
[]
docs.vmware.com
When opening the Visual Page Editor on Vista, users were previously presented with two security check dialogs caused by regedit being used to check for XULRunner installations. This check is now removed, and Vista users should have two fewer UAC dialogs to worry about. We did another round of speed improvements to the Visual Page Editor, and typing directly in the visual part is now more fluent. There is now a CSS class wizard that can be used to add or edit CSS styles in existing CSS style files. This wizard is also utilized in the Visual Page Editor to allow editing of a tag's external styles directly. The visual page editor now has an option to show non-visual tags. When enabled, a small grey box with the non-visual tags will be drawn in the visual part to more easily see where the boundaries are and to provide a way to select them more easily by just clicking. It has always been possible to set up a template for unknown tags by right-clicking them in the Visual Page Editor. We have improved the dialog for defining the template to allow you to select which HTML tag to use, and instead of restricted styling options you can now use CSS to define the style. The Visual Page Editor has a preference for showing the raw EL or rendering the translated string for resource bundles. In previous versions there were problems with rendering the resource bundles when the content came from nested includes. That is now fixed, making it available at all levels. The Richfaces tags <rich:insert> and <a4j:include> now have OpenOn support. This means you can use F3 or Ctrl+Click on the src attribute and navigate directly to the resource. EL code completion has been enabled for more JSF Core tags. Drag'n'Drop of components now shows a caret to indicate where the component will be dropped. Richfaces tags now have unique icons in the JBoss Tools palette instead of all using the default tag icon. rich:hotKey, rich:ajaxValidator, rich:graphValidator, rich:beanValidator, and rich:extendedDataTable are now supported in the Visual Page Editor. We also added initial support for some new tags from JSF 2: h:body, h:head, h:outputscript, and h:outputstyle. Richfaces 3.2 is now fully supported in code completion and the visual page editor. Code completion now has icons illustrating where completions come from. Currently we distinguish between resource bundles, Seam, and JSF components. Managed beans listed in faces-config.xml in jars are now being loaded and available in code completion. rich:editor and a4j:queue are now supported in the visual editor.
http://docs.jboss.org/tools/whatsnew/vpe/vpe-news-3.0.0.GA-full.html
2018-05-20T12:01:14
CC-MAIN-2018-22
1526794863410.22
[]
docs.jboss.org
The RESULT_CACHE Option¶ As of Oracle Database 11g, the function result cache has entered the caching fray. It offers the benefits of just-in-time package-level caching (and more!) but without the hassle. All you have to do is add the RESULT_CACHE option to the function declaration section and that’s it. It couldn’t be much easier! The function result cache is ideal for data from tables that are queried from more frequently than they are modified, for instance lookup tables and materialized views (between refreshes). When a table’s data changes every few seconds or so, the function result cache may hurt performance as Oracle needs to fill and clear the cache before the data can be used many times. On the other hand, when a table (or materialized view) changes, say, every 10 minutes or more, but it is queried from hundreds or even thousands of times in the meantime, it can be beneficial to cache the data with the RESULT_CACHE option. Recursive functions, small lookup functions, and user-defined functions that are called repeatedly but do not fetch data from the database are also ideal candidates for the function result cache. With Oracle Database 11gR2, the RELIES ON clause is deprecated, which means that you don’t even have to list the dependencies: Oracle will figure everything out for you! The database does not cache SQL statements contained in your function. It ‘merely’ caches input values and the corresponding data from the RETURN clause. Oracle manages the function result cache in the SGA. In the background. Whenever changes are committed to tables that the cache relies on, Oracle automatically invalidates the cache. Subsequent calls to the function cause the cache to be repopulated. Analogously, Oracle ages out cached results whenever the system needs more memory, so you, the database developer, are completely relieved of the burden of designing, developing, and managing the cache. Since the function result cache is in the SGA rather than the PGA, it is somewhat slower than PGA-based caching. However, if you have hidden SELECT statements in functions, the SGA lookups thanks to the function result cache beat any non-cached solutions with context switches hands down. Sounds too good to be true? It is. First, the function result cache only applies to stored functions not functions defined in the declaration section of anonymous blocks. Second, the function cannot be a pipelined table function. Third, the function cannot query from data dictionary views, temporary tables, SYS-owned tables, sequences, or call any non-deterministic PL/SQL function. Moreover, pseudo-columns (e.g. LEVEL and ROWNUM) are prohibited as are SYSDATE and similar time, context, language (NLS), or GUID functions. The function has to be free of side effects, that is, it can only have IN parameters; IN OUT and OUT parameters are not allowed. Finally, IN parameters cannot be a LOB, REF CURSOR, collection, object type, or record. The RETURN type can likewise be none of the following: LOB, REF CURSOR, an object type, or a record or collection that contains a LOB, REF CURSOR, and/or an object type. The time to look up data from the function result cache is on par with a context switch or a function call. So, if a PL/SQL function is almost trivial and called from SQL, for instance a simple concatenation of first_name and last_name, then the function result cache solution may be slower than the same uncached function. 
Inlining, or rather hard coding, of simple business rules seems to be even faster as demonstrated by Adrian Billington, although we hopefully all agree that hard coding is a bad practice, so we shall not dwell on these results and pretend they never existed. Beware that the execution plan of a SQL statement does not inform you that a function result cache can or even will be used, in clear contrast to the query result cache. The reason is both simple and obvious: RESULT_CACHE is a PL/SQL directive and thus not known to the SQL engine. Latches The result cache is protected by a single latch, the so-called result cache (RC) latch. Since latches are serialization devices, they typically stand in the way of scalability, especially in environments with a high degree of concurrency, such as OLTP applications. Because there is only one latch on the result cache, only one session can effectively create fresh result cache entries at any given moment. A high rate of simultaneous changes to the result cache is therefore detrimental to the performance of a database. Similarly, setting the parameter RESULT_CACHE_MODE to FORCE is a guarantee to bring a database to its knees, as every single SQL statement will be blocking the RC latch. Scalability issues have dramatically improved from 11gR1 to 11gR2, but latch contention still remains an issue when rapidly creating result sets in the cache. It should be clear that the function result cache only makes sense for relatively small result sets, expensive SQL statements that do not experience high rates of concurrent execution, and SQL code that is against relatively static tables. IR vs DR Units The default mode of PL/SQL units is to run with the definer's rights (DR). Such units can benefit from the function result cache without further ado. Invoker's rights (IR) subprograms, created with AUTHID CURRENT_USER rather than AUTHID DEFINER, cannot use the function result cache, and an attempt at compilation leads to a PLS-00999 error, at least prior to DB12c. The reason is that a user would have been able to retrieve data cached by another user, to which the person who originally requested the data should not have access because their privileges are not sufficient. This restriction has been lifted with 12c, and the security implications have been resolved. The solution to the security conundrum is that the function result cache is per user for IR units. This means of course that the RESULT_CACHE option is only useful for functions that the same user calls many times with the same input values. Memory Consumption That's all very nice, but how much memory does the function result cache gobble up? A DBA can run EXEC DBMS_RESULT_CACHE.MEMORY_REPORT(detailed => true) to see detailed information about the memory consumption. However, the purpose of these pages is to help fellow developers to learn about optimization techniques, which means that DBMS_RESULT_CACHE is out of the question. You can check the UGA and PGA memory consumption by looking at the data for your session in the session statistics views (one way to query them is sketched at the end of this section). You can provide the name of the statistic you're interested in. A full list of statistics can be found in the official documentation. For example, 'session uga memory' or 'session pga memory'. These are current values, so you'd check the metrics before and after you run your function a couple of times to see the PGA and UGA memory consumption of your function. Obviously, there will be no (or very little) PGA consumption in the case of the function result cache.
There are also several solutions available that calculate the various statistics for you. They typically work by checking the metrics, running the function several times, and then checking the metrics again. In case you need help configuring the function result cache, here's a helping hand.
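As one way to take those before/after readings, here is a sketch using the python-oracledb driver. It assumes your session can query V$MYSTAT and V$STATNAME (which normally requires extra privileges), the connection details are placeholders, and some_pkg.cached_lookup is a hypothetical function compiled with the RESULT_CACHE option.

```python
import oracledb

# The two statistics named in the text above, read from the standard
# per-session statistics views.
STATS_SQL = """
    SELECT sn.name, ms.value
    FROM v$mystat ms
    JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE sn.name IN ('session uga memory', 'session pga memory')
"""

def session_memory(cur):
    cur.execute(STATS_SQL)
    return dict(cur.fetchall())

conn = oracledb.connect(user="demo", password="demo", dsn="localhost/xepdb1")  # placeholders
cur = conn.cursor()

before = session_memory(cur)
for _ in range(1000):
    # Hypothetical function declared with RESULT_CACHE; repeated calls with
    # the same argument should be served from the function result cache.
    cur.execute("SELECT some_pkg.cached_lookup(:1) FROM dual", [42])
    cur.fetchone()
after = session_memory(cur)

for name in before:
    print(name, after[name] - before[name])
```

The difference printed for each statistic is the extra UGA/PGA memory the session used across the repeated calls.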
http://oracle.readthedocs.io/en/latest/plsql/cache/alternatives/result-cache.html
2018-05-20T11:38:18
CC-MAIN-2018-22
1526794863410.22
[]
oracle.readthedocs.io
Viewing Volume Information You can view descriptive information for your Amazon EBS volumes in a selected region at a time in the AWS Management Console. You can also view detailed information about a single volume, including the size, volume type, whether the volume is encrypted, which master key was used to encrypt the volume, and the specific instance to which the volume is attached. View information about an EBS volume using the console Open the Amazon EC2 console at. In the navigation pane, choose Volumes. To view more information about a volume, select it. In the details pane, you can inspect the information provided about the volume. To view what EBS (or other) volumes are attached to an Amazon EC2 instance Open the Amazon EC2 console at. In the navigation pane, choose Instances. To view more information about an instance, select it. In the details pane, you can inspect the information provided about root and block devices. To view information about an EBS volume using the command line You can use one of the following commands to view volume attributes. For more information, see Accessing Amazon EC2. describe-volumes (AWS CLI) Get-EC2Volume (AWS Tools for Windows PowerShell)
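In the same spirit as the CLI and PowerShell commands above, here is a sketch using the AWS SDK for Python (Boto3). It assumes configured credentials and a default region, and simply prints a few of the attributes discussed on this page for every volume it can see.

```python
import boto3

ec2 = boto3.client("ec2")

# Equivalent of the describe-volumes CLI command shown above.
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate():
    for vol in page["Volumes"]:
        attachments = [a["InstanceId"] for a in vol.get("Attachments", [])]
        print(
            vol["VolumeId"],
            vol["VolumeType"],
            f'{vol["Size"]} GiB',
            "encrypted" if vol["Encrypted"] else "unencrypted",
            "attached to " + ", ".join(attachments) if attachments else "unattached",
        )
```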
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-describing-volumes.html
2018-05-20T12:05:41
CC-MAIN-2018-22
1526794863410.22
[]
docs.aws.amazon.com
Displaysolution: Stsadm operation (Windows SharePoint Services) Applies To: Windows SharePoint Services 3.0 Topic Last Modified: 2007-04-20 Operation name: Displaysolution Description
https://docs.microsoft.com/en-us/previous-versions/office/sharepoint-2007-products-and-technologies/cc288013(v=office.12)
2018-05-20T12:41:08
CC-MAIN-2018-22
1526794863410.22
[]
docs.microsoft.com
When creating a meal plan in NutriAdmin, especially if it comprises many days, you may want to recommend some foods multiple times to your clients. NutriAdmin allows you to save time by letting you copy and paste foods. If you are familiar with the copy and paste operations of your computer or phone, the concept is the same. To demonstrate the capabilities of the copy/paste functionality, let's illustrate the concept with an example. Let's create a 7 day meal plan: You will see the following empty table as soon as you click Create new plan from scratch. Now, clicking on one of the blue plus signs will allow you to add foods to your meal plan. In this example, let's add a sample breakfast for Monday: This sample breakfast contains: bacon, eggs, bread, and coffee – all of them in specific amounts. Now, let's imagine you want to add the same breakfast for Wednesday, Friday, and Sunday. All you have to do is to click on the big "Copy" icon on the bottom right of the Monday/Breakfast box. Clicking this button will copy your meal into the memory, kind of like a virtual clipboard that NutriAdmin uses. You will know it worked by a green alert showing at the top of your screen. Once you have clicked on "copy", each of the cells in your meal plan table will display a "paste" icon. Let's click on the paste icon for Wednesday breakfast. As soon as you click "paste", your breakfast will be copied to Wednesday. You can repeat the operation above to copy the breakfast for Friday and Sunday as well – saving you the time of having to search all items one by one. All the nutritional information for macronutrients and micronutrients will be updated automatically as you perform the copy/pasting operations. Same goes for your shopping list. Finally, you can also copy an individual item by clicking on the small "copy" icon next to any given food or drink. That's it! Copying foods and meals is easy, and it can save you a lot of time when recommending common items over and over. For added efficiency, be sure to set up all food amounts to the appropriate quantities first, so that when you copy a meal over you don't have to change quantities a second time.
https://docs.nutriadmin.com/how-to-copy-and-paste-foods-and-meals-in-meal-plans-to-save-time
2018-05-20T11:48:24
CC-MAIN-2018-22
1526794863410.22
[array(['uploads/2017-01-12 07.57.49 pm-1484247813846.png', '7 day meal plan'], dtype=object) array(['uploads/2017-01-12 08.03.21 pm-1484247934525.png', 'new empty meal plan'], dtype=object) array(['uploads/2017-01-12 08.05.21 pm-1484247981411.png', 'sample breakfast'], dtype=object) array(['uploads/2017-01-12 08.08.12 pm-1484248104600.png', 'copying breakfast'], dtype=object) array(['uploads/2017-01-12 08.09.30 pm-1484248184271.png', 'copy success'], dtype=object) array(['uploads/2017-01-12 08.10.36 pm-1484248269186.png', 'copying to wednesday'], dtype=object) array(['uploads/2017-01-12 08.10.59 pm-1484248297482.png', 'wednesday breakfast has been copied'], dtype=object) array(['uploads/2017-01-12 08.12.34 pm-1484248360696.png', 'copying to Friday and Sunday'], dtype=object) array(['uploads/2017-01-12 08.14.22 pm-1484248512415.png', 'copying individual items'], dtype=object) ]
docs.nutriadmin.com
Execute an SQL statement in a separate thread. ASYNC gfxd identifier. An identifier must not be the same as any other identifier for an async statement on the current connection. You cannot reference a statement previously prepared and named by the gfxd Prepare command in this command. gfxd creates a new thread in the current or designated connection to issue the SQL statement. The separate thread is closed once the statement completes. gfxd(PEERCLIENT)> async aInsert 'insert into firsttable values (40,''Forty'')'; gfxd(PEERCLIENT)> insert into firsttable values (50,'Fifty'); 1 row inserted/updated/deleted gfxd(PEERCLIENT)> wait for aInsert; 1 row inserted/updated/deleted -- the result of the asynchronous insert
http://gemfirexd.docs.pivotal.io/docs/1.0.0/userguide/reference/gfxd_commands/async.html
2018-05-20T12:08:13
CC-MAIN-2018-22
1526794863410.22
[]
gemfirexd.docs.pivotal.io
DETERMINISTIC vs RESULT_CACHE¶ A common question with caching is whether the DETERMINISTIC option or the RESULT_CACHE is best. As always, the answer is: ‘It depends.’ When you call a deterministic function many times from within the same SQL statement, the RESULT_CACHE does not add much to what the DETERMINISTIC option already covers. Since a single SQL statement is executed from only one session, the function result cache cannot help with multi-session caching as there is nothing to share across sessions. As we have said, marking a deterministic function as DETERMINISTIC is a good idea in any case. When you call a deterministic function many times from different SQL statements — in potentially different sessions or even instances of a RAC — and even PL/SQL blocks, the RESULT_CACHE does have benefits. Now, Oracle can access a single source of cached data across statements, subprograms, sessions, or even application cluster instances. The ‘single source of cached data’ is of course only true for DR units. For IR units, the function result cache is user-specific, which probably dampens your euphoria regarding the function result cache somewhat. Nevertheless, both caching mechanisms are completely handled by Oracle Database. All you have to do is add a simple DETERMINISTIC and/or RESULT_CACHE to a function’s definition.
http://oracle.readthedocs.io/en/latest/plsql/cache/alternatives/deterministic-vs-result-cache.html
2018-05-20T11:45:48
CC-MAIN-2018-22
1526794863410.22
[]
oracle.readthedocs.io
What Is Amazon Cognito? Amazon Cognito provides authentication, authorization, and user management for your web and mobile apps. Your users can sign in directly with a user name and password, or through a third party such as Facebook, Amazon, or Google. The two main components of Amazon Cognito are user pools and identity pools. User pools are user directories that provide sign-up and sign-in options for your app users. Identity pools enable you to grant your users access to other AWS services. You can use identity pools and user pools separately or together. An Amazon Cognito user pool and identity pool used together. See the diagram for a common Amazon Cognito scenario. Here the goal is to authenticate your user, and then gain access to another AWS service. In the first step your app user signs in through a user pool and receives bearer tokens after a successful authentication. Next, your app exchanges the user pool tokens for AWS credentials through an identity pool. Finally, your app user can then use those AWS credentials to access other AWS services such as Amazon S3 or DynamoDB. For more examples using identity pools and user pools, see Common Amazon Cognito Scenarios. Amazon Cognito is compliant with SOC 1-3, PCI DSS, ISO 27001, and is HIPAA-BAA eligible. For more information, see AWS Services in Scope. See also Regional Data Considerations. Topics Features of Amazon Cognito User pools. User pools provide: A built-in, customizable web UI. For more information about user pools, see Getting Started with User Pools and the Amazon Cognito User Pools API Reference. Identity pools With an identity pool, your users can obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB. Identity pools support anonymous guest users, as well as the following identity providers that you can use to authenticate users for identity pools: Amazon Cognito user pools Social sign-in with Facebook, Google, and Login with Amazon OpenID Connect (OIDC) providers SAML identity providers Developer authenticated identities To save user profile information, your identity pool needs to be integrated with a user pool. For more information about identity pools, see Getting Started with Amazon Cognito Identity Pools (Federated Identities) and the Amazon Cognito Identity Pools API Reference. Getting Started with Amazon Cognito For a guide to top tasks and where to start, see Getting Started with Amazon Cognito. For videos, articles, documentation, and sample apps, see Amazon Cognito Developer Resources. To use Amazon Cognito, you need an AWS account. For more information, see Using the Amazon Cognito Console. Pricing for Amazon Cognito For information about Amazon Cognito pricing, see Amazon Cognito Pricing.
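To make the token-exchange step in the scenario concrete, here is a sketch with the AWS SDK for Python (Boto3). The region, user pool ID, identity pool ID, and ID token are placeholders, and it assumes the user pool is already configured as an identity provider for the identity pool.

```python
import boto3

REGION = "us-east-1"                                                  # placeholder
USER_POOL_ID = "us-east-1_EXAMPLE"                                    # placeholder
IDENTITY_POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"  # placeholder
ID_TOKEN = "<ID token returned by the user pool sign-in>"

identity = boto3.client("cognito-identity", region_name=REGION)
logins = {f"cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}": ID_TOKEN}

# Step 2 of the scenario: exchange the user pool token for an identity...
identity_id = identity.get_id(
    IdentityPoolId=IDENTITY_POOL_ID, Logins=logins
)["IdentityId"]

# ...and then for temporary AWS credentials.
creds = identity.get_credentials_for_identity(
    IdentityId=identity_id, Logins=logins
)["Credentials"]

# Step 3: use those credentials with another AWS service, for example Amazon S3.
s3 = boto3.client(
    "s3",
    region_name=REGION,
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```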
https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html?icmpid=docs_menu
2018-05-20T11:58:13
CC-MAIN-2018-22
1526794863410.22
[array(['images/scenario-cup-cib2.png', 'Amazon Cognito overview'], dtype=object) ]
docs.aws.amazon.com
Syntax sp_script_synctran_commands [@publication = ] 'publication' [ , [@article = ] 'article'] Arguments [ **@publication** = ] 'publication' Is the name of the publication to be scripted. publication is sysname, with no default. [ **@article** = ] 'article' Is the name of the article to be scripted. article is sysname, with a default of all, which specifies all articles are scripted. Return Code Values 0 (success) or 1 (failure) Results Set sp_script_synctran_commands returns a result set that consists of a single nvarchar(4000) column. The result set forms the complete scripts necessary to create both the sp_addsynctrigger and sp_addqueued_artinfo calls to be applied at Subscribers. Remarks sp_script_synctran_commands is used in snapshot and transactional replication. sp_addqueued_artinfo is used for queued updatable subscriptions. Permissions Only members of the sysadmin fixed server role or db_owner fixed database role can execute sp_script_synctran_commands.
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008-r2/ms190280(v%3Dsql.105)
2018-05-20T12:08:05
CC-MAIN-2018-22
1526794863410.22
[]
docs.microsoft.com
You can duplicate a questionnaire in NutriAdmin to create a separate copy of a given form. This can be especially useful when you want to create a variation of a questionnaire with a few minor changes. For example, let's imagine you want an initial intake questionnaire for your clients but want a version for men, another for women, and another for children. In this case, you can create just one questionnaire, and then duplicate it and edit the copies to match your target patients. This way, you do not have to start each variation from scratch and can save considerable time. To duplicate a questionnaire, follow the steps below: Step 1: Click on Manage Questionnaires. Step 2: Click on Actions for the questionnaire you wish to copy. Step 3: Click on Duplicate questionnaire in the drop-down menu. Step 4: Enter a new name for your questionnaire copy. In this example, we are creating a 5 day food diary as a copy of a 3 day food diary. The new copy will be edited to ask the client for 5 days instead of just 3 in their food diary. Click on Duplicate once you have chosen a name for your questionnaire (you can change the name later). Your new copy will be created and available to be edited. Editing your new questionnaire copy is analogous to editing any questionnaire. For a guide on how to edit your questionnaire(s), please click here.
https://docs.nutriadmin.com/how-to-duplicate-a-questionnaire-to-create-a-variation-of-the-same-form
2018-05-20T11:41:03
CC-MAIN-2018-22
1526794863410.22
[array(['uploads/2017-07-15 10.50.17 am-1500112343157.png', 'manage questionnaires menu'], dtype=object) array(['uploads/2017-07-15 10.50.30 am-1500112343157.png', 'actions for questionnaire'], dtype=object) array(['uploads/2017-07-15 10.50.38 am-1500112345593.png', 'duplicate questionnaire button'], dtype=object) array(['uploads/2017-07-15 10.50.55 am-1500112345593.png', 'duplicating a questionnaire'], dtype=object) array(['uploads/2017-07-15 10.51.04 am-1500112347282.png', 'new questionnaire copy'], dtype=object) ]
docs.nutriadmin.com
Copy data from PostgreSQL by using Azure Data Factory This article outlines how to use the Copy Activity in Azure Data Factory to copy data from a PostgreSQL database. It builds on the copy activity overview article that presents a general overview of copy activity. Note This article applies to version 2 of Data Factory, which is currently in preview. If you are using version 1 of the Data Factory service, which is generally available (GA), see PostgreSQL connector in V1. Supported capabilities You can copy data from a PostgreSQL database to any supported sink data store. For a list of data stores that are supported as sources/sinks by the copy activity, see the Supported data stores table. Specifically, this PostgreSQL connector supports PostgreSQL version 7.4 and above. Prerequisites To use this PostgreSQL connector, you need to: - Set up a Self-hosted Integration Runtime. See the Self-hosted Integration Runtime article for details. - Install the Npgsql data provider for PostgreSQL with a version between 2.0.12 and 3.1.9 on the Integration Runtime machine. Getting started You can create a pipeline with the copy activity by using one of the following tools or SDKs. Select a link to go to a tutorial with step-by-step instructions to create a pipeline with a copy activity. - Copy Data tool - Azure portal - .NET SDK - Python SDK - Azure PowerShell - REST API - Azure Resource Manager template. The following sections provide details about properties that are used to define Data Factory entities specific to the PostgreSQL connector. Linked service properties The following properties are supported for the PostgreSQL linked service: Example: { "name": "PostgreSqlLinkedService", "properties": { "type": "PostgreSql", Dataset properties This section provides a list of properties supported by the PostgreSQL dataset. To copy data from PostgreSQL, set the type property of the dataset to RelationalTable. The following properties are supported: Example { "name": "PostgreSQLDataset", "properties": { "type": "RelationalTable", "linkedServiceName": { "referenceName": "<PostgreSQL linked service name>", "type": "LinkedServiceReference" }, "typeProperties": {} } } Copy activity properties For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by the PostgreSQL source. PostgreSQL as source To copy data from PostgreSQL, set the source type in the copy activity to RelationalSource. The following properties are supported in the copy activity source section: Note Schema and table names are case-sensitive. Enclose them in "" (double quotes) in the query. Example: "activities":[ { "name": "CopyFromPostgreSQL", "type": "Copy", "inputs": [ { "referenceName": "<PostgreSQL input dataset name>", "type": "DatasetReference" } ], "outputs": [ { "referenceName": "<output dataset name>", "type": "DatasetReference" } ], "typeProperties": { "source": { "type": "RelationalSource", "query": "SELECT * FROM \"MySchema\".\"MyTable\"" }, "sink": { "type": "<sink type>" } } } ] Data type mapping for PostgreSQL When copying data from PostgreSQL, the following mappings are used from PostgreSQL data types to Azure Data Factory interim data types.
https://docs.microsoft.com/en-us/azure/data-factory/connector-postgresql
2018-05-20T11:39:12
CC-MAIN-2018-22
1526794863410.22
[]
docs.microsoft.com
ObsPy Tutorial¶ Note A one-hour introduction to ObsPy is available at YouTube. This tutorial does not attempt to be comprehensive and cover every single feature. Instead, it introduces many of ObsPy’s most noteworthy features, and will give you a good idea of the library’s flavor and style. A pdf version of the Tutorial is available here. There are also IPython notebooks available online with an introduction to Python (with solutions/output), an introduction to ObsPy (with solutions/output) and an brief primer on data center access and visualization with ObsPy. Introduction to ObsPy¶ - 1. Python Introduction for Seismologists - 2. UTCDateTime - 3. Reading Seismograms - 4. Waveform Plotting Tutorial - 5. Retrieving Data from Data Centers - 6. Filtering Seismograms - 7. Downsampling Seismograms - 8. Merging Seismograms - 9. Beamforming - FK Analysis - 10. Seismogram Envelopes - 11. Plotting Spectrograms - 12. Trigger/Picker Tutorial - 13. Poles and Zeros, Frequency Response - 14. Seismometer Correction/Simulation - 15. Clone an Existing Dataless SEED File - 16. Export Seismograms to MATLAB - 17. Export Seismograms to ASCII - 18. Anything to MiniSEED - 19. Beachball Plot - 20. Basemap Plots - 21. Interfacing R from Python - 22. Coordinate Conversions - 23. Hierarchical Clustering - 24. Visualizing Probabilistic Power Spectral Densities - 25. Array Response Function - 26. Continuous Wavelet Transform - 27. Time Frequency Misfit - 28. Visualize Data Availability of Local Waveform Archive - 29. Travel Time and Ray Path Plotting - 30. Cross Correlation Pick Correction - 31. Handling custom defined tags in QuakeML and the ObsPy Catalog/Event framework - 32. Handling custom defined tags in StationXML with the Obspy Inventory - 33. Creating a StationXML file from Scratch - 34. Connecting to a SeedLink Server Advanced Exercise¶ In the advanced exercise we show how ObsPy can be used to develop an automated processing workflow. We start out with very simple tasks and then automate the routine step by step. For all exercises solutions are provided.
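As a quick taste of the style used throughout these chapters, a minimal session for reading, filtering and plotting a waveform looks roughly like the sketch below; it uses ObsPy's bundled example data, so swap the read() argument for your own file or a data-center request in practice.

from obspy import read

st = read()                      # no argument: load ObsPy's bundled example Stream
print(st)                        # one-line summary per Trace
st.filter("lowpass", freq=1.0)   # see the filtering chapter
st.plot()                        # see the waveform plotting chapter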
http://docs.obspy.org/tutorial/
2018-05-20T11:55:16
CC-MAIN-2018-22
1526794863410.22
[]
docs.obspy.org
GSOC 2013 Project Ideas/template From Joomla! Documentation < GSOC 2013 Project Ideas Revision as of 13:19, 13 March 2013 by Dextercowley (Talk | contribs)
https://docs.joomla.org/index.php?title=GSOC_2013_Project_Ideas/template&direction=prev&oldid=100678
2015-10-04T07:58:24
CC-MAIN-2015-40
1443736672923.0
[]
docs.joomla.org
Discounts Offering discounts, coupons and sales can be a vital marketing strategy for your Shopify store. Discount codes. At this time, discount codes are available for Basic plans and higher. To upgrade the plan that you are on, log in to your Shopify admin and go to the Settings > Account page. Scroll down to the field entitled "Account overview & details." Click on the Change plan type button. To generate or import many discount codes at once (e.g. for a Groupon promotion), check out this free app from the Shopify App Store. Shopify discounts: important info There are no limits to the number of discount codes you can create. Make as many as you want! If you set an expiry date for a discount, the discount will expire at 11:59:59 pm on that day. You choose the kind of discount in the Discount type section on the Add a discount page. Product specific discount codes will apply to all quantities of a product when the customer has gone to the checkout and applied the discount code. For example: if a customer adds 10 of the same product, the product specific code will discount all 10. You can perform Bulk actions to disable many Discount codes at once.
https://docs.shopify.com/manual/your-store/discounts
2015-10-04T07:27:30
CC-MAIN-2015-40
1443736672923.0
[]
docs.shopify.com
Django 1.5 requires Python 2.6.5 - 2.7.x. No other Python libraries are required for basic Django usage. Django 1.5 also has experimental support for Python 3.2.3 and above.

Do I lose anything by using Python 2.6 versus newer Python versions, such as Python 2.7?¶ Not in the core framework. Currently, Django itself officially supports Python 2.6 (2.6.5 or higher) and 2.7. However, newer versions of Python are often faster, have more features, and are better supported. If you use a newer version of Python you will also have access to some APIs that aren't available under older versions of Python. Third-party applications for use with Django are, of course, free to set their own version requirements. All else being equal, we recommend that you use the latest 2.x release (currently Python 2.7). This will let you take advantage of the numerous improvements and optimizations to the Python language since version 2.6. Generally speaking, we don't recommend running Django on Python 3 yet; see below for more.

Can I use Django with Python 3?¶ Django 1.5 introduces experimental support for Python 3.2.3 and above. However, we don't yet suggest that you use Django and Python 3 in production. Python 3 support should be considered a "preview". It's offered to bootstrap the transition of the Django ecosystem to Python 3, and to help you start porting your apps for future Python 3 compatibility. But we're not yet confident enough to promise stability in production. Our current plan is to make Django 1.6 suitable for general use with Python 3.
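As a purely illustrative guard (not something the FAQ prescribes), a project pinned to these interpreter versions can fail fast on an unsupported Python:

import sys

# Django 1.5 supports Python 2.6.5 - 2.7.x, plus experimental 3.2.3+
if sys.version_info < (2, 6, 5):
    raise SystemExit("Python 2.6.5 or newer is required")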
https://docs.djangoproject.com/en/1.5/faq/install/
2015-10-04T07:34:22
CC-MAIN-2015-40
1443736672923.0
[]
docs.djangoproject.com
Revision history of "JDatabaseImporterMySQL::setDbo" View logs for this page There is no edit history for this page. This page has been deleted. The deletion and move log for the page are provided below for reference. - 11:28, 20 June 2013 JoomlaWikiBot (Talk | contribs) deleted page JDatabaseImporterMySQL::setDbo (cleaning up content namespace and removing duplicated API references)
https://docs.joomla.org/index.php?title=JDatabaseImporterMySQL::setDbo&action=history
2015-10-04T08:22:01
CC-MAIN-2015-40
1443736672923.0
[]
docs.joomla.org
It is possible to create Render Textures where each pixel contains a high-precision "depth" value (see RenderTextureFormat.Depth). This is mostly used when some effects need the scene's depth to be available (for example, soft particles, screen space ambient occlusion and translucency all need the scene's depth). Pixel values in the depth texture range from 0 to 1, with a nonlinear distribution. Precision is usually 24 or 16 bits, depending on the depth buffer used. When reading from the depth texture, a high-precision value in the 0..1 range is returned. If you need the distance from the camera, or an otherwise linear value, you should compute that manually. Depth textures in Unity are implemented differently on different platforms. Most of the time depth textures are used to render depth from the camera. The UnityCG.cginc include file contains some macros to deal with the above complexity in this case: UNITY_TRANSFER_DEPTH(o) computes the eye-space depth of the vertex and outputs it in o (a float2), and UNITY_OUTPUT_DEPTH(i) returns the eye-space depth from i in the fragment shader. For example, this shader would render the depth of its objects:

Shader "Render Depth" {
SubShader {
    Pass {
        CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        #include "UnityCG.cginc"

        struct v2f {
            float4 pos : SV_POSITION;
            float2 depth : TEXCOORD0;
        };

        v2f vert (appdata_base v) {
            v2f o;
            o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
            UNITY_TRANSFER_DEPTH(o.depth);
            return o;
        }

        half4 frag(v2f i) : SV_Target {
            UNITY_OUTPUT_DEPTH(i.depth);
        }
        ENDCG
    }
}
}
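The page notes that the raw 0..1 depth value is nonlinear and must be linearized by hand; a minimal fragment-shader sketch of doing that with the camera's depth texture is shown below. It assumes a v2f with screen-space UVs, the standard _CameraDepthTexture sampler, and the Linear01Depth/LinearEyeDepth helpers from UnityCG.cginc, so treat it as an illustration rather than part of the original example.

// Requires the camera to generate a depth texture,
// e.g. camera.depthTextureMode |= DepthTextureMode.Depth; on the C# side.
sampler2D_float _CameraDepthTexture;

half4 frag(v2f i) : SV_Target {
    // raw, nonlinear device depth in the 0..1 range
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
    // 0..1 linear depth between the near and far clip planes
    float linear01 = Linear01Depth(rawDepth);
    // linear eye-space depth in world units (use whichever of the two you need)
    float eyeDepth = LinearEyeDepth(rawDepth);
    return half4(linear01, linear01, linear01, 1);
}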
http://docs.unity3d.com/Manual/SL-DepthTextures.html
2015-10-04T07:27:56
CC-MAIN-2015-40
1443736672923.0
[]
docs.unity3d.com
Difference between revisions of "JCacheStorageXcache::gc"

JCacheStorageXcache::gc
Description: Garbage collect expired cache data.
See also: JCacheStorageXcache::gc
https://docs.joomla.org/index.php?title=API17:JCacheStorageXcache::gc&diff=cur&oldid=56073
2015-10-04T08:11:45
CC-MAIN-2015-40
1443736672923.0
[]
docs.joomla.org
ComboBox Column GridViewComboBoxColumn derives from GridViewBoundColumnBase, which means that it inherits all of the functionality too. In addition, GridViewComboBoxColumn provides a RadComboBox editor for editing cell values. It also takes care to translate the Data Member value of the column to the corresponding DisplayMember value of RadComboBox.__GridViewComboBoxColumn.ItemsSource__. SelectedValueMemberPath - used in conjunction with DisplayMemberPath in the process of translation of a value to display as content. It also tells the RadComboBox editor which property to use as a Value when the user makes selection. IsComboBoxEditable - allows you to configure whether the editor (RadComboBox) is editable. The type of properties configured as DataMemberBinding and SelectedValueMembetPath should be the same. Since Q3 2012 SP typing a letter in GridViewComboBoxColumn will point to the first item starting with the same letter. The following example assumes that you have data as shown in Figure 1: Figure 1.: [XAML] Example 1: Define GridViewComboBoxColumn. <telerik:RadGridView x: <telerik:RadGridView.Columns> <telerik:GridViewComboBoxColumn /> </telerik:RadGridView.Columns> </telerik:RadGridView> [XAML] Example 2: Define DataMemberBinding. <telerik:RadGridView x: <telerik:RadGridView.Columns> <telerik:GridViewComboBoxColumn </telerik:RadGridView.Columns> </telerik:RadGridView> [C#] Example 2: Define DataMemberBinding. column.DataMemberBinding = new Binding( "CountryId" ); [C#] Example 3: Setting ItemsSource. ((GridViewComboBoxColumn)this.radGridView.Columns["Country"]).ItemsSource = RadGridViewSampleData.GetCountries(); [VB.NET] Example 3: Setting ItemsSource. DirectCast(Me.radGridView.Columns("Country"), GridViewComboBoxColumn).ItemsSource = RadGridViewSampleData.GetCountries() [XAML] Example 4: Configure DisplayMemberPath and SelectedValuePath properties in XAML. <telerik:GridViewComboBoxColumn [C#] Example 4: Configure DisplayMemberPath and SelectedValuePath properties in code. column.SelectedValueMemberPath = "Id"; column.DisplayMemberPath = "Name"; The application result should be similar to Figure 2.: RadGridView binds to a collection of objects representing the teams. The team object exposes a collection containing the current drivers, which is used as source for the editor. As in the previous example, it also exposes a DriverID property that the column will later translate to an appropriate display value. [XAML] Example 5: Configure GridViewComboBoxColumn with ItemsSourceBinding. <telerik:GridViewComboBoxColumn Figure 4. and Figure 5. show the result of configuring ItemsSourceBinding property. Figure 4. Figure 5. When using ItemsSourceBinding property, the values displayed in the column’s filtering control will be the values corresponding to the DataMemberBinding (0, 1, 2). If you want to have the displayed ones (S.Vettel, K. Raikkonen, M. Webber), then you need to set GridViewComboBoxColumn.FilterMemberPath to a property containing the values used as DisplayMemberPath. You can download a runnable project of the previous example from our online SDK repository:ComboboxColumnItemsSourceBinding. If you are setting GridViewComboBoxColumn's ItemsSource property you should specify a valid Source for it. Please refer to this troubleshooting article. 
You can download a runnable project suggesting a modification for performance improvement from our online SDK repository: LightweightComboboxColumn.: [XAML]="*"></ColumnDefinition> </Grid.ColumnDefinitions> <TextBlock Text="{Binding ID}"></TextBlock> <TextBlock Text="{Binding Name}" Grid.</TextBlock> </Grid> </DataTemplate> </telerik:GridViewComboBoxColumn.ItemTemplate> </telerik:GridViewComboBoxColumn> The multi-column ComboBoxColumn in this example will have two columns showing the ID and Name of the City respectively. When you run the example, Figure 6 shows the result when the customer tries to edit in a column. Figure 6.
http://docs.telerik.com/devtools/wpf/controls/radgridview/columns/columntypes/column-types-combobox-column.html
2015-10-04T07:32:44
CC-MAIN-2015-40
1443736672923.0
[array(['images/RadGridView_ColumnTypes_1.png', None], dtype=object) array(['images/RadGridView_ColumnTypes_2.png', None], dtype=object) array(['images/RadGridView_ColumnTypes_3.png', None], dtype=object) array(['images/RadGridView_ColumnTypes_4.png', None], dtype=object) array(['images/RadGridView_ColumnTypes_5.png', None], dtype=object) array(['images/gridview_multi_column_combo.png', None], dtype=object)]
docs.telerik.com
The health of the Aave Protocol is dependant on the 'health' of the loans within the system, also known as the 'health factor'. When the 'health factor' of an account's total loans is below 1, anyone can make a liquidationCall() to the LendingPool contract, paying back part of the debt owed and receiving discounted collateral in return (also known as the liquidation bonus as listed here). This incentivises third parties to participate in the health of the overall protocol, by acting in their own interest (to receive the discounted collateral) and as a result, ensure loans are sufficiently collateralised. There are multiple ways to participate in liquidations: By calling the liquidationCall() directly in the LendingPool contract. By creating your own automated bot or system to liquidate loans. For liquidation calls to be profitable, you must take into account the gas cost involved in liquidating the loan. If a high gas price is used, then the liquidation may be unprofitable for you. See the 'Calculating profitability vs gas cost' section for more details. When making a liquidationCall(), you must: Know the account (i.e. the ethereum address: user) whose health factor is below 1. Know the valid debt amount ( debtToCover) and debt asset ( debt) that can be paid. The close factor is 0.5, which means that only a maximum of 50% of the debt can be liquidated per valid liquidationCall(). You must already have a sufficient balance of the debt asset, which will be used by the liquidationCall() to pay back the debts. Know the collateral asset ( collateral) you are closing. I.e. the collateral asset that the user has 'backing' their outstanding loan that you will partly receive as a 'bonus'. Whether you want to receive aTokens or the underlying asset ( receiveAToken) after a successful liquidationCall(). Only user accounts that have a health factor below 1 can be liquidated. There are multiple ways you can get the health factor, with most of them involving 'user account data'. "Users" in the Aave Protocol refer to a single ethereum address that has interacted with the protocol. This can be an externally owned account or contract. To gather user account data from on-chain data, one way would be to monitor emitted events from the protocol and keep an up to date index of user data locally. Events are emitted each time a user interacts with the protocol (deposit, repay, borrow, etc). See the contract source code for relevant events. When you have the user's address, you can simply call getUserAccountData() to read the user's current healthFactor. If the healthFactor is below 1, then the account can be liquidated. Similarly to the sections above, you will need to gather user account data and keep an index of the user data locally. Since GraphQL does not provide real time calculated user data such as the healthFactor, you will need to compute this yourself. The easiest way is to use the Aave.js package, which has methods to compute summary user data. The data you will need to pass into the Aave.js method's can be fetched from our subgraph, namely the UserReserve objects. Once you have the account(s) to liquidate, you will need to calculate the amount of collateral that can be liquidated: Use getUserReserveData() on the Protocol Data Provider contract (for Solidity) or the UserReserve object (for GraphQL) with the relevant parameters. For reserves that have usageAsCollateralEnabled as true, the currentATokenBalance multiplied by the current close factor is the amount that can be liquidated. 
For example, if the current close factor is 0.5 and the aToken balance is 1e18 tokens, then the maximum amount that can be liquidated is 5e17. You can also pass in type(uint).max as the debtToCover in liquidationCall() to liquidate the maximum amount allowed. Below is an example contract. When making the liquidationCall() to the LendingPool contract, your contract must already have at least debtToCover of debt.

pragma solidity ^0.6.6;

import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import "./ILendingPoolAddressesProvider.sol";
import "./ILendingPool.sol";

contract Liquidator {
    address constant lendingPoolAddressProvider = INSERT_LENDING_POOL_ADDRESS

    function myLiquidationFunction(
        address _collateral,
        address _reserve,
        address _user,
        uint256 _purchaseAmount,
        bool _receiveaToken
    )
        external
    {
        ILendingPoolAddressesProvider addressProvider = ILendingPoolAddressesProvider(lendingPoolAddressProvider);
        ILendingPool lendingPool = ILendingPool(addressProvider.getLendingPool());

        require(IERC20(_reserve).approve(address(lendingPool), _purchaseAmount), "Approval error");

        // Assumes this contract already has `_purchaseAmount` of `_reserve`.
        lendingPool.liquidationCall(_collateral, _reserve, _user, _purchaseAmount, _receiveaToken);
    }
}

pragma solidity ^0.6.6;

interface ILendingPoolAddressesProvider {
    function getLendingPool() external view returns (address);
}

pragma solidity ^0.6.6;

interface ILendingPool {
    function liquidationCall ( address _collateral, address _reserve, address _user, uint256 _purchaseAmount, bool _receiveAToken ) external payable;
}

A similar call can be made with a package such as Web3.js/web.py. The account making the call must already have at least the debtToCover of debt.

// Import the ABIs, see:
import DaiTokenABI from "./DAItoken.json"
import LendingPoolAddressesProviderABI from "./LendingPoolAddressesProvider.json"
import LendingPoolABI from "./LendingPool.json"
// ... The rest of your code ...

// Input variables
const collateralAddress = 'THE_COLLATERAL_ASSET_ADDRESS'
const daiAmountInWei = web3.utils.toWei("1000", "ether").toString()
const daiAddress = '0x6B175474E89094C44Da98b954EedeAC495271d0F' // mainnet DAI
const user = 'USER_ACCOUNT'
const receiveATokens = true

const lpAddressProviderAddress = '0xB53C1a33016B2DC2fF3653530bfF1848a515c8c5' // mainnet
const lpAddressProviderContract = new web3.eth.Contract(LendingPoolAddressesProviderABI, lpAddressProviderAddress)

// Get the latest LendingPool contract address
const lpAddress = await lpAddressProviderContract.methods
    .getLendingPool()
    .call()
    .catch((e) => {
        throw Error(`Error getting lendingPool address: ${e.message}`)
    })

// Approve the LendingPool address with the DAI contract
const daiContract = new web3.eth.Contract(DAITokenABI, daiAddress)
await daiContract.methods
    .approve(lpAddress, daiAmountInWei)
    .send()
    .catch((e) => {
        throw Error(`Error approving DAI allowance: ${e.message}`)
    })

// Make the deposit transaction via LendingPool contract
const lpContract = new web3.eth.Contract(LendingPoolABI, lpAddress)
await lpContract.methods
    .liquidationCall(collateralAddress, daiAddress, user, daiAmountInWei, receiveATokens)
    .send()
    .catch((e) => {
        throw Error(`Error liquidating user with error: ${e.message}`)
    })

from web3 import Web3
import json

w3 = Web3(Web3.HTTPProvider(PROVIDER_URL))

def loadAbi(abi):
    return json.load(open("./abis/%s" % (abi)))

def getContractInstance(address, abiFile):
    return w3.eth.contract(address, abi=loadAbi(abiFile))

def liquidate(user, liquidator, amount):
    allowance = dai.functions.allowance(user, lendingPool.address).call()
    # Approve lendingPool to spend liquidator's funds
    if allowance <= 0:
        tx = dai.functions.approve(lendingPool.address, amount).transact({
            "from": liquidator,
        })
    # Liquidation Call, collateral: weth and debt: dai
    lendingPool.functions.liquidationCall(
        weth.address,
        dai.address,
        user,
        amount,
        True
    ).transact({"from": liquidator})

dai = getContractInstance("0x6B175474E89094C44Da98b954EedeAC495271d0F", "DAI.json")
weth = getContractInstance("0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2", "WETH.json")
lendingPoolAddressProvider = getContractInstance("0xB53C1a33016B2DC2fF3653530bfF1848a515c8c5", "LENDING_POOL_PROVIDER.json")
lendingPool = getContractInstance(
    # Get address of latest lendingPool from lendingPoolAddressProvider
    lendingPoolAddressProvider.functions.getLendingPool().call(),
    "LENDING_POOL.json"
)

liquidate(alice, bob, amount)

Depending on your environment, preferred programming tools and languages, your bot should:
- Ensure it has enough (or access to enough) funds when liquidating.
- Calculate the profitability of liquidating loans vs gas costs, taking into account the most lucrative collateral to liquidate.
- Ensure it has access to the latest protocol user data.
- Have the usual fail safes and security you'd expect for any production service.

One way to calculate the profitability is the following:
1. Store and retrieve each collateral's relevant details such as address, decimals used, and liquidation bonus as listed here.
2. Get the user's collateral balance (aTokenBalance).
3. Get the asset's price according to the Aave's oracle contract (getAssetPrice()).
4. The maximum collateral bonus you can receive will be the collateral balance (2) multiplied by the liquidation bonus (1) multiplied by the collateral asset's price in ETH (3). Note that for assets such as USDC, the number of decimals are different from other assets.
5. The maximum cost of your transaction will be your gas price multiplied by the amount of gas used.
You should be able to get a good estimation of the gas amount used by calling estimateGas via your web3 provider. Your approximate profit will be the value of the collateral bonus (4) minus the cost of your transaction (5). The health factor is calculated from: the user's collateral balance (in ETH) multiplied by the current liquidation threshold for all the user's outstanding assets, divided by 100, divided by the user's borrow balance and fees (in ETH). More info here. This can be both calculated off-chain and on-chain, see Aave.js and the GenericLogic library contract, respectively. At the moment, liquidation bonuses are evaluated and determined by the risk team based on liquidity risk and updated here. This will change in the future with Aave Protocol Governance. Aave Protocol uses Chainlink as a price oracle, with a backup oracle in case of a Chainlink malfunction. See our Price Oracle section for more details. The health factor of accounts is determined by the user's account data and the price of relevant assets, as last updated by the Price Oracle.
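To make the profitability and health factor arithmetic described above concrete, here is a small illustrative Python sketch. The numbers, the single-collateral simplification, and the assumption that profit is roughly the bonus premium minus gas are ours for demonstration; a real bot should aggregate across every reserve (as Aave.js and GenericLogic do) and use live on-chain prices.

# Illustrative only: single collateral, made-up values.

def health_factor(collateral_eth, liquidation_threshold_pct, borrows_eth):
    # collateral (ETH) * liquidation threshold / 100 / total borrows and fees (ETH)
    return collateral_eth * liquidation_threshold_pct / 100 / borrows_eth

def approx_liquidation_profit_eth(atoken_balance, close_factor, liquidation_bonus,
                                  price_eth, gas_price_eth, gas_used):
    liquidatable = atoken_balance * close_factor              # e.g. 50% of the position
    collateral_received = liquidatable * liquidation_bonus * price_eth
    debt_repaid = liquidatable * price_eth                    # roughly the value you pay in
    tx_cost = gas_price_eth * gas_used                        # estimate gas_used via estimateGas
    return collateral_received - debt_repaid - tx_cost

print(health_factor(collateral_eth=10.0, liquidation_threshold_pct=80, borrows_eth=9.0))
# ~0.889 -> below 1, so the position is liquidatable

print(approx_liquidation_profit_eth(atoken_balance=1000.0, close_factor=0.5,
                                    liquidation_bonus=1.05, price_eth=0.01,
                                    gas_price_eth=100e-9, gas_used=500_000))
# 0.25 ETH of bonus value minus 0.05 ETH of gas -> ~0.20 ETH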
https://docs.aave.com/developers/guides/liquidations
2021-07-24T07:38:36
CC-MAIN-2021-31
1627046150134.86
[]
docs.aave.com
UrbanAirship.setNamedUser("NamedUserID")
UrbanAirship.setNamedUser(null)
UrbanAirship.getNamedUser().then((namedUser) => { console.log('Named User: ', namedUser) })

UrbanAirship.editChannelAttributes() .setAttribute("device_name", "Bobby's Phone") .setAttribute("average_rating", 4.99) .removeAttribute("connection_type") .apply()

Tags allow you to attribute arbitrary metadata to a specific device. Common examples include favorites such as sports teams or news story categories.

UrbanAirship.addTag("some tag"); UrbanAirship.removeTag("other tag"); UrbanAirship.getTags().then((tags) => { console.log('Tags: ', tags) });

Tag Groups
Tag groups are configurable namespaces for organizing tags for the channel and Named User. Please view the Tag Groups documentation for more details.

UrbanAirship.editChannelTagGroups() .addTags("loyalty", ["silver-member"]) .removeTags("loyalty", ["bronze-member"]) .apply()
UrbanAirship.editNamedUserTagGroups() .addTags("loyalty", ["silver-member"]) .removeTags("loyalty", ["bronze-member"]) .apply()

By default, Tag Groups cannot be modified from the device. In this case, if a device attempts to modify Tag Groups, the modification will fail and the SDK will log an error. In order to change this setting, follow the steps in Manage Tag Groups.

UrbanAirship.getChannelId().then(channelId => { console.log('Channel: ', channelId); });
https://docs.airship.com/platform/react-native/segmentation/
2021-07-24T07:13:53
CC-MAIN-2021-31
1627046150134.86
[]
docs.airship.com
Within a robotic process's code, you can set up properties to capture additional information and pass it to Appian as part of the Retrieve Execution Results integration. To get started, set up a hashmap to capture data in key-value pairs. In the example below, we're assigning the hashmap to the results variable. Next, update the results based on the data extracted. The helper methods setCurrentItem and setCurrentItemResultToOK are explained in detail in the Client module documentation. Finally, set the properties using the results variable. Once properties are set, they can be viewed within the execution details in the Appian RPA Console and queried using the Retrieve Execution Results integration. In Appian, you'll use the Retrieve Execution Results integration to query for properties. The data will be passed back as a dictionary called properties within the result dictionary. The data is structured as follows:
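The Java snippets and the structure listing referenced above did not survive extraction, so the following is only a rough sketch of the pattern being described. The results hashmap and the setCurrentItem/setCurrentItemResultToOK helpers come from the text; the setResultProperties call, its signature, and the field names are assumptions to verify against the Appian RPA Client module documentation.

// Inside a robot action, with an IJidokaServer instance named server (assumed).
Map<String, String> results = new HashMap<>();

// Update the results based on the data extracted (hypothetical values).
results.put("invoiceNumber", extractedInvoiceNumber);
results.put("status", "PROCESSED");

server.setCurrentItem(1, extractedInvoiceNumber);
server.setCurrentItemResultToOK("Processed");

// Finally, pass the properties back to Appian (assumed method name).
server.setResultProperties(results);

A Retrieve Execution Results response would then carry these values roughly as { "result": { "properties": { "invoiceNumber": "INV-001", "status": "PROCESSED" } } }, which is an illustration of the dictionary nesting the text describes rather than a verbatim payload.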
https://docs.appian.com/suite/help/20.3/rpa-7.2/rpa_in_apps/rpa-connected-system.html
2021-07-24T08:43:56
CC-MAIN-2021-31
1627046150134.86
[]
docs.appian.com
PlatformColor(color1, [color2, ...colorN]);

Use the PlatformColor function to access native colors on the target platform by supplying the native color's corresponding string value. You pass a string to the PlatformColor function; any additional strings are used as fallbacks if the first color is not available. If you're familiar with design systems, another way of thinking about this is that PlatformColor lets you tap into the local design system's color tokens so your app can blend right in! On Android, native color names use the ?attr or @android:color prefixes.
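The page's original usage sample did not survive extraction; a minimal sketch looks something like the following (the specific color names are common platform examples, not an exhaustive or guaranteed list):

import { Platform, PlatformColor, StyleSheet } from 'react-native';

const styles = StyleSheet.create({
  label: {
    padding: 16,
    ...Platform.select({
      ios: {
        color: PlatformColor('label'),
        backgroundColor: PlatformColor('systemTealColor'),
      },
      android: {
        color: PlatformColor('?android:attr/textColor'),
        backgroundColor: PlatformColor('@android:color/holo_blue_bright'),
      },
      default: { color: 'black' },
    }),
  },
});

// Apply styles.label to a <Text> element to pick up the platform's own colors.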
https://docs.expo.io/versions/v39.0.0/react-native/platformcolor/
2021-07-24T07:43:52
CC-MAIN-2021-31
1627046150134.86
[]
docs.expo.io
Feature enablement
The in-app automation component can be enabled or disabled. When disabled it prevents any automations from executing and any events from counting towards a trigger's goal. Disabling the automations: Airship.Instance.InAppAutomationEnabled = false;

Pausing automations
Pausing is similar to disabling, but automations can continue to be triggered and queued up for execution. This is useful for preventing in-app message displays on screens where it would be detrimental to the user experience, such as splash screens, settings screens, or landing pages. Pausing the manager: Airship.Instance.InAppAutomationPaused = true;

Display interval
The display interval controls the amount of time to wait before the manager is able to display the next triggered in-app message. The default value is set to 30 seconds but can be adjusted to any amount of time in seconds. Setting the display interval to 10 seconds: Airship.Instance.InAppAutomationDisplayInterval = 10;

Airship supports standard in-app messages and in-app automation for Xamarin.
https://docs.airship.com/platform/xamarin/in-app-messaging/
2021-07-24T08:34:38
CC-MAIN-2021-31
1627046150134.86
[]
docs.airship.com
DeleteContact To remove a contact from Incident Manager, you can delete the contact. Deleting a contact removes them from all escalation plans and related response plans. Deleting an escalation plan removes it from all related response plans. You will have to recreate the contact and its contact channels before you can use it again. Request Syntax { "ContactId": "string" }
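For illustration, the equivalent AWS CLI call would look roughly like this; the contact ARN is a made-up placeholder, and the exact command shape should be checked against the current ssm-contacts CLI reference:

aws ssm-contacts delete-contact \
    --contact-id "arn:aws:ssm-contacts:us-east-1:111122223333:contact/example-contact"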
https://docs.aws.amazon.com/incident-manager/latest/APIReference/API_SSMContacts_DeleteContact.html
2021-07-24T09:23:03
CC-MAIN-2021-31
1627046150134.86
[]
docs.aws.amazon.com
Functions for working with files and directories are described here. The finfo is used for getting information about a file and has the following fields: str Name - base name of the file int Size - length in bytes for regular files int Mode - file's mode and permission bits time Time - last modification time bool IsDir - true if it is a directory str Dir - directory where the file is located. This field is only filled when calling the function ReadDir(str, int, str) The file type is used in functions that work with the open file descriptor. The AppendFile function appends data of buf variable or a string to a file named by filename. If the file does not exist, AppendFile creates it with 0644 permissions. The ChDir function changes the current directory. The ChMode function changes the attributes of the file. The CloseFile function closes the file descriptor that was opened with the OpenFile function. The CopyFile function copies src file to dest file. If dest file exists it is overwritten. The file attributes are preserved when copying. The function returns the number of copied bytes. The CreateDir function creates a directory named dirname, along with any necessary parents. If dirname is already a directory, CreateDir does nothing. The CreateFile function creates a file with the specified name. If the trunc parameter is true and the file already exists, its size becomes 0. The ExistFile function returns true if the specified file or directory exists. Otherwise, it returns false. The FileInfo function receives information about the specified file and returns the finfo structure. The file must be opened using the OpenFile function. The FileInfo function gets information about the named file and returns finfo structure. The FileMode function returns the file attributes. The GetCurDir function returns the current directory. The IsEmptyDir function returns true if the specified directory is empty. Otherwise, it returns false. The Md5File function returns the MD5 hash of the specified file as a hex string. The obj function converts a variable of finfo type into an object. The resulting object has fields: name, size, mode, time, isdir, dir. The OpenFile function opens the specified file and returns a variable of file type with an open file descriptor. After working with the file, the open file descriptor must be closed using the CloseFile function. The flags parameter may be zero or a combination of the following flags: CREATE - if the file does not exist, it will be created. TRUNC - file will be truncated to zero length after opening. READONLY - the file will be open for reading only. file f = OpenFile(fname, CREATE)Write(f, buf("some test string"))SetPos(f, -15, 1)buf b &= Read(f, 5)CloseFile(f) The Read function reads the size number of bytes from the current position in the file that was opened using the OpenFile function. The function returns a variable of the buf type, which contains the read data. The ReadDir function reads the directory named by dirname and returns a list of directories and files entries. The ReadDir function reads the dirname directory with the specified name and returns the list of its subdirectories and files according to the specified parameters. The flags parameter can be a combination of the following flags: RECURSIVE - In this case there will be a recursive search for all subdirectories. ONLYFILES - The returned array will contain only files. ONLYDIRS - The returned array will contain only directories. 
REGEXP - The pattern parameter contains a regular expression for matching file names. If you specify the ONLYFILES and ONLYDIRS flags at the same time, the files and directories will be searched. The pattern parameter can contain a wildcard for files or a regular expression. In this case, the files and directories that match the specified pattern will be returned. The wildcard can contain the following characters: '*' - matches any sequence of non-separator characters '?' - matches any single non-separator character for item in ReadDir(ftemp, RECURSIVE, `*fold*`) {ret += item.Name}for item in ReadDir(ftemp, RECURSIVE | ONLYFILES | REGEXP, `.*\.pdf`) {ret += item.Name} The ReadDir function reads the dirname directory with the specified name and returns the list of its subdirectories and files according to the specified parameters. The parameter flags is described above. The patterns parameter is an array of strings and may contain file masks or regular expressions. The ignore parameter also contains file wildcards or regular expressions, but such files or directories will be skipped. If you want to specify a regular expression in these arrays, enclose it between '/' characters. arr.str aignore = {`/txt/`, `*.pak`}arr.str amatch = {`/\d+/`, `*.p?`, `/di/`}.for item in ReadDir(ftemp, RECURSIVE, amatch, aignore) {ret += item.Name} The ReadFile function reads the specified file and returns the contents as a string. The ReadFile function reads the file named by filename to the buf variable out and returns it. The ReadFile function reads data from the filename file starting at offset offset and length length. If offset is less than zero, then the offset is counted from the end to the beginning of the file. The Remove function removes a file or an empty directory. The RemoveDir function removes dirname directory and any children it contains. The Rename function renames (moves) oldpath to newpath. If newpath already exists and is not a directory, Rename replaces it. The SetFileTime function changes the modification time of the named file. The SetPos function sets the current position in the file for read or write operations. The file must be opened with the OpenFile function. The function returns the offset of the new position. The whence parameter can take the following values: 0 - the off offset is specified from the beginning of file. 1 - the off offset is specified from the current position. 2 - the off offset is specified from the end of a file. The Sha256File function returns the SHA256 hash of the specified file as a hex string. The TempDir function returns the default temporary directory. The TempDir function creates a new temporary directory in the directory path with a name beginning with prefix and returns the path of the new directory. If path is the empty string, TempDir uses the default temporary directory. The Write function writes data from a variable of the buf type to a file that was opened using the OpenFile function. The function returns the f parameter. The WriteFile function writes data of buf variable or a string to a file named by filename. If the file does not exist, WriteFile creates it with 0777 permissions, otherwise the file is truncated before writing.
https://docs.gentee.org/stdlib/file
2021-07-24T09:06:54
CC-MAIN-2021-31
1627046150134.86
[]
docs.gentee.org
I would like to enable Application Insights in App Services. I have a web API and my configuration is: In the App Service of the SPA app I have the configuration: I don't know if my settings are correct.

Did you set it up from your IDE (Visual Studio)? It is simple to set up using the GUI IDE (right-click the project --> Application Insights --> Configure Application Insights). You will know it is working when you go into Azure Cloud Shell and use this command to check --> Get-AzureRmApplicationInsights; this will show you which app is connected and help. See for further reference.

Hi @juanmaximilianoaguilarabanto-6444, You have to create an Application Insights service in Azure first. When you go on your app service, you have an instrumentation key. You need to set this instrumentation key in your App Service config (it is currently empty in your screenshot: the first line) and/or set this key in your code (in the appsettings file if you have a standard app, or directly in your telemetry service if you use workers).
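To make that concrete, the instrumentation key can be pushed into the App Service configuration from the CLI roughly as follows; the resource names are placeholders, and APPINSIGHTS_INSTRUMENTATIONKEY is the conventional setting name to verify against the current App Service documentation:

az webapp config appsettings set \
    --resource-group my-rg \
    --name my-webapi-app \
    --settings APPINSIGHTS_INSTRUMENTATIONKEY="<your-instrumentation-key>"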
https://docs.microsoft.com/en-us/answers/questions/4170/enable-application-insights-in-appservice.html
2021-07-24T09:31:17
CC-MAIN-2021-31
1627046150134.86
[array(['/answers/storage/attachments/1254-sin-titulo100.png', 'alt text'], dtype=object) array(['/answers/storage/attachments/1255-sin-titulo200.png', 'alt text'], dtype=object) array(['/answers/storage/attachments/1216-appinsights-javascript-enabled.png', 'alt text'], dtype=object) ]
docs.microsoft.com
Extending Supervisor’s XML-RPC API¶ Supervisor can be extended with new XML-RPC APIs. Several third-party plugins already exist that can be wired into your Supervisor configuration. You may additionally write your own. Extensible XML-RPC interfaces is an advanced feature, introduced in version 3.0. You needn’t understand it unless you wish to use an existing third-party RPC interface plugin or if you wish to write your own RPC interface plugin. Configuring XML-RPC Interface Factories¶ An additional RPC interface is configured into a supervisor installation by adding a [rpcinterface:x] section in the Supervisor configuration file. In the sample config file, there is a section which is named [rpcinterface:supervisor]. By default it looks like this: [rpcinterface:supervisor] supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface This section must remain in the configuration for the standard setup of supervisor to work properly. If you don’t want supervisor to do anything it doesn’t already do out of the box, this is all you need to know about this type of section. However, if you wish to add additional XML-RPC interface namespaces to a configuration of supervisor, you may add additional [rpcinterface:foo] sections, where “foo” represents the namespace of the interface (from the web root), and the value named by supervisor.rpcinterface_factory is a factory callable written in Python which should have a function signature that accepts a single positional argument supervisord and as many keyword arguments as required to perform configuration. Any key/value pairs defined within the rpcinterface:foo section will be passed as keyword arguments to the factory. Here’s an example of a factory function, created in the package my.package. def make_another_rpcinterface(supervisord, **config): retries = int(config.get('retries', 0)) another_rpc_interface = AnotherRPCInterface(supervisord, retries) return another_rpc_interface And a section in the config file meant to configure it. [rpcinterface:another] supervisor.rpcinterface_factory = my.package:make_another_rpcinterface retries = 1
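The AnotherRPCInterface class used by the factory above is not shown on this page; a minimal sketch of what such a class could look like follows (the class body and method are illustrative assumptions; any public method of the returned object becomes callable under the configured namespace, e.g. another.getRetries()).

class AnotherRPCInterface:
    def __init__(self, supervisord, retries=0):
        self.supervisord = supervisord
        self.retries = retries

    def getRetries(self):
        # exposed over XML-RPC as another.getRetries()
        return self.retries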
https://docs.red-dove.com/supervisor/xmlrpc.html
2021-07-24T07:05:13
CC-MAIN-2021-31
1627046150134.86
[]
docs.red-dove.com
Invite your team members to your Olvy organization by following the below steps, 2. Click on the organization name at the top of the left sidebar menu, and under Organization Settings, click on ‘Invites’. 3. Click on the ‘Invite People’ button, and on the pop-up, enter the email ID of your team member, and click ‘Invite’.
https://docs.olvy.co/how-to-invite-my-team-members-to-the-organization/
2021-07-24T07:57:26
CC-MAIN-2021-31
1627046150134.86
[array(['https://docs.olvy.co/content/images/2021/01/xScreen-Shot-2021-01-06-at-5.42.37-AM.png.pagespeed.ic.nXYOWyOqpN.png', None], dtype=object) array(['https://docs.olvy.co/content/images/2021/01/xScreen-Shot-2021-01-06-at-5.45.56-AM.png.pagespeed.ic.kkpRnEpNbY.png', None], dtype=object) ]
docs.olvy.co
When a cloud API supports both REST and SOAP APIs how should I evaluate the proper base for the connector, and whether to write a SOAP connector or a REST connector? Deciding whether to use a SOAP connector or REST connector depends on the study you do on the APIs. When deciding you need to consider the number of users using each API and the operations exposed by each API. When a cloud API supports REST/SOAP APIs and also has a Java SDK, how should I evaluate a proper base for the connector? How do I decide whether to write a SOAP/REST connector or a Java based connector by wrapping the SDK through a set of Class mediators? SOAP/REST connectors are preferred over JAVA API based connectors. This is due to the flexibility provided by WSO2 ESB to debug issues in SOAP/REST connectors in case something goes wrong. Out of the init method (template) approach and the config as local entry approach, what is the best approach to be used to initialize a connector? what are the relevant use cases for each approach? We recommend the config as local entries/ registry entries approach. This is because users can maintain environment specific configs if this approach is followed. Should I leave a response from a cloud API as it is within the connector template itself or should I process/transform the response to some other format within the connector template? Leave the response as it is. You only need to remove the domain specific headers because they can fail the next call operation. ...
https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=51484280&selectedPageVersions=7&selectedPageVersions=8
2021-07-24T07:00:21
CC-MAIN-2021-31
1627046150134.86
[]
docs.wso2.com
Hosted Exchange Email Optimise your productivity with Standard Email or Microsoft Exchange. Each package includes a full range of business-class features, access from any device and 24/7 expert support. PC Docs 125GB Hosted Exchange 2013 Mailbox - Microsoft Exchange Server 2013 Included - 125GB per mailbox Included, the biggest in the industry - Instant push email using Outlook Included - Outlook Web Access (OWA) Included - Mobile ‘push & sync’ technology Included - Shared calendars, contacts and tasks Included - Triple hosted in UK Data centres Included - 99.9% financially backed SLA Included - Award-winning Anti-Spam/Anti-Virus Included - Optional Outlook 2011/2013/2016 licences - No minimum mailboxes per account - PC Docs Dedicated Hosted Servers - 99.99% Guaranteed Uptime - UK Based Data Centres - Lower operational costs - Flexibility in service delivery - Scalable technology to meet your changing needs - Increased security for company data and applications
https://www.pc-docs.co.uk/service/hosted-exchange-email/
2021-07-24T07:54:45
CC-MAIN-2021-31
1627046150134.86
[]
www.pc-docs.co.uk
The Sizzle theme for Sphinx¶ This documentation describes Sizzle, a theme for Sphinx. This theme was inspired by another theme, Guzzle. Sizzle uses some of the styling elements of Guzzle, but has diverged a fair bit in numerous areas. The repository where this theme is developed is to be found here. Theme Options¶ Sizzle inherits from Sphinx’s basic theme. The following theme options are defined: globaltoc_collapse– as for Guzzle: this determines whether the global TOC shown in the sidebar is collapsed or not. Defaults to false. globaltoc_depth– as for Guzzle: the depth to which the global TOC is expanded. Defaults to 5. globaltoc_includehidden– as for Guzzle: whether to include hidden entries in the global TOC. Defaults to true. project_logo_name– this replaces Guzzle’s project_nav_name. The name change reflects that the value can be shown elsewhere than in a navigation panel. base_url. The new name better reflects how the value is used - as a base URL for sitemap links. google_fonts– this allows you to specify additional Google fonts to be included for use in any custom styles. Defaults to None. google_analytics_id– this replaces Guzzle’s google_analytics_account. The name better reflects that the value is a tracker ID. show_index– controls whether a link to the index is shown in the header. This defaults to true- set it to Falseto hide the link. show_filter– controls whether a filter to apply to TOC titles is shown. This defaults to true- set it to Falseto hide the filter (which is not needed if you have a short enough list of entries in the TOC). enable_tooltips– controls if tooltips are shown. This defaults to true- set it to Falseto disable tooltips. glossary_permalinks– controls if glossary terms have permalinks. This defaults to true- set it to Falseto disable the permalinks. Layout¶ The layout has a scrolling area, consisting of sidebar and content, between a fixed header and footer. The footer is small (for copyright information and links) and the header has the following elements: - A gradient background - The project name (as determined by project_logo_name). Except when in the home page, you can click on this to get to the home page - The title of the current document - The search box, assuming the browser window is wide enough. If it isn’t, the search box relocates to the top of the sidebar. - A link to the index. This is conditionally visible (controlled by the show_indextheme option) and styled as a button - A link to the source for the current page, if available. This is conditionally visible (controlled by the show_sourceoption) and styled as a button - Links (styled as buttons) to take you to previous and next pages, if any The sidebar and content area can scroll independently. Typography¶ Font Awesome is integrated. You can use the markup role fato introduce an icon into your content. For example, the markup :fa:`diamond`produces in the finished output. Document and section titles use Source Serif Pro. The default body font is Roboto, falling back to Guzzle’s slightly less compact choice of Open Sans. The monospace font used for code blocks is Iosevka, which is a condensed font allowing more content to be shown than the fallbacks of Roboto Mono, Source Code Pro and Consolas. 
An example: @real fox.quick(h) { *is_brown && it_jumps_over(doges.lazy) } Google Fonts¶ If you want to use other Google fonts in your documentation, you can do this via a theme option: html_theme_options = { # other stuff omitted 'google_fonts': ['Acme', 'Raleway:400,700'], # other stuff omitted } This would make the Acme and Raleway fonts (the latter with the specific weights indicated) for use in your documentation, so that you could use Acme and Raleway in font-family values in your custom CSS. Custom Roles¶ This theme adds two specific roles which you might find useful in documenting your projects: The farole, as described above. A generic spanrole, which can be used as follows: the markup :span:`c1,c2,c3|some text`will result in the output <span class="c1 c2 c3">some text</span> This isn’t intended to be used to provide lots of ad-hoc styles (which would detract from the quality of the documentation), but it can be useful in some scenarios (such as trying things out). You can, of course, create your own roles in reStructuredText markup using the role directive .. role:: <rolename> This approach is preferable when your usage of a particular style is systematic rather than ad hoc. The section on Summary-Detail Lists gives an example where the spanrole can be useful. Use of JavaScript, CSS and font assets¶ The version of jQuery used is 3.3.1. The version of Bootstrap used is 3.3.7. These are loaded from CDN, as are the fonts. No additional external assets beyond these are used, though you can add some in the usual way to a specific project – see the section Custom Styles and JavaScript for more details. Styling Lists using Font Awesome¶ You can style bulleted lists using Font Awesome. For example, the following list: - Arcturus - Betelgeuse - VY Canis Majoris was produced using this markup: .. cssclass:: styled-list using-star * Arcturus * Betelgeuse * VY Canis Majoris A class starting with using- is used to style the list, with using- being replaced by fa- in the actual style applied. You can override individual items with specific icons. For example, - Arcturus - Betelgeuse - VY Canis Majoris was produced by this markup: .. cssclass:: styled-list using-star * :fa:`star-o` Arcturus * :fa:`star-half-o` Betelgeuse * VY Canis Majoris Summary-Detail Lists¶ HTML5 has a handy feature - summary-detail lists, which are marked up like this: <details> <summary>The summary goes here.</summary> <p>The detail goes here.</p> </details> The idea is that the whole thing can be closed (when only the summary is visible) or open (when both the summary and detail parts are visible). However, browser support is patchy and inconsistent, and styling options are limited. Here’s how the element looks when open and closed in Firefox and Chrome: Of course, docutils and Sphinx don’t offer any reStructuredText markup which maps to this HTML5 element. With the Sizzle theme, you can achieve a similar effect like this: .. cssclass:: summary-detail * :span:`The summary goes here.` The detail goes here. The Sizzle theme code looks for this specific CSS class and arranges for it to be shown like this: The summary goes here. Custom Styles and JavaScript¶ If you have custom styles and/or JavaScript, you can install them in one of two ways, depending on the version of Sphinx you’re using. 
If you’re using Sphinx 1.8 or later, you should use configuration options in conf.py like this: html_css_files = ['css/project.css'] html_js_files = ['js/project.js'] If you’re using an earlier Sphinx version than 1.8, then in your conf.py, have code something like this: def setup(app): app.add_stylesheet('css/project.css') app.add_javascript('js/project.js') The CSS file will be loaded after Sizzle’s own CSS, allowing you to tweak styles where needed. The JavaScript file will be added after all other external JavaScript files. Bear in mind that the Sizzle theme arranges to first add a JavaScript object to the DOM using a jQuery call: $(document).data('sizzle', {on_load: []}); // code in the Sizzle theme This is done before your custom JavaScript is included. If you want to have some JavaScript code of yours called after the entire document is loaded, you can do something like function my_custom_function() { // whatever } var sizzle = $(document).data('sizzle'); sizzle.on_load.push(my_custom_function); in your custom JavaScript file. When the document has loaded, the Sizzle theme’s code calls any functions pushed onto the on_load array: $(document).ready(function() { // code in the Sizzle theme // other stuff omitted ... var sizzle = $(document).data('sizzle'); if (sizzle.on_load) { sizzle.on_load.forEach(function(f) { f(); }); } // other stuff omitted ... } So your my_custom_function should get called once the document has loaded. Example – styling columns in a table¶ Here’s an example function which I implemented for a project, using the functionality described above: function add_column_styles() { $('table').each(function() { $(this).find('tr').each(function() { $(this).find('td, th').each(function(i) { $(this).addClass('col-' + i); }); }); }); } This adds a col-N class to every cell in the Nth column of every table, including header rows. By judicious application of CSS, you might be able to use this approach to style tables in your content as you wish. For instance, /* centre all columns except the first */ #some-table td:not(.col-0), #some-table th:not(.col-0) { text-align: center; } /* apply padding to the first column only */ #some-table td.col-0, #some-table th.col-0 { padding-left: 6px; } Glossary Improvements¶ Starting with version 0.0.9, there have been some improvements to Sphinx glossary functionality. Tooltips¶ By default, you can see tooltips when you hover over a glossary term in documentation. You can try them out in the Supervisor documentation set: there are some glossary terms at the top of the home page - just hover over them to see the tooltips with the glossary definitions of those terms. You can disable tooltips by setting enable_tooltips to False in the theme options. Code Block Improvements¶ Starting with version 0.0.9, code blocks with captions get a little button which, when clicked, copies the contents of the code block to the clipboard. The idea was shamelessly borrowed from recent Django documentation! Here’s an example:
https://docs.red-dove.com/sphinx_sizzle_theme/index.html
2021-07-24T08:22:35
CC-MAIN-2021-31
1627046150134.86
[]
docs.red-dove.com
Import resource-based policies for all services How to import policies for all services. - On the Service Manager page, click Import. The Import Policy page appears. - Select the file to import.You can only import policies in JSON format. - (Optional) Configure the import operation: - The Override Policy option deletes all policies of the destination repositories. - Zone Mapping – when no destination is selected, all services are imported. When a destination is selected, only the services associated with that security zone are imported. - Service Mapping maps the downloaded file repository, i.e. source repository to destination repository. You can use the red x symbols to remove services from the import. Scroll down to view all service mappings. - Click Import.A confirmation message appears after the file is imported.
https://docs.cloudera.com/runtime/7.2.10/security-ranger-authorization/topics/security-ranger-resource-policies-import-for-all-services.html
2021-07-24T09:08:10
CC-MAIN-2021-31
1627046150134.86
[]
docs.cloudera.com
, enter the limit in the Maximum AM Resource Limit text box. - Click Save. For information about setting Application Master resource limits on all the queues, see Set Application Master Resource Limit.
https://docs.cloudera.com/runtime/7.2.10/yarn-allocate-resources/topics/yarn-set-application-master-resource-limit-for-a-specific-queue.html
2021-07-24T09:10:14
CC-MAIN-2021-31
1627046150134.86
[]
docs.cloudera.com
expo-batteryprovides battery information for the physical device (such as battery level, whether or not the device is charging, and more) as well as corresponding event listeners. expo install expo-battery If you're installing this in a bare React Native app, you should also follow these additional installation instructions. import * as React from 'react'; import * as Battery from 'expo-battery'; import { StyleSheet, Text, View } from 'react-native'; export default class App extends React.Component { state = { batteryLevel: null, }; componentDidMount() { this._subscribe(); } componentWillUnmount() { this._unsubscribe(); } async _subscribe() { const batteryLevel = await Battery.getBatteryLevelAsync(); this.setState({ batteryLevel }); this._subscription = Battery.addBatteryLevelListener(({ batteryLevel }) => { this.setState({ batteryLevel }); console.log('batteryLevel changed!', batteryLevel); }); } _unsubscribe() { this._subscription && this._subscription.remove(); this._subscription = null; } render() { return ( <View style={styles.container}> <Text>Current Battery Level: {this.state.batteryLevel}</Text> </View> ); } } import * as Battery from 'expo-battery'; trueon Android and physical iOS devices and falseon iOS simulators. On web, it depends on whether the browser supports the web battery API. Promisethat resolves to a numberbetween 0 and 1 representing the battery level, or -1 if the device does not provide it. await Battery.getBatteryLevelAsync(); // 0.759999 BatteryState.UNKNOWN. Promisethat resolves to a Battery.BatteryStateenum value for whether the device is any of the four states. await Battery.getBatteryStateAsync(); // BatteryState.CHARGING false, even if the device is actually in low-power mode. Promisethat resolves to a booleanvalue of either trueor false, indicating whether low power mode is enabled or disabled, respectively. await Battery.isLowPowerModeEnabledAsync(); // true Battery.BatteryStateenum value trueif lowPowerMode is on, falseif lowPowerMode is off await Battery.getPowerStateAsync(); // { // batteryLevel: 0.759999, // batteryState: BatteryState.UNPLUGGED, // lowPowerMode: true, // } "android.intent.action.BATTERY_LOW"or rises above "android.intent.action.BATTERY_OKAY"from a low battery level. See here to read more from the Android docs. batteryLevelkey. EventSubscriptionobject on which you can call remove()to unsubscribe from the listener. Battery.BatteryStateenum value for whether the device is any of the four states. On web, the event never fires. batteryStatekey. EventSubscriptionobject on which you can call remove()to unsubscribe from the listener. lowPowerModekey. EventSubscriptionobject on which you can call remove()to unsubscribe from the listener. BatteryState.UNKNOWN- if the battery state is unknown or unable to access BatteryState.UNPLUGGED- if battery is not charging or discharging BatteryState.CHARGING- if battery is charging BatteryState.FULL- if the battery level is full
https://docs.expo.io/versions/v39.0.0/sdk/battery/
2021-07-24T08:00:16
CC-MAIN-2021-31
1627046150134.86
[]
docs.expo.io
Contents Data-Center Synchronization - Configuration This section details the steps necessary to perform a Workbench Data-Center Synchronization: - Go to the configuration page -> Data-Center section and click the below button to display the remote Data-Center synchronization form - In the displayed form, please fill the mandatory fields, remote zookeeper hostname and port. If remote zookeeper has enabled authentication, enter the username and password as well. - After filling the form click the sync button and wait, If your remote Zookeeper address is valid and able to connect, it will start progress synchronization and display the progress status on the screenWarningPlease wait for the Workbench Data-Center synchronization to complete; do not perform any Workbench Configuration Changes during this time - Once synchronization completed you can close the modal window and able to see the synchronized remote Data-Center information on the page. - Check the new/additional remote Workbench Data-Center Host(s) are present in Workbench\Configuration\Hosts - Check the number of Data-Centers and their names are present in Workbench\Configuration\Overview - Repeat the above steps for any other Workbench Data-Center deployments that you wish to form in a Workbench distrbuted architecture Workbench Data-Center - Post Formation Warning - The folders ‘<WB_HOME_FOLDER>\Karaf\resources\windows\wbagent_9.1.100.00_installscripts’ directory (Windows) and ‘<WB_HOME_FOLDER>/Karaf/resources/linux/wbagent_9.1.000.00_installscripts’ directory (Linux) WILL NEED to be *DELETED* first as new folders will be created with the updated details - When forming a Workbench Cluster, for example adding a Workbench Node 2 or Node 3, or Node N, on completion of forming the Workbench Cluster, the Workbench IO (i.e. WB_IO_Primary) Application now needs to be restarted to regenerate the correct Workbench Agent Remote JSON configuration file” Workbench Data-Center - Renaming Warning - The folders ‘<WB_HOME_FOLDER>\Karaf\resources\windows\wbagent_9.1.100.00_installscripts’ directory (Windows) and ‘<WB_HOME_FOLDER>/Karaf/resources/linux/wbagent_9.1.100.00_installscripts’ directory (Linux) WILL NEED to be deleted first as new folders will be created with the updated details - If/when a Workbench Data-Center is renamed, the Workbench IO (i.e. WB_IO_Primary) Application needs to be restarted to regenerate the correct Workbench Agent Remote JSON configuration file” Workbench Data-Center - Renaming - Workbench Agent Remote Warning - Post the renaming of a Workbench Data-Center, if an existing host requires a Workbench Agent Remote re-installation, the newly generated binaries in the folders ‘<WB_HOME_FOLDER>\Karaf\resources\windows\wbagent_9.1.100.00_installscripts’ directory (Windows) and ‘<WB_HOME_FOLDER>/Karaf/resources/linux/wbagent_9.1.100.00_installscripts’ directory (Linux), will first need to be copied to the host before running the “installer.exe” (Windows) or “installer” (Linux) executable” This page was last edited on May 6, 2021, at 04:49.
https://docs.genesys.com/Documentation/ST/latest/WorkbenchUG/DC_Sync_Config
2021-07-24T08:01:22
CC-MAIN-2021-31
1627046150134.86
[]
docs.genesys.com
It's important to know the complete process of creating a new E2E test in order to keep the test suite maintainable, so this guide collects all the necessary tools. Because E2E testing is developed using Cypress, all tests are located inside the cypress directory; first, find this directory, which is located in the E2EDemo and Samples directories inside the WFNetKendoComponents and PBKendoComponents repositories. Inside the cypress/integration directory, create a new file with the .spec.ts extension; this file will be the container for the new tests. Once you have it, you can add automated user interactions using Cypress syntax (see the sketch below); to learn more about that syntax, you can visit the Cypress tutorial here. After you have created your test, follow the How to run E2E Test guide.
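As a minimal sketch of what such a spec file can look like (the URL, selectors, and test names below are hypothetical and not taken from the repositories):

// cypress/integration/sample-button.spec.ts
describe('Sample button component', () => {
  beforeEach(() => {
    // Hypothetical local URL where the demo app is served
    cy.visit('http://localhost:4200/sample-button');
  });

  it('shows a confirmation message after the button is clicked', () => {
    // Hypothetical selectors; adjust them to the component under test
    cy.get('#sample-button').click();
    cy.get('.confirmation-message').should('be.visible');
  });
});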
https://docs.mobilize.net/webmap/general/frontend/guides/e2e-maintenance/how-to-create-a-new-e2e-test
2021-07-24T08:18:41
CC-MAIN-2021-31
1627046150134.86
[]
docs.mobilize.net
Identity Details Columns under the identity header are static fields that appear in every AWS Cost and Usage report. You can use the identity line items in the AWS Cost and Usage report to find specific line items that have been split across multiple AWS Cost and Usage report files. This includes the following columns: - identity/LineItemId An ID that identifies every line item in a single given version of the AWS Cost and Usage report. The line item ID is not consistent between different AWS Cost and Usage reports and can't be used to identify the same line item across different reports. For example, the AWS Cost and Usage report created for November 29 can be large enough to require multiple files. The LineItemId is consistent between the November 29 AWS Cost and Usage report files, but doesn't match the LineItemId for the same resource in the November 30 AWS Cost and Usage report. Multiple lines in the AWS Cost and Usage report can have the same LineItemId, but for different hours of instance usage. - identity/TimeInterval The time interval that the line item applies to, in the format start-time/end-time. For example, 2017-11-01T00:00:00Z/2017-12-01T00:00:00Z includes the entire month of November, 2017.
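For illustration, two rows for the same resource in the same report version can share a LineItemId while covering different hours (all values below, including the extra lineItem/UsageType column, are hypothetical):

identity/LineItemId,identity/TimeInterval,lineItem/UsageType
examplelineitemid1234567890,2017-11-29T00:00:00Z/2017-11-29T01:00:00Z,BoxUsage:t2.micro
examplelineitemid1234567890,2017-11-29T01:00:00Z/2017-11-29T02:00:00Z,BoxUsage:t2.micro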
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/enhanced-identity-columns.html
2018-10-15T11:35:07
CC-MAIN-2018-43
1539583509170.2
[]
docs.aws.amazon.com
Text to be displayed in the column heading Member of Grid Column (PRIM_GDCL) Data Type - String The Caption property specifies the text to be displayed in the column heading. This will only be visible if the CaptionType is set to Caption. A caption can be either a literal string or a Multilingual Variable that will show the correct string for the current runtime language. If the caption is too long to fit in the control it will be clipped. Ellipses can be shown via the Ellipses property. All Component Classes Technical Reference February 18 V14SP2
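As a minimal, hypothetical sketch (the component name and caption value are invented for illustration; consult the LANSA Grid Column examples for the exact syntax in your version), the caption could be set where the column is defined:

* Define a grid column with a literal caption shown in its heading
Define_Com Class(#PRIM_GDCL) Name(#CustomerColumn) Captiontype(Caption) Caption('Customer name')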
https://docs.lansa.com/14/en/lansa016/prim_gdcl_caption.htm
2018-10-15T11:15:06
CC-MAIN-2018-43
1539583509170.2
[]
docs.lansa.com
Before using DataMart, ensure that your system meets the requirements. General Requirements - Login credentials with both public and sysadmin server roles enabled in SQL Server. - Database server requirements for the AirWatch DataMart are identical to the host server requirements for the AirWatch Console. No additional hardware or upgrades are necessary. Software Requirements Windows Server 2008 R2, 2012 (64-bit), and 2014 (64-bit) with the latest service packs and recommended updates from Microsoft (). .NET Framework 3.5 & 4. A Windows post-installation update is required to update additional software components for .NET Framework 4. Microsoft SQL Server 2012, 2014, or 2016 with Client Tools (SQL Management Studio, Reporting Services, Integration Services, SQL Server Agent, latest server packs). Important: For dedicated SaaS installations, only install DataMart once. Subsequent clients are added to the DataMart database manually.
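As a quick way to verify the first requirement, you can check the server-role membership of the login you plan to use from SQL Server Management Studio (the login name below is a placeholder):

-- Returns 1 if the current login is a member of the sysadmin server role
SELECT IS_SRVROLEMEMBER('sysadmin') AS is_sysadmin;

-- Or check a specific login by name (replace DOMAIN\datamart_svc with your login)
SELECT IS_SRVROLEMEMBER('sysadmin', 'DOMAIN\datamart_svc') AS is_sysadmin;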
https://docs.vmware.com/en/VMware-AirWatch/9.1/vmware-airwatch-guides-91/GUID-AW91-DataMart_Requirements.html
2018-10-15T11:32:42
CC-MAIN-2018-43
1539583509170.2
[]
docs.vmware.com
Chunks and Symmetry As described in Computer Physics Communications, Vol. 181, pp. 687-702, 2010, Meep subdivides geometries into chunks. Each chunk is a contiguous region of space — a line, rectangle, or parallelepiped for 1d/2d/3d Cartesian geometries, or an annular section in a cylindrical geometry—whose sizes are automatically determined by libmeep. In parallel calculations, each chunk is assigned, in its entirety, to precisely one process — that is, no chunk exists partly on one processor and partly on another. Many internal operations in Meep consist of looping over points in the Yee grid, generally performing some operation involving the field components and material parameters at each point. In principle, this involves nested for loops; in practice, it is complicated by several factors, including the following: For calculations that exploit symmetry, only a portion of the full grid is actually stored in memory, and obtaining values for field components at a point that isn't stored requires a tricky procedure discussed below. Similarly, for Bloch-periodic geometries, only grid points in the unit cell are stored, but we may want the fields at a point lying outside the unit cell, again requiring a bit of a shell game to process correctly. Because of the staggered nature of the Yee grid, "looping over grid points" can mean multiple things — are we visiting only E-field sites, or only H-field sites, or both? Either way, obtaining a full set of field-component values at any one grid point necessarily involves a certain average over neighboring grid points. To shield developers from the need to grapple with these complications when implementing loops over grid points, libmeep provides a convenient routine called loop_in_chunks and a set of macros that take care of many of the above hassles. This is discussed in more detail below. - Chunks and Symmetry - Chunk Data Structures - Chunking of a 2d Geometry - Symmetries - The loop_in_chunks Routine - Is There a Version of loop_in_chunks for dft_chunks? - How the Images were Created Chunk Data Structures For each chunk in a geometry, libmeep creates instances of the data structures structure_chunk (storing data on the geometry of the chunk and the material properties at grid points in the chunk) and fields_chunk (storing the actual values of the time-domain field components at grid points in the chunk). Frequency-domain (DFT) field components are handled by a separate data structure called dft_chunk. Each instance of dft_chunk is associated with a single instance of fields_chunk (namely, whichever one stores the time-domain fields at the grid points covered by the DFT chunk); however, because DFT fields are typically only tabulated on a subset of the full grid, the grid volume covered by a dft_chunk may be only a subset of the volume covered by its parent fields_chunk, and not all fields_chunks have dft_chunks associated with them. Chunking of a 2d Geometry Our running example throughout this page will be a 2d geometry, of dimensions , with PML layers of thickness 1 on all sides, discretized with 5 points per unit length to yield a 40 × 30 grid. Chunking in the Single-Processor Case In a single-processor run, libmeep subdivides this geometry into 9 chunks (click for larger image): The width of the 8 chunks around the perimeter is set by the PML thickness. Note that the chunks are not of uniform sizes and that their ordering is somewhat arbitrary. In particular, consecutive chunks are not necessarily adjacent. 
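To reproduce this chunk structure yourself, a small C++ driver along the following lines can be used (a minimal sketch: the uniform epsilon function and the fields::num_chunks member are assumptions about the libmeep C++ API rather than code taken from this page):

#include <meep.hpp>
using namespace meep;

// uniform vacuum; the chunk layout depends on the cell size, resolution and PML, not on epsilon
double eps(const vec &) { return 1.0; }

int main(int argc, char *argv[]) {
  initialize mpi(argc, argv);            // standard libmeep startup (also initializes MPI if present)
  grid_volume gv = vol2d(8.0, 6.0, 5.0); // 8x6 cell at resolution 5 -> 40x30 grid
  structure s(gv, eps, pml(1.0));        // PML of thickness 1 on all sides
  fields f(&s);
  master_printf("number of chunks: %d\n", f.num_chunks);
  return 0;
}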
Chunk Statistics As noted above, each chunk is a contiguous region of space defined by a Cartesian product of intervals for each coordinate; to specify the extents of the chunk it thus suffices to specify the endpoints of the interval for each coordinate, or equivalently the coordinates of the lower-left and upper-right grid points in the chunk. For each chunk, these are represented by ivecs named is and ie (stored in the fields_chunk and dft_chunk structures). Here's an example of how this looks for chunk 3 in the figure above: In this case we have is=(29,-29) and ie=(39,-19). Chunking in the Multiprocessor Case When running in parallel mode, each of the chunks identified for the single-processor case may be further subdivided into new chunks which can be assigned to different processors. For example, on a run with 8 processors, the 9 chunks identified in the single-processor case become 24 chunks: In this image, grid points with different coordinates (different heights off the plane) are handled by different processors, while points with the same coordinate but different colors live in different chunks. In this case, processes 0, 2, 5, and 7 each own 4 chunks, while processes 1, 3, 4, and 6 each own 2 chunks. Symmetries Meep's approach to handling symmetries is discussed from the user's perspective in Exploiting Symmetry and from a high-level algorithmic perspective in Computer Physics Communications, Vol. 181, pp. 687-702, 2010. The following is a brief synopsis of the implementation of this feature. The action of the symmetry group classifies grid points into orbits, sets of grid points that transform into one another under symmetry transformations. For example, in the figure with XY mirror symmetry below, the orbit of is . Meep chooses one element from each orbit (the "parent" grid point) to serve as a representative of the orbit, with the remaining elements of the orbit classified as "children" equivalent to the parent under the group action. (Any point in the orbit could serve equally well as the parent; the convention in meep is to choose the point with the lowest (most negative) grid indices, i.e the point closest to the lower-left corner of the overall grid--- in this case---but nothing depends on this particular choice.) For each orbit, field components are only stored for the parent, not for any children. This reduces memory requirements by a factor , the number of points in each orbit, known in meep as the ``multiplicity'' of the symmetry; for example, for a geometry with Y-mirror symmetry, for an XY-mirror symmetry, for an N-fold rotational symmetry, etc. Loops over grid points run only over parent points, i.e. points with field components stored in memory. However, each parent point is now visited times, once for each distinct symmetry transformation in the symmetry group (including the identity transformation). On the th visit to a given parent point , we (1) look up the components of the fields stored in memory for , (2) apply the transformation to both the grid-point coordinates and the field components of the parent point to yield the coordinates and field components of the th child point, i.e. If the materials are anisotropic (i.e. the permittivity and/or permeability are tensors) we must transform those appropriately as well. (3) use the coordinates and field components of the child point to carry out the operation in question. 
Chunking in the Presence of Symmetries

As noted above, in the presence of symmetries only a portion of the full grid is actually stored in memory. For example, adding a Y mirror symmetry (symmetry under the reflection y -> -y) eliminates points in the upper half-plane y > 0; the points that remain are now subdivided into 6 chunks (in the single-processor case): Adding an X mirror symmetry on top of this (so that now the geometry has both X and Y mirror symmetry) reduces the number of stored grid points by an additional factor of 2; now the geometry is subdivided into just 4 chunks in the single-processor case: In these figures, points in shaded regions are "children" — that is, points for which Meep stores no field components, since they are related by symmetry to "parent" points in the unshaded region. In the second figure we have indicated one complete orbit: a parent point is carried to its child points under the operations of the symmetry group.

Coordinates and Field Components of Symmetry-Reduced Points

Symmetry transformations in libmeep are described by a class called simply symmetry, which offers class methods for transforming grid points and field components:

symmetry S = mirror(X,gv) + mirror(Y,gv); // XY mirror symmetry

ivec iparent; // grid indices of parent point
vec rparent;  // cartesian coordinates of parent point
...
ivec ichild = S.transform(iparent, +1); // grid indices of child point
vec rchild  = S.transform(rparent, +1); // cartesian coordinates of child point

component cchild = Ex; // desired field component at child point
component cparent = S.transform(cchild, -1); // corresponding component at parent point

The loop_in_chunks Routine

To allow developers to implement loops over grid points without stressing out over the various complications outlined above, the fields class in libmeep offers a convenient method called loop_in_chunks. To use this routine, you first write a "chunk-loop function" which carries out some operation involving grid points and (optionally) field components at grid points. Then you pass your routine to loop_in_chunks with some additional arguments customizing the type of loop you want (see below). Your loop function will then be called once for every chunk in the problem---including both chunks whose fields are present in memory and those whose fields aren't due to being eliminated by symmetry---with a long list of arguments describing the chunk in question. The body of your chunk-loop function will typically want to execute a loop over all grid points in the chunk. This is facilitated by a host of utility macros and API functions that operate on the arguments to your function to yield quantities of interest: grid-point coordinates, field-component values, etc.

The Chunk Loop Function

The chunk-loop function that you write and pass to loop_in_chunks has the following prototype:

void my_chunkloop(fields_chunk *fc, int ichunk, component cgrid,
                  ivec is, ivec ie, vec s0, vec s1, vec e0, vec e1,
                  double dV0, double dV1, ivec shift, std::complex<double> shift_phase,
                  const symmetry &S, int sn, void *chunkloop_data);

Notwithstanding this formidable-looking beast of a calling convention, most of the arguments here are things that you can blindly pass on to API functions and convenience macros, which will return quantities whose significance is easy to understand. Here's a skeleton chunk-loop function that executes a loop over all grid points in the chunk, obtaining on each loop iteration both the integer indices and the cartesian coordinates of the child point, as well as values for a list of field components of interest (specified before the loop in the constructor of chunkloop_field_components).
void my_chunkloop(fields_chunk *fc, int ichunk, component cgrid,
                  ivec is, ivec ie, vec s0, vec s1, vec e0, vec e1,
                  double dV0, double dV1, ivec shift, std::complex<double> shift_phase,
                  const symmetry &S, int sn, void *chunkloop_data)
{
  // some preliminary setup
  vec rshift(shift * (0.5*fc->gv.inva)); // shift into unit cell for PBC geometries

  // prepare the list of field components to fetch at each grid point
  component components[] = {Ex, Hz};
  chunkloop_field_components data(fc, cgrid, shift_phase, S, sn, 2, components);

  // loop over all grid points in chunk
  LOOP_OVER_IVECS(fc->gv, is, ie, idx)
  {
    // get grid indices and coordinates of parent point
    IVEC_LOOP_ILOC(fc->gv, iparent); // grid indices
    IVEC_LOOP_LOC(fc->gv, rparent);  // cartesian coordinates

    // apply symmetry transform to get grid indices and coordinates of child point
    ivec ichild = S.transform(iparent, sn) + shift;
    vec rchild  = S.transform(rparent, sn) + rshift;

    // fetch field components at child point
    data.update_values(idx);
    std::complex<double> Ex = data.values[0], Hz = data.values[1];
  }
}

You can fill in the rest of the loop body to do whatever you want with ichild, rchild, and data.values, and the results will be identical whether or not you declare symmetries when defining your geometry. (Well, the results will be identical assuming the physical problem you're considering really is symmetric, which Meep does not check.)

Is There a Version of loop_in_chunks for dft_chunks?

No, but the routine process_dft_component() in src/dft.cpp effectively implements such a routine for a hard-coded set of operations on DFT components (namely: outputting to HDF5, fetching DFT array slices, and computing mode-overlap coefficients).

How the Images were Created

The images above were obtained with the help of a simple C++ code called WriteChunkInfo that calls libmeep API functions to obtain info on the chunk structure of the 40×30 grid we considered. This code (plus a simple hand-written Makefile) lives in the doc/docs/Developer_Codes subdirectory of the meep source distribution.
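For reference, invoking loop_in_chunks with the chunk-loop function above looks roughly like the following (a sketch only: the exact parameter list of fields::loop_in_chunks, including its defaulted trailing arguments, should be checked against meep.hpp for your version):

double my_data = 0.0;                 // arbitrary user data handed through to the chunk-loop function
volume where = f.total_volume();      // region to loop over (here, the entire cell)
f.loop_in_chunks(my_chunkloop, (void *)&my_data, where);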
https://meep.readthedocs.io/en/latest/Chunks_and_Symmetry/
2018-10-15T11:39:28
CC-MAIN-2018-43
1539583509170.2
[]
meep.readthedocs.io
Handles and Objects An object is a data structure that represents a system resource, such as a file, thread, or graphic image. An application cannot directly access object data or the system resource that an object represents. Instead, an application must obtain an object handle, which it can use to examine or modify the system resource. Each handle has an entry in an internally maintained table. These entries contain the addresses of the resources and the means to identify the resource type.
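For example, a classic use of this pattern from C is to obtain a file handle, pass it to other functions that operate on the underlying file object, and then release it (a minimal sketch with reduced error handling):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // CreateFile returns a handle that represents the file object
    HANDLE hFile = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hFile == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    // The handle, not the object itself, is what the application passes around
    DWORD size = GetFileSize(hFile, NULL);
    printf("File size: %lu bytes\n", size);

    // Closing the handle releases this reference to the underlying object
    CloseHandle(hFile);
    return 0;
}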
https://docs.microsoft.com/en-us/windows/desktop/SysInfo/handles-and-objects
2018-10-15T10:38:21
CC-MAIN-2018-43
1539583509170.2
[]
docs.microsoft.com
Quickstart Examples¶ Running a basic workflow¶ A Toil workflow can be run with just three steps: Install Toil (see Installation) Copy and paste the following code block into a new file called the name of the job store and run the workflow: using the default Batch System, singleMachine. using the file job store. Toil uses batch systems to manage the jobs it creates. The singleMachine batch system is primarily used to prepare and debug workflows on a local machine. Once validated, try running them on a full-fledged batch system (see Batch System API). Toil supports many different batch systems such as Apache Mesos and Grid Engine; its versatility makes it easy to run your workflow in all kinds of places. Toil is totally customizable! Run python helloWorld.py --help to see a complete list of available options. For something beyond a “Hello, world!” example, refer to Sort Toil with Extra Features). (venv) $ pip install 'toil[cwl]' This installs the toil-cwl-runnerand cwltoilexecutables. Running a basic WDL workflow¶ The Workflow Description Language (WDL) is another emerging language for writing workflows that are portable across multiple workflow engines and platforms. Running WDL workflows using Toil is still in alpha, and currently experimental. Toil currently supports basic workflow syntax (see WDL in Toil for more details and examples). Here we go over running a basic WDL helloworld workflow. First ensure that Toil is installed with the wdlextra (see Installing Toil with Extra Features). (venv) $ pip install 'toil[wdl]' This installs the toil-wdl-runnerexecutable. Copy and paste the following code block into wdl-helloworld.wdl: workflow write_simple_file { call write_file } task write_file { String message command { echo ${message} > wdl-helloworld-output.txt } output { File test = "wdl-helloworld-output.txt" } } and this code into wdl-helloworld.json: { "write_simple_file.write_file.message": "Hello world!" } To run the workflow simply enter (venv) $ toil-wdl-runner wdl-helloworld.wdl wdl-helloworld.json Your output will be in wdl-helloworld-output.txt (venv) $ cat wdl-helloworld-output.txt Hello world! To learn more about WDL, see the main WDL website . Sort') parser.add_argument("--sortMemory", dest="sortMemory", help="Memory for jobs that sort chunks of the file.", default=None) parser.add_argument("--mergeMemory", dest="mergeMemory", help="Memory for jobs that collate results.", default=None) options = parser.parse_args() if not hasattr(options, "sortMemory") or not options.sortMemory: options.sortMemory = sortMemory if not hasattr(options, "mergeMemory") or not options.mergeMemory: options.mergeMemory = sortMemory #, options=options,, options): """ Sets up the sort. Returns the FileID of the sorted file """ job.log("Starting the merge sort") return job.addChildJobFn(down, inputFile, N, downCheckpoints, options = options, preemptable=True, memory=sortMemory).rv() setup really only does two things. First it writes to the logs using Job.log(), options,.log(, options=options, preemptable=True, memory=options.sortMemory).rv(), job.addChildJobFn(down, job.fileStore.writeGlobalFile(t2), N, downCheckpoints, checkpoint=downCheckpoints, options=options, preemptable=True, memory=options.mergeMemory).rv(), preemptable=True, options=options, memory=options.sortMemory).rv() else: # We can sort this bit of the file job.log(, options,.log( Class Commandline Options. Please see the Status Command section for more on gathering runtime and resource info on multiple AWS instances). 
For more information on running Toil workflows on a cluster, see Running in AWS. Also! Remember to use the destroy-cluster Command command when finished to destroy the cluster! Otherwise things may not be cleaned up properly. Launch a cluster in AWS using the Launch-Cluster Command command. The arguments keyPairName, leaderNodeType, and zoneare required to launch a cluster. (venv) $ toil launch-cluster <cluster-name> --keyPairName <AWS-key-pair-name> --leaderNodeType t2.medium --zone us-west-2a Copy helloWorld.pyto the /tmpdirectory on the leader node using the Rsync-Cluster Command. Also! Remember to use the destroy-cluster Command command when finished to destroy the cluster! Otherwise things may not be cleaned up properly. First launch a node in AWS using the Launch-Cluster Command command. (venv) $ toil launch-cluster <cluster-name> --keyPairName <AWS-key-pair-name> --leaderNodeType t2.medium --zone us-west-2a Copy example.cwland example-job.yamlfrom the CWL example to the node using the Rsync-Cluster Command command. (venv) $ toil rsync-cluster --zone us-west-2a <cluster-name> example.cwl :/tmp (venv) $ toil rsync-cluster --zone us-west-2a <cluster-name> example-job.yaml :/tmp SSH into the cluster’s leader node using the Ssh-Cluster Command utility. (venv) $ toil ssh-cluster --zone us-west-2a <cluster-name> Once on the leader node, it’s a good idea to update and install the following: sudo apt-get update sudo apt-get -y upgrade sudo apt-get -y dist-upgrade sudo apt-get -y install git sudo pip install mesos.cli Now create a new virtualenvwith the --system-site-packagesoption and activate: virtualenv --system-site-packages venv source venv/bin/activate Now run the CWL workflow: (venv) $ toil-cwl-runner /tmp/example.cwl /tmp/example-job.yaml Tip When running a CWL workflow on AWS, input files can be provided either on the local file system or in S3 buckets using s3://URI references. Final output files will be copied to the local file system of the leader node. Finally, log out of the leader node and from your local computer, destroy the cluster. (venv) $ toil destroy-cluster --zone us-west-2a <cluster-name> Running a Workflow with Autoscaling on AWS - Cactus¶ Cactus is a reference-free whole-genome multiple alignment program. Also! Remember to use the destroy-cluster Command command when finished to destroy the cluster! Otherwise things may not be cleaned up properly. Download pestis.tar.gz. Launch a leader node in AWS using the Launch-Cluster Command command. (venv) $ toil launch-cluster <cluster-name> --keyPairName <AWS-key-pair-name> --leaderNodeType t2.medium --zone us-west-2c - Setting the environment variable TOIL_AWS_ZONEeliminates having to do this for each later command. (venv) $ export TOIL_AWS_ZONE=us-west-2c Copy the required files, i.e., seqFile.txt (a text file containing the locations of the input sequences as well as their phylogenetic tree, see here), organisms’ genome sequence files in FASTA format, and configuration files (e.g. blockTrim1.xml, if desired), up to the leader node. 
(venv) $ toil rsync-cluster <cluster-name> pestis-short-aws-seqFile.txt :/tmp (venv) $ toil rsync-cluster <cluster-name> GCF_000169655.1_ASM16965v1_genomic.fna :/tmp (venv) $ toil rsync-cluster <cluster-name> GCF_000006645.1_ASM664v1_genomic.fna :/tmp (venv) $ toil rsync-cluster <cluster-name> GCF_000182485.1_ASM18248v1_genomic.fna :/tmp (venv) $ toil rsync-cluster <cluster-name> GCF_000013805.1_ASM1380v1_genomic.fna :/tmp (venv) $ toil rsync-cluster <cluster-name> setup_leaderNode.sh :/tmp (venv) $ toil rsync-cluster <cluster-name> blockTrim1.xml :/tmp (venv) $ toil rsync-cluster <cluster-name> blockTrim3.xml :/tmp Log into the leader node. (venv) $ toil ssh-cluster <cluster-name> Set up the environment of the leader node to run Cactus. $ bash /tmp/setup_leaderNode.sh $ source cact_venv/bin/activate (cact_venv) $ cd cactus (cact_venv) $ pip install --upgrade . Run Cactus as an autoscaling workflow. (cact_venv) $ TOIL_APPLIANCE_SELF=quay.io/ucsc_cgl/toil:3.12.0 cactus --provisioner aws --nodeType c3.4xlarge --maxNodes 2 --minNodes 0 --retry 10 --batchSystem mesos --disableCaching --logDebug --logFile /logFile_pestis3 --configFile /tmp/blockTrim3.xml aws:us-west-2:cactus-pestis /tmp/pestis-short-aws-seqFile.txt /tmp/pestis_output3.hal Note In this example, we specify the version of Toil to be 3.12.0; if the latest one is desired, please eliminate TOIL_APPLIANCE_SELF=quay.io/ucsc_cgl/toil:3.12.0. The flag --maxNodes 2creates up to two instances of type c3.4xlarge and launches Mesos worker containers inside them. The flag --logDebugis equivalent to --logLevel DEBUG. --logFile /logFile_pestis3: Write log in a file named logFile_pestis3 under /folder. The --configFileflag is not required, depending on whether a specific configuration file is intended to run the alignment. Toil creates a bucket in S3 called aws:us-west-2:cactus-pestis to store intermediate job files and metadata. The result file, named pestis_output3.hal, is stored under /tmpfolder of the leader node. Use cactus --helpto see all the Cactus and Toil flags available. Log out of the leader node. (cact_venv) $ exit Download the resulted output to local machine. (venv) $ toil rsync-cluster <cluster-name> :/tmp/pestis_output3.hal <path-of-folder-on-local-machine> Destroy the cluster. (venv) $ toil destroy-cluster <cluster-name> For other examples and Toil resources see !
https://toil.readthedocs.io/en/3.15.0/gettingStarted/quickStart.html
2018-10-15T10:16:07
CC-MAIN-2018-43
1539583509170.2
[]
toil.readthedocs.io
All content with label editor+wcm. Related Labels: future, post, downloads, template, content, page, examples, gatein, tags, categories, demo, configuration, composition, s, started, installation, api, getting, tasks, uploads, templates, design
https://docs.jboss.org/author/label/editor+wcm
2018-10-15T10:59:49
CC-MAIN-2018-43
1539583509170.2
[]
docs.jboss.org
Installation Building from Source Building Meep directly from the source code can be challenging for users unfamiliar with building Unix software, mainly because of the many prerequisites that must be installed combined with the need to tell Meep's build scripts where to find these prerequisites. Meep's build systems uses the standard GNU Autotools ./configure && make && make install machinery, but requires a number of prerequisites in order to obtain a full-featured Meep installation: MPB, Libctl, Harminv, libGDSII, MPI, HDF5, Python, and Guile. MPB and Harminv, in turn, require LAPACK and BLAS and FFTW to be installed. Gzipped tarballs of stable versions of the source are available on the releases page, and you can also do a git clone of the master branch of the Meep repository on Github if you have Autotools installed. For more information, see Build From Source. The latest version of Meep preinstalled on Ubuntu can be accessed on Amazon Web Services (AWS) Elastic Compute Cloud (EC2) as a free Amazon Machine Image (AMI). To access this AMI, follow these instructions. Conda Packages Official Releases The recommended way to install PyMeep is using the Conda package manager. Binary packages for serial and parallel PyMeep on Linux and macOS are currently available (64 bit architectures only), and are updated with each MEEP release. The easiest way to get started is to install Miniconda, which comes with everything necessary to create Python environments with Conda. For example, to install Miniconda with Python 3 on Linux: wget -O miniconda.sh bash miniconda.sh -b -p <desired_prefix> export PATH=<desired_prefix>/bin:$PATH Next, we create a Conda environment for PyMeep to isolate it from other Python libraries that may be installed. conda create -n mp -c chogan -c conda-forge pymeep This creates an environment called "mp" (you can name this anything you like) with PyMeep and all its dependencies. This will default to the version of Python in your Miniconda installation (Python 3 for us since we installed Miniconda3), but if you want to work with Python 2, just add python=2 to the end of the command. Next, we need to activate the environment before we can start using it. source activate mp Now, python -c 'import meep' should work, and you can try running some of the examples in the meep/python/examples directory. Warning: The pymeep package is built to work with OpenBLAS, which means numpy should also use OpenBLAS. Since the default numpy is built with MKL, installing other packages into the enviornment may cause conda to switch to an MKL-based numpy. This can cause segmentation faults when calling MPB. To work around this, you can make sure the no-mkl conda package is installed, make sure you're getting packages from the conda-forge channel (they use OpenBLAS for everything), or as a last resort, run import meep before importing any other library that is linked to MKL> Installing parallel PyMeep follows the same pattern, but the package is called pymeep-parallel. conda create -n pmp -c chogan -c conda-forge pymeep-parallel source activate pmp The environment includes mpi4py, so you can run an MPI job with 4 processes like this: mpirun -np 4 python <script_name>.py If you run into issues, make sure your PYTHONPATH environment variable is unset. Note: For pymeep-parallel on macOS, a bug in openmpi requires that the environment variable TMPDIR be set to a short path like /tmp. 
Without this workaround, you may see errors similar to this: [laptop:68818] [[53415,0],0] ORTE_ERROR_LOG: Bad parameter in file ../../orte/orted/pmix/pmix_server.c at line 264 [laptop:68818] [[53415,0],0] ORTE_ERROR_LOG: Bad parameter in file ../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 666 Nightly Builds To experiment with new features before they are distributed in an official release, you can try the nightly development builds. Just put the dev label before the other channels like this: # Serial pymeep conda create -n mp_test -c chogan/label/dev -c chogan -c conda-forge pymeep # Parallel pymeep conda create -n pmp_test -c chogan/label/dev -c chogan -c conda-forge pymeep-parallel Installation on Linux For most. For easy access to the Python interface, we provide a binary installation in the form of Conda packages. Details can be found below. The following precompiled packages are available: BLAS and LAPACK possibly as part of a package for Atlas BLAS, Guile, MPI, and HDF5. One thing to be careful of is that many distributions split packages into two parts: one main package for the libraries and programs, and a devel package Ubuntu which has precompiled packages for Meep: apt-get install meep h5utils Installation on macOS Since macOS is, at its heart, a Unix system, one can, in principle compile and install Meep and all its prerequisites just as on any other Unix system. However, this process is much easier using the Homebrew package to install most of the prerequisites, since it will handle dependencies and other details for you. You will need administrator privileges on your Mac. The first steps are: - Install Xcode, the development/compiler package from Apple, free from the Apple Xcode web page. - Install Homebrew: download from the Homebrew site and follow the instructions there. - Run the following commands in the terminal to compile and install the prerequisites. This may take a while to complete because it will install lots of other stuff first brew doctor brew install homebrew/science/hdf5 homebrew/science/openblas guile fftw h5utils Now, install the Harminv, libctl, MPB, and Meep packages from source. Download Harminv and, in the harminv directory, do: ./configure && make && make install Use the same commands for libctl, MPB, and Meep. For more detailed information, see Build From Source. You are done, and can now run Meep (Scheme interface) just by typing meep. You can run make check in the meep directory if you want to perform a self-test. To build the latest version of Meep from source on macOS Sierra, follow these instructions.
https://meep.readthedocs.io/en/latest/Installation/
2018-10-15T11:40:41
CC-MAIN-2018-43
1539583509170.2
[]
meep.readthedocs.io
The Page Content particle displays content assigned to a given page. For example, if you have articles that you would like to have appear on your home page amongst your other particles and modules, you can do so by including the Page Content particle in your layout. The Page Content particle itself doesn't have any specific particle settings. It's simply a portal by which any content you have assigned to that page (or type of page) is displayed. You can still, of course, use the Block settings tab to refine the look and presentation of the content block as you would any other particle.
http://docs.gantry.org/gantry5/particles/page-content
2018-10-15T10:20:56
CC-MAIN-2018-43
1539583509170.2
[]
docs.gantry.org
Terraform

The recommended way to deploy an OSS DC/OS cluster on GCE is by using Terraform.

Disclaimer: Please note this is a community driven project and not officially supported by Mesosphere.

Prerequisites

- Terraform 0.11.x
- Google Cloud Platform (GCP) Credentials configured via: gcloud auth login
- SSH Keys
- Existing Google Project. This is automated with Terraform using project creation as documented here.

Authenticate to Google

Authenticate to the Google Cloud Platform using the credentials listed in the Prerequisites. Your credentials will be downloaded locally for Terraform to use.

$ gcloud auth login

Configure your GCP SSH Keys

You must set the private key that you will be using with ssh-agent and set the public key in Terraform. Setting a private key will allow you to log in to the cluster after DC/OS is deployed. A private key also helps Terraform set up your cluster at deployment time.

$ ssh-add ~/.ssh/your_private_key.pem
$ cat desired_cluster_profile.tfvars
gcp_ssh_pub_key_file = "INSERT_PUBLIC_KEY_PATH_HERE"
...

Configure a Pre-existing Google Project

Currently terraform-dcos assumes that a project already exists in GCP for you to start deploying your resources against. This repository will soon have support for Terraform to create projects on behalf of the user via this document. For now, you will have to create this project ahead of time, or else use an existing project.

$ cat desired_cluster_profile.tfvars
gcp_project = "massive-bliss-781"
...

Example of Terraform Deployments

Quick Start

The typical defaults to experiment with DC/OS are listed as follows:

- Three agents will be deployed for you: two private agents and one public agent.
- It is not required to git clone this repository. Terraform does this for you.

Run the following commands to deploy a multi-master setup in the cloud.

terraform init -from-module github.com/dcos/terraform-dcos//gcp
terraform apply -var gcp_project="your_existing_project"

Custom terraform-dcos variables

The default variables are tracked in the variables.tf file. However, if you run terraform get --updates when you want to fetch new releases of DC/OS to upgrade to, this file may get overwritten. Therefore, it is best to use desired_cluster_profile.tfvars and set your custom Terraform and DC/OS flags there. This way you can keep track of a single file that you can use to manage the cluster throughout its lifecycle.

For a list of supported operating systems for this repository, see the ones that DC/OS recommends here. You can find the list that Terraform supports here.

To apply the configuration file, run the following command:

terraform apply -var-file desired_cluster_profile.tfvars

Advanced YAML configuration

We have designed this task to be flexible. In the following example, the working variables allow customization using a single tfvars file. For advanced users with stringent requirements, here are the DC/OS flag examples where you can simply paste your YAML configuration into your desired_cluster_profile.tfvars. The alternative to YAML is to convert it to JSON.
$ cat desired_cluster_profile.tfvars dcos_version = "1.10.2" os = "centos_7.3" num_of_masters = "3" num_of_private_agents = "2" num_of_public_agents = "1" expiration = "6h" dcos_security = "permissive" dcos_cluster_docker_credentials_enabled = "true" dcos_cluster_docker_credentials_write_to_etc = "true" dcos_cluster_docker_credentials_dcos_owned = "false" dcos_cluster_docker_registry_url = "" gcp_ssh_pub_key_file = "INSERT_PUBLIC_KEY_PATH_HERE" Note: The YAML comment is required for the DC/OS specific YAML settings. Upgrading DC/OSUpgrading DC/OS You can upgrade your DC/OS cluster with a single command. This Terraform script was built to perform installs and upgrades. With the upgrade procedures below, you can also have finer control on how masters or agents upgrade at a given time. This will allow you to change the parallelism of master or agent upgrades. DC/OS UpgradesDC/OS Upgrades Rolling upgradeRolling upgrade Supported upgrade by dcos.io Masters sequential, agents parallel:Masters sequential, agents parallel: terraform apply -var-file desired_cluster_profile.tfvars -var state=upgrade -target null_resource.bootstrap -target null_resource.master -parallelism=1 terraform apply -var-file desired_cluster_profile.tfvars -var state=upgrade All roles simultaneouslyAll roles simultaneously This command is not supported by dcos.io, but it works without dcos_skip_checks enabled. terraform apply -var-file desired_cluster_profile.tfvars -var state=upgrade MaintenanceMaintenance If you would like to add or remove private or public agents from your cluster, you can do so by telling Terraform your desired state and it will make the required changes. For example, if you have two private agents and one public agent in your -var-file, you can override that flag by specifying the -var flag. The var flag has higher priority than the -var-file. Adding agentsAdding agents terraform apply \ -var-file desired_cluster_profile.tfvars \ -var num_of_private_agents=5 \ -var num_of_public_agents=3 Removing agentsRemoving agents Caution: Always remember to save your desired state in your desired_cluster_profile.tfvars before removing an agent. terraform apply \ -var-file desired_cluster_profile.tfvars \ -var num_of_private_agents=1 \ -var num_of_public_agents=1 Redeploy an existing masterRedeploy an existing master If you want to redeploy a problematic master (for example, your storage has filled up, the cluster is not responsive), you can tell Terraform to redeploy during the next cycle. Note: This only applies to DC/OS clusters that have set their dcos_master_discovery to master_http_loadbalancer and not static. Master NodeMaster Node Taint master nodeTaint master node terraform taint google_compute_instance.master.0 # The number represents the agent in the list. Redeploy master nodeRedeploy master node terraform apply -var-file desired_cluster_profile.tfvars Redeploy an existing agentRedeploy an existing agent If you want to redeploy a problematic agent, you can tell Terraform to redeploy during the next cycle. Private AgentsPrivate Agents Taint private agentTaint private agent terraform taint google_compute_instance.agent.0 # The number represents the agent in the list. 
Redeploy agent

terraform apply -var-file desired_cluster_profile.tfvars

Public Agents

Taint public agent

terraform taint google_compute_instance.public-agent.0 # The number represents the agent in the list

Redeploy agent

terraform apply -var-file desired_cluster_profile.tfvars

Experimental

Adding GPU private agents

Destroying a cluster

You can shut down and/or destroy all resources from your environment by running the following command:

terraform destroy -var-file desired_cluster_profile.tfvars
https://docs.mesosphere.com/1.11/installing/evaluation/cloud-installation/gce/
2018-10-15T10:34:43
CC-MAIN-2018-43
1539583509170.2
[]
docs.mesosphere.com
For more advanced Scheme examples involving technology applications, see the Simpetus projects page. In order to convert the HDF5 output files of Meep into images of the fields, this tutorial uses the h5utils package. You could also use any other package (e.g., Octave or Matlab) that supports reading HDF5 files.

The Scheme Script File

The use of Meep revolves around the script (or control) file, abbreviated "ctl" and typically called something like foo.ctl. The script file specifies the geometry, the current sources, the outputs computed, and everything else specific to your calculation. Rather than a flat, inflexible file format, however, the script file is written in a full scripting language: simple things stay simple, and you don't need to be an experienced programmer. You will appreciate the flexibility that a scripting language gives you: e.g., you can input things in any order, without regard for whitespace, insert comments where you please, omit things when reasonable defaults are available, etc.

The script file is actually implemented on top of the libctl library, a set of utilities that are in turn built on top of the Scheme language. Thus, there are three sources of possible commands and syntax for a script file:

- Scheme: a programming language developed at MIT. The syntax is particularly simple: all statements are of the form (function arguments...). We run Scheme under the Guile interpreter which is designed to be plugged into programs as a scripting and extension language. You don't need to know much Scheme for a basic script file, but it is always there if you need it. For more details, see Guile and Scheme Information.
- libctl: a library built on top of Guile to simplify protocols for scientific computation. libctl sets the basic tone of the interface and defines a number of useful functions (such as multi-variable optimization, numeric integration, and so on). See the libctl documentation.
- Meep itself: the functions and variables specific to FDTD simulation, which are what this documentation describes. MPB has a similar interface.

Let's continue with our tutorial. The Meep program is normally invoked by running something like the following at the Unix command line:

unix% meep foo.ctl >& foo.out

which reads foo.ctl and executes it, saving the output to the file foo.out. However, if you invoke meep with no arguments, you are dropped into an interactive mode in which you can type commands and see their results immediately. You can paste in the commands from the tutorial as you follow it and see what they do.

Fields in a Waveguide

For our first example, let's examine the field pattern excited by a localized CW source in a waveguide — first straight, then bent. The waveguide will have frequency-independent ε=12 and width 1 μm. The unit length in this example is 1 μm. See also Units.

A Straight Waveguide

Before we define the structure, however, we have to define the computational cell. We're going to put a source at one end and watch it propagate down the waveguide in the x direction, so let's use a cell of length 16 μm in the x direction to give it some distance to propagate. In the y direction, we just need enough room so that the boundaries don't affect the waveguide mode; let's give it a size of 8 μm. We specify these sizes via the geometry-lattice variable:

(set! geometry-lattice (make lattice (size 16 8 no-size)))

The no-size component indicates that the computational cell has no size in the z direction, i.e. it is two-dimensional. Now we can add the waveguide. Most commonly, the structure is specified by a list of geometric-objects, stored in the geometry variable.

(set!
geometry (list (make block (center 0 0) (size infinity 1 infinity) (material (make medium (epsilon 12))))))

The waveguide is specified by a block (parallelepiped) of size infinity×1×infinity (infinite along x and z, 1 μm wide along y), with ε=12, centered at (0,0) which is the center of the computational cell. By default, any place where there are no objects there is air (ε=1), although this can be changed by setting the default-material variable. The resulting structure is shown below.

Now that we have the structure, we need to specify the current sources using the sources object. The simplest thing is to add a single point source Jz:

(set! sources (list (make source (src (make continuous-src (frequency 0.15))) (component Ez) (center -7 0))))

We gave the source a frequency of 0.15, and specified a continuous-src which is just a fixed-frequency sinusoid that by default is turned on at t=0. Recall that, in Meep units, frequency is specified in units of 2πc, which is equivalent to the inverse of the vacuum wavelength. Thus, 0.15 corresponds to a vacuum wavelength of about 1/0.15=6.67 μm, or a wavelength of about 2 μm in the ε=12 material. Note that to specify an electric-current source Jz we specify the component Ez (e.g. if we wanted a magnetic current, we would specify Hx, Hy, or Hz). The current is located at (-7,0), which is 1 μm to the right of the left edge of the cell — we always want to leave a little space between sources and the cell boundaries, to keep the boundary conditions from interfering with them.

As for boundary conditions, we surround the cell with absorbing boundaries in the form of a perfectly matched layer (PML) of thickness 1 μm on every side:

(set! pml-layers (list (make pml (thickness 1.0))))

We note that the PML layer lies inside the computational cell, overlapping whatever objects are placed there. For more information, see Perfectly Matched Layer.

Meep will discretize this structure in space and time, and that is specified by a single variable, resolution, that gives the number of pixels per distance unit. We'll set this resolution to 10 pixels/μm, which corresponds to around 67 pixels/wavelength, or around 20 pixels/wavelength in the high-index material. In general, at least 8 pixels/wavelength in the highest dielectric is a good idea. This will give us a 160×80 cell.

(set! resolution 10)

We are now ready to run the simulation. We time-step the fields until a time of 200, outputting the dielectric function at the beginning of the run and the electric-field component Ez at the end:

(run-until 200 (at-beginning output-epsilon) (at-end output-efield-z))

We can analyze and visualize these output files with a wide variety of packages. We see that the source has excited the waveguide mode, but has also excited radiating fields propagating away from the waveguide. At the boundaries, the field quickly goes to zero due to the PML layers. If we look carefully, we see something else — the image is "speckled" towards the right side. This is because, by turning on the current abruptly at t=0, we have excited high-frequency components (very high order modes), and we have not waited long enough for them to die away; we'll eliminate these in the next section by turning on the source more smoothly.

A 90° Bend

To model the bent waveguide, we replace the structure with two blocks, one horizontal and one vertical:

(set! geometry (list
  (make block (center -2 -3.5) (size 12 1 infinity) (material (make medium (epsilon 12))))
  (make block (center 3.5 2) (size 1 12 infinity) (material (make medium (epsilon 12))))))
(set! pml-layers (list (make pml (thickness 1.0))))
(set! resolution 10)

Note that we have two blocks, both off-center to produce the bent waveguide structure pictured at right. As illustrated in the figure, the origin (0,0) of the coordinate system is at the center of the computational cell, with positive y being downwards in h5topng, and thus the block of size 12×1 is centered at (-2,-3.5). Also shown in green is the source plane at x=-7, which is shifted down to y=-3.5 so that it is still inside the waveguide.

There are a couple of items to note. First, a point source does not couple very efficiently to the waveguide mode, so we'll expand this into a line source the same width as the waveguide by adding a size property to the source. An eigenmode source can also be used which is described in Tutorial/Optical Forces.
Second, instead of turning the source on suddenly at t=0 which excites many other frequencies because of the discontinuity, we will ramp it on slowly. Meep uses a hyperbolic tangent (tanh) turn-on function over a time proportional to the width of 20 time units which is a little over three periods. Finally, just for variety, we'll specify the vacuum wavelength instead of the frequency; again, we'll use a wavelength such that the waveguide is half a wavelength wide. (set! sources (list (make source (src (make continuous-src (wavelength (* 2 (sqrt 11))) ))) "ez" determines the name of the output file, which will be called ez.h5 if you are running interactively or will be prefixed with the name of the file name for a script 162162330 array, where the last dimension is time. This is rather a large file, 69MB; later, we'll see ways to reduce this size if we only want images. and there are other tools that work as well. unix% convert ez.t*.png ez.gif We are using an animated GIF format for the output. This results in the following animation: It is clear that the transmission around the bend is rather low for this frequency and structure — both large reflection and large radiation loss are clearly visible. Moreover, since we are operating direction, in which you can see that the second-order leaky mode decays away, leaving us with the fundamental mode propagating downward. Instead of doing an animation, another interesting possibility is to make an image from a slice. Here is the slice, which gives us an image of the fields in the first waveguide branch as a function of time. unix% h5topng -0y -35 -Zc dkbluered ez.h5 The -0y -35 specifies the slice, where we have multiplied by 10 (our resolution) to get the pixel coordinate. Output Tips and Tricks Above, we outputted the full 2d data slice at every 0.6 time units, resulting in a 69MB file. This is not large. script file is filename.ctl. What if we want to output an slice, as above? To do this, we only really wanted the values at , and therefore we can exploit another powerful Meep output feature — Meep allows us to output only a subset of the computational cell. This is done using the in-volume function, which similar tox330 corresponding to the desired slice. Transmittance Spectrum of a Waveguide Bend We have computed the field patterns for light propagating around a waveguide bend. While this can be visually informative, the results are not quantitatively satisfying. We'd like to know exactly how much power makes it around the bend (transmittance), how much is reflected (reflectance), and how much is radiated away (scattered loss). How can we do this? The basic principles are described in Introduction. The computation involves keeping track of the fields and their Fourier transform in a certain region, and from this computing the flux of electromagnetic energy as a function of ω. Moreover, we'll get an entire spectrum of the transmittance in a single run, by Fourier-transforming the response to a short pulse. However, in order to normalize the transmitted flux by the incident power to obtain the transmittance, we'll have to do two runs, one with and one without the bend (i.e., a straight waveguide). This script will be more complicated than before, so it is more convenient to run as a file (bend-flux.ctl) rather than typing it interactively. do meep sx=17 bend-flux.ctl to change the size to 17, without editing the script, we will have to use arithmetic. For example, the center of the horizontal waveguide will be given by -0.5*(sy-w-2*pad). 
At least, that is what the expression would look like in C; in Scheme, the syntax for 1+2 is (+ 1 2), and so on, so we will define the horizontal and vertical waveguide centers as:

(define wvg-xcen (* 0.5 (- sx w (* 2 pad))))  ; x center of vert. wvg
(define wvg-ycen (* -0.5 (- sy w (* 2 pad)))) ; y center of horiz. wvg

We proceed to define the geometry, as before. This time, however, we really want two geometries: the bend, and also a straight waveguide for normalization. We could do this with two separate script files, but that is annoying. Instead, we'll define a parameter no-bend? which is true for the straight-waveguide case and false for the bend.

(define-param no-bend? false) ; if true, have straight waveguide, not bend

We define the geometry via two cases, with an if statement — the Scheme syntax is (if predicate? if-true if-false).

(set! geometry
      (if no-bend?
          (list (make block (center 0 wvg-ycen) (size infinity w infinity) (material (make medium (epsilon 12)))))
          (list (make block (center (* -0.5 pad) wvg-ycen) (size (- sx pad) w infinity) (material (make medium (epsilon 12))))
                (make block (center wvg-xcen (* 0.5 pad)) (size w (- sy pad) infinity) (material (make medium (epsilon 12)))))))

Thus, if no-bend? is true we make a single block for a straight waveguide, and otherwise we make two blocks for a bent waveguide. The source is now a pulsed (Gaussian) source centered at fcen with width df, and it is positioned using the same parameters (wvg-ycen, w, and so on), so if we change the dimensions everything will shift automatically. The boundary conditions and resolution are set as before. The flux planes at which the spectra are computed are placed a short distance from the boundary of the cell, so that they do not lie within the absorbing PML regions. Again, there are two cases: the transmitted flux is either computed at the right or the bottom of the computational cell, depending on whether the waveguide is straight or bent. The fluxes will be computed for nfreq=100 frequencies centered on fcen, from fcen-df/2 to fcen+df/2. That is, we only compute fluxes for frequencies within our pulse bandwidth. This is important because, far outside the pulse bandwidth, the spectral power is so low that numerical errors make the computed fluxes useless.

As described in Introduction, computing the reflection spectra requires some care: the Fourier-transformed incident fields from the straight-waveguide (normalization) run are saved with save-flux to a file named bend-flux-refl-flux.h5 (the script file name is prepended to "refl-flux"), and they are then loaded, negated, with load-minus-flux before the bend run so that only the reflected fields remain. At the end of each run the flux spectra are printed as comma-delimited text that is easy to import into Octave/Matlab: the first column is the frequency, the second is the transmitted power, and the third is the reflected power.

We need to run the simulation twice, once with no-bend?=true and once with no-bend?=false (the default), and then extract the flux data into .dat files:

unix% meep no-bend?=true bend-flux.ctl | tee bend0.out
unix% meep bend-flux.ctl | tee bend.out
unix% grep flux1: bend0.out |cut -d , -f2- > bend0.dat
unix% grep flux1: bend.out |cut -d , -f2- > bend.dat

We import them into Octave/Matlab (using its dlmread command), and plot the results: What are we plotting here? The transmittance is the transmitted flux (second column of bend.dat) divided by the incident flux (second column of bend0.dat), to give us the fraction of power transmitted. The reflectance is the reflected flux (third column of bend.dat) divided by the incident flux (second column of bend0.dat). We also have to multiply by -1 because all fluxes in Meep are computed in the positive-coordinate direction by default, and we want the flux in the -x direction, back toward the source. Finally, the scattered loss is simply 1 - transmittance - reflectance.

We should also check whether our data is converged. We can do this by increasing the resolution and cell size and seeing by how much the numbers change. In this case, any remaining variation in the transmittance and loss probably stems from interference between light radiated directly from the source and light propagating around the waveguide.
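As a short sketch of that post-processing step (assuming the flux data has been extracted into bend0.dat and bend.dat as above), the Octave commands could look like the following:

b0 = dlmread("bend0.dat", ",");   % straight-waveguide (normalization) run
b  = dlmread("bend.dat", ",");    % bend run
freqs = b(:,1);
T = b(:,2) ./ b0(:,2);            % transmittance
R = -b(:,3) ./ b0(:,2);           % reflectance (sign flip for the -x direction)
L = 1 - T - R;                    % scattered loss
plot(freqs, T, freqs, R, freqs, L);
legend("transmittance", "reflectance", "loss");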
Angular Reflectance Spectrum of a Planar Interface

We turn to a similar but slightly different example for which there exists an analytic solution via the Fresnel equations: computing the broadband reflectance spectrum of a planar air-dielectric interface for an incident planewave over a range of angles. Similar to the previous example, we will need to run two simulations: (1) an empty cell with air/vacuum (n=1) everywhere to obtain the incident flux, and (2) with the dielectric (n=3.5) interface to obtain the reflected flux. For each angle of the incident planewave, a separate simulation is necessary. A 1d cell must be used since a higher-dimensional cell will introduce artificial modes due to band folding. We will use a Gaussian source spanning visible wavelengths of 0.4 to 0.8 μm. Unlike a continuous-wave (CW) source, a pulsed source turns off. This enables a termination condition of when there are no fields remaining in the cell (due to absorption by the PMLs) via the run function stop-when-fields-decayed, similar to the previous example.

Creating an oblique planewave source typically requires specifying two parameters: (1) for periodic structures, the Bloch-periodic wavevector via k-point, and (2) the source amplitude function amp-func for setting the spatial dependence of the source (where r is the position vector). Since we have a 1d cell and the source is at a single point, it is not necessary to specify the source amplitude (see this 2d example for how this is done).

The magnitude of the Bloch-periodic wavevector is specified according to the dispersion relation for a planewave in homogeneous media with index n: |k| = nω. As the source in this example is incident from air, |k| is simply equal to the frequency ω (the minimum frequency of the pulse, which excludes the 2π factor). Note that a fixed wavevector only applies to a single frequency. Any broadband source is therefore incident at a specified angle for only a single frequency. This is described in more detail in Section 4.5 ("Efficient Frequency-Angle Coverage") in Chapter 4 ("Electromagnetic Wave Source Conditions") of the book Advances in FDTD Computational Electrodynamics: Photonics and Nanotechnology.

In this example, the plane of incidence, which contains the wavevector k and the surface normal vector, is xz. The source angle θ is defined in degrees in the counterclockwise (CCW) direction around the y axis with 0 degrees along the +z axis. In Meep, a 1d cell is defined along the z direction. When the k-point is not set, only the Ex and Hy field components are permitted. A non-zero k-point results in a 3d simulation where all field components are allowed and are complex (the fields are real, by default). A current source with Ex polarization lies in the plane of incidence and corresponds to the convention of P-polarization. In order to model the S-polarization, we must use an Ey source. This example involves just the P-polarization.

The simulation script is refl-angular.ctl

(set-param! resolution 200) ; pixels/um

(define-param dpml 1) ; PML thickness
(define-param sz 10)  ; size of computational cell (without PMLs)
(set! sz (+ sz (* 2 dpml)))
(set! pml-layers (list (make pml (thickness dpml))))
(set!
(set! geometry-lattice (make lattice (size no-size no-size sz)))
(define-param wvl-min 0.4) ; minimum wavelength of source
(define-param wvl-max 0.8) ; maximum wavelength of source
(define fmin (/ wvl-max)) ; minimum frequency of source
(define fmax (/ wvl-min)) ; maximum frequency of source
(define fcen (* 0.5 (+ fmin fmax))) ; center frequency of source
(define df (- fmax fmin)) ; frequency width of source
(define-param nfreq 50) ; number of frequency bins
; rotation angle (in degrees) of source: CCW around Y axis, 0 degrees along +Z axis
(define-param theta 0)
(define theta-r (deg->rad theta))
; if source is at normal incidence, force number of dimensions to be 1
(set! dimensions (if (= theta-r 0) 1 3))
; plane of incidence is xz
(set! k-point (vector3* fmin (vector3 (sin theta-r) 0 (cos theta-r))))
(set! sources (list (make source (src (make gaussian-src (frequency fcen) (fwidth df))) (component Ex) (center 0 0 (+ (* -0.5 sz) dpml)))))
(define-param empty? true)
; add a block with n=3.5 for the air-dielectric interface
(if (not empty?)
    (set! geometry (list (make block (size infinity infinity (* 0.5 sz)) (center 0 0 (* 0.25 sz)) (material (make medium (index 3.5)))))))
(define refl (add-flux fcen df nfreq (make flux-region (center 0 0 (* -0.25 sz)))))
(if (not empty?) (load-minus-flux "refl-flux" refl))
(run-sources+ (stop-when-fields-decayed 50 Ex (vector3 0 0 (+ (* -0.5 sz) dpml)) 1e-9))
(if empty? (save-flux "refl-flux" refl))
(display-fluxes refl)

The simulation script above computes and prints to standard output the reflected flux at each frequency. Also included in the output is the wavevector component kx and the corresponding angle for the (kx, ω) pair. For those frequencies not equal to the minimum frequency of the source, this is not the same as the specified angle of the incident planewave, but rather sin⁻¹(kx/ω). The following Bash shell script runs the simulation for the angular range of 0° to 80° in increments of 5°. For each run, the script pipes the output to one file and extracts the reflectance data to a different file.

#!/bin/bash
for i in `seq 0 5 80`; do
    meep empty?=true theta=${i} refl-angular.ctl |tee -a flux0_t${i}.out;
    grep flux1: flux0_t${i}.out |cut -d , -f2- > flux0_t${i}.dat
    meep empty?=false theta=${i} refl-angular.ctl |tee -a flux_t${i}.out;
    grep flux1: flux_t${i}.out |cut -d , -f2- > flux_t${i}.dat
done

Two-dimensional plots of the angular reflectance spectrum based on the simulated data and the analytic Fresnel equations are generated using the Octave/Matlab script below. The plots are shown in the accompanying figure with four insets. The top left inset shows the simulated and analytic reflectance spectra at a wavelength of 0.6 μm. The top right inset shows the simulated reflectance spectrum as a function of the wavelength λ and wavevector kx: Rmeep(λ, kx). The lower left inset is a transformation of Rmeep(λ, kx) into Rmeep(λ, θ). Note how the range of angles depends on the wavelength. For a particular angle, the reflectance is a constant for all wavelengths due to the dispersionless dielectric. The lower right inset is the analytic reflectance spectrum computed using the Fresnel equations. There is agreement between the simulated and analytic results. The Brewster's angle, where the transmittance is 1 and the reflectance is 0, is tan⁻¹(3.5/1) = 74.1°. This is also verified by the simulated results. In order to generate results for the missing portion of the reflectance spectrum (i.e., the white region), we will need to rerun the simulations for different wavelength spectra.
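To make the relation between the specified angle and the actual incidence angle at each frequency concrete, here is a short worked example using the parameters of the script above (the numbers are purely illustrative and are not additional simulation output). With n=1 and c=1, the dispersion relation gives |k| = ω, so the script sets k = fmin·(sinθ, 0, cosθ) with fmin = 1/0.8 = 1.25. For a specified angle θ = 40°, kx = 1.25·sin 40° ≈ 0.80. At a higher frequency within the pulse, say ω = 1/0.5 = 2.0, that same kx corresponds to an actual incidence angle of sin⁻¹(kx/ω) = sin⁻¹(0.80/2.0) ≈ 23.7°. This is why each (kx, ω) pair in the output maps to its own angle, and why the range of covered angles shrinks at shorter wavelengths.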
theta_in = [0:5:80];
Rmeep = [];
for j = 1:length(theta_in)
  f0 = dlmread(sprintf("flux0_t%d.dat",theta_in(j)),',');
  f = dlmread(sprintf("flux_t%d.dat",theta_in(j)),',');
  Rmeep = [Rmeep -f(:,2)./f0(:,2)];
endfor
freqs = f(:,1);
% convert frequency to wavelength
wvl = 1./freqs;
% create a 2d matrix for the wavelength by repeating the column vector for each angle
wvls = repmat(wvl,1,length(theta_in));
wvl_min = 0.4;
wvl_max = 0.8;
fcen = (1/wvl_min+1/wvl_max)/2;
kx = fcen*sind(theta_in);
kxs = repmat(kx,length(wvl),1);
thetas = asind(kxs./freqs);

figure;
pcolor(kxs,wvls,Rmeep);
shading interp; c = colormap("hot"); colormap(c); colorbar;
eval(sprintf("axis([%0.2g %0.2g %0.2g %0.2g])",kx(1),kx(end),min(wvl),max(wvl)));
eval(sprintf("set(gca, 'xtick', [%0.2g:0.2:%0.2g])",kx(1),kx(end)));
eval(sprintf("set(gca, 'ytick', [%0.1g:0.1:%0.1g])",wvl(end),wvl(1)));
xlabel("wavevector of Bloch-Periodic boundary condition (k_x/2π)");
ylabel("wavelength (μm)");
title("reflectance (meep)");

figure;
pcolor(thetas,wvls,Rmeep);
shading interp; c = colormap("hot"); colormap(c); colorbar;
xlabel("angle of incident planewave (degrees)");
ylabel("wavelength (μm)");
title("reflectance (meep)");

n1 = 1;
n2 = 3.5;
% compute angle of refracted planewave in medium n2
% for incident planewave in medium n1 at angle theta_in
theta_out = @(theta_in) asin(n1*sin(theta_in)/n2);
% compute Fresnel reflectance for P-polarization in medium n2
% for incident planewave in medium n1 at angle theta_in
R_fresnel = @(theta_in) abs((n1*cos(theta_out(theta_in))-n2*cos(theta_in))./(n1*cos(theta_out(theta_in))+n2*cos(theta_in))).^2;
Ranalytic = R_fresnel(thetas*pi/180);

figure;
pcolor(thetas,wvls,Ranalytic);
shading interp; c = colormap("hot"); colormap(c); colorbar;
xlabel("angle of incident planewave (degrees)");
ylabel("wavelength (μm)");
title("reflectance (analytic)");

Modes of a Ring Resonator

As described in Introduction, another common task for FDTD simulation is to find the resonant modes — frequencies and decay rates — of some cavity structure. You might want to read that again to recall the basic simulation strategy. We will show how this works for a ring resonator, which is simply a waveguide bent into a circle. This script can also be found in ring.ctl. In fact, since this structure has cylindrical symmetry, we can simulate it much more efficiently by using cylindrical coordinates, but for illustration here we'll use an ordinary 2d cell. As before, we first define parameters for the geometry (the waveguide index n, its width w, the inner ring radius r, the padding between the ring and the PML, and the PML thickness dpml) along with a square cell just large enough to contain them. The ring itself is built from two cylinder objects, one inside the other:

(set! geometry (list
  (make cylinder (center 0 0) (height infinity) (radius (+ r w)) (material (make medium (index n))))
  (make cylinder (center 0 0) (height infinity) (radius r) (material air))))
(set! pml-layers (list (make pml (thickness dpml))))
(set-param! resolution 10)

Later objects in the geometry list take precedence over (lie "on top of") earlier objects, so the second air (ε=1) cylinder cuts a circular hole out of the larger cylinder, leaving a ring of width w. We don't know the frequency of the mode(s) ahead of time, so we'll just hit the structure with a broad Gaussian pulse to excite all of the Ez-polarized modes in a chosen bandwidth:

(define-param fcen 0.15) ; pulse center frequency
(define-param df 0.1) ; pulse frequency width
(set! sources (list (make source (src (make gaussian-src (frequency fcen) (fwidth df))) (component Ez) (center (+ r 0.1) 0))))

and then analyze the response, running until 300 time units after the source has turned off:

(run-sources+ 300 (at-beginning output-epsilon) (after-sources (harminv Ez (vector3 (+ r 0.1)) fcen df)))

The signal processing is performed by the harminv function, which takes four arguments: the field component Ez and position (r+0.1, 0) to analyze, and a frequency range given by a center frequency and bandwidth (here, the same as the source pulse). harminv fits the time series of the field at the given point, and expresses this as a sum of modes (in the specified bandwidth): f(t) = Σₙ aₙ exp(−iωₙt), for complex amplitudes aₙ and complex frequencies ωₙ. The six columns of the output relate to these quantities. The first column is the real part of ωₙ, expressed in our usual 2πc units, and the second column is the imaginary part — a negative imaginary part corresponds to an exponential decay.
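As a quick worked illustration of that last point, consider a mode whose real frequency is 0.175 with a Q of about 1677, the values reported in the next paragraph (the numbers here are derived from those values, not additional harminv output). The imaginary part of the frequency is then roughly −0.175/(2·1677) ≈ −5.2×10⁻⁵, so the field envelope decays as exp(−2π·5.2×10⁻⁵·t): the energy decays by exp(−2π) after about 1/(2·5.2×10⁻⁵) ≈ 10⁴ time units, or roughly 1700–2000 optical periods at this frequency, consistent with the lifetime quoted below.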
This decay rate, for a cavity, is more often expressed as a dimensionless "lifetime" Q, defined by Q = Re ω / (−2 Im ω). Q is the number of optical periods for the energy to decay by exp(−2π), and 1/Q is the fractional bandwidth at half-maximum of the resonance peak in Fourier domain. This Q is the third column of the output. The fourth and fifth columns are the absolute value |aₙ| and the complex amplitude aₙ. The last column is a crude measure of the error in the frequency (both real and imaginary); if this error is much larger than the imaginary part of the frequency, the reported Q cannot be trusted. For example, there are three modes. The last has a Q of 1677, which means that the mode decays for about 2000 periods or about 2000/0.175 ≈ 10⁴ time units. We have only analyzed it for about 300 time units, however, and the estimated uncertainty in the frequency is 10⁻⁷ (with an actual error of about 10⁻⁶, as seen below). In general, you need to increase the run time to get more accuracy, and to find very high Q values, but not by much. In some cases, modes with Q of around 10⁹ can be found with only 200 periods. In this case, we found three modes in the specified bandwidth, at frequencies of 0.118, 0.147, and 0.175, with corresponding Q values of 81, 316, and 1677. As was shown by Marcatili in 1969, the Q of a ring resonator increases exponentially with the product of ω and ring radius.

To look at the field patterns of these modes, we re-run the simulation with a narrow-bandwidth source centered on one of the mode frequencies and, after the source has turned off and the other modes have decayed away, output the Ez field over one full period 1/fcen by appending the command:

(run-until (/ 1 fcen) (at-every (/ 1 fcen 20) output-efield-z))

The resulting images show standing-wave patterns. These standing-wave modes come in even/odd pairs under mirror flips through the x axis, whereas we excited only the even modes due to our source symmetry. Equivalently, one can form clockwise and counter-clockwise propagating modes by taking linear combinations of the even/odd modes, corresponding to an angular dependence exp(imφ) for m=3, 4, and 5 in this case. You may have noticed, by the way, that when you run with the narrow-bandwidth source, harminv gives you slightly different frequency and Q estimates; the frequency differs by about 10⁻⁶ from the earlier estimate, and the difference in Q is, of course, larger because a small absolute error in ω gives a larger relative error in the small imaginary frequency.

Exploiting Symmetry

In this case, because we have a mirror symmetry plane (the x axis) that preserves both the structure and the sources, we can exploit this mirror symmetry to speed up the computation. See also Exploiting Symmetry. In particular, everything about the input file is the same except that we add a single line, right after we specify the sources:

(set! symmetries (list (make mirror-sym (direction Y))))

This tells Meep to exploit a mirror-symmetry plane through the origin perpendicular to the y direction. In general, the symmetry of the sources may require some phase. For example, if our source was in the y direction instead of the z direction, then the source would be odd under mirror flips through the x axis. We would specify this by (make mirror-sym (direction Y) (phase -1)). See User Interface for how to solve for modes of this cylindrical geometry much more efficiently.

Visualizing 3d Structures

The previous examples were based on 1d or 2d structures which can be visualized using h5topng of the h5utils package. In order to visualize 3d structures, you can use Mayavi. The following example, which includes a simulation script and shell commands, involves a sphere with index 3.5 perforated by a conical hole. There are no other simulation parameters specified. The permittivity data is written to an HDF5 file using output-epsilon. The HDF5 data is then converted to VTK using h5tovtk. VTK data can be visualized using Mayavi or Paraview via the IsoSurface module.
(set-param! resolution 50)
(set! geometry-lattice (make lattice (size 3 3 3)))
(set! geometry (list
  (make sphere (radius 1) (material (make medium (index 3.5))) (center 0 0 0))
  (make cone (radius 0.8) (radius2 0.1) (height 2) (material air) (center 0 0 0))))
(init-fields)
(output-epsilon)
(exit)

#!/bin/bash
meep sphere-cone.ctl;
h5tovtk -o epsilon.vtk sphere-cone-eps-000000.00.h5;
mayavi2 -d epsilon.vtk -m IsoSurface &> /dev/null &

Editors and ctl

It is useful to have emacs use its scheme-mode for editing ctl script files; Scheme syntax highlighting is also available in other editors such as gedit. There is also a syntax highlighting feature for Meep/MPB.
https://meep.readthedocs.io/en/latest/Scheme_Tutorials/Basics/
Source code: Lib/tempfile.py This module creates temporary files and directories. It works on all supported platforms. TemporaryFile, NamedTemporaryFile, TemporaryDirectory, and SpooledTemporaryFile are high-level interfaces which provide automatic cleanup and can be used as context managers. mkstemp() and mkdtemp() are lower-level functions which require manual cleanup. All the user-callable functions and constructors take additional arguments which allow direct control over the location and name of temporary files and directories. File names used by this module include a string of random characters which allows those files to be securely created in shared temporary directories. To maintain backward compatibility, the argument order is somewhat odd; it is recommended to use keyword arguments for clarity. The module defines the following user-callable items: tempfile.TemporaryFile(mode='w+b', buffering=None, encoding=None, newline=None, suffix=None, prefix=None, dir=None) Return a file-like object that can be used as a temporary storage area. The file is created securely, using the same rules as mkstemp(). It will be destroyed as soon as it is closed (including an implicit close when the object is garbage collected). Under Unix, the directory entry for the file is either not created at all or is removed immediately after the file is created. Other platforms do not support this; your code should not rely on a temporary file created using this function having or not having a visible name in the file system. The resulting object can be used as a context manager (see Examples). On completion of the context or destruction of the file object the temporary file will be removed from the filesystem. The dir, prefix and suffix parameters have the same meaning and defaults as with mkstemp(). The returned object is a true file object on POSIX platforms. On other platforms, it is a file-like object whose file attribute is the underlying true file object. The os.O_TMPFILE flag is used if it is available and works (Linux-specific, requires Linux kernel 3.11 or later). Changed in version 3.5: The os.O_TMPFILE flag is now used if available. tempfile.NamedTemporaryFile(mode='w+b', buffering=None, encoding=None, newline=None, suffix=None, prefix=None, dir=None, delete=True) This function operates exactly as TemporaryFile() does, except that the file is guaranteed to have a visible name in the file system (on Unix, the directory entry is not unlinked). That name can be retrieved from the name attribute of the returned file-like object. If delete is true (the default), the file is deleted as soon as it is closed. tempfile.SpooledTemporaryFile(max_size=0, mode='w+b', buffering=None, encoding=None, newline=None, suffix=None, prefix=None, dir=None) This function operates exactly as TemporaryFile() does, except that data is spooled in memory until the file size exceeds max_size, or until the file's fileno() method is called, at which point the contents are written to disk and operation proceeds as with TemporaryFile(). tempfile.TemporaryDirectory(suffix=None, prefix=None, dir=None) This function securely creates a temporary directory using the same rules as mkdtemp(). The resulting object can be used as a context manager (see Examples). On completion of the context or destruction of the temporary directory object, the newly created temporary directory and all its contents are removed from the filesystem. tempfile.mkstemp(suffix=None, prefix=None, dir=None, text=False) Creates a temporary file in the most secure manner possible; unlike TemporaryFile(), the user of mkstemp() is responsible for deleting the temporary file when done with it. If suffix is not None, the file name will end with that suffix, otherwise there will be no suffix. mkstemp() does not put a dot between the file name and the suffix; if you need one, put it at the beginning of suffix. If prefix is not None, the file name will begin with that prefix; otherwise, a default prefix is used. The default is the return value of gettempprefix() or gettempprefixb(), as appropriate. If dir is not None, the file will be created in that directory; otherwise, a default directory is used. If any of suffix, prefix, and dir are not None, they must be the same type. If they are bytes, the returned name will be bytes instead of str. If you want to force a bytes return value with otherwise default behavior, pass suffix=b''. mkstemp() returns a tuple containing an OS-level handle to an open file (as would be returned by os.open()) and the absolute pathname of that file, in that order.
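The entries above refer to usage examples; as a brief illustrative sketch (standard library only), the high-level interfaces can be used as context managers, while mkstemp() requires explicit cleanup:

import os
import tempfile

# High-level interfaces clean up automatically when used as context managers.
with tempfile.TemporaryFile() as fp:
    fp.write(b"Hello world!")
    fp.seek(0)
    assert fp.read() == b"Hello world!"
# fp is closed and the file is removed here.

with tempfile.TemporaryDirectory() as tmpdir:
    path = os.path.join(tmpdir, "data.txt")
    with open(path, "w") as f:
        f.write("scratch data")
# tmpdir and everything inside it are removed here.

# mkstemp() is lower level: it returns an OS-level handle and a path,
# and the caller is responsible for closing and deleting the file.
fd, path = tempfile.mkstemp(suffix=".txt")
try:
    with os.fdopen(fd, "w") as f:
        f.write("manual cleanup required")
finally:
    os.unlink(path)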
tempfile.mkdtemp(suffix=None, prefix=None, dir=None) Creates a temporary directory in the most secure manner possible; there are no race conditions in the directory's creation. The user of mkdtemp() is responsible for deleting the temporary directory and its contents when done with it. The prefix, suffix, and dir arguments are the same as for mkstemp(). Returns the absolute pathname of the new directory. tempfile.gettempdir() Return the name of the directory used for temporary files. This defines the default value for the dir argument to all functions in this module. Python searches a standard list of directories to find one which the calling user can create files in. The list is: the directory named by the TMPDIR environment variable; the directory named by the TEMP environment variable; the directory named by the TMP environment variable; a platform-specific location (on Windows, the directories C:\TEMP, C:\TMP, \TEMP, and \TMP, in that order; on all other platforms, the directories /tmp, /var/tmp, and /usr/tmp, in that order); and, as a last resort, the current working directory. The result of this search is cached, see the description of tempdir below. tempfile.gettempdirb() Same as gettempdir() but the return value is in bytes. New in version 3.5. tempfile.gettempprefix() Return the filename prefix used to create temporary files. This does not contain the directory component. tempfile.gettempprefixb() Same as gettempprefix() but the return value is in bytes. New in version 3.5. The module uses a global variable to store the name of the directory used for temporary files returned by gettempdir(). It can be set directly to override the selection process, but this is discouraged. All functions in this module take a dir argument which can be used to specify the directory and this is the recommended approach. tempfile.tempdir When set to a value other than None, this variable defines the default value for the dir argument to the functions defined in this module. If tempdir is unset or None at any call to any of the above functions except gettempprefix() it is initialized following the algorithm described in gettempdir(). A historical way to create temporary files was to first generate a file name with the mktemp() function and then create a file using this name. Unfortunately this is not secure, because a different process may create a file with this name in the time between the call to mktemp() and the subsequent attempt to create the file by the first process. The solution is to combine the two steps and create the file immediately. This approach is used by mkstemp() and the other functions described above. tempfile.mktemp(suffix='', prefix='tmp', dir=None) Return an absolute pathname of a file that did not exist at the time the call is made. The prefix, suffix, and dir arguments are similar to those of mkstemp(), except that bytes file names, suffix=None and prefix=None are not supported. Deprecated since version 2.3: use mkstemp() instead. Use of this function may introduce a security hole in your program; if you need a named temporary file, use NamedTemporaryFile() instead, passing it the delete=False parameter:

>>> f = NamedTemporaryFile(delete=False)
>>> f.name
'/tmp/tmptjujjt'
>>> f.write(b"Hello World!\n")
13
>>> f.close()
>>> os.unlink(f.name)
>>> os.path.exists(f.name)
False

© 2001–2018 Python Software Foundation Licensed under the PSF License.
http://docs.w3cub.com/python~3.6/library/tempfile/
Use our Quick Start Training Guide to get started with the most commonly used functions in daily operations. For in-depth information and a step-by-step guide to the MyPMS system, go to the MyPMS User Manual. We have a variety of video tutorials on front desk functions. We recommend the MyPMS Front Desk Training Video for all first-time users.
https://docs.bookingcenter.com/pages/viewpage.action?pageId=4490543