Dataset schema (one record per page):
- content: string (length 0 to 557k)
- url: string (length 16 to 1.78k)
- timestamp: timestamp[ms]
- dump: string (length 9 to 15)
- segment: string (length 13 to 17)
- image_urls: string (length 2 to 55.5k)
- netloc: string (length 7 to 77)
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here. Class: Aws::KinesisVideoMedia::Client - Inherits: Seahorse::Client::Base (Object > Seahorse::Client::Base > Aws::KinesisVideoMedia::Client) - Defined in: (unknown) Overview: An API client for Amazon Kinesis Video Streams Media. To construct a client, you need to configure a :region and :credentials: kinesisvideomedia = Aws::KinesisVideoMedia::Client.new(region: region_name, credentials: credentials) Constructor: #initialize(options = {}) ⇒ Aws::KinesisVideoMedia::Client constructor. Constructs an API client. API Operations: #get_media(options = {}) ⇒ Types::GetMediaOutput: Use this API to retrieve media content from a Kinesis video stream. Instance Method Details: #get_media(options = {}) ⇒ Types::GetMediaOutput. You must first call the GetDataEndpoint API to get an endpoint, then send the GetMedia requests to this endpoint using the --endpoint-url parameter. Limits: a client can call GetMedia up to five times per second per stream. Kinesis Video Streams sends media data at a rate of up to 25 megabytes per second (or 200 megabits per second) during a GetMedia session. If an error is thrown after invoking a Kinesis Video Streams media API:
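The page above documents the Ruby SDK; purely for illustration, a rough Python equivalent of the same two-step flow (call GetDataEndpoint first, then send GetMedia to that endpoint) can be sketched with boto3. This is an assumption-laden sketch: boto3 must be installed and credentials configured, and the region and stream name below are placeholders, not values from the page.

    # Hypothetical sketch: the GetDataEndpoint -> GetMedia flow shown with boto3.
    import boto3

    REGION = "us-west-2"         # assumed region
    STREAM_NAME = "my-stream"    # assumed stream name

    # Step 1: ask Kinesis Video Streams for the endpoint that serves GetMedia.
    kv = boto3.client("kinesisvideo", region_name=REGION)
    endpoint = kv.get_data_endpoint(StreamName=STREAM_NAME, APIName="GET_MEDIA")["DataEndpoint"]

    # Step 2: send GetMedia to that endpoint and read part of the chunked payload.
    media = boto3.client("kinesis-video-media", endpoint_url=endpoint, region_name=REGION)
    resp = media.get_media(
        StreamName=STREAM_NAME,
        StartSelector={"StartSelectorType": "NOW"},
    )
    chunk = resp["Payload"].read(1024)  # resp["Payload"] is a streaming body
    print(resp["ContentType"], len(chunk))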
https://docs.amazonaws.cn/sdk-for-ruby/v2/api/Aws/KinesisVideoMedia/Client.html
2021-05-06T08:58:33
CC-MAIN-2021-21
1620243988753.91
[]
docs.amazonaws.cn
_appthemes_fix_contextual_help_issue() - Function. Source: framework/load.php:245
Method: Get the attachments for a given object. Source: framework/media-manager/media-manager.php:718
Method: Unassigns or deletes any previous attachments that are not present on the current attachment enqueue list. Source: framework/media-manager/media-manager.php:738
Method: Attempts to fetch an oEmbed object with metadata for a provided URL using oEmbed. Source: framework/media-manager/media-manager.php:781
Function: Fires for each custom column in the Media list table. Source: framework/media-manager/functions.php:637
Method: Ajax callback to retrieve the db options for a specific media manager ID. Source: framework/media-manager/media-manager.php:798
Class. Source: framework/media-manager/media-manager.php:23
Method: Delete any stored transients when the media manager UI is closed. Source: framework/media-manager/media-manager.php:823
Method. Source: framework/media-manager/media-manager.php:30
Method: Assign a meta key containing the media manager parent ID and a default attach type to each new media attachment added through the media manager. Source: framework/media-manager/media-manager.php:832
https://docs.arthemes.org/taskerr/reference/package/framework/
2021-05-06T10:36:02
CC-MAIN-2021-21
1620243988753.91
[]
docs.arthemes.org
Release Version 1.56 (09-Feb-2021) Note: The content on this site may have changed or moved since you last viewed it. Always access the latest content via the Hevo Docs website. Account Management Billing Notifications - Added an option for users to enable/disable the daily e-mail notifications about the activity in their Hevo account. Read Billing Notifications. Business Plans - Enhanced the Business Plan offering into a customized plan based on the customers’ requirements. Read Pricing Plans and Setting up Pricing Plans, Billing, and Payments. Pay Now - Provided an option on the UI to clear any outstanding dues that were not automatically paid through the configured payment method. Read Viewing Billing History. Destinations Google BigQuery - Enabled the automatic promotion of data type of the Destination table column to accommodate Source data of different data types for tables smaller than 50GB. Read Handling Different Data Types in Source Data. Hevo Support Live Chat - Modified the Live Chat support for users on Basic plan to a complementary 30-day feature. Read Hevo Support. Sources ElasticSearch - Added support for ElasticSearch as a Source. Read ElasticSearch. Kafka User Interface Step-by-Step Guide - Enhanced the UI for database Sources to provide step-by-step guidance for Pipeline creation. Documentation Updates The following pages have been created, enhanced, or removed in Release 1.56: About Hevo Billing - Billing Notifications (New) - Purchasing Additional Sources and Events Setting up Pricing Plans, Billing, and Payments - Destinations Events Schema Mapper - Mapping a Source Event Type Field with a Destination Table Column - Resolving Incompatible Schema Mappings Sources ElasticSearch (New) - Last updated on 04 Mar 2021
https://docs.hevodata.com/release-notes/v1.56/
2021-05-06T10:06:55
CC-MAIN-2021-21
1620243988753.91
[]
docs.hevodata.com
Transformation Support on the Spark Engine
Some restrictions and guidelines apply to processing transformations on the Spark engine. The following list describes rules and guidelines for the transformations that are supported on the Spark engine; transformations not listed here are not supported.
Aggregator: Mapping validation fails in the following situations: the transformation contains stateful variable ports, or the transformation contains unsupported functions in an expression. When a mapping contains an Aggregator transformation with an input/output port that is not a group-by port, the transformation might not return the last row of each group with the result of the aggregation. Hadoop execution is distributed, and thus it might not be able to determine the actual last row of each group.
Expression: Mapping validation fails in the following situations: the transformation contains stateful variable ports, or the transformation contains unsupported functions in an expression. If an expression results in numerical errors, such as division by zero or SQRT of a negative number, it returns an infinite or a NaN value. In the native environment, the expression returns null values and the rows do not appear in the output.
Filter: Supported without restrictions.
Java: You must copy external .jar files that a Java transformation requires to the Informatica installation directory on the Hadoop cluster at the following location: [$HADOOP_NODE_INFA_HOME]/services/shared/jars. To run user code directly on the Spark engine, the JDK version that the Data Integration Service uses must be compatible with the JRE version on the cluster. For best performance, create the environment variable DIS_JDK_HOME on the Data Integration Service in the Administrator tool. The environment variable contains the path to the JDK installation folder on the machine running the Data Integration Service, for example /usr/java/default. The Partitionable property must be enabled in the Java transformation; the transformation cannot run in one partition. For date/time values, the Spark engine supports a precision of up to microseconds; if a date/time value contains nanoseconds, the trailing digits are truncated. When you enable high precision and the Java transformation contains a field that is a decimal data type, a validation error occurs. The following restrictions apply to the Transformation Scope property: the value Transaction for transformation scope is not valid; if you enable an input port for partition key, the transformation scope must be set to All Input; Stateless must be enabled if the transformation scope is Row. The Java code in the transformation cannot write output to standard output when you push transformation logic to Hadoop; the Java code can write output to standard error, which appears in the log files.
Joiner: Mapping validation fails in the following situations: case sensitivity is disabled, or the join condition in the Joiner transformation contains a binary data type or binary expressions.
Lookup: Mapping validation fails in the following situations: case sensitivity is disabled; the lookup condition in the Lookup transformation contains a binary data type; the transformation is not configured to return all rows that match the condition; the lookup is a data object; or the cache is configured to be shared, named, persistent, dynamic, or uncached (the cache must be a static cache). The mapping fails if the transformation is unconnected. When you use Sqoop and look up data in a Hive table based on a column of the float data type, the Lookup transformation might return incorrect results.
Router: Supported without restrictions.
Sorter: Mapping validation fails if case sensitivity is disabled. The Data Integration Service logs a warning and ignores the Sorter transformation in the following situations: there is a type mismatch between the target and the Sorter transformation sort keys; the transformation contains sort keys that are not connected to the target; the Write transformation is not configured to maintain row order; or the transformation is not directly upstream from the Write transformation. The Data Integration Service treats null values as high even if you configure the transformation to treat null values as low.
Union: Supported without restrictions.
Transformations not listed above are not supported.
https://docs.informatica.com/data-engineering/data-engineering-integration/10-1-1-hotfix-1/_user-guide_big-data-management_10-1-1-hotfix-1_ditamap/mapping_objects_in_the_hadoop_environment/transformations_in_a_hadoop_environment/transformation_support_on_the_spark_engine.html
2021-05-06T09:59:01
CC-MAIN-2021-21
1620243988753.91
[]
docs.informatica.com
An image can be split into multiple image files. If a disk doesn't have enough space to contain a full image of your selected drives, you will automatically be requested to enter another target location. It is also possible to manually set the maximum size of an image file before creating the image. Under Drive imaging/Imaging options/File size/Split image into smaller files with fixed sizes, you can specify the size of individual image files.
https://docs.oo-software.com/en/oodiskimage-9/settings-for-drive-imaging-2-2/specify-the-size-of-an-image-file-2-2
2021-05-06T10:20:14
CC-MAIN-2021-21
1620243988753.91
[]
docs.oo-software.com
The Xojo programming language feels familiar to programmers who have used other languages such as Visual Basic and Java because it uses a similar object-oriented programming model, with similar data types and constructs.
http://docs.xojo.com/index.php?title=Programming_the_Raspberry_Pi_with_Xojo&printable=yes
2021-05-06T10:33:59
CC-MAIN-2021-21
1620243988753.91
[]
docs.xojo.com
An enormous thank you to Lina Srivastava for her talk, Making Your Media Matter: Narrative Strategy and Social Impact, held at Watershed on 5 June. Lina took the audience on a journey through a number of case studies of media projects she has worked on that have had incredible impact in their communities – local or issue-based – and presented a number of concrete take-aways for anyone working in the media impact space. Perhaps encapsulating her company CIEL‘s raison d’etre, she said, Creative media and cultural expression have the power to contextualise human need and experience and to catalyse change. It’s often stated vaguely that the way to connect with an audience is through story, but Lina teased this idea further apart, looking at: + How to Tell a Story Together + How to Tell a Story of Complexity + How to Tell a Story Based on Lived Experience You want tool kits? Lina has toolkits. Importantly, though, they were all deeply context specific – from a project using Augmented Reality to spark interest from kids, to a project that had almost no tech at all as it wasn’t suited to the community. See below for links to some of the resources that she and her company CIEL have developed. Finally, Lina talked about her latest work on Transformational Change Leadership – using a systems approach to developing more nuanced leadership styles and methods within communities aiming for positive change. Resources: 1) Narrative Design for Social Impact Canvas 2) Who Is Dayani Cristal? On US/Mexico cross-border migration. 3) Who Is Dayani Cristal? Impact report 4) WelcomeALL Toolkit. On building a welcoming community culture for migrants. 5) Media Interventions and the Syrian Crisis 6) Memria.org. Allowing partners and clients to capture audio feedback stories from participants. 7) Video for Change Impact Toolkit 8) Transformational Change Leadership 9) Transformational Change Leadership: discussion guide Original post: In partnership with Digital Cultures Research Centre – Creative Economy Unit and Pervasive Media Studio, i-Docs is thrilled to be hosting a talk this week by Lina Srivastava, founder of CIEL – Creative Impact and Experience Lab. Using case studies from her own work, Lina will discuss strategies to create social impact with documentary stories and immersive media. The talk will focus on the use of strategic planning and design tools for production, outreach, and distribution for impact in the areas of human rights and international development. Lina Srivastava is the founder of CIEL | Creative Impact and Experience Lab, a social innovation strategy group in New York City. She is an American Film Fulbright Specialist, and on faculty for the MFA Design and Social Innovation at the School of Visual Arts. The former Executive Director of Kids with Cameras, and of the Association of Video and Filmmakers, Lina has worked on strategic project design with organizations such as UNICEF, UNESCO, the World Bank and on social engagement campaigns for award-winning documentaries, including Oscar-winning Born into Brothels, Oscar-winning Inocente, and Sundance-award winning Who Is Dayani Cristal? TALK DETAILS: 1-2pm, Wednesday 5 June, 2019 Watershed Cinema 2 1 Canon’s Rd, Bristol, BS1 5TX Tickets are free but essential – book your seat here.
http://i-docs.org/making-your-media-matter-narrative-strategy-and-social-impact-with-lina-srivastava/
2021-05-06T10:38:47
CC-MAIN-2021-21
1620243988753.91
[]
i-docs.org
Note: The RAET transport is in very early development; it is functional, but no promises are yet made as to its reliability or security. As for reliability and security, the encryption used has been audited and our tests show that RAET is reliable. With this said, we are still conducting more security audits and pushing the reliability. This document outlines the encryption used in RAET. New in version 2014.7.0. The Reliable Asynchronous Event Transport, or RAET, is an alternative transport medium developed specifically with Salt in mind. It has been developed to allow queuing to happen at the application layer and comes with socket layer encryption. It also abstracts a great deal of control over the socket layer and makes it easy to bubble up errors and exceptions. RAET also offers very powerful message routing capabilities, allowing for messages to be routed between processes on a single machine all the way up to processes on multiple machines. Messages can also be restricted, allowing processes to be sent messages of specific types from specific sources, allowing for trust to be established. Using RAET in Salt is easy; the main difference is that the core dependencies change. Instead of needing pycrypto, M2Crypto, ZeroMQ, and PYZMQ, the packages libsodium, libnacl, ioflo, and raet are required. Encryption is handled very cleanly by libnacl, while the queueing and flow control is handled by ioflo. Distribution packages are forthcoming, but libsodium can be easily installed from source, or many distributions do ship packages for it. The libnacl and ioflo packages can be easily installed from PyPI; distribution packages are in the works. Once the new deps are installed, the 2014.7 release or higher of Salt needs to be installed. Once installed, modify the configuration files for the minion and master to set the transport to raet: /etc/salt/master: transport: raet /etc/salt/minion: transport: raet Now start Salt as it would normally be started; the minion will connect to the master and share long term keys, which can then in turn be managed via salt-key. Remote execution and Salt states will function in the same way as with Salt over ZeroMQ. The 2014.7 release of RAET is not complete! The Syndic and Multi Master have not been completed yet and these are slated for completion in the 2015.5.0 release. Also, Salt-Raet allows for more control over the client, but these hooks have not been implemented yet; therefore the client still uses the same system as the ZeroMQ client. This means that the extra reliability that RAET exposes has not yet been implemented in the CLI client. Why make an alternative transport for Salt? There are many reasons, but the primary motivation came from customer requests: many large companies came with requests to run Salt over an alternative transport, and the reasoning was varied, from performance and scaling improvements to licensing concerns. These customers have partnered with SaltStack to make RAET a reality. RAET has been designed to allow Salt to have greater communication capabilities. It has been designed to allow for development into features which our ZeroMQ topologies can't match. Many of the proposed features are still under development and will be announced as they enter proof of concept phases, but these features include salt-fuse - a filesystem over salt, salt-vt - a parallel API-driven shell over the salt transport, and many others. RAET is reliable, hence the name (Reliable Asynchronous Event Transport).
The concern posed by some over RAET reliability is based on the fact that RAET uses UDP instead of TCP, and UDP does not have built-in reliability. RAET itself implements the needed reliability layers that are not natively present in UDP; this allows RAET to dynamically optimize packet delivery in a way that keeps it both reliable and asynchronous. When using RAET, ZeroMQ is not required: RAET is a complete networking replacement. It is noteworthy that RAET is not a ZeroMQ replacement in a general sense; the ZeroMQ constructs are not reproduced in RAET, but they are instead implemented in such a way that is specific to Salt's needs. RAET is primarily an async communication layer over truly async connections, defaulting to UDP. ZeroMQ is over TCP and abstracts async constructs within the socket layer. Salt is not dropping ZeroMQ support and has no immediate plans to do so. RAET uses Dan Bernstein's NaCl encryption libraries and the CurveCP handshake. The libnacl Python binding binds to both libsodium and tweetnacl to execute the underlying cryptography. This allows us to completely rely on an externally developed cryptography system. For more information, see the libsodium and CurveCP project documentation.
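The libnacl package mentioned above is a thin binding over libsodium. The following is a minimal sketch, assuming only that libnacl is installed, of the kind of authenticated secret-box round trip it provides; it illustrates the primitive RAET builds on, not RAET's own message format.

    # Minimal libnacl sketch: authenticated symmetric encryption as exposed by libsodium.
    # Illustration only; this is not RAET's wire protocol.
    import libnacl.secret

    box = libnacl.secret.SecretBox()          # generates a random symmetric key
    ciphertext = box.encrypt(b"salt event")   # nonce + MAC + ciphertext
    plaintext = box.decrypt(ciphertext)
    assert plaintext == b"salt event"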
https://ansible-cn.readthedocs.io/en/latest/topics/transports/raet/index.html
2021-05-06T09:29:50
CC-MAIN-2021-21
1620243988753.91
[]
ansible-cn.readthedocs.io
- Can I use a Passthrough mapping to move data from one data type to another? No. Data types in a mapping MUST always match. You will notice, for example, that when you select an Integer or "int", the selection for the target automatically filters to just "int" targets.
- What do I do if I need to change a data type, for example change "int" to "string"? Go to the Fields screen and find the field value you would like to change. Clicking on it reveals a pencil button; click the pencil button and change the data type to the one you desire.
- How do I delete a Passthrough mapping? Simply hover over the mapping; an icon will appear on the mapping. Click "Delete Mapping" to completely delete the mapping.
- What happens if I accidentally delete a Passthrough mapping? Don't worry. Passthrough mappings are easy and quick to create.
- Is there any difference between doing Passthrough mappings from the Mapping or Fields screen? No.
Passthrough Mapping FAQs. Updated Jul 17, 2019
https://docs.broadpeakpartners.com/a/1111978-passthrough-mapping-faqs
2021-05-06T10:24:02
CC-MAIN-2021-21
1620243988753.91
[]
docs.broadpeakpartners.com
OESortOverlayScores¶ void OESortOverlayScores(OESystem::OEIter<OEBestOverlayScore> &dst, OESystem::OEIter<OEBestOverlayResults> &scores, const OESystem::OEBinaryPredicate<OEBestOverlayScore, OEBestOverlayScore> &sorter, int nbest=1, bool conforder=false) Sort the scores from multiple sets of OEBestOverlayResults into a single iterator of OEBestOverlayScore. The sorting function can be one of the pre-defined functors or a user-defined version. The nbest default value of 1 implies that only the best overlay for each ref-fit conformer pair is returned. A value greater than one means that multiple overlays for each ref-fit conformer pair will be kept. Setting conforder to true forces the results to come out in conformer index order. If nbest is greater than 1 and conforder is true, then for each ref-fit conformer pair the scores will be sorted by the sorting function, but each set of ref-fit results will come out together. Several of the OEBestOverlay examples show variations of these values. Perhaps the easiest way to understand them is to modify one of the examples and observe the change in the number and order of the scores.
https://docs.eyesopen.com/toolkits/csharp/shapetk/OEShapeFunctions/OESortOverlayScores.html
2021-05-06T09:47:43
CC-MAIN-2021-21
1620243988753.91
[]
docs.eyesopen.com
With APM's Ruby agent, you can monitor applications that reside in the Google App Engine (GAE) flexible environment. Adding New Relic to your GAE flex app gives you insight into the health and performance of your app and extends GAE with metrics you can view using Full-Stack Observability options like APM and browser monitoring. This document explains how to add New Relic to your GAE flex app using either of these methods: - Google App Engine's "native mode" installation with a standard GAE runtime - Docker installation using a custom runtime The custom runtime method includes an example of deploying a Sinatra app. If you need specific libraries or headers, New Relic recommends using the custom runtime method. Deploy using GAE's native support When using Google App Engine "native mode" installation, you provide your app code and an app.yaml file. Google App Engine then deploys to a standard prebuilt Docker image. To deploy with native support for Sinatra or Rails: - Follow New Relic's standard procedures to install the gem, including your license key. - Install the Ruby agent configuration file. Once the gem and configuration file have been installed, the Ruby agent can automatically monitor applications that reside in the GAE flexible environment. Wait until the deployment completes, then view your GAE flex app data in the APM Summary page. Build a custom runtime using Docker Tip: If your Ruby app needs specific libraries or headers, New Relic recommends using the custom runtime method. In addition, New Relic recommends that you allow Google App Engine to handle health checks. See Google's documentation for building custom runtimes. This example describes how to add New Relic to your GAE flex app by building a custom runtime for Docker. The example uses a Sinatra app for Ruby. For more information about deploying and configuring your Ruby app in the GAE flexible environment, see: - Google App Engine's documentation for Ruby - Google App Engine's tutorials for Ruby Recommendation: Handle health checks. New Relic recommends that you allow health checks for Ruby apps so that Google can check that your service is up and balanced properly. However, if excessive health checks cause congested transaction traces, you can set the Ruby agent to ignore the health check requests. - To handle health checks, add a route for _ah/health in your app. - To ignore health check requests, set the rules.ignore_url_regexes config setting in the application's Ruby agent config to include '_ah/health'. Get New Relic agent troubleshooting logs from GAE Use these resources to troubleshoot your GAE flex environment app: To connect to the GAE instance and start a shell in the Docker container running your code, see Debugging an instance. To redirect New Relic Ruby agent logs to Stackdriver in the Cloud Platform Console, change the newrelic.yml configuration file to: log_file_name: STDOUT To view the logs, use the Cloud Platform Console's Log Viewer.
https://docs.newrelic.com/jp/docs/agents/ruby-agent/installation/install-new-relic-ruby-agent-gae-flexible-environment/?q=
2021-05-06T10:40:10
CC-MAIN-2021-21
1620243988753.91
[]
docs.newrelic.com
This document describes: The encryption schemas used to encrypt personal data and data shared with a team. Termius encrypts SSH server and telnet configs, snippets, meta info like tags and labels, credentials for SSH and telnet authentication, i.e. SSH keys, username, and password. Termius uses the username and password authentication. After successful authentication, the app uses the password as a key for data encryption as described later in this article. For authenticating in the Termius cloud, the app communicates with the server in a way that prevents sending a password or password hash over the network using a modified SRP6a protocol. The authentication process looks like this: To complete authentication, the client and the Termius cloud must prove that each party has the same key: The client gets a random piece of data from the Termius cloud, salt for Argon2id password hash and User Identifier. The client sends its random data and client proof. The client gets a server proof, an encrypted API Key and a salt. The client validates the server proof and decrypts the API Key. For authenticating in the Termius cloud, the app calculates the SHA256 of the password and sends it using HTTPS-protected REST API. The Termius cloud calculates PBKDF2 hash using the default Django implementation. Termius uses hybrid (new) encryption for personal data that has been migrated to this type of encryption, and symmetric (old) encryption for other data. The app generates a key pair for the user and syncs the private key encrypted by the user password. Using the key pair, the app generates a personal encryption key for the user. The app uses the personal encryption key to encrypt personal data. Personal data that hasn't been migrated to the hybrid encryption is encrypted with the RNCryptor library. Implementations: iOS: open source one. Android: custom one. Desktop: custom one. RNCryptor uses PBKDF2 with 10,000 rounds, encrypt-then-mac HMAC, and AES256-CBC with a random IV. For all data, Termius uses a single encryption key and HMAC key, both derived from the password used for authentication. Salts are generated on the sync server. The encryption and HMAC keys are stored on the device, namely in: Android: shared preferences, encrypted by a key stored in Android KeyStore. Desktop: Electron IndexedDB encrypted by a key stored in OS KeyChain when KeyChain is available and in localStorage as a fallback. Termius uses hybrid encryption for team shared data: Each team member and the team admin has a key pair used for personal data encryption. The team admin generates a team encryption key. The team admin exchanges the team encryption key with each team member by encrypting the key using a team member's public key and utilizes the team admin’s private key for creating a MAC. A team member decrypts the exchanged team encryption key with the private key and uses the team admin’s public key to verify the MAC. The encryption of shared data and personal data uses a common schema because: It allows implementing an option to restore an account when the password is lost. It prevents re-encryption of the whole database on password change. The Termius team's experience shows that this is an error-prone action. Termius uses Libsodium for encrypting team shared data and personal data that has been migrated to new encryption. Termius uses the 1.0.17 version of Libsodium and custom C++ binding for iOS, Android, and Desktop applications. 
Termius uses the following APIs in Libsodium: For public-key encryption: crypto_box_keypair, crypto_box_easy and crypto_box_open_easy – it uses X25519 key exchange, the XSalsa20 stream cipher, and Poly1305 MAC. For secret-key encryption: crypto_secretbox_keygen, crypto_secretbox_easy, crypto_secretbox_open_easy – it uses the XSalsa20 stream cipher and Poly1305 MAC. For password hashing: crypto_pwhash with options: OPSLIMIT_INTERACTIVE, MEMLIMIT_INTERACTIVE, and ARGON2ID13. For generating a nonce: randombytes_buf. Termius uses the SRP implementation from Botan and gRPC over TLS as a transport for the SRP protocol. Termius uses the 2.14.0 version of Botan and custom C++ bindings for the iOS, Android, and Desktop applications. The encryption key and key pair are stored on the devices, namely in: Android: shared preferences encrypted by a key stored in Android KeyStore. Desktop: Electron IndexedDB encrypted by a key stored in OS Keychain when Keychain is available and in IndexedDB as a fallback. Please email us at [email protected].
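The hybrid scheme described above (a per-user key pair, a symmetric team key exchanged via public-key encryption, and secret-key encryption for the data itself) can be illustrated with PyNaCl, a Python binding to libsodium. This is only an illustrative sketch under that assumption: Termius itself uses custom C++ Libsodium bindings, and the key names below are made up.

    # Illustrative sketch with PyNaCl (a libsodium binding), not Termius's actual code.
    import nacl.utils
    from nacl.public import PrivateKey, Box   # crypto_box_*: X25519 + XSalsa20-Poly1305
    from nacl.secret import SecretBox         # crypto_secretbox_*: XSalsa20-Poly1305

    admin_key = PrivateKey.generate()         # team admin key pair (hypothetical names)
    member_key = PrivateKey.generate()        # team member key pair

    # Admin generates a team encryption key and shares it with the member,
    # encrypted to the member's public key and authenticated with the admin's private key.
    team_key = nacl.utils.random(SecretBox.KEY_SIZE)
    wrapped = Box(admin_key, member_key.public_key).encrypt(team_key)

    # Member unwraps the team key and both sides use it for shared data.
    unwrapped = Box(member_key, admin_key.public_key).decrypt(wrapped)
    shared_blob = SecretBox(unwrapped).encrypt(b"host: example.com")
    assert SecretBox(team_key).decrypt(shared_blob) == b"host: example.com"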
https://docs.termius.com/termius-handbook/synchronization/synchronization-security-overview
2021-05-06T08:39:39
CC-MAIN-2021-21
1620243988753.91
[]
docs.termius.com
Developing and Contributing¶ Working on Eskapade¶ You have some cool feature and/or algorithm you want to add to Eskapade. How do you go about it? First clone Eskapade: git clone eskapade then pip install -e eskapade This will install Eskapade in editable mode, which will allow you to edit the code and run it as you would with a normal installation of Eskapade. To make sure that everything works, try executing Eskapade without any arguments, e.g. eskapade_run or you could just execute the tests using either the Eskapade test runner, e.g. cd eskapade eskapade_trial . or cd eskapade python setup.py test That's it. Contributing¶ When contributing to this repository, please first discuss the change you wish to make via issue, email, or any other method with the owners of this repository before making a change. You can find the contact information on the index page. Note that when contributing, all tests should succeed.
https://eskapade.readthedocs.io/en/latest/developing.html
2021-05-06T09:50:56
CC-MAIN-2021-21
1620243988753.91
[]
eskapade.readthedocs.io
Add a language to your smartphone You can use the BlackBerry Desktop Software to add a typing input or display language to your BlackBerry smartphone. To download the BlackBerry Desktop Software, from your computer, visit the BlackBerry Desktop Software download page and select the appropriate option for your computer. Connect your smartphone to your computer and open the BlackBerry Desktop Software. For more information about adding typing input and display languages, see the Help in the BlackBerry Desktop Software.
http://docs.blackberry.com/en/smartphone_users/deliverables/38289/1783677.jsp
2014-10-20T10:16:20
CC-MAIN-2014-42
1413507442420.22
[]
docs.blackberry.com
Run 'scons' and watch it compile the sources. cygwinCompile controls the behaviour. If you need to explicitly set whether the Cygwin path code is included in the build, then you can set an option on the command line: scons cygwinCompile=... Let us know how it goes.
http://docs.codehaus.org/pages/viewpage.action?pageId=133922933
2014-10-20T09:48:46
CC-MAIN-2014-42
1413507442420.22
[]
docs.codehaus.org
Installation¶ Pyjnius depends on Cython and Java. Installation on the Desktop¶ You need the Java JDK and JRE installed (openjdk will do), and Cython. Then, just type: sudo python setup.py install If you want to compile the extension within the directory for any development, just type: make You can run the test suite to make sure everything is running right: make tests
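Once installed, a quick way to confirm that the Python-to-Java bridge works is a small smoke test. This is a sketch assuming a JVM is available on the machine; the classes used are just arbitrary stock JDK classes.

    # Minimal Pyjnius smoke test: load stock JDK classes and call them.
    from jnius import autoclass

    System = autoclass('java.lang.System')
    Stack = autoclass('java.util.Stack')

    stack = Stack()
    stack.push('hello')
    stack.push(System.getProperty('java.version'))
    print(stack.pop(), stack.pop())   # prints the Java version, then 'hello'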
http://pyjnius.readthedocs.org/en/latest/installation.html
2014-10-20T09:38:49
CC-MAIN-2014-42
1413507442420.22
[]
pyjnius.readthedocs.org
With the DependsOn attribute you can specify that the creation of a specific resource follows another. When you add a DependsOn attribute to a resource, that resource is created only after the creation of the resource specified in the DependsOn attribute. You can use DependsOn to explicitly specify dependencies, which overrides the default parallelism and directs CloudFormation to operate on those resources in a specified order. The DependsOn attribute can take a single string or list of strings: "DependsOn" : [ String, ... ] The following template contains an AWS::EC2::Instance resource with a DependsOn attribute that specifies myDB, an AWS::RDS::DBInstance. When AWS CloudFormation creates this stack, it first creates myDB, then creates Ec2Instance.
    {
        "Resources" : {
            "Ec2Instance" : {
                "Type" : "AWS::EC2::Instance",
                "Properties" : {
                    "ImageId" : { "Fn::FindInMap" : [ "RegionMap", { "Ref" : "AWS::Region" }, "AMI" ] }
                },
                "DependsOn" : "myDB"
            },
            "myDB" : {
                "Type" : "AWS::RDS::DBInstance",
                "Properties" : {
                    "AllocatedStorage" : "5",
                    "DBInstanceClass" : "db.m1.small",
                    "Engine" : "MySQL",
                    "EngineVersion" : "5.5",
                    "MasterUsername" : "MyName",
                    "MasterUserPassword" : "MyPassword"
                }
            }
        }
    }
When a template includes a VPC-gateway attachment, the following resources depend on it: Auto Scaling groups, Amazon EC2 instances, Elastic Load Balancing load balancers, and Elastic IP addresses. A VPN gateway route propagation depends on a VPC-gateway attachment when you have a VPN gateway. The following snippet shows a sample gateway attachment and an Amazon EC2 instance that depends on a gateway attachment:
    "GatewayToInternet" : {
        "Type" : "AWS::EC2::VPCGatewayAttachment",
        "Properties" : {
            "VpcId" : { "Ref" : "VPC" },
            "InternetGatewayId" : { "Ref" : "InternetGateway" }
        }
    },
    "EC2Host" : {
        "Type" : "AWS::EC2::Instance",
        "DependsOn" : "GatewayToInternet",
        "Properties" : {
            "InstanceType" : { "Ref" : "EC2InstanceType" },
            "KeyName" : { "Ref" : "KeyName" },
            "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
                { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "EC2InstanceType" }, "Arch" ] } ] },
            "NetworkInterfaces" : [{
                "GroupSet" : [{ "Ref" : "EC2SecurityGroup" }],
                "AssociatePublicIpAddress" : "true",
                "DeviceIndex" : "0",
                "DeleteOnTermination" : "true",
                "SubnetId" : { "Ref" : "PublicSubnet" }
            }]
        }
    }
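For illustration only, a template like the DependsOn example above could be deployed from Python with boto3. This is an assumption, not part of the page: the stack name, region, and file path below are placeholders.

    # Hypothetical deployment sketch using boto3; template.json is assumed to hold
    # a template like the DependsOn example above.
    import boto3

    with open("template.json") as fh:   # placeholder path
        template_body = fh.read()

    cfn = boto3.client("cloudformation", region_name="us-east-1")  # assumed region
    cfn.validate_template(TemplateBody=template_body)
    cfn.create_stack(StackName="dependson-demo", TemplateBody=template_body)

    # CloudFormation creates myDB first and only then Ec2Instance,
    # because of the DependsOn attribute in the template.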
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-dependson.html
2014-10-20T09:40:51
CC-MAIN-2014-42
1413507442420.22
[]
docs.aws.amazon.com
Some email folders appear dimmed If you can't forward email from an email folder, the check box beside the folder appears dimmed. Try the following actions: - Wait for a few seconds. The email folders might become available after a short period of time. - Turn off wireless email reconciliation, and then turn it on again.
http://docs.blackberry.com/en/smartphone_users/deliverables/38289/1489037.jsp
2014-10-20T10:14:25
CC-MAIN-2014-42
1413507442420.22
[]
docs.blackberry.com
Quick Reference Guide BlackBerry Theme Builder overview You can create themes to personalize the screens on the BlackBerry® device. For example, you can change the background image, change the color of the font, change the applications that appear on the screens, set an application to open a webpage, add a ring tone, and create screen transitions. You can use sample images that are included in the BlackBerry Theme Builder or you can create your own images and use them in the BlackBerry Theme Builder. You can create a theme for your device or you can make themes available online (for example, on the BlackBerry App World™ storefront) so that BlackBerry device users can download them. Users can use the themes that are provided on the device or they can download themes from online retailers. You can install a BlackBerry Smartphone Simulator to test themes on your desktop before you distribute them to devices, or you can install the themes directly on a device.
http://docs.blackberry.com/fr-fr/developers/deliverables/21087/Plazmic_Theme_Builder_overview_568241_11.jsp
2014-10-20T10:08:13
CC-MAIN-2014-42
1413507442420.22
[]
docs.blackberry.com
Used to show all of the published Contacts in a given Category. Note that Contact Categories are separate from Article Categories. Contacts and Contact Categories are entered by selecting Components/Contacts. See Contact Category Manager and Contact Manager for more information. To create a new Category List Menu Item: To edit an existing Category List Menu Item, click its Title in Menu Manager: Menu Items. Used to show contacts:
http://docs.joomla.org/index.php?title=Help25:Menus_Menu_Item_Contact_Category&diff=100140&oldid=70774
2014-10-20T09:59:20
CC-MAIN-2014-42
1413507442420.22
[]
docs.joomla.org
Icinga Icinga is used to monitor alerts that we have set up. It can be a bit hard to navigate but there are only a few views you need to know about (listed in the left-hand navigation): - Unhandled Services - Alert History Alternatively, you can pull these into one local dashboard by setting up Nagstamon. Scanning alerts - If an alert is red or shows the critical icon, it is critical - If an alert is yellow or shows the warning icon, it is a warning - If an alert is green or shows the recovery icon, it has recently recovered - If an alert is purple or shows the unknown icon, Icinga cannot retrieve data for it - If an alert shows the flapping icon, the alert is coming on and off or 'flapping' External URLs A service may have two additional URLs associated with it which will assist in investigating alerts. These are included in the Icinga interface with these icons: - Action URL typically links to a graph. If the check uses Graphite for its source of data then the graph will also include the warning and critical threshold bands. - Notes URL links to a page in this manual describing why a given check exists and/or how to go about resolving it. They will appear next to the service name in the service overview page or on the top-right of the page when viewing a specific service. Digging deeper If you want to dig a little deeper into the history of a specific alert, click on it in the "Unhandled Services" view. In the top left of the main window there are a few links. "View Alert Histogram For This Service" and "View Trends For This Service" are particularly useful.
https://docs.publishing.service.gov.uk/manual/icinga.html
2020-11-23T22:10:11
CC-MAIN-2020-50
1606141168074.3
[]
docs.publishing.service.gov.uk
Exploitable Vulnerabilities You can connect InsightVM or Nexpose, Rapid7's vulnerability management solutions, with InsightIDR to see all the exploitable vulnerabilities found in your environment. InsightIDR applies user context to vulnerabilities, showing you which users may be "clickbait." How to View Exploitable Vulnerabilities On the Assets & Endpoints page, you will see a card on the right that displays the top Exploitable Vulnerabilities, along with the number of assets affected. At the bottom of the card, you can click More to see the top 100 vulnerabilities. You can also click on the Exploitable Vulnerabilities metric on the "Assets & Endpoints" page to see a complete list. The Top 100 Vulnerabilities view displays information about the exact title, the threat source count, the type of vulnerability, and the number of users and assets affected. You can sort these columns by clicking on them. Asset Vulnerability Details When you click on the title of the vulnerability, you will see a detailed page about it. It will display additional information about the type, name, source, and description of the vulnerability. The "Users" table then displays the name of the user, their department, and their affected asset. You have the option of restricting their asset by clicking on the Target icon. How to Connect InsightVM or Nexpose Please see Nexpose/InsightVM Integration for instructions.
https://docs.rapid7.com/insightidr/exploitable-vulnerabilities/
2020-11-23T22:52:36
CC-MAIN-2020-50
1606141168074.3
[array(['/areas/docs/_repos//product-documentation__master/7a5dab79b82f82c703654fbf2200657d61e31590/insightidr/images/Screen Shot 2018-08-29 at 12.29.37 PM.png', None], dtype=object) ]
docs.rapid7.com
Buttplug Intimate Sex Toy Control Library Welcome to the Buttplug Intimate Sex Toy Control Library. If you're here, we're assuming you know why you're here and will dispense with the "this is what this library is" stuff. If you don't know why you're here, check out our main website or our github repo for more introductory information. Requirements buttplug-rs uses async/await heavily, and requires a minimum of Rust 1.39. While we use async-std internally, buttplug-rs should work with any runtime. Currently Implemented Capabilities The library currently contains a complete implementation of the Buttplug Client, which allows connecting to Buttplug Servers (currently written in C# and JS), then enumerating and controlling devices after successful connection. There are also connectors included for connecting to servers via Websockets. Examples Code examples are available in the github repo. The Buttplug Developer Guide may also be useful, though it does not currently have Rust examples. Attributes The following attributes are available. Default attributes are client-ws and server. Plans for the Future The next 2 goals are: - Creating an FFI layer so that we can build other language libraries on top of this implementation. - Writing the server portion in Rust. These will be happening simultaneously after the v0.0.2 release. Contributing Right now, we mostly need code/API style reviews and feedback. We don't really have any good bite-sized chunks to apportion out on the implementation yet, but once we do, those will be marked "Help Wanted" in our github issues.
https://docs.rs/crate/buttplug/0.0.2
2020-11-23T22:52:36
CC-MAIN-2020-50
1606141168074.3
[]
docs.rs
Important! - Ensure that you have first updated Orchestrator to the matching version, and that it is running, prior to upgrading Insights. - If the Insights Admin made direct changes to the out-of-the-box dashboards from the Insights portal and re-shared them, those dashboards will be replaced on upgrade. If you wish to retain your current out-of-the-box dashboards, please export them prior to upgrading. You can import them once the upgrade is complete. To upgrade your existing Insights installation, follow these steps: - Run the UiPathInsightsInstaller.exe installer as an administrator. The UiPath Insights Installer wizard is displayed. Select Upgrade. - Select the Check here to accept the license agreement check box to agree to the terms in the agreement, then click Next. The Insights Server Configuration is displayed. - Select Please read prior to upgrading to review the risks and attendant recommendations to back up your data prior to upgrading. - Provide the connection details for your Orchestrator instance, as follows: - Orchestrator Endpoint - the URL of your Orchestrator. - Username - the username of the Host tenant. By default, this is admin and cannot be edited. - Password - the password for the Host admin account. - Click the I have backed up Insights and I understand that failure to do so may cause data loss check box in order to proceed. - Click Upgrade. The upgrade process starts. Once completed, click Close to exit the installer. Important! After Insights installation has finished: - Ensure the .NET Trust Level of the Sisense app is set to Full. - Open the consts.js file located in the C:\Program Files\Sisense\app\query-proxy-service\src\common directory and set the HEALTH_CHECK_TIMEOUT parameter to 100000. - Restart the Sisense.QueryProxy service. If you would like to change out-of-the-box dashboard access for your users, you will need to run the Insights Admin Tool for the respective tenant once the update is completed.
https://docs.uipath.com/installation-and-upgrade/docs/insights-upgrading
2020-11-23T22:04:48
CC-MAIN-2020-50
1606141168074.3
[array(['https://files.readme.io/f09becd-insights_upgrade.png', 'insights_upgrade.png'], dtype=object) array(['https://files.readme.io/f09becd-insights_upgrade.png', 'Click to close...'], dtype=object) ]
docs.uipath.com
Deploy apps that use the VMware Workspace ONE SDK for iOS (Swift) to the App Store without dependency on other Workspace ONE UEM components. The SDK includes a mode for your application for use during the Apple App Review process. This app review mode removes dependencies on broker applications such as the Workspace ONE Intelligent Hub for iOS, VMware Container, and the Workspace ONE application. It also enables the app reviewer to access the application without enrolling with Workspace ONE UEM. Use this work flow only on applications built with the Workspace ONE SDK that you submit to the App Store for review. Do not use this work flow for any other application development processes. Also, do not use the process in a production environment. This process is only supported for use in a test environment for applications you submit to Apple's App Review. App review mode includes several steps. Integrate the SDK with your application. Configure the app review mode testing environment in the Workspace ONE UEM console, upload the application IPA file, assign it an SDK profile, and deploy it to the test environment. See Configure an App Review Mode Testing Environment in the Workspace ONE UEM Console. Assign an app review mode server and a group ID to the SDK PLIST. See Declare the App Review Server and Group ID in the SDK PLIST. Test the IPA in the test environment. See Test the App Review Mode Testing Environment in the Workspace ONE UEM Console. Run the app store build script. See Build Script Information for App Store Submission. Submit your application for review to the Apple App Store, ensuring to add the app review mode server, group ID, and user credentials from the test environment to the submission.
https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/services/VMware-Workspace-ONE-SDK-for-iOS-(Swift)/GUID-AWT-APPREV-HIGHLVL-STEPS.html
2020-11-23T23:14:35
CC-MAIN-2020-50
1606141168074.3
[]
docs.vmware.com
Returns the site primary timezone if the application is configured to override user preferences; otherwise it returns the preferred timezone of the given user, or the site primary timezone if the user doesn't have a preference set. Syntax: usertimezone( user ) Parameter: user (User) - the user (or the username of the user) for whom the timezone should be returned. Returns: Text - the timezone is returned as a string, such as "GMT" or "EST". You can copy and paste these examples into the Expression Rule Designer to see how this works: usertimezone(pv!user) returns GMT
https://docs.appian.com/suite/help/20.3/fnc_scripting_usertimezone.html
2020-11-23T22:41:02
CC-MAIN-2020-50
1606141168074.3
[]
docs.appian.com
When it comes to efficiency, a well-designed workflow is king. Workflows are like personal assistants; they not only carry out specific tasks for you, but they also save you time in the process. With Flexie, you can create as many workflows as you want. A workflow lets you set event-based actions. Think of it like an "if/then" statement: once a condition is met, a trigger is activated. For example, if the lead opens an email, add five points to it. Or if the lead submits a form, send a follow-up email. Automate repetitive tasks and do more with less. Below are some of the main building blocks of Flexie's workflows:
Workflow sources
Entity forms: Form submissions are arguably one of the best ways of gathering new leads. Any time a lead submits a form, it will be added to the workflow.
Entity lists: You might want to include smart lists or a specific lead list in your workflow. Once you choose a list, all entities that are part of that list will be automatically added to the workflow. You can choose as many lists as you want, and they will all be part of the workflow.
Entity events: In Flexie CRM, there are two types of entity events: On Entity Insert and On Entity Update respectively. If you choose On Entity Update, the selected custom fields will be changed automatically once an entity gets updated. Notice that this works for updates only. The same goes for On Entity Insert. Depending on your needs, you can choose one or both.
Manual: This source is self-explanatory: any time an entity is manually selected by a user, it will be added to the workflow.
Listener: This source is another amazing feature of Flexie's workflow. What the listener does is "listen" to events, 'Watch For' events like Form submission, Incoming Email, Page Visit, etc. So, any time an entity fires a 'Watch For' event, it will be automatically added to the workflow.
Actions
Add Note: Adding a note automatically has never been easier. Choose the timing of the action, and let the workflow do the rest of the work for you.
Add Task: Once you select the Add Task action, you can add a task for every entity inside the workflow. You can set the timing, the owner of the task (the person who's in charge of the task) and you can show it in the calendar. The workflow will create the task for you. Simple, time-saving and very effective.
Adjust lead points: What if you want to add points to a lead when there is a form submission, or a visit to the page? With Flexie, you can increase or decrease points. Give a name to it, choose the time when you want the action to take place, choose how many points you want to give to or subtract from the lead and then click Add. Any time an event takes place, for example, the lead submits a form, the workflow will automatically add points to the lead.
Change workflows: This action can add entities to specific workflows, or it can remove them from specific workflows once an event is triggered. Simple and quite intuitive.
Modify lead's list: With Flexie, you can automatically modify a lead's lists any time you want. You can add a specific lead to your selected list, or you can also remove the lead from your selected lists.
Send email: Sending automatic emails has never been easier. Set the time when you want the email to be sent, select the email you want to send, choose the email field and then click Add. The email will be automatically sent to the lead(s) you want.
Send email notification: What if you want to notify a specific user, or a group of users, through an email?
Once again, choose the time when you want the email notification to be sent, choose the owner type, write a subject and then write the message. You can also give it a name to identify it easily. Once you click the Add button, the workflow will do the rest for you.
Send sms notification: What if you want to send an sms notification to a user or a group of users? Flexie allows you to send automatic sms notifications to selected user(s). Once you fill in all the fields, the workflow will automatically send sms notifications to the selected recipients.
Send web notification: To send automatic web notifications to a user (or group of users), set the time you want the notification to be sent, choose the owner type, give it a title and then write the message. Once you click Add, the selected user(s) will be notified as predicted.
Share: Using this action allows you to share entities with groups, users and roles all at once. Once you fill in the respective fields, click Add. The workflow will automatically share entities with the selected users, groups and roles, just as predicted.
Update Lead: Say you want to update a lead, but you don't want to do that manually. With Flexie, you can automatically update a lead's fields any time you want. Set a time when you want the update to take place, select the lead fields you want to update and click the Add button. Once again, the workflow will do the update for you.
Update lead's owner: With Flexie, you can change the owner of workflow leads any time you want. To do this, fill in the fields and then click the Add button. The workflow will update the lead's owner as predicted.
Decisions
Conditions: In Flexie's workflow, conditions help you further specify the workflow process. Complex filters in the form of AND and OR conditions allow you to trigger actions when the entity meets specific conditions. You can add as many rules as you want. The workflow will keep triggering actions based on the selected conditions.
Watch For events: Watch For events will execute actions any time there's an event (for example, a lead submits a form, visits a page, opens a marketing email, etc.).
Form submission: This event will execute actions any time a lead submits a form. You can choose the time when the actions will take place.
Incoming email: This event executes actions any time there is an incoming personal email. Check the reading (Read Only, Unread Only) and the Email Type (New Conversation Email Only, Reply Email Only), and then click the Add button. The workflow will do the rest for you.
Open Marketing Email: This event executes actions any time a marketing email is opened. To properly build this event, you have to connect a "Send Email" action to the top of this decision. This way you filter which email to watch for.
Open Personal Email: This event will execute actions any time a personal email is opened. A personal email in Flexie CRM is every email sent and received in the Mailbox.
Page visit: This executes actions upon a page/url hit. To do this, first insert the URL of the page where you placed the tracking pixel. You can also set the number of times a lead/contact visits the page, whether it returns within or after a specified period of time, and the total time spent on your page.
Note that we're constantly working on adding new features to Flexie, and the above list is far from exhaustive. To stay updated with the latest features, news and how-to articles and videos, please join our group on Facebook, Flexie CRM Academy, and subscribe to our YouTube channel Flexie CRM.
https://docs.flexie.io/docs/administrator-guide/workflow-and-automation-overview/
2020-11-23T22:05:45
CC-MAIN-2020-50
1606141168074.3
[array(['https://flexie.io/wp-content/uploads/2017/08/Capture-12.png', None], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-83.png', 'Entity lists'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-84.png', 'Entity events'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/10/Capture-4.png', 'Manual'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Listener-1.png', 'Listener'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-54.png', 'Add note'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Add-task-.png', 'Add task'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Lead-points.png', 'Adjust lead points'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-115.png', 'Change workflows'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-85.png', 'Modify lead list'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-13.png', 'Send email'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-86.png', 'Send email notification'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-41.png', 'Send sms notification'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Send-web-notification.png', 'Send web notification'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Share.png', 'Share'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/10/Capture-9.png', 'Update lead'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-43.png', "Update lead's owner"], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Flexie-CRM-87.png', 'Conditions'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Watch-for-Form-submission.png', 'Form submission'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-117.png', 'Incoming email'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/10/Capture-15.png', 'Opens marketing email'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Capture-118.png', 'Open personal email'], dtype=object) array(['https://flexie.io/wp-content/uploads/2017/08/Page-visit.png', 'Page visit'], dtype=object) ]
docs.flexie.io
17. References¶ MDAnalysis and the included algorithms are scientific software that are described in academic publications. Please cite these papers when you use MDAnalysis in published work. It is possible to automatically generate a list of references for any program that uses MDAnalysis. This list (in common reference manager formats) contains the citations associated with the specific algorithms and libraries that were used in the program. 17.1. Citations for the whole MDAnalysis library¶ When using MDAnalysis in published work, please cite [Michaud-Agrawal2011] and [Gowers2016]. (We are currently asking you to cite both papers if at all possible because the 2016 paper describes many updates to the original 2011 paper and neither paper on its own provides a comprehensive description of the library. We will publish a complete self-contained paper with the upcoming 1.0 release of MDAnalysis, which will then supersede these two citations.) 17.2. Citations for included algorithms and modules¶ If you use the RMSD calculation ( MDAnalysis.analysis.rms) or alignment code ( MDAnalysis.analysis.align) that uses the qcprot module please also cite [Theobald2005b] and [Liu2010b]. If you use the helix analysis algorithm HELANAL in MDAnalysis.analysis.helanal please cite [Bansal2000b]. If you use the GNM trajectory analysis code in MDAnalysis.analysis.gnm please cite [Hall2007b]. If you use the water analysis code in MDAnalysis.analysis.waterdynamics please cite [Araya-Secchi2014b]. If you use the Path Similarity Analysis (PSA) code in MDAnalysis.analysis.psa please cite [Seyler2015b]. If you use the implementation of the ENCORE ensemble analysis in MDAnalysis.analysis.encore please cite [Tiberti2015b]. If you use the streamline visualization in MDAnalysis.visualization.streamlines and MDAnalysis.visualization.streamlines_3D please cite [Chavent2014b]. If you use the hydrogen bond analysis code in MDAnalysis.analysis.hydrogenbonds.hbond_analysis please cite [Smith2019]. If you use rmsip() or rmsip() please cite [Amadei1999] and [Leo-Macias2004]. If you use cumulative_overlap() or cumulative_overlap() please cite [Yang2008]. 17.3. Citations using Duecredit¶ Citations can be automatically generated using duecredit, depending on the packages used. Duecredit is easy to install via pip. Simply type: pip install duecredit duecredit will remain an optional dependency, i.e. any code using MDAnalysis will work correctly even without duecredit installed. A list of citations for yourscript.py can be obtained using simple commands. cd /path/to/yourmodule python -m duecredit yourscript.py or set the environment variable DUECREDIT_ENABLE: DUECREDIT_ENABLE=yes python yourscript.py Once the citations have been extracted (to a hidden file in the current directory), you can use the duecredit program to export them to different formats. For example, one can display them in BibTeX format, using: duecredit summary --format=bibtex Please cite your use of MDAnalysis and the packages and algorithms that it uses. Thanks!
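As a minimal illustration of how duecredit picks up these module-level citations (a sketch only — the topology/trajectory file names below are placeholders, and the results attribute name differs slightly between MDAnalysis releases):

```python
# rmsd_cite_demo.py -- run as: DUECREDIT_ENABLE=yes python rmsd_cite_demo.py
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# Placeholder input files -- substitute your own topology/trajectory.
u = mda.Universe("topology.psf", "trajectory.dcd")
ref = mda.Universe("topology.psf", "trajectory.dcd")

# Running the RMSD analysis exercises the qcprot code path, so duecredit
# records the Theobald/Liu citations in addition to the library papers.
R = rms.RMSD(u, ref, select="backbone")
R.run()

# In MDAnalysis 1.x the results live on R.rmsd; newer releases also expose
# them as R.results.rmsd.
print(R.rmsd[:3])
```

Afterwards, `duecredit summary --format=bibtex` exports the recorded references as described above.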
https://docs.mdanalysis.org/stable/documentation_pages/references.html
2020-11-23T22:28:24
CC-MAIN-2020-50
1606141168074.3
[]
docs.mdanalysis.org
Document type: facet_value There are 0 pages with document type 'facet_value' in the GOV.UK search index. Rendering apps This document type is not rendered by any apps. This could mean that the document type is not meant to be rendered (a redirect for example), or that there aren't actually any pages with this document type.
https://docs.publishing.service.gov.uk/document-types/facet_value.html
2020-11-23T21:51:37
CC-MAIN-2020-50
1606141168074.3
[]
docs.publishing.service.gov.uk
This documentation does not apply to the most recent version of ITSI. threshold_labels.conf The following are the spec and example files for threshold_labels.conf. threshold_labels.conf.spec # Copyright (C) 2005-2019 Splunk Inc. All Rights Reserved. # # This file contains all possible attribute/value pairs for configuring settings # for severity-level thresholds. Use this file to configure # threshold names and color mappings. # # To map threshold names and colors, place a threshold_labels.conf in # $SPLUNK_HOME/etc/apps/itsi/local/. For examples, see threshold_labels.conf.example. # # To learn more about configuration files (including precedence) see the documentation # located at # # CAUTION: You can drastically affect your Splunk installation by changing any settings in # this file other than the colors. Consult technical support () # if you are not sure how to configure this file. [<name>] color = <string> * A valid color code. * Required. lightcolor = <string> * A valid color code to display for Episode Review "prominent mode". * When you view Episode Review in prominent mode, the entire row is colored rather than just the colored band on the side. * Required. threshold_level = <integer> * A threshold level that is used to create an ordered list of the labels. * For example, if you set the 'Normal' threshold level to "1", it appears first when the levels are listed in the UI. * Optional. health_weight = <integer> * The weight or importance of this status. * This value should be between 0 and 1. * In general, regular levels like Normal and Critical have a weight of "1", while less important levels like Maintenance and Info have a weight of "0". * Required. health_min = <integer> * The minimum threshold value. * This value must be a number between 0 and 100. 0 and 100 are inclusive but the minimum threshold value is exclusive. * Required. health_max = <integer> * The maximum threshold value. * This value must be a number between 0 and 100. 0 and 100 are inclusive but the maximum threshold value is exclusive. * Required. score_contribution = <integer> * The number, traditionally from 0 to 100, that this particular level will contribute towards health score calculations. * Required. threshold_labels.conf.example # Copyright (C) 2005-2019 Splunk Inc. All Rights Reserved. # This is an example threshold_labels.conf. Use this file to # configure settings for severity-level thresholds. # # To use one or more of these configurations, copy the color code # into threshold_labels. # # This file contains examples of brighter severity colors, with "Normal" severity # being replaced with "Low" severity. [info] color = #6AB7C7 threshold_level = 1 [low] color = #65A637 threshold_level = 2 [medium] color = #FAC51C threshold_level = 3 [high] color = #F7902B threshold_level = 4 [critical] color = #D85D3C threshold_level = 5 Last modified on 24 June, 2019 This documentation applies to the following versions of Splunk® IT Service Intelligence: 4.3.0, 4.3.1
https://docs.splunk.com/Documentation/ITSI/4.3.1/Configure/threshold_labels.conf
2020-11-23T23:11:03
CC-MAIN-2020-50
1606141168074.3
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
High Availability Large enterprise organizations typically require production systems to have some level of failover or high availability setup to achieve uptime targets. Snow Commander is an application that can be run in an Active-Passive clustered configuration to meet this common enterprise requirement. This guide explains the process by which Commander is made highly available by using a two-node cluster of application servers. To further protect the availability, install the application against a highly available Microsoft SQL database. (Note that the default Postgres SQL database can't be used for a High Availability deployment.) This guide uses as its example the Commander HA configuration with a front-end load balancer, but the same configuration will work for any service-aware load balancing solution.
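The guide does not prescribe a particular load balancer, so purely as an illustration, here is a minimal HAProxy sketch for fronting a two-node Active-Passive pair; the hostnames, IP addresses and port are assumptions, not values from the Commander documentation, and the `backup` keyword is what keeps the second node passive until the first fails its health check:

```haproxy
# /etc/haproxy/haproxy.cfg -- illustrative fragment only
frontend commander_front
    bind *:443
    mode tcp
    default_backend commander_nodes

backend commander_nodes
    mode tcp
    option tcp-check
    # Active node first; "backup" makes the second node passive until node1 is down.
    server commander-node1 10.0.0.11:443 check
    server commander-node2 10.0.0.12:443 check backup
```

Any other service-aware load balancer can implement the same pattern: health-check both application servers and only route to the passive node on failover.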
https://docs.embotics.com/high-availability/high-availability.htm
2020-11-23T22:45:08
CC-MAIN-2020-50
1606141168074.3
[]
docs.embotics.com
Publishing an automation project means archiving the project folder so that it can be sent to Robots and then executed. By default, all the files in the project folder are published. If you want to prevent a specific file from being included in the published package, right-click it in the Project panel, and then select Ignore from Publish. In the case of libraries, ignoring a workflow file from publish prevents it from appearing as a reusable component in the Activities panel when the published library is installed in a project. You can publish automation projects to Orchestrator, a custom NuGet feed, or locally. After publishing to Orchestrator, the archived project is displayed on the Packages page and you can create a process to be distributed to Robots. When you publish an automation process to the Orchestrator Personal Workspace or you publish test cases, a process is created automatically if one does not already exist, and existing processes are automatically updated to the latest published version. Additionally, automation projects may be published to a custom NuGet feed, with the option to also add an API key if the feed requires authentication. Publishing projects locally requires you to provide a path on the local machine, different than the location where process packages are published. From here, you can later manually send the packages to the Robots, so they can be executed. The default local publish location is %ProgramData%\\UiPath\\Packages. This can be easily done by using the Publish button on the Design ribbon tab. Please note that automation projects cannot be published if the project.json file is located in a read-only location. To publish an automation project: - In Studio, create a new project. - In the Design ribbon tab, click Publish. The Publish window opens. Notice that the window's title bar changes depending on the context: - Publish Process when publishing a process; - Publish Library when publishing a library project; - Publish UI Library when publishing a UI library project; - Publish Test Cases when publishing test cases. - Publish Templates when publishing templates. - In the Package Properties tab: - Enter a name for the package. The drop-down list contains up to 5 of the most recent names of packages that you previously published. - In the Version section, review the Current Version of your project, and type a New Version if needed. Check the Is Prerelease box to mark the version as alpha. Please note that this automatically changes the project’s version schema to semantic. When publishing a new version of the file locally, make sure that the custom location does not already include a file with the same proposed version number. For more details about project versioning, check the About Automation Projects page. - In the Release Notes text box, enter details about the version and other relevant information. Release notes for published projects are visible in the Packages section in Orchestrator. Please note that the Release Notes field accepts a maximum of 10,000 characters. - Click Next. If you are publishing a template, the Template info tab opens next (step 5). Otherwise, proceed to step 6. - (For templates only) In the Template info tab, provide the following information, and then click Next: - Name - The name of the template. - Description - The template description in the Templates tab. - Default Project Name - The default project name when creating a new project using this template. 
- Default Project Description - The default description when creating a new project using this template. - Icon URL - Optional template icon specified as a public URL. The icon is visible in the Templates tab on this specific template. In the Publish options tab, select where to publish the project. The available options depend on the type of project you are publishing: - For processes (including StudioX projects): - Assistant (Robot Defaults) - the default package location for the Robot and Assistant, C:\ProgramData\UiPath\Packages. Projects published here automatically appear in the Assistant. The option is not available if Studio is connected to Orchestrator. - Custom - either a custom NuGet feed URL or local folder. Adding an API Key is optional. - Orchestrator Tenant Processes Feed, Orchestrator Personal Workspace Feed, and any tenant folder with a separate package feed - available if Studio is connected to Orchestrator. Please note that the Orchestrator Personal Workspace Feed is only available if the connected Orchestrator has the Personal Workspace feature enabled. - For test cases: - The same options that are available for processes, with the exception of Orchestrator Personal Workspace Feed. - For libraries and UI libraries: -. - For templates: - Local - the location for publishing templates locally, by default: C:\Users\User\Documents\UiPath\.templates. -. If you are publishing a library, additional settings are available in the Publish options tab under Library Settings: - Activities Root Category - enter a name for the category under which the reusable component will be listed in the Activities panel. - Include Sources - select this option to package all .xamlsources within the generated assembly file, including workflows that were previously made private. This is helpful during debugging workflows. - Compile activities expressions - select this option to compile and package all activities expressions with the library. This results in an improved execution time. Note: To find out what might prevent a library from being published successfully, read about the limitations when publishing libraries. - Click Next to advance to the Certificate signing tab, or Publish to publish your project. - (Optional) In the Certificate Signing tab, add a local Certificate Path next to the Certificate box. Furthermore, add the Certificate Password and an optional certificate Timestamper if needed. For more details, check out the Signing Packages page. Note: Currently .pfxand .p12certificate extensions are accepted for signing projects. - Click Publish. The entire project folder is archived into a .nupkgfile, and uploaded to Orchestrator, the custom NuGet feed or saved in the local directory. - If the project is published successfully, the Info dialog box is displayed and the project is copied to the NuGet location set in the NuGetServerUrlparameter, in the UiPath.settingsfile. The Info dialog box displays: - The name under which the package was published. - The version number under which the package was published; - The location where the project was published if the project was published locally or in the Robot's Default. Click the path to go to the package, except if the publish location was Orchestrator. - The Details option which expands a list containing the names of project files that were published. - The Copy to Clipboard option. 
Information added during publishing, like the publish location is persisted in the window, so it can be used for subsequent publish actions performed for the same type of project. Each time you click Publish, a new version of the project is created and sent to the packages feed. Publishing to a secure feed can be authenticated either through the Robot Key, Orchestrator credentials, Windows authentication, or API key. Important: Published projects must not be unpackaged. To make any changes, please open the initial .xamlfile in Studio, perform the changes, and then publish the project again. Updated 11 days ago
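For orientation, the Current Version shown in the Package Properties tab is read from the project's project.json file. The sketch below is hypothetical — field names and additional entries vary by Studio version and are not taken from this page:

```json
{
  "name": "InvoiceProcessing",
  "description": "Sample project description (illustrative only)",
  "main": "Main.xaml",
  "projectVersion": "1.0.2"
}
```

Bumping `projectVersion` here (or in the Publish window) is what produces a new package version on the feed.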
https://docs.uipath.com/studio/docs/about-publishing-automation-projects
2020-11-23T22:46:29
CC-MAIN-2020-50
1606141168074.3
[array(['https://files.readme.io/7336887-ribbon.png', 'ribbon.png'], dtype=object) array(['https://files.readme.io/7336887-ribbon.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/b202d61-publish_package_properties.png', 'publish_package_properties.png'], dtype=object) array(['https://files.readme.io/b202d61-publish_package_properties.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/50a94b0-publish_templateinfo.png', 'publish_templateinfo.png'], dtype=object) array(['https://files.readme.io/50a94b0-publish_templateinfo.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/a6c6d5f-publish2.png', 'publish2.png'], dtype=object) array(['https://files.readme.io/a6c6d5f-publish2.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/9987c86-publish3.png', 'publish3.png'], dtype=object) array(['https://files.readme.io/9987c86-publish3.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/ab24ac1-info.png', 'info.png'], dtype=object) array(['https://files.readme.io/ab24ac1-info.png', 'Click to close...'], dtype=object) ]
docs.uipath.com
Notable changes for desktop users Change the Desktop release criteria for ARM and AArch64 The following release criteria changes are now present in Fedora 32: Drop Xfce on 32-bit ARM from release-blocking desktops. Add Workstation on AArch64 to release-blocking desktops. The new additions to release-blocking deliverables include the IoT and CoreOS architectures. To reduce the overall test coverage and the number of release-blocking desktops, Fedora no longer blocks on the 32-bit ARM Xfce Desktop spin and adds Workstation on AArch64 as a release-blocking desktop. As a result, this change reduces the number of release-blocking desktops and the potential for blocker bugs. Selected bitmap fonts are now available as OpenType In Fedora 31, the Pango library switched to the HarfBuzz back end, which does not support bitmap fonts. Applications that use Pango for font rendering, such as GNOME Terminal, can no longer use bitmap fonts. This release introduces new packages that provide selected bitmap fonts converted to the OpenType format. This format is supported by Pango. The following packages now provide OpenType versions of bitmap fonts: bitmap-lucida-typewriter-opentype-fonts bitmap-fangsongti-opentype-fonts bitmap-console-opentype-fonts bitmap-fixed-opentype-fonts ucs-miscfixed-opentype-fonts terminus-fonts You cannot install both the bitmap and OpenType versions of the packages, with the exception of terminus-fonts, which includes both formats. Fonts language Provides moved to the langpacks package This change aims to provide more reliable, predictable, and consistent font installation as well as a better user experience around font dependencies. To achieve this, the change moves the Provides: font(:lang=*) tags into the langpacks-core-font-* sub-packages of the langpacks package. The sub-packages already obtain the default font, locale and input-method for each language. As a result, whenever a missing glyph font installation is requested, the langpacks-core-font-<lang> package will be installed and will get the default font using the existing Requires: tag.
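For example, the converted fonts can be pulled in and checked with the standard tooling (a quick sketch; exact package availability depends on the repositories enabled on your Fedora 32 system):

```shell
# Install two of the OpenType conversions mentioned above
sudo dnf install terminus-fonts bitmap-console-opentype-fonts

# Verify that fontconfig now reports the OpenType variants,
# so Pango-based applications such as GNOME Terminal can use them
fc-list | grep -i -e terminus -e console
```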
https://docs.fedoraproject.org/es/fedora/f32/release-notes/desktop/Desktop_index/
2020-11-23T22:19:58
CC-MAIN-2020-50
1606141168074.3
[]
docs.fedoraproject.org
- Configuring CNTLM - Configuring Docker for downloading images - Adding Proxy variables to the Runner config - Adding the proxy to the Docker containers - Proxy settings when using dind service Proxy settings when using dind service When using the docker-in-docker executor (dind), it can be necessary to specify docker:2375,docker:2376 in the NO_PROXY environment variable. This is because the proxy intercepts the TCP connection between: dockerd from the dind container. docker from the client container. The ports can be required because otherwise docker push will be blocked as it originates from the IP mapped to docker. However, in that case, it is meant to go through the proxy. When testing the communication between dockerd from dind and a docker client locally, dockerd from dind is initially started as a client on the host system by root, and the proxy variables are taken from /root/.docker/config.json. For example: { "proxies": { "default": { "httpProxy": "", "httpsProxy": "", "noProxy": "docker:2375,docker:2376" } } } However, the container started for executing .gitlab-ci.yml scripts will have the environment variables set by the settings of the gitlab-runner configuration ( /etc/gitlab-runner/config.toml). These are available as environment variables as is (in contrast to .docker/config.json of the local test above) in the dind containers running dockerd as a service and the docker client executing .gitlab-ci.yml. In .gitlab-ci.yml, the environment variables will be picked up by any program honouring the proxy settings from default environment variables. For example, wget, apt, apk, docker info and docker pull (but not by docker run or docker build). docker run or docker build executed inside the container of the docker executor will look for the proxy settings in $HOME/.docker/config.json, which is now inside the executor container (and initially empty). Therefore, docker run or docker build executions will have no proxy settings. In order to pass on the settings, a $HOME/.docker/config.json needs to be created in the executor container. For example: before_script: - mkdir -p $HOME/.docker/ - 'echo "{ \"proxies\": { \"default\": { \"httpProxy\": \"$HTTP_PROXY\", \"httpsProxy\": \"$HTTPS_PROXY\", \"noProxy\": \"$NO_PROXY\" } } }" > $HOME/.docker/config.json' Because it is confusing to add additional lines in a .gitlab-ci.yml file that are only needed in case of a proxy, it is better to move the creation of the $HOME/.docker/config.json into the configuration of the gitlab-runner ( /etc/gitlab-runner/config.toml) that is actually affected, for example by setting a pre_build_script on the relevant [[runners]] entry that creates $HOME/.docker/config.json (a sketch is shown directly below). The escaped quotes (\") are needed there because this is the creation of a JSON file with a shell command specified as a single string inside a TOML file. Because it is not YAML anymore, do not escape the :. Note that if the NO_PROXY list needs to be extended, wildcards * only work for suffixes but not for prefixes or CIDR notation.
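The exact runner stanza was lost from the original page during extraction, so treat the following TOML as a reconstruction under assumptions (runner name and executor are placeholders; the proxy variables must already be available to the job, e.g. via the runner's environment setting or CI/CD variables):

```toml
# /etc/gitlab-runner/config.toml -- illustrative fragment only
[[runners]]
  name = "docker-runner"        # hypothetical runner name
  executor = "docker"
  # Recreate $HOME/.docker/config.json before each build so that
  # `docker build` / `docker run` inside the job pick up the proxy settings.
  pre_build_script = "mkdir -p $HOME/.docker/ && echo \"{ \\\"proxies\\\": { \\\"default\\\": { \\\"httpProxy\\\": \\\"$HTTP_PROXY\\\", \\\"httpsProxy\\\": \\\"$HTTPS_PROXY\\\", \\\"noProxy\\\": \\\"$NO_PROXY\\\" } } }\" > $HOME/.docker/config.json"
```

In TOML basic strings `\\\"` decodes to `\"`, which is exactly what the shell needs in order to emit literal quotes into the JSON file.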
https://docs.gitlab.com/12.10/runner/configuration/proxy.html
2020-11-23T22:22:01
CC-MAIN-2020-50
1606141168074.3
[]
docs.gitlab.com
Introduction Simple Network Management Protocol (SNMP) is an internet standard protocol that allows you to exchange management information between network devices. Usually, the network devices come with in-built SNMP agents that you can enable and configure to initiate the communication with the network management system. SNMP operates using a manager-agent model that exchanges the following types of messages between them: - TRAP - GET - GET-NEXT - GET-RESPONSE - SET SNMP TRAP is the most commonly used SNMP message. The main advantage of using TRAP is the instantaneous trigger in the event of any issue in your device. OpsRamp platform as part of monitoring your resources lets you monitor your SNMP devices using SNMP Traps Configurations. You can create SNMP Trap monitors and receive traps for the desired devices. SNMP trap An SNMP trap indicates a specific condition that is defined in the Management Information Base (MIB) and is expressed in variable bindings that include a trap message. OpsRamp processes these traps and translates them to alerts. The SNMP agent sends Trap messages directly to the SNMP agent without waiting for the approval of the SNMP manager. OID Object Identifier (OID) in SNMP is an address that identifies the devices and lets you track the status of each device. The SNMP OID is in the form of numbers. For example, 1 . 3 . 6 . 1 . 4 . 1 . 1452 . 1 . 2 . 5 . 1 . 3. 21 . 1 . 4 . 7. You can either include or exclude OID while creating SNMP Trap monitor in the OpsRamp platform. Setting up SNMP traps You can create an SNMP Trap for each client and then define the resources to receive traps. Creating SNMP trap monitors You can create a filter for trap monitoring, where users can define a list of traps to monitor from target SNMP enabled devices. - Go to Setup > Monitoring > SNMP Traps Configuration. - From SNMP TRAP MONITORS LIST screen, click the + icon to add a monitor. - From CREATE SNMP TRAP MONITOR screen, provide the following: - Client Name: Refers to the name of the client that you want to configure the SNMP Trap Monitor. - Monitor Name: Refers to the name of the monitor. - Perform the following steps for OIDs: - From the options displayed, select one of the following options: - Exclude OID: Refers to the OIDs that you do not want to receive the trap from. - Include OID: Refers to the OIDs that you want to receive the trap from. - OID: Refers to the OID to include or exclude for a particular event. Note: Enter at least one OID to save the monitoring. - Click + if you want to add more OIDs. - Select one of the three options available for Devices. - If you have selected Receive traps from specified devices only, select devices from the Available Devices section and move to the Selected Devices section. - If you have selected Discard traps from specific IP addresses, provide the list of IP addresses of devices from which you do not want to receive SNMP Traps. Note: Discard trap option works only when Gateway is used as the trap receiver. - Click Save. SNMP TRAPS MONITORS LIST screen displays the created SNMP TRAP. You can view the newly configured SNMP Trap monitors in SNMP TRAP MONITORS LIST. After creating the SNMP Trap monitor, you can perform one of the following: - Edit: Click name of SNMP Trap Monitor to modify the existing details. - Delete: Select check box of an SNMP Trap monitor and click the Delete icon. - Export: Select check box of an SNMP Trap monitor and export as CSV or PDF file. 
Viewing SNMP trap monitor lists After creating the SNMP Trap monitors, you can view the configured details in the SNMP TRAP MONITORS LIST page. Refer to the following table to view the SNMP Trap Monitors. Scenarios Do not receive specific SNMP traps A user does not want to receive SNMP traps from specific OIDs. Solution Create an SNMP trap monitor by excluding the OIDs from which you do NOT wish to receive the trap. Receive all SNMP traps A user wants to receive SNMP traps from all the resources. Solution Create an SNMP trap monitor by selecting the option Receive traps from all devices. Receive specific SNMP traps A user wants to receive SNMP traps from specific resources only. Solution While configuring the SNMP traps, select the option Receive traps from specified devices only, and select the devices that you need to receive the SNMP traps. Discard SNMP traps A user wants to discard SNMP traps from specific IP addresses. Solution Create an SNMP trap monitor by selecting the option Discard traps from specific IP addresses.
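To confirm that a configured trap monitor is actually receiving traps, one common technique is to send a test trap from any Linux host with the Net-SNMP tools installed. This is a generic sketch, not an OpsRamp-specific procedure — the community string and the receiver address (for example, the gateway acting as trap receiver) are placeholders:

```shell
# Send a test SNMPv2c linkDown trap (OID 1.3.6.1.6.3.1.1.5.3) to the trap receiver
snmptrap -v 2c -c public 10.20.30.40 '' \
  1.3.6.1.6.3.1.1.5.3 \
  1.3.6.1.2.1.2.2.1.1.2 i 2
```

If the trap's OID falls under an "Include OID" rule (and the sending device is in the selected device list), a corresponding alert should be generated; if it matches an "Exclude OID" rule, it should be dropped.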
https://docs.opsramp.com/solutions/events-incidents/events-and-logs/monitoring-using-snmp-traps/
2020-11-23T22:44:01
CC-MAIN-2020-50
1606141168074.3
[]
docs.opsramp.com
Contact Form If you want your site visitors to be able to send you messages from your site, you can add a contact form. To add a contact form: - Go to the Modules tab, select Contact Form, and drag the module to the page. - On the Settings tab, specify the following: - Recipient's email address. You can specify several email addresses separating them with commas (,) or semicolons (;). - Message subject. - Text to be shown on the button that sends the message. - Protection from automated spam postings. Leave the checkbox Enable the protection from automated spam postings selected if you want to avoid receiving spam sent by scripts or spam bots through the contact form. The protection is based on a highly efficient mechanism, called reCAPTCHA. In the contact form, it is shown as an input box accompanied by a combination of distorted words or symbols that can be recognized only by humans. Before a message can be sent through the contact form, a user is prompted to recognize the symbols and type them in. - If you want to add, move, or remove input fields from the form, or change their labels, click the Fields tab, and make the required changes. - If you want to change the default message "Your message was sent. Thank you." which is shown when a message is sent, click the Reply tab and type the new text. - Click OK. To remove a contact form: Place the mouse pointer over the form and click Remove.
https://docs.plesk.com/de-DE/12.5/administrator-guide/websiteverwaltung/creating-sites-with-presence-builder/bearbeiten-von-websites/content-text-tables-images-video-forms-and-scripts/contact-form.69146/
2020-11-23T22:09:30
CC-MAIN-2020-50
1606141168074.3
[]
docs.plesk.com
GeoServer "data dir" versioning GeoServer stores its configuration files in a special directory, which everyone calls the "data dir" (but really, no GIS data should be stored in here ...). When someone updates the GeoServer configuration, XML files are modified in this directory, and the updateSequence value is incremented. Why would you want to version this directory? Well, we found several advantages to this, and now, we're doing it every time we deploy a new GeoServer instance: * it's a way to track changes when several people have admin rights, * it's so much easier to roll back to a previous state, * one gets a better insight of what happens behind the scenes, * it can turn into a backup solution, * it can fork a GeoServer instance into a testing one, then pull back the changes once OK, * it can distribute a config among a distributed stack of GeoServers * ... Setting up the repository From the template one If you're creating a new geoserver instance, you should really start from the "data dir" we provide: sudo mkdir /opt/geoserver_data_dir sudo chown tomcat8 /opt/geoserver_data_dir sudo -u tomcat8 git clone /opt/geoserver_data_dir cd /opt/geoserver_data_dir sudo -u tomcat8 git remote rename origin upstream At this stage, you already have a local repository for your geoserver "data dir". From an existing "data dir" In case you're starting from an existing "data dir": cd /path/to/your/geoserver_data_dir sudo -u tomcat8 git init sudo -u tomcat8 git add --all . sudo -u tomcat8 git commit -m "initial repository state" Let's also ignore the changes to the logs, temp, gwc folders: sudo -u tomcat8 cat > /path/to/your/geoserver_data_dir/.gitignore << EOF logs temp gwc EOF Also exclude folders containing data if you don't want them to be versioned. Finally: cd /path/to/your/geoserver_data_dir sudo -u tomcat8 git add .gitignore sudo -u tomcat8 git commit -m "git ignores temp, logs and gwc folders" Managing the repository Easy steps if you're familiar with git ... Committing changes There are two strategies: either you're doing it manually (but this may soon become a pain), or you leave it to a cron task. cd /path/to/your/geoserver_data_dir sudo -u tomcat8 git add --all . sudo -u tomcat8 git commit -m "my commit message" Viewing changes To view the commit history: sudo -u tomcat8 git log To identify the changes introduced by a revision: sudo -u tomcat8 git diff xxxxxx ... where xxxxxx is the commit hash. Temporary rollback Let's say you want to temporarily roll back to a given revision. First commit your working state (see above). Then: sudo -u tomcat8 git checkout xxxxxx Don't forget you have to reload the geoserver catalog from the data dir. This is done in the geoserver web interface with the "reload config" button. To go back to the latest state: sudo -u tomcat8 git checkout master ... and reload the configuration again. Complete rollback This is achieved with: sudo -u tomcat8 git reset --hard xxxxxx ... where xxxxxx is the revision hash you want to go to. Note that the --hard option will also discard any uncommitted changes. Git as a backup solution If your repository has a remote where you have the right to push to, git can easily turn into a backup solution for your data dir. Check your remotes with: cd /path/to/your/geoserver_data_dir sudo -u tomcat8 git remote -v Either you have no remote or you may see something like this (in case you're starting from our minimal data dir): upstream (fetch) upstream (push) Once your "origin" remote is set up, you don't have to do this anymore.
Just push the changes with: sudo -u tomcat8 git push origin In case you opt for automatic backups with git, a cron job should regularly: - add the changes - commit them - push the master branch to the remote repository
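A small sketch of such a cron-driven backup script, using the paths and remote name from above (the hourly schedule and script location are just examples):

```shell
#!/bin/sh
# /usr/local/bin/geoserver-datadir-backup.sh
# Example /etc/cron.d entry:  0 * * * * tomcat8 /usr/local/bin/geoserver-datadir-backup.sh
cd /opt/geoserver_data_dir || exit 1

git add --all .
# Only create a commit when something actually changed
git diff --cached --quiet || git commit -m "automatic backup $(date -u +%Y-%m-%dT%H:%M:%SZ)"
git push origin master
```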
https://georchestra-user-guide.readthedocs.io/en/19.04/good_practices/geoserver_data_dir_versioning/
2020-11-23T22:04:29
CC-MAIN-2020-50
1606141168074.3
[]
georchestra-user-guide.readthedocs.io
I knew it was time to post a new Blog but was not sure what to write about. This is not because there is nothing to share but because I was struggling to decide just which thing I wanted to share next. Then I saw this Blog post and shared it to the Nevus Support Facebook page - It was called ' Choosing to love a child you could lose in spite of diagnosis'. It really struck home for me. I am sure it struck home for a few of you as well. Thankfully, unlike the mother in the article, my baby was not taken away to another hospital entirely. I can only begin to imagine how hard that would be. My baby was whisked away from me initially but within a few hours I had her well and truly by my side. I was not allowed to bath her because they didn't know how her skin would react but I had her with me. The nurses kept coming in and taking swabs of her head where it was falling apart. I didn't feel that I had the choice to either love or not love this tiny, precious girl, covered in we didn't know what. There was no choice. I had loved her before she was born and what I felt was an overwhelming protectiveness. I was prepared to do anything for her. Mumma bear kicked in quickly and aggressively. It was a long time ago. Before the internet was really becoming big and certainly before Facebook. We had no idea in the early days of what her condition or prognosis was. I think I believed that if I just loved her enough then everything would be ok. Now I realise that is called bargaining and boy did I bargain! All I knew was that my baby was covered in black marks, that her head bled, cracked and wept continuously, that she was a source of concern and curiosity for the hospital staff. Mostly I knew that I did not want the mums of the other babies anywhere near her. I was terrified, overwhelmed and over protective all at the same time. Over the next couple of days it became clear that this could well be a very big deal. That it wasn't going to go away and that we had some big challenges to face together as a family. Thank goodness we lived where we had reasonably quick access to a great dermatologist, who is her dermatologist to this day and who she shares a special bond with. When she was a few days old we were sent to a consultation with a Plastic Surgeon who had come home from a holiday in New Zealand to see her. We saw him on the Friday afternoon. We were told to go home and enjoy the weekend as a family and to bring her to the major paediatric hospital in our city by 7am Monday Morning. It was Christmas time. We took our girl home. I recall feeling an overwhelming desire to ensure she experienced, that we made memories and that she would be able to palpably feel the love. That she would know she was so loved it hurt. We took her to the Magic Cave to have her photo taken with Father Christmas and her brothers. He said he thought she was the youngest child he had ever had his photo taken with. I don't know why but getting that photo meant a lot to me. She had experienced the magic of Christmas somehow in my mind. During that weekend I took her outside to watch the sunrise and the sunset. It rained, so we went and stood in it. I let her feel wet leaves. I don't think she was put down the whole time. She was loved so much that weekend. We didn't know what would happen after Monday and I wanted her to experience life and love, even if it was only for a short while. It is terrifying and confronting loving a child that you think you could lose. I did not experience the calm that some people talk about. 
It was more a desperation to make sure that what life she had would be filled with love and life. The path we walked over the next 18 months was a frightening and difficult one. We could not take one day for granted. Now that tiny, sick baby is a 13 year old girl. She is incredible. My baby is brave, kind and beautiful. She dances, plays sport, does well in school, has beautiful friends and is becoming a wonderful support and mentor for younger ones with CMN. It has been a heck of ride and I am forever grateful we are on it together. Loving her, even with an uncertain diagnosis, was never a question. Beautifully written, my friend. Your words resonate deep with me. You and your amazing family are a huge source of inspiration for me and mine. Love you guys. Kiz x right back at you lovely xxx
http://docs.nevussupport.com/2015/04/i-knew-it-was-time-to-post-new-blog-but.html
2020-11-23T22:01:38
CC-MAIN-2020-50
1606141168074.3
[]
docs.nevussupport.com
@Generated(value="OracleSDKGenerator", comments="API Version: 20170115") public final class SessionPersistenceConfigurationDetails extends Object The configuration details for implementing session persistence based on a user-specified cookie name (application cookie stickiness). Session persistence enables the Load Balancing service to direct any number of requests that originate from a single logical client to a single backend web server. For more information, see Session Persistence. With application cookie stickiness, the load balancer enables session persistence only when the response from a backend application server includes a Set-cookie header with the user-specified cookie name. To disable application cookie stickiness on a running load balancer, use the updateBackendSet operation and specify null for the SessionPersistenceConfigurationDetails object. Note: Objects should always be created or deserialized using the SessionPersistenceConfigurationDetails.Builder. This model distinguishes fields that are null because they are unset from fields that are explicitly set to null. This is done in the setter methods of the SessionPersistenceConfigurationDetails.Builder. @ConstructorProperties({"cookieName", "disableFallback"}) @Deprecated public SessionPersistenceConfigurationDetails(String cookieName, Boolean disableFallback) public static SessionPersistenceConfigurationDetails.Builder builder() Create a new builder. public String getCookieName() The name of the cookie used to detect a session initiated by the backend server. Use '*' to specify that any cookie set by the backend causes the session to persist. Example: example_cookie public Boolean getDisableFallback() Whether the load balancer is prevented from directing traffic from a persistent session client to a different backend server if the original server is unavailable. Defaults to false. Example: false public Set<String> get__explicitlySet__() public boolean equals(Object o) equals in class Object public int hashCode() hashCode in class Object public String toString() toString in class Object
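A minimal usage sketch with the documented builder (the cookie name is just the example value from the field description above, and error handling plus the surrounding UpdateBackendSet call are omitted):

```java
import com.oracle.bmc.loadbalancer.model.SessionPersistenceConfigurationDetails;

public class SessionPersistenceExample {
    public static void main(String[] args) {
        // Build the application-cookie stickiness configuration.
        SessionPersistenceConfigurationDetails details =
                SessionPersistenceConfigurationDetails.builder()
                        .cookieName("example_cookie")   // or "*" for any backend-set cookie
                        .disableFallback(false)
                        .build();

        System.out.println(details.getCookieName());
    }
}
```

The resulting object would typically be attached to a backend set's details when creating or updating it through the load balancer client.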
https://docs.cloud.oracle.com/en-us/iaas/tools/java/1.17.5/com/oracle/bmc/loadbalancer/model/SessionPersistenceConfigurationDetails.html
2020-11-23T22:48:38
CC-MAIN-2020-50
1606141168074.3
[]
docs.cloud.oracle.com
If a user has never connected to a desktop on an RDS host, and the user launches an application that is hosted on the RDS host, the Windows basic theme is not applied to the application even if a GPO setting is configured to load the Aero-styled theme. Horizon 7 does not support the Aero-styled theme but supports the Windows basic theme. To make the Windows basic theme apply to the application, you must configure another GPO setting. Prerequisites - Verify that the Group Policy Management feature is available on your Active Directory server. The steps for opening the Group Policy Management Console differ in the Windows 2012, Windows 2008, and Windows 2003 Active Directory versions. See "Create GPOs for Horizon 7 Group Policies" in the Configuring Remote Desktop Features in Horizon 7 document. Procedure - On the Active Directory server, open the Group Policy Management Console. - Expand your domain and Group Policy Objects. - Right-click the GPO that you created for the group policy settings and select Edit. - In the Group Policy Management Editor, navigate to . - Enable the setting Force a specific visual style file or force Windows classic and set the Path to Visual Style as %windir%\resources\Themes\Aero\aero.msstyles.
https://docs.vmware.com/en/VMware-Horizon-7/7.9/horizon-published-desktops-applications/GUID-931FF6F3-44C1-4102-94FE-3C9BFFF8E38D.html
2020-11-23T23:22:18
CC-MAIN-2020-50
1606141168074.3
[]
docs.vmware.com
Returns the date the given number of workdays before or after the given date. workday( starting_date, days, [holidays] ) starting_date: (Date) The starting date. days: (Integer) The number of days to advance into the future (positive numbers) or days to retreat into the past (negative numbers). holidays: (Date) A list of holidays not counted as workdays. Returns: Date. An array of holidays may be given by enclosing them in braces, such as {"12/25/2004","12/31/2004"}. workday(date(2011,12,13),-6) returns 12/5/2011
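As an additional illustration using the holidays parameter — assuming holidays are skipped in the same way as weekends, which is what the parameter description implies, and not an output copied from the product:

```
workday(date(2011,12,13), 6, {"12/16/2011"})
```

This should return 12/22/2011, one workday later than the 12/21/2011 you would get without the holiday.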
https://docs.appian.com/suite/help/20.3/fnc_date_and_time_workday.html
2020-11-23T22:54:31
CC-MAIN-2020-50
1606141168074.3
[]
docs.appian.com
Choose your class Work each profession Stockpile your resources Craft new items Choose your pet Face creatures Rise to the top of the rankings and become number one New around here? It can be hard to get started in the RPG part without some guidance. On the next page, you will find a tutorial to launch your adventure.
https://docs.en.akimitsu.xyz/rpg-universe/untitled
2020-11-23T22:46:18
CC-MAIN-2020-50
1606141168074.3
[]
docs.en.akimitsu.xyz
Monitoring Dynamic Discovery Assets discovered through McAfee ePolicy Orchestrator connections are not virtual and therefore appear in the regular Assets list. Since discovery is an ongoing process as long as the connection is active, you may find it useful to monitor events related to discovery. The Discovery Statistics page includes several informative tables: - Assets lists the number of currently discovered virtual machines, hosts, data centers, and discovery connections. It also indicates how many virtual machines are online and offline. - Dynamic Site Statistics lists each dynamic site, the number of assets it contains, the number of scanned assets, and the connection through which discovery is initiated for the site’s assets. - Events lists every relevant change in the target discovery environment, such as virtual machines being powered on or off, renamed, or being added to or deleted from hosts. Dynamic Discovery is not meant to enumerate the host types of virtual assets. The application categorizes each asset it discovers as a host type and uses this categorization as a filter in searches for creating dynamic asset groups. See Performing filtered asset searches. Possible host types include Virtual machine and Hypervisor. The only way to determine the host type of an asset is by performing a credentialed scan. So, any asset that you discover through Dynamic Discovery and do not scan with credentials will have an Unknown host type, as displayed on the scan results page for that asset. Dynamic discovery only finds virtual assets, so dynamic sites will only contain virtual assets. Listings in the Events table reflect discovery over the preceding 30 days. To monitor Dynamic Discovery, take the following steps: - Click the Administration icon. - In the Discovery Options area of the Administration page, click the View* link for Events.
https://docs.rapid7.com/insightvm/monitoring-dynamic-discovery/
2020-11-23T22:38:37
CC-MAIN-2020-50
1606141168074.3
[array(['/areas/docs/_repos//product-documentation__master/f9d3e5e101c2b1c3b68bc988ad2b8944b48ae896/insightvm/images/s_nx_admin_discovery_statistics_AWS.jpg', None], dtype=object) ]
docs.rapid7.com
Scripted authentication acts as the communication layer between Splunk software and the external authentication system, such as PAM or RADIUS. You need to create a script with handlers that implement the required authentication functions; the script passes the necessary information between Splunk Enterprise and the external system. Important: The scripts that ship with Splunk Enterprise are provided as examples that you can modify or extend as needed. They are not supported and there is no guarantee that they will fully meet your authentication and security requirements.
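As a very rough, unofficial sketch of what such a script can look like — the handler name, the key/value input convention and the --status output lines are assumptions to be checked against the scripted-authentication reference for your Splunk version, and the credential check is a stub standing in for a real PAM/RADIUS call:

```python
#!/usr/bin/env python
# Minimal scripted-authentication skeleton (illustrative only).
import sys

def read_inputs():
    # Assumes Splunk passes key/value pairs such as "--username=foo --password=bar" on stdin.
    params = {}
    for field in sys.stdin.read().strip().split("--"):
        if "=" in field:
            key, _, value = field.partition("=")
            params[key.strip()] = value.strip()
    return params

def user_login(args):
    # Replace this stub with a call to your PAM/RADIUS backend.
    if args.get("username") == "testuser" and args.get("password") == "changeme":
        print("--status=success")
    else:
        print("--status=fail")

if __name__ == "__main__":
    call = sys.argv[1] if len(sys.argv) > 1 else ""
    if call == "userLogin":
        user_login(read_inputs())
    else:
        # getUserInfo / getUsers handlers would be implemented here as well.
        print("--status=fail")
```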
https://docs.splunk.com/Documentation/Splunk/7.2.6/Security/ConfigureSplunkToUsePAMOrRADIUSAuthentication
2020-11-23T22:57:36
CC-MAIN-2020-50
1606141168074.3
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Represents the current configuration and status of any running services in the project, which may be inspected and modified via the Garden CLI's environment command. Several named environment configurations may be defined (e.g. dev, testing, ...) in the project's garden.yml. The unit of building in Garden. A module is defined by its garden.yml configuration file, located in the module's top-level directory. Each module has a plugin type, and may define one or more services. Essentially, a project is organized into modules at the granularity of its build steps. A module's build step may depend on one or more other modules having already been built, as specified in its garden.yml, in which case those modules will be built first, and their build output made available to the requiring module's build step. The top-level unit of organization in Garden. A project consists of one or more modules, along with a project-level garden.yml configuration file. Garden CLI commands are run in the context of a project, and are aware of all its configuration, modules and services. An implementation of a plugin type (e.g. local-kubernetes for the container plugin). Whenever "a module's type" is mentioned in the documentation, what's meant is "which provider will handle this module?" Providers are responsible for implementing a module type's behaviors—e.g. how to build, deploy or test the module. Providers need to be specified for all the module types used in the project. For example, both the local-kubernetes and kubernetes providers ( kubernetes is the provider for remote Kubernetes) implement the container module type, but they handle deployments differently. local-kubernetes deploys to a local cluster, where kubernetes deploys to a remote cluster. For a comprehensive list of providers available in Garden, check out the References The unit of deployment in Garden. Services are defined in their parent module's garden.yml, each exposing one or more ingress endpoints. Services may depend on services defined in other modules, in which case those services will be deployed first, and their deployment output made available to the requiring service's deploy step.
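For orientation, a project laid out this way typically has a project-level garden.yml plus one per module. A minimal container-module sketch might look like the following — the names, port and ingress path are invented, and the exact schema depends on your Garden version:

```yaml
# my-service/garden.yml -- illustrative only
kind: Module
type: container
name: my-service
services:
  - name: my-service
    ports:
      - name: http
        containerPort: 8080
    ingresses:
      - path: /
        port: http
```

The local-kubernetes or kubernetes provider configured in the project-level garden.yml then decides how this container module is built and deployed.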
https://docs.garden.io/reference/glossary
2020-11-23T22:37:17
CC-MAIN-2020-50
1606141168074.3
[]
docs.garden.io
Introduction Order-to-Cash is to the set of business processes from receiving and processing customer sales orders for goods and services to payment. The Order-to-Cash process starts from the Order and completes at Payment received from the customer. The most important artifacts of the Order-to-Cash process are sales orders, goods delivery, billing and payment. Steps in the Order to Cash process There are several variants of the Order-to-Cash process. The SAP Order-To-Cash Connector in UiPath Process Mining is based on the Sales from Stock variant. This is the most standard variant of the main process that consists of the following steps. This thus excludes the process where goods first need to be ordered or manufactured. The process begins with the system receiving an order from the customer. This can be done via different ways, for example, email, a webshop, a sales person, or by some form of Electronic Data Interchange. An order can be a simple purchase request for one particular product or can contain different products and quantities. The order is documented, and the company begins the task of fulfilling the order, which means the order is prepared for shipment to the customer. Afterwards the order is shipped to the customer. Once the product has been shipped and delivered, the most important stage of the cycle begins with regard to cash management. The invoice is created and sent to the customer for payment. The customer pays for the invoice and the payment is logged in your accounting books as part of the accounts receivable against the raised order. Subprocesses Next to the main Sales from Stock process, subprocesses can be identified that involve additional activities. For example the following subprocesses are available in the SAP Order-To-Cash Connector in UiPath Process Mining: If the price calculated for the customer was too high a credit memo request is created. This may happen due to, for example, prices are incorrectly scaled or because a discount was forgotten. When the credit memo request is approved, a credit memo starts a billing process and reduced the amounts receivable from a customer. When the customer wants to send delivered goods back to your company, for example due to a complaint, a corresponding returns document can be created to reproduce the process in the system. If a customer ordered some goods from your company and you are not sure about the customer's commitment, you can request a payment advance from the customer by issuing a down payment request. The rest of the amount will be paid after the goods are delivered. Process insights The above process describes an ideal scenario in which the customer gets the product or service and the company gets paid in time. However, in many cases there will be differences and deviations to this. A significant portion of the operating costs within a company is spent in managing the Order-to-Cash process. The greater the inefficiencies in the process, the greater the risk of a negative impact on the business’s cash inflow. Managing a consistent Order-to-Cash process provides a reliable and healthy company. Order-to-Cash activities impact operations throughout the organization such as inventory management and supply chain management. Optimizing the Order-to-Cash process eliminates inefficiencies and can lead to benefits throughout the entire organization. 
With UiPath Process Mining you get more insight in the actual execution of the Order-to-Cash process, and detailed information to analyze the statuses of orders, deliveries, invoices and payments. Monitoring the Order to Cash process in AppOne With UiPath Process Mining, you get insight in how your Order-to-Cash process actually performs. For example on delivery times or payment times. With AppOne you can easily monitor your Order-to-Cash process to check the progress and quality of the process on a regular basis. If deviations are detected, you can take action to improve or change the process or parts of the process. In AppOne, default KPI’s and Tags are defined, which enable you to keep an overview of the process. With KPI’s (Key Performance Indicators) you can measure for example: - The complete order to cash cycle time; - Number of days between shipment or service and billing; - The on-time delivery percentage; - The average payment period; - Percentage of billing errors; - Percentage of automated invoices. Tags are properties that are important for the Order-to-Cash process and help you to monitor the performance. Tags can indicate inefficiencies like rework, but also violations of your policies or SLA. There are several default Tags defined for the Order-to-Cash process in the SAP Order-to-Cash Connector. Updated 4 days ago
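Outside of AppOne itself, the cycle-time idea behind these KPIs is straightforward to express on a raw event log. The following is a generic illustration only — the table, column and activity names are invented and are not the connector's actual field names:

```sql
-- Average order-to-cash cycle time in days, over an event log with
-- case_id, activity and event_time columns (PostgreSQL syntax).
WITH per_case AS (
    SELECT case_id,
           MIN(CASE WHEN activity = 'Create Sales Order' THEN event_time END) AS order_created,
           MAX(CASE WHEN activity = 'Clear Invoice'      THEN event_time END) AS payment_cleared
    FROM event_log
    GROUP BY case_id
)
SELECT AVG(EXTRACT(EPOCH FROM (payment_cleared - order_created)) / 86400.0) AS avg_cycle_days
FROM per_case
WHERE order_created IS NOT NULL
  AND payment_cleared IS NOT NULL;
```

The other KPIs listed below (on-time delivery, payment period, billing errors) follow the same pattern: pick the relevant pair of activities or attributes per case, then aggregate.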
https://docs.uipath.com/process-mining/docs/order-to-cash-process
2020-11-23T22:58:45
CC-MAIN-2020-50
1606141168074.3
[]
docs.uipath.com
Looking for another feature or improvement in the theme? Or just wish to speak out your mind? Please feel free to write to us! 1.2 29 January 2019 - Add WordPress 5.0 Supported - Add Fully Gutenberg Supported - Add Automatic Install Sliders - Add Automatic Updates - Fix Outdated WooCommerce Files - Fix Portfolio Navigation - Fix Child Theme - Update norwood.pot File - Update Setup Wizard - Update Demo Content - Update All Premium Plugins - Update Documentation - Tweak Removed Envato Market from Requirement Plugins - Tweak Some Improvements 1.1.1 22 December 2018 - Add Houzz Icon to Menu List - Fix Outdated WooCommerce - Fix Single Post / Work - Tweak Some Improvements 1.1 17 November 2018 - Add Visual Portfolio Compatible - Fix Footer Menu for Singular Posts - Tweak Some Improvements 1.0.7 14 November 2018 - Fix Phantom Menu Symbol - Tweak Some Improvements 1.0.6 19 September 2018 - Fix WooCommerce 1.0.5 18 September 2018 - Add New Portfolio Layout - Add One Click Demo Import (as a spare) - Fix HTML Class - Fix Search Icon - Fix Convertation “i” Tag to “svg” - Tweak Disabled Custom Scrollbar (by default) 1.0.4 09 August 2018 - Fix Portfolio Navigation - Fix Blog Navigation 1.0.3 12 July 2018 - Fix Search Icon - Fix Translatable Strings - Fix Child Theme - Fix Tooltip for Partner Shortocde - Update Norwood Helper Plugin - Update Documentation - Tweak Disabled Smooth Effects for Typography - Tweak Some Improvements 1.0.2 28 June 2018 - Add Templatera Import File - Fix Responsive Images - Update Setup Wizard 1.0.1 22 May 2018 - Add Merlin Setup Wizard - Add Revolution Sliders to the Demo Folder - Update Norwood Helper Plugin 1.0 18 April 2018 - Initial Release
https://docs.vlthemes.com/changelog/norwood/
2020-11-23T22:04:11
CC-MAIN-2020-50
1606141168074.3
[]
docs.vlthemes.com
Microsoft speech client samples Microsoft Speech Service provides end-to-end samples showing how to use Microsoft speech recognition API in different use cases, for example command recognition, continuous recognition, and intent detection. All samples are available on GitHub, and can be downloaded by the following links: The README.md in each repository as well as the client libraries page provide details about how to build and run the samples. All Microsoft Cognitive Services SDKs and samples are licensed with the MIT License. For more information, see LICENSE.
https://docs.microsoft.com/en-us/azure/cognitive-services/speech/samples
2018-03-17T12:56:15
CC-MAIN-2018-13
1521257645069.15
[]
docs.microsoft.com
Event ID 1003 — DHCP Client Lease Validity Updated: December 11, 2007 Applies To: Windows Server 2008 Each time a Dynamic Host Configuration Protocol (DHCP) client starts, it requests IP addressing information from a DHCP server, including: - IP address - Subnet mask - Additional configuration parameters, such as a default gateway address, Domain Name System (DNS) server addresses, a DNS domain name, and Windows Internet Name Service (WINS) server addresses When a DHCP server receives a request, it selects an available IP address from a pool of addresses defined in its database (along with other configuration parameters) and offers it to the DHCP client. If the client accepts the offer, the IP addressing information is leased to the client for a specified period of time. Event Details Resolve Start the DHCP Server service On the DHCP server, configure the DHCP Server service to start automatically, and then start the service. To perform these procedures, you must be a member of the Administrators group, or you must have been delegated the appropriate authority. To configure the DHCP Server service to start automatically: - At the DHCP server computer, click Start, click Run, type services.msc, and then click OK. - Double-click DHCP Server. - On the General tab, in the Startup type box, click Automatic, and then click Apply. - Click Start, wait for the progress bar to complete, and then click OK. - On the File menu, click Exit. Verify To verify that the computer has a valid lease: - Lease Validity
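The quickest way to check the lease on the client itself — a general Windows technique rather than text quoted from this article — is with ipconfig:

```shell
REM On the DHCP client, show the leased address, subnet mask, DHCP server and lease times
ipconfig /all

REM If the lease looks stale or invalid, release it and request a fresh one
ipconfig /release
ipconfig /renew
```

The "Lease Obtained" and "Lease Expires" lines in the ipconfig /all output confirm whether the client currently holds a valid lease from the DHCP server.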
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc774831(v=ws.10)
2018-03-17T13:03:12
CC-MAIN-2018-13
1521257645069.15
[array(['images/dd300143.red%28ws.10%29.jpg', None], dtype=object)]
docs.microsoft.com
Unwrap by changing deltas between values to 2*pi complement. Unwrap radian phase p by changing absolute jumps greater than discont to their 2*pi complement along the given axis. Notes If the discontinuity in p is smaller than pi, but larger than discont, no unwrapping is done because taking the 2*pi complement would only make the discontinuity larger. Examples >>> phase = np.linspace(0, np.pi, num=5) >>> phase[3:] += np.pi >>> phase array([ 0. , 0.78539816, 1.57079633, 5.49778714, 6.28318531]) >>> np.unwrap(phase) array([ 0. , 0.78539816, 1.57079633, -0.78539816, 0. ])
http://docs.scipy.org/doc/numpy-1.4.x/reference/generated/numpy.unwrap.html
2015-06-29T23:13:05
CC-MAIN-2015-27
1435375090887.26
[]
docs.scipy.org
Spinnaker™ Architecture
Armory Enterprise architecture
Armory Enterprise is an enterprise version of open source Spinnaker. It is composed of several microservices for resiliency and follows the single-responsibility principle. It allows for faster iteration on each individual component and a more pluggable architecture for custom components.
Armory Enterprise microservices
Clouddriver
Clouddriver is a core component of Armory Enterprise and facilitates the interaction with a given cloud provider such as AWS, GCP or Kubernetes. There is a common interface that is used so that additional cloud providers can be added.
Deck
Deck is the UI for interacting with and visualizing the state of cloud resources. It depends on Gate to interact with the cloud providers.
Echo
Echo is the service for Spinnaker which manages notifications, alerts and scheduled pipelines (Cron). It can also propagate these events out to other REST endpoints such as Elasticsearch, Splunk's HTTP Event Collector or a custom event collector/processor.
Fiat
Fiat is the microservice responsible for authorization (authz) for the other microservices. By default, it is not enabled, so users are able to perform any action in Armory Enterprise.
Front50
Front50 is the persistent datastore for Spinnaker, most notably pipelines, configurations, and jobs.
Igor
Igor is a wrapper API which communicates with Jenkins. It is responsible for kicking off jobs and reporting the state of running or completed jobs.
Kayenta
Kayenta is Spinnaker's canary analysis service, integrating with 3rd party monitoring services such as Datadog or Prometheus.
Orca
Orca is responsible for the orchestration of pipelines, stages, and tasks within Armory Enterprise. Orca acts as the "traffic cop" within Armory Enterprise, making sure that sub-services, their executions and states are passed along correctly. The smallest atomic unit within Orca is a task; stages are composed of tasks and pipelines are composed of stages.
Rosco
Rosco is the "bakery" service. It is a wrapper around Hashicorp's Packer command line tool which bakes images for AWS, GCP, Docker, Azure, and other builders.
Armory Enterprise proprietary microservices
Armory Agent for Kubernetes
The Armory Agent is a lightweight, scalable service that monitors your Kubernetes infrastructure and streams changes back to the Clouddriver service.
Dinghy
Dinghy is the service behind Armory's Pipelines-as-Code feature, which lets you keep pipeline definitions in source control.
Policy Engine
Terraformer
Terraformer is the microservice behind Armory's Terraform Integration. It allows Armory to natively use your infrastructure-as-code Terraform scripts as part of a deployment pipeline.
Installation and management
Armory Operator
The Armory Operator is a Kubernetes Operator that makes it easy to install, deploy, and upgrade Armory Enterprise.
Armory Halyard
Armory-extended Halyard is a versatile command line interface (CLI) to configure and deploy Armory Enterprise in Kubernetes or any cloud environment.
Last modified September 21, 2021: (bafa325)
https://docs.armory.io/docs/overview/architecture/
2021-11-27T01:50:13
CC-MAIN-2021-49
1637964358078.2
[array(['https://d33wubrfki0l68.cloudfront.net/82daf9f0e4d26cf5f00ca0d7d4176719c28eb6d7/8fbc7/images/overview/spinnakerarchitecture.png', 'Architecture Diagram'], dtype=object) ]
docs.armory.io
Start and stop SharePoint 2010 Timer Service
To stop and restart the SharePoint 2010 Timer service for Community Central:
1. Navigate to Start > Administrative Tools > Services.
2. Select the SharePoint 2010 Timer service.
3. Stop and restart the timer using the options at the top left of the window or by right-clicking on SharePoint 2010 Timer.
Check the timer job(s) again to make sure they are running and that the community, blog, and forum sites display current data.
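If you prefer to script the restart instead of using the Services console, the same result can be achieved from an elevated PowerShell prompt. This is a minimal sketch, assuming the SharePoint 2010 Timer service is registered under its usual service name, SPTimerV4:

  # Restart the SharePoint 2010 Timer service
  Restart-Service -Name SPTimerV4

  # Confirm the service is running again
  Get-Service -Name SPTimerV4

After the restart, re-check the timer jobs as described above.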
https://docs.bamboosolutions.com/document/start_and_stop_timer_jobs/
2021-11-27T02:02:43
CC-MAIN-2021-49
1637964358078.2
[]
docs.bamboosolutions.com
How to configure backups in ISPmanager
A backup copy is a copy of all websites, databases and user mailboxes. Backups allow the following:
- restoring information in case of problems in the operation of the website;
- restoring the website when moving from server to server;
- saving data in case of possible server failures, software failures, hardware problems, etc.
Backups can be stored on the server with ISPmanager or in an external storage. The following can be used as the external storage:
- Dropbox;
- Google Drive;
- Amazon S3;
- S3-compatible storage;
- FTP server;
- SFTP server (with connection via SSH).
By default, backups for all users are created automatically once a day. You can customize the backup schedule. Read more about the operating principle of backups in Backup system: general information.
Note
Backups are not performed for directories and files located on mounted devices. Backups are not performed for directories and files that are symbolic links.
To manage backups, enter Tools → Backup copies.
Configuring backup schedule
When you open the Backup copies section for the first time, ISPmanager offers to configure backup settings. Press OK to enter the settings.
Activate the Enable backups option to run the backup process with the specified settings.
Note
In order to enable/disable backups for a specific user, you need to enable/disable this option in the user settings.
- Select the place to store backups in the Storage type field.
- Specify the Backup copy password.
Specify the settings for the selected storage type:
Local directory
- Path to directory — the directory on the server where the backups will be saved.
Dropbox
- Access code — access code to Dropbox. You can follow the link and log in to Dropbox. After that, the field will be filled in automatically.
- Path to backup directory — the directory in Dropbox where the backups will be saved.
Google Drive
- Access code — access code to Google Drive. You can follow the link and log in to Google Drive. After that, the field will be filled in automatically.
- Path to backups — the directory in Google Drive where the backups will be saved.
Amazon S3
- Key identifier — access key identifier.
- Secret key — secret access key.
- Bucket — Amazon S3 container name for storing backups. Read more about Amazon S3 settings in the official documentation.
S3-compatible storage
- Storage URL — URL for API requests to the storage.
- Key identifier — access key identifier.
- Secret key — secret access key.
- Bucket — container name for storing backups.
- Bucket addressing model:
  - subdomain — the URL of the form http[s]://bucket.host[:port][/path] will be used to access the bucket.
  - URL-path — the URL of the form http[s]://host[:port][/path]/bucket/ will be used to access the bucket.
FTP server
- Server address — domain name or IP address of the server.
- FTP port — connection port. Default value — 21.
- Path to backup directory — the directory on the server where the backups will be saved.
- User — FTP user name.
- Password — FTP user password.
SFTP server (over SSH)
- Server URL — domain name or IP address of the server.
- SSH port — connection port. Default value — 22.
- Path to backup directory — the directory on the server where the backups will be saved.
- Authorization type — type of authorization: password or SSH key. In case of password authorization, ISPmanager will generate a key that will be used to access the remote server.
- Username — SSH user name.
- Password — SSH user password.
- Private key — content of the private SSH key.
- In the Backup servers field, select the cluster nodes that will be used to archive backups.
- Set Limits on backup creation:
Total size in bytes. You can specify a unit of measurement in this field. E.g., 100Mib.
Note
- For local storage, the limit applies to each node of the cluster separately. If this value is exceeded, the oldest backups will be deleted;
- You can leave this field blank; in that case the backups will be stored until the storage runs out of space;
- You can limit the total number of backups using the BackupCountLimit configuration file parameter. The default value of the parameter is 14 (7 daily and 7 weekly backups).
- Maximum number of Full backup copies. A full backup contains all user data. It is created the first time you run a backup and on Sundays.
- Maximum number of Daily backup copies. The daily backup contains changes in user data from the last day. It is created daily, except on Sundays. Read more in Backup system: general information.
In the Exclude files field, specify which files should not be included in the backup. Each exception has to be specified on a new line.
Note
- File paths are set relative to the user's home directory (default is /var/www/username/). E.g., data/.filemgr-tmp;
- You can use the * symbol to replace any characters in the file name.
- In the Exclude databases field, specify which databases should not be included in the backup. Each database has to be specified on a new line.
- Press OK.
Configuring backup parameters
To change the settings, enter Tools → Backup copies → Settings button.
Creating a backup manually
To create a backup manually:
- Log in under a user account: Users → select a user → Log in as user button.
- Enter Backup copies → New button.
ISPmanager will create a backup and download it to your computer in the tar.gz archive format.
Note
You can create backups this way no more than once an hour. If you press the New button again within an hour after creating the first backup, ISPmanager will download the same archive as the first one.
Restoring data from a backup
Recovery of a user and all user's data
To restore user data from a backup, enter Tools → Backup copies → select the backup → Details button → select the user → Restore button → OK. When the data are restored, the message "Backup restore has been completed successfully" will appear in the ISPmanager interface.
Note: The existing files are not overwritten. Before the recovery, delete the database with the same name from the server. Otherwise, ISPmanager will add the files to the existing database rather than recover them from a backup.
Recovery of a deleted user
You can recover a deleted user from a backup under a different name. Enter Tools → Backup copies → select the backup → Details button → select the user → Restore as button → specify the User name to which the data will be restored from the backup or Create user with a new name → Ok. In this case, ISPmanager will not restore matching entities. In addition, the backups created under the old name will not be available to the user.
Note: If you recover a deleted user from an earlier backup, the data contained in subsequent backups will not be available to the user. For example, a user was deleted on March 10, but that user's backups for January and February are available. After restoring the user from the January 15 backup, the backups created later than that date will not be displayed to the user; i.e. the user will not be able to access the backups for the period from January 15 to March 10.
Restoring individual files To restore individual files from a user backup: - Log in with a user account: Accounts → Users → select the user → Log in button. - Open the user's backup: Tools → Backup copies → select the copy → Details button. - Select the data type — Databases , Mail , Files . - Select the required files. - Click the Restore button to restore files from the backup. When the data are restored, the message "Backup restore has been completed successfully" will appear in the ISPmanager interface. Downloading the backup To download one of the backups to the local computer, enter Tools → Backup copies → select the backup → Details button → select the user → Download button. The backup will be downloaded as a tar archive with the file name of YYYY-MM-DD-user.tar.gz. YYYY-MM-DD — backup creation date user — user name
https://docs.ispsystem.com/ispmanager6-business/backup-system/backup-copies
2021-11-27T02:29:17
CC-MAIN-2021-49
1637964358078.2
[]
docs.ispsystem.com
Introduction
MySQL is a free database management system. ISPmanager 6 Lite (Pro, Host) enables you to install several alternative MySQL versions on a single server. This new feature is based on Docker container virtualization.
System requirements
Supported operating systems: CentOS 7, Ubuntu 18, Debian 8 and later. OpenVZ and LXC virtualizations are not supported. Docker requires at least 2 GB of RAM for correct operation.
Installing an alternative MySQL server
To set up a MySQL server navigate to Settings → Database servers, and click Add. On the server creation form, select a MySQL version for setup. If you select MySQL you will be able to select an action for this server. You can:
- Connect the existing local or remote server
- Install a new local MySQL server
The following versions are currently supported:
- MySQL 5.5
- MySQL 5.6
- MySQL 5.7
- MySQL 8.0
- Mariadb 10.0
- Mariadb 10.1
- Mariadb 10.2
- Mariadb 10.3
- Mariadb 10.4
- Mariadb 10.5
Enter a name for the new server that will be displayed in the control panel and the root password.
Note: by default, the newly created server will listen on IP address 127.0.0.1 (localhost). To make it accessible from the outside, select the checkbox Accessible from outside. The server will then listen on 0.0.0.0. Once completed, click Ok.
If you want to use this server for the installation of APS-scripts, select the "Install APS" checkbox. You can change a server for scripts in the list of servers by clicking the "Default server" button. Once you are done, click OK.
If Install APS is not set for your servers, scripts will be installed from the local server. If it is not specified, from the first server on the list.
Creating a database
After your server is successfully installed and configured it will be added into the list of servers. Navigate to Tools → Databases and select the newly created server in the database creation form.
Technology
All alternative MySQL servers are deployed within containers (isolated environments). Data from each container are kept in a separate directory /var/lib/server_name. Docker is responsible for the creation and management of containers, and MySQL versions are kept in its repositories. All MySQL versions will be used for the setup of the local MySQL server. You should not delete those versions.
Container setup procedure:
- A directory is created for the container (/var/lib/server_name)
- A selected MySQL version is uploaded from the repository
- A free port is selected for the container (the first free port starting from 3310)
- The server is configured, and the root password is set up.
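For readers curious about what the panel does behind the scenes, the following is a rough sketch of the kind of docker run invocation this container setup corresponds to. The server name, host directory, port and password below are illustrative assumptions for the sketch, not values ISPmanager necessarily uses:

  # Illustrative only: run a MySQL 5.7 container with its data kept on the host.
  # Port 3310 mirrors the "first free port starting from 3310" rule; the name,
  # path and password are placeholders.
  docker run -d \
    --name my_mysql57_server \
    -v /var/lib/my_mysql57_server:/var/lib/mysql \
    -p 127.0.0.1:3310:3306 \
    -e MYSQL_ROOT_PASSWORD='change-me' \
    mysql:5.7

  # To make the server reachable from outside, publish the port on all interfaces instead:
  #   -p 0.0.0.0:3310:3306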
https://docs.ispsystem.com/ispmanager6-lite/database-management-systems/alternative-mysql-versions
2021-11-27T02:36:15
CC-MAIN-2021-49
1637964358078.2
[]
docs.ispsystem.com
Reporting Bugs
Krita is, together with many other projects, part of the KDE community. Therefore, bugs for Krita are tracked in KDE's bug tracker: KDE's bug tracker. The bug tracker is a tool for Krita's developers to help them manage bugs in the software, prioritize them and plan fixes. It is not a place to get user support! The bug tracker contains two kinds of reports: bugs and wishes. Bugs are errors in Krita's code that interrupt using Krita. Wishes are feature requests: the reporter thinks some functionality is missing or would be cool to have. Do not just create a feature request in the bug tracker: follow Feature Requests to learn how to create a good feature request. This guide will help you create a good bug report. If you take the time to create a good bug report, you have a much better chance of getting a developer to work on the issue. If there is not enough information to work with, or if the bug report is unreadable, a developer will not be able to understand and fix the issue.
Only Report Bugs
A bug is a problem in Krita's code. If you have problems with your drawing tablet, for instance no support for pressure, then that is unlikely to be a problem in Krita's code: it is almost certain to be a problem with your setup or the driver for your tablet. If you've lost the toolbox, that's not a bug. If you have deleted your work, that is not a bug. If Krita works differently from another application, that's not a bug. If Krita works differently than you expected, that's not a bug. If Krita is slower than you expected, that's not a bug. If Krita crashes, that's a bug. If a file you save comes out garbled, that's a bug. If Krita stops working, that's a bug. If Krita stops displaying correctly, that's a bug.
Check the FAQ
If you've got a problem with Krita, first check the FAQ. See whether your problem is mentioned there. If it is, apply the solution.
Ask on Krita Artists or IRC Chat Channel
If uncertain, ask us in the chatroom "#krita" via Matrix. An introduction to Matrix is given here. Create an account on the kde.org Matrix server and join the #krita:kde.org channel. Or ask a question on the Krita Artists forum. Krita's chat channel is maintained on Libera.Chat. Developers and users hang out to discuss Krita's development and help people who have questions.
Important
Most Krita developers live in Europe, and the channel is very quiet when it's night in Europe. You also have to be patient: it may take some time for people to notice your question even if they are awake. Also … Krita does not have a paid support staff. You will chat with volunteers, users and developers. It is not a help desk. But you can still ask your question, and the people in the channel are a friendly and helpful lot.
Use the Latest Version of Krita
Check Krita's website to see whether you are using the latest version of Krita. There are two "latest" versions:
Latest stable: check the Download page. Always try to reproduce your bug with this version.
Stable and Unstable Nightly builds: The stable nightly build is built from the last release plus all bug fixes done since the last release. This is called Krita Plus. The unstable nightly build contains new features and is straight from the development branch of Krita. This is called Krita Next. You can download these builds from the Download page.
Be Complete and Be Completely Clear
Give all information. That means that you should give information about your operating system, hardware, the version of Krita you're using and, of course, about the problem.
Open the bug tracker. If you do not have an account yet, create one. In the New Bug form, fill in the following fields:
Component: if you experience an issue when running a filter, select Filters. If you don't know the component, select "* Unknown".
Version: select the correct version. You can find the version of Krita in the Help > About Krita dialog.
Severity: if you have experienced a crash, select "crash". If you are making a feature request, select "wish". Otherwise, "normal" is correct. Do not select "major" or "grave", not even if you feel the issue you are reporting is really important.
Platform: select from the combobox the platform you run Krita on, for instance "Microsoft Windows".
OS: this is probably already correctly preselected. (If you're wondering why there are two fields that have more or less the same meaning, it's because "Platform" should allow you to select between Windows Installer, Windows Portable Zip File, Windows Store or Steam; it's a bug in bugzilla that it doesn't have those options.)
Summary: a one line statement of what happened, like "Krita crashes when opening the attached PSD file".
Description: this is the most important field. Here you need to state very clearly: what happened, what had you expected to happen instead, how the problem can be reproduced. Give a concise and short description, then enumerate the steps needed to reproduce the problem. If you cannot reproduce the problem, and it isn't a crash, think twice before making the report: the developers likely cannot reproduce it either. The template here is used for all projects in the KDE community and isn't especially suitable for Krita.
Attachments
In all cases, attach the contents of the Help > Show system information for bug reports dialog to the bug report.
Your file
If at all possible, attach your original Krita file (the one that ends in .kra) to the bug report, or if it's too big, add a link for download. If you do that, make sure the file will be there for years to come: do not remove it. If the problem is with loading or saving a file in another format, please attach that file.
A video
If you think it would be useful, you can also attach or link to a video. Note that the Krita developers and bug triagers are extremely busy, and that it takes less time to read a good description and a set of steps to reproduce than it takes to watch a video for clues for what is going on. When making a video or a screenshot, include the whole Krita window, including the titlebar and the statusbar.
If you are reporting a crash, attach a crash log. On Windows, you will find a kritacrash.log file in the local AppData folder. On Linux, follow your distribution's instructions to install debug symbols if you have installed Krita from a distribution package. It is not possible to create a useful crash log with Linux AppImages.
After You Have Filed the Report
After you have filed your bug, mail will be sent out to all Krita developers and bug triagers. You do not have to go to the chat channel and tell us you created a bug. When a developer decides to investigate your report, they will start adding comments to the bug. There might be additional questions: please answer them as soon as possible. When the developer has come to a conclusion, they will resolve the bug. That is done by changing the resolution status in the bug tracker. These statuses are phrased in developer speak, that is to say, they might sound quite rude to you. There's nothing that we can do about that, so do not take it personally.
The bug reporter should never change the status after a developer changed it. These are the most used statuses: Unconfirmed: your bug has not been investigated yet, or nobody can reproduce your bug. Confirmed: your bug is a bug, but there is no solution yet. Assigned: your bug is a bug, someone is going to work on it. Resolved/Fixed: your bug was a genuine problem in Krita’s code. The developer has fixed the issue and the solution will be in the next release. Duplicate: your bug has been reported before. Needinfo/WaitingForInfo. You need to provide more information. If you do not reply within a reasonable amount of time the bug will be closed automatically. Resolved/Not a Bug: your report was not about a bug: that is, it did not report something that can be fixed in Krita’s code. Resolved/Upstream: the issue you observed is because of a bug in a library Krita uses, or a hardware driver, or your operating system. We cannot do anything about it. Resolved/Downstream: Only on Linux. The issue you observed happens because your Linux distribution packages Krita in a way that causes problems. See also our chapter on Bug Triaging
https://docs.krita.org/en/untranslatable_pages/reporting_bugs.html
2021-11-27T02:39:15
CC-MAIN-2021-49
1637964358078.2
[array(['../_images/bugzilla_simple.png', "the bug tracker's new bug form, advanced fields hidden"], dtype=object) ]
docs.krita.org
No. What you can do is periodically list the owners and remove any "excessive" entries.
@AMaDAC-0347, I agree with michev. There is no setting to control this number. The only limit on the number of owners per team is 100. I looked through the documentation and didn't find any PowerShell command that can do that. You can use PowerShell to Add-TeamUser, Remove-TeamUser, or Get-TeamUser.
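To make the "list and trim" approach concrete, here is a rough PowerShell sketch using the MicrosoftTeams module cmdlets mentioned above. The group ID placeholder, the threshold of 3, and the choice of which owners to demote are illustrative assumptions, not a Microsoft-provided policy:

  # Connect-MicrosoftTeams must have been run first
  $groupId = '<your-team-group-id>'   # placeholder
  $maxOwners = 3                      # assumed threshold

  $owners = Get-TeamUser -GroupId $groupId -Role Owner
  if ($owners.Count -gt $maxOwners) {
      # Demote the surplus owners back to regular members (they stay in the team)
      $owners | Select-Object -Skip $maxOwners | ForEach-Object {
          Remove-TeamUser -GroupId $groupId -User $_.User -Role Owner
      }
  }

Run on a schedule, a script like this keeps the owner list at or below the size you choose.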
https://docs.microsoft.com/en-us/answers/questions/534016/can-we-set-maximum-number-of-teams-owner-to-certai.html
2021-11-27T03:42:01
CC-MAIN-2021-49
1637964358078.2
[array(['/answers/storage/attachments/127853-ms-teams-ownership.png', '127853-ms-teams-ownership.png'], dtype=object) ]
docs.microsoft.com
Kestrel Server Limits. Max Concurrent Connections Property Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Gets or sets the maximum number of open connections. When set to null, the number of connections is unlimited. Defaults to null. public: property Nullable<long> MaxConcurrentConnections { Nullable<long> get(); void set(Nullable<long> value); }; public long? MaxConcurrentConnections { get; set; } member this.MaxConcurrentConnections : Nullable<int64> with get, set Public Property MaxConcurrentConnections As Nullable(Of Long) Property Value Remarks When a connection is upgraded to another protocol, such as WebSockets, its connection is counted against the MaxConcurrentUpgradedConnections limit instead of MaxConcurrentConnections.
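As a usage illustration (not part of the reference page itself), this limit is typically set when configuring Kestrel at host startup; the value 100 below is an arbitrary example:

  // Minimal sketch for ASP.NET Core 5.0 (using Microsoft.AspNetCore.Hosting and Microsoft.Extensions.Hosting)
  public static IHostBuilder CreateHostBuilder(string[] args) =>
      Host.CreateDefaultBuilder(args)
          .ConfigureWebHostDefaults(webBuilder =>
          {
              webBuilder.ConfigureKestrel(serverOptions =>
              {
                  // Cap the number of open connections; null (the default) means unlimited.
                  serverOptions.Limits.MaxConcurrentConnections = 100;

                  // Upgraded connections (e.g. WebSockets) count against this separate limit.
                  serverOptions.Limits.MaxConcurrentUpgradedConnections = 100;
              });
              webBuilder.UseStartup<Startup>();
          });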
https://docs.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.server.kestrel.core.kestrelserverlimits.maxconcurrentconnections?view=aspnetcore-5.0
2021-11-27T03:43:17
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
RGNDATA structure (wingdi.h)
The RGNDATA structure contains a header and an array of rectangles that compose a region. The rectangles are sorted top to bottom, left to right. They do not overlap.
Syntax
typedef struct _RGNDATA {
  RGNDATAHEADER rdh;
  char          Buffer[1];
} RGNDATA, *PRGNDATA, *NPRGNDATA, *LPRGNDATA;
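As an illustrative sketch (not part of the reference page), an RGNDATA buffer is usually obtained from an existing region with GetRegionData, after which the rectangles are walked using the rdh.nCount field:

  // Sketch: enumerate the rectangles of a region (error handling kept minimal).
  #include <windows.h>
  #include <vector>

  void EnumerateRegionRects(HRGN hrgn)
  {
      // First call asks for the required buffer size in bytes.
      DWORD size = GetRegionData(hrgn, 0, nullptr);
      if (size == 0) return;

      std::vector<BYTE> buffer(size);
      RGNDATA* data = reinterpret_cast<RGNDATA*>(buffer.data());
      if (GetRegionData(hrgn, size, data) == 0) return;

      const RECT* rects = reinterpret_cast<const RECT*>(data->Buffer);
      for (DWORD i = 0; i < data->rdh.nCount; ++i)
      {
          const RECT& r = rects[i];
          // r.left, r.top, r.right, r.bottom describe one non-overlapping rectangle.
      }
  }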
https://docs.microsoft.com/en-us/windows/win32/api/wingdi/ns-wingdi-rgndata
2021-11-27T04:02:08
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
Maintainer Notes
This section maps out development logistics for the core team of OpenDP developers.
Summary
Base all code on a long-lived branch called main, where we enforce a linear history. Do development on short-lived feature branches, which are merged to main with squash commits. Generate new versions via GitHub Releases, using long-lived release branches.
Rationale
Our process should be as simple as feasible – but not simpler! We need to balance developer friendliness with the special requirements of privacy-sensitive software. We want a clean main branch that is always in a known good state. It should always be usable for development, and all tests should pass. This allows us to create releases easily, using the most up-to-date code. We need linear code history, without unanticipated changes introduced by merge commits. This is important so that contributions are validated using the exact code that will land on main. We need separate release branches to allow bug fixes for previous versions. This is important to support library users who can't upgrade to the latest version of OpenDP yet. All release tasks should be automated and drivable from GitHub. Creating a release should not require setup of a local environment, and should not have any dependencies on non-standard tools. This is important to allow for delegation of tasks and continuity of the project.
Task Tracking
Use GitHub Issues to track all tasks. This is helpful to know who's working on what. Use the OpenDP Development GitHub Project to organize work and prioritize tasks for development. Manage all changes using GitHub Pull Requests.
Code Hygiene
Follow the Rust guidelines for coding style. Code should be formatted using the default settings of rustfmt. (TODO: Automatic marking of style issues on PRs –) Write API docs for all public and significant private APIs. We use inline comments to explain complicated code. Make sure main is always in a good state: Code compiles, tests pass. If main is ever broken, it should be the team's top priority to fix it. Use GitHub Actions to check PRs automatically, and don't allow merges if checks fail.
Branching Strategy
Use a single, long-lived branch named main for core project history. Maintain a linear history on main, meaning every commit is based only on the previous commit. Don't do merge commits onto main. Do development on short-lived feature branches, derived from main. Feature branches have the naming scheme <nnn>-<short-desc>, where <nnn> is the number of the GitHub issue tracking this task, and <short-desc> is a short description of the change. For instance, 123-new-measurement. Manage all changes using GitHub PRs from feature branches onto main. Check for test success and do code reviews before approving PRs. To maintain linear history, require PRs to be up to date with main. This means that developers may need to rebase feature branches periodically. Try to keep PRs relatively small and self contained. This simplifies code reviews, and reduces the likelihood of rebasing hassles. Generally, squash feature branches when merging PRs, so that there's a 1-to-1 correspondence between issues/PRs and commits.
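As a concrete illustration of the branching strategy just described (the issue number, branch name, and remote are placeholders, not project-mandated values):

  # Start a feature branch for issue 123 off the tip of main
  git switch -c 123-new-measurement main

  # ... commit work, then keep the branch current so the PR stays up to date with main
  git fetch origin
  git rebase origin/main

  # Push (force-with-lease is needed after rebasing an already-pushed branch)
  git push --force-with-lease origin 123-new-measurement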
To enforce this strategy, use the following branch protections on main:
- Require pull request reviews before merging
- Dismiss stale pull request approvals when new commits are pushed
- Require status checks to pass before merging
- Require branches to be up to date before merging
- Require linear history
Because this is the real world, allow for exceptions to these rules in case of excessive misery!
Release Process
Overview
For every release, designate a Release Manager. This person is charged with performing the key tasks of the release process. Responsibility for this should be rotated to avoid burnout. Use semantic versioning to identify all releases: Golden master (GM) releases have a semantic version of the form <MAJ>.<MIN>.<PAT>. For example, 1.2.0. Release candidate (RC) releases have a semantic version of the form <MAJ>.<MIN>.<PAT>-rc.<NUM>, where <NUM> starts at 1. For example, 1.2.0-rc.1. The versions of the Rust crates and the Python package (and any other language bindings) are always kept in sync, even if there are no changes in one or the other. For example, version 1.2.0 will comprise Rust crates opendp 1.2.0 and opendp-ffi 1.2.0, and Python package opendp 1.2.0. For major and minor releases, create a new release branch. Release branches have the naming scheme release/<MAJ>.<MIN>.x (where x is literal "x"). For example, version 1.2.0 → branch release/1.2.x. Release branches remain alive as long as that minor version is supported. (Note: Release branch names don't contain the -rc.<NUM> suffix, even when doing an RC release. All RC releases and the GM release use the same branch.) For patch releases, don't create a new release branch. Use the existing branch for the corresponding major or minor version. For example, version = 1.2.1 → branch release/1.2.x. For all releases, create a tag. Tags have the naming scheme v<MAJ>.<MIN>.<PAT>[-rc.<NUM>]. For example, for version = 1.2.0, tag = v1.2.0. (Note: Tag names do contain the -rc.<NUM> suffix, when doing an RC release.) Use RC releases to validate the system end-to-end before creating the GM release. There should be at least one successful RC release before creating the GM release. Use a GitHub Release to initiate each OpenDP release. This will run the GitHub Workflows that handle the build and publish process (see below).
Playbook
Identify names: Update CHANGELOG.md on main (based on Keep a Changelog).
Create/update the release branch:
Major or minor release ONLY: Create a new release branch, based on the desired point in main.
Patch release ONLY: Use the existing branch from the previous major or minor release, and cherry-pick changes from main into the release branch.
Set the RC number to 1.
Specify the version for this iteration: <MAJ>.<MIN>.<PAT>[-rc.<NUM>]
Update the version field(s) in the following files:
- VERSION
- rust/opendp/Cargo.toml
- rust/opendp-ffi/Cargo.toml (two entries!!!)
- python/setup.cfg
- docs/source/conf.py
Commit the version number changes to the release branch.
Create a GitHub Release with the following parameters:
- Tag version v<MAJ>.<MIN>.<PAT>[-rc.<NUM>]
- Target release/<MAJ>.<MIN>.<PAT>[-rc.<NUM>]
- Release title OpenDP <MAJ>.<MIN>.<PAT>[-rc.<NUM>]
- Describe this release (Changelog)[<MAJ><MIN><PAT>—<ISO-8601-DATE>]
- This is a pre-release <CHECKED IF RC>
- Create a discussion… <UNCHECKED>
Build and publish process is triggered by the creation of the GitHub Release. If this is a GM release, you're done! If this is an RC release, download and sanity check the Rust crates and Python package.
(TODO: Release validation scripts –) If fixes are necessary, do development on regular feature branches and merge them to main, then cherry-pick the fixes into the release branch. Increment the RC number. Return to Step 4.
Release Workflows
These are the GitHub workflows that support the release process.
sync-branches.yml
Keeps the tracking branches latest and stable in sync with their targets. This is used when generating docs, so that we have a consistent path to each category. Triggered on every push to main, or when a release is published. Whenever there's a push to main, it advances latest to the same ref. Whenever a release is created, it advances stable to the release tag.
release.yml
Triggered whenever a GH Release is created. The Rust library is compiled, creating shared libraries for Linux, macOS, Windows. The Python package is created. Rust crates are uploaded to crates.io. Python packages are uploaded to PyPI.
docs.yml
Generates and publishes the docs to docs.opendp.org. Triggered whenever sync-branches.yml completes (i.e., whenever latest or stable have changed). Runs make versions. Generates Python API docs. Generates Sphinx docs. Pushes HTML to the gh-pages branch, which is linked to docs.opendp.org.
https://docs.opendp.org/en/v0.2.3/developer/maintainer-notes.html
2021-11-27T02:17:07
CC-MAIN-2021-49
1637964358078.2
[]
docs.opendp.org
Resumable Pipelines suspend the flow of data when an endpoint becomes inaccessible. If an exception disables a target endpoint, the Resumable Pipeline's execution state is saved in the Snaplex nodes. After restoring connectivity to the target endpoint, you can resume the suspended Pipeline starting at the point of failure, so that successfully processed documents are not processed again. Every Snap in a Resumable Pipeline runs to completion before any of its output documents are passed to the next downstream Snap. In contrast, a Snap in a standard Pipeline passes the document to the next downstream Snap, possibly even before the first Snap completes its document processing. The SnapLogic Monitoring Dashboard displays the status of Pipelines. If a Pipeline is suspended, the Snaps that have completed execution are displayed in green, and the remaining Snaps are displayed in orange. You can resume a suspended Pipeline from the SnapLogic Dashboard. Your Org must be subscribed to Resumable Pipelines to use this feature. In SnapLogic Designer, select the target Pipeline and open the Properties menu. You can enable Resumable Mode in existing Pipelines.
https://docs-snaplogic.atlassian.net/wiki/plugins/viewsource/viewpagesrc.action?pageId=721944618
2021-11-27T02:54:55
CC-MAIN-2021-49
1637964358078.2
[]
docs-snaplogic.atlassian.net
Bake and Share Amazon Machine Images Across Accounts
Overview of sharing AMIs across accounts
In many environments, Spinnaker™ runs under a different AWS account than the target deployment account. This guide shows you how to configure Spinnaker to share an AMI created where Spinnaker lives with the AWS account where your applications live. This guide assumes that AWS roles are already properly set up for talking to the target account.
Spinnaker configuration for sharing baked AMIs
You can add the following snippet to your SpinnakerService manifest and apply it after replacing the example values with ones that correspond to your environment. The example adds an AWS account and configures the baking service (Rosco) with default values:
apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    config:
      aws:
        enabled: true
        accounts:
          - name: my-aws-account
            requiredGroupMembership: []
            providerVersion: V1
            permissions: {}
            accountId: 'aws-account-id' # Use your AWS account id
            regions: # Specify all target regions for deploying applications
              - name: us-west-2
            assumeRole: role/SpinnakerManagedProfile # Role name that worker nodes of the Spinnaker cluster can assume in the target account to make deployments and scan infrastructure
        primaryAccount: my-aws-account
        bakeryDefaults:
          baseImages: []
          defaultKeyPairTemplate: '{{name}}-keypair'
          defaultRegions:
            - name: us-west-2
          defaults:
            iamRole: BaseIAMRole
      ... # Config omitted for brevity
    service-settings:
      rosco:
        env:
          SPINNAKER_AWS_DEFAULT_REGION: "us-west-2" # Replace by default bake region
          SPINNAKER_AWS_DEFAULT_ACCOUNT: "target-aws-account-id" # Target AWS account id
    ... # Config omitted for brevity
First, add the AWS provider account with Halyard. Next, make sure to enable the AWS provider:
hal config provider aws enable
Then, add a rosco.yml file under ~/.hal/default/service-settings/ that contains the following snippet:
env:
  SPINNAKER_AWS_DEFAULT_REGION: "YOUR_DEFAULT_REGION"
  SPINNAKER_AWS_DEFAULT_ACCOUNT: "YOUR_DEFAULT_AWS_ACCOUNT_ID"
SPINNAKER_AWS_DEFAULT_ACCOUNT is the target account ID.
Spinnaker pipeline Bake stage configuration
Make sure to check the Show Advanced Options checkbox. Then where it says Template File Name use aws-multi-ebs.json as the value. Then add an Extended Attribute. Have the key be share_with_1 and the value be the target AWS account ID that was used for SPINNAKER_AWS_DEFAULT_ACCOUNT. share_with_1 maps to ami_users inside Packer. You can also copy the resulting AMI to different regions by overriding the copy_to_1 values. These match up to ami_regions inside Packer.
Last modified January 25, 2021: (1b76da5)
https://docs.armory.io/docs/armory-admin/bake-and-share/
2021-11-27T03:20:45
CC-MAIN-2021-49
1637964358078.2
[array(['https://d33wubrfki0l68.cloudfront.net/284fc1dfc76e82a7c545a26eba39ec35308306b8/dd96a/images/bake-and-share-1.png', 'Bake Stage'], dtype=object) ]
docs.armory.io
The goal of create_report is to generate profile reports from a pandas DataFrame. create_report utilizes the functionalities and formats the plots from dataprep. It provides the following information:
Overview: detect the types of columns in a dataframe
Variables: variable type, unique values, distinct count, missing values
Quantile statistics like minimum value, Q1, median, Q3, maximum, range, interquartile range
Descriptive statistics like mean, mode, standard deviation, sum, median absolute deviation, coefficient of variation, kurtosis, skewness
Text analysis for length, sample and letter
Correlations: highlighting of highly correlated variables, Spearman, Pearson and Kendall matrices
Missing Values: bar chart, heatmap and spectrum of missing values
In the following, we break down the report into different sections to demonstrate each part of the report.
Here we load the titanic dataset into a pandas dataframe and use it to demonstrate our functionality:
from dataprep.datasets import load_dataset
df = load_dataset("titanic")
After getting a dataset, we can generate the report object by calling create_report(df). The following shows an example:
from dataprep.eda import create_report
report = create_report(df, title='My Report')
Once we have a report object, we can show it in the notebook:
report.show()
Or we can open the report in the browser:
report.show_browser()
Or just save the report locally:
report.save(filename='report_01', to='~/Desktop')
You can see the full report here.
In this section, we can see the types of columns and the statistics of the dataset.
In this section, we can see the statistics and plots for each variable in the dataset.
For a numerical variable, the report shows quantile statistics, descriptive statistics, histogram, KDE plot, QQ norm plot and box plot.
For a categorical variable, the report shows text analysis, bar chart, pie chart, word cloud, word frequencies and word length.
For a datetime variable, the report shows a line chart.
In this section, the report shows an interactive plot; the user can use the dropdown menu above the plot to select which two variables to compare. The plot shows a scatter plot and the regression line for the two variables.
In this section, we can see the correlations between variables in Spearman, Pearson and Kendall matrices.
In this section, we can see the missing values in the dataset through bar chart, spectrum and heatmap.
https://docs.dataprep.ai/user_guide/eda/create_report.html
2021-11-27T02:32:45
CC-MAIN-2021-49
1637964358078.2
[]
docs.dataprep.ai
Standard Test Interface
Standard Discovery, Staging and Invocation of Integration Tests. Version: 2.0.0.
Definitions
... test runner for each set of tests it runs. The testing system MUST stage the following package on the test runner: standard-test-roles. The testing system MUST clone the dist-git repository for the test on the test runner, by the testing system independently and executed in a clear test environment.
https://docs.fedoraproject.org/my/ci/standard-test-interface/
2021-11-27T03:31:54
CC-MAIN-2021-49
1637964358078.2
[]
docs.fedoraproject.org
Lootex Developer Portal
Our mission is to make people's virtual assets real by providing a trading platform built on the blockchain.
What is Lootex?
We are experts in managing Digital Items (known as NFTs). Lootex has served business clients from software integration to channel sales. Do feel free to drop a message here:
The new version of "Digital Items" is:
Secure and Trusted
Every listed digital item is an NFT (Non-Fungible Token). Each piece is unique and the full on-chain history can be viewed by anyone. Unlike traditional digital items, once the limited artworks have been sold, no extra copies will ever be created. Blockchain technology prevents forgery and fraudulent transactions.
True Ownership
We help game developers, celebrities, artists and illustrators submit their limited-edition artwork onto the blockchain. Blockchain provides an immutable, trustworthy and reliable source of ownership. Everyone can trace the origin and trade history of the artworks. It means if you have bought something, you can prove you really own it in a legal way.
Explore and Share
We love creativeness and all fun ideas. Welcome to share them with us! Let us help you make your digital assets real. If you are a collector, here's your oasis. Go hunt and find your hidden gems. Feel free to buy or sell your collectibles, or even show off your treasures. Enjoy the surprises it may bring. Lootex's Forge enables developers to create cryptoitems or find items to support their games. More importantly, Marketplace connects players and developers to a larger community. All these make the ecosystem energetic and ever-advancing.
Let's begin and see how the Forge API works:
https://docs.forge.lootex.dev/
2021-11-27T02:10:27
CC-MAIN-2021-49
1637964358078.2
[]
docs.forge.lootex.dev
sec.userRemoveRoles( $user-name as String, $role-names as String[] ) as null Removes the roles ($role-names) from the list of roles granted to the user ($user-name). If a user with name equal to $user-name is not found, an error is returned. If one of $role-names does not correspond to an existing role, an error is returned. If the current user is limited to granting only his/her roles, and one of $role-names is not a subset of the current user's roles, then an error is returned. This function must be executed against the security database. // execute this against the security database declareUpdate(); const sec = require('/MarkLogic/security.xqy'); sec.userRemoveRoles("Jim", ("admin", "admin-builtins")) // Removes the "admin" and "admin-builtins" roles from the user, "Jim." Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question.
https://docs.marklogic.com/sec.userRemoveRoles
2021-11-27T03:25:46
CC-MAIN-2021-49
1637964358078.2
[]
docs.marklogic.com
Overview of the Microsoft 365 admin center The Microsoft 365 admin center has two views: simplified view helps smaller organizations manage their most common tasks. Dashboard view includes more complex settings and tasks. You can switch between them from a button at the top of the admin center. Watch: The admin center in simplified view With the Microsoft 365 admin center, you can reset passwords, view your invoice, add or remove users, and much more all in one place. Sign in to Office.com with your work account, and select the app launcher. If you have permission to access the admin center, you'll see Admin in the list. Select it. At the top of the admin center, review the top actions for you. You may see different actions depending on what you've already set up, such as creating new accounts, using Teams, setting up email, and installing Office apps. Under Your organization on the Users tab is a list of people who can access apps and services, add new users, reset passwords, or use the three dots (more actions) menu. Select a person to view or edit their information and settings. On the Teams tab, create a new team or manage existing teams. You can manage the members of a team or select the three dots (more actions) to change other Teams settings. On the Subscriptions tab, add more products, add licenses, or use the three dots (more actions) menu to modify licenses or payment method. On the Learn tab, browse videos and articles about the admin center and other Microsoft 365 features. To explore more advanced features of the admin center, open the navigation menu and expand the headings to see more. Select Show all to see everything in the navigation menu or use the search bar to quickly find what you're looking for. If you need assistance, select Help & support. Search for topic you want help with and view the recommended solution or select the headset to contact support, and then enter your question and contact information. Watch: The admin center in dashboard view The Microsoft 365 admin center is where you manage your business in the cloud. You can complete such tasks as adding and removing users, changing licenses, and resetting passwords. Specialist workspaces, like Security or Device management, allow for more granular control. For more information about how the admin centers work together, see What about the specific types of IT roles and other workspaces like Security, Device Management, or Exchange? in this article. To get to the Microsoft 365 admin center, go to admin.microsoft.com or, if you're already signed in, select the app launcher, and choose Admin. On the home page, you can create cards for tasks that you perform frequently. To add a new card, select Add card, then select the plus sign next to the card you want to add. When you are finished, close the window. You can rearrange the cards by selecting and then dragging them to where you want. To remove a card, select the three dots (more actions), and then choose Remove. To view more admin tasks, expand the navigation menu. You'll find advanced configuration settings in the additional admin centers at the bottom. One common task that you might perform in the admin center is adding a user. To do this, select Users, Active users, and then select Add a user. Enter the user's name and other information, and then select Next. Follow the prompts to finish adding the user. When you are done, select Finish adding, and then select Close. You can sort your active users by columns, such as Display name or Licenses. 
To add more columns, select Choose columns, select the columns you want to add, and then select Save. Select a user to see more options, such as managing their product licenses. To enable more features that come with your subscription, select Setup. Here you can turn on sign-in security, mobile app protection, DLP, and other features included with your subscription. If you need support at any time, choose Need help. Enter your question, then check out the links that appear. If you don't get your answer here, choose Contact support to open a service request. For more information on managing billing, passwords, users, and admins, see the other lessons in this course.
Who is an admin?
By default, the person who signs up for and buys a Microsoft 365 for business subscription gets admin permissions. That person can assign admin permissions to other people to help them manage Microsoft 365 for their organization. If you have no idea who to contact at your work or school for help, try asking the person who gave you your user account and password.
Note
Targeted release admins have first access to new features. New features later roll out to all admins. This means that you might not see the admin center, or it might look different than what is described in help articles. To be among the first to see new features, see Participate in the admin center, below.
Turn on Targeted release
Sign in at admin.microsoft.com, go to the navigation pane and select Settings > Org settings > Organization profile tab.
Admin center feedback
While in the admin center, you can send feedback to Microsoft about your experience.
Frequently asked questions
Don't see your questions answered here? Go to the Feedback section at the bottom of this page and ask your question.
What language options are available in the Admin Center? The Microsoft 365 admin center is fully localized in 40 languages.
Related content
What is a Microsoft 365 admin? (video)
Add an admin (video)
Customize the Microsoft 365 theme for your organization (article)
https://docs.microsoft.com/en-GB/microsoft-365/business-video/admin-center-overview?view=o365-worldwide
2021-11-27T01:47:41
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
CDI Event Bus Notifier
Since Payara Server 5.182
The CDI Event Bus Notifier provides a way to send notifications from the Notification service into the internal Payara event bus.
CDI Event Bus Notifier Configuration
Enabled: Enables/Disables the CDI Event Bus notifier.
Dynamic: Applies changes to the notifier without a server restart.
Loop Back: Enables/Disables whether messages should also fire on the same instance or not.
Make sure that the "Enabled" box is ticked so that the notifier will be used. If you would like the changes to take effect without needing a restart, tick the "Dynamic" box as well. If you want to receive the message events on the same instance, tick the "Loop Back" box as well. Otherwise, messages will be received only by remote Payara instances.
To make these changes via the asadmin tool, use the following command, which mirrors the above screenshot:
asadmin> notification-cdieventbus-configure --loopBack=true --dynamic=true --enabled=true --hazelcastEnabled=true
To check the current applied configuration from asadmin, run the command:
asadmin> get-cdieventbus-notifier-configuration
This will return the current configuration, with whether it is currently enabled and if looping back is enabled:
$ asadmin get-cdieventbus-notifier-configuration
Enabled  LoopBack
true     true
Observing Notification Events
Any application deployed to any instance in the same cluster can observe notification events triggered by the CDI event bus notifier. It would receive an instance of EventbusMessage (which extends Notification) that provides structured data about specific event types, such as HealthCheckNotificationData or RequestTracingNotificationData. It also provides the same information in a String form in the title and message fields.
In order to observe the events in an application, use the Payara API artifact as a compile time dependency. Notification events can be observed as a standard @Inbound CDI event of type EventbusMessage or its supertypes:
public void observe(@Observes @Inbound EventbusMessage event) {
    String shortInfo = event.getSubject();
    String detailedMessage = event.getMessage();
    String domainName = event.getDomain();
    String sourceInstanceName = event.getInstance();
    if (event.getData() instanceof HealthCheckNotificationData) {
        Optional<HealthCheckResultEntry> mostCritical = event.getData()
            .as(HealthCheckNotificationData.class).getEntries()
            .stream().sorted().findFirst();
    }
}
https://docs.payara.fish/community/docs/5.182/documentation/payara-server/notification-service/notifiers/cdi-event-bus-notifier.html
2021-11-27T02:01:43
CC-MAIN-2021-49
1637964358078.2
[array(['../../../../_images/notification-service/cdi-event-bus/cdi-event-bus-notif-config.png', 'Admin console config'], dtype=object) ]
docs.payara.fish
If the DAS is on another machine then check the Remote Domain box, otherwise select Local Domain. Read and accept the license agreement. Enter the following details for the domain:
Domain → This is the name of the domain you want to use. By default this is domain1.
Host → This is the hostname that the server will be listening on. By default this is localhost.
DAS Port → This is the port that the admin-listener listens on. By default this is 4848.
HTTP Port → This is the port that the application will be hosted on. By default this is 8080.
Target → This is the name of the node that the application will be getting deployed to. This can be left blank if it is a local installation.
Username → This is the admin username for Payara Server. By default this is admin.
Password → This is the admin password for Payara Server. By default this is blank.
That's how to add Payara Server to NetBeans. You'll see a window where the final configuration settings can be changed. The previous options are all repeated here, with a few new options. You can click the help button in the bottom right to see what they all do in detail. Disabling the registered Derby server will speed up server start time.
Managing Payara Server from NetBeans
All of the management of Payara Server from NetBeans is done through the Services tab, which can be found next to the Projects and Files tabs. Once Payara Server is added to NetBeans it can be found under Servers with the configured name. Right-clicking on the server lets you do the following things:
- Start or stop the server.
- Start debugging the server.
- Open the admin console.
- Open the server log.
Expanding the server dropdown will list Applications, Resources and Web Services. From here you can view as well as manage some of these things from the NetBeans interface. For example, under Resources → JDBC → Connection Pools → DerbyPool (right click) → Properties you can change details like the port number.
Deploying Applications to Payara Server
An application can be deployed and undeployed easily from within NetBeans. First, the application run settings need to be configured to use Payara Server. Go to Properties on a project and then the Run section of this window. This menu lists the configuration for deploying the application.
https://docs.payara.fish/community/docs/5.191/documentation/ecosystem/netbeans-plugin/payara-server.html
2021-11-27T03:23:58
CC-MAIN-2021-49
1637964358078.2
[array(['../../../_images/netbeans-plugin/payara-server/netbeans-plugin-configure-server.png', 'Server Configuration Window'], dtype=object) array(['../../../_images/netbeans-plugin/payara-server/netbeans-services.png', 'NetBeans Services'], dtype=object) array(['../../../_images/netbeans-plugin/payara-server/netbeans-project-run-configuration.png', 'Project Run Configuration'], dtype=object) ]
docs.payara.fish
The Microsoft Collaborate program is offered through Partner Center and requires registration. If you already have an account in Partner Center, it is best to use the same account to enroll in Collaborate. Individual accounts are being deprecated, so please use Company when registering for Collaborate.
Important
You can use the following account to work in Partner Center:
- Azure AD (organizational account)
Only users with the global administrator role can register using an Azure AD account. If you do not have this role, you can try to find a global administrator for your organization to help you register.
How to register
Navigate to the Partner Center Directory. If you're not already signed in, sign in now using an existing account or create a new account. Scroll down to the Developer programs section and click on the Get Started link for Microsoft Collaborate. The Get Started link will take you to the registration page.
Note
If you signed in with an existing Partner Center account, the page will contain information from that account. You can modify the Publisher display name and Contact info if needed.
Important
The following error indicates that the user is signed in with an Azure AD account that does not have administrator privileges and registration cannot be completed. We could not validate your identity as a global administrator. Try to find a global administrator for your organization or sign out and sign in again using a Microsoft Account.
Select the Account country/region in which you live, or where your business is located. You won't be able to change this later. Select your Account type (Company). You won't be able to change this later, so be sure to choose the right type of account. Enter the Publisher display name that you wish to use (50 characters or fewer). Select this carefully, as this name will be used when you interact with Collaborate (download content, submit feedback etc.). For company accounts, be sure to use your organization's registered business name or trade name. If someone else is using a publisher display name for which you hold the trademark or other legal right, contact Microsoft. Enter the contact info you want to use for your account.
Note
We'll use this info to contact you about account-related matters. For example, you'll receive an email confirmation message after you complete your registration. When you register as a Company, you'll also need to enter the name, email address, and phone number of the person who will approve your Company's account.
Review your account info and confirm that everything is correct. Then, read and accept the terms and conditions of the Collaborate Agreement. Check the box to indicate you have read and accepted these terms. Click Finish to confirm your registration.
How to configure access for multiple users
Partner Center leverages Azure AD for multi-user account access and management. If your organization already uses Office 365 or other business services from Microsoft, you already have Azure AD. Otherwise, you can create a new Azure AD tenant from within Partner Center at no additional charge.
- Associate Azure Active Directory with your Partner Center account
- Add users, groups, and Azure AD applications to your Partner Center account
- Set roles and custom permissions for account users
What happens when an Azure AD tenant is linked to a Partner Center account?
- Tenant ID is added to the account data - Account Administrator gets the ability to view users of the Azure AD tenant and add them to the account - Tenant Global Admin gets the ability to add new tenant users in Partner Center - Tenant Global Admin gets the ability to invite guest users in Partner Center No changes are made to the Azure AD tenant itself. How to register as an organization Before you begin To create an account on Partner Center, you’ll need to have on hand the following information. You may want to take a few minutes to gather these items before you get started: Global administrator work email. If you're not sure what your company's work account is, see how to find global administrator. If your company doesn’t have a work account, you can create one during the account creation process. Your company’s legal business name, address, and primary contact. We need this information to confirm that your company has an established profile and that you are authorized to act on its behalf. Authority to sign legal agreements. Ensure that you are authorized to sign legal agreements on your company's behalf as you’ll be asked to do so during the enrollment process. Name and company email of the person you want to act as your primary contact. Guidelines When creating a company account, we suggest that you follow these guidelines, especially if more than one person needs to access the account. - Create your Microsoft account using an email address that doesn't already belong to you or another individual, such as [email protected]. You may not be able to use an email address at your company's domain, especially if your company already uses Azure AD. - If you plan to join Windows program for app development in future and want to reuse your partner center account, then it is recommended that you enroll to Windows program first and then join Collaborate. Otherwise you might have to create separate accounts for these programs. - Add a company phone number that does not require an extension and is accessible to key team members. Next steps Navigate to the portal - Navigate to the Collaborate homepage:. - If your organization created multiple Azure AD tenants, select the one it uses for Collaborate. Click on badge icon on the right of the screen to view the list of available tenants. - If your organization opened multiple accounts in Partner Center, select the one it uses for Collaborate. Click on the account name in the left navigation menu to view list of account. - When authentication is completed, you will see the homepage displaying your name and organization. Tip Homepage will look different if you participate in at least one engagement - you will see links to resources available to you. Request access Before you can download content or submit feedback, you need to join an engagement. Depending on how engagement is configured, you can: - Join - Request access - Contact engagement administrators (users with Engagement Owner or Power User role) using other channels, for example e-mail, and ask them to add you to the engagement. Tip Power User is a representative from your organization who manages engagement access. Depending on how engagement is configured, owner approval might be required for users to join. Some engagements only require acceptance of terms of use. Click on the Join engagements link to browse the list of new engagements available to you and your organization. Find the engagement you are interested in and click on its name. Page with detailed engagement information will open. 
Review Description and Terms of use to make sure you understand the engagement purpose and terms of use. Check I accept Terms of Use field and click Join or Request Access button. If owner approval is not required (Join option), engagement will be added to the engagement list and you can start using it. If you do not see the engagement in the list - press F5 to refresh the page. If owner approval is required, you will be asked to provide justification for requesting access. Engagement owner and Power User will be notified about access request via e-mail. They will review the request and configure engagement access. Usually they will notify you when access is granted. If you do not receive a notification, review the list of engagements to check if your access request was approved.
https://docs.microsoft.com/en-us/collaborate/registration
2021-11-27T04:15:39
CC-MAIN-2021-49
1637964358078.2
[array(['images/first-time-user.png', 'Collaborate homepage'], dtype=object)]
docs.microsoft.com
Table of Contents Introduction To make installing our SlackBuilds a little simpler, we provide an application to download and build them: sepkg for building from SlackBuilds, although sbopkg can also be used. It supports queue files, and will even download the queues from our server if needed.
https://docs.slackware.com/studioware:quick_start?rev=1468867912
2021-11-27T01:54:50
CC-MAIN-2021-49
1637964358078.2
[]
docs.slackware.com
Data Quality¶ This module will serve two purposes: - provide routines to create simple radar data quality related fields. - provide routines to decide which radar pixel to choose based on the competing information in different quality fields. Data is supposed to be stored in ‘aligned’ arrays. Aligned here means that all fields are structured such that in each field the data for a certain index is representative for the same physical target. Therefore no assumptions are made on the dimensions or shape of the input fields except that they exhibit the numpy ndarray interface.
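As a rough illustration of this convention, the snippet below uses plain numpy (not the actual wradlib.qual API; the compositing rule shown here is an assumption) to pick, for each aligned index, the pixel coming from whichever quality field is higher:

import numpy as np

# Two radars observing the same targets: data and quality arrays are "aligned",
# i.e. index i refers to the same physical target in every array.
data_a = np.array([1.0, 2.0, np.nan, 4.0])
data_b = np.array([1.5, np.nan, 3.0, 3.5])
qual_a = np.array([0.9, 0.2, 0.0, 0.7])   # e.g. a beam-blockage-based quality field
qual_b = np.array([0.4, 0.0, 0.8, 0.9])   # e.g. a distance-based quality field

# Choose, per index, the value from the field with the higher quality,
# and keep that quality value as the quality of the composite.
take_a = qual_a >= qual_b
composite = np.where(take_a, data_a, data_b)
composite_quality = np.maximum(qual_a, qual_b)

print(composite)           # [1.  2.  3.  3.5]
print(composite_quality)   # [0.9 0.2 0.8 0.9]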
https://docs.wradlib.org/en/1.5.0/qual.html
2021-11-27T03:17:06
CC-MAIN-2021-49
1637964358078.2
[]
docs.wradlib.org
To configure the WSO2 Governance Registry: - First download WSO2 Governance Registry from the official product site. Download the WebSEAL authentication JAR files from the following URL: Copy the JAR files you downloaded in the previous step to the <GREG_HOME>/repository/components/dropins/ directory. Alternatively, you can install the WebSEAL-based authenticator feature from the p2-repo, because it is not shipped with Governance Registry. Add the following entry to the <GREG_HOME>/repository/conf/security/authenticators.xml file. <Authenticator disabled="false" name="WebSealUIAuthenticator"> <Priority>3</Priority> </Authenticator> Start the Governance Registry server. For more information, see Running the Product. - Login to the management console using the default admin user name and password (admin and admin). - Go to Configure and click Users and Roles from the menu. - Create a new user called “webSealUser”. - Go to Configure and click Users and Roles. Create a new role called “delegated-admin” and assign the “webSealUser” to this role. Note: This user name and password are used by WebSEAL as authentication for the Governance Registry server. Grant “login” permission to the “everyone” role.
https://docs.wso2.com/display/Governance500/Configure+the+Governance+Registry
2021-11-27T02:10:31
CC-MAIN-2021-49
1637964358078.2
[]
docs.wso2.com
eProsima Fast DDS Monitor Documentation.). Furthermore, the user can check the status of the deployed DDS network at any time, i.e. see for each DDS Domain which DomainParticipants are instantiated, as well as their publishers and subscribers and the topics under which they publish or to which they subscribe respectively. It is also possible to see the physical architecture of the network on which the DDS applications that use Fast DDS are running. eProsima Fast DDS Monitor is designed to meet the following criteria: Monitoring: real-time tracking of network status and DDS communication. Intuitive: graphical user interface developed following a user experience design approach. Introspection: easily navigate through the deployed and active DDS entities being able to inspect their configuration and physical deployment. Troubleshooting: detect at a glance the possible issues or anomalous events that may occur in the communication. Contacts and Commercial support¶ Find more about us at eProsima’s webpage. Support available at: Phone: +34 91 804 34 48 Contributing to the documentation¶ Fast DDS Monitor Documentation is an open source project, and as such all contributions, both in the form of feedback and content generation, are most welcomed. To make such contributions, please refer to the Contribution Guidelines hosted in our GitHub repository. Structure of the documentation¶ This documentation is organized into the sections below.
https://fast-dds-monitor.readthedocs.io/en/latest/
2021-11-27T01:41:26
CC-MAIN-2021-49
1637964358078.2
[array(['_images/logo.png', 'eProsima'], dtype=object)]
fast-dds-monitor.readthedocs.io
GitHub API In this Article Overview You can update Projects that are synchronized with GitHub repositories by using the APIs described in this article. Use these APIs with Continuous-Integration/Continuous-Deployment workflows to pull changes from one development stage to another and checkout GitHub repositories. You still need commit changes through the Manager interface. The Authorization type supported by the APIs is Basic Auth. Pull Request Pull changes from a branch into the specified project. You must have already created the project and done a checkout of the branch in the Manager UI prior to calling this endpoint. This endpoint can be useful to update a project with the latest changes on a branch for testing. Syntax Example Sample Request and Response Do a checkout of the repository at the given reference into the project at the specified path. You must create the project first and do a checkout in the Manager UI before calling this endpoint. This endpoint can be useful to update a project to a newly tagged version or revert to a previously stable tag. You need to provide the repository path in the body of the call.
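As a hedged sketch of how these endpoints might be driven from a CI/CD job with Basic Auth, the Python snippet below uses the requests library; the host, URL paths and body fields are placeholders (the documented syntax is omitted from this extract), not the actual SnapLogic API:

import requests

BASE = "https://example.snaplogic.com/api/1/rest/public"   # placeholder host/path
AUTH = ("ci-user@example.com", "app-password")              # Basic Auth credentials

# Hypothetical "pull" call: update an already checked-out project from its branch.
pull = requests.post(f"{BASE}/project/pull/MyOrg/projects/MyProject", auth=AUTH)
pull.raise_for_status()

# Hypothetical "checkout" call: the repository path and reference go in the body.
checkout = requests.post(
    f"{BASE}/project/checkout/MyOrg/projects/MyProject",
    auth=AUTH,
    json={"repo": "my-org/my-repo", "ref": "v1.2.0"},       # placeholder fields
)
checkout.raise_for_status()
print(pull.status_code, checkout.status_code)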
https://docs-snaplogic.atlassian.net/wiki/spaces/SD/pages/551026744/GitHub+API
2021-11-27T02:20:45
CC-MAIN-2021-49
1637964358078.2
[]
docs-snaplogic.atlassian.net
ZPRINT (ObjectScript) Synopsis ZPRINT:pc lineref1:lineref2 ZP:pc lineref1:lineref2 Arguments Description The ZPRINT. ZPRINT, ZPRINT ZPRINT to correctly count lines and line offsets that correspond to the source (MAC) version. You can use the $TEXT function to return a single line of INT code. Arguments pc An optional postconditional expression. Caché executes the ZPRINT command if the postconditional expression is true (evaluates to a nonzero numeric value). Caché does not execute the command if the postconditional expression is false (evaluates to zero). For further details, refer to Command Postconditional Expressions in Using Caché, Caché prints the label line. label+1 prints the line after the label. A label may be longer than 31 characters, but must be unique within the first 31 characters. ZPRINT, ZPRINT ignores lineref2 and displays the single line of code specified by lineref1. If lineref2 specifies a non-existent label or offset, ZPRINT Caché ObjectScript The Spool Device in Caché I/O Device Guide
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_czprint
2021-11-27T02:28:27
CC-MAIN-2021-49
1637964358078.2
[]
docs.intersystems.com
Configure FC on an existing SVM You can configure FC on an existing storage virtual machine (SVM)and create a LUN and its containing volume with a single wizard. The FC protocol must already be enabled but not configured on the SVM. This information is intended for SVMs for which you are configuring multiple protocols, but have not yet configured FC. Your FC fabric must be configured and the desired physical ports must be connected to the fabric. Navigate to the SVMs window. Select the SVM that you want to configure. In the SVMDetails pane, verify that FC/FCoE is displayed with a gray background, which indicates that the protocol is enabled but not fully configured. If FC/FCoE is displayed with a green background, the SVM is already configured. Click the FC/FCoE protocol link with the gray background. The Configure FC/FCoE Protocol window is displayed. Configure the FC service and LIFs from the Configure FC/FCoE protocol page: Select the Configure Data LIFs for FC check box. Enter 2in the LIFs per node field. Two LIFs are required for each node, to ensure availability and data mobility. In the Provision a LUN for FCP storage area, enter the desired LUN size, host type, and WWPNs of the host initiators. Click Submit & Close. Review the Summary page, record the LIF information, and then click OK.
https://docs.netapp.com/us-en/ontap-sm-classic/fc-config-windows/task_configuring_iscsi_fc_creating_lun_on_existing_svm.html
2021-11-27T03:37:22
CC-MAIN-2021-49
1637964358078.2
[]
docs.netapp.com
Overview. These pattern-matching widgets provide a layer of abstraction on top of regular expressions. Instead of having to specify the sometimes complex underlying regular expression, you can specify a simple token to represent the underlying expression. The above records can be described by any of the following patterns. The platform implements a version of regular expressions based off of RE2 and PCRE regular expressions. Use these tokens to quickly assemble sophisticated patterns to match in your data. The following example includes the equivalent of the previous regular expression: - The back-ticks around the pattern indicate that it is a pattern. For more information, see Text Matching.
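To make the layering idea concrete, here is a rough, self-contained illustration in Python; the token names and syntax are invented for this example and are not the product's actual pattern syntax:

import re

# A tiny, made-up token vocabulary mapped onto regular-expression fragments.
TOKENS = {
    "{digit}": r"\d",
    "{alpha}": r"[A-Za-z]",
    "{delim}": r"[-/]",
}

def pattern_to_regex(pattern):
    # Escape the literal text, then expand each token into its regex fragment.
    regex = re.escape(pattern)
    for token, fragment in TOKENS.items():
        regex = regex.replace(re.escape(token), fragment)
    return re.compile(regex)

# Match values shaped like "2016-01-29" without writing the raw regex by hand.
date_like = pattern_to_regex("{digit}{digit}{digit}{digit}{delim}{digit}{digit}{delim}{digit}{digit}")
print(bool(date_like.fullmatch("2016-01-29")))   # True
print(bool(date_like.fullmatch("01/29/2016")))   # False (different shape)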
https://docs.trifacta.com/pages/diffpages.action?pageId=143186044&originalId=147224174
2021-11-27T01:47:13
CC-MAIN-2021-49
1637964358078.2
[]
docs.trifacta.com
VRChat 2020.4.3 Release - 3 December 2020 - Build 1026 Client Features - Introducing VRChat Plus! If you'd like to support the continuing development of VRChat, you now have an option to do so by subscribing to VRChat Plus. In return, you'll get some cool new features, with more on the way! Read more at our blog post. - VRChat Plus costs US$9.99 per month, or US$99.99 per year. - VRChat Plus is available on the Steam platform. We are working towards implementing VRChat Plus on other platforms. - All VRChat players can now store up to 25 favorite avatars in their first avatar favorites row! - VRChat Plus Benefit - Active VRChat Plus Supporters have a total of 100 favorite avatars in four rows! - New nameplates with an updated design, new voice effect, automatic distance and name-length scaling, mode switching, and more! - Customize the appearance of your nameplates in the Quick Menu (under UI Elements) or in the Action Menu. Adjust the size, opacity, and display mode of the nameplates. - Switch to icon-only mode for a more compact look. Users with an icon will only show their icon, and users without an icon will show the first part of their name. - Revamped the friend icon to be more visible at range, and to get out of the way when you're close. - VRChat Plus Benefit - Active VRChat Plus Supporters will have the ability to add an icon to their nameplate! - This icon appears on the left side of your nameplate. - Take a picture in VRChat by selecting the User Icons button on the side of your Quick Menu and use it as your icon! - Upload an image of your own on the VRChat Home website! - Store up to 64 different icons and swap whenever you want for a fresh look! - VRChat Plus Benefit - Show off your support with a "VRChat Plus Supporter" indicator in your Social details. - When VRChat Plus goes Live, players who support us early on will receive a "VRChat Plus Early Explorer" badge in thanks! It will be visible in your Social details. This badge will only be available for a limited time. - VRChat Plus Benefit - Supporting VRChat via VRChat Plus will confer a small one-time boost to your Trust. You're supporting us, so we'll support you. This feature will not be live until VRChat Plus launches. - If you are a Visitor Trust Rank, it will boost you to New User, permitting uploading avatar and world content. If you are at or above New User, it will confer a small amount of Trust that may or may not increase your Trust rank. At ranks exceeding User, the trust boost is unlikely to change your Trust rank. Changes - Added "VRC+" option to the main menu. - Added VRC+ menu section. When not a VRChat Plus Supporter, this page will list the VRC+ benefits and features. When viewing as a currently-active VRChat Plus Supporter, this page will list details of the subscription. - Added User Icon image to the Quick Menu. If you are a currently-active VRChat Plus Supporter, you can click the icon in the top right to take a new picture for your user icon. It will display your currently-chosen user icon. - Added User Icon management page, where you can select from your previously-uploaded icons. There is a limit of 64 icons. You can disable your icon by selecting the "blank" icon, which is displayed as the first few letters of your name. - Added UI to support taking an image in VRChat for your User Icon. - Added "User Icon" report reason for users. - Moved the "My Creations" row above the Favorites row in the Avatars menu. - VRChat Plus Benefit - Added three additional favorite rows for avatars. 
These rows will display the number of avatars saved in that row. - The Friend icon is no longer visible in Full mode when you're nearby. It will appear in Icon-only mode, and when you get far away from the nameplate. Friends are indicated by yellow names instead of the standard white color. - Trust Rank is no longer visible on nameplates by default. Open your Quick Menu to see the Trust Rank of a user. - Added User Icons to the Safety menu as a new category. If you choose to hide user icons for a specific rank, they will be obfuscated with a mosaic effect. - Avatar Favorite Groups are now collapsed further while empty. - Added VRCat to a bunch of places in the menu. They get lonely, give them a few clicks every so often! SDK Udon - Fixed Strafe Speed in default World Prefab - Fixed issue where example prefabs didn't have their programs attached - Fixed issue where Set Variable nodes would reset their in-line values
https://docs.vrchat.com/docs/vrchat-202043
2021-11-27T02:32:56
CC-MAIN-2021-49
1637964358078.2
[]
docs.vrchat.com
Debian Packages Why use Debian packages While Spinnaker is flexible enough. Spinnaker also gets the version from the package and automatically appends it to the package name within Rosco. This makes it easy to specify your package in Rosco without the version number, mycompany-app. However, during the bake provisioning process Spinnaker installs the version that was specified by the Jenkins build: mycompany-app.3.24.9-3. Debian packaging allows service teams to easily add their app specific configuration to common Packer templates. If you're using any Debian-based system (Ubuntu, DSL, Astra, etc), you'll likely be using Debian packages for your system configuration and dependency management. So it's a natural extension to use a Debian package for your own applications. Using Debian packages helps reduce the variations in Packer templates, or variables passed to Packer templates, during the bake process. Creating Debian packages You can create a Debian package by using various open source packaging tools. If you're using Java, use the OS Package library. You can also use the packaging tools provided by Debian. Example: Debian package with OSPackage Gradle plugin Begin by creating a build.gradle. You also need to create a config/scripts/post-install.sh file in your project directory. Below is an example of what a Gradle file might look like for an app that builds a war. This uses the gradle-ospackage-plugin package. See basic usage of the Deb Plugin in the Deb Plugin docs. buildscript { repositories { jcenter() maven { url "" } } dependencies { classpath 'com.netflix.nebula:gradle-ospackage-plugin:8.5' } }
https://docs.armory.io/docs/spinnaker-user-guides/debian-packages/
2021-11-27T02:36:48
CC-MAIN-2021-49
1637964358078.2
[]
docs.armory.io
Date: Sun, 15 Apr 2001 15:14:41 -0700 From: "Kevin Oberman" <[email protected]> To: "Ted Mittelstaedt" <[email protected]> Cc: "Joe Heuring" <[email protected]>, [email protected] Subject: Re: shells Message-ID: <[email protected]> In-Reply-To: Your message of "Sat, 14 Apr 2001 23:19:09 PDT." <[email protected]> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help > From: "Ted Mittelstaedt" <[email protected]> > Date: Sat, 14 Apr 2001 23:19:09 -0700 > Sender: [email protected] > > Most of the core FreeBSD developers prefer the C shell. There's > no real technical reason for their preference, they just like > it better. To be more precise, I think they prefer it as an interactive shell. I doubt many of them routinely write csh scripts. Also, on FreeBSD, csh is REALLY tcsh which is far newer and far more powerful than the old csh. It's also standard from platform to platform where every vendor seems to have their own hacks on csh, some of them highly incompatible. Finally, tcsh is pretty much upwards compatible from csh. If you can do it under csh, it will work the same under tcsh. And tcsh has many features that any modern shell has and csh does not including filename completion, command completion, more flexible prompts, and many others. R. Kevin Oberman, Network Engineer Energy Sciences Network (ESnet) Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab) E-mail: [email protected] Phone: +1 510 486-8634 To Unsubscribe: send mail to [email protected] with "unsubscribe freebsd-questions" in the body of the message Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=130747+0+/usr/local/www/mailindex/archive/2001/freebsd-questions/20010422.freebsd-questions
2021-11-27T02:42:23
CC-MAIN-2021-49
1637964358078.2
[]
docs.freebsd.org
Task Sets Docker¶ Task sets are for sharing a set of steps, like a tutorial. You make them with the task-set docker. Task sets can record any kind of command also available via the shortcut manager. It is not a macro recorder, right now, Krita does not have that kind of functionality. The tasksets docker has a record button, and you can use this to record a certain workflow. All Actions can be recorded. These include every action available in the Main Menu, but also all actions available via Ctrl + Enter. Then use this to let items appear in the taskset list. Afterwards, turn off record. You can then click any action in the list to make them happen. Press the Save icon to name and save the taskset. Task sets are a resource. As such, they can be saved, tagged, reordered. They are stored as *.kts files, which are XML files: <Taskset name="example" version="1"> <action>add_new_paint_layer</action> <action>add_new_clone_layer</action> <action>add_new_file_layer</action> </Taskset>
https://docs.krita.org/en/reference_manual/dockers/task_sets.html
2021-11-27T02:40:27
CC-MAIN-2021-49
1637964358078.2
[array(['../../_images/Task-set.png', '../../_images/Task-set.png'], dtype=object) ]
docs.krita.org
Important! After the installation is finished, please make sure to delete the folder "install". To start using DeepSound, you can purchase it from Envato Market. Purchase DeepSound To start using DeepSound, you have to download it from your Envato downloads page. Download DeepSound Get your purchase code. Upload the files, and get started. The installation is pretty easy; please follow the steps below. Click on "I agree to the terms of use and privacy policy", then click the Next button. We check that all the script requirements are met; then click on the Next button.
http://docs.deepsoundscript.com/start.html
2021-11-27T02:52:51
CC-MAIN-2021-49
1637964358078.2
[]
docs.deepsoundscript.com
ListStreams. Request Syntax { "ExclusiveStartStreamArn": " string", "Limit": number, "TableName": " string" } Request Parameters The request accepts the following data in JSON format. In the following list, the required parameters are described first. - ExclusiveStartStreamArn The ARN (Amazon Resource Name) of the first item that this operation will evaluate. Use the value that was returned for LastEvaluatedStreamArnin the previous operation. Type: String Length Constraints: Minimum length of 37. Maximum length of 1024. Required: No - Limit The maximum number of streams to return. The upper limit is 100. Type: Integer Valid Range: Minimum value of 1. Required: No - TableName If this parameter is provided, then only the streams associated with this table name are returned. Type: String Length Constraints: Minimum length of 3. Maximum length of 255. Pattern: [a-zA-Z0-9_.-]+ Required: No Response Syntax { "LastEvaluatedStreamArn": "string", "Streams": [ { "StreamArn": "string", "StreamLabel": "string", "TableName": "string" } ] } Response Elements If the action is successful, the service sends back an HTTP 200 response. The following data is returned in JSON format by the service. - LastEvaluatedStreamArn The stream ARN of the item where the operation stopped, inclusive of the previous result set. Use this value to start a new operation, excluding this value in the new request. If LastEvaluatedStreamArnis empty, then the "last page" of results has been processed and there is no more data to be retrieved. If LastEvaluatedStreamArnis not empty, it does not necessarily mean that there is more data in the result set. The only way to know when you have reached the end of the result set is when LastEvaluatedStreamArnis empty. Type: String Length Constraints: Minimum length of 37. Maximum length of 1024. - Streams A list of stream descriptors associated with the current account and endpoint. Type: Array of Stream objects Retrieve All Stream ARNs The following sample returns all of the stream ARNs..ListStreams {} Sample Response HTTP/1.1 200 OK x-amzn-RequestId: <RequestId> x-amz-crc32: <Checksum> Content-Type: application/x-amz-json-1.0 Content-Length: <PayloadSizeBytes> Date: <Date> { "Streams": [ { "StreamArn": "arn:aws:dynamodb:us-wesst-2:111122223333:table/Forum/stream/2015-05-20T20:51:10.252", "TableName": "Forum", "StreamLabel": "2015-05-20T20:51:10.252" }, { "StreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/Forum/stream/2015-05-20T20:50:02.714", "TableName": "Forum", "StreamLabel": "2015-05-20T20:50:02.714" }, { "StreamArn": "arn:aws:dynamodb:us-west-2:111122223333:table/Forum/stream/2015-05-19T23:03:50.641", "TableName": "Forum", "StreamLabel": "2015-05-19T23:03:50.641" }, ...remaining output omitted... ] } See Also For more information about using this API in one of the language-specific Amazon SDKs, see the following:
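For example, a minimal way to page through ListStreams with the AWS SDK for Python (boto3), mirroring the request and response shapes above (table name and Region taken from the sample):

import boto3

streams = boto3.client("dynamodbstreams", region_name="us-west-2")

kwargs = {"TableName": "Forum", "Limit": 100}   # Limit may be at most 100
while True:
    response = streams.list_streams(**kwargs)
    for stream in response.get("Streams", []):
        print(stream["StreamArn"], stream["StreamLabel"])
    last_arn = response.get("LastEvaluatedStreamArn")
    if not last_arn:
        break   # an absent/empty LastEvaluatedStreamArn means the last page was processed
    kwargs["ExclusiveStartStreamArn"] = last_arn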
https://docs.amazonaws.cn/en_us/amazondynamodb/latest/APIReference/API_streams_ListStreams.html
2021-11-27T02:14:26
CC-MAIN-2021-49
1637964358078.2
[]
docs.amazonaws.cn
IES Texture Node The IES Texture is used to match real-world lights based on IES files (IES). IES files store the directional intensity distribution of light sources. Inputs - Vector Texture coordinate for lookup in the light distribution. Defaults to the normal. - Strength Light strength multiplier. Properties - Mode The location to load the IES file from. - Internal Use IES profile from a file embedded in a text data-block in the blend-file, for easy distribution. - External Load IES profile from a file on the drive. Outputs - Factor Light intensity, typically plugged into the Strength input of an Emission node. Examples Lights with different IES profiles.
https://docs.blender.org/manual/it/dev/render/shader_nodes/textures/ies.html
2021-11-27T03:33:56
CC-MAIN-2021-49
1637964358078.2
[array(['../../../_images/render_shader-nodes_textures_ies_node.png', '../../../_images/render_shader-nodes_textures_ies_node.png'], dtype=object) array(['../../../_images/render_shader-nodes_textures_ies_example.jpg', '../../../_images/render_shader-nodes_textures_ies_example.jpg'], dtype=object) ]
docs.blender.org
Database cursors A database cursor is a database-level object that lets you query a database multiple times. You'll get consistent results even if there are data-append or data-retention operations happening in parallel with the queries. Database cursors are designed to address two important scenarios: The ability to repeat the same query multiple times and get the same results, as long as the query indicates "same data set". The ability to make an "exactly once" query. This query only "sees" the data that a previous query didn't see, because the data wasn't available then. The query lets you iterate, for example, through all the newly arrived data in a table without fear of processing the same record twice or skipping records by mistake. The database cursor is represented in the query language as a scalar value of type string. The actual value should be considered opaque and there's no support for any operation other than to save its value or use the cursor functions noted below. Cursor functions Kusto provides three functions to help implement the two above scenarios: cursor_current(): Use this function to retrieve the current value of the database cursor. You can use this value as an argument to the two other functions. This function also has a synonym, current_cursor(). cursor_after(rhs:string): This special function can be used on table records that have the IngestionTime policy enabled. It returns a scalar value of type boolindicating whether the record's ingestion_time()database cursor value comes after the rhsdatabase cursor value. cursor_before_or_at(rhs:string): This special function can be used on the table records that have the IngestionTime policy enabled. It returns a scalar value of type boolindicating whether the record's ingestion_time()database cursor value comes before or at the rhsdatabase cursor value. The two special functions ( cursor_after and cursor_before_or_at) also have a side-effect: When they're used, Kusto will emit the current value of the database cursor to the @ExtendedProperties result set of the query. The property name for the cursor is Cursor, and its value is a single string. For example: {"Cursor" : "636040929866477946"} Restrictions Database cursors can only be used with tables for which the IngestionTime policy has been enabled. Each record in such a table is associated with the value of the database cursor that was in effect when the record was ingested. As such, the ingestion_time() function can be used. The database cursor object holds no meaningful value unless the database has at least one table that has an IngestionTime policy defined. This value is guaranteed to update, as-needed by the ingestion history, into such tables and the queries run, that reference such tables. It might, or might not, be updated in other cases. The ingestion process first commits the data, so that it's available for querying, and only then assigns an actual cursor value to each record. If you attempt to query for data immediately following the ingestion completion using a database cursor, the results might not yet incorporate the last records added, because they haven't yet been assigned the cursor value. Also, retrieving the current database cursor value repeatedly might return the same value, even if ingestion was done in between, because only a cursor commit can update its value. Querying a table based on database cursors is only guaranteed to "work" (providing exactly-once guarantees) if the records are ingested directly into that table. 
If you are using extents commands, such as move extents/.replace extents to move data into the table, or if you are using .rename table, then querying this table using database cursors is not guaranteed to not miss any data. This is because the ingestion time of the records is assigned when initially ingested, and does not change during the move extents operation. Therefore, when the extents are moved into the target table, it's possible that the cursor value assigned to the records in these extents was already processed (and next query by database cursor will miss the new records). Example: Processing records exactly once For a table Employees with schema [Name, Salary], to continuously process new records as they're ingested into the table, use the following process: // [Once] Enable the IngestionTime policy on table Employees .set table Employees policy ingestiontime true // [Once] Get all the data that the Employees table currently holds Employees | where cursor_after('') // The query above will return the database cursor value in // the @ExtendedProperties result set. Lets assume that it returns // the value '636040929866477946' // [Many] Get all the data that was added to the Employees table // since the previous query was run using the previously-returned // database cursor Employees | where cursor_after('636040929866477946') // -> 636040929866477950 Employees | where cursor_after('636040929866477950') // -> 636040929866479999 Employees | where cursor_after('636040929866479999') // -> 636040939866479000
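As an illustration of the exactly-once pattern from client code, the sketch below uses the azure-kusto-data Python package; the cluster, database and authentication method are placeholders, and a production poller should take the cursor value emitted with the query itself (in @ExtendedProperties) rather than calling cursor_current() again, to avoid skipping records ingested in between:

from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

cluster = "https://mycluster.westus.kusto.windows.net"    # placeholder
kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(cluster)
client = KustoClient(kcsb)
db = "MyDatabase"                                          # placeholder

def scalar(query, column):
    # Run a query that returns a single row and pull out one column.
    table = client.execute(db, query).primary_results[0]
    return [row[column] for row in table][0]

# Capture the current database cursor.
cursor = scalar("print Cursor = cursor_current()", "Cursor")

# ... ingestion happens in the meantime ...

# Process only the records ingested after the captured cursor.
for row in client.execute(db, f"Employees | where cursor_after('{cursor}')").primary_results[0]:
    print(row["Name"], row["Salary"])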
https://docs.microsoft.com/en-us/azure/data-explorer/kusto/management/databasecursor
2021-11-27T04:22:12
CC-MAIN-2021-49
1637964358078.2
[]
docs.microsoft.com
The id corresponds to a stock at a specific time era. The features describe the various quantitative attributes of the stock at the time. The target represents an abstract measure of performance ~4 weeks into the future. Every Saturday at 18:00 UTC a new round opens and new tournament data is released. To participate in the round, you must submit your latest predictions by the deadline on Monday 14:30 UTC. Submissions are scored on the correlation (corr) between your predictions and the targets. The higher the correlation the better. They are also scored on meta model contribution (mmc) and feature neutral correlation (fnc). The higher the meta model contribution and feature neutral correlation the better. You can choose which scores (corr and/or mmc) you want to stake on. stake_value is the value of your stake on the first Thursday (scoring day) of the round. payout_factor is a number that scales with the total NMR staked across all models in the tournament. The higher the total NMR staked above the 300K threshold the lower the payout factor. corr_multiplier and mmc_multiplier are configured by you to control your exposure to each score. You are given the following multiplier options. Reputation is the weighted average of a given metric (corr, mmc, and fnc) over the past 20 rounds.
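Illustrative only: the sketch below shows how these pieces could combine into a round payout; the exact formula, caps and rounding are defined by the tournament rules, not by this snippet.

def round_payout(stake_value, payout_factor, corr, mmc,
                 corr_multiplier=1.0, mmc_multiplier=0.0):
    # Hypothetical payout: stake scaled by the weighted scores and the payout factor.
    weighted_score = corr * corr_multiplier + mmc * mmc_multiplier
    return stake_value * payout_factor * weighted_score

# Example: 100 NMR staked, payout factor 0.6, corr 0.03, mmc 0.01, 1x corr + 2x mmc
print(round_payout(100, 0.6, 0.03, 0.01, corr_multiplier=1.0, mmc_multiplier=2.0))
# -> 100 * 0.6 * (0.03 + 0.02) = 3.0 NMR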
https://docs.numer.ai/tournament/learn
2021-11-27T03:34:14
CC-MAIN-2021-49
1637964358078.2
[]
docs.numer.ai
Workflow steps: Identify the tasks that you wish to execute. - Add a task. - In Plan View, click the Plus icon at the bottom of your plan. - Specify the task to execute. Repeat the previous step to add additional tasks as needed. - To test your plan, click Run now. The plan is immediately executed. Edit the plan and repeat the above steps until the plan is ready for production runs. Steps: When you first open Plan View, you should see an empty plan. - You can apply overrides through Plan View, too. - For more information, see Manage Parameters Dialog. To save your schedule, click Save. - In the context panel, you can make changes to your schedule: After saving, the schedule is automatically enabled. To disable the schedule, use the slider bar. Steps: - After you select the HTTP task type, you can specify the task in the context panel. Specify the fields of the request. - Click Save. - The task is created and added to the plan. For more information, see Plan View for HTTP Tasks. Example - Success or Failure Tasks in a Plan.
https://docs.trifacta.com/pages/diffpages.action?pageId=174732113&originalId=172764553
2021-11-27T02:32:36
CC-MAIN-2021-49
1637964358078.2
[]
docs.trifacta.com
The Administrative Console Guide Full online documentation for the WP EasyCart eCommerce plugin! The short description is a small bit of text that appears near the product title and pricing of a product. We find it best to experiment with this section during product development to see how much text, and what styling, looks best at the product level. Note: You can enter some basic HTML tags as well to help style your content here, but please be aware that if you copy and paste in HTML from outside sources, scripts can cause injection issues.
https://docs.wpeasycart.com/wp-easycart-administrative-console-guide/?section=short-description
2021-11-27T02:04:56
CC-MAIN-2021-49
1637964358078.2
[]
docs.wpeasycart.com
WSO2 Governance Registry generates events for each significant registry operation. Through the Management Console and through APIs, you can create subscriptions to these events. WSO2 Governance Registry can publish notifications to external or internal endpoints. On the other hand, WSO2 Governance Registry contains an Apache Axis2 based runtime as a part of the WSO2 Carbon Framework. You can deploy web services on this runtime, which can be used as internal endpoints to receive notifications generated on the WSO2 Governance Registry. A Message Receiver is an extension mechanism of Apache Axis2 which allows service authors to define how message processing should happen within a service endpoint. Using a message receiver you can implement an endpoint without physically requiring an implementation class for your service. This makes it much easier for us to very easily develop event subscribers. This sample explains how to: - Create a message receiver to subscribe to notifications on WSO2 Governance Registry - Forward notifications from WSO2 Governance Registry to WSO2 Business Activity Monitor. Once successfully deployed this sample will forward notification generated on WSO2 Governance Registry to the WSO2 Business Activity Monitor. We will be reusing the code of the Handler Sample in this example. This sample requires Apache Maven and WSO2 Business Activity Monitor. See Installation Prerequisites for links on how to install it. Also see Installing Business Activity Monitor on Windows from Binary Distribution or Installing Business Activity Monitor on Linux and Solaris from Binary Distribution. Instructions 1. Open to BAM_HOME/ repository/conf/carbon.xml and set the port offset to 1. <Offset>1</Offset> 2. Start the WSO2 Business Activity Monitor. See Installing Business Activity Monitor on Windows from Binary Distribution or Installing Business Activity Monitor on Linux and Solaris from Binary Distribution. 3. Deploy the KPI_Registry_Activity.tbox BAM Toolbox to WSO2 Business Activity Monitor. A BAM Toolbox is an installable archive which contains stream definitions, dashboard components and analytics for WSO2 Business Activity Monitor. The KPI_Registry_Activity.tbox is a pre-built BAM Toolbox based on the KPI Monitoring Sample example of WSO2 Business Activity Monitor, which is specifically designed for this sample. Read the KPI Monitoring Sample example of WSO2 Business Activity Monitor to learn how to make changes to this BAM Toolbox or create your own. Select the ToolBox From File System option and fill in the URL of the KPI_Registry_Activity.tbox Toolbox. Then click on the Installbutton. If the BAM Toolbox fails to install from the URL download the KPI_Registry_Activity.tbox Toolbox to your local file system and upload it. 4. Navigate to GREG_HOME/ samples/handler/src to find the source code of the Handler Sample. 5. Add the following.apache.axis2.wso2</groupId> <artifactId>axis2</artifactId> </dependency> <dependency> <groupId>org.wso2.carbon</groupId> <artifactId>org.wso2.carbon.databridge.agent.thrift</artifactId> <version>4.0.1</version> </dependency> <dependency> <groupId>org.wso2.carbon</groupId> <artifactId>org.wso2.carbon.databridge.commons</artifactId> <version>4.0.0</version> </dependency> 6. 
Comment-out the following in your POM file: <!--<exclusion> <groupId>org.wso2.carbon</groupId> <artifactId>org.wso2.carbon.context</artifactId> </exclusion>--> <!--Fragment-Host>org.wso2.carbon.registry.core</Fragment-Host--> The Fragment-Host Bundle Manifest Header declares this sample bundle to be a Fragment of the Registry Kernel. This would mean that the sample handler bundle will become an extension to the Registry Kernel. However, bundles containing services.xml files would not be deployed until the Apache Axis2 runtime has been initialized. Due to OSGi bundle start-up order, it is required that the Registry Kernel starts up before the Apache Axis2 runtime. Since the Fragment now also being an extension to the Registry Kernel, the combination of this bundle and the Registry Kernel (plus any other fragments) should start up before the Apache Axis2 runtime. But, due to the bundle not getting deployed until the Apache Axis2 runtime has been initialized, it will create a deadlock situation. Due to this reason, we have to get rid of the Fragment-Host header. 7. Add the following content to a file named services.xml in GREG_HOME/samples/handler/src/resources/META-INF: <serviceGroup> <service name="Subscriber" scope="transportsession"> <transports> <transport>http</transport> </transports> <operation name="receive"> <actionMapping></actionMapping> <messageReceiver mep="" class="org.wso2.carbon.registry.samples.receiver.SampleMessageReceiver" /> </operation> </service> </serviceGroup> 8. Add a new Java Class named SampleMessageReceiver at GREG_HOME/samples/handler/src/src/main/java/org/wso2/carbon/registry/samples/receiver/SampleMessageReceiver.java with the following source: package org.wso2.carbon.registry.samples.receiver; import org.apache.axiom.om.OMElement; import org.apache.axiom.om.xpath.AXIOMXPath; import org.apache.axiom.soap.SOAPEnvelope; import org.apache.axis2.AxisFault; import org.apache.axis2.context.MessageContext; import org.apache.axis2.receivers.AbstractMessageReceiver; import org.wso2.carbon.databridge.agent.thrift.Agent; import org.wso2.carbon.databridge.agent.thrift.DataPublisher; import org.wso2.carbon.databridge.agent.thrift.conf.AgentConfiguration; import org.wso2.carbon.databridge.commons.Event; import org.wso2.carbon.databridge.commons.exception.NoStreamDefinitionExistException; import org.wso2.carbon.registry.core.utils.RegistryUtils; import org.wso2.carbon.utils.NetworkUtils; import java.util.ArrayList; public class SampleMessageReceiver extends AbstractMessageReceiver { public static final String NAMESPACE = ""; public static final String REGISTRY_ACTIVITY_STREAM = "org.wso2.bam.registry.activity.kpi"; public static final String VERSION = "1.0.0"; protected void invokeBusinessLogic(MessageContext messageContext) throws AxisFault { SOAPEnvelope envelope = messageContext.getEnvelope(); try { // Find Username and Operation AXIOMXPath xPath = new AXIOMXPath("//ns:RegistryOperation"); xPath.addNamespace("ns", NAMESPACE); String operation = ((OMElement)((ArrayList)xPath.evaluate(envelope)).get(0)).getText(); xPath = new AXIOMXPath("//ns:Username"); xPath.addNamespace("ns", NAMESPACE); String username = ((OMElement)((ArrayList)xPath.evaluate(envelope)).get(0)).getText(); // Create Data Publisher RegistryUtils.setTrustStoreSystemProperties(); DataPublisher dataPublisher = new DataPublisher( "tcp://" + NetworkUtils.getLocalHostname() + ":7612", "admin", "admin", new Agent(new AgentConfiguration())); // Find Data Stream String streamId; try { streamId = 
dataPublisher.findStream(REGISTRY_ACTIVITY_STREAM, VERSION); } catch (NoStreamDefinitionExistException ignored) { streamId = dataPublisher.defineStream("{" + " 'name':'" + REGISTRY_ACTIVITY_STREAM + "'," + " 'version':'" + VERSION + "'," + " 'nickName': 'Registry_Activity'," + " 'description': 'Registry Activities'," + " 'metaData':[" + " {'name':'clientType','type':'STRING'}" + " ]," + " 'payloadData':[" + " {'name':'operation','type':'STRING'}," + " {'name':'user','type':'STRING'}" + " ]" + "}"); } if (!streamId.isEmpty()) { // Publish Event to Stream dataPublisher.publish(new Event( streamId, System.currentTimeMillis(), new Object[]{"external"}, null, new Object[]{ operation, username})); dataPublisher.stop(); System.out.println("Successfully Published Event"); } } catch (Exception e) { e.printStackTrace(); } } } 9.: 10. Copy the GREG_HOME/ samples/handler/src/target/ org.wso2.carbon.registry.samples.handler-4.5.0.jar into GREG_HOME/repository/components/dropins. 11. Start the server and observe the command prompt. See Running the Product for more information. You should also observe a log similar to the following explaining that your Message Receiver was successfully deployed. [2012-09-12 01:57:38,059] INFO {org.wso2.carbon.core.deployment.DeploymentInterceptor} - Deploying Axis2 service: Subscriber {super-tenant} 12. After the server has started, log into the Management Console and add a notification to the root collection with the following settings: - Event - Update - Notification - SOAP - Endpoint - - Hierarchical Subscription Method - Collection, Children and Grand Children 13. Now perform various operations on the registry such such Adding/Updating Resources, Setting Properties etc. This should generate notifications which will then be forwarded for BAM. You should see lines similar to the following printed on your command prompt, which indicates that you events were successfully generated. For best results, create multiple user accounts and log in using different credentials while you perform operations. Successfully Published Event 14. Navigate to the Gadget Portal of WSO2 Business Activity Monitor to see the statistics corresponding to your operations. A Message Receiver must implement the org.apache.axis2.engine.MessageReceiver interface. Read more about Message Receivers to get a better understanding of their uses.
https://docs.wso2.com/display/Governance500/Notifications+Subscriber+Sample
2021-11-27T02:30:33
CC-MAIN-2021-49
1637964358078.2
[]
docs.wso2.com
Connect to Your Container Instance To perform basic administrative tasks on your instance, such as updating or installing software or accessing diagnostic logs, connect to the instance using SSH. To connect to your instance using SSH, your container instances must meet the following prerequisites: Your container instances need external network access to connect using SSH. If your container instances are running in a private VPC, they need an SSH bastion instance to provide this access. For more information, see the Securely connect to Linux instances running in a private Amazon VPC blog post. Your container instances must have been launched with a valid Amazon EC2 key pair. Amazon ECS container instances have no password, and you use a key pair to log in using SSH. If you did not specify a key pair when you launched your instance, there is no way to connect to the instance. SSH uses port 22 for communication. Port 22 must be open in your container instance security group for you to connect to your instance using SSH. Note The Amazon ECS console first-run experience creates a security group for your container instances without inbound access on port 22. If your container instances were launched from the console first-run experience, add inbound access to port 22 on the security group used for those instances. For more information, see Authorizing Network Access to Your Instances in the Amazon EC2 User Guide for Linux Instances. To connect to your container instance Find the public IP or DNS address for your container instance. Open the Amazon ECS console at. Select the cluster that hosts your container instance. On the Cluster page, choose ECS Instances. On the Container Instance column, select the container instance to connect to. On the Container Instance page, record the Public IP or Public DNS for your instance. Find the default username for your container instance AMI. The user name for instances launched with an Amazon ECS-optimized AMI is ec2-user. For Ubuntu AMIs, the default user name is ubuntu. For CoreOS, the default user name is core..
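The guide itself uses the ssh command line; purely as an illustration, the same connection can be scripted with the paramiko library. The host name, key path and command below are placeholders; use the Public DNS you recorded earlier and the default user name for your AMI.

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(
    hostname="ec2-203-0-113-10.compute-1.amazonaws.com",  # container instance Public DNS
    username="ec2-user",                                   # default for the Amazon ECS-optimized AMI
    key_filename="/path/to/my-key-pair.pem",               # key pair specified at launch
    port=22,                                               # port 22 must be open in the security group
)
_, stdout, _ = ssh.exec_command("docker ps")               # e.g. list running containers
print(stdout.read().decode())
ssh.close()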
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance-connect.html
2018-12-10T00:09:53
CC-MAIN-2018-51
1544376823228.36
[]
docs.aws.amazon.com
Below are steps for an Android smartphone, but the steps are identical on the various platforms. Inviting a Device To add a device, you will need to go to the "Devices" page or "Dashboard" and click the "Invite Device" link. Invite from Dashboard Invite from Devices Page If the device you wish to add has a valid email address or phone number, you can send it an invite to connect to your organization. When you're done inputting the device details, click "Invite". In a few minutes, the device should receive a text message or email. Open the message and follow the link to download the app in the relevant app store. See a sample email below, I've circled the "Organization Key" which you'll require a bit later. Alternatively, you can direct the device owner to download Device Magic's Mobile Forms app from the relevant app store. Please search for "Mobile Forms" in the App Store (iOS) and "Forms" in the Google Play Store (Android). There will be an option to install when viewing the listing, please choose this option. If you already have a version of the "Forms" app, it may ask if you wish to replace it. Once the download is complete, open the app. When the app starts up for the first time, you will be asked if you would like to "Create a New Organization" or "Sign in to an Existing Team". If you received this invitation by text/email or created an organization when going through the earlier steps in this guide, select "Sign in to an Existing Team". If you'd like to create a new Device Magic account, then click the "Create a New Organization" button, input the required information, and follow the prompts in the email you will receive. Input the "Full Name" of your device and the "Organization Key". The "Organization Key" can be found within the text message or email you received previously. Note: If you're an administrator of the account, the Organization Key can also be found on your dashboard. Finally, click "Join My Team". The device will then attempt to to contact our servers and join your organization. The account administrator (which could be you, if you are reading this) will receive an email to notify you that the device is awaiting your approval. If you are the account administrator, you do have the option to "Log In as Administrator" directly from the app when your request is pending approval. Switch back to the website. If you click "Devices", the "Devices" page will refresh and you will see your device waiting to be assigned (if you don't see anything, double check that you entered the Organization Key correctly on your device). Click "Approve". Now have a look at your mobile phone. The application will automatically move past the pending assignment screen and begin downloading your assigned forms. If you have any questions or comments feel free to send us a message at [email protected] or leave us a comment below.
https://docs.devicemagic.com/getting-started-with-device-magic/the-basics/connecting-a-device-to-your-organization
2018-12-10T00:19:58
CC-MAIN-2018-51
1544376823228.36
[array(['https://downloads.intercomcdn.com/i/o/71880071/eeb350f29b87064fd4a7566c/Screen+Shot+2018-08-14+at+12.07.04.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/71879064/032e6ab10d251d0143ae3ea2/Screen+Shot+2018-08-14+at+11.58.07.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/71880162/fad96b665a4e915c6787ab65/Screen_Shot_2016-01-11_at_10.38.32_AM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/41649662/f6d8bfa5098258a0acc19b03/Screen+Shot+2017-12-05+at+11.27.08+AM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/71880164/84162bdec173ed56a2bbe67d/Screenshot_20170306-102041.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/71880165/04248eea6e1b2327ee37735f/Screenshot_2015-10-21-14-14-20.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/71880166/9b34c7e96e25e3c52e50f0b9/Screen_Shot_2015-10-21_at_2.58.36_PM.png', None], dtype=object) ]
docs.devicemagic.com
Manage deployment of Office 365 add-ins in the Office 365 admin center. In the Office 365 admin center, click Settings > Services & add-ins. Click Office Store. Click the toggle next to Let people in your organization go to the Office Store so that it's in the Off position. Available in Monthly Channel; available in semi-annual release in July 2018. Note: Outlook add-in installation is managed by a different process.
https://docs.microsoft.com/en-us/office365/admin/manage/manage-deployment-of-add-ins?redirectSourcePath=%252fen-us%252farticle%252fmanage-deployment-of-office-365-add-ins-in-the-office-365-admin-center-737e8c86-be63-44d7-bf02-492fa7cd9c3f&view=o365-worldwide
2018-12-10T00:42:40
CC-MAIN-2018-51
1544376823228.36
[array(['../media/0c131b91-d2a1-4515-838f-26b551b77dd2.png?view=o365-worldwide', 'List of Scopes'], dtype=object) array(['../media/b3abd42f-63d8-4a5f-8893-d1ae38f4e9b2.png?view=o365-worldwide', 'Add-in dialog for Centralized Deployment'], dtype=object) array(['../media/553b0c0a-65e9-4746-b3b0-8c1b81715a86.png?view=o365-worldwide', 'Office ribbon with Search Citations'], dtype=object) ]
docs.microsoft.com
18th World Gastroenterologists Summit Start Date : September 7, 2018 End Date : September 8, 2018 Time : 9:00 am to 6:00 pm Phone : +16508894686 Location : Auckland, New Zealand Description. Registration Info Register Now : Contact the Organizer Organized by Ethel Ayler Tel: +18002166499 Website: Event Categories: Gastroenterology, General Surgery, Health & Nutrition, Hepatology, Infectious Disease, Internal Medicine, Oncology, and Pediatrics.
http://meetings4docs.com/event/18th-world-gastroenterologists-summit/
2018-12-09T23:30:03
CC-MAIN-2018-51
1544376823228.36
[array(['http://meetings4docs.com/wp-content/themes/Events-your%20theme/thumb.php?src=http://meetings4docs.com/wp-content/uploads/2018/02/Homepage_Banner.jpg&w=105&h=105&zc=1&q=80&bid=1', None], dtype=object) ]
meetings4docs.com
Installation¶ Users¶ If you only want to use clan, install it this way: pip install clan Note clan is intended for researchers and analysts. You will need to understand the Google Analytics API in order to use it effectively. It is not intended to generate reports for your boss. Developers¶ If you are a developer that also wants to hack on clan, install it this way: git clone git://github.com/onyxfish/clan.git cd clan mkvirtualenv --no-site-packages clan pip install -r requirements.txt python setup.py develop Note If you have a recent version of pip, you may need to run pip with the additional arguments --allow-external argparse.
https://clan.readthedocs.io/en/0.1.3/installation.html
2019-03-18T22:28:37
CC-MAIN-2019-13
1552912201707.53
[]
clan.readthedocs.io
Configure Git for contributing预计阅读时间: 7 分钟 Work through this page to configure Git and a repository you’ll use throughout the Contributor Guide. The work you do further in the guide, depends on the work you do here. Task 1. Fork and clone the Docker code Before contributing, you first fork the Docker code repository. A fork copies a repository at a particular point in time. GitHub tracks for you where a fork originates. As you make contributions, you change your fork’s code. When you are ready, you make a pull request back to the original Docker repository. If you aren’t familiar with this workflow, don’t worry, this guide walks you through all the steps. To fork and clone Docker: Open a browser and log into GitHub with your account. Go to the docker/docker repository. Click the “Fork” button in the upper right corner of the GitHub interface. GitHub forks the repository to your GitHub account. The original docker/dockerrepository becomes a new fork YOUR_ACCOUNT/docker. Open a terminal window on your local host and change to your home directory. $ cd ~ In Windows, you’ll work in your Docker Quickstart Terminal window instead of Powershell or a cmdwindow. Create a reposdirectory. $ mkdir repos Change into your reposdirectory. $ cd repos Clone the fork to your local host into a repository called docker-fork. $ git clone docker-fork Naming your local repo docker-forkshould help make these instructions easier to follow; experienced coders don’t typically change the name. Change directory into your new docker-forkdirectory. $ cd docker-fork Take a moment to familiarize yourself with the repository’s contents. List the contents. Task 2. Set your signature and an upstream remote When you contribute to Docker, you must certify you agree with the Developer Certificate of Origin. You indicate your agreement by signing your git commits like this: Signed-off-by: Pat Smith <[email protected]> To create a signature, you configure your username and email address in Git. You can set these globally or locally on just your docker-fork repository. You must sign with your real name. You can sign your git commit automatically with git commit -s. Docker does not accept anonymous contributions or contributions through pseudonyms. As you change code in your fork, you’ll want to keep it in sync with the changes others make in the docker/docker repository. To make syncing easier, you’ll also add a remote called upstream that points to docker/docker. A remote is just another project version hosted on the internet or network. To configure your username, email, and add a remote: Change to the root of your docker-forkrepository. $ cd docker-fork Set your user.namefor the repository. $ git config --local user.name "FirstName LastName" Set your user.emailfor the repository. $ git config --local user.email "[email protected]" Set your local repo to track changes upstream, on the dockerrepository. $ git remote add upstream Check the result in your gitconfiguration. $ git config --local -l core.repositoryformatversion=0 core.filemode=true core.bare=false core.logallrefupdates=true remote.origin.url= remote.origin.fetch=+refs/heads/*:refs/remotes/origin/* branch.master.remote=origin branch.master.merge=refs/heads/master user.name=Mary Anthony [email protected] remote.upstream.url= remote.upstream.fetch=+refs/heads/*:refs/remotes/upstream/* To list just the remotes use: $ git remote -v origin (fetch) origin (push) upstream (fetch) upstream (push) Task 3. 
Task 3. Create and push a branch

As you change code in your fork, make your changes on a repository branch. The branch name should reflect what you are working on. In this section, you create a branch, make a change, and push it up to your fork. This branch is just for testing your configuration for this guide. The changes are part of a dry run, so the branch name will be dry-run-test.

To create and push the branch to your fork on GitHub:

1. Open a terminal and go to the root of your docker-fork.
   $ cd docker-fork
2. Create a dry-run-test branch.
   $ git checkout -b dry-run-test
   This command creates the branch and switches the repository to it.
3. Verify you are in your new branch.
   $ git branch
   * dry-run-test
     master
   The current branch has an * (asterisk) marker, so these results show you are on the right branch.
4. Create a TEST.md file in the repository's root.
   $ touch TEST.md
5. Edit the file and add your email and location. You can use any text editor you are comfortable with. Save and close the file.
6. Check the status of your branch.
   $ git status
   On branch dry-run-test
   Untracked files:
     (use "git add <file>..." to include in what will be committed)
       TEST.md
   nothing added to commit but untracked files present (use "git add" to track)
   You've only changed the one file. It is untracked so far by git.
7. Add your file.
   $ git add TEST.md
   That is the only staged file. Stage is a fancy word for work that Git is tracking.
8. Sign and commit your change.
   $ git commit -s -m "Making a dry run test."
   [dry-run-test 6e728fb] Making a dry run test
    1 file changed, 1 insertion(+)
    create mode 100644 TEST.md
   Commit messages should have a short summary sentence of no more than 50 characters. Optionally, you can also include a more detailed explanation after the summary. Separate the summary from any explanation with an empty line.
9. Push your changes to GitHub.
   $ git push --set-upstream origin dry-run-test
   Username for 'https://github.com': moxiegirl
   Password for 'https://github.com':
   Git prompts you for your GitHub username and password. Then, the command returns a result.
   Counting objects: 13, done.
   Compressing objects: 100% (2/2), done.
   Writing objects: 100% (3/3), 320 bytes | 0 bytes/s, done.
   Total 3 (delta 1), reused 0 (delta 0)
   To https://github.com/moxiegirl/docker.git
    * [new branch]      dry-run-test -> dry-run-test
   Branch dry-run-test set up to track remote branch dry-run-test from origin.
10. Open your browser to GitHub.
11. Navigate to your Docker fork.
12. Make sure the dry-run-test branch exists, that it has your commit, and the commit is signed. (A way to double-check the sign-off locally is sketched at the end of this page.)

Where to go next

Congratulations, you have finished configuring both your local host environment and Git for contributing. In the next section you'll learn how to set up and work in a Docker development container.
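Troubleshooting the sign-off: if GitHub does not show a Signed-off-by line on your dry-run commit, you can check and repair it locally. The commands below are only a sketch using standard Git; the commit hash and details will differ in your repository.

   $ git log -1
   # the commit message should end with a line like:
   # Signed-off-by: Pat Smith <[email protected]>

   # if the line is missing, re-sign the most recent commit and push the branch again
   $ git commit --amend -s --no-edit
   $ git push --force-with-lease origin dry-run-test

Amending rewrites the commit, which is why the branch has to be pushed with force; --force-with-lease refuses to overwrite remote work you do not have locally, making it the safer choice.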
https://docs.docker-cn.com/opensource/project/set-up-git/
2019-03-18T21:53:20
CC-MAIN-2019-13
1552912201707.53
[]
docs.docker-cn.com
Azure Blob Storage stores files in a flat key/value store without formal support for folders. The hadoop-azure file system layer simulates folders on top of Azure storage. By default, folder rename in the hadoop-azure file system layer is not atomic. This means that a failure during a folder rename could potentially leave some folders in the original directory and some in the new one.

Since HBase depends on atomic folder rename, a configuration setting called fs.azure.atomic.rename.dir can be set in core-site.xml to specify a comma-separated list of directories where folder rename is made atomic. If a folder rename fails, a redo is applied to finish the operation. A file named <folderName>-renamePending.json may appear temporarily; it records the intention of the rename operation so that the rename can be redone in the event of a failure.

The default value of this setting is just /hbase. To list multiple directories, separate them with a comma. For example:

   <property>
     <name>fs.azure.atomic.rename.dir</name>
     <value>/hbase,/data</value>
   </property>
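After editing core-site.xml, it can be useful to confirm that the new value is actually visible to clients before relying on atomic rename for the additional directories. The check below is only a sketch; it assumes the HDFS client scripts are installed and on your PATH, and that the updated core-site.xml is on the client's configuration path.

   # print the effective value of the setting as the client sees it
   $ hdfs getconf -confKey fs.azure.atomic.rename.dir

If the command prints only /hbase, the additional directories have not taken effect and the configuration file in use should be rechecked.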
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/bk_cloud-data-access/content/wasb-atomic-rename.html
2019-03-18T22:30:19
CC-MAIN-2019-13
1552912201707.53
[]
docs.hortonworks.com
Warning! This page documents an earlier version of InfluxDB Enterprise, which is no longer actively developed. InfluxDB Enterprise v1.7 is the most recent stable version of InfluxDB Enterprise.

Configuring InfluxDB Enterprise
Configuring InfluxDB Enterprise covers the InfluxDB Enterprise configuration settings, including global options, meta node options, and data node options.

Data node configurations
The Data node configurations page lists and describes all data node configuration settings.

Meta node configurations
The Meta node configurations page lists and describes all meta node configuration settings.
https://docs.influxdata.com/enterprise_influxdb/v1.6/administration/
2019-03-18T22:25:55
CC-MAIN-2019-13
1552912201707.53
[]
docs.influxdata.com
Edit a job for the indicator
Add a data collection job to an indicator to collect scores for that indicator.

Before you begin
Role required: pa_admin, pa_power_user, or admin

Procedure
1. Open an existing automated indicator.
2. In the Jobs related list, click Edit.
3. (Optional) Use Add Filter and Run Filter to limit the selection of jobs.
4. Select one or more jobs in the Collections or Jobs List.
5. Use the arrow buttons to move the jobs to the other list.
6. Click Save.
https://docs.servicenow.com/bundle/kingston-performance-analytics-and-reporting/page/use/performance-analytics/task/t_EditAJobForTheIndicator.html
2019-03-18T22:26:20
CC-MAIN-2019-13
1552912201707.53
[]
docs.servicenow.com
Use these metrics to see the overall status of the cluster:
- Disk IO - Average (upper chart)
- Disk IO - Average (lower chart)
- Disk IO - Total

Average implies sum/count for the values reported by all hosts in the cluster. Example: in a 30-second window, if 98 out of 100 hosts reported one or more values, the result is SUM(average value from each host + interpolated value for the 2 missing hosts)/100.

Sum/Total implies the sum of all values in a timeslice (30 seconds) from all hosts in the cluster. The same interpolation rule applies.
https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.3.0/using-ambari-core-services/content/amb_system_home.html
2019-03-18T22:35:21
CC-MAIN-2019-13
1552912201707.53
[]
docs.hortonworks.com
Edit cluster details

You can modify the Cluster Location, Data Center name, Tags, or Description for any cluster registered in DP Platform. The DataPlane Admin role is required to perform this task.

- Click the Clusters icon in the DP Platform navigation pane.
- Optional: Enter a cluster name in the search field and press Enter. You can only search by cluster name. You can search by partial or full names.
- In the cluster list, locate the row for the cluster you want to edit.
- At the end of the row, click the Actions icon and then click Edit. The Edit Cluster page displays.
- Modify the cluster details.
- Click Update. The Clusters page displays a list with the updated cluster.
https://docs.hortonworks.com/HDPDocuments/DP/DP-1.2.1/administration/content/dp_edit_a_cluster_in_dp_platform.html
2019-03-18T22:32:03
CC-MAIN-2019-13
1552912201707.53
[]
docs.hortonworks.com