content: string (length 0 to 557k)
url: string (length 16 to 1.78k)
timestamp: timestamp[ms]
dump: string (length 9 to 15)
segment: string (length 13 to 17)
image_urls: string (length 2 to 55.5k)
netloc: string (length 7 to 77)
The scaffolding plugin configures Grails' support for CRUD via scaffolding. An example of enabling "dynamic" scaffolding:

    class BookController {
        static scaffold = Book
        // the "static scaffold = true" form is not supported in Grails 3.0 and above
    }

Refer to the section on scaffolding in the Grails user guide, which details Grails' scaffolding support.
http://docs.grails.org/latest/ref/Plug-ins/scaffolding.html
2017-06-22T16:51:11
CC-MAIN-2017-26
1498128319636.73
[]
docs.grails.org
Timestamp type Enter a timestamp type as an integer for CQL input, or as a string literal in ISO 8601 formats. Values of the timestamp type are encoded as 64-bit signed integers representing the number of milliseconds since the standard base time known as the epoch: January 1, 1970 at 00:00:00 GMT. String literals use ISO 8601 formats such as yyyy-mm-dd'T'HH:mm:ss.ffffffZ, yyyy-mm-dd, and yyyy-mm-ddZ, where Z is the RFC-822 4-digit time zone, expressing the time zone's difference from UTC. For example, the date and time of Feb 3, 2011, at 04:05:00 AM, GMT can be written as: 2011-02-03 04:05+0000, 2011-02-03 04:05:00+0000, 2011-02-03T04:05+0000, 2011-02-03T04:05:00+0000. You can also omit the time of day, for example: 2011-02-03 or 2011-02-03+0000. In this case, the time of day defaults to 00:00:00 in the specified or default time zone. Timestamp output appears in the following format by default in cqlsh: yyyy-mm-dd HH:mm:ssZ. You can change the output format by setting the time_format property in the [ui] section of the cqlshrc file:

    [ui]
    time_format = %Y-%m-%d %H:%M
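Because a CQL timestamp is just a count of milliseconds since the epoch, it is often useful to convert between that integer form and the ISO 8601 string literals above in client code. A minimal Python sketch, not part of the DataStax documentation (the example value is the epoch-millisecond encoding of the date used above):

    from datetime import datetime, timezone

    # 2011-02-03 04:05:00 GMT expressed as milliseconds since the epoch
    millis = 1296705900000

    # Epoch milliseconds -> one of the string literals accepted by CQL
    dt = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
    print(dt.strftime("%Y-%m-%d %H:%M:%S%z"))   # 2011-02-03 04:05:00+0000

    # An ISO 8601 literal -> epoch milliseconds
    parsed = datetime.strptime("2011-02-03T04:05:00+0000", "%Y-%m-%dT%H:%M:%S%z")
    print(int(parsed.timestamp() * 1000))       # 1296705900000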
http://docs.datastax.com/en/cql/3.3/cql/cql_reference/timestamp_type_r.html
2017-06-22T16:28:17
CC-MAIN-2017-26
1498128319636.73
[]
docs.datastax.com
Managing Your MongoDB Deployment - MongoDB version management - Viewing and killing current operations - Restarting your MongoDB processes - Compacting your database deployment - Initiating a failover for your cluster - mLab's rolling node replacement process Overview When you host your MongoDB database with mLab, the version of MongoDB currently used as the default is version 3.2 (as of September 27, 2016). However, you have the option of selecting other versions. Determining your current MongoDB version Follow these steps to see which version of MongoDB your deployment is currently running: - Log in to the mLab management portal and navigate to your deployment; the running version (e.g., 3.0.7) is displayed there. How to change MongoDB versions Not available for Sandbox databases If you have a for-pay deployment, you can upgrade (or change) the version of MongoDB you are running directly from the mLab management portal. The process is seamless if you are making a replica set connection to one of our Cluster plans. Prerequisites We strongly recommend reviewing the following before a release (major) upgrade: - The requirements for the version to which you are upgrading - MongoDB's Driver Compatibility reference Steps to change versions - Log in to the mLab management portal. - From your account's Home page, navigate to the deployment that will be modified. - Click the "Tools" tab. - Under the "Initiate maintenance operation" header, select the "Change MongoDB version" option. - Select the desired version in the drop-down menu that appears below "This deployment is running MongoDB version…" - Read the instructions and requirements carefully before clicking the "Upgrade to…" or "Patch to…" or "Downgrade to…" button. What to expect If you have a replica set cluster plan, the entire process should take just a few minutes to complete, although some exceptions may apply. We will first restart your non-primary nodes (e.g., arbiter and secondary node(s)) with the binaries for the new version. Then we will intentionally fail you over in order to upgrade your original primary. Finally, we will fail you back over so that your original primary is once again primary. You should experience no downtime if your drivers and client code have been properly configured with a replica set connection. Note that during failover, it may take 5-30 seconds for a new member to be elected primary. If you are on a single-node plan, your database server will be restarted, which usually involves approximately 20 seconds of downtime. If your Dedicated plan deployment is currently running 3.0.x with the MMAPv1 storage engine, note that an upgrade to 3.2.x with the WiredTiger storage engine will also automatically initiate a rolling node replacement process that will seamlessly migrate your deployment to the WiredTiger storage engine over the course of several hours or even days. Read about mLab's rolling node replacement process below. Frequently asked questions Q. Are for-pay deployments automatically upgraded when mLab supports a new MongoDB version? Maintenance (minor) versions We do not automatically patch any for-pay deployments to the latest maintenance/minor version (e.g., 3.2.11 to 3.2.12). Instead, we send out email notifications if maintenance releases have very important bug fixes in them. That being said, if there were any truly critical issues (e.g., one that would result in data loss), it's likely that we would automatically patch and send a notification.
Release (major) versions The only time we will automatically upgrade the MongoDB version on a for-pay deployment is when we de-support the currently-running version. We typically support at least two release (major) versions on our for-pay Shared plans and three release versions on our Dedicated plans (listed here). Eventually, as release versions are de-supported, an upgrade will be necessary. In those cases, we send multiple notifications well in advance of a mandatory upgrade, such as this example notice. If the user doesn't perform the upgrade at their convenience by the stated deadline, we will automatically perform the version upgrade to our minimum supported version. Q. Why can't I change the MongoDB version that's running on my Sandbox database? Because our Sandbox databases are running on server processes shared by multiple users, version changes are not possible. All Sandbox plans are automatically upgraded to the latest MongoDB version we support. To run on a specific version of MongoDB, you will need to upgrade to one of our for-pay plans, which provide your own mongod server process and the flexibility of making version changes at your convenience. Q. How do I test a specific maintenance (minor) version? We do not offer the ability to change to a specific maintenance (minor) version. Maintenance versions are supposed to contain only bug fixes and patches and, as a result, we don't consider it necessary to treat these versions (e.g., 3.2.11 and 3.2.12) differently. At any given time, we offer only the latest maintenance version of each release. That being said, if you are upgrading to a different release (major) version (e.g., 2.6.x vs. 3.0.x), we highly recommend thorough testing in a Staging environment. Viewing and killing current operations Not available for Sandbox databases To quickly see if one or more operations are particularly long-running, use the current operations tool in the mLab management portal, or contact mLab support for help. If you have a Dedicated plan and are in an emergency situation, use the emergency email that we provided to you. Additional reading is available in the mLab documentation. Restarting your MongoDB processes You can restart your deployment's MongoDB processes from the mLab management portal: - Log in to the mLab management portal. - From your account's Home page, navigate to the deployment that needs to be restarted. - Click the "Tools" tab. - Under the "Initiate maintenance operation" header, select the "Restart deployment" option. - Click the "Restart deployment" button. - Follow the instructions in the "Warning" window to confirm the restart, then click the "Restart" button. Compacting your database deployment Sometimes it's necessary to compact your database in order to reclaim disk space (e.g., if you are quickly approaching your storage limits) and/or reduce fragmentation. When you compact your database, you are effectively reducing its file size. Understanding file size vs. data size The fileSize metric is reported under the "Size on Disk" heading in our management console. It is only relevant for deployments running the MMAPv1 storage engine (including Sandbox databases, which use the MMAPv1 storage engine). How to compact your database(s) If you are on a multi-node, highly available replica set cluster plan (Shared or Dedicated) and would like to try to reclaim disk space, you can do so while still using your database. However, compacting a Sandbox or any single-node plan will require downtime while the compaction is taking place.
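Before deciding whether a compaction is worthwhile, you can read the same size metrics shown in the management console directly from MongoDB's dbStats command; the rule of thumb in the compaction steps below compares data size with size on disk. A minimal pymongo sketch (the connection string and database name are placeholders, not mLab-issued values):

    from pymongo import MongoClient

    client = MongoClient("mongodb://user:password@ds012345.mlab.com:12345/mydb")
    stats = client["mydb"].command("dbStats")

    data_size = stats["dataSize"]                                 # logical data size
    disk_size = stats.get("fileSize", stats.get("storageSize"))   # size on disk (fileSize is MMAPv1-only)

    print("dataSize: %.1f MB, size on disk: %.1f MB" % (data_size / 1e6, disk_size / 1e6))
    if disk_size > 1.3 * data_size:
        print("Size on disk is more than 30% larger than the data; a compaction may help.")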
Compacting Sandbox and single-node plan deployments If you are on a Sandbox or single-node plan and would like to try to reclaim disk space, you can use MongoDB's repairDatabase command. If your fileSize or "Size on Disk" is under 1.5 GB, you can typically run this command yourself; otherwise, contact mLab support first. The repairDatabase command is a blocking operation. Your database will be unavailable until the repair is complete. Compacting Shared Cluster plan deployments The process we use for compactions on Shared Cluster plans is to resync each node from scratch. This is a better method of reclaiming disk space than db.repairDatabase() because, while a secondary member of your replica set is being resynced, you'll still be able to use the primary member of your replica set. High-level process: - Resync the current secondary node using an initial sync. - Initiate a failover. - Resync the node that is now secondary (the original primary). Steps to compact: - Log in to the mLab management portal. - Navigate to the Shared Cluster deployment that you want to compact. - On the "Databases" tab, note the values in the "Size" and "Size on Disk" columns. - If your database's "Size on Disk" is only a little larger than the "Size", a compaction will have little or no effect. A good rule of thumb is that a compaction is only likely to be effective if the "Size on Disk" is more than 30% larger than the "Size" value. - Navigate to the "Servers" tab. - First click "resync" on the node that's currently in the state of SECONDARY. - Once the sync is complete, click "step down (fail over)" on the node that's currently in the state of PRIMARY. - Finally, click "resync" on the node that was primary but is now in the state of SECONDARY. Your deployment will not have the same level of availability during the maintenance because the node being synced will be unavailable. In addition, backups could be delayed or cancelled while the sync is in progress. An application will gracefully handle failover events if it has been properly configured to use a replica set connection. Compacting Dedicated Cluster plan deployments The process we use to compact a Dedicated Cluster plan deployment is our seamless rolling node replacement process. This is the best method of reclaiming disk space because your deployment will maintain the same level of availability during the process. Read about mLab's rolling node replacement process below. Steps to compact: - Log in to the mLab management portal. - Navigate to the Dedicated Cluster deployment that you want to compact. - Click the "Tools" tab. - Under the "Initiate maintenance operation" header, select the "Compact using MongoDB's initial sync process" option. - Read the recommendation on whether or not to compact. - Select the failover option in the drop-down menu that appears at the bottom of the "Failover Preference" section. - Click the "Compact" button and confirm that you want to proceed. This will automatically initiate a rolling node replacement to compact your deployment. Initiating a failover for your cluster If you would like to force your current primary to step down, you can do so through the mLab management portal. The following instructions are the equivalent of running the rs.stepDown() function in the mongo shell: - Log in to the mLab management portal. - From your account's Home page, navigate to the deployment that needs a failover. - Click the "Servers" tab. - Click the "step down (fail over)" link that appears under the "Manage" column in the row for your current primary. - In the dialog box that appears, click the "Step down" button.
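The portal's "step down (fail over)" action corresponds to MongoDB's replSetStepDown command. Purely as an illustration (the connection string is a placeholder, and whether your database user is allowed to run this command depends on your plan, so the portal remains the supported route), the same request could be issued from pymongo like this:

    from pymongo import MongoClient
    from pymongo.errors import AutoReconnect, ConnectionFailure

    client = MongoClient("mongodb://user:password@ds012345-a0.mlab.com:12345,"
                         "ds012345-a1.mlab.com:12345/mydb?replicaSet=rs-ds012345")
    try:
        # Ask the current primary to step down for 60 seconds. The connection is
        # expected to drop, which pymongo surfaces as a connection error.
        client.admin.command("replSetStepDown", 60)
    except (AutoReconnect, ConnectionFailure):
        pass  # expected: a new primary is usually elected within 5-30 seconds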
mLab’s rolling node replacement process If you are on a replica set cluster plan with auto-failover, mLab’s rolling node replacement process will allow you to maintain high availability and keep your existing connection string during scheduled maintenance. If your application/driver is properly configured for replica set connections, you should experience no downtime during this process except during failover. A Dedicated Cluster plan cannot be downgraded to a Shared Cluster plan using the rolling node replacement process. However, a downgrade from one Dedicated Cluster plan to another Dedicated Cluster plan using this process is both possible and recommended. What is this process used for? The rolling node replacement process is most commonly used for: - Upgrading or downgrading plans - Migrating to the WiredTiger storage engine - Compactions Steps to replace multiple nodes in a cluster Replacing all electable nodes in a cluster: - For every node to be replaced, mLab will add a new, hidden node to the existing replica set. - To expedite the process, we will use a recent block storage snapshot whenever possible as the basis for this new node. - Otherwise, the replacement node will need to undergo MongoDB’s initial sync process. - Wait for the new nodes to be in the SECONDARY state and in sync with the primary (i.e., no replication lag). - For every secondary to be replaced, mLab will swap it out, updating DNS records to preserve the connection string. - You or mLab will intentionally initiate a failover so that your current primary becomes secondary. - mLab will swap out the final node, now secondary, with the new node, updating DNS records to preserve the connection string. Steps to replace one node in a cluster Replacing one node in a cluster: - mLab will add a new, hidden node to your existing replica set. - To expedite the process, we will use a recent block storage snapshot whenever possible as the basis for this new node. - Otherwise, the replacement node will need to undergo an initial sync. - Wait for the new node to be in the SECONDARY state and in sync with the primary (i.e., no replication lag). - If the node being replaced is currently primary, either you or mLab will intentionally initiate a failover so that your current primary becomes secondary. - mLab will swap out your existing node with the new node, updating DNS records to preserve the connection string. Expected impact on running applications The rolling node replacement process is mostly seamless. However, be aware that: If MongoDB’s initial sync process is necessary for the maintenance event (e.g., during a compaction), the syncing process will add additional read load during the initial, clone phase of the initial sync. During a failover it may take 5-30 seconds for a primary to be elected. If your application has not been configured with a replica set connection that can handle failover, writes will continue to fail after the new primary is elected. As such, mLab will coordinate with you for the required failover unless you explicitly tell us it’s not necessary (see next section). MongoDB’s replica set reconfiguration command, replSetReconfig, will be run in an impactful way two times during this process. While this command can sever existing connections and temporarily cause errors in driver logs, these types of disconnects usually have minimal effect on application/drivers that have been configured properly. 
Notification and coordination Swapping out a current secondary: - We will notify you when we swap out your current secondary(ies) with replacement nodes. Swapping out your current primary: - When your current primary is ready to be swapped out, we will coordinate with you so that you can initiate the required failover at the time that makes the most sense for you and/or your team. - If you know that your application has no trouble handling failover, let us know, and we can initiate the required failover on your behalf immediately before we swap out your current primary. Additional charges The extra virtual machines that are used during a rolling node replacement process in order to maintain the same level of availability may incur charges.
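Most of the maintenance operations described above assume that your application uses a replica set connection so that failovers are handled automatically. A minimal pymongo sketch of such a connection with a simple retry loop (hostnames, database name, and credentials are placeholders):

    import time
    from pymongo import MongoClient
    from pymongo.errors import AutoReconnect

    # Both members and the replicaSet name are listed so the driver can follow elections.
    uri = ("mongodb://user:password@ds012345-a0.mlab.com:12345,"
           "ds012345-a1.mlab.com:12345/mydb?replicaSet=rs-ds012345")
    client = MongoClient(uri)
    events = client["mydb"]["events"]

    def insert_with_retry(doc, attempts=5):
        """Retry briefly across a failover; elections usually finish in 5-30 seconds."""
        for attempt in range(attempts):
            try:
                return events.insert_one(doc)
            except AutoReconnect:
                time.sleep(2 ** attempt)  # back off while a new primary is elected
        raise RuntimeError("no primary elected in time")

    insert_with_retry({"msg": "hello"})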
http://docs.mlab.com/ops/
2017-06-22T16:18:14
CC-MAIN-2017-26
1498128319636.73
[]
docs.mlab.com
The site is and they asked me […] 08-Apr-2015 Blog, Tips Everybody is talking about Google's upcoming algorithm update. Then go to your WordPress site's admin dashboard and navigate to "Plugins > Add New > Upload". Now upload the plugin, then install and activate it. The plugin also allows you to deliver your content across platforms and operating systems. It provides an option to select your favourite theme and to customize the theme so that it suits your website's identity.
http://wp-docs.ru/2016/10/09/14539-mobipress-%D1%88%D0%B0%D0%B1%D0%BB%D0%BE%D0%BD
2017-06-22T16:37:32
CC-MAIN-2017-26
1498128319636.73
[array(['http://2.bp.blogspot.com/-vaJCI6Eed8U/Vfdvp7eAsYI/AAAAAAAAAW0/hyv2NkT1wHA/s1600/unnamed.jpg', None], dtype=object) ]
wp-docs.ru
A collection of validators. The class that all other validators must inherit from. Validates for only alphabetic characters. Validates a credential, normally a password. Validates for currency format. Validates for a well-formed email address. Validates for an array of Identifiers. Validates for an Identifier. Validates an inequality such as <, >, <=, >=, or !=. Validates string length. Allows null values to be valid. Validates for the name of a person. Validates null values to be invalid. Allows multiple validators to be chained together into a single compound validator.
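The page above lists the validators but not their API, so the following is only a generic Python sketch of the compound-validator idea described in the last sentence; the class and method names are illustrative and are not taken from this package:

    class Validator:
        """Base class: concrete validators implement is_valid()."""
        def is_valid(self, value):
            raise NotImplementedError

    class AlphaValidator(Validator):
        def is_valid(self, value):
            return isinstance(value, str) and value.isalpha()

    class LengthValidator(Validator):
        def __init__(self, minimum=0, maximum=None):
            self.minimum, self.maximum = minimum, maximum
        def is_valid(self, value):
            n = len(value)
            return n >= self.minimum and (self.maximum is None or n <= self.maximum)

    class CompoundValidator(Validator):
        """Chains several validators; a value is valid only if every member accepts it."""
        def __init__(self, *validators):
            self.validators = validators
        def is_valid(self, value):
            return all(v.is_valid(value) for v in self.validators)

    # Example: a person's name must be alphabetic and between 2 and 30 characters.
    name_check = CompoundValidator(AlphaValidator(), LengthValidator(2, 30))
    print(name_check.is_valid("Ada"))   # True
    print(name_check.is_valid("A1"))    # False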
http://sds.readthedocs.io/en/latest/validator/index.html
2017-06-22T16:26:15
CC-MAIN-2017-26
1498128319636.73
[]
sds.readthedocs.io
1 Introduction To create a mobile app, you need platform-specific app signing keys. A mobile app is signed with a digital signature by its developers before publication. These signatures are used by both app stores and devices to verify that the app is authentic. Depending on which platforms you want to target, you will need to create the required signing keys. The following sections describe (per platform) how to create those keys. 2 iOS Unfortunately, signing keys are always required for iOS app deployment, even if you just want to test the app on your personal device and do not want to publish to the Apple App Store. This section describes how to create the required files. It is convenient to have an Apple Mac available, but it is not a requirement. You do always need an Apple Developer Account. 2.1 On Apple Macs If you have an Apple Mac available, see the Apple developer documentation on certificate management for information on how to obtain an iOS signing certificate and distribution profile. Next, see the Apple documentation on how to create the required distribution profile. Finally, check the end of this section for information on how to upload the signing key files to Adobe PhoneGap Build. 2.2 On Other Platforms If you do not have an Apple Mac available, you can create a certificate signing request manually. First, create a private key and certificate signing request with the OpenSSL utility. The following steps assume you have a Windows machine, but they are equally applicable to Linux machines, which usually have the OpenSSL package pre-installed. To create a certificate signing request manually, follow these steps: - Install OpenSSL, for example, in C:\OpenSSL (make note of this directory, as you will need it in step 3). - Open a command prompt. On most systems, you need to do this as an administrator (right-click the Windows start menu link and select Run as Administrator). - Generate a private key with the OpenSSL program that you just installed. Replace C:\OpenSSL with where you installed OpenSSL in step 1. The private key file is stored at the location specified after the -out parameter. The following example will store the file in the root directory of your C: drive (you can change this to anything you want, just select a convenient place and keep track of where the file is stored): "C:\OpenSSL\bin\openssl.exe" genrsa -out "C:\private.key" 2048. The command will output “Generating RSA private key, 2048 bit long modulus” and lots of dots and plus signs. - Create a certificate signing request (ios.csr) with the private key, then request a certificate in the Apple Developer Member Center. Follow these steps to do that: - Open the Apple Developer Member Center. - Under iOS, tvOS, watchOS, click Certificates, All. - In the iOS Certificates overview, click the plus button on the top-right. This will open the Add iOS Certificate wizard in the Select Type step with the caption “What type of certificate do you need?”. - If the plus button is disabled (greyed out), you do not have enough rights. Ask the company account administrator to give you extra rights. - Under Development, select iOS Development Certificate. - Click Continue. You are now at the step About Creating a Certificate Signing Request (CSR). Continue through the wizard, upload the CSR file, and download the generated certificate to a convenient location (for example, next to the private key and CSR files). - Click Done. The iOS Certificates overview page becomes visible again. Your new certificate should be in the list. Here, you can download it again, or you can revoke it (in case you lose the corresponding private key). 2.3 Creating the Required Distribution Profile Once you have the certificate file, you need to obtain a distribution profile.
The Apple Developer Member Center allows you to define an app identifier, a test device, and finally a distribution profile. For more information, check the Apple documentation on how to maintain identifiers, devices, and profiles. 2.4 Uploading the Key to Adobe PhoneGap Build Once you have downloaded the signing certificate (a .cer file), you need to convert the signing certificate from a .cer to a .p12. Use OpenSSL with the following steps: - Create a PEM-format certificate from the signing certificate: "C:\OpenSSL\bin\openssl.exe" x509 -in "C:\ios.cer" -inform DER -out "C:\ios_pem.pem" -outform PEM. - Create a password-secured .p12 file from the PEM certificate. This action requires the PEM certificate, the private key that was created in step 3 earlier, and the password that was given on the creation of the ios.csr: "C:\OpenSSL\bin\openssl.exe" pkcs12 -export -out "C:\ios.p12" -inkey "C:\private.key" -in "C:\ios_pem.pem". - You can upload the signing certificate (now a .p12 file) and the distribution profile (a .mobileprovision file) to Adobe PhoneGap Build on your account page. Go to the Signing Keys tab. 3 Android To create the Android key store file, use the keytool utility found in your Java installation directory, for example, C:\Program Files\Java\jre1.8.0_20\bin. After creating the key store file, upload it to Adobe PhoneGap Build on your account page. Go to the Signing Keys tab.
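If you would rather script the private key and CSR generation than run the OpenSSL commands by hand, the same artifacts can be produced with Python's cryptography package. This is an alternative sketch, not part of the Mendix instructions; the file names and subject fields are placeholders:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # 2048-bit RSA key, equivalent to "openssl genrsa -out private.key 2048"
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Certificate signing request signed with that key
    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.EMAIL_ADDRESS, u"you@example.com"),
            x509.NameAttribute(NameOID.COMMON_NAME, u"Your Name"),
            x509.NameAttribute(NameOID.COUNTRY_NAME, u"US"),
        ]))
        .sign(key, hashes.SHA256())
    )

    with open("private.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        ))
    with open("ios.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))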
https://docs.mendix.com/refguide/managing-app-signing-keys
2018-08-14T13:17:13
CC-MAIN-2018-34
1534221209040.29
[]
docs.mendix.com
By Carles Sora and Neus Ballús In recent years, the number of people with university qualifications has increased enormously. In a society in which each new academic year sees more university degree holders, it is important to reflect on the role that university plays in our lives. Ambtítol is a compilation of the experiences of several former university students; it seeks to find out how their experiences at university have marked their lives personally, socially, professionally, intellectually… “Ambtítol tries to establish an open debate on the role that university plays in our societies” Ambtítol was conceived as an interactive documentary because we understand that the issue we are dealing with requires the users’ active role and participation. The aim is to set up an environment for all those former university students who wish to join this collective project – wherever they are. As such, Ambtítol tries to establish an open debate on the role that university plays in our societies. And to do so we need a lively open format, like the webdoc. Ambtítol proposes active navigation in which users have to build their documentary, choosing different testimonials of experiences of life around university. The users have to choose several times between two or three characters, presented with an animated image and a question that directly addresses them. With their choices they go on a personal journey corresponding to three time periods associated with university experience: the decision to study at university, the study period itself, and post-university working life. An example of navigating from the start of the webdoc to the end discussion space The idea of providing the opportunity to view the journey via a map is to give access to other characters. Thus, once the user has reached the final space, he can start the journey anew and learn about alternative experiences from a different character’s perspective. The relational structure and the short duration of the testimonial clips (one to two minutes) aim to encourage the user to generate multiple paths of between three and six characters each. “The relational structure and the short duration of the testimonial clips aim to encourage the user to generate multiple paths” During navigation, there are times when the user comes to a participative space where he must select the concept that, for him, best defines each of five relevant questions posed by the webdoc about university experience: Decisive moments, The world of work, The ideal university, Relationships and group life, Travelling. The choice of these concepts is cumulative and generates a ranking of the concepts and issues that are most relevant to the viewers of the webdoc. This ranking can be seen at the end of the journey in the “Discussion area”, where you can see which concepts have received the most votes from all users and where you can also leave a written comment. Concerning the issue of what the “ideal university” looks like, for example, right now the most-voted concept is “Make people wiser”. The final consultation space. From all these unique experiences – the ones we portray and the ones the users themselves contribute – we intend to generate a debate around the future of the university. In this collective reflection we want to bring to light problems or shortcomings that the public university is currently facing, so that ideas as to how the situation could be improved may emerge.
Thus, the role of Ambtítol is to describe the role university plays in our lives from a very personal and unique angle, but underlying it there is also a desire to improve present reality. During the coming months, Ambtítol aims to stimulate discussion by presenting the project in Barcelona and other cities, delivering talks and parallel activities at other universities, foundations and secondary schools. The aim of these activities is not only to expand potential audiences, but above all, to use the webdoc as a strategy to encourage public debate on the situation of the university today. Neus Ballús (Mollet del Vallès, 1980) is a film director and scriptwriter. Her first feature-length film, La plaga (The Plague, 2013), was screened at the 63rd Berlinale (Berlin International Film Festival) and won four Gaudí Awards in 2013. She has also been nominated for the 2013 LUX Prize and the European Film Awards, besides receiving a nomination for the Goya Prize. Tracing the relationships among five local characters, the film portrays everyday life in Gallecs, in a rural zone of the Vallès Oriental region on the periphery of Barcelona. With a degree in Audiovisual Communication and a Master’s in Documentary Making from the Pompeu Fabra University, she has also made the shorts La Gabi (2004), L’avi de la càmera (Grandad with Camera, 2005) and the documentary Immersió (2009), the latter work being filmed underwater in a public swimming pool. This film was awarded the Best Short Film Prize at the ALCINE Festival. Her next film will be shot in Senegal, dealing with the western tourism experience. Carles Sora (Barcelona, 1980) is an interactive designer, media artist and lecturer in digital media at the Audiovisual Studies Department of Pompeu Fabra University, where he teaches graduate and postgraduate courses. He holds a PhD in Social Communication with research on digital temporalities from UPF (2015), an Interdisciplinary Master in Cognitive Systems and Interactive Media from UPF (2009) and a B.S. in Multimedia from UPC (2004). He has directed and participated in several interactive projects in contexts such as visual arts, theatre and museography. He was a founding member of the interactive exhibition studio Touche.cat (2009) and director of the Digital Technologies for the Stage programme (2010). He is an advocate of digital temporalities, interactive documentaries for social change and other forms of creative technology. He has presented his work in several academic and artistic international venues, among them Recto/Verso, as artist in residence at Méduse (Quebec), and the Muestra Internacional de Performance (México DF).
http://i-docs.org/2016/04/13/qualified-ambtitol-the-university-subject-of-debate/
2018-08-14T14:20:36
CC-MAIN-2018-34
1534221209040.29
[array(['https://i0.wp.com/i-docs.org/wp-content/uploads/2016/04/img2-copy-1.jpg?resize=1022%2C715', 'img2-copy'], dtype=object) array(['https://i0.wp.com/i-docs.org/wp-content/uploads/2016/04/img3-copy-1.jpg?resize=1021%2C541', 'The final consultation space.'], dtype=object) ]
i-docs.org
In releases prior to ClustrixDB 9, the database could only be run as the root system user. With ClustrixDB 9, it can now be run as a non-root user on CentOS 7. Installing ClustrixDB requires root or sudo access, but as part of installation you can specify which user(s) should be used to run and manage the database. Overview of Users The following Linux OS user accounts are used to install and operate ClustrixDB: There is no performance difference with running ClustrixDB as a non-root user. The ClustrixDB installer will create both the clxd and clxm users if they do not already exist. To reduce confusion within your team and when working with Clustrix Support, we recommend leaving the ClustrixDB Daemon user at the default (clxd). This default user name helps identify this Linux user as a daemon-only account that should not be used by administrators during normal operation. You may wish to use a different Linux user for the ClustrixDB Management user. This can be specified as part of installation. For example, if you normally log into Linux using a user named sysops, and you would like to manage ClustrixDB while logged in as this sysops user, then during the ClustrixDB installation, select sysops as the Management user instead of clxm. OS users cannot be modified once installation is complete. Configure clxd and clxm Linux users When using the recommended options, the ClustrixDB installer will automatically create the daemon (clxd) and management (clxm) users and grant the associated privileges. If you prefer to specify existing users, please note the following: ClustrixDB Daemon (clxd): This Linux user should not be granted sudo privileges. Doing so would effectively allow the ClustrixDB installation to run with root privileges. To facilitate cluster-wide upgrades of the ClustrixDB software, the clxd Linux user should have passwordless SSH access configured between ClustrixDB nodes. See Configure SSH Authentication for instructions on how to set this up. ClustrixDB Management (default: clxm): This Linux user does not require sudo privileges. To facilitate easy use of the ClustrixDB command-line management tools, passwordless SSH access between ClustrixDB nodes should be configured for this user. Non-root vs root installation and upgrade: There is no performance difference with running ClustrixDB as a non-root user. Differences between non-root and root: Host-based authentication is not supported (see Configure SSH Authentication for more information on connectivity between nodes in ClustrixDB 9 non-root). If the database is in read-only mode, taking a mysqldump requires using the --lock-tables=false option. Prepare a system for running as non-root In addition to the normal steps to prepare a system for running ClustrixDB, if you previously installed ClustrixDB using a root-based install, perform the following steps (as root) to prepare a node for a non-root install. Clustrix now supports RHEL/CentOS 7. If you are migrating from an existing installation to a non-root installation, Clustrix recommends also migrating to RHEL/CentOS 7 at the same time.
http://docs.clustrix.com/display/CLXDOC/ClustrixDB+Operating+System+Users
2018-08-14T13:33:40
CC-MAIN-2018-34
1534221209040.29
[]
docs.clustrix.com
https://docs.microsoft.com/en-us/windows/desktop/VSS/volume-shadow-copy-service-portal
2018-08-14T14:12:04
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
Safely detach a storage device from your host. - Browse to the host in the vSphere Web Client navigator.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-F2E75F67-740B-4406-9F0C-A2D99A698F2A.html
2018-08-14T13:38:27
CC-MAIN-2018-34
1534221209040.29
[]
docs.vmware.com
Do you provide historical data? Yes, you can use the archive to access data older than 30 days. How are results sorted? By default (when the sort parameter isn't specified) the results are sorted by the recommended order of crawl date. You can, however, change the sort order by using the following values: - relevancy - social.facebook.likes - social.facebook.shares - social.facebook.comments - social.gplus.shares - social.pinterest.shares - social.linkedin.shares - social.stumbledupon.shares - social.vk.shares - replies_count - participants_count - spam_score - performance_score - domain_rank - ord_in_thread - rating For example, adding &sort=social.facebook.likes to the call will return posts ordered by the number of likes. Why do the thread and post URLs go through Omgili.com? On the free plan, URLs for posts and threads redirect through Omgili.com with a 5-second redirect lag. This way we show site owners that webhose.io is a significant traffic referral source. Do you filter out spam? Each thread is given a spam score, ranging from 0 to 1, indicating how spammy the text is. For example, you can filter out threads with a spam score higher than 0.5 by adding the term "spam_score:<=0.5" to the search query. My result set shows the same article link multiple times - don't you filter out duplicates? We do filter out duplicates. You may get the same article link multiple times if your query matches multiple comments for the same article. Webhose.io searches at the post level, so results include each post that matched your query. Each post also contains information about its containing thread, and one of the properties of the thread is the article link. That's the reason you might see the same link multiple times. If you want to search only for the first post (i.e., only the article and no comments), add is_first:true to your query. For example, opera is_first:true will return only articles (i.e., no comments) containing the word "opera". How many keywords can we track per month? You can enter any Boolean query with no set limit to the number of tracked keywords. The plan limit refers to the number of monthly requests, which you can upgrade at any time. How many sources do you crawl? / Can you share your complete list of sources on your crawling cycle? Webhose.io does not share this information. We could never provide a comprehensive list that is up to date, as it is by nature an ever-evolving and continuously updating dataset that aggregates a vast volume of sources. What we can tell you, however, is that the number of sources is in the millions, with over 10MM posts indexed daily. We pride ourselves on our ability to quickly add sources that we don't yet have covered within a few hours. Moreover, you can quickly use the API query builder domain field to confirm coverage for a particular source. Customers send us source requests (often including a long list of sources), and we can report back to you regarding our coverage in a day or two. Does your search support entity extraction (like people, companies, locations)? Yes. You can search by person, location or organization on news or blog posts in English. For example, organization:apple will return news or blog posts mentioning Apple the company and not the fruit. Can I get the highlighted fragments that matched my query? Yes. Just add highlight=true as a parameter to your call.
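Putting several of the answers above together, a request that sorts by Facebook likes, filters out spammy threads, and keeps only first posts might look like the Python sketch below. The endpoint path, token, and the response fields accessed at the end are assumptions for illustration; check your webhose.io dashboard for the exact values:

    import requests

    BASE_URL = "https://webhose.io/filterWebContent"   # assumed endpoint; see your dashboard
    params = {
        "token": "YOUR_API_TOKEN",                     # placeholder
        "format": "json",
        "sort": "social.facebook.likes",               # one of the sort values listed above
        "q": "opera is_first:true spam_score:<=0.5",   # only articles, low spam score
    }

    resp = requests.get(BASE_URL, params=params)
    resp.raise_for_status()
    for post in resp.json().get("posts", []):          # assumed response layout
        print(post["thread"]["url"])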
https://docs.webhose.io/v1.0/docs/frequently-asked-questions
2018-08-14T14:23:32
CC-MAIN-2018-34
1534221209040.29
[]
docs.webhose.io
Changelog Here you can see the full list of changes between each Flask-Restless release. Numbers following a pound sign (#) refer to GitHub issues. Note As of version 0.13.0, Flask-Restless supports Python 2.6, 2.7, and 3. Before that, it supported Python 2.5, 2.6, and 2.7. Note As of version 0.6, Flask-Restless supports both pure SQLAlchemy and Flask-SQLAlchemy models. Before that, it supported only Elixir models. Version 0.17.0 Released on February 17, 2015. - Corrects bug to allow delayed initialization of multiple Flask applications. - #167: allows custom serialization/deserialization functions. - #198: allows arbitrary Boolean expressions in search query filters. - #226: allows creating APIs before initializing the Flask application object. - #274: adds the url_for() function for computing URLs from models. - #379: improves datetime parsing in search requests. - #398: fixes bug where DELETE_SINGLE processors were not actually used. - #400: disallows excluding a primary key on a POST request. Version 0.16.0 Released on February 3, 2015. - #237: allows bulk delete of model instances via the allow_delete_many keyword argument. - #313, #389: APIManager.init_app() now can be correctly used to initialize multiple Flask applications. - #327, #391: allows ordering searches by fields on related instances. - #353: allows search queries to specify group_by directives. - #365: allows preprocessors to specify return values on GET requests. - #385: makes the include_methods keyword argument respect model properties. Version 0.15.1 Released on January 2, 2015. Version 0.15.0 Released on October 30, 2014. - #320: detect settable hybrid properties instead of raising an exception. - #350: allows exclude/include columns to be specified as SQLAlchemy column objects in addition to strings. - #356: rollback the SQLAlchemy session on a failed PATCH request. - #368: adds missing documentation on using custom queries (see Custom queries). Version 0.14.2 Released on September 2, 2014. Version 0.14.1 Released on August 26, 2014. Version 0.14.0 Released on August 12, 2014. - Fixes bug where primary key specified by user was not being checked in some POST requests and some search queries. - #223: documents CORS example. - #280: don’t expose raw SQL in responses on database errors. - #299: show error message if search query tests for NULL using comparison operators. - #315: check for query object being None. - #324: DELETE should only return 204 No Content if something is actually deleted. - #325: support null inside has search operators. - #328: enable automatic testing for Python 3.4. - #333: enforce limit in helpers.count(). - #338: catch validation exceptions when attempting to update relations. - #339: use user-specified primary key on PATCH requests. - #344: correctly encodes Unicode fields in responses. Version 0.13.1 Released on April 21, 2014. Version 0.13.0 Released on April 6, 2014. - Allows universal preprocessors or postprocessors; see Universal preprocessors and postprocessors. - Allows specifying which primary key to use when creating endpoint URLs. - Requires SQLAlchemy version 0.8 or greater. - #17: use Flask’s flask.Request.json to parse incoming JSON requests. - #29: replace custom jsonify_status_code function with built-in support for return jsonify(), status_code style return statements (new in Flask 0.9). - #51: use mimerender to render dictionaries to JSON format. - #247: adds support for making POST requests to dictionary-like association proxies.
- #249: returns 404 Not Found if a search reveals no matching results. - #254: returns 404 Not Found if no related field exists for a request with a related field in the URL. - #256: makes search parameters available to postprocessors for GET and PATCH requests that access multiple resources. - #263: adds Python 3.3 support; drops Python 2.5 support. - #267: adds compatibility for legacy Microsoft Internet Explorer versions 8 and 9. - #270: allows the query attribute on models to be a callable. - #282: order responses by primary key if no order is specified. - #284: catch DataError and ProgrammingError exceptions when bad data are sent to the server. - #286: speed up paginated responses by using an optimized count() function. - #293: allows sqlalchemy.Time fields in JSON responses. Version 0.12.1 Released on December 1, 2013. - #222: on POST and PATCH requests, recurse into nested relations to get or create instances of related models. - #246: adds pysqlite to test requirements. - #260: return a single object when making a GET request to a relation sub-URL. - #264: all methods now execute postprocessors after setting headers. - #265: convert strings to dates in related models when making POST requests. Version 0.12.0 Released on August 8, 2013. - #188: provides metadata as well as normal data in JSONP responses. - #193: allows DELETE requests to related instances. - #215: removes Python 2.5 tests from Travis configuration. - #216: don’t resolve Query objects until the pagination function. - #217: adds missing indices in format string. - #220: fix bug when checking attributes on a hybrid property. - #227: allows client to request that the server use the current date and/or time when setting the value of a field. - #228 (as well as #212, #218, #231): fixes issue due to a module removed from Flask version 0.10. Version 0.11.0 Released on May 18, 2013. - Requests that require a body but don’t have Content-Type: application/json will cause a 415 Unsupported Media Type response. - Responses now have Content-Type: application/json. - #180: allow more expressive has and any searches. - #195: convert UUID objects to strings when converting an instance of a model to a dictionary. - #202: allow setting hybrid properties with expressions and setters. - #203: adds the include_methods keyword argument to APIManager.create_api(), which allows JSON responses to include the result of calling arbitrary methods of instances of models. - #204, #205: allow parameters in the Content-Type header. Version 0.10.1 Released on May 8, 2013. Version 0.10.0 Released on April 30, 2013. - #2: adds basic GET access to one level of relationship depth for models. - #113: interpret empty strings for date fields as None objects. - #115: use Python’s built-in assert statements for testing. - #128: allow disjunctions when filtering search queries. - #130: documentation and examples now more clearly show search examples. - #135: added support for hybrid properties. - #139: remove custom code for authentication in favor of user-defined pre- and postprocessors (this supersedes the fix from #154). - #141: relax requirement for version of python-dateutil to be not equal to 2.0 if using Python version 2.6 or 2.7. - #146: preprocessors now really execute before other code. - #148: adds support for SQLAlchemy association proxies. - #154 (this fix is irrelevant due to #139): authentication function now may raise an exception instead of just returning a Boolean. - #157: POST requests now receive a response containing all fields of the created instance.
- #162: allow pre- and postprocessors to indicate that no change has occurred. - #164, #172, and #173: PATCH requests update fields on related instances. - #165: fixed bug in automatic exposing of URLs for related instances. - #170: respond with correct HTTP status codes when a query for a single instance results in none or multiple instances. - #174: allow dynamically loaded relationships for automatically exposed URLs of related instances. - #176: get model attribute instead of column name when getting name of primary key. - #182: allow POST requests that set hybrid properties. - #152: adds some basic server-side logging for exceptions raised by views. Version 0.9.3 Released on February 4, 2013. - Fixes incompatibility with Python 2.5 try/except syntax. - #116: handle requests which raise IntegrityError. Version 0.9.2 Released on February 4, 2013. Version 0.9.1 Released on January 17, 2013. - #126: fix documentation build failure due to bug in a dependency. - #127: added “ilike” query operator. Version 0.9.0 Released on January 16, 2013. - Removed ability to provide a Session class when initializing APIManager; provide an instance of the class instead. - Changes some dynamically loaded relationships used for testing and in examples to be many-to-one instead of the incorrect one-to-many. Versions of SQLAlchemy after 0.8.0b2 raise an exception when the latter is used. - #105: added ability to set a list of related model instances on a model. - #107: server responds with an error code when a PATCH or POST request specifies a field which does not exist on the model. - #108: dynamically loaded relationships should now be rendered correctly by the views._to_dict() function regardless of whether they are a list or a single object. - #109: use sphinxcontrib-issuetracker to render links to GitHub issues in documentation. - #110: enable results_per_page query parameter for clients, and added max_results_per_page keyword argument to APIManager.create_api(). - #114: fix bug where string representations of integers were converted to integers. - #117: allow adding related instances on PATCH requests for one-to-one relationships. - #123: PATCH requests to instances which do not exist result in a 404 Not Found response. Version 0.8.0 Released on November 19, 2012. - #94: views._to_dict() should return a single object instead of a list when resolving dynamically loaded many-to-one relationships. - #104: added num_results key to paginated JSON responses. Version 0.7.0 Released on October 9, 2012. - Added working include and exclude functionality to the views._to_dict() function. - Added exclude_columns keyword argument to APIManager.create_api(). - #79: attempted to access attribute of None in constructor of APIManager. - #83: allow POST requests with one-to-one related instances. - #86: allow specifying include and exclude for related models. - #91: correctly handle POST requests to nullable DateTime columns. - #93: added a total_pages mapping to the JSON response. - #98: GET requests to the function evaluation endpoint should not have a data payload. - #101: exclude in the views._to_dict() function now correctly excludes requested fields from the returned dictionary. Version 0.6 Released on June 20, 2012. - Added support for accessing model instances via arbitrary primary keys, instead of requiring an integer column named id. - Added example which uses curl as a client. - Added support for pagination of responses.
- Fixed issue due to symbolic link from README to README.md when running pip bundle foobar Flask-Restless. - Separated API blueprint creation from registration, using APIManager.create_api() and APIManager.create_api_blueprint(). - Added support for pure SQLAlchemy in addition to Flask-SQLAlchemy. - #74: added post_form_preprocessor keyword argument to APIManager.create_api(). - #77: validation errors are now correctly handled on PATCH requests. Version 0.5 Released on April 10, 2012. - Dual-licensed under GNU AGPLv3+ and 3-clause BSD license. - Added capturing of exceptions raised during field validation. - Added examples/separate_endpoints.py, showing how to create separate API endpoints for a single model. - Added include_columns keyword argument to the create_api() method to allow users to specify which columns of the model are exposed in the API. - Replaced Elixir with Flask-SQLAlchemy. Flask-Restless now only supports Flask-SQLAlchemy. Version 0.4 Released on March 29, 2012. - Added Python 2.5 and Python 2.6 support. - Allow users to specify which HTTP methods for a particular API will require authentication and how that authentication will take place. - Created base classes for test cases. - Moved the evaluate_functions function out of the flask_restless.search module and corrected documentation about how function evaluation works. - Added allow_functions keyword argument to create_api(). - Fixed bug where we weren’t allowing PUT requests in create_api(). - Added collection_name keyword argument to create_api() to allow user-provided names in URLs. - Added allow_patch_many keyword argument to create_api() to allow enabling or disabling the PATCH many functionality. - Disable the PATCH many functionality by default.
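For orientation, the API that these changelog entries refer to is typically used as in the minimal sketch below, modeled on the 0.17-era quickstart; the model and field names are invented for the example:

    import flask
    import flask_sqlalchemy
    import flask_restless

    app = flask.Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:////tmp/test.db"
    db = flask_sqlalchemy.SQLAlchemy(app)

    class Person(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        name = db.Column(db.Unicode)
        birth_date = db.Column(db.Date)

    db.create_all()

    # APIManager wires models to REST endpoints such as /api/person.
    manager = flask_restless.APIManager(app, flask_sqlalchemy_db=db)
    manager.create_api(Person, methods=["GET", "POST", "DELETE"], results_per_page=10)

    if __name__ == "__main__":
        app.run()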
http://flask-restless.readthedocs.io/en/stable/changelog.html
2018-08-14T13:40:11
CC-MAIN-2018-34
1534221209040.29
[]
flask-restless.readthedocs.io
DateTime::Format::Builder - Create DateTime parser classes and objects. - NAME - VERSION - SYNOPSIS - DESCRIPTION - TUTORIAL - ERROR HANDLING AND BAD PARSES - SINGLE SPECIFICATIONS - MULTIPLE SPECIFICATIONS - EXECUTION FLOW - METHODS - SUBCLASSING - USING BUILDER OBJECTS aka USERS USING BUILDER - LONGER EXAMPLES - THANKS - SUPPORT - SEE ALSO - AUTHORS NAME DateTime::Format::Builder - Create DateTime parser classes and objects. VERSION version 0.81 SYNOPSIS

    package DateTime::Format::Brief;
    use DateTime::Format::Builder (
        parsers => {
            parse_datetime => [
                {
                    regex  => qr/^(\d{4})(\d\d)(\d\d)(\d\d)(\d\d)(\d\d)$/,
                    params => [qw( year month day hour minute second )],
                },
                {
                    regex  => qr/^(\d{4})(\d\d)(\d\d)$/,
                    params => [qw( year month day )],
                },
            ],
        }
    );

DESCRIPTION DateTime::Format::Builder creates DateTime parser classes and objects from parser specifications such as the one above. The generated methods die on bad parses and return DateTime objects on good parses. TUTORIAL See DateTime::Format::Builder::Tutorial. ERROR HANDLING AND BAD PARSES. SINGLE SPECIFICATIONS A single specification is a hash ref of instructions on how to create a parser. The precise set of keys and values varies according to parser type. There are some common ones though: length is an optional parameter that can be used to specify that this particular regex is only applicable to strings of a certain fixed length. This can be used to make parsers more efficient. It's strongly recommended that any parser that can use this parameter does so. label provides a name for the specification and is passed to some of the callbacks about to be mentioned. on_match and on_fail are callbacks. Both routines will be called with parameters of: input, being the input to the parser (after any preprocessing callbacks). label, being the label of the parser, if there is one. self, being the object on which the method has been invoked (which may just be a class name). Naturally, you can then invoke your own methods on it to get information you want. args, being an arrayref of any passed arguments, if any. If there were no arguments, then this parameter is not given. These routines will be called depending on whether the regex match succeeded or failed. preprocess is a callback provided for cleaning up input prior to parsing. It's given a hash as arguments with the following keys: input being the datetime string the parser was given (if using multiple specifications and an overall preprocess then this is the date after it's been through that preprocessor). parsed being the state of parsing so far. Usually empty at this point unless an overall preprocess was given. Items may be placed in it and will be given to any postprocessor and DateTime->new() (unless the postprocessor deletes them). self, args, label as per on_match and on_fail. The parser variant of preprocess is performed after any length calculations. postprocess is the last code stop before the parsed fields are handed to DateTime->new(). Two parser implementations are provided with Builder: DateTime::Format::Builder::Parser::Regex - provides regular expression based parsing. DateTime::Format::Builder::Parser::Strptime - provides strptime based parsing. Subroutines / coderefs as specifications. Callbacks I mention a number of callbacks in this document. Any time you see a callback being mentioned, you can, if you like, substitute an arrayref of coderefs rather than having the straight coderef. MULTIPLE SPECIFICATIONS These are very easily described as an array of single specifications. Note that if the first element of the array is an arrayref, then you're specifying options. preprocess lets you specify a preprocessor that is called before any of the parsers are tried.
This lets you do things like strip off timezones or any unnecessary data. The most common use people have for it at present is to get the input date to a particular length so that the length is usable (DateTime::Format::ICal would use it to strip off the variable-length timezone). Arguments are as for the single parser preprocess variant, with the exception that label is never given. on_fail should be a reference to a subroutine that is called if the parser fails. If this is not provided, the default action is to call DateTime::Format::Builder::on_fail, or the on_fail method of the subclass of DTFB that was used to create the parser. EXECUTION FLOW Builder allows you to plug in a fair few callbacks, which can make following how a parse failed (or succeeded unexpectedly) somewhat tricky. For Single Specifications A single specification will do the following: User calls the parser: my $dt = $class->parse_datetime( $string ); preprocess is called. It's given $string and a reference to the parsing workspace hash, which we'll call $p. At this point, $p is empty. The return value is used as $date for the rest of this single parser. Anything put in $p is also used for the rest of this single parser. regex is applied. If the regex did not match, then on_fail is called and the parse fails. Otherwise, postprocess is called with the parsed data and the workspace $p. For Multiple Specifications With multiple specifications: User calls the parser: my $dt = $class->complex_parse( $string ); The overall preprocessor is called and is given $string and the hashref $p (identically to the per-parser preprocess mentioned in the previous flow). If the callback modifies $p then a copy of $p is given to each of the individual parsers. This is so parsers won't accidentally pollute each other's workspace. If an appropriate length-specific parser is found, then it is called and the single parser flow (see the previous section) is followed, and the parser is given a copy of $p and the return value of the overall preprocessor as $date. If a DateTime object was returned, we go straight back to the user. If no appropriate parser was found, or the parser returned undef, then we progress to step 3! Any non-length based parsers are tried in the order they were specified. For each of those the single specification flow above is performed, and each is given a copy of the output from the overall preprocessor. If a real DateTime object is returned then we exit back to the user. If no parser could parse, then an error is thrown. See the section on error handling regarding the undefs mentioned above. METHODS In the general course of things you won't need any of the methods. Life often throws unexpected things at us, so the methods are all available for use. import import() is a wrapper for create_class(). If you specify the class option (see documentation for create_class()) it will be ignored. create_class. parsers takes a hashref of methods and their parser specifications. See the DateTime::Format::Builder::Tutorial for details. Note that if you define a subroutine of the same name as one of the methods you define here, an error will be thrown. constructor determines whether and how to create a constructor (a new() method) for the generated class. verbose takes a value. If the value is undef, then logging is disabled. If the value is a filehandle then that's where logging will go. If it's a true value, then output will go to STDERR. Alternatively, call $DateTime::Format::Builder::verbose() with the relevant value. Whichever value is given more recently is adhered to. Be aware that verbosity is a global setting.
class is optional and specifies the name of the class in which to create the specified methods. If using this method in the guise of import() then this field will cause an error, so it is only of use when calling as create_class(). version is also optional and specifies the value to give the $VERSION variable in the generated class. SUBCLASSING In the rest of the documentation I've often lied in order to get some of the ideas across more easily. The thing is, this module's very flexible. You can get markedly different behaviour from simply subclassing it and overriding some methods. create_method Given a parser coderef, returns a coderef that is suitable to be a method. The default action is to call on_fail() in the event of a non-parse, but you can make it do whatever you want. on_fail. USING BUILDER OBJECTS aka USERS USING BUILDER The methods listed in the METHODS section are all you generally need when creating your own class. Sometimes you may not want a full-blown class to parse something just for this one program. Some methods are provided to make that task easier. new The basic constructor. It takes no arguments, merely returns a new DateTime::Format::Builder object. my $parser = DateTime::Format::Builder->new(); If called as a method on an object (rather than as a class method), then it clones the object. my $clone = $parser->new(); clone Provided for those who prefer an explicit clone() method rather than using new() as an object method. my $clone_of_clone = $clone->clone(); parser. set_parser. get_parser Returns the parser the object is using. my $code = $parser->get_parser(); parse_datetime. format_datetime If you call this function, it will throw an error. LONGER EXAMPLES Some longer examples are provided in the distribution. These implement some of the common parsing DateTime modules using Builder. Each of them is, or was, a drop-in replacement for the corresponding module at the time of writing. THANKS. SEE ALSO [email protected] mailing list. perl, DateTime, DateTime::Format::Builder::Tutorial, DateTime::Format::Builder::Parser AUTHORS Dave Rolsky <[email protected]> Iain Truskett This software is Copyright (c) 2013 by Dave Rolsky. This is free software, licensed under: The Artistic License 2.0 (GPL Compatible)
http://docs.activestate.com/activeperl/5.24/perl/lib/DateTime/Format/Builder.html
2018-08-14T14:23:50
CC-MAIN-2018-34
1534221209040.29
[]
docs.activestate.com
Schedule sprints VSTS | TFS 2018 | TFS 2017 | TFS 2015 | TFS 2013 With Scrum, teams plan and track work at regular time intervals, referred to as a sprint cadence. You define sprints to correspond to the cadence your team uses. Many teams choose a two or three week cadence. However, you can specify shorter or longer sprint cycles. Also, you can create a release schedule which encompasses several sprints. Quick start guide to scheduling sprints To quickly get started, you can use the default sprints, also referred to as iterations, that were added when your project was created. Note, you must be a member of the Project Administrators group in order to add sprints and schedule sprint dates. (If you created the project, you're a VSTS organization. To learn how to use the web portal effectively, see Navigation Basics. For on-premises TFS users, choose Previous Navigation for guidance. That's it! You can now start planning your first sprint. Of course, if you have several teams, more complex release and sprint cadences to schedule, or want to create child iterations, then you'll need to read further. You define these through the settings context for the project. Note Terminology note: Your set of Agile tools uses the Iteration Path field to track sprints and releases. When you define sprints, you define the pick list of values available for the Iteration Path field. You use iterations to group work into sprints, milestones, or releases in which they'll be worked on or shipped. Add and schedule new sprints for several teams and release cadences Note Your sprint backlog and taskboard are designed to support your Scrum processes. In addition, you have access to product and portfolio backlogs and Kanban boards. For an overview of the features supported on each backlog and board, see Backlogs, boards, and plans. Your project comes with several sprints predefined. However, they aren't associated with any dates. For Scrum and sprint planning, you'll want to assign start and end dates for the sprints your team will use. Defining additional sprints is a two-step process. You first define the sprints for your project—Define project iterations—and then you select the sprints that each team will use—Select team sprints. In this way, the system supports teams that work on different sprint cadences. Each sprint that you select for your team provides access to a sprint backlog, task board, and other sprint planning tools for planning and tracking work. For example, by selecting Sprints 1 thru 6, the Fabrikam Fiber team gets access to six sprint backlogs. They also get access to capacity planning tools and a task board for each sprint. Try this next Related articles If you work with several teams, and each team wants their own backlog view, you can create additional teams. Each team then gets access to their own set of Agile tools. Each Agile tool filters work items to only include those assigned values under the team's default area path and iteration path.
https://docs.microsoft.com/en-us/vsts/boards/sprints/define-sprints?view=vsts
2018-08-14T13:23:14
CC-MAIN-2018-34
1534221209040.29
[]
docs.microsoft.com
Gnome-RPM The GNOME Desktop provides another graphical RPM-management tool, Gnome-RPM. Also known as gnorpm, Gnome-RPM is very similar to KPackage in terms of its basic functionality, although Gnome-RPM can manage only RPMs. When started, Gnome-RPM presents a hierarchical list of installed packages, arranged by group, as shown in Figure 8-10. Figure 8-10: The main Gnome-RPM window. After a specific package has been selected, you can query to see its details, as shown in Figure 8-11. Figure 8-11: Querying the details for a package. With Gnome-RPM, you can also filter the list of packages to see only the uninstalled RPMs, as shown in Figure 8-12. Figure 8-12: Filtering to see only the uninstalled packages. Like KPackage, when installing new software, Gnome-RPM lacks the ability to automatically install any dependencies needed by that software.
https://docs.fedoraproject.org/en-US/Fedora_Draft_Documentation/0.1/html/RPM_Guide/ch07s02s04.html
2018-08-14T13:29:32
CC-MAIN-2018-34
1534221209040.29
[]
docs.fedoraproject.org
This document outlines the steps needed to build a Yocto Project image for a device. The build output will most notably include: .biosimg if the system is an x86 system and boots using the traditional BIOS and GRUB bootloader; .sdimg if the system is an ARM system and boots using U-Boot; .uefiimg if the system is an x86 system and boots using the UEFI standard and GRUB bootloader; .mender. If you do not want to build your own images for testing purposes, the Getting started tutorials provide links to several demo images. Before building for your device with Mender, Mender needs to be integrated with your device (most notably with U-Boot). This integration enables robust and atomic rollbacks with Mender. The following reference devices are already integrated with Mender, so if you are building for one of these you do not need to do any integration: raspberrypi3, beaglebone, vexpress-qemu and qemux86-64. If you are building for a different device, please see Device integration for general requirements and adjustments you might need to enable your device to support atomic image-based deployments with rollback. There might already be similar devices you can use as a starting point in the meta-mender repository. If you want to save time, you can use our professional services to integrate your device with Mender. Make sure that the clock is set correctly on your devices. Otherwise certificate verification will become unreliable and the Mender client can likely not connect to the Mender server. See certificate troubleshooting for more information. We use the sumo branch of the Yocto Project in this guide. Start by cloning the poky repository and changing into it: git clone -b sumo git://git.yoctoproject.org/poky cd poky Note that the Yocto Project also depends on some development tools to be in place. We will now add the required meta layers to our build environment. Please make sure you are standing in the directory where poky resides, i.e. the top level of the Yocto Project build tree, and run these commands: git clone -b sumo git://github.com/mendersoftware/meta-mender Next, we initialize the build environment: source oe-init-build-env This creates a build directory with the default name, build, and makes it the current working directory. We then need to add the Mender layers into our project: bitbake-layers add-layer ../meta-mender/meta-mender-core The meta-mender-demo layer (below) is not appropriate if you are building for production devices. Please go to the section about building for production to see the difference between demo builds and production builds. bitbake-layers add-layer ../meta-mender/meta-mender-demo Finally, add the Mender layer specific to your board. Mender currently comes with three reference devices that you can build for (only add one of these): bitbake-layers add-layer ../meta-mender/meta-mender-raspberrypi (depends on meta-raspberrypi) bitbake-layers add-layer ../meta-mender/meta-mender-qemu Other devices may have community support, either in meta-mender or other repositories. If you are building for a different device, please see Device integration for general requirements and adjustments you might need to enable your device to support Mender.
# raspberrypi3, beaglebone, vexpress-qemu and qemux86-64 are reference devices MACHINE = "<YOUR-MACHINE>" # For Raspberry Pi, uncomment the following block: # RPI_USE_U_BOOT = "1" # MENDER_PARTITION_ALIGNMENT_KB = "4096" # MENDER_BOOT_PART_SIZE_MB = "40" # IMAGE_INSTALL_append = " kernel-image kernel-devicetree" # IMAGE_FSTYPES_remove += " rpi-sdimg" # # Lines below not needed for Yocto Rocko (2.4) or newer. # IMAGE_BOOT_FILES_append = " boot.scr u-boot.bin;${SDIMG_KERNELIMAGE}" # KERNEL_IMAGETYPE = "uImage" #.%" # Build for Hosted Mender # To get your tenant token, log in to, # click your email at the top right and then "My organization". # Remember to remove the meta-mender-demo layer (if you have added it). # We recommend Mender 1.2.1 and Yocto Project's pyro or later for Hosted Mender. # # MENDER_SERVER_URL = "" # MENDER_TENANT_TOKEN = "<YOUR-HOSTED-MENDER-TENANT-TOKEN>" DISTRO_FEATURES_append = " systemd" VIRTUAL-RUNTIME_init_manager = "systemd" DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit" VIRTUAL-RUNTIME_initscripts = "" ARTIFACTIMG_FSTYPE = "ext4" The size of the disk image ( .sdimg) should match the total size of your storage so you do not leave unused space; see the variable MENDER_STORAGE_TOTAL_SIZE_MB for more information. Mender selects the file system type it builds into the disk image, which is used for initial flash provisioning, based on the ARTIFACTIMG_FSTYPE variable. See the section on file system types for more information. If you are building for Hosted Mender, make sure to set MENDER_SERVER_URL and MENDER_TENANT_TOKEN (see the comments above). If you would like to use a read-only root file system, please see the section on configuring the image for read-only rootfs. Once all the configuration steps are done, build an image with bitbake: bitbake <YOUR-TARGET> Replace <YOUR-TARGET> with the desired target or image name, e.g. core-image-full-cmdline. one of the virtual Mender reference devices ( qemux86-64 or vexpress-qemu), you can start up your newly built image with the script in ../meta-mender/meta-mender-qemu/scripts/mender-qemu and log in as root without password.
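For quick reference, the end-to-end flow described above can be condensed into the following shell session. This is a sketch that assumes the vexpress-qemu reference board and the core-image-full-cmdline image; substitute your own machine, layer and image name as appropriate:

git clone -b sumo git://git.yoctoproject.org/poky
cd poky
git clone -b sumo git://github.com/mendersoftware/meta-mender
source oe-init-build-env
bitbake-layers add-layer ../meta-mender/meta-mender-core
bitbake-layers add-layer ../meta-mender/meta-mender-demo   # demo/testing builds only
bitbake-layers add-layer ../meta-mender/meta-mender-qemu
# edit conf/local.conf as shown above, then:
bitbake core-image-full-cmdline
# the disk image and the .mender Artifact end up under the standard Yocto output directory:
ls tmp/deploy/images/vexpress-qemu/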
https://docs.mender.io/1.6/artifacts/building-mender-yocto-image
2018-08-14T13:43:02
CC-MAIN-2018-34
1534221209040.29
[]
docs.mender.io
Overview Thank you for choosing Telerik RadSpreadStreamProcessing! This article briefly explains the specifics of RadSpreadStreamProcessing: what spread streaming is, how it works compared to other libraries, and when to use it.
https://docs.telerik.com/devtools/xamarin/nativecontrols/android/spreadstreamprocessing/overview.html
2017-11-17T23:02:23
CC-MAIN-2017-47
1510934804019.50
[array(['images/SpreadStreamProcessing-Overview_01.png', None], dtype=object) ]
docs.telerik.com
To change some of the information that you supplied when you installed Site Recovery Manager Server, you can run the Site Recovery Manager installer in modify mode. About this task Installing Site Recovery Manager Server binds the installation to a number of values that you supply, including the vCenter Server instance to extend, the Site Recovery Manager database type, DSN and credentials, the type of authentication, and so on. The Site Recovery Manager installer supports a modify mode that allows you to change certain values that you configured when you installed Site Recovery Manager Server. The user name and password of the vCenter Server administrator, if they changed since you installed Site Recovery Manager The type of authentication (certificate-based or credential-based), the authentication details, or both. The user name, password, and connection numbers for the Site Recovery Manager database The user account under which the Site Recovery Manager Server service runs The installer's modify mode presents modified versions of some of the pages that are part of the Site Recovery Manager Server installer. You cannot modify the host and administrator configuration information, including the local site name, Site Recovery Manager administrator email address, local host address, or the listener ports. This page is omitted when you run the installer in modify mode. Site Recovery Manager does not use the administrator email address that you provided during installation, so if the Site Recovery Manager administrator changes after you installed Site Recovery Manager Server, Site Recovery Manager operation is not affected. Updating the certificate affects the thumbprint, which can affect the connection between the protected site and the recovery site. Check the connection between the protected site and the recovery site after you run the installer in modify mode. For information about how to configure the connection between the protected site and the recovery site, see Connect the Site Recovery Manager Server Instances on the Protected and Recovery Sites. If you selected the embedded database when you installed Site Recovery Manager, you cannot modify the installation to use an external database, and the reverse. Prerequisites Verify that you have administrator privileges on Site Recovery Manager Server or that you are a member of the Administrators group. Disable Windows User Account Control (UAC) before you attempt the change operation or select Run as administrator when you start the Site Recovery Manager installer If you change the Site Recovery Manager installation to use certificate-based authentication instead of credential-based authentication, or if you upload a new custom certificate, you must provide the certificate for the remote site to the vSphere Web Client service on each site. See Provide Trusted CA Certificates to vSphere Web Client. Procedure - Log in to the Site Recovery Manager Server host. - Open Programs and Features from the Windows Control Panel. - Select the entry for VMware vCenter Site Recovery Manager and click Change. - Click Next. - Select Modify and click Next. - Enter the username and password for the vCenter Server instance that Site Recovery Manager extends. If you use auto-generated certificates, Site Recovery Manager Server uses the username and password that you specify here to authenticate with vCenter Server whenever you connect to Site Recovery Manager. 
If you use custom certificates, only the Site Recovery Manager installer uses this account to register Site Recovery Manager with vCenter Server during installation. You cannot use the installer's modify mode to change the vCenter Server address or port. When you click Next, the installer contacts the specified vCenter Server instance and validates the information you supplied. - Select an authentication method and click Next. If you do not select Use existing certificate, you are prompted to supply additional authentication details such as certificate location or strings to use for Organization and Organizational Unit. - Provide or change the database configuration information and click Next. - Choose whether to keep or discard the database contents and click Next. -. - When the modification operation is finished and the Site Recovery Manager Server restarts, log in to the vSphere Web Client to check the status of the connection between the protected site and the recovery site. - (Optional) : If the connection between the protected site and the recovery site is broken, reconfigure the connection, starting from the Site Recovery Manager Server that you updated.
https://docs.vmware.com/en/Site-Recovery-Manager/5.8/com.vmware.srm.install_config.doc/GUID-55A0D565-333C-4A5D-9B34-429DB48DBF5D.html
2017-11-17T23:19:08
CC-MAIN-2017-47
1510934804019.50
[]
docs.vmware.com
View Persona Management and Windows roaming profiles require a specific minimum level of permissions on the user profile repository. View Persona Management also requires that the security group of the users who put data on the shared folder must have read attributes on the share. Set the required access permissions on your user profile repository and redirected folder share. For information about roaming user profiles security, see the Microsoft TechNet topic, Security Recommendations for Roaming User Profiles Shared Folders.
https://docs.vmware.com/en/VMware-Horizon-6/6.2/com.vmware.horizon-view.desktops.doc/GUID-8DA2B3DC-028F-4A0A-9AB0-DCABE72B802C.html
2017-11-17T23:19:02
CC-MAIN-2017-47
1510934804019.50
[]
docs.vmware.com
The ContactForm class¶ - class contact_form.forms. ContactForm¶ The base contact form class from which all contact form classes should inherit. If you don’t need any customization, you can simply use this form to provide basic contact functionality; it will collect name, email address and message. The ContactFormViewincluded in this application knows how to work with this form and can handle many types of subclasses as well (see below for a discussion of the important points), so in many cases it will be all that you need. If you’d like to use this form or a subclass of it from one of your own views, just do the following: - When you instantiate the form, pass the current HttpRequestobject as the keyword argument request; this is used internally by the base implementation, and also made available so that subclasses can add functionality which relies on inspecting the request (such as spam filtering). - To send the message, call the form’s savemethod, which accepts the keyword argument fail_silentlyand defaults it to False. This argument is passed directly to Django’s send_mail()function, and allows you to suppress or raise exceptions as needed for debugging. The savemethod has no return value. Other than that, treat it like any other form; validity checks and validated data are handled normally, through the is_valid()method and the cleaned_datadictionary. Under the hood, this form uses a somewhat abstracted interface in order to make it easier to subclass and add functionality. The following attributes play a role in determining behavior, and any of them can be implemented as an attribute or as a method: from_email¶ The email address to use in the From:header of the message. By default, this is the value of the setting DEFAULT_FROM_EMAIL. recipient_list¶ The list of recipients for the message. By default, this is the email addresses specified in the setting MANAGERS. subject_template_name¶ The name of the template to use when rendering the subject line of the message. By default, this is contact_form/contact_form_subject.txt. template_name¶ The name of the template to use when rendering the body of the message. By default, this is contact_form/contact_form.txt. And two methods are involved in actually producing the contents of the message to send: Returns the body of the message to send. By default, this is accomplished by rendering the template name specified in template_name. subject()¶ Returns the subject line of the message to send. By default, this is accomplished by rendering the template name specified in subject_template_name. Finally, the message itself is generated by the following two methods: get_message_dict()¶ This method loops through from_email, recipient_list, message()and subject(), collecting those parts into a dictionary with keys corresponding to the arguments to Django’s send_mailfunction, then returns the dictionary. Overriding this allows essentially unlimited customization of how the message is generated. Note that for compatibility, implementations which override this should support callables for the values of from_emailand recipient_list. get_context()¶ For methods which render portions of the message using templates (by default, message()and subject()), generates the context used by those templates. 
The default context will be a RequestContext (using the current HTTP request, so user information is available), plus the contents of the form's cleaned_data dictionary, and one additional variable: site - If django.contrib.sites is installed, the currently-active Site object. Otherwise, a RequestSite object generated from the request. Meanwhile, the following attributes/methods generally should not be overridden; doing so may interfere with functionality, may not accomplish what you want, and generally any desired customization can be accomplished in a more straightforward way through overriding one of the attributes/methods listed above. request¶ The HttpRequest object representing the current request. This is set automatically in __init__(), and is used both to generate a RequestContext for the templates and to allow subclasses to engage in request-specific behavior. save()¶ If the form has data and is valid, will actually send the email, by calling get_message_dict() and passing the result to Django's send_mail function. Note that subclasses which override __init__ or save() need to accept *args and **kwargs, and pass them via super, in order to preserve behavior (each of those methods accepts at least one additional argument, and this application expects and requires them to do so).
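To make the workflow above concrete, here is a brief sketch (not part of the original documentation) of a subclass that overrides recipient_list and a function-based view that drives it; the recipient address, URL name and template path are placeholders:

# forms.py -- a hypothetical subclass routing mail to a support address
from contact_form.forms import ContactForm

class SupportContactForm(ContactForm):
    recipient_list = ['[email protected]']   # placeholder address


# views.py -- using the form directly instead of the bundled view
from django.shortcuts import redirect, render

def support_contact(request):
    if request.method == 'POST':
        # The current HttpRequest must be passed as the `request` keyword.
        form = SupportContactForm(data=request.POST, request=request)
        if form.is_valid():
            form.save(fail_silently=False)         # builds and sends the email
            return redirect('contact_form_sent')   # hypothetical URL name
    else:
        form = SupportContactForm(request=request)
    return render(request, 'contact_form/contact_form.html', {'form': form})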
http://django-contact-form.readthedocs.io/en/1.3/forms.html
2017-11-17T23:12:28
CC-MAIN-2017-47
1510934804019.50
[]
django-contact-form.readthedocs.io
GameMaker: Studio by default only permits you to export games for Windows or the GameMaker: Player, while the Professional version also permits you to compile for Windows 8 (as well as use additional functionality). If you have bought GameMaker: Studio through Steam, you will also have an additional target module to export your games to the Steam Workshop. You can get further additional modules which will permit you to export to other platforms like HTML5, Android, iOS and Windows Phone 8: - The HTML5 module enables you to produce ready-to-run HTML and JavaScript code that you can host or embed on any website - The Android module enables you to create and distribute *.apk files - If you have an iOS developer account then with the iOS module you can also publish to iPad, iPhone and iPod - The Windows Phone module permits you to target those devices for your games (for Windows Surface devices, use the Windows 8 module) - With the Tizen module you can target a vast number of "smart" devices - With the Mac OSX module you can target the Apple desktop market - The Linux module permits you to make desktop games that run on the Ubuntu operating system These modules are available only to those who have upgraded to the Pro version of GameMaker: Studio (more information on the different versions can be found here). This section of the help file gives you all the information you need for creating your first games. Later sections will discuss more advanced topics, how to polish and distribute your game, and the built-in programming language GML that considerably extends the possibilities of the product. Information on the basic use of GameMaker: Studio can be found in the following sections: - Introduction - Installation and System Requirements - Activation - GameMaker: Studio Overview - The Graphical User Interface (GUI) - - The File Menu - - The Edit Menu - - The Resources Menu - - The Scripts Menu - - The Run Menu - - The Help Menu - - The Marketplace Menu - Sounds And Music - Backgrounds - Defining Objects - Events - - Create Event - - Destroy Event - - Alarm Events - - Step Events - - Collision Event - - Keyboard Events - - Other Events - - Draw Event - - Asynchronous Events - Actions - - Move Actions - - Main Actions, Set 1 - - Main Actions, Set 2 - - Control Actions - - Score Actions - - Extra Actions - - Draw Actions - - Using Variables and Expressions in Actions - Creating Rooms - - Settings - - Backgrounds - - Objects - Game Information - Distributing Your Game
http://docs.yoyogames.com/source/dadiospice/000_using%20gamemaker/index.html
2017-11-17T23:17:53
CC-MAIN-2017-47
1510934804019.50
[]
docs.yoyogames.com
method cache Documentation for method cache, assembled from the following types: class Any (Any) method cache Defined As: method cache(--> List) Provides a List representation of the object itself, calling the method list on the instance. role PositionalBindFailover From PositionalBindFailover (PositionalBindFailover) method cache method cache(PositionalBindFailover: --> List) Returns a List based on the iterator method, and caches it. Subsequent calls to cache always return the same List object.
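A small sketch (not from the original page) of how this is typically used: caching a Seq so that it can be indexed repeatedly without re-running the underlying iterator.

my $squares = (1..5).map(* ** 2);   # a Seq
my $cached  = $squares.cache;       # a List, computed once and memoized
say $cached[0];                     # 1
say $cached[3];                     # 16; repeated indexing is fine because the List is reused
say $squares.cache === $cached;     # True, subsequent calls return the same List object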
https://docs.perl6.org/routine/cache
2018-02-18T03:04:01
CC-MAIN-2018-09
1518891811352.60
[]
docs.perl6.org
First time using the AWS CLI? See the User Guide for help getting started. Deletes a tag or tags from a resource. You must provide the ARN of the resource from which you want to delete the tag or tags. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. delete-tags --resource-name <value> --tag-keys <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --resource-name (string) The Amazon Resource Name (ARN) from which you want to remove the tag or tags. For example, arn:aws:redshift:us-east-1:123456789:cluster:t1. --tag-keys (list) The tag key that you want to delete.
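The reference above stops short of a usage example, so here is a hedged sketch of a typical invocation; the account ID, cluster name and tag keys below are made up:

aws redshift delete-tags \
    --resource-name arn:aws:redshift:us-east-1:123456789012:cluster:t1 \
    --tag-keys "environment" "owner"

If the call succeeds, the command produces no output; the named tag keys are simply removed from the cluster.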
https://docs.aws.amazon.com/cli/latest/reference/redshift/delete-tags.html
2018-02-18T03:43:59
CC-MAIN-2018-09
1518891811352.60
[]
docs.aws.amazon.com
UCF authority document update process You can update the UCF documents you use in GRC manually or configure the system to do it automatically whenever a new UCF version is available. By default, GRC downloads the most recent version of the UCF authority documents, which are updated quarterly. The ServiceNow system places these files in staging tables until they are imported into GRC. When you import a new document version, these entities are updated: Authority documents Citations Controls GRC observes these general rules when importing updated documents from UCF: If UCF authority documents or citations are updated, both entities are imported into GRC and versioned. If only the UCF controls are updated, then only the controls are versioned. In this case, a new link is created between the updated control and the existing citation that uses it. Older versions of updated controls are automatically deactivated and do not appear in lists of controls. The control test definitions, policies, and risks that use these updated entities are reset to use the latest version. Any control test instances tied to a control from the previous version remain linked to that control. You must generate new control test instances based on the latest UCF version. The system deactivates all previous versions of an imported UCF document and retains them in their respective GRC tables. Figure 1. Automatic updates to UCF links Changes to these UCF authority document fields trigger versioning in GRC. Table 1. Authority document fields UCF Field GRC Field ucf_ad_common_name name ucf_ad_id source_id ucf_ad_version source_version ucf_ad_date_modified source_last_modified ucf_ad_release_version source_release_version ucf_ad_url url Table 2. Citation fields UCF Fields GRC Fields ucf_citation reference ucf_citation_guidance key_areas ucf_citation_id source_id ucf_citation_date_modified source_last_modified ucf_citation_release_version source_release_version Table 3. Control fields UCF Fields GRC Fields ucf_ce_control_title name ucf_ce_control_statement description ucf_ce_id source_id ucf_ce_date_modified source_last_modified ucf_ce_release_version source_release_version Update a UCF document manuallyBy default, GRC is configured to require manual update of the UCF documents it uses.Configure an automatic UCF downloadWhen the GRC plugin is activated, the system creates a scheduled job called Notify GRC Admin new UCF is available. By default, this job is configured to check for new UCF authority documents each Monday.
https://docs.servicenow.com/bundle/geneva-governance-risk-compliance/page/product/it_governance_risk_and_compliance/concept/c_GRCDocumentUpdates.html
2018-02-18T03:04:31
CC-MAIN-2018-09
1518891811352.60
[]
docs.servicenow.com
Specify an OAuth scope Specify the OAuth scopes that you get from the provider. Scopes can be any level of access specified by the provider, such as read, write, or any string, including a URL. Before you beginRole required: admin Procedure Open a third-party OAuth provider record. Open a profile associated with the provider. In the OAuth Entity Profile Scopes embedded list, click Insert a new row. Enter a name for the profile. Right-click OAuth Entity Profile form header and select Save. The profile record is created. Click the name of the scope you created. Fill in the form fields (see table). Table 1. OAuth Entity Scope form fields Field Description Name Enter a descriptive name. OAuth provider Verify the provider associated with this scope. OAuth scope The scope that you are granted by the provider. Typical scopes are read and write. Scopes can be any string that the provider specifies. Click Update. Related TasksSpecify an OAuth profileRelated ConceptsOAuth support for authorization code flowOAuth profiles and scopes
https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/administer/security/task/t_SpecifyAnOAuthScope.html
2018-02-18T03:04:12
CC-MAIN-2018-09
1518891811352.60
[]
docs.servicenow.com
Contents
- Editorial: European crisis (page 5)
- Nordics in brief (page 7)
- Sweden: Growth all around (page 8)
- Denmark: Consumers drive growth (page 10)
- Norway: Still an oil story (page 12)
- Finland: Living off debt (page 14)
- Baltics and Russia in brief (page 17)
- Estonia: Soft growth, lower debt (page 18)
- Latvia: Labour market is overheating (page 19)
- Lithuania: Looking for a growth engine (page 20)
- Russia: Green shoots (in autumn) (page 21)
- Themes (page 23)
- Europe: The EU's choice: reform or decline (page 24)
- China: Fight against capital flight (page 26)
- Exports to Russia: It's all about oil (page 28)
- Swedish property: The boom is over (page 30)
- Global overview: Slower for longer (page 32)
- Oil: Crude at USD 50 a barrel by end 2016 (page 35)
- Key figures (page 36)
- Contributors (page 40)

"Much suggests that the long-lasting Swedish housing market boom is over and that prices will move sideways from here" (Andreas Wallström, Nordea chief analyst)

2 / 2016 / Nordea Economic Outlook / 03
http://docs.nordeamarkets.com/?Page=3
2018-02-18T02:38:39
CC-MAIN-2018-09
1518891811352.60
[]
docs.nordeamarkets.com
Welcome to OIOI’s documentation!¶ Warning This is the latest version of the OIOI, version 4. For other versions, e.g. version 3, open the menu on the very bottom left of the screen. Even though this version is marked as stable, new fields may be added to requests and responses. Your service must still accept all responses. You may ignore the additional fields or implement an update. Requests will only get additional fields if they are optional. Important This specific release is 4.19.0. The main documentation for the OIOI is organized into a couple of sections: - Introduction for OEMs - Managing Users - Receiving POIs - Receiving CDRs - Sending Remote Starts - Allowing RFID Starts as OEM Connecting as Fleet Operator¶ - Adyen Setup - Adyen Verify - API minimum version number - Company Get Images - Connector Post Status - RFID Post - RFID Verify - Session Post - Session Start - Session Stop - Station Get By IDs - Station Get Surface - Station Get Usage - Station Post - User Add Credit Card - User Add Payment - User Add RFID - User Block RFID - User Change Password - User Charging Key Activate - User Get Bills - User Get Charging Keys - User Get Details - User Get Payment Methods - User Get Recent Sessions - User Get Recent Stations - User Logout - User Post Details - User Register - User Reset - User Unblock RFID - User Verify - Get All Vehicles
http://docs.plugsurfing.com/en/stable/
2018-02-18T02:35:13
CC-MAIN-2018-09
1518891811352.60
[array(['_images/ps-oioi-logo-small.png', '_images/ps-oioi-logo-small.png'], dtype=object) ]
docs.plugsurfing.com
Resetting Your Password To reset your password, log in to elevio and open the drop down in the top right hand corner. Click account settings and you’ll be brought to the ‘Your Account’ page. Simply type your new password in the ‘Password’ and ‘Confirm Password’ boxes and hit ‘Update Account’. Your password will now have been updated.
http://docs.elevio.help/en/articles/81529-resetting-your-password
2018-02-18T03:05:35
CC-MAIN-2018-09
1518891811352.60
[]
docs.elevio.help
The make plugin The make plugin is useful for building make based parts. Make based projects are projects that have a Makefile that drives the build. This plugin always runs ‘make’ followed by ‘make install’, except when the ‘artifacts’ keyword is used. Plugin-specific keywords - artifacts: (list) Link/copy the given files from the make output to the snap installation directory. If specified, the 'make install' step will be skipped. - makefile: (string) Use the given file as the makefile. - make-parameters: (list of strings) Pass the given parameters to the make command. - make-install-var: (string; default: DESTDIR) Use this variable to redirect the installation into the snap.
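For illustration, a part that uses this plugin could be declared in snapcraft.yaml roughly as follows; this is a sketch, and the part name, makefile name and parameter values are placeholders rather than anything prescribed by the plugin:

parts:
  my-tool:
    plugin: make
    source: .
    makefile: Makefile.linux
    make-parameters:
      - "PREFIX=/usr"
    make-install-var: DESTDIR

With this configuration snapcraft runs make against Makefile.linux with PREFIX=/usr, then runs make install with DESTDIR pointing into the snap's installation directory.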
https://docs.snapcraft.io/reference/plugins/make
2018-02-18T03:24:43
CC-MAIN-2018-09
1518891811352.60
[]
docs.snapcraft.io
PRAW and OAuth¶ OAuth support allows you to use reddit to authenticate on non-reddit websites. It also allows a user to authorize an application to perform different groups of actions on reddit with his account. A moderator can grant an application the right to set flair on his subreddits without simultaneously granting the application the right to submit content, vote, remove content or ban people. Before the moderator would have to grant the application total access, either by giving it the password or by modding an account controlled by the applications. Note: Support for OAuth is added in version 2.0. This will not work with any previous edition. A step by step OAuth guide¶ PRAW simplifies the process of using OAuth greatly. The following is a step-by-step OAuth guide via the interpreter. For real applications you’ll need a webserver, but for educational purposes doing it via the interpreter is superior. In the next section there is an An example webserver. Step 1: Create an application.¶ Go to reddit.com’s app page, click on the “are you a developer? create an app” button. Fill out the name, description and about url. Name must be filled out, but the rest doesn’t. Write whatever you please. For redirect uri set it to. All four variables can be changed later. Click create app and you should something like the following. The random string of letters under your app’s name is its client id. The random string of letters next to secret are your client_secret and should not be shared with anybody. At the bottom is the redirect_uri. Step 2: Setting up PRAW.¶ Warning This example, like most of the PRAW examples, binds an instance of PRAW to the r variable. While we’ve made no distinction before, r (or any instance of PRAW) should not be bound to a global variable due to the fact that a single instance of PRAW cannot concurrently manage multiple distinct user-sessions. If you want to persist instances of PRAW across multiple requests in a web application, we recommend that you create an new instance per distinct authentication. Furthermore, if your web application spawns multiple processes, it is highly recommended that you utilize PRAW’s multiprocess functionality. We start as usual by importing the PRAW package and creating a >>> import praw >>> r = praw.Reddit('OAuth testing example by u/_Daimon_ ver 0.1 see ' ... '' ... 'pages/oauth.html for source') Next we set the app info to match what we got in step 1. >>> r.set_oauth_app_info(client_id='stJlUSUbPQe5lQ', ... client_secret='DoNotSHAREWithANYBODY', ... redirect_uri='' ... 'authorize_callback') The OAuth app info can be automatically set, check out The Configuration Files to see how. Step 4: Exchanging the code for an access_token and a refresh_token.¶ After completing step 3, you’re redirected to the redirect_uri. Since we don’t have a webserver running there at the moment, we’ll see something like this. Notice the code in the url. Now we simply exchange the code for the access information. >>> access_information = r.get_access_information('8aunZCxfv8mcCf' ... 'D8no4CSlO55u0') This will overwrite any existing authentication and make subsequent requests to reddit using this authentication unless we set the argument update_session to False. get_access_information() returns a dict with the scope, access_token and refresh_token of the authenticated user. 
So later we can swap from one authenticated user to another with >>> r.set_access_credentials(**access_information) If scope contains identity then r.user will be set to the OAuthenticated user with r.get_access_information or set_access_credentials() unless we’ve set the update_user argument to False. Step 5: Use the access.¶ Now that we’ve gained access, it’s time to use it. >>> authenticated_user = r.get_me() >>> print(authenticated_user.name, authenticated_user.link_karma) Step 6: Refreshing the access_token.¶ An access token lasts for 60 minutes. To get access after that period, we’ll need to refresh the access token. >>> r.refresh_access_information(access_information['refresh_token']) This returns a dict, where the access_token key has had its value updated. Neither scope nor refresh_token will have changed. Note: In version 3.2.0 and higher, PRAW will automatically attempt to refresh the access token if a refresh token is available when it expires. For most personal-use scripts, this eliminates the need to use refresh_access_information() except when signing in. An example webserver¶ To run the example webserver, first install flask. $ pip install flask Then save the code below into a file called example_webserver.py, set the CLIENT_ID & CLIENT_SECRET to the correct values and run the program. Now you have a webserver running on Go there and click on one of the links. You’ll be asked to authorize your own application, click allow. Now you’ll be redirected back and your user details will be written to the screen. # example_webserver.py # ######################## from flask import Flask, request import praw app = Flask(__name__)Try again</a>" return variables_text + '</br></br>' + text + '</br></br>' + back_link if __name__ == '__main__': r = praw.Reddit('OAuth Webserver example by u/_Daimon_ ver 0.1. See ' '' 'pages/oauth.html for more info.') r.set_oauth_app_info(CLIENT_ID, CLIENT_SECRET, REDIRECT_URI) app.run(debug=True, port=65010) OAuth Scopes.¶ The following list of access types can be combined in any way you please. Just pass a string containing each scope that you want (if you want several, they should be seperated by spaces, e.g. "identity submit edit") to the scope argument of the get_authorize_url method. The description of each scope is identical to the one users will see when they have to authorize your application.
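As a brief illustration of combining scopes (this snippet is not part of the original guide), the space-separated scope string is passed as the second argument to get_authorize_url, together with an arbitrary state string and a flag asking for a refreshable token:

# 'uniqueKey' is an arbitrary state value that you verify when the user returns.
url = r.get_authorize_url('uniqueKey', 'identity submit edit', True)
print(url)  # send the user to this URL to authorize the requested scopes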
http://praw.readthedocs.io/en/v3.6.1/pages/oauth.html
2018-02-18T02:42:54
CC-MAIN-2018-09
1518891811352.60
[]
praw.readthedocs.io
Go to BOOKINGS | SEARCH BOOKINGS to create detailed Lists and Reports of Guest Bookings. The Search Bookings function is a powerful tool for creating lists based on specific criteria and date ranges. It can be used to locate specific booking details, locate availability issues and for marketing to guests, The lists created here contain detailed guest information and can be exported to Excel/CSV for more detailed sorting and to import in a marketing database. See, Saving Reports to Excel Search Bookings is also useful for managing availability and in identifying discrepancies on the Tape Chart such as, a-1 or a zero. In this case, choose Room "No Room" in the Room Field in the left column, to identify bookings missing a Room Assignment. See Resolving Tape Chart Availability Issues We have provided some examples of ways to use Search Bookings however, there are many other useful search criteria parameters that can be used to create detailed guest reports, so please try out some different combinations for creating lists. For example, you may need to find a booking created on a specific date by a certain travel agent or you may want to create a list of all bookings completed in February of last year to run a Valentine's special. - See a description of Search Criteria Fields - Download a sample 'Search Bookings' result file that was saved as .CSV here. See examples below Examples of Reports generated in Search Bookings Search criteria can be used in combination to create very specific lists. For a better understanding of how this works, see the examples below. Then, try some searches based on your own criteria. Please note that there are five sets of search fields which must be used together to create a start and end date. If you are searching for one day, then enter the same date in the start and end date. These fields are located next to eachother and you must enter dates in both fields. - "Arrival Date": and "Departure Date": - "Begins On and Ends On" - "Created on Start Date" and "Created on End Date" - "Arrival Date From" and "Arrival Date To" - "Departure Date" From and "Departure Date To" Example 1: Search by Arrival Date: In this example, we need a list of Guests arriving between the Arrival Dates of Jan 1 and Jan. 30, 2015 who stayed in the Room Type: SUITE. NOTE: To get an list with all arrivals for all room types for this date range, do not choose a room type. Example 2: Search by Booking Date In this example, we need a list of Completed Bookings created between June 1 - Aug 31 , 2016. First, choose Status "Complete". Then, go the "Created On Start Date" and the "Created on End Date" and choose the date range, If you want to included all bookings (canceled, confirmed, etc.), then leave the Status dropdown list on "All". Example 3: Find an Online Booking How to search for Booking.com bookings made between a date range. In this example, we will use the 'Created on Start Date' of March 8 and the 'Created On End Date' of March 9, because we're searching for bookings created between March 8-9. In the Travel Agent field, we start to type "Booking.com', the Agent: Booking.com Direct appears and we select this Agent, as it is the OTA Agent used for allocating and receiving bookings via the direct Booking. com channel. Clicking 'Search' then returns all Booking.com Direct bookings made between March 8-9 and each booking can be clicked to open from the 'results' page. Example 4: Search for No Room In this example, we need to find a list of Bookings that are not assigned a Room. 
These bookings are in the system and have been assigned a Room Type so are deducting from availability, but can't been seen on the Tape Chart. You will notice the # of rooms available for that Room Type on the Tape Chart will be reduced or even be a 0 if you are sold out. To locate these bookings, choose "No Room" from the Room field drop down list and click Search. Search Criteria Details The following describes each of the search Criteria fields. Last Name: You can input a whole last name or a partial last name. For instance entering the letter "C" would return a result of all stays where the guest last name begins with the letter "C". First Name: You can input a whole last name or a partial last name. For instance entering the letter "C" would return a result of all stays where the guest last name begins with the letter "C". Arrival Date: A date entered here needs to match the ARRIVAL DATE of the reservation. Departure Date: A date entered here needs to match the DEPARTURE DATE of the reservation. Room Type: This is a drop down list of all applicable room types. Rate Plan: This is a drop down list of all applicable rate plans. Begin Date:The beginning of a date range where the reservation has at least one room night. End Date:The end of a date range where the reservation has at least one room night. Room Type: This is a drop down list of all applicable room types. Room #: This function cannot be used alone. It must be used in conjunction with at least one other parameter. This is also a drop down list of applicable rooms. Status:This will allow you to search by status of reservation: Confirmed, Unconfirmed, Cancelled.. Confirmation #: Similar to the Guest Last Name, the user may enter either a complete confirmation number or a partial number. A partial number will return all stays that begin with the partial number. Cancellation #: Same functionality as Confirmation #. Travel Agent: Same functionality as Company but can search by both name and IATA number. Company: This is an EZ Search function. Once a company is chosen all stays that are linked to that company are displayed. Source: Search by the list of Sources available. Guest Type #: Same functionality as Confirmation #. Folio #: Enter the Folio # or partial numbers to search. Credit Card #: Enter the last four digits of credit card # or a partial number Group Name: This searches for reservations by their channel. A channel is a channel of distribution, examples of channels would be the Front Desk, booking engine or GDS. Group Booking Title: Same functionality as Company. Begins on and Ends On
https://docs.bookingcenter.com/display/MYPMS/Search+Bookings
2018-02-18T03:11:55
CC-MAIN-2018-09
1518891811352.60
[array(['/download/attachments/1376547/search_booking.com.png?version=1&modificationDate=1457549513000&api=v2', None], dtype=object) ]
docs.bookingcenter.com
You can change or add virtual machine configuration parameters when instructed by a VMware technical support representative, or if you see VMware documentation that instructs you to add or change a parameter to fix a problem with your system.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.vm_admin.doc/GUID-B746F15C-888F-4210-ABDA-94E2D66854CC.html
2018-02-18T03:27:54
CC-MAIN-2018-09
1518891811352.60
[]
docs.vmware.com
AlfStream Module¶ This module provides an Apache Camel component and an Alfresco Repository component. Changelog¶ The change log for Alfstream can be found here Requirements:¶ Alfresco¶ - Alfresco Dynamic Extensions - Java version 8 - Version 5.0 and Above Apache Karaf¶ - Apache Karaf 4.0+ - The Alfstream Camel Component Compilation¶ We have created a plugin for gradle called the alfresco amp plugin which is a requirement for compiling all of our modules. the repository, it should be published into your gradle environment. This will by default publish to your local computer, but you can adjust build.gradle to publish to another Maven server if it's a requirement with: gradle publish Installation (Alfresco)¶ - Parashift's AlfStream module is a Repo only amp, please follow our Installation guide on how to install this module. - Alfresco Dynamic Extensions is required to be installed, please follow Dynamic Extension Installation guide on how to install it. - For more information or a free trial, please click here Installation (Karaf)¶ Installation of this module requires an instance of Apache Karaf to host Apache Camel. Karaf is normally stored within the /opt/apache-karaf-*/ directory where * is the version number. Windows Installation¶ To install prerequisite software you will need to do the following: - Download and install the latest version of Java JDK (version 8): - Download and install Apache Karaf: - Extract this to a directory such as C:\apache-karaf-4.0.7 - Run the application in a terminal with bin\karaf.bat After this is done follow the Apache Karaf Manually instructions below. Apache Karaf via Salt Stack¶ Rather than manually instantiating Karaf, utilising Salt Stack will allow the configuration of Karaf to take less time and ensure that things are deployed in the right order. - Create the following pillar configuration on the server: karaf: bundles: - mvn:org.apache.commons/commons-lang3/3.4 - mvn:com.google.code.gson/gson/2.6.2 - mvn:org.yaml/snakeyaml/1.17 artifacts: - alfstream-1.*.*.jar - Run the following state command: salt-call state.sls karaf Apache Karaf Manually¶ - Install Apache Karaf as per the website instructions or windows install instructions above - Copy this file and put it in the karaf base directory, naming it bundles: feature:repo-add camel 2.17.0 feature:repo-add hawtio 1.4.64 feature:install camel feature:install camel-core feature:install camel-blueprint feature:install hawtio bundle:install mvn:org.apache.commons/commons-lang3/3.4 bundle:install mvn:com.google.code.gson/gson/2.6.2 bundle:install mvn:org.yaml/snakeyaml/1.17 bundle:install mvn:org.apache.httpcomponents/httpclient-osgi/4.5.2 bundle:install mvn:org.apache.httpcomponents/httpcore-osgi/4.4.4 - Linux: Log in to the cli by running bin/client - Windows: Make sure you have an instance running by executing: bin\karaf.bat - Run the command source bundlesin the client to install all the bundles - Copy and deploy the following files to the deploy/directory: alfstream-1.*.*.jar Monitoring (Karaf)¶ Hawtio¶ To monitor that everything is going OK, you can utilise the web console hawtio via the following URL: - Location: http://<hostname>:8181/hawtio - Username: karaf karaf Camel Routes¶ In the Camel tab of hawtio, you can see deployed routes that are in Apache Karaf. Tracing¶ Warning: To trace an exchange, You must first disable Inclusion of Streams in the trace output. This is because viewing a file while tracing will consume the stream, not allowing it to be uploaded to Alfresco. 
In the top right click on karaf then click Preferences. Untick the Include Streams checkbox. After Include Streams is unselected, click on one of the routes and select the Trace tab. Then select Start tracing. This will enable all exchanges passing through the route so that you can see the inputs/outputs. Logs¶ The logs are accessible via the Logs tab. By default the Log Level of INFO is set, so you may need to adjust this to include DEBUG level routes using the CLI CLI¶ - run bin/clientfrom the karaf directory (normally /opt/apache-karaf-*/) - (Optional) increase the log to include debug logging for parashift packages: log:set DEBUG com.parashift - Run log:tailto view the log You will see debug logs from the com.parashift namespace, which will include the alfstream components. Usage Examples¶ There are a few usage examples within the examples directory. Setting up Configuration variables¶ All examples make use of configuration variables external to the blueprint file. To create these configurations: - Create a new file: etc/com.parashift.cfg - Add alfstream.uri, alfstream.username& alfstream.password, i.e: alfstream.url= alfstream.username=admin alfstream.password=admin Deploying examples¶ After you have set up the configuration variables, simply drop the blueprint xml file in the karaf deploy directory to start the route. Debug Log Example¶ File: examples/alfstream-debug-log-blueprint.xml The debug log example simply logs all exchanges to a debug log. You will need a log level set to debug within karaf: log:set debug com.parashift Two Way Blueprint¶ File: examples/alfstream-two-way-blueprint.xml The two way blueprint will synchronise between two instances of Alfresco. This will synchronise all files and folders within a site called test. You will need to set the following configuration Upsert Blueprint¶ File: examples/alfstream-upsert-blueprint.xml Requires the upsert camel component. This will allow alfstream events to be synchronised to a seperate postgres database for reporting and analytics.
https://docs.parashift.com.au/paramodules/alfstream/
2018-02-18T03:31:38
CC-MAIN-2018-09
1518891811352.60
[]
docs.parashift.com.au
Function futures::future::result pub fn result<T, E>(r: Result<T, E>) -> FutureResult<T, E> Creates a new "leaf future" which will resolve with the given result. The returned future represents a computation which is finished immediately. This can be useful with the finished and failed base future types to convert an immediate value to a future to interoperate elsewhere. Examples use futures::future::*; let future_of_1 = result::<u32, u32>(Ok(1)); let future_of_err_2 = result::<u32, u32>(Err(2));
https://docs.rs/futures/0.1.13/futures/future/fn.result.html
2018-02-18T03:24:20
CC-MAIN-2018-09
1518891811352.60
[]
docs.rs
The amount of memory that you allocate for a virtual machine is the amount of memory that the guest operating system detects. About this task The minimum memory size is 4 MB for virtual machines that use BIOS firmware. Virtual machines that use EFI firmware require at least 96 MB of RAM or they cannot power on. A virtual machine with hardware version 7 that is running on ESXi 5.0 is restricted to 255 GB. Set the memory size using the controls on the right side of the memory bar. - Click Next. The Network page opens. What to do next Select network adapters for the virtual machine.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.hostclient.doc/GUID-7DE71A2A-065B-4B44-B97C-BAAAA7E6CCF9.html
2018-02-18T03:29:02
CC-MAIN-2018-09
1518891811352.60
[]
docs.vmware.com
Theming MediaGoblin¶ We try to provide a nice theme for MediaGoblin by default, but of course, you might want something different! Maybe you want something more light and colorful, or maybe you want something specifically tailored to your organization. Have no fear—MediaGoblin has theming support! This guide should walk you through installing and making themes. Installing a theme¶ Installing the archive¶ Say you have a theme archive such as goblincities.tar.gz and you want to install this theme! Don’t worry, it’s fairly painless. cd ./user_dev/themes/ Move the theme archive into this directory tar -xzvf <tar-archive> Open your configuration file (probably named mediagoblin_local.ini) and set the theme name: [mediagoblin] # ... theme = goblincities Link the assets so that they can be served by your web server: $ ./bin/gmg assetlink Note If you ever change the current theme in your config file, you should re-run the above command! (In the near future this should be even easier ;)) Set up your webserver to serve theme assets¶ If you followed the nginx setup example in FastCGI and nginx you should already have theme asset setup. However,. If you are simply using this for local development and serving the whole thing via paste/lazyserver, assuming you don’t have a paste_local.ini, the asset serving should be done for you. Configuring where things go¶ By default, MediaGoblin’s install directory for themes is ./user_dev/themes/ (relative to the MediaGoblin checkout or base config file.) However, you can change this location easily with the theme_install_dir setting in the [mediagoblin] section. For example: [mediagoblin] # ... other parameters go here ... theme_install_dir = /path/to/themes/ Other variables you may consider setting: - theme_web_path - When theme-specific assets are specified, this is where MediaGoblin will set the urls. By default this is "/theme_static/"so in the case that your theme was trying to access its file "images/shiny_button.png"MediaGoblin would link to /theme_static/images/shiny_button.png. - theme_linked_assets_dir - Your web server needs to serve the theme files out of some directory, and MediaGoblin will symlink the current theme’s assets here. See the “Link the assets” step in Installing the archive. Making a theme¶ Okay, so a theme layout is pretty simple. Let’s assume we’re making a theme for an instance about hedgehogs! We’ll call this the “hedgehogified” theme. Change to where your theme_install_dir is set to (by default, ./user_dev/themes/ … make those directories or otherwise adjust if necessary): hedgehogified/ |- theme.cfg # configuration file for this theme |- templates/ # override templates | '- mediagoblin/ | |- base.html # overriding mediagoblin/base.html | '- root.html # overriding mediagoblin/root.html '- assets/ | '- images/ | | |- im_a_hedgehog.png # hedgehog-containing image used by theme | | '- custom_logo.png # your theme's custom logo | '- css/ | '- hedgehog.css # your site's hedgehog-specific css |- README.txt # Optionally, a readme file (not required) |- AGPLv3.txt # AGPL license file for your theme. (good practice) '- CC0_1.0.txt # CC0 1.0 legalcode for the assets [if appropriate!] The top level directory of your theme should be the symbolic name for your theme. This is the name that users will use to refer to activate your theme. Note It’s important to note that templates based on MediaGoblin’s code should be released as AGPLv3 (or later), like MediaGoblin’s code itself. However, all the rest of your assets are up to you. 
In this case, we are waiving our copyright for images and CSS into the public domain via CC0 (as MediaGoblin does) but do what’s appropriate to you. The config file¶ The config file is not presently strictly required, though it is nice to have. Only a few things need to go in here: [theme] name = Hedgehog-ification description = For hedgehog lovers ONLY licensing = AGPLv3 or later templates; assets (images/css) waived under CC0 1.0 The name and description fields here are to give users an idea of what your theme is about. For the moment, we don’t have any listing directories or admin interface, so this probably isn’t useful, but feel free to set it in anticipation of a more glorious future. Licensing field is likewise a textual description of the stuff here; it’s recommended that you preserve the “AGPLv3 or later templates” and specify whatever is appropriate to your assets. Templates¶ Your template directory is where you can put any override and custom templates for MediaGoblin. These follow the general MediaGoblin theming layout, which means that the MediaGoblin core templates are all kept under the ./mediagoblin/ prefix directory. You can copy files right out of MediaGoblin core and modify them in this matter if you wish. To fit with best licensing form, you should either preserve the MediaGoblin copyright header borrowing from a MediaGoblin template, or you may include one like the following: {# # [YOUR THEME], a MediaGoblin theme # Copyright (C) [YEAR] [YOUR <>. #} Assets¶ Put any files, such as images, CSS, etc, that are specific to your theme in here. You can reference these in your templates like so: <img src="{{ request.staticdirect('/images/im_a_shark.png', 'theme') }}" /> This will tell MediaGoblin to reference this image from the current theme. Licensing file(s)¶ You should include AGPLv3.txt with your theme as this is required for the assets. You can copy this from mediagoblin/licenses/. In the above example, we also use CC0 to waive our copyrights to images and css, so we also included CC0_1.0.txt A README.txt file¶ A README file is not strictly required, but probably a good idea. You can put whatever in here, but restating the license choice clearly is probably a good idea. Simple theming by adding CSS¶ Many themes won’t require anything other than the ability to override some of MediaGoblin’s core css. Thankfully, doing so is easy if you combine the above steps! In your theme, do the following (make sure you make the necessary directories and cd to your theme’s directory first): $ cp /path/to/mediagoblin/mediagoblin/templates/mediagoblin/extra_head.html templates/mediagoblin/ Great, now open that file and add something like this at the end: <link rel="stylesheet" type="text/css" href="{{ request.staticdirect('/css/theme.css', 'theme') }}"/> You can name the css file whatever you like. Now make the directory for assets/css/ and add the file assets/css/theme.css. You can now put custom CSS files in here and any CSS you add will override default MediaGoblin CSS.
https://mediagoblin.readthedocs.io/en/stable/siteadmin/theming.html
2018-10-15T20:10:19
CC-MAIN-2018-43
1539583509690.35
[]
mediagoblin.readthedocs.io
What is Emergency Change in ITIL?
A Change is the act of shifting/transitioning/modifying something from its current state to a desired future state. Any change that is implemented to restore a service or to avoid an outage of a service, especially when it must be deployed immediately, is treated as an Emergency Change.
Each and every emergency change ticket should be recorded so that it can be tracked, monitored, and updated throughout its life cycle; no emergency change can be implemented based on verbal or email communication alone.
Emergency Change Validation
When a Major Incident happens, an Emergency Change may have to be deployed into Production (the trigger for Emergency Change Management is always the Major Incident). The Emergency Change Management process gets triggered by logging an Emergency Change in Remedy with appropriate categorization. The Change Manager will be notified of the Emergency Change, along with its back-out readiness.
Convene ECAB meeting
The Change Manager convenes the E-CAB so that the emergency change can be discussed for approval. The E-CAB is convened based on the urgency of the situation. This meeting may be conducted via a bridge line opened by the Change Manager. Also, E-CAB members will be made aware of the urgency of the meeting. The Change Manager works in close cooperation with the E-CAB; the change cannot proceed if the E-CAB members do not give their approval. A back-out plan has to be in place and is a mandatory requirement for approving the change. While a verbal approval may act as a trigger for the change to be built, it is mandatory to attach an approval email to the RFC. Without a documented approval, an Emergency Change should not be implemented.
Build, Schedule and Implement Change
In case the Change is approved by the E-CAB, the Change is scheduled for implementation. All relevant Technology Leads/Experts are informed via a broadcast mail by the Change Manager. The relevant Resolver Group(s) build, test and implement the change. Testing is performed based on the availability of a Test Environment. Testing, in the case of Emergency Changes, is not as detailed as that performed during a normal or expedited change. The Change Owner coordinates the change with the Release Manager. Minimal testing is carried out to gain confidence and also to avoid post-implementation issues.
Release successful?
Once the Release is deployed (Change implemented), the Resolver Group checks whether the implementation is successful or not. The Change Manager will work with the Major Incident Manager and ensure that the service is restored at the earliest.
Rollback
In case the change is not successful, it is rolled back by the Change Owner.
Complete RFC & Close
Once the Service is restored, the RFC is updated by the Change Owner with complete information. In case of a failed change, the RFC is updated with the relevant details.
PIR
The Change Manager triggers a Post Implementation Review meeting with the Change Owner and other key stakeholders, including the CAB, on <<Day>> to check the impact of Changes on the environment and also to understand whether the Change met its goal or not.
Change successful?
Once the Change is implemented, the Change Manager checks whether the implementation is successful or not. The Change Manager will work with the Major Incident Manager and ensure that the service is restored at the earliest.
Reclassify change
The Change is reclassified as a normal/standard change. The Change Manager seeks business approval to classify the change as a normal/standard change.
http://itil-docs.com/change-management/itil-emergency-change-management-pocess/
2018-10-15T20:03:11
CC-MAIN-2018-43
1539583509690.35
[array(['http://itil-docs.com/wp-content/uploads/2018/05/ITIL-Emergency-Change-Emergency-Change-Management-Process-2.png', 'ITIL Emergency Change, Emergency Change Management Process'], dtype=object) array(['http://itil-docs.com/wp-content/uploads/2018/05/RACI-for-Emergency-Change-Management.png', 'RACI for Emergency Change Management, ITIL Emergency Change'], dtype=object) array(['http://itil-docs.com/wp-content/uploads/2018/05/Emergency-Change-Management-Process.png', 'change management process flow, itil emergency change management process'], dtype=object) ]
itil-docs.com
Office 365 Enterprise Microsoft Office 365 provides powerful online cloud services that enable collaboration, security and compliance, mobility, and intelligence and analytics. This page provides links to guidance for admins and IT Professionals who are deploying, configuring, and managing Office 365 services in enterprise organizations. Guided Deployment with FastTrack Use the FastTrack Center Benefit for Office 365 for guided assistance in planning, deploying, and driving adoption of Office 365 services for your organization. Migrate to Office 365 Migrate your existing on-premises infrastructure to Office 365 and the Microsoft cloud. Office 365 ProPlus Plan, deploy, and manage Office 365 ProPlus in your enterprise environment. Deploy Office 365 Deploy Office 365, including setting up your tenant, configuring your network, and provisioning your users. Hybrid deployments Configure and manage a hybrid deployment between your existing on-premises infrastructure and Office 365. Office 365 Training Improve your Office 365 administration knowledge and skills with Office 365 training courses for IT professionals. Deploy Office 365 Workloads Community & Support Office 365 Tech Community Learn about best practices, news, and the latest trends and topics related to Office 365.
https://docs.microsoft.com/en-us/Office365/Enterprise/?redirectSourcePath=%252fet-ee%252farticle%252fOffice-365-ettev%2525C3%2525B5ttestsenaariumid-e0d73777-f005-44da-9186-f38058b6e640
2018-09-18T21:47:23
CC-MAIN-2018-39
1537267155702.33
[]
docs.microsoft.com
Auto-Paging in Anypoint Connectors When an Anypoint Connector in your flow produces output that is significantly large, processing the load may cause significant performance delays in your application. To prevent this from happening, split the connector’s output into several "pages" for more efficient processing. Within Mule this Anypoint Connector behavior is referred to as "auto-paging" because the connector automatically paginates its output to prevent memory issues. However, if memory use is not an issue, you can forgo any auto-paging configuration and simply treat the entire payload as a single unit. Prerequisites This document assumes that you are familiar with the Anypoint Studio Essentials, Anypoint Connectors and Global Elements. Review the Getting Started with Anypoint Studio chapter to learn more about developing with Mule’s graphical user interface. Configuring Auto-Paging The table below lists the Anypoint Connectors which support auto-paging functionality. Additionally, you can use auto-paging with any custom-built connector which supports auto-paging functionality. To Configure Using Studio Visual Editor To make the Paging section visible in a connector, you must first select an Operation which outputs a collection, for example Get calendar list. Otherwise, Studio does not display the Paging section in the properties editor. Enter an integer in the Fetch Size field to indicate the batch size of objects in a "page". For example, set the Fetch Size to 50to return information in batches of 50 objects. To Configure Using the XML Transform Message component, etc. Because Mule processes the set of pages one page at a time, it prevents memory usage from exceeding its limits. Example Using Studio Visual Editor Drag an HTTP endpoint onto the canvas. Set its Path to authenticate. Create a Connector Configuration element for the HTTP endpoint. Set its Host to localhostand its Port to 8081 Add a Google Calendars connector to the flow, then set its Operation to Authorize. Create a Google Calendars Global Element, then configure its Consumer Key and Consumer secret. Create a new flow dragging in a new HTTP endpoint. Set its Path to get_events. Add a new Google Calendars connector to the new flow. Set its Operation to Get Eventsand its Fetch Size to 50. Add a Foreach scope after the Google Contacts connector, and a Logger inside the Foreach scope. When a message reaches the Google Calendar connector, the Logger outputs a separate message for each object. If there are more than 50 objects, Mule paginates the output. Example Using XML Create a google-contacts Global Element, then define its Consumer Key and Consumer secret. Create an HTTP connector and set the value of its Path to authenticate. <http:listener Outside the flow, create a configuration element that matches the name referenced by the connector. Set the host to localhost and the port to 8081. <http:listener-config Add a Google Calendars connector setting its operation to authorize. <google-contacts:authorize Create a new flow with a new HTTP endpoint. Set the value of its Path to get_events, and reference the same configuration element as the other connector. <http:listener Add a new Google Contacts connector in the new flow setting its operation to get-eventsand fetchSize to 50. <google-calendars:get-events After the Google Calendars connector, add a Foreach to the flow, then add a Logger as a child element inside Foreach element. Example Final Flows. 
Another Paging Example You can call both the size() and the close() functions in any expression that supports MEL. The simple example below illustrates how to call size() in a logger so that it records the total number of objects that the connector is outputting. The example utilizes the Google Contacts connector; an XML sketch of such a logger appears after the links below. See Also - Learn more about the Foreach scope. Need to handle really large payloads? Learn about Mule High Availability HA Clusters.
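As noted above, here is a sketch of a logger that reports the paged collection size. The element follows the Mule 3 XML DSL used in the earlier examples; placing it directly after the auto-paging connector (so that the payload is still the paged collection) is an assumption to verify in your own flow.

<!-- Logs the total number of objects returned by the auto-paging connector. -->
<logger message="Total records returned: #[payload.size()]" level="INFO" doc: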
https://docs.mulesoft.com/mule-user-guide/v/3.8/auto-paging-in-anypoint-connectors
2018-09-18T20:58:39
CC-MAIN-2018-39
1537267155702.33
[array(['./_images/google-contacts-example-flow.png', 'google contacts flow'], dtype=object) ]
docs.mulesoft.com
This DDA board packet provides information about the state of the parking system, with reports detailing revenues for 3Q FY 2009. It includes comparisons with the previous calendar year. The agenda also includes a mention of a presentation by Connie Pulcipher, City of Ann Arbor, on New Public Notification Regulations; that presentation is not included in this packet. Download 050609_DDA_Board_Meeting_Packet.pdf
http://a2docs.org/view/21
2018-09-18T22:10:09
CC-MAIN-2018-39
1537267155702.33
[]
a2docs.org
Specifies whether the List View settings are persisted in cookies when the browser cookie storage is enabled. Namespace: DevExpress.ExpressApp.Web Assembly: DevExpress.ExpressApp.Web.v18.1.dll This property is considered in XAF ASP.NET applications for List Views represented by the ASPxGridListEditor, ASPxTreeListEditor and ASPxSchedulerListEditor. When the SaveStateInCookies and IModelOptionsStateStore.SaveListViewStateInCookies properties are set to true, you can rearrange the List Editor's columns of a List View, and the next time you start the application, this order will be restored. It is recommended to store the application model differences in the database instead of cookies. List View settings may require a lot of space and the browser's cookie size limit may be exceeded (especially, when you use highly configurable List Editors with lots of options like ASPxPivotGridListEditor and ASPxChartListEditor). If the size is exceeded, settings are not persisted. Additionally, cookie storage produces extra traffic on each request that may lead to performance issues.
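For illustration only, a View Controller along these lines could switch the option off in code for the List Views it activates against. This is a sketch rather than a documented recipe: the controller name is invented, and in practice the property is normally set per List View (or via IModelOptionsStateStore.SaveListViewStateInCookies) in the Model Editor.

using DevExpress.ExpressApp;
using DevExpress.ExpressApp.Web;

// Sketch: disable cookie persistence of List View state from code.
public class DisableListViewCookieStateController : ViewController<ListView>
{
    protected override void OnActivated()
    {
        base.OnActivated();
        // View.Model is the List View's node in the Application Model;
        // IModelListViewStateStore exposes the SaveStateInCookies property.
        var stateStore = View.Model as IModelListViewStateStore;
        if (stateStore != null)
        {
            stateStore.SaveStateInCookies = false;
        }
    }
}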
https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Web.IModelListViewStateStore.SaveStateInCookies
2018-09-18T21:07:50
CC-MAIN-2018-39
1537267155702.33
[]
docs.devexpress.com
HBase stores all of its data under its root directory in HDFS, configured with hbase.rootdir. The only other directory that the HBase service will read or write is hbase.bulkload.staging.dir. On HDP clusters, hbase.rootdir is typically configured as /apps/hbase/data, and hbase.bulkload.staging.dir is configured as /apps/hbase/staging. HBase data, including the root directory and staging directory, can reside in an encryption zone on HDFS. The HBase service user needs to be granted access to the encryption key in the Ranger KMS, because it performs tasks that require access to HBase data (unlike Hive or HDFS). By design, HDFS-encrypted files cannot be bulk-loaded from one encryption zone into another encryption zone, or from an encryption zone into an unencrypted directory. Encrypted files can only be copied. An attempt to load data from one encryption zone into another will result in a copy operation. Within an encryption zone, files can be copied, moved, bulk-loaded, and renamed. Make the parent directory for the HBase root directory and bulk load staging directory an encryption zone, instead of just the HBase root directory. This is because HBase bulk load operations need to move files from the staging directory into the root directory. In typical deployments, /apps/hbasecan be made an encryption zone. Do not create encryption zones as subdirectories under /apps/hbase, because HBase may need to rename files across those subdirectories. The landing zone for unencrypted data should always be within the destination encryption zone. On a cluster without HBase currently installed: Create the /apps/hbasedirectory, and make it an encryption zone. Configure hbase.rootdir=/apps/hbase/data. Configure hbase.bulkload.staging.dir=/apps/hbase/staging. On a cluster with HBase already installed, perform the following steps: Stop the HBase service. Rename the /apps/hbasedirectory to /apps/hbase-tmp. Create an empty /apps/hbasedirectory, and make it an encryption zone. DistCp -skipcrccheck -updateall data from /apps/hbase-tmpto /apps/hbase, preserving user-group permissions and extended attributes. Start the HBase service and verify that it is working as expected. Remove the /apps/hbase-tmpdirectory. The HBase bulk load process is a MapReduce job that typically runs under the user who owns the source data. HBase data files created as a result of the job are then bulk loaded in to HBase RegionServers. During this process, HBase RegionServers move the bulk-loaded files from the user's directory and move (rename) the files into the HBase root directory ( /apps/hbase/data). When data at rest encryption is used, HDFS cannot do a rename across encryption zones with different keys. Workaround: run the MapReduce job as the hbase user, and specify an output directory that resides in the same encryption zone as the HBase root directory.
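For a concrete starting point, the setup on a cluster without HBase installed might look like the commands below. This is a sketch: the key name is an assumption, key creation and granting the hbase service user access are normally handled through Ranger KMS, and the distcp preserve flags should be checked against your Hadoop version.

# Create an encryption key (often done through the Ranger KMS UI instead).
hadoop key create hbase-key
# Create the parent directory and make it an encryption zone (run as the HDFS superuser).
hdfs dfs -mkdir -p /apps/hbase
hdfs crypto -createZone -keyName hbase-key -path /apps/hbase
# For an existing install: copy data from the renamed directory,
# preserving user/group, permissions, and extended attributes.
hadoop distcp -skipcrccheck -update -pugpx /apps/hbase-tmp /apps/hbase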
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_security/content/hbase-with-hdfs-encr.html
2018-09-18T22:41:33
CC-MAIN-2018-39
1537267155702.33
[]
docs.hortonworks.com
Kalastatic
Kalastatic is a prototyping framework and static site generator. Featuring:
- Easy installation, with minimal dependencies
- Produces a documented component library
- Outputs production-ready styles that can be ingested by other systems
- Browsersync:
  - Built-in webserver
  - Live reload - saving files reloads the browser
  - Remote device access - load your local site on a mobile device!
- Node:
  - Automated download of front-end frameworks and other dependencies
  - Automated deployments
- Twig, or your template engine of choice:
  - Easy creation of extendable template variations with inheritance
- Convenience utilities:
  - Cache busting
  - Deep linking (url fragments)
  - Character limit filters
  - Splits CSS files for IE compatibility
At Kalamuna we use Kalastatic to put into practice atomic web design principles to produce a living styleguide/component library that can be used to guide back-end implementations in a framework-agnostic approach. It integrates tightly with Drupal 7 and 8, effectively sharing Twig templates between the styleguide, prototype and the CMS. Kalastatic serves as a point of convergence between front-end development, back-end development, and content strategy. Ultimately it facilitates putting design first, and putting this in front of users for testing and stakeholders for meaningful and timely feedback.
Overview
Benefits
Stakeholders
Clients
- Small uncertainties in communication get ironed out much earlier.
- Real, demonstrable progress happens early.
- Many concerns can be addressed during the project, instead of waiting for certain milestones.
- The whole process becomes more participatory.
Agency, PMs, Account Managers
- The client never sees a barebones generic site during demos.
- From first contact, their branding, typography and colors are in place. This avoids uncertainty, stress and education.
- Specific client feedback happens earlier on assets that are cheaper to fix as a prototype than a back-end build.
User Experience
- We can test assumptions with stakeholders earlier.
- We can "show not tell" more effectively.
- It's easier to communicate with stakeholders about abstractions when we are looking at something concrete.
Frontend Dev
- Can work in tools commonly used in the trade.
- Now in control of markup, as opposed to working around it.
- Can be involved earlier, and stick around later in the process.
- We begin working through responsive issues as soon as we begin styleguiding; this results in more successful first passes, fewer surprises, and better decisions about our responsive/adaptive patterns.
Content Strategist
- Doesn't have to wait for the CMS to be in place to see content in-situ.
- Integrations with third-party content staging systems like Prismic and GatherContent.
Backend developers
- Documented components can clarify implementation needs in code conversations with the front-end team.
- JSON mock data tells me what needs to be made available to templates.
Features
Styleguide
What's a styleguide?
A web styleguide offers a way of ensuring consistency between brand, design and code. Herein we are looking at documenting every component and its code on the site in one place to ensure "same-pagey" communications between designers, front-end developers and developers. The pattern portfolio expresses every component and layout structure throughout the site. It articulates the atomic design structure, and is used to illustrate the project's shared vocabulary.
The Kalastatic Styleguide
Website styleguides serve as a pattern library, but can also serve as a brand styleguide, ensuring consistency and conformance in the use of brand assets. The styleguide not only ensures that new front-end development can follow established patterns, but also facilitates the creation of on-brand ancillary digital properties. Its compiled CSS and JS assets can be referenced and consumed by third-party services as well, to create harmonious expressions across multiple systems.
Kalastatic uses kss-node as the basis for its styleguide. Kalastatic uses the KSTAT-KSS-Builder to generate the styleguide, which extends some of the documentation features to make it better suited for documenting colors and other brand-related style concerns.
Prototype
To provide working, responsive prototypes, we use Metalsmith and a bevy of other tools. Prototyping is most useful for considering the components within layouts, side by side with other elements. Where the styleguide documents components in isolation, prototyping helps us see all the bits in context, and even develop behaviors (JS) and other integrations, before we dive into CMSs and app-frameworks. Prototypes can be created at will, and draw upon the family of defined components in the system to build out pages, complete with custom content.
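To give a flavor of the kss-node documentation mentioned above, a component is typically described by a structured comment next to its styles, along these lines (a sketch; section naming and exact builder behavior vary):

/*
Buttons

A standard call-to-action button used across the prototype.

Markup:
<button class="btn {{modifier_class}}">Press me</button>

.btn--primary - Emphasized variant for primary actions.

Styleguide components.buttons
*/
.btn { /* base button styles */ }
.btn--primary { /* emphasized variant */ }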
https://kalastatic.readthedocs.io/en/latest/
2018-09-18T21:44:40
CC-MAIN-2018-39
1537267155702.33
[]
kalastatic.readthedocs.io
Personas The Personas view lists information about the personas you have created in your environment. A persona is a distinct personality you use for your Genesys Intelligent Automation applications. For example, you might create a distinct persona for each language your company serves. as such, you can use one persona for English-speaking customers and another persona for French-speaking customers. In addition, each persona can use distinct pre-recorded prompts. This is helpful if you want to add distinct personas within a language group to appeal to various subsets of customers. You can have one persona that deals with general English-speaking calls, and another persona that caters to known callers from a particular age group, region, segment level, and more. You can upload your own dynamic prompts to use with personas as a superior alternative to TTS (Text-to-Speech) prompts. You can also create visual themes for each persona to use with WebIVR applications. Personas tab The Personas tab lists the personas you have created. Intelligent Automation comes pre-installed with a default persona, shown below: This displays the following information about the persona: - Default Persona - The name of the persona. - Uses en-gb for both prompts and speech recognition - The language used by the persona (in this case, British English). It also states this language is used for both prompts and speech recognition. - Dynamic playback prompts - This persona uses TTS (text-to-speech) for verbalizing information to the caller. However, if you have uploaded dynamic prompts, you can choose the prompt package here. - Create associated visual alternative - If enabled, Intelligent Automation creates a visual persona for use in WebIVR applications. - Visual Theme - If this persona is used in a WebIVR application, this menu allows you to select which theme to use. Creating a persona Click Add New Persona to create a persona for your company. The new persona appears in the list: Configure the following: - New Persona - The name of the persona. Choose a name that concisely describes the persona's function. In some cases, this might be as simple as a language (English or French). However, if you want to use more than one persona per language, use a name that describes its purpose (for example, English - Gold Segment). - Prompts language - Select the language that this persona uses for TTS prompts. - Speech recognition language - Select the language that this persona uses for speech recognition. This is often the same language you selected for Prompts language, but you can choose another language for speech recognition if needed (for example, if non-native speakers are frequently misunderstood by a particular language's speech-recognition engine and you want to use an alternative). - Display flag - Select a flag to identify your persona. This icon is seen in the Callflow Editor for specific blocks, such as Message blocks, that allow you to select a persona. - Dynamic playback prompts - Select whether to use Text-to-Speech or a dynamic prompt package that you previously uploaded. - Create associated visual alternative - If enabled, Intelligent Automation creates a visual persona for use in WebIVR applications. - Visual Theme - If this persona is used in a WebIVR application, this drop-down menu allows you to select which theme to use. Click Save Persona Changes. Editing a persona Click the change this link within a persona to change its details. 
You can configure any of the fields described in the Creating a new persona section. Click Save Persona Changes. Deleting a persona Click the Delete persona link within a persona to delete it, then click Save Persona Changes. To the left of the Delete persona link, Intelligent Automation states how many applications or modules are currently using this persona. Ensure you understand the risk of deleting a persona that is being used in an application or module. Dynamic Prompt Uploads tab The Dynamic Prompt Uploads tab lists the dynamic prompts you have uploaded to your environment. Intelligent Automation uses dynamic prompts to give applications more natural-sounding language when speaking dynamic information back to customers. For example, when giving a calendar date to a customer, TTS (Text-to-Speech) might sound more uneven ("January One Two Zero One Seven" for January 1, 2017). The TTS voice might also not be in the tone or dialect that your callers expect. However, with dynamic prompts, you can use a native speaker to provide snippets of sounds that Intelligent Automation uses to concatenate more natural-sounding language for callers ("January First Twenty Seventeen" for January 1, 2017). Before you can upload a new dynamic prompts package, you must prepare a ZIP file that contains recordings of the various sounds needed to produce a dynamic prompt. For example, you must have a speaker record sounds of the alphabet, numbers, times, dates, and more. These files must be saved with the exact filename provided by Intelligent Automation. To view a list of the required sounds and filenames, click Upload new Dynamic Prompts. In the Prompt Set to Use menu, select a language (for example, medium en-gb for British English with a medium-sized subset of sounds), then click View Prompt List. Note the filename used for each sound. Your recording package must include all of the listed filenames, and the filenames must be an exact match. Uploading a new dynamic prompts package Once the package is ready, click Upload new Dynamic Prompts. In the Prompt Set to Use menu, select a language. In the Unique Upload Name field, provide a name for this dynamic prompts package. Choose a descriptive name that describes the purpose of the dynamic prompts. Click Choose File to select the ZIP file on your computer, and then click Upload Prompts to upload the file. Viewing information about your dynamic prompts Once you have uploaded a package of dynamic prompts, the list updates to show information about the package. For example: In the example above, you can view the following information: - Prompt Set - The language set for these prompts. - Upload Name - The name given in the Unique Upload Name field when the package was uploaded. - Upload Date - The date the package was uploaded. - Supported Currencies - The currencies this package supports. In this example, it supports this language's default currency. However, you might have a package that supports prompts for euros, pounds, dollars, and more. If so, these currencies are listed in this field. Downloading a dynamic prompts package In the Actions column, click download to download a ZIP file of the dynamic prompts package. Deleting a dynamic prompts package In the Actions column, click delete to delete the dynamic prompts package. Intelligent Automation displays a warning message that states any prompts using this package will revert to TTS (Text-to-Speech). If you understand the warning and agree to the deletion, click OK. 
Themes tab The Themes tab lists the themes available in your environment for WebIVR applications. By default, your Intelligent Automation installation comes with the Genesys Blue theme. However, you can create your own theme to suit your business needs. Creating a new theme Click Create new Theme. The Edit Theme screen appears. In the Name field, enter a unique name for your theme that describes its style and purpose. For example, you might call a theme Sales - Red, to indicate the theme is used by your company's sales department and the theme is based on the color red. In the Colour Palettes section, specify which colors are available for use in this theme. You can click the X beside a color to remove it from the palette, making it unavailable for selection when configuring this theme. Or, you can add a color by clicking the + button. When you add a new color, Intelligent Automation displays a color-picker screen to allow you to customize the color. Click Set Colour when done to save the color to the theme's palette. Below the color palette, you can define CSS-based settings for everything from the theme's header to the appearance of validation messages. At the bottom of the settings list is a section called CSS Override. You can provide custom CSS in this field to further customize your theme. Any CSS specified in the CSS Override section supersedes CSS settings in the other sections. For example, if you set a particular border style in the Header section but then specified a different setting in the CSS Override section, the CSS Override setting is used.
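As an illustration, a CSS Override entry might hold rules like the sketch below. The selectors are placeholders, since the actual class names depend on the markup your WebIVR pages emit; inspect a rendered page to find the right ones.

/* Sketch: adjust the header and buttons for a "Sales - Red" theme. */
.header {
  background-color: #b30000;
  color: #ffffff;
}
button {
  border-radius: 4px;
}
/* Anything declared here supersedes the equivalent settings made in the sections above. */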
https://docs.genesys.com/Documentation/GAAP/latest/iaHelp/Personas
2018-09-18T21:32:10
CC-MAIN-2018-39
1537267155702.33
[]
docs.genesys.com
API Usage¶ Standard Django Cache API¶ get(self, key[, default=None]): Retrieves a value from the cache. add(self, key, value[, timeout=DEFAULT_TIMEOUT]): Add a value to the cache, failing if the key already exists. set(self, key, value, timeout=DEFAULT_TIMEOUT): Sets a value to the cache, regardless of whether it exists. If timeout == None, then cache is set indefinitely. Otherwise, timeout defaults to the defined DEFAULT_TIMEOUT. delete(self, key): Removes a key from the cache delete_many(self, keys[, version=None]): Removes multiple keys at once. clear(self[, version=None]): Flushes the cache. If version is provided, all keys under the version number will be deleted. Otherwise, all keys will be flushed. get_many(self, keys[, version=None]): Retrieves many keys at once. set_many(self, data[, timeout=None, version=None]): Set many values in the cache at once from a dict of key/value pairs. This is much more efficient than calling set() multiple times and is atomic. incr(self, key[, delta=1]): Add delta to value in the cache. If the key does not exist, raise a ValueError exception. incr_version(self, key[, delta=1, version=None]): Adds delta to the cache version for the supplied key. Returns the new version. Cache Methods Provided by django-redis-cache¶ has_key(self, key): Returns True if the key is in the cache and has not expired. ttl(self, key): Returns the ‘time-to-live’ of a key. If the key is not volatile, i.e. it has not set an expiration, then the value returned is None. Otherwise, the value is the number of seconds remaining. If the key does not exist, 0 is returned. delete_pattern(pattern[, version=None]): Deletes keys matching the glob-style pattern provided. get_or_set(self, key, func[, timeout=None]): Retrieves a key value from the cache and sets the value if it does not exist. reinsert_keys(self): Helper function to reinsert keys using a different pickle protocol version. persist(self, key): Removes the timeout on a key. Equivalent to setting a timeout of None in a set command. :param key: Location of the value :rtype: bool expire(self, key, timeout): Set the expire time on a key
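Taken together, the extension methods above can be exercised as in the following sketch. It assumes CACHES is already configured to use this backend; the key names and the build_report() helper are placeholders.

from django.core.cache import cache

def build_report():
    # Placeholder for an expensive computation.
    return {"rows": 42}

cache.set('greeting', 'hello', timeout=60)
print(cache.ttl('greeting'))       # seconds remaining; None if non-volatile, 0 if missing
cache.persist('greeting')          # drop the timeout but keep the key
cache.expire('greeting', 120)      # apply a new timeout

# Compute-and-store in one call; the callable only runs on a cache miss.
report = cache.get_or_set('daily_report', build_report, timeout=3600)

cache.delete_pattern('session:*')  # glob-style bulk delete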
http://django-redis-cache.readthedocs.io/en/latest/api.html
2017-03-23T08:10:49
CC-MAIN-2017-13
1490218186841.66
[]
django-redis-cache.readthedocs.io
RackHD supports event notification via both web hook and AMQP. A web hook allows applications to subscribe certain RackHD published events by configured URL, when one of the subscribed events is triggered, RackHD will send a POST request with event payload to configured URL. RackHD also publishes defined events over AMQP, so subscribers to RackHD’s instance of AMQP don’t need to register a webhook URL to get events. The AMQP events can be prolific, so we recommend that consumers filter events as they are received to what is desired. All published external events’ payload formats are common, the event attributes are as below: The table of type, typeId, action and severity for all external events Example of heartbeat event payload: { "version": "1.0", "type": "heartbeat", "action": "updated", "typeId": "kickseed.example.com.on-taskgraph", "severity": "information", "createdAt": "2016-07-13T14:23:45.627Z", "nodeId": "null", "data": { "name": "on-taskgraph", "title": "node", "pid": 6086, "uid": 0, "platform": "linux", "release": { "name": "node", "lts": "Argon", "sourceUrl": "", "headersUrl": "" }, "versions": { "http_parser": "2.7.0", "node": "4.7.2", "v8": "4.5.103.43", "uv": "1.9.1", "zlib": "1.2.8", "ares": "1.10.1-DEV", "icu": "56.1", "modules": "46", "openssl": "1.0.2j" }, "memoryUsage": { "rss": 116531200, "heapTotal": 84715104, "heapUsed": 81638904 }, "currentTime": "2017-01-24T07:18:49.236Z", "nextUpdate": "2017-01-24T07:18:59.236Z", "lastUpdate": "2017-01-24T07:18:39.236Z", "cpuUsage": "NA" } } Example of node discovered event payload: { "type": "node", "action": "discovered", "typeId": "58aa8e54ef2b49ed6a6cdd4c", "nodeId": "58aa8e54ef2b49ed6a6cdd4c", "severity": "information", "data": { "ipMacAddresses": [ { "ipAddress": "172.31.128.2", "macAddress": "2c:60:0c:ad:d5:ba" }, { "macAddress": "90:e2:ba:91:1b:e4" }, { "macAddress": "90:e2:ba:91:1b:e5" }, { "macAddress": "2c:60:0c:c0:a8:ce" } ], "nodeId": "58aa8e54ef2b49ed6a6cdd4c", "nodeType": "compute" }, "version": "1.0", "createdAt": "2017-02-20T06:37:23.775Z" } The change of resources managed by RackHD could be retrieved from AMQP messages. ALl the fields in routing key exists in the common event payloads event_payload. Examples of routing key: Heartbeat event routing key of on-tftp service: heartbeat.updated.information.kickseed.example.com.on-tftp Polleralert sel event routing key: polleralert.sel.updated.critical.44b15c51450be454180fabc.57b15c51450be454180fa460 Node discovered event routing key: node.discovered.information.57b15c51450be454180fa460.57b15c51450be454180fa460 Graph event routing key: graph.started.information.35b15c51450be454180fabd.57b15c51450be454180fa460 All the events could be filtered by routing keys, for example: All services’ heartbeat events: $ sudo node sniff.js "on.events" "heartbeat.#" All nodes’ discovered events: $ sudo node sniff.js "on.events" "#.discovered.#" ‘sniff.js’ is a tool located at The web hooks used for subscribing event notification could be registered by POST <server>/api/current/hooks API as below curl -H "Content-Type: application/json" -X POST -d @payload.json <server>api/current/hooks The payload.json attributes in the example above are as below: When a hook is registered and eligible events happened, RackHD will send a POST request to the hook url. POST request’s Content-Type will be application/json, and the request body be the event payload. 
An example of payload.json with minimal attributes: { "url": "" } When multiple hooks are registered, a single event can be sent to multiple hook urls if it meets hooks’ filtering conditions. The conditions of which events should be notified could be specified in the filters attribute in the hook_payload, when filters attribute is not specified, or it’s empty, all the events will be notified to the hook url. The filters attribute is an array, so multiple filters could be specified. The event will be sent as long as any filter condition is satisfied, even if the conditions may have overlaps. The filter attributes are type, typeId, action, severity and nodeId listed in event_payload. Filtering by data is not supported currently. Filtering expression of hook filters is based on javascript regular expression, below table describes some base operations for hook filters: An example of multiple filters: { "name": "event sets", "url": "", "filters": [ { "type": "node", "nodeId": "57b15c51450be454180fa460" }, { "type": "node", "action": "discovered|updated", } ] } Create a new hook Delete an existing hook DELETE /api/2.0/hooks/:id Get a list of hooks GET /api/2.0/hooks Get details of a single hook GET /api/2.0/hooks/:id Update an existing hook PATCH /api/2.0/hooks/:id { "name": "New Hook" }
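Besides web hooks, a small AMQP consumer can subscribe to the same events directly. The following is a sketch using the amqplib package: the broker URL is an assumption, while the "on.events" exchange and the routing-key patterns come from the examples above.

// Sketch: consume RackHD node-discovered events from the on.events topic exchange.
var amqp = require('amqplib');

amqp.connect('amqp://localhost')
  .then(function (conn) { return conn.createChannel(); })
  .then(function (ch) {
    return ch.assertQueue('', { exclusive: true }).then(function (q) {
      // Use '#' to receive everything, or narrow the pattern as needed.
      ch.bindQueue(q.queue, 'on.events', '#.discovered.#');
      return ch.consume(q.queue, function (msg) {
        console.log(msg.fields.routingKey, msg.content.toString());
      }, { noAck: true });
    });
  });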
http://rackhd.readthedocs.io/en/latest/rackhd/event_notification.html
2017-03-23T08:11:46
CC-MAIN-2017-13
1490218186841.66
[]
rackhd.readthedocs.io
VDSP++ is an excellent development system for signal processing algorithms, however, it is not equipped to provide true operating system services like the Linux environment. It may be useful for some people to take C, C++ or assembly code algorithms that were written for the VDSP compiler, and recompile it with gcc, g++ or gas, in order that it can be used as part of a Linux application. Since Linux is a full-featured operating system, the application environment differs from the VDSP environment in many ways. Any porting work must be evaluated on an individual case. This page collects some common guidelines and tricks that deal with certain special cases. Once an algorithm/application/library has been ported to gcc and debugged under the Linux kernel environment, it can be used just like any other Linux application/library. Here are the equivalent tools. For those moving from VDSP++ to Linux and gcc, you also may be interested in the section on libs, to see how to use the existing libbfdsp. However, porting standalone algorithms to Linux is a non-trivial task, as most standalone algorithms are not architected in a manner that are capable of running in a full featured Operating System like Linux, and must take the following items into consideration. Even the most simple algorithms must follow these rules: Many of these design guidelines increase overhead, but this is just the way Linux works (they are done that way on purpose) - if you don't want to port your algorithm to an environment with this type of design (Linux kernel) - there are many other frameworks - like VDK, which can be used. We can not support attempts to work around these design rules. Doing so will cause the kernel to be unstable and make the project impossible to debug. There have also been people asking about debugging the kernel or Linux using a non-gdb debugger. A thorough understanding of the Operating System is required to understand why this is not possible. Here are a few of the key issues: Developing with Linux requires that you learn the kernel architecture, and proper tools usage - gdb.
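As a rough illustration of the recompile step only, rebuilding a VDSP++ C algorithm as part of a Linux user-space application might look like the lines below. The toolchain prefix is an assumption (it depends on which Blackfin toolchain your distribution installs), and a real port still needs the operating-system design rules discussed above.

# Compile the algorithm with the GNU toolchain instead of the VDSP compiler.
bfin-linux-uclibc-gcc -O2 -Wall -c filter.c -o filter.o
# Link it into an ordinary Linux application.
bfin-linux-uclibc-gcc -O2 main.c filter.o -o filter_app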
https://docs.blackfin.uclinux.org/doku.php?id=visualdsp:port_program_to_linux
2017-03-23T08:18:02
CC-MAIN-2017-13
1490218186841.66
[]
docs.blackfin.uclinux.org
- Reference > mongoShell Methods > - Database Methods > - db.getLastErrorObj() db.getLastErrorObj()¶ On this page Definition¶ db. getLastErrorObj()¶ Specifies the level of write concern for confirming the success of previous write operation issued over the same connection and returns the document for that operation. When using db.getLastErrorObj(), clients must issue the db.getLastErrorObj()on the same connection as the write operation they wish to confirm. The db.getLastErrorObj()is a mongoshell wrapper around the getLastErrorcommand. Changed in version 2.6: A new protocol for write operations integrates write concerns with the write operations, eliminating the need for a separate db.getLastErrorObj(). Most write methods now return the status of the write operation, including error information. In previous versions, clients typically used the db.getLastErrorObj()in combination with a write operation to verify that the write succeeded. The db.getLastErrorObj()can accept the following parameters: Behavior¶ The returned document provides error information on the previous write operation. If the db.getLastErrorObj() method itself encounters an error, such as an incorrect write concern value, the db.getLastErrorObj() throws an exception. For information on the returned document, see getLastError command. Example¶ The following example issues a db.getLastErrorObj() operation that verifies that the preceding write operation, issued over the same connection, has propagated to at least two members of the replica set. db.getLastErrorObj(2) See also
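Building on the example above, a script would normally inspect the returned document rather than just issue the call. A sketch (the collection name is arbitrary; field names follow the getLastError command):

// Perform a write, then confirm it on the same connection.
db.products.insert({ sku: "abc123", qty: 250 })
var status = db.getLastErrorObj(2, 5000)   // w: 2, wtimeout: 5000 ms
if (status.err != null) {
    print("write was not confirmed within the write concern: " + status.err)
}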
https://docs.mongodb.com/master/reference/method/db.getLastErrorObj/
2017-03-23T08:18:35
CC-MAIN-2017-13
1490218186841.66
[]
docs.mongodb.com
Puppet 3.0 — 3.4 Release Notes Included in Puppet Enterprise 3.2. A newer version is available; see the version menu above for details. This page documents the history of the Puppet 3 series. Starting from version 3.0.0, Puppet is semantically versioned with a three-part version number. In version X.Y.Z: - X must increase for major backwards-incompatible changes. - Y may increase for backwards-compatible new functionality. - Z may increase for bug fixes. Note: In general, you should upgrade the puppet master servers before upgrading the agents they support. Also, before upgrading, look above at the table of contents for this page. Identify the version you’re upgrading TO and any versions you’re upgrading THROUGH, and check them for a subheader labeled “Upgrade Warning,” which will always be at the top of that version’s notes. If there’s anything special you need to know before upgrading, we will put it here. Puppet 3.4.3 Released February 19, 2014. 3.4.3 is a bug fix release in the Puppet 3.4 series. Bug Fixes PUP-1473: User resource fails on UTF-8 comment Puppet’s user resource now supports UTF-8 characters for the comment attribute, rather than just ASCII. PUP-736: Encoding mis-matches cause package prefetching to fail Previously, puppet could fail to process a package resource if it referred to an RPM whose description contained non-ASCII characters. Puppet now handles these resources correctly. PUP-1524: Duplicate events since 3.4.0 Since Puppet 3.4.0, failed resources would sometimes be logged twice. These duplicate events were particularly problematic for PuppetDB, since they could cause the whole transaction to be rolled back. This release fixes the issue. PUP-1485: test agent/fallback_to_cached_catalog.rb assumes no master is running by default The acceptance test for falling back to a cached catalog would still run with a puppet master, even though the functionality assumed that the puppet master was unavailable. The test now guarantees that the master will be unreachable by specifying a bogus server. PUP-1322: Puppet REST API always fails if there’s at least one broken manifest Previously, REST API calls to <host>/production/resource_types/*?kind=class would fail completely if there was a syntax error in one or more manifests in the module path. This release changes that behavior so that the call will succeed and list all parseable classes. PUP-1529: Usability regression caused by PUP-1322 Fixes a regression caused by the fix for PUP-1322: syntax errors reached while loading an include would be squelched, which would eventually result in a misleading “class not found” error. PUP-751: Performance regression due to excessive file watching This performance regression was also linked to PUP-1322: excessive file watching was causing a significant slowdown during catalog compilation. This release addresses the performance hit and improves benchmarking by adding tasks to measure the loading of defined types. PUP-1729: Remove Debian Sid from build targets Acceptance testing on Debian Sid (unstable) was failing regularly due to factors outside of our control, like broken packages in the distribution’s own repositories. Acceptance tests are still being run against the Debian “testing” release. Windows-Specific Fixes PUP-1211: Puppet Resource Package fails On Windows, the puppet resource package command would fail immediately if at least one of the installed packages had non-ASCII characters in its display name. 
Puppet will now use the correct string encoding on Windows, which fixes this bug. PUP-1389: Windows File resources generating 0 byte files when title has “-“ This bug prevented puppet from properly managing the content of file resources with “-“ in their titles on Windows. This release fixes the bug. PUP-1411: Windows agents experience intermittent SSL_connect failures in acceptance testing Acceptance tests would intermittently fail on Windows due to a bug involving OpenSSL and WEBrick that would cause the connection to time out after 6.2 seconds. This release improves the OpenSSL initialization process and extends the timeout interval to 10 seconds, which fixes the bug. Puppet 3.4.2 Released January 6, 2014. 3.4.2 is a bug fix release in the Puppet 3.4 series. Bug Fixes PUP-724: Could not autoload puppet /util /instrumentation /listeners /log” This bug could cause a failure while autoloading puppet/util/instrumentation/listeners/log.rb. It was related to the way that puppet compared Ruby Time instances, which would sometimes differ when they shouldn’t. PUP-1015: Could not intialize global default settings… This regression was introduced in Puppet 3.4.0 and prevented Foreman from functioning properly. PUP-1099: Incorrect permissions in RPMs This caused some example file permissions to be set incorrectly on RHEL6. PUP-1144: No longer allows variables with leading underscores This caused the the experimental future parser to reject variable names that started with $_. It was introduced in Puppet 3.4.0. PUP-1255: Default file mode is now 0600 instead of 0644 The default mode for file resources was changed from 0644 to 0600 in Puppet 3.4.1. This release restores the previous behavior. Puppet 3.4.1 Released December 26, 2013. 3.4.1 is a security fix release of the Puppet 3. This created a vulnerability in which an attacker could make the name a symlink to another file and thereby cause puppet agent to overwrite something it did not intend to. Puppet 3.4.0 Released December 19, 2013. (RC1: Dec. 3. RC2: Dec. 10.) 3.4.0 is a backward-compatible feature and fix release in the Puppet 3 series. The main improvements of this release are: - Fixes for some high-profile bugs, including the “anchor pattern” issue and broken RDoc on Ruby 1.9+ - New certificate autosigning behavior to help quickly and securely add new nodes in elastic environments - Windows improvements, especially for fileresources - Trusted node data in the compiler It introduces one known regression, PUP-1015, for users who use Foreman’s provisioning tools. If you use Foreman for provisioning, you should wait and upgrade to 3.4.2. New contain Function Removes Need for “Anchor Pattern” Puppet now includes a contain function to allow classes to contain other classes. It works similarly to the include function, with the added effect of creating a containment relationship. For more information, see: - The containment page of the language reference, for background information about class containment issues and an explanation of the anchor pattern. - The classes page of the language reference, for complete information on declaring classes with contain, include, and more. (Issue 8040, PUP-99) Policy-Based Certificate Autosigning Puppet can now use site-specific logic to decide which certificate signing requests (CSRs) should be autosigned. This feature is based on custom executables, which can examine each CSR as it arrives and use any decision-making criteria you choose. 
Prior to 3.4, Puppet would accept a whitelist of nodes whose requests should be autosigned. This wasn’t very flexible, and didn’t allow things like using a preshared key to verify the legitimacy of a node. This is now very possible, and works especially well when combined with the next new feature (custom CSR attributes). For details, see: - The “Policy-Based Autosigning” section of the autosigning reference page - Documentation for the autosignsetting (Issue 7244, PUP-664, PUP-453) Custom Data in CSRs and Certificates It is now possible for puppet agent nodes to insert arbitrary data into their certificate signing requests (CSRs). This data can be used as verification for policy-based autosigning (see above), and may have more applications in the future. Two kinds of custom data are available: “custom attributes,” which are discarded once the certificate is signed, and “certificate extensions,” which persist in the signed certificate. For details on custom CSR data, see: - The “CSR Attributes and Certificate Extensions” reference page - Documentation for the csr_attributessetting (Issue 7243, PUP-669, PUP-670, PUP-664) Priority Level Can Be Set for Puppet Processes Puppet’s processes, including puppet agent and puppet apply, can now lower or raise their own priority level using the priority setting. (Note that they can’t raise their priority unless they are running as a privileged user.) This is especially useful for making sure resource-intensive Puppet runs don’t interfere with a machine’s real duties. Manifest Documentation (RDoc/Puppetdoc) Works on Ruby 1.9+ Puppet manifests can be documented with RDoc-formatted text in comments above each class or defined type, and you can run puppet doc --outputdir /tmp/rdoc to extract that documentation and generate HTML with it. However, this has never worked when running Puppet under Ruby 1.9 or higher. As of this release, building documentation sites with puppet doc works under Ruby 1.9 and 2.0. Note that any existing problems with the puppet doc command still apply — it sometimes skips certain classes with no clear reason, and there are various formatting glitches. We are still investigating more reliable and convenient ways to display Puppet code documentation, and will probably be using Geppetto as a foundation for future efforts. New $trusted Hash With Trusted Node Data Since at least Puppet 2.6, the Puppet compiler receives a special $clientcert variable that contains the node’s certificate name. However, this variable is self-reported by agent nodes and is not verified by the puppet master. This means $clientcert might contain more or less anything, and can’t be trusted when deciding whether to insert sensitive information into the catalog. As of 3.4, you can configure the puppet master to verify each agent node’s certname and make it available to the compiler as $trusted['certname']. To do this, you must set the trusted_node_data setting to true in the master’s puppet.conf. See the language documentation about special variables for more details. File Resources Can Opt Out of Source Permissions Traditionally, if file resources did not have the owner, group, and/or mode permissions explicitly specified and were using a source file, they would set the permissions on the target system to match those of the source. This could cause files to be insecure or too secure on Windows systems being managed by a Linux puppet master. (And even in all-*nix environments, it often isn’t the desired behavior.) 
Now, you can opt out of source permissions using the file type’s source_permissions attribute. This can be done per-resource, or globally with a resource default in site.pp. As part of this, the previous default behavior ( source_permissions => use) is now deprecated on Windows; the default for Windows is expected to change to ignore in Puppet 4.0. (Issue 5240, Issue 18931) Windows Improvements Puppet’s Windows support continues to get better, with improvements to resource types and packaging. File Type Improvements - Puppet now supports managing symlinks on Windows. (Issue 19447, PUP-262) See the tips for using file resources on Windows for more information. - A permissions mode is no longer required when specifying the file owner and group. (Issue 11563, PUP-264) - You can now opt out of source file owner/group/mode permissions (see above). (Issue 5240, PUP-265) - Puppet will no longer create files it can’t edit. (Issue 15559, PUP-263) Package Type Improvements - The Windows package provider now has the versionablefeature, which allows for easier package upgrades. (Issue 21133) See the tips for using package resources on Windows for more information. Group Type Improvements - You can now add domain users to the local Administrators group. (Issue 17031, PUP-255) Exec Type Improvements - Puppet will now accurately capture exit codes from exec resources on Windows. (Previously, exit codes higher than 255 were mangled: Puppet would report modulo 255 of the actual exit code, such that exit code 257 would appear as 2.) (Issue 23124, PUP-434) Packaging and Installer Improvements - The Windows Puppet installer has several new MSI properties for automated installation, which can set the service user and startup mode. (Issue 21243, Issue 18268, PUP-386, PUP-387) - The Windows installer now puts Puppet on the PATH, so a special command prompt is no longer necessary. (Issue 22700, PUP-415) - Windows installer options can now override existing settings. (Issue 20281, PUP-388) New puppet cert reinventory Command As part of the fix for issue 693/23074, the Puppet CA no longer rebuilds the certificate inventory for each new certificate. However, rebuilding the inventory can still be helpful, generally when you have a large inventory file with a high percentage of old revoked certificates. When necessary, it can now be done manually by running puppet cert reinventory when your puppet master is stopped. RPM Package Provider Now Supports install_options Package resources using the rpm package provider can now specify command-line flags to pass to the RPM binary. This is generally useful for specifying a --prefix, or for overriding macros like arch. HTTP API Documentation Puppet’s HTTP API endpoints now have extensive documentation for the formatting of their requests and the objects they return. For version-specific endpoint documentation, see the HTTP API section of the developer docs. (PUP-124, PUP-125, PUP-126, PUP-127, PUP-128, PUP-131, PUP-133, PUP-134, PUP-135, PUP-136, PUP-137, PUP-138, PUP-139) Msgpack Serialization (Experimental) Puppet agents and masters can now optionally use Msgpack for all communications. This is an experimental feature and is disabled by default; see the Msgpack experiment page for details about it. Changes to Experimental Future Parser Several changes were made to the experimental lambda and iteration support included in the future parser. The documentation has been updated to reflect the changes; see the “Experimental Features” section in the navigation sidebar to the left. 
- Remove alternative lambda syntaxes (Issue 22962, PUP-354) - Remove “foreach” function (Issue 22784, PUP-503) - Fix mixed naming of map/collect - reduce (Issue 22785, PUP-504) - Remove the iterative ‘reject’ function (Issue 22729, PUP-505) - Iterative function ‘select’ should be renamed to ‘filter’ (Issue 22792, PUP-537) - Future parser lexer does not handle all kinds of interpolated expressions (Issue 22593, PUP-352) - Variable names with uppercase letters are not allowed (Issue 22442) Preparations for Syncing External Facts Puppet can now pluginsync external facts to agent nodes… but it’s not very useful yet, since Facter can’t yet load those facts. End-to-end support is planned for next quarter, in Facter 2.0. Miscellaneous Improvements - Allow profiling on puppet apply. Previously, the profiling features added for Puppet 3.2 were only available to puppet agent; now, puppet apply can log profiling information when run with --profileor profile = truein puppet.conf. (Issue 22581, PUP-341) - Mount resources now autorequire parent mounts. (Issue 22665, PUP-450) - Class main now appears in containment paths in reports. Previously, it was represented by an empty string, which could be confusing. This is mostly useful for PuppetDB. (Issue 23131, PUP-278) Puppet::Util.executenow offers a way to get the exit status of the command — the object it returns, which was previously a Stringcontaining the command’s output, is now a subclass of Stringwith an #exitstatusmethod that returns the exit status. This can be useful for type and provider developers. (Issue 2538) Bug Fixes Fixed Race Condition in Certificate Serial Numbers As part of improving certificate autosigning for elastic cloud environments, we found a series of bugs involving the certificate inventory — when too many certificates were being signed at once (impossible in manual signing, but easy when testing autosigning at large scales), the CA might assign a serial number to a node, start rebuilding the inventory, then assign the same number to another node (if it came in before the rebuild was finished). This is now fixed, and the cert inventory is handled more safely. To accommodate the need to occasionally rebuild the inventory, a puppet cert reinventory command was added (see above). (Issue 693, Issue 23074, PUP-277, PUP-635, PUP-636, PUP-637) Cached Catalogs Work Again This was a regression from Puppet 3.0.0, as an unintended consequence of making the ENC authoritative for node environments. In many cases (generally when agents couldn’t reach the puppet master), it broke the puppet agent’s ability to use cached catalogs when it failed to retrieve one. The issue is now fixed, and agents will obey the usecacheonfailure setting. Hiera Bugs - Errors from automatic class parameter lookups were not clearly indicating that Hiera was the source of the problem. This was made more informative. (Issue 19955, PUP-176) - Automatic class parameter lookups weren’t setting the special calling_module/ calling_classvariables. This has been fixed. (Issue 21198, PUP-83) Misc Bug Fixes The usual grab bag of clean-ups and fixes. As of 3.4.0, Puppet will: - Manage the vardir’s owner and group. Before, not managing the vardir’s owner and group could sometimes cause the puppet master or CA tools to fail, if the ownership of the vardir got messed up. (PUP-319) - Don’t overaggressively use resource-like class evaluation for ENCs that assign classes with the hash syntax. 
ENCs can use two methods for assigning classes to nodes, one of which allows class parameters to be specified. If class parameters ARE specified, the class has to be evaluated like a resource to prevent parameter conflicts. This fixed the problem that Puppet was being a little overeager and wasn’t checking whether parameters were actually present. (Issue 23096, PUP-268) - Make Puppet init scripts report run status correctly even if they aren’t configured to start. Previously, if the puppet master init script was configured to never run and a Puppet manifest was also ensuring the service was stopped, this could cause Puppet to try to stop the service every single run. (Issue 23033, PUP-642) - Skip module metadata that cannot be parsed. Previously, module metadata that couldn’t be parsed was not skipped and could cause the puppet master to fail catalog serving if a module with bad metadata was installed. (Issue 22818, Issue 20728, Issue 15856, PUP-614) - Use FFI native windows root certs code. This fix cleaned up some potential puppet agent crashes on Windows by using the Win32 APIs better. (Issue 23183, PUP-766) - Guard against duplicate Windows root certs. Previously, duplicates could cause unnecessary run failures. (Issue 21817, PUP-734) - Make Debian user/group resources report their proper containment path. Previously, Puppet events from Debian showed in Puppet Enterprise’s event inspector as “unclassified.” (Issue 22943, PUP-565) - Fix race condition in filebucket. Before the fix, there were unnecessary run failures when multiple nodes were trying to write to a puppet master’s filebucket. (Issue 22918, PUP-538) - Force encoding of usercomment values to ASCII-8BIT. Previously, there were run failures under Ruby 1.9 and higher when userresources were present. (Issue 22703, PUP-451) - Don’t serialize transient vars in Puppet::Resource. Previously, Puppet would write YAML data that couldn’t be deserialized by other tools. (Issue 4506, PUP-447) - Validate the nameattribute for package resources to disallow arrays. Previously, there was inconsistent behavior between dpkg and the other package providers. (Issue 22557, PUP-403) - Use the most preferred supported serialization format over HTTP. Puppet had been choosing a format at random whenever there were multiple acceptable formats. (Issue 22891, PUP-570) - Set value_collectionfor boolean params. Before the fix, boolean resource attributes were displayed badly in the type reference. (Issue 22699, PUP-446) All 3.4.0 Changes For a list of all changes in the 3.4.0 release, see: Puppet 3.3.2 Released November 12, 2013 3.3.2 is a bug fix release in the Puppet 3.3 series. The most important fix was a bug causing problems with NetApp devices. Bug Fixes Issue 22804: NetApp network device failing with Puppet >3.2.1 - “Could not intern…” This caused failures when using the puppet device subcommand with NetApp network devices. It could also cause failures with any custom functions, types, or providers that used the Ruby REXML library to do heavy lifting, if they happened to call include REXML outside any module or class. Issue 22810: RPM provider query method returns leading quote in package names This was causing strange interactions with MCollective when querying packages on RPM-based systems. 
Issue 22847: Windows Puppet::Util::ADSI::User and Puppet::Util::ADSI::Group issues WMI queries that hang Puppet in a domain environment When getting a list of existing user accounts on a Windows machine, Puppet was putting out a query that could seem to take forever in a large ActiveDirectory environment. It was fixed by limiting the query to the local objects that Puppet actually cares about. Issue 22878: Running processes on windows (through mcollective) cause private CloseHandle to be called instead of public method This didn’t affect most users, due to the way Puppet is packaged on Windows, but it could cause major failures of many resource types for people running Puppet from source. Puppet 3.3.1 Released October 7, 2013. 3.3.1 is a bug fix release in the Puppet 3.3 series. The focus of the release is fixing backwards compatibility regressions that slipped in via the YAML deprecations in 3.3.0. Upgrade Note The release of Puppet 3.3.1 supersedes the upgrade warning for Puppet 3.3.0. As of this release, agent nodes are compatible with all Puppet 3.x masters with no extra configuration. Fixes for Backwards Compatibility Regressions in 3.3.0 - Issue 22535: puppet 3.3.0 ignores File ignore in recursive copy - Issue 22608: filebucket (backup) does not work with 3.3.0 master and older clients - Issue 22530: Reports no longer work for clients older than 3.3.0 when using a 3.3.0 puppet master - Issue 22652: ignore doesn’t work if pluginsync enabled New backward compatibility issues were discovered after the release of 3.3.0, so we changed our handling of deprecated wire formats. Starting with 3.3.1, you do not need to set additional settings in puppet.conf on your agent nodes in order to use newer agents with puppet masters running 3.2.4 or earlier. Agents will work with all 3.x masters, and they will automatically negotiate wire formats as needed. This behavior supersedes the behavior described for 3.3.0; the report_serialization_format setting is now unnecessary. Additionally, this release fixes: - Two cases where 3.3.0 masters would do the wrong thing with older agents. (Reports would fail unless the master had report_serialization_formatset to yaml, which was not intended, and remote filebucket backups would always fail.) - A regression where files that should have been ignored during pluginsync were being copied to agents. Miscellaneous Regression Fixes Issue 22772: Managing an empty file causes a filebucket error This was a regression in 3.3.0, caused by deprecating YAML for content we send to remote filebuckets. Issue 22384: Excessive logging for files not found This was a regression in 3.3.0. When using multiple values in an array for the file type’s source attribute, Puppet will check them in order and use the first one it finds; whenever it doesn’t find one, it will log a note at the “info” log level, which is silent when logging isn’t verbose. In 3.3.0, the level was accidentally changed to the “notice” level, which was too noisy. Issue 22529: apt package ensure absent/purged causes warnings on 3.3.0 This was a regression in 3.3.0. The apt package provider was logging bogus warnings when processing resources with ensure values of absent or purged. Issue 22493: Can’t start puppet agent on non english Windows This problem was probably introduced in Puppet 3.2, when our Windows installer switched to Ruby 1.9; a fix was attempted in 3.2.4, but it wasn’t fully successful. The behavior was caused by a bug in one of the Ruby libraries Puppet relies on. 
We submitted a fix upstream, and packaged a fixed version of the gem into the Windows installer. Fixes for Long-Standing Bugs Issue 19994: ParsedFile providers do not clear failed flush operations from their queues This bug dates to Puppet 2.6 or earlier. The bug behavior was weird. Basically: - Your manifests include multiple ssh_authorized_key resources for multiple user accounts. - One of the users has messed-up permissions for their authorized keys file, and their resource fails because Puppet tries to write to the file as that user. - All remaining key resources also fail, because Puppet tries to write the rest of them to that same user's file instead of the file they were supposed to go in. Issue 21975: Puppet Monkey patch 'def instance_variables' clashing with SOAP Class… This bug dates to 3.0.0. It was causing problems when using plugins that use SOAP libraries, such as the types and providers in the puppetlabs/f5 module. Issue 22474: --no-zlib flag doesn't prevent zlib from being required in Puppet This bug dates to 3.0.0, and caused Puppet to fail when running on a copy of Ruby without zlib compiled in. Issue 22471: Malformed state.yaml causes puppet to fail runs with Psych yaml parser This bug dates to 3.0.0, and could cause occasional agent run failures under Ruby 1.9 or 2.0. Puppet 3.3.0 Released September 12, 2013. 3.3.0 is a backward-compatible feature and fix release in the Puppet 3 series. Upgrade Warning (Superseded by Puppet 3.3.1) Note: The following is superseded by compatibility improvements in Puppet 3.3.1, which requires no configuration to work with older masters. If possible, you should upgrade directly to 3.3.1 instead of 3.3.0. Although 3.3.0 is backward-compatible, its default configuration will cause reporting failures when ≥ 3.3.0 agent nodes connect to a sub-3.3.0 master. - This only affects newer agents + older masters; it is not a problem if you upgrade the puppet master first. - To use ≥ 3.3.0 agents with an older puppet master, set report_serialization_format to yaml in their puppet.conf files; this restores full compatibility. See the note below on YAML deprecation for details. Configurable Resource Ordering (Issue 22205: Order of resource application should be selectable by a setting.) Puppet can now optionally apply unrelated resources in the order they were written in their manifest files. A new ordering setting configures how unrelated resources should be ordered when applying a catalog. This setting affects puppet agent and puppet apply, but not puppet master. The allowed values for this setting are title-hash, manifest, and random: title-hash (the default) will order resources randomly, but will use the same order across runs and across nodes. manifest will use the order in which the resources were declared in their manifest files. random will use a different random order on each run, which can help shake out missing resource dependencies. Data in Modules (Issue 16856: puppet should support data in modules) This feature makes it possible to contribute data bindings from modules to a site-wide hierarchy of data bindings. This feature is introduced as an opt-in, and it is turned on by setting binder to true in puppet.conf. It is turned on by default when using the future parser. The implementation is based on ARM-9 Data in Modules, which contains the background, a description, and a set of examples. Security: YAML Over the Network is Now Deprecated (Issue 21427: Deprecate YAML for network data transmission) YAML has been the cause of many security problems, so we are refactoring Puppet to stop sending YAML over the network.
Puppet will still write YAML to disk (since that doesn’t add security risks), but all data objects sent over the network will be serialized as JSON. (Or, for the time being, as “PSON,” which is JSON that may sometimes contain non-UTF8 data.) As of this release: - All places where the puppet master accepts YAML are deprecated. If the master receives YAML, it will still accept it but will log a deprecation warning. - The puppet master can now accept reports in JSON format. (Prior to 3.3.0, puppet masters could only accept reports in YAML.) - The puppet agent no longer defaults to requesting YAML from the puppet master (for catalogs, node objects, etc.). - The puppet agent no longer defaults to sending YAML to the puppet master (for reports, query parameters like facts, etc.). Deprecation plan: Currently, we plan to remove YAML over the network in Puppet 4.0. This means in cases where Puppet 3.3 would issue a deprecation warning, Puppet 4 will completely refuse the request. New Setting for Compatibility With Sub-3.3.0 Masters Note: The following is superseded by compatibility improvements in Puppet 3.3.1, which requires no configuration to work with older masters. If possible, you should upgrade directly to 3.3.1 instead of 3.3.0. Puppet 3.3 agents now default to sending reports as JSON, and masters running Puppet 3.2.4 and earlier cannot understand JSON reports. Using an out of the box 3.3 agent with a 3.2 puppet master will therefore fail. - To avoid errors, upgrade the puppet master first. - If you must use ≥ 3.3.0 agents with older puppet masters, set the new report_serialization_formatto yamlin the agents’ puppet.conf; this restores full compatibility. Regex Capture Variables from Node Definitions ($1, etc.) (Issue 2628: It would be useful if node name regexps set $1) Node definitions now set the standard regex capture variables, similar to the behavior of conditional statements that use regexes. Redirect Response Handling (Issue 18255: accept 301 response from fileserver) Puppet’s HTTP client now follows HTTP redirects when given status codes 301 (permanent), 302 (temporary), or 307 (temporary). The new functionality includes a redirection limit, and recreates the redirected connection with the same certificates and store as the original (as long as the new location is ssl protected). Redirects are performed for GET, HEAD, and POST requests. This is mostly useful for configuring the puppet master’s front end webserver to send fileserver traffic to the closest server. Filebucket Improvements (Issue 22375: File bucket and Puppet File resource: fails with “regexp buffer overflow” when backing up binary file) There were a number of problems with the remote filebucket functionality for backing up files under Puppet’s management over the network. It is now possible to back up binary files, which previously would consume lots of memory and error out. Non-binary filebucket operations should also be faster as we eliminated an unnecessary network round-trip that echoed the entire contents of the file back to the agent after it was uploaded to the server. Internal Format and API Improvements Report Format 4 Puppet’s report format version has been bumped to 4. This is backward-compatible with report format 3, and adds transaction_uuid to reports and containment_path to resource statuses. 
Unique Per-run Identifier in Reports and Catalog Requests (Issue 21831: Generate a UUID for catalog retrieval and report posts) Puppet agent now embeds a per-run UUID in its catalog requests, and embeds the same UUID in its reports after applying the catalog. This makes it possible to correlate events from reports with the catalog that provoked those events. There is currently no interface for doing this correlation, but a future version of PuppetDB will provide this functionality via catalog and report queries. Readable Attributes on Puppet::ModuleTool::Dependency Objects (Issue 21749: Make attributes readable on Puppet::ModuleTool::Dependency objects) This API change enables access to module dependency information via Ruby code. User Interface Improvements Improved CSS for Puppet Doc Rdoc Output (Issue 6561: Better looking CSS for puppet doc rdoc mode) The standard skin for rdoc generated from Puppet manifests has been updated to improve readability. Note that puppet doc rdoc functionality remains broken on Ruby 1.9 and up. Improved Display of Arrays in Console Output (Issue 20284: Output one item per line for arrays in console output) This changes the output to console from faces applications to output array items as one item per line. Configurable Module Skeleton Directory (Issue 21170: enhancement of the module generate functionality) Previously, you could provide your own template for the puppet module generate action by creating a directory called skeleton in the directory specified by the module_working_dir setting. (The layout of the directory should match that of lib/puppet/module_tool/skeleton.) This directory can now be configured independently with the module_skeleton_dir setting. Improvements to Resource Types Package Type: Multi-Package Removal With Urpmi Provider (Issue 16792: permit to remove more than 1 package using urpmi provider) It was tedious to remove some packages when using the urpmi provider since it only allowed to remove one package at the time, and that removal must be made in dependency order. Now, the urpmi provider behaves similar to the apt provider. Package Type: Package Descriptions in RAL (Issue 19875: Get package descriptions from RAL) Previously, rpm and dpkg provider implementations obtained package information from the system without capturing descriptions. They now capture the single line description summary for packages as a read-only parameter. Package Type: OpenBSD Improvements Jasper Lievisse Adriaanse contributed several improvements and fixes to the OpenBSD package provider. (Issue 21930: Enchance OpenBSD pkg.conf handling) It is now possible to use += when defining the installpath for OpenBSD. Previously, an attempt to use this was ignored; now, it’s possible to have a pkg.conf like: installpath = foo installpath += bar Which will be turned into a PKG_PATH: foo:bar. (Issue 22021: Implement (un)install options feature for OpenBSD package provider) It is now possible to specify install_options and uninstall_options for the OpenBSD package provider. These were previously not available. (Issue 22023: Implement purgeable feature for OpenBSD package provider) It is now possible to use the purged value for ensure with the OpenBSD package provider. Yumrepo Type: AWS S3 Repos (Issue 21452: Add s3_enabled option to the yumrepo type) It is now possible to use a yum repo stored in AWS S3 (via the yum-s3-iam plugin) by setting the resource’s s3_enabled attribute to 1. 
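To illustrate, here is a minimal sketch of such a resource; the repository name, bucket URL, and the surrounding attribute values are hypothetical placeholders rather than anything taken from the release notes, and the yum-s3-iam plugin must already be present on the node:

# Hypothetical yum repository served from an S3 bucket
yumrepo { 'internal-mirror':
  descr      => 'Internal CentOS mirror hosted in S3',
  baseurl    => 'https://example-bucket.s3.amazonaws.com/centos/6/x86_64',
  enabled    => '1',
  gpgcheck   => '0',
  s3_enabled => '1',
}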
Special thanks to 3.3.0 Contributors Adrien Thebo, Alex Dreyer, Alexander Fortin, Alexey Lapitsky, Aman Gupta, Andrew Parker, Andy Brody, Anton Lofgren, Brice Figureau, Charlie Sharpsteen, Chris Price, Clay Caviness, David Schmitt, Dean Wilson, Duncan Phillips, Dustin J. Mitchell, Eric Sorenson, Erik Dalén, Felix Frank, Garrett Honeycutt, Henrik Lindberg, Hunter Haugen, Jasper Lievisse Adriaanse, Jeff McCune, Jeff Weiss, Jesse Hathaway, John Julien, Josh Cooper, Josh Partlow, Juan Ignacio Donoso, Kosh, Kylo Ginsberg, Mathieu Parent, Matthaus Owens, Melissa Stone, Melissa, Michael Scherer, Michal Růžička, Moses Mendoza, Neil Hemingway, Nick Fagerlund, Nick Lewis, Patrick Carlisle, Pieter van de Bruggen, Richard Clamp, Richard Pijnenburg, Richard Soderberg, Richard Stevenson, Sergey Sudakovich, Stefan Schulte, Thomas Hallgren, W. Andrew Loe III, arnoudj, floatingatoll, ironpinguin, joshrivers, phinze, superseb All 3.3.0 Changes See here for a list of all changes in the 3.3.0 release. Puppet 3.2.4 Released August 15, 2013. 3.2.4 is a security fix release of the Puppet 3.2 series. Puppet 3.2.3 Released July 15, 2013. 3.2.3 is a bugfix release of the Puppet 3.2 series. It fixes some Windows bugs introduced in 3.2.0, as well as a few performance problems and miscellaneous bugs. Windows Fixes This release fixes several Windows bugs that couldn't be targeted for earlier 3.2 releases. - #20768: windows user provider can not manage password or home directory — This was a regression in 3.2.0/3.2.1. - #21043: runinterval setting in puppet.conf ignored on Windows in Puppet 3.2.1 — This was a regression in 3.2.0/3.2.1. - #16080: Service provider broken in Windows Server 2012 — This affected all previous Puppet versions. - #20787: 'puppet resource group' takes incredibly long on Windows — This affected all previous Puppet versions. - #20302: Windows File.executable? now returns false on ruby 1.9 - #21280: Don't create c:\dev\null in windows specs — This was only relevant to Puppet developers. Logging and Reporting Fixes - #20383: Bring back helpful error messages like prior to Puppet 3 — This was a regression from 3.0.0, which caused file names and line numbers to disappear from duplicate resource declaration errors. - #20900: tagmail triggers in --onetime mode without changes after upgrade from 3.1.1 to 3.2.1 — This was a regression in 3.2.0/3.2.1. - #20919: Logging behaviour issues in 3.2.1 — This was a regression in 3.2.0/3.2.1, which caused noisy logging to the console even if the --logdest option was set. Performance Fixes - #21376: Stack level too deep after updating from 3.1.1 to 3.2.2 — This would sometimes cause total failures when importing a large number of manifest files (such as with the import nodes/*.pp idiom). - #21320: Puppet daemon may sleep for 100 years after receiving USR1 on 64 bit systems — MCollective's Puppet plugin uses puppet agent's USR1 signal to trigger a run if the agent is running; on 64-bit systems, this could cause puppet agent to keep running, but stop doing scheduled configuration runs. This was caused by a bug in Ruby < 2.0, but we modified Puppet to work around it. - #20901: puppet --version is unnecessarily slow — This was a regression in 3.2.0/3.2.1. Misc Fixes All 3.2.3 Changes See here for a list of all changes in the 3.2.3 release. Puppet 3.2.2 3.2.2 is a security fix release of the Puppet 3.2 series. Puppet 3.2.1 3.2.1 is a bugfix release of the Puppet 3.2 series. It addresses two major issues that were uncovered in 3.2.0 and caused us to pull that release (#20726 and #20742).
It also includes a fix for Solaris support (#19760). Issues fixed: - Bug #19760: install sun packages failed with: Error: /Stage[main]/Inf_sol10defaultpkg/Package[SMCcurl]: Could not evaluate: Unable to get information about package SMCcurl because of: No message - Bug #20726: usermod command arguments out of order - Bug #20742: unauthenticated clients unable to communicate with puppet master (running in passenger) Known Regressions On Windows, Puppet 3.2.1 is unable to manage the home directory for a user account. (Bug #20768) This is a regression from Puppet 3.1.1; it was introduced by switching to Ruby 1.9 in the Windows .msi package. This bug will be fixed soon in a point release, but wasn't severe enough to delay shipping. All 3.2.1 Changes See here for a list of all changes in the 3.2.1 release. Puppet 3.2.0 3.2.0 is a backward-compatible features and fixes release in the Puppet 3 series. It was never officially released, as major bugs were discovered after the release was tagged but before it was published; 3.2.1 was the first official Puppet 3.2 release. The most notable changes are: - An optional, experimental "Future" parser - Ruby 2.0 support - OpenWRT OS support - External CA support - A new modulo (%) operator - New slow catalog profiling capabilities - General improvements and fixes, including improved splay behavior, fixes to the cron type, improvements to the module tool, and some Hiera-related fixes Ruby Bug Warning: Ruby 1.9.3-p0 has bugs that cause a number of known issues with Puppet 3.2.0 and later. The official Puppet Labs packages default to pulling in Ruby 1.8.7, but will use 1.9.3-p0 if you previously chose the system 1.9.3 package. There's not a lot we can do about the resulting bugs; if you're using Precise and want to use Ruby 1.9.3, we recommend using Puppet Enterprise or installing a third-party Ruby package. Experimental "Future" Parser With Iteration In a first for Puppet, we're shipping two versions of the Puppet language in one release. - Language: Experimental Features (Puppet 3.2) - Demonstration: Revision of the puppet-network module using experimental features (GitHub home for the revised module) By default, Puppet 3.2 is backward compatible with Puppet 3.1, with only minimal new language features (the modulo operator). However, if you set parser = future in puppet.conf, you can try out new, proposed language features like iteration (as defined in arm-2). See the documents linked above for complete details. Note that features in the experimental parser are exempt from semantic versioning. They might change several times before being released in the "current" parser. Ruby 2.0 Support Special thanks to: Dominic Cleal. Previous releases almost worked on Ruby 2.0; this one officially works. OpenWRT OS Support Special thanks to: Kyle Anderson. OpenWRT is a distribution of Linux that runs on small consumer-grade routers, and you can now manage more of it with Puppet. This requires Facter 1.7.0-rc1 or later, as well as Puppet 3.2. Puppet Labs doesn't ship any packages for OpenWRT. New OpenWRT support includes: - Facter values: operatingsystem and osfamily will report as OpenWrt; operatingsystemrelease will resolve correctly, by checking the /etc/openwrt_version file - General Linux facts will generally resolve as expected. - Packages: - The new opkg provider can install packages and dependencies from the system repositories (set in /etc/opkg.conf), can ensure specific package versions, and can install packages from files.
- Services: - The new openwrtprovider can enable/disable services on startup, as well as ensuring started/stopped states. Since OpenWRT init scripts don’t have status commands, it uses the system process table to detect status; if a service’s process name doesn’t match the init script name, be sure to specify a statusor patternattribute in your resources. External CA Support Special thanks to: Dustin Mitchell. We now officially support using an external certificate authority with Puppet. See the documentation linked above for complete details. If you were stalled on 2.7.17 due to bug 15561, upgrading to 3.2 should fix your problems. (Issues 15561, 17864, 19271, and 20027) Modulo Operator Special thanks to: Erik Dalén. The new % modulo operator will return the remainder of dividing two values. Better Profiling and Debugging of Slow Catalog Compilations Special thanks to: Andy Parker and Chris Price. If you set the profile setting to true in an agent node’s puppet.conf (or specify --profile on the command line), the puppet master will log additional debug-level messages about how much time each step of its catalog compilation takes.. General Improvements and Fixes Splay Fixes for Puppet Agent The splay setting promised relief from thundering-herd problems, but it was broken; the agents would splay on their first run, then they’d all sync up on their second run. That’s fixed now. Cron Fixes Special thanks to: Felix Frank, Stefan Schulte, and Charlie Sharpsteen. The cron resource type is now much better behaved, and some truly ancient bugs are fixed. (Issues 593, 656, 1453, 2251, 3047, 5752, 16121, 16809, 19716, and 19876) Module Tool Improvements The puppet module command no longer misbehaves on systems without GNU tar installed, and it works on Windows now. (Issues 11276, 13542, 14728, 18229, 19128, 19409, and 15841) Hiera-Related Fixes The calling_module and calling_class pseudo-variables were broken, and automatic parameter lookup would die when it found false values. These bugs are both fixed. puppet:/// URIs Pointing to Symlinks Work Now Special thanks to: Chris Boot. In older versions, a source => puppet:///..... URI pointing to a symlink on the puppet master would fail annoyingly. Now Puppet follows the symlink and serves the linked content. Puppet Apply Writes Data Files Now Special thanks to: R.I. Pienaar. Puppet apply now writes the classes file and resources file. If you run a masterless Puppet site, you can now integrate with systems like MCollective that use these files. All 3.2.0 Changes See here for a list of all non-trivial changes for the 3.2.0 release. Puppet 3.1.1 Puppet 3.1.1 is a security release addressing several vulnerabilities discovered in the 3.x line of Puppet. These vulnerabilities have been assigned Mitre CVE numbers CVE-2013-1640, CVE-2013-1652, CVE-2013-1653, CVE-2013-1654, CVE-2013-1655 and CVE-2013-2275. All users of Puppet 3.1.0 and earlier are strongly encouraged to upgrade to 3.1.1. 
Puppet 3.1.1 Downloads - Source: - Windows package: - RPMs: or /fedora - Debs: - Mac package: - Gems are available via rubygems at or by using gem install puppet --version=3.1.1 See the Verifying Puppet Download section at: Please report feedback via the Puppet Labs Redmine site, using an affected puppet version of 3.1.1: Puppet 3.1.1 Changelog - Andrew Parker (3): - (#14093) Cleanup tests for template functionality - ( (7): - - Justin Stoller (6): - Acceptance tests for CVEs 2013 (1640, 1652, 1653, 1654,2274, 2275) - Separate tests for same CVEs into separate files - We can ( and should ) use grep instead of grep -E - add quotes around paths for windows interop - remove tests that do not run on 3.1+ - run curl against the master on the master - Moses Mendoza (1): - Update PUPPETVERSION for 3.1.1 - Nick Lewis (3): - (#19393) Safely load YAML from the network - Always read request body when using Rack - Fix order-dependent test failure in network/authorization 3.1.0 Puppet 3.1.0 is a features and fixes release in the 3.x series, focused on adding documentation and cleaning up extension loading. New: YARD API Documentation To go along with the improved usability of Puppet as a library, we’ve added YARD documentation throughout the codebase. YARD generates browsable code documentation based on in-line comments. This is a first pass through the codebase but about half of it’s covered now. To use the YARD docs, simply run gem install yard then yard server --nocache from inside a puppet source code checkout (the directory containing lib/puppet). YARD documentation is also available in the generated references section under Developer Documentation. Fix: YAML Node Cache Restored on Master In 3.0.0, we inadvertently removed functionality that people relied upon to get a list of all the nodes checking into a particular puppet master. This is now enabled for good, added to the test harness, and available for use as: # shell snippet export CLIENTYAML=`puppet master --configprint yamldir` puppet node search "*" --node_terminus yaml --clientyamldir $CLIENTYAML Improvements When Loading Ruby Code A major area of focus for this release was loading extension code. As people wrote and distributed Faces (new puppet subcommands that extend Puppet’s capabilities), bugs like #7316 started biting them. Additionally, seemingly simple things like retrieving configuration file settings quickly got complicated, causing problems both for Puppet Labs’ code like Cloud Provisioner as well as third-party integrations like Foreman. The upshot is that it’s now possible to fully initialize puppet when using it as a library, loading Ruby code from Forge modules works correctly, and tools like puppetlabs_spec_helper now work correctly. All Bugs Fixed in 3.1.0 Use the Puppet issue tracker to find every bug fixed in a given version of Puppet. - All bugs fixed in 3.1.0 (approx. 53) Puppet 3.0.2 3.0.2 Target version and resolved issues: Puppet 3.0.1 3.0.1 Target version and resolved issues: Puppet 3.0.0 Puppet 3.0.0 is the first release of the Puppet 3 series, which includes breaking changes, new features, and bug fixes. Upgrade Warning: Many Breaking Changes Puppet 3.0.0 is a release on a major version boundary, which means it contains breaking changes that make it incompatible with Puppet 2.7.x. These changes are listed below, and their headers begin with a “BREAK” label. You should read through them and determine which will apply to your installation. 
Improved Version Numbering Puppet 3 marks the beginning of a new version scheme for Puppet releases. Beginning with 3.0.0, Puppet uses a strict three-field version number: - The leftmost segment of the version number must increase for major backwards-incompatible changes. - The middle segment may increase for backwards-compatible new functionality. - The rightmost segment may increase for bug fixes. BREAK: Changes to Dependencies and Supported Systems - Puppet 3 adds support for Ruby 1.9.3, and drops support for Ruby 1.8.5. (Puppet Labs is publishing Ruby 1.8.7 packages in its repositories to help users who are still on RHEL and CentOS 5.) - Note that puppet docis only supported on Ruby 1.8.7, due to 1.9’s changes to the underlying RDoc library. See ticket # 11786 for more information. - [Hiera][] is now a dependency of Puppet. - Puppet now requires Facter 1.6.2 or later. - Support for Mac OS X 10.4 has been dropped. BREAK: Dynamic Scope for Variables is Removed Dynamic scoping of variables, which was deprecated in Puppet 2.7, has been removed. See Language: Scope for more details. The most recent 2.7 release logs warnings about any variables in your code that are still being looked up dynamically. Upgrade note: Before upgrading from Puppet 2.x, you should do the following: - Restart your puppet master — this is necessary because deprecation warnings are only produced once per run, and warnings that were already logged may not appear again in your logs until a restart. - Allow all of your nodes to check in and retrieve a catalog. - Examine your puppet master’s logs for dynamic scope warnings. - Edit any manifests referenced in the warnings to remove the dynamic lookup behavior. Use fully qualified variable names where necessary, and move makeshift data hierarchies out of your manifests and into [Hiera][]. BREAK: Parameters In Definitions Must Be Variables Parameter lists in class and defined type definitions must include a dollar sign ( $) prefix for each parameter. In other words, parameters must be styled like variables. Non-variable-like parameter lists have been deprecated since at least Puppet 0.23.0. The syntax for class and defined resource declarations is unchanged. Right: define vhost ($port = 80, $vhostdir) { ... } Wrong: define vhost (port = 80, vhostdir) { ... } Unchanged: vhost {'web01.example.com': port => 8080, vhostdir => '/etc/apache2/conf.d', } BREAK: puppet:/// URLs Pointing to Module Files Must Contain modules/ Since 0.25, Puppet URLs pointing to source files in the files directory of a module have had to start with puppet:///modules/; however, the old way has continued to work (while logging deprecation warnings), ostensibly for compatibility with 0.24 clients. Support for 0.24-style URLs has now been removed, and the modules/ portion is mandatory. BREAK: Deprecated Commands Are Removed The legacy standalone executables, which were replaced by subcommands in Puppet 2.6, have been removed. Additionally, running puppet without a subcommand no longer defaults to puppet apply. Upgrade note: Examine your Puppet init scripts, the configuration of the puppet master’s web server, and any wrapper scripts you may be using, and ensure that they are using the new subcommands instead of the legacy standalone commands. BREAK: Puppet Apply’s --apply Option Is Removed The --apply option has been removed. It was replaced by --catalog. 
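For instance, a command line that previously used the removed flag would now be written with --catalog instead; the catalog filename below is just a placeholder for a catalog you have already compiled and saved:

# Old, no longer accepted:
# puppet apply --apply my_compiled_catalog.json
# Puppet 3.x equivalent:
puppet apply --catalog my_compiled_catalog.json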
BREAK (Partially Reverted in 3.0.2): Console Output Formatting Changes The format of messages displayed to the console has changed slightly, potentially leading to scripts that watch these messages breaking. Additionally, we now use STDERR appropriately on *nix platforms. Upgrade Note: If you scrape Puppet’s console output, revise the relevant scripts. Note that some of these changes were reverted in 3.0.2. This does not change the formatting of messages logged through other channels (eg: syslog, files), which remain as they were before. See bug #13559 for details BREAK: Removed and Modified Settings The following settings have been removed: factsync(Deprecated since Puppet 0.25 and replaced with pluginsync; see ticket #2277) ca_days(Replaced with ca_ttl) servertype(No longer needed, due to removal of built-in Mongrel support) downcasefact(Long-since deprecated) reportserver(Long-since deprecated; replaced with report_server) The following settings now behave differently: pluginsyncis now enabled by default cacrlcan no longer be set to false. Instead, Puppet will now ignore the CRL if the file in this setting is not present on disk. BREAK: Puppet Master Rack Configuration Is Changed Puppet master’s config.ru file has changed slightly; see ext/rack/files/config.ru in the Puppet source code for an updated example. The new configuration: - Should now require 'puppet/util/command_line'instead of 'puppet/application/master'. - Should now run Puppet::Util::CommandLine.new.executeinstead of Puppet::Application[:master].run. - Should explicitly set the --confdiroption (to avoid reading from ~/.puppet/puppet.conf). diff --git a/ext/rack/files/config.ru b/ext/rack/files/config.ru index f9c492d..c825d22 100644 --- a/ext/rack/files/config.ru +++ b/ext/rack/files/config.ru @@ -10,7 +10,25 @@ $0 = "master" # ARGV << "--debug" ARGV << "--rack" +ARGV << "--confdir" << "/etc/puppet" +ARGV << "--vardir" << "/var/lib/puppet" + -require 'puppet/application/master' +require 'puppet/util/command_line' -run Puppet::Application[:master].run +run Puppet::Util::CommandLine.new.execute + Upgrade note: If you run puppet master via a Rack server like Passenger, you must change the config.rufile as described above. BREAK: Special-Case Mongrel Support Is Removed; Use Rack Instead Previously, the puppet master had special-case support for running under Mongrel. Since Puppet’s standard Rack support can also be used with Mongrel, this redundant code has been removed. Upgrade note: If you are using Mongrel to run your puppet master, re-configure it to run Puppet as a standard Rack application. BREAK: File Type Changes - The recurseparameter can no longer set recursion depth, and must be set to true, false, or remote. Use the recurselimitparameter to set recursion depth. (Setting depth with the recurseparameter has been deprecated since at least Puppet 2.6.8.) BREAK: Mount Type Changes - The pathparameter has been removed. It was deprecated and replaced by namesometime before Puppet 0.25.0. BREAK: Package Type Changes - The typeparameter has been removed. It was deprecated and replaced by providersome time before Puppet 0.25.0. - The msiprovider has been deprecated in favor of the more versatile windowsprovider. - The install_optionsparameter for Windows packages now accepts an array of mixed strings and hashes; however, it remains backwards-compatible with the 2.7 single hash format. - A new uninstall_optionsparameter was added for Windows packages. It uses the same semantics as install_options. 
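As a rough sketch of the mixed array format (the package name, MSI path, and property name below are invented for illustration, not taken from the release notes): bare strings are passed through to the installer as-is, while hash entries are expanded into name=value arguments.

package { 'mysql':
  ensure          => installed,
  source          => 'C:/temp/mysql-5.5.28-winx64.msi',
  install_options => ['/S', { 'INSTALLDIR' => 'C:\mysql' }],
}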
BREAK: Exec Type Changes - The logoutputparameter now defaults to on_failure. - Due to misleading values, the HOMEand USERenvironment variables are now unset when running commands. BREAK: Deprecated check Metaparameter Is Removed - The checkmetaparameter has been removed. It was deprecated and replaced by auditin Puppet 2.6.0. BREAK: Puppet Agent Now Requires node Access in Master’s auth.conf Puppet agent nodes now requires access to their own node object on the puppet master; this is used for making ENC-set environments authoritative over agent-set environments. Your puppet master’s auth.conf file must contain the following stanza, or else agent nodes will not be able to retrieve catalogs: # allow nodes to retrieve their own node object path ~ ^/node/([^/]+)$ method find allow $1 Auth.conf has allowed this by default since 2.7.0, but puppet masters which have been upgraded from previous versions may still be disallowing it. Upgrade note: Check your auth.conf file and make sure it includes the above stanza before the final stanza. Add it if necessary. BREAK: auth no in auth.conf Is Now the Same as `auth any’ Previously, auth no in auth.conf would reject connections with valid certificates. This was confusing, and the behavior has been removed; auth no now allows any kind of connection, same as auth any. BREAK: auth.conf’s allow Directive Rejects IP Addresses; Use allow_ip Instead To allow hosts based on IP address, use the new allow_ip directive. It functions exactly like IP addresses in allow used to, except that it does not support backreferences. The allow directive now assumes that the string is not an IP address. Upgrade Note: If your auth.confallowed any specific nodes by IP address, you must replace those allowdirectives with allow_ip. BREAK: fileserver.conf Cannot Control Access By IP; Use auth.conf Instead The above fix to ambiguous ACLs in auth.conf caused authorization by IP address in fileserver.conf to break. We are opting not to fix it, in favor of centralizing our authorization interfaces. All authorization rules in fileserver.conf can be reproduced in auth.conf instead, as access must pass through auth.conf before reaching fileserver.conf. If you need to control access to custom fileserver mount points by IP address, set the rule in fileserver.conf to allow *, and create rules in auth.conf like the following: path ~ ^/file_(metadata|content)s?/my_custom_mount_point/ auth yes allow /^(.+\.)?example.com$/ allow_ip 192.168.100.0/24 Rules like these must go above the rule for /file/. Note that you must control both the file_metadata(s) and file_content(s) paths; the regular expression above should do the trick. BREAK: “Resource Type” API Has Changed The API for querying resource types has changed to more closely match standard Puppet terminology. This is most likely to be visible to any external tools that were using the HTTP API to query for information about resource types. - You can now add a kindoption to your request, which will allow you to filter results by one of the following kinds of resource types: class, node, defined_type. - The API would previously return a field called typefor each result; this has been changed to kind. - The API would previously return the value hostclassfor the typefield for classes; this has been changed to class. - The API would previously return the value definitionfor the typefield for classes; this has been changed to defined_type. 
- The API would previously return a field called argumentsfor any result that contained a parameter list; this has been changed to parameters. An example of the new output: [ { "line": 1, "file": "/home/cprice/work/puppet/test/master/conf/modules/resource_type_foo/manifests/init.pp", "name": "resource_type_foo", "kind": "class" }, { "line": 1, "file": "/home/cprice/work/puppet/test/master/conf/modules/resource_type_foo/manifests/my_parameterized_class.pp", "parameters": { "param1": null, "param2": "\"default2\"" }, "name": "resource_type_foo::my_parameterized_class", "kind": "class" }, { "line": 1, "file": "/home/cprice/work/puppet/test/master/conf/modules/resource_type_foo/manifests/my_defined_type.pp", "parameters": { "param1": null, "param2": "\"default2\"" }, "name": "resource_type_foo::my_defined_type", "kind": "defined_type" }, { "line": 1, "file": "/home/cprice/work/puppet/test/master/conf/modules/resource_type_foo/manifests/my_node.pp", "name": "my_node", "kind": "node" } ] BREAK: Deprecated XML-RPC Support Is Entirely Removed XML-RPC support has been removed entirely, in favor of the HTTP API introduced in 2.6. XML-RPC support has been deprecated since 2.6.0. BREAK: Changes to Ruby API, Including Type and Provider Interface The following hard changes have been made to Puppet’s internal Ruby API: - Utility code: The Puppet::Util.symbolizemethod has been removed. Some older types and providers (notably the MySql module) used this function; if you get errors like undefined method 'symbolize' for #<Puppet::Type::..., you may need to upgrade your modules to newer versions. See ticket 16791 for more information. - Helper code: String#linesand IO#linesrevert to standard Ruby semantics. Puppet used to emulate these methods to accomodate ancient Ruby versions, and its emulation was slightly inaccurate. We’ve stopped emulating them, so they now include the separator character ( $/, default value \n) in the output and include content where they previously wouldn’t. - Functions: Puppet functions called from Ruby code (templates, other functions, etc.) must be called with an array of arguments. Puppet has always expected this, but was not enforcing it. See ticket #15756 for more information. - Faces: The set_default_formatmethod has been removed. It had been deprecated and replaced by render_as. - Resource types: The following methods for type objects have been removed: states, newstate, [ ], [ ]=, alias, clear, create, delete, each, and has_key?. - Providers: The mkmodelmethodsmethod for provider objects has been removed. It was replaced with mk_resource_methods. - Providers: The LANG, LC_*, and HOMEenvironment variables are now unset when providers and other code execute external commands. The following Ruby methods are now deprecated: - Applications: The Puppet::Applicationclass’s #should_parse_config, #should_not_parse_config, and #should_parse_config?methods are now deprecated, and will be removed in a future release. They are no longer necessary for individual applications and faces, since Puppet now automatically determines when the config file should be re-parsed. BREAK: Changes to Agent Lockfile Behavior Puppet agent now uses two lockfiles instead of one: - The run-in-progress lockfile (configured with the agent_catalog_run_lockfilesetting) is present if an agent catalog run is in progress. It contains the PID of the currently running process. - The disabled lockfile (configured with the agent_disabled_lockfilesetting) is present if the agent was disabled by an administrator. 
The file is a JSON hash which may contain a disabled_messagekey, whose value should be a string with an explanatory message from the administrator. DEPRECATION: Ruby DSL is Deprecated The Ruby DSL that was added in Puppet 2.6 (and then largely ignored) is deprecated. Deprecation warnings have been added to Puppet 3.1. Automatic Data Bindings for Class Parameters When you declare or assign classes, Puppet now automatically looks up parameter values in Hiera. See Classes for more details. Hiera Functions Are Available in Core The hiera, hiera_array, hiera_hash, and hiera_include functions are now included in Puppet core. If you previously installed these functions with the hiera-puppet package, you may need to uninstall it before upgrading. Major Speed Increase Puppet 3 is faster than Puppet 2.6 and significantly faster than Puppet 2.7. The exact change will depend on your site’s configuration and Puppet code, but many 2.7 users have seen up to a 50% improvement. Solaris Improvements - Puppet now supports the ipkg format, and is able to “hold” packages (install without activating) on Solaris. - Zones support is fixed. - Zpool support is significantly improved. Rubygem Extension Support Puppet can now load extensions (including subcommands) and plugins (custom types/providers/functions) from gems. See ticket #7788 for more information. Puppet Agent Is More Efficient in Daemon Mode Puppet agent now forks a child process to run each catalog. This allows it to return memory to system more efficiently when running in daemon mode, and should reduce resource consumption for users who don’t run puppet agent from cron. puppet parser validate Will Read From STDIN Piped content to puppet parser validate will now be read and validated, rather than ignoring it and requiring a file on disk. The HTTP Report Processor Now Supports HTTPS Use an https:// URL in the report_server setting to submit reports to an HTTPS server. The include Function Now Accepts Arrays Formerly, it would accept a comma separated list but would fail on arrays. This has been remedied. unless Statement Puppet now has an unless statement. Puppet Agent Can Use DNS SRV Records to Find Puppet Master Note: This feature is meant for certain unusual use cases; if you are wondering whether it will be useful to you, the answer is probably “No, use round-robin DNS or a load balancer instead.” Usually, agent nodes use the server setting from puppet.conf to locate their puppet master, with optional ca_server and report_server settings for centralizing some kinds of puppet master traffic. If you set use_srv_records to true, agent nodes will instead use DNS SRV records to attempt to locate the puppet master. These records must be configured as follows: The srv_domain setting can be used to set the domain the agent will query; it defaults to the value of the domain fact. If the agent doesn’t find an SRV record or can’t contact the servers named in the SRV record, it will fall back to the server/ ca_server/ report_server settings from puppet.conf. * (Note that the file server record is somewhat dangerous, as it overrides the server specified in any puppet:// URL, not just URLs that use the default server.) All Bugs Fixed in 3.0.0 Use the Puppet issue tracker to find every bug fixed in a given version of Puppet. - All bugs fixed in 3.0.0 (approx. 220)
https://docs.puppet.com/puppet/3/release_notes.html
2017-03-23T08:13:46
CC-MAIN-2017-13
1490218186841.66
[]
docs.puppet.com
FrameworkEvent - std::ostream & cppmicroservices::operator<<(std::ostream &os, FrameworkEvent::Type eventType) Writes a string representation of eventType to the stream os. - std::ostream & cppmicroservices::operator<<(std::ostream &os, const FrameworkEvent &evt) Writes a string representation of evt to the stream os. - bool cppmicroservices::operator==(const FrameworkEvent &rhs, const FrameworkEvent &lhs) Compares two framework events for equality. - class cppmicroservices::FrameworkEvent - #include <cppmicroservices/FrameworkEvent.h> An event from the Micro Services framework describing a Framework event. FrameworkEvent objects are delivered to listeners connected via BundleContext::AddFrameworkListener() when an event occurs within the Framework which a user would be interested in. A Type code is used to identify the event type for future extendability. Public Types - enum Type A type code used to identify the event type for future extendability. Values: FRAMEWORK_STARTED = 0x00000001 The Framework has started. This event is fired when the Framework has started after all installed bundles that are marked to be started have been started. The source of this event is the System Bundle. FRAMEWORK_ERROR = 0x00000002 An error has occurred. There was an error associated with a bundle. FRAMEWORK_WARNING = 0x00000010 A warning has occurred. There was a warning associated with a bundle. FRAMEWORK_INFO = 0x00000020 An informational event has occurred. There was an informational event associated with a bundle. FRAMEWORK_STOPPED = 0x00000040 The Framework has been stopped. This event is fired when the Framework has been stopped because of a stop operation on the system bundle. The source of this event is the System Bundle. FRAMEWORK_STOPPED_UPDATE = 0x00000080 The Framework is about to be stopped. This event is fired when the Framework has been stopped because of an update operation on the system bundle. The Framework will be restarted after this event is fired. The source of this event is the System Bundle. Public Functions operator bool() const Returns false if the FrameworkEvent is empty (i.e. invalid) and true if the FrameworkEvent is not null and contains valid data. - Return true if this event object is valid, false otherwise. FrameworkEvent(Type type, const Bundle &bundle, const std::string &message, const std::exception_ptr exception = nullptr) Creates a Framework event of the specified type. - Parameters type: The event type. bundle: The bundle associated with the event. This bundle is also the source of the event. message: The message associated with the event. exception: The exception associated with this event. Should be nullptr if there is no exception. - Bundle GetBundle() const Returns the bundle associated with the event. - Return - The bundle associated with the event. - std::string GetMessage() const Returns the message associated with the event. - Return - the message associated with the event. - std::exception_ptr GetThrowable() const Returns the exception associated with this event. - Remark - Use std::rethrow_exception to throw the exception returned. - Return - The exception. May be nullptr if there is no related exception. - Type GetType() const Returns the type of framework event. The type values are: - FRAMEWORK_STARTED - FRAMEWORK_ERROR - FRAMEWORK_WARNING - FRAMEWORK_INFO - FRAMEWORK_STOPPED - FRAMEWORK_STOPPED_UPDATE - FRAMEWORK_WAIT_TIMEDOUT
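A short usage sketch may help; the listener below is illustrative only (the handler name and the logging choices are not part of the API above), and registration assumes a BundleContext obtained elsewhere:

#include <cppmicroservices/BundleContext.h>
#include <cppmicroservices/FrameworkEvent.h>
#include <exception>
#include <iostream>

// Illustrative listener: print every framework event and, for error events,
// rethrow and log the attached exception, if any.
void OnFrameworkEvent(const cppmicroservices::FrameworkEvent& evt)
{
  if (!evt) return;                 // empty/invalid event, nothing to report
  std::cout << evt << std::endl;    // uses the operator<< overload above
  if (evt.GetType() == cppmicroservices::FrameworkEvent::FRAMEWORK_ERROR &&
      evt.GetThrowable()) {
    try { std::rethrow_exception(evt.GetThrowable()); }
    catch (const std::exception& e) { std::cerr << e.what() << std::endl; }
  }
}

// Registration, given a BundleContext ctx:
//   ctx.AddFrameworkListener(&OnFrameworkEvent);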
http://docs.cppmicroservices.org/en/latest/framework/doc/api/main/FrameworkEvent.html
2017-03-23T08:19:23
CC-MAIN-2017-13
1490218186841.66
[]
docs.cppmicroservices.org
All Documentation for Plugin Version keyword
MemberMouse WooCommerce Plus: What version of WooCommerce plugin I need to use MemberMouse WooCommerce Plus?
Wishlist Member Easy Digital Downloads Plus: What version of Easy Digital Downloads plugin do I need?
Wishlist Member WooCommerce Plus: What version of WooCommerce plugin do I need?
Wishlist 1-Click Registration: License class version:HP_EDD_LICENSE_VERSION error
http://docs.happyplugins.com/doc/keyword/plugin-version
2017-03-23T08:18:57
CC-MAIN-2017-13
1490218186841.66
[]
docs.happyplugins.com
Required ports: 22, 80, 443, 18080 (optional: used to view the management stack as it comes up); 500, 4500; 2181, 2376, 2888, 3888, 6379. Note: Currently, Docker for Windows and Docker for Mac are not supported. Prepare the nodes that will be used in the HA setup. These nodes should meet the same requirements as a single node setup of Rancher. (Optional) Pre-pull the rancher/server image onto the Rancher nodes. Currently, our HA setup supports 3 cluster sizes. Note: The nodes can be split between data centers connected with high-speed, low-latency links within a region, but this should not be attempted across larger geographic regions. If you choose to split the nodes, keep in mind that Zookeeper is used in our HA setup and requires a quorum to stay active, so if you split the nodes between data centers, you will only be able to survive an outage of the data center with the fewest nodes. On one of the nodes, launch a Rancher server that will be used to generate the HA startup scripts. This script-generating Rancher server will connect to the external MySQL database and populate the database schema. It will be used to bootstrap the HA deployment process. Eventually, the Rancher server container used in this step will be replaced with an HA-configured Rancher server. $ sudo docker run -d -p 8080:8080 \ -e CATTLE_DB_CATTLE_MYSQL_HOST=<hostname or IP of MySQL instance> \ -e CATTLE_DB_CATTLE_MYSQL_PORT=<port> \ -e CATTLE_DB_CATTLE_MYSQL_NAME=<Name of Database> \ -e CATTLE_DB_CATTLE_USERNAME=<Username> \ -e CATTLE_DB_CATTLE_PASSWORD=<Password> \ -v /var/run/docker.sock:/var/run/docker.sock \ rancher/server Note: Please be patient with this step; initialization may take up to 15 minutes to complete. # The version would be whatever was used in Step 4 $ sudo docker pull rancher/server Navigate to http://<server_IP>:8080. Under Admin -> High Availability, there will be a confirmation that Rancher server has successfully connected to an external database. If this is not set up correctly, please repeat steps 1 and 4 in the previous section. For each node that you want in HA, use the startup script to launch Rancher server on all nodes. The script will start a Rancher server container that connects to the same external MySQL database created earlier. Note: Please ensure that you have stopped the script-generating Rancher server container after you generate the rancher-ha.sh launch script. Otherwise, if you try to launch the HA script on the same node, there will be a port conflict and the HA node will fail to start. Navigate to the IP or hostname of the external load balancer that you provided earlier and used in the Host Registration URL when generating the configuration scripts. Please note that it will take a couple of minutes before the UI is available as Rancher comes up. If your UI doesn't become available, view the status of the management stack. Once you have added all the hosts into your environment, your HA setup is complete and you can start launching services, or start launching templates from the Rancher Catalog. Note: If you are using AWS, you will need to specify the IP of the hosts that you are adding into Rancher. If you are adding a custom host, you can specify the public IP in the UI and the command to launch Rancher agent will be edited to specify the IP. If you are adding a host through the UI, after the host has been added into Rancher, you will need to ssh into the host and re-run the custom command to re-launch Rancher agent so that the IP is correct.
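As a rough sketch of that re-run (every value below is a placeholder: the agent image tag, registration URL, and token must be copied from the exact command shown in the Rancher UI, and CATTLE_AGENT_IP is assumed here to be the environment variable that pins the host's public IP):

sudo docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e CATTLE_AGENT_IP=<public_IP_of_host> \
  rancher/agent:<version> http://<load_balancer_or_server_IP>:8080/v1/scripts/<registration_token>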
http://docs.rancher.com/rancher/v1.1/en/installing-rancher/installing-server/multi-nodes/
2017-03-23T08:17:35
CC-MAIN-2017-13
1490218186841.66
[]
docs.rancher.com
Warning If an operating system was installed on that partition, it must be reinstalled if you want to use that system as well. Be aware that some computers sold with pre-installed operating systems may not include the installation media to reinstall the original operating system. You should check whether this applies to your system before you destroy your original partition and its operating system installation. After creating a smaller partition for your existing operating system, you can reinstall software, restore your data, and start the installation. Figure B.10, "Disk Drive Being Destructively Repartitioned" shows this being done. Figure B.10. Disk Drive Being Destructively Repartitioned In the above example, 1 represents before and 2 represents after. Warning Any data previously present in the original partition is lost. B.2.3.1. Compress Existing Data As the following figure shows, the first step is to compress the data in your existing partition. The reason for doing this is to rearrange the data such that it maximizes the available free space at the "end" of the partition. Figure B.11. Disk Drive Being Compressed.
https://docs.fedoraproject.org/en-US/Fedora/25/html/Installation_Guide/sect-disk-partitions-active-partition.html
2017-03-23T08:14:08
CC-MAIN-2017-13
1490218186841.66
[]
docs.fedoraproject.org
Tips and Tricks Tabbed Modeling It is possible to edit multiple models at the same time by using model tabs. Every new or loaded model is displayed as a tab in the tab bar at the top of the screen. By clicking on a tab, you can view the respective model. Eclipse Project Synchronization If you use Eclipse to develop process applications, your models are typically part of an Eclipse project. With default settings, editing and saving a model in the Camunda Modeler requires manual refreshing of the project content in Eclipse. Eclipse can be configured to automatically refresh project content whenever a file changes by selecting Window / Preferences in the top level menu and navigate to General / Workspace in the preferences window. Tick the box Refresh using native hooks or polling.
https://docs.camunda.org/manual/7.6/modeler/camunda-modeler/tips/
2017-03-23T08:27:06
CC-MAIN-2017-13
1490218186841.66
[array(['img/model-tabs-1.png', 'Model Tabs'], dtype=object) array(['img/eclipse-refresh.png', 'Model Tabs'], dtype=object)]
docs.camunda.org
Getting Started with Joomla! for WordPress users This series of documents introduces Joomla! to WordPress users and covers Joomla version 3.x. Who is it written for? The series is for anyone who already has experience with WordPress and wants to use Joomla! What's covered - Getting Started with Templates for WordPress users - Getting Started with Categories for WordPress users - Getting Started with Articles for WordPress users - Getting Started with Menus for WordPress users - Getting Started with Modules for WordPress users - Installing a Joomla website for WordPress users - Migrating content from a WordPress website to Joomla - Combining Joomla and WordPress
https://docs.joomla.org/J3.x:Getting_Started_with_Joomla!_for_WordPress_users
2017-03-23T08:22:50
CC-MAIN-2017-13
1490218186841.66
[]
docs.joomla.org
Rules for generating Dockerfiles involving OPAM val run_as_opam : ('a, unit, string, Dockerfile.t) Pervasives.format4 -> 'a run_as_opam fmt runs the command specified by the fmt format string as the opam user. val opam_init : ?branch:string -> ?repo:string -> ?need_upgrade:bool -> ?compiler_version:string -> unit -> Dockerfile.t opam_init ?branch ?repo ?need_upgrade ?compiler_version initialises the OPAM repository. The repo is git://github.com/ocaml/opam-repository by default and branch is master by default. If compiler-version is specified, an opam switch is executed to that version. If unspecified, then the system switch is default. need_upgrade will run opam admin upgrade on the repository for the latest OPAM2 metadata format. val install_opam_from_source : ?prefix:string -> ?branch:string -> unit -> Dockerfile.t Commands to install OPAM via a source code checkout from GitHub. The branch defaults to the 1.2 stable branch. The binaries are installed under <prefix>/bin, defaulting to /usr/local/bin. val install_cloud_solver : Dockerfile.t install_cloud_solver will use the hosted OPAM aspcud service from IRILL. It will install a fake /usr/bin/aspcud script that requires online connectivity. val header : ?maintainer:string -> string -> string -> Dockerfile.t header image tag initialises a fresh Dockerfile using the image:tag as its base.
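As a rough usage sketch, the combinators above can be composed into a single Dockerfile.t value. The base image, compiler version, and the package installed below are arbitrary examples, and the string_of_t rendering call is assumed to come from the Dockerfile module; treat this as an illustration rather than the library's documented example.

```ocaml
(* Illustrative sketch: build a Dockerfile that initialises OPAM and installs a package. *)
open Dockerfile

let df =
  Dockerfile_opam.header ~maintainer:"you@example.com" "ubuntu" "16.04"
  @@ Dockerfile_opam.opam_init ~compiler_version:"4.03.0" ()
  @@ Dockerfile_opam.run_as_opam "opam install -y lwt"

let () = print_endline (string_of_t df)
```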
http://docs.mirage.io/dockerfile/Dockerfile_opam/index.html
2017-03-23T08:14:38
CC-MAIN-2017-13
1490218186841.66
[]
docs.mirage.io
Rancher Server is able to run without internet, but the web browser accessing the UI will need access to the private network. Rancher can be configured with either a private registry or with an HTTP proxy. When launching Rancher server with no internet access, there will be a couple of features that will no longer work properly. It is assumed you either have your own private registry or other means of distributing docker images to your machine. If you need help with creating a private registry, please refer to the Docker documentation for private registries. It is very important that all images (e.g. rancher/server, rancher/agent, and any infrastructure service images) are distributed before attempting to install/upgrade Rancher Server. If these versions are not available in your private registry, Rancher Server will become unstable. For each release of Rancher server, the corresponding Rancher agent and Rancher agent instance versions will be available in the release notes. In order to find the images for your infrastructure services, you would need to reference the infra-templates folders in our Rancher catalog and community catalog to see which infrastructure services you'd like to include and the associated images in those templates from those catalogs. These examples are for the rancher/server and rancher/agent images using a machine that has access to both DockerHub and your private registry. We recommend tagging the version of the images in your private registry as the same version that exists in DockerHub. # rancher/server $ docker pull rancher/server:v1.6.0 $ docker tag rancher/server:v1.6.0 localhost:5000/<NAME_OF_LOCAL_RANCHER_SERVER_IMAGE>:v1.6.0 $ docker push localhost:5000/<NAME_OF_LOCAL_RANCHER_SERVER_IMAGE>:v1.6.0 # rancher/agent $ docker pull rancher/agent:v1.1.3 $ docker tag rancher/agent:v1.1.3 localhost:5000/<NAME_OF_LOCAL_RANCHER_AGENT_IMAGE>:v1.1.3 $ docker push localhost:5000/<NAME_OF_LOCAL_RANCHER_AGENT_IMAGE>:v1.1.3 Note: For any infrastructure service images, you would have to follow the same steps. On your machine, start Rancher server to use the specific Rancher Agent image. We recommend using specific version tags instead of the latest tag to ensure you are working with the correct versions. Example: $ sudo docker run -d --restart=unless-stopped -p 8080:8080 \ -e CATTLE_BOOTSTRAP_REQUIRED_IMAGE=<Private_Registry_Domain>:5000/<NAME_OF_LOCAL_RANCHER_AGENT_IMAGE>:v1.1.3 \ <Private_Registry_Domain>:5000/<NAME_OF_LOCAL_RANCHER_SERVER_IMAGE>:v1.6.0 The UI and API will be available on the exposed port 8080. You can access the UI by going to the following URL: http://<SERVER_IP>:8080. After accessing the UI, the command to add custom hosts will be configured to use the private registry image for the Rancher agent. $ sudo docker run -d --privileged -v /var/run/docker.sock:/var/run/docker.sock <Private_Registry_Domain>:5000/<NAME_OF_LOCAL_RANCHER_AGENT_IMAGE>:v1.1.3 http://<SERVER_IP>:8080/v1/scripts/<security_credentials> In Rancher, all infrastructure services are defaulted to pull from DockerHub. Changing the default registry from DockerHub to a different private registry is located in the API settings. Add the private registry: In Infrastructure -> Registries section, add the private registry that contains the images for the infrastructure services. Update the default registry: Under Admin -> Settings -> Advanced Settings, click on I understand that I can break things by changing advanced settings. Find the registry.default setting and click on the edit icon. 
Add the registry value and click on Save. Once the registry.default setting has been updated, the infrastructure services will begin to pull from the private registry instead of DockerHub. Create a New Environment: After updating the default registry, you will need to re-create your environments so that the infrastructure services will be using the updated default registry. Any existing environments prior to the change in default registry would have their infrastructure services still pointing to DockerHub. Note: Any infrastructure stacks in an existing environment will still be using the original default registry (e.g. DockerHub). These stacks will need to be deleted and re-launched to start using the updated default registry. The stacks can be deployed from Catalog -> Library. Reminder: in this setup, the web browser accessing the UI only needs access to the private network. In order to set up an HTTP proxy, the Docker daemon will need to be modified to point to the proxy for Rancher server and Rancher hosts. Before launching Rancher server or Rancher agents, configure the proxy settings on the Docker daemon of each machine. Rancher server does not need to be launched using any environment variables when using a proxy. Therefore, the command to start Rancher server will be the same as a regular installation. sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server The UI and API will be available on the exposed port 8080. You can access the UI by going to the following URL: http://<SERVER_IP>:8080. After accessing the UI, you can add hosts; the command to launch Rancher agent can be used on any machine that has Docker configured to use the HTTP proxy.
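A common way to point the Docker daemon at an HTTP proxy on systemd-based hosts is a drop-in unit file. The proxy address and NO_PROXY entries below are placeholders, and the exact mechanism can differ by distribution, so treat this as an illustrative sketch rather than the official Rancher procedure.

```bash
# Illustrative only: configure the Docker daemon to use an HTTP proxy (systemd hosts).
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,<SERVER_IP>"
EOF

# Reload systemd and restart Docker so the proxy settings take effect.
sudo systemctl daemon-reload
sudo systemctl restart docker
```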
http://docs.rancher.com/rancher/v1.6/en/installing-rancher/installing-server/no-internet-access/
2017-03-23T08:20:48
CC-MAIN-2017-13
1490218186841.66
[]
docs.rancher.com
The commit size and interval settings were introduced in 2.2.0. A block of events is committed either when the number of buffered events reaches the configured block commit size (set by --svc-applier-block-commit-size) or when the commit timer reaches the specified commit interval (set by --svc-applier-block-commit-interval).
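As an illustrative sketch only, these two options are normally applied through the tpm configuration tooling; the service name and the values below are placeholders, and the accepted units for the interval should be checked against the option reference for your release before use.

```bash
# Hypothetical example: tune block commit for a replication service named "alpha".
./tools/tpm update alpha \
  --svc-applier-block-commit-size=100 \
  --svc-applier-block-commit-interval=5s
```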
http://docs.continuent.com/tungsten-replicator-4.0/performance-block.html
2017-03-23T08:11:38
CC-MAIN-2017-13
1490218186841.66
[]
docs.continuent.com
Predictive scaling for Amazon EC2 Auto Scaling Use predictive scaling to increase the number of EC2 instances in your Auto Scaling group In general, if you have regular patterns of traffic increases and applications that take a long time to initialize, you should consider using predictive scaling. Predictive scaling can help you scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling, which is reactive in nature. Predictive scaling can also potentially save you money on your EC2 bill by helping you avoid the need to overprovision capacity. For example, consider an application that has high usage during business hours and low usage overnight. At the start of each business day, predictive scaling can add capacity before the first influx of traffic. This helps your application maintain high availability and performance when going from a period of lower utilization to a period of higher utilization. You don't have to wait for dynamic scaling to react to changing traffic. You also don't have to spend time reviewing your application's load patterns and trying to schedule the right amount of capacity using scheduled scaling. Use the AWS Management Console, the AWS CLI, or one of the SDKs to add a predictive scaling policy to any Auto Scaling group. Contents How predictive scaling works Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch. The machine learning algorithm consumes the available historical data and calculates capacity that best fits the historical load pattern, and then continuously learns based on new data to make future forecasts more accurate. To use predictive scaling, you first create a scaling policy with a pair of metrics and a target utilization. Forecast creation starts immediately after you create your policy if there is at least 24 hours of historical data. Predictive scaling finds patterns in CloudWatch metric data from the previous 14 days to create an hourly forecast for the next 48 hours. Forecast data is updated daily based on the most recent CloudWatch metric data. You can configure predictive scaling in forecast only mode so that you can evaluate the forecast before predictive scaling starts actively scaling capacity. You can then view the forecast and recent metric data from CloudWatch in graph form from the Amazon EC2 Auto Scaling console. You can also access forecast data by using the AWS CLI or one of the SDKs. When you are ready to start scaling with predictive scaling, switch the policy from forecast only mode to forecast and scale mode. After you switch to forecast and scale mode, your Auto Scaling group starts scaling based on the forecast. Using the forecast, Amazon EC2 Auto Scaling scales the number of instances at the beginning of each hour: If actual capacity is less than the predicted capacity, Amazon EC2 Auto Scaling scales out your Auto Scaling group so that its desired capacity is equal to the predicted capacity. If actual capacity is greater than the predicted capacity, Amazon EC2 Auto Scaling doesn't scale in capacity. The values that you set for the minimum and maximum capacity of the Auto Scaling group are adhered to if the predicted capacity is outside of this range. Best practices Confirm whether predictive scaling is suitable for your workload. A workload is a good fit for predictive scaling if it exhibits recurring load patterns that are specific to the day of the week or the time of day. 
To check this, configure predictive scaling policies in forecast only mode. Evaluate the forecast and its accuracy before allowing predictive scaling to actively scale your application. Predictive scaling needs at least 24 hours of historical data to start forecasting. However, forecasts are more effective if historical data spans the full two weeks. If you update your application by creating a new Auto Scaling group and deleting the old one, then your new Auto Scaling group needs 24 hours of historical load data before predictive scaling can start generating forecasts again. In this case, you might have to wait a few days for a more accurate forecast. Create multiple predictive scaling policies in forecast only mode to test the potential effects of different metrics. You can create multiple predictive scaling policies for each Auto Scaling group, but only one of the policies can be used for active scaling. If you choose a custom metric pair, you can define a different combination of load metric and scaling metric. To avoid issues, make sure that the load metric you choose represents the full load on your application. Use predictive scaling with dynamic scaling. Dynamic scaling is used to automatically scale capacity in response to real-time changes in resource utilization. Using it with predictive scaling helps you follow the demand curve for your application closely, scaling in during periods of low traffic and scaling out when traffic is higher than expected. When multiple scaling policies are active, each policy determines the desired capacity independently, and the desired capacity is set to the maximum of those. For example, if 10 instances are required to stay at the target utilization in a target tracking scaling policy, and 8 instances are required to stay at the target utilization in a predictive scaling policy, then the group's desired capacity is set to 10. Create a predictive scaling policy (console) You can configure predictive scaling policies on an Auto Scaling group after the group is created. To create a predictive scaling policy Open the Amazon EC2 Auto Scaling console at . Select the check box next to your Auto Scaling group. A split pane opens up in the bottom part of the Auto Scaling groups page, showing information about the group that's selected. On the Automatic scaling tab, in Scaling policies, choose Create predictive scaling policy. To define a policy, do the following: Enter a name for the policy. Turn on Scale based on forecast to give Amazon EC2 Auto Scaling permission to start scaling right away. To keep the policy in forecast only mode, keep Scale based on forecast turned off. For Metrics, choose your metrics from the list of options. Options include CPU, Network In, Network Out, Application Load Balancer request count, and Custom metric pair. If you chose Application Load Balancer request count per target, then choose a target group in Target group. Application Load Balancer request count per target is only supported if you have attached an Application Load Balancer target group to your Auto Scaling group. If you chose Custom metric pair, choose individual metrics from the drop-down lists for Load metric and Scaling metric. For Target utilization, enter the target value that Amazon EC2 Auto Scaling should maintain. Amazon EC2 Auto Scaling scales out your capacity until the average utilization is at the target utilization, or until it reaches the maximum number of instances you specified. 
(Optional) For Pre-launch instances, choose how far in advance you want your instances launched before the forecast calls for the load to increase. (Optional) For Max capacity behavior, choose whether to allow Amazon EC2 Auto Scaling to scale out higher than the group's maximum capacity when predicted capacity exceeds the defined maximum. Turning on this setting allows scale out to occur during periods when your traffic is forecasted to be at its highest. (Optional) For Buffer maximum capacity above the forecasted capacity, choose how much additional capacity to use when the predicted capacity is close to or exceeds the maximum capacity. The value is specified as a percentage relative to the predicted capacity. For example, if the buffer is 10, this means a 10 percent buffer, so if the predicted capacity is 50 and the maximum capacity is 40, then the effective maximum capacity is 55. If set to 0, Amazon EC2 Auto Scaling may scale capacity higher than the maximum capacity to equal but not exceed predicted capacity. Choose Create predictive scaling policy. Create a predictive scaling policy (AWS CLI) Use the AWS CLI as follows to configure predictive scaling policies for your Auto Scaling group. For more information about the CloudWatch metrics you can specify for a predictive scaling policy, see PredictiveScalingMetricSpecification in the Amazon EC2 Auto Scaling API Reference. Example 1: A predictive scaling policy that creates forecasts but doesn't scale The following example policy shows a complete policy configuration that uses CPU utilization metrics for predictive scaling with a target utilization of 40. ForecastOnly mode is used by default, unless you explicitly specify which mode to use. Save this configuration in a file named config.json. { "MetricSpecifications": [ { "TargetValue": 40, "PredefinedMetricPairSpecification": { "PredefinedMetricType": "ASGCPUUtilization" } } ] } To create this policy, run the put-scaling-policy command with the configuration file specified, as demonstrated in the following example. aws autoscaling put-scaling-policy --policy-name cpu40-predictive-scaling-policy\ --auto-scaling-group-name my-asg--policy-type PredictiveScaling \ --predictive-scaling-configuration If successful, this command returns the policy's Amazon Resource Name (ARN). { "PolicyARN": "arn:aws:autoscaling:region:account-id:scalingPolicy:2f4f5048-d8a8-4d14-b13a-d1905620f345:autoScalingGroupName/my-asg:policyName/cpu40-predictive-scaling-policy", "Alarms": [] } Example 2: A predictive scaling policy that forecasts and scales For a policy that allows Amazon EC2 Auto Scaling to forecast and scale, add the property Mode with a value of ForecastAndScale. The following example shows a policy configuration that uses Application Load Balancer request count metrics. The target utilization is 1000, and predictive scaling is set to ForecastAndScale mode. { "MetricSpecifications": [ { "TargetValue": 1000, "PredefinedMetricPairSpecification": { "PredefinedMetricType": "ALBRequestCount", "ResourceLabel": "app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff" } } ], "Mode": "ForecastAndScale" } To create this policy, run the put-scaling-policy command with the configuration file specified, as demonstrated in the following example. 
aws autoscaling put-scaling-policy --policy-name alb1000-predictive-scaling-policy\ --auto-scaling-group-name my-asg--policy-type PredictiveScaling \ --predictive-scaling-configuration If successful, this command returns the policy's Amazon Resource Name (ARN). { "PolicyARN": "arn:aws:autoscaling:region:account-id:scalingPolicy:19556d63-7914-4997-8c81-d27ca5241386:autoScalingGroupName/my-asg:policyName/alb1000-predictive-scaling-policy", "Alarms": [] } Example 3: A predictive scaling policy that can scale higher than maximum capacity The following example shows how to create a policy that can scale higher than the group's maximum size limit when you need it to handle a higher than normal load. By default, Amazon EC2 Auto Scaling doesn't scale your EC2 capacity higher than your defined maximum capacity. However, it might be helpful to let it scale higher with slightly more capacity to avoid performance or availability issues. To provide room for Amazon EC2 Auto Scaling to provision additional capacity when the capacity is predicted to be at or very close to your group's maximum size, specify the MaxCapacityBreachBehavior and MaxCapacityBuffer properties, as shown in the following example. You must specify MaxCapacityBreachBehavior with a value of IncreaseMaxCapacity. The maximum number of instances that your group can have depends on the value of MaxCapacityBuffer. { "MetricSpecifications": [ { "TargetValue": 70, "PredefinedMetricPairSpecification": { "PredefinedMetricType": "ASGCPUUtilization" } } ], "MaxCapacityBreachBehavior": "IncreaseMaxCapacity", "MaxCapacityBuffer": 10 } In this example, the policy is configured to use a 10 percent buffer ( "MaxCapacityBuffer": 10), so if the predicted capacity is 50 and the maximum capacity is 40, then the effective maximum capacity is 55. A policy that can scale capacity higher than the maximum capacity to equal but not exceed predicted capacity would have a buffer of 0 ( "MaxCapacityBuffer": 0). To create this policy, run the put-scaling-policy command with the configuration file specified, as demonstrated in the following example. aws autoscaling put-scaling-policy --policy-name cpu70-predictive-scaling-policy\ --auto-scaling-group-name my-asg--policy-type PredictiveScaling \ --predictive-scaling-configuration If successful, this command returns the policy's Amazon Resource Name (ARN). { "PolicyARN": "arn:aws:autoscaling:region:account-id:scalingPolicy:d02ef525-8651-4314-bf14-888331ebd04f:autoScalingGroupName/my-asg:policyName/cpu70-predictive-scaling-policy", "Alarms": [] } Limitations Predictive scaling requires 24 hours of metric history before it can generate forecasts. You currently cannot use predictive scaling with Auto Scaling groups that have a mixed instances policy.
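While a policy is running in forecast only mode, you can also retrieve the raw forecast data from the AWS CLI with the get-predictive-scaling-forecast command. The group name, policy name, and time window below follow the earlier examples and are illustrative.

```bash
# Retrieve the load and capacity forecast produced by a predictive scaling policy.
aws autoscaling get-predictive-scaling-forecast \
  --auto-scaling-group-name my-asg \
  --policy-name cpu40-predictive-scaling-policy \
  --start-time "2022-01-29T00:00:00Z" \
  --end-time "2022-01-31T00:00:00Z"
```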
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
2022-01-29T05:51:05
CC-MAIN-2022-05
1642320299927.25
[]
docs.aws.amazon.com
This section consists of the following: Mechanical Dimensions The U60 DIN 43880 2TE enclosure is 35mm (1.38“) wide, 89.5mm (3.52“) high and 66.5mm (2.62“) deep excluding the network connectors. Visual Indicators Three LEDs are located in the centre of the enclosure. The yellow Service LED on the left (labeled Link or Service), provides a visual indication of the network status of the U60. The RX LED in the middle flashes green when data is received from the network. The TX LED on the right flashes green when data is transmitted onto the network. The Service status is also provided to the host via the USB interface so that the host can provide a network Service indicator if required. Connectors The U60 FT DIN has a two-way network connector with push clips to retain the network cabling. The network connection is polarity insensitive. The U60 RS-485 DIN has a three-way network connector, also with push clips to retain the network cabling. The three positions of the connector are used for the following: + (Signal + or B), - (Signal - or A), and SC (Signal Common). When used in unbiased LonWorks RS-485 networks, the network connection is polarity insensitive. Both the U60 FT DIN and U60 RS-485 DIN also have Micro B female USB connectors for connection to a host processor and come supplied with a suitable USB Type A to Micro-B cable. Compatibility The U60 DIN is compatible with the IzoT Router, Windows computers with OpenLDV 5, with hosts communicating with a Layer 5 MIP interface, and with hosts communicating with a Layer 2 MIP interface including hosts implementing the IzoT Device Stack EX..
http://docs.adestotech.com/pages/?pageId=43375992&sortBy=size
2022-01-29T05:22:13
CC-MAIN-2022-05
1642320299927.25
[]
docs.adestotech.com
Linked Images Widget Overview This article covers specific setup and content entry for the Linked Images Widget. If you've never set up a Widget, check the Setup Guides Section for a step by step guide for getting started. The Linked Images Widget is a flexible content area that allows you to configure a group of images with links Linked Images Widget. Content Entry Form While entering content, you can upload the image and create the link using the default widget setup. By default, the Widget's content entry form will include the following: - Image - The image for this item. - URL - The URL to link the image to. - Open link in new window - Whether or not to open the link in the current window or a new window. Customization To edit configuration for this Widget, head into the Configure section in the sidebar under your item type, for example, under Products, choose Configure. Hover on the Widget to see options. Widget Display Settings - Grid Width - How many linked images to display horizontally. This setting assumes the visitor's device is large enough to display the items horizontally, such as a desktop display. Has no impact on smaller screens like phones..
https://custom-fields.docs.bonify.io/article/342-linked-images-widget
2022-01-29T05:09:37
CC-MAIN-2022-05
1642320299927.25
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecd69412c7d3a3dea3d0b36/images/5ff625e166df373cab706eba/file-0NKbvfPMoZ.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecd69412c7d3a3dea3d0b36/images/5ff62676551e0c2853f39ddf/file-QhcmNbo9na.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecd69412c7d3a3dea3d0b36/images/5fd2523323119734ee37e2b8/file-u4BX1ZivDD.png', None], dtype=object) ]
custom-fields.docs.bonify.io
Replace task steps with a MetaBot Use MetaBots to encapsulate assets and logic to do common processes and tasks. MetaBots are reusable by other bots, allowing Bot developers to create a library of MetaBots for reuse. Prerequisites - The Import data to a MetaBot task must be completed successfully before you can complete this task. - Important: Verify that MyFirstTaskBot.atmx is complete and fully functional. Procedure - Navigate to the main screen of the Automation Anywhere Enterprise Client where the TaskBot was created in Build a basic bot using the Enterprise Client. - Open the TaskBot MyFirstTaskBot.atmx. - Disable or delete all the "Set text `Filedata Column. . ." rows. - Click the MetaBot tab. - Drag and drop the MetaBot AddNewUsers.mbot between Start Loop and Stop Loop. A window titled MetaBot opens with a list of the Input Parameters created in the Logic for this MetaBot. - Click the Value field next to the vFirstName Input Parameter. - Press F2. - In the list of variables, select the Fielddata Column. - Click Insert. - Clear the Column Number / Select Variable field and type the appropriate column number for each parameter. - Column 1 = vFirstName - Column 2 = vLastName - Column 3 = vCompanyName - Column 4 = vEmail - Column 5 = vPhone - Column 6 = vUserName - Column 7 = vPassword - Click Save. - In the Workbench menu ribbon, click Save. To verify that the TaskBot works correctly, go to Verify a basic MetaBot.
https://docs.automationanywhere.com/fr-FR/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/bot-creator/build-basic-bot/aae-replace-task-steps-with-a-meta-bot.html
2022-01-29T04:03:29
CC-MAIN-2022-05
1642320299927.25
[]
docs.automationanywhere.com
ASP.NET 2.0 and 3.5 not working after you uninstall ASP.NET 4.5 in Windows 8 or Windows Server 2012 This article helps you resolve the problem where uninstalling ASP.NET 4.5 from Windows 8 or Windows Server 2012 causes ASP.NET 2.0 and ASP.NET 3.5 not working. Original product version: Windows Server 2012, Hyper-V Server 2012, .NET Framework 4.5 Original KB number: 2748719 Symptoms ASP.NET 2.0 and ASP.NET 3.5 require the ASP.NET 4.5 feature to be enabled on a computer that is running Windows 8 or Windows Server 2012. If you remove or disable ASP.NET 4.5, then all ASP.NET 2.0 and ASP.NET 3.5 applications on the computer will not run. Resolution To enable the ASP.NET 4.5 feature in Windows 8 or 8.1, follow these steps: Press the Windows logo key, type control panel, and then click the Control Panel icon. Note If you are not using a keyboard, swipe in from the right edge of the screen, tap Search, type control panel in the search box, and then tap the displayed Control Panel icon. Click Programs, and then click Turn Windows features on or off. Expand .NET Framework 4.5 Advanced Services. Select the ASP.NET 4.5 check box. Click OK. More information For more information about how to enable the ASP.NET 4.5 feature on Windows Server 2012, see IIS 8.0 using ASP.NET 3.5 and ASP.NET 4.5.
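If you prefer to script the fix instead of using Control Panel, the same feature can usually be enabled with DISM from an elevated command prompt. The feature names below are the ones commonly used on Windows 8/Server 2012, but they can vary by edition, so confirm them with the get-features listing first; this is a hedged sketch, not an official command sequence.

```
:: Hypothetical sketch: enable ASP.NET 4.5 from an elevated command prompt.
dism /online /get-features | findstr /i aspnet
dism /online /enable-feature /featurename:NetFx4Extended-ASPNET45 /all
dism /online /enable-feature /featurename:IIS-ASPNET45 /all
```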
https://docs.microsoft.com/en-US/troubleshoot/developer/webapps/aspnet/development/aspnet-35-aspnet-2-not-work
2022-01-29T05:03:32
CC-MAIN-2022-05
1642320299927.25
[]
docs.microsoft.com
Roadmap Please check with the sales team anytime to add feature requests or provide feedback on priority of current items in progress. Recent Updates: 6100 Release - Text annotation fix to save and view better with various versions of Acrobat PDF viewers - Signature feature to allow users to sign documents easier without export and import - Quick Message option allowing users to send notifications to email notification lists - Quick Message update to allow token-based message content - API field exposed for bucket parameter on document add function - Admin search includes the option to search by contains or starts-with criteria Next Minor Release Annotations - Default to transparent background for signature annotations - Ability to add multiples annotation types without clicking APPLY - Ability to transfer annotations when splitting documents - Ability to transfer annotations when merging documents - Ability to transfer annotations to the split/merge dialog when splitting or merging docs - Prevent ability to move, but not save another user's annotation - Remove ability to move another user’s annotation Security - Updated cookie management and cross-site scripting prevention checks - Improved login and logout session management - Update checks to handle hidden documents when downloading stack - Prohibit document movement when editing is locked. - Remove Stack Edit Type column where not required - Updated security checks for new bundled documents - Enhanced screen navigation handling for user auto-logout options Document, Barcode, and Bundling Enhancements - Add ALT-IDs to appear in Document Definition grid - File Catalog user interface behavior to preserve document ordering in loan list - Zip settings applied by default for manual bundling - Add missing document list and naming list settings in top section of Bundle Execution Screen - Add ability to merge nested documents - Add barcode unknown document setting in File Gateway handlers - Ability to search a for document by typing the name in the Document List drop down - Ability to copy documents from one loan to another - Bundling enhancements to skip corrupted/missing documents - Correct manual bundling issue with Chrome version 83 Miscellaneous - Update error information for password related changes in the admin utilities - Print Client update to keep documents saved for later after previous documents - Order by gateway behavior update for File Room File List - Update date format mask behavior for Field Definitions - Sort by installed date for better display on Version Info Grid - Enhance Cleanup Agent to improve performance with large documents stores - Add new capabilities and options to reporting framework
https://docs.xdocm.com/6100/user/roadmap
2022-01-29T04:57:04
CC-MAIN-2022-05
1642320299927.25
[]
docs.xdocm.com
Projects using Kiln Over the past years and versions, Kiln has been used to build a wide range of projects, including the following. - Ancient Inscriptions of the Northern Black Sea - Corpus of Romanesque Sculpture in Britain and Ireland - Centre for the History and Analysis of Recorded Music - The Complete Works of Ben Jonson: Online Edition - Digital Du Chemin - Electronic Sawyer - The Gascon Rolls Project - Greek Bible in Byzantine Judaism - Henry III Fine Rolls - Hofmeister XIX - Inscriptions of Roman Cyrenaica - Inscriptions of Roman Tripolitania - Jane Austen’s Fiction Manuscripts - Jonathan Swift Archive - Language of Landscape - Digital Edition of Hermann Burger’s Lokalbericht - Mapping Medieval Chester - Nineteenth-Century Serials Edition - Records of Early English Drama (REED) - Schenker Documents Online - Sharing Ancient Wisdoms
https://kiln.readthedocs.io/en/latest/projects.html
2022-01-29T04:04:20
CC-MAIN-2022-05
1642320299927.25
[]
kiln.readthedocs.io
Bots: Configure version control To control edits to files that might include TaskBots, MetaBots, docs, reports, scripts, exe files, and workflows, as an Control Room admin, you can configure version control in the Control Room settings. The Control Room is integrated with Subversion Version Control so that the version, checkin or checkout, version history, and version rollback functionality can be leveraged with ease for all files. By default, the feature is disabled. Version control prerequisites - For version control to be enabled and integrated from the Control Room, the SVN server must be installed and configured.Note: Automation Anywhere Control Room supports various versions of the SVN. See Version control requirements. - SVN administrator user should be created with required permissions. - SVN repository should be created, which can be used to store all version control files. Reusing an SVN repository that is not empty for multiple Control Room instances, such as development and UAT environments, might delete version details and the history of existing bots. If the Control Room instances for development and UAT are different, then either reuse an empty SVN repository or create a new SVN repository.Note: After the Control Room integration with SVN is up and running, all communication for version control operations from the Automation Anywhere Enterprise Client to SVN take places only through the Control Room. Impact of enabling and disabling version control settings When you enable and disable version control settings in the Control Room, it affects the way the Enterprise Client can access bots and upload those to the Control Room. While enabling and disabling this setting, ensure you are aware about its impact, which is summarized in the following list: - When you enable version control settings, the system uploads the bots from the Control Room repository to the SVN repository. During SVN syncing, the Control Room repository is in read-only mode and locked. You will not be able to perform actions such as upload, delete, set production version, checkout, checkin, undo checkout, and force unlock. - When you disable version control settings, the files that are in checked-out state are listed for force unlock by the Control Room administrator. You are allowed to disable the settings only when you unlock the checked-out files. - When you reenable version control settings, you can: - Connect to the repository where you uploaded the bots earlier. Version history of existing bots is also retained. As a result: - The version of the bots that are not updated remains the same. - A new version of the updated bots is created. - Production version is not set if the option Do not assign production version. I will do so manually is selected. - Production version is set to latest versions of the bots if the option Automatically assign the latest version of bots to production version is selected. - To avoid any error, connect to a new repository that is empty. Your version history of the earlier repository is not retained. Also, you can choose to set the production version manually or automatically. All updates to the Version control system settings are captured in the Audit Log page.
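For reference, creating a brand-new, empty repository on the SVN server is typically a one-line operation. The path below is an example, not an Automation Anywhere default, and repository access control still has to be configured according to your own SVN setup; this is an illustrative sketch only.

```bash
# Illustrative only: create an empty SVN repository for the Control Room integration
# and confirm that it contains nothing before connecting the Control Room to it.
svnadmin create /srv/svn/aae-bot-repo
svn list file:///srv/svn/aae-bot-repo
```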
https://docs.automationanywhere.com/fr-FR/bundle/enterprise-v11.3/page/enterprise/topics/control-room/administration/settings/bots-configure-version-control.html
2022-01-29T04:22:33
CC-MAIN-2022-05
1642320299927.25
[]
docs.automationanywhere.com
Host Your Kit Supporting grass roots sport across the United Kingdom iomart in the Community Our approach to corporate citizenship mirrors our core values. We seek to involve ourselves in projects that help young people achieve their personal goals through teamwork. iomart has helped more than 200 youth sports teams across the United Kingdom by giving them FREE high quality sports kits through its community sports campaign Host Your Kit. Young players from Plymouth to Aberdeen have received football, basketball, hockey and rugby kits through this iomart initiative. iomart also teamed up with one of its customers Wheatley Housing Group to provide free kits and sports equipment for youth teams in disadvantaged communities across central Scotland. More Information about Host Your Kit Host Your Kit has been supported by leading organisations including basketballscotland, GreaterSport Manchester and Youth Football Scotland. Ambassadors for the campaign have included GB basketball star Kieron Achara and Scotland footballers Charlie Mulgrew and Emma Black. To find out more visit the official Host Your Kit website iomart works with some of the world's leading technology brands and service providers.
https://docs.iomart.com/about-iomart/corporate-responsibility/host-kit/
2022-01-29T03:35:36
CC-MAIN-2022-05
1642320299927.25
[array(['https://docs.iomart.com/wp-content/uploads/2016/04/vmware-icon.png', 'iomart Technology partners'], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2014/07/host-your-kit-logo.png', None], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2015/01/hyk1.jpg', None], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2016/04/vmware-icon.png', 'iomart Technology partners'], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2016/04/microsoft-icon.png', 'iomart Technology partners'], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2015/04/AWS_Logo_PoweredBy_space.png', 'iomart Technology partners'], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2015/08/dell-emc-partner.png', 'iomart Technology partners'], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2014/08/cisco.png', 'iomart Technology partners'], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2014/08/openstack.png', 'iomart Technology partners'], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2014/09/asigra-footer.png', 'iomart Technology partners'], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2015/08/zerto-partner.png', 'iomart Technology partners'], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2015/08/arbor_networks-e1439290770528.jpg', 'iomart Technology partners'], dtype=object) array(['https://docs.iomart.com/wp-content/uploads/2014/09/symantec.png', 'iomart Technology partners'], dtype=object) ]
docs.iomart.com
What APIs does Infinispan offer? Which JVMs (JDKs) does Infinispan work with? Is Infinispan's configuration compatible with JBoss Cache? Grouping API vs Key Affinity Service Does Infinispan store data by value or by reference?? When using Atomikos transaction manager, distributed caches are not distributing data, what is the problem? Eviction and Expiration FAQs Cache's number of entries never reaches configured maxEntries, why is that? Expiration does not work, what is the problem? Why is cache size sometimes even higher than specified maxEntries of the eviction configuration element? Why isn't there a notification for the expiration of a cache entry? consistency guarantees do I have with different Asynchronous processing settings ? In a cache entry modified listener, can the modified value be retrieved via Cache.get() when isPre=false? When annotating a method with CacheEntryCreated, how do I retrieve the value of the cache entry added? How do you make Infinispan send replication traffic over a specific network when you don't know the IP address? When using the GUI Demo, I've just put an entry in the cache with lifespan of -1. Why do I see it as having a lifespan of 60,000? When I run an application based on the Query module, I get a ClassNotFoundException for org.slf4j.impl.StaticLoggerBinder. How do I solve it? JBoss Application Server Integration FAQs Can I run my own Infinispan cache within JBoss Application Server 5 or 4? Can I run my own Infinispan cache within JBoss Application Server 6? How can I enable logging? Third Party Container FAQs Can I use Infinispan on Google App Engine for Java? When running on Glassfish or Apache, creating a cache throws an exception saying "Unable to construct a GlobalComponentRegistry", what is it wrong? Can I use Infinispan with Groovy? What about Jython, Clojure, JRuby or Scala etc.?? When running Infinispan under load, I see RejectedExecutionException, how can I fix it? Can I bind Cache or CacheManager to JNDI?? After running a Hot Rod server for a while, I get a NullPointerException in HotRodEncoder.getTopologyResponse(), how can I get around it? Is there a way to do a Bulk Get on a remote cache? What is the startServer.sh script used for? What is the startServer.bat script used for? How can I get Infinispan to show the full byte array? The log only shows partial contents of byte arrays... Clustering Transport FAQs How do I retrieve the clustering physical address? Can data stored via CLI be read using Infinispan remote clients (Hot Rod, Memcached, REST)?
https://docs.jboss.org/author/display/ISPN/Technical%20FAQs.html
2022-01-29T03:48:16
CC-MAIN-2022-05
1642320299927.25
[]
docs.jboss.org
A child theme is a theme that inherits the functionality and styling of another theme, called the parent theme. Child themes are the recommended way of modifying an existing theme. When a theme update is released, you can update it without fearing that your code customizations will be lost. No, if you don't need to edit the code or add snippets, installing the child theme is optional. You can find the child theme in the ThemeForest downloads section. Same as the parent theme, go to Appearance > Themes > Add New > upload the my-listing-child.zip and activate it. Remember that when doing code customizations, the child theme must be activated for the changes to take effect.
https://docs.mylistingtheme.com/article/installing-child-theme/
2022-01-29T03:35:44
CC-MAIN-2022-05
1642320299927.25
[]
docs.mylistingtheme.com
GET https://{tenant_url}/v2.0/SCIM/capabilities. Large group support features are identified in the swagger documentation for the /v2.0/Users and /v2.0/Groups API. When the "largeGroupSupport" flag is enabled, the tenant supports the retrieval of the members of groups that have greater than 10,000 members. Additional features include the ability to: Large group support imposes some API limitations. They are:
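A minimal request sketch is shown below; the access token, tenant URL, and Accept header are placeholders and assumptions, so substitute whatever authentication and media type your tenant is configured for.

```bash
# Hypothetical example: read the tenant's SCIM capabilities document.
curl -s \
  -H "Authorization: Bearer <access_token>" \
  -H "Accept: application/scim+json" \
  "https://<tenant_url>/v2.0/SCIM/capabilities"
```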
https://docs.verify.ibm.com/verify/reference/getcapabilities
2022-01-29T03:56:42
CC-MAIN-2022-05
1642320299927.25
[]
docs.verify.ibm.com
To send a quick message: 1. Click the email message icon from the Actions Menu 2. From the window that pops up, choose the message you want to send from the **Quick Message** dropdown field 3. Choose the message recipient from the **Choose Recipient** dropdown field 4. The screen below now has all the information needed to send the message. Based on your permissions, you may be able to edit the email subject and/or body. If you have this permission, make any appropriate edits and click SEND 5. The recipient will receive an email with the information above
https://docs.xdocm.com/6112/user/the-xdoc-document-viewer/document-actions/quick-message
2022-01-29T05:23:36
CC-MAIN-2022-05
1642320299927.25
[array(['/user/pages/04.6112/02.user/03.the-xdoc-document-viewer/16.document-actions/10.quick-message/quick-message-actions.png', 'Quick Message in Actions Menu'], dtype=object) array(['/user/pages/04.6112/02.user/03.the-xdoc-document-viewer/16.document-actions/10.quick-message/quick-message-dropdown.png', 'Quick Message Dropdown'], dtype=object) array(['/user/pages/04.6112/02.user/03.the-xdoc-document-viewer/16.document-actions/10.quick-message/choose-recipient-dropdown.png', 'Choose Recipient Dropdown'], dtype=object) array(['/user/pages/04.6112/02.user/03.the-xdoc-document-viewer/16.document-actions/10.quick-message/quick-message-message.png', 'Quick Message Email Message'], dtype=object) ]
docs.xdocm.com
Could not find asset snippets/products.custom_fields.liquid Help, there's some liquid error! Liquid error: Could not find asset snippets/products.custom_fields.liquid This error can happen if you've deleted products.custom_fields.liquid, or if you updated your theme to a new version. The snippet the error message is referring to is an automatically generated snippet that's maintained by the app and added to your theme automatically. The fix is simple. You can force the app to regenerate the template by interacting with field settings for any field. For example, changing the weights (ordering) your fields and saving will regenerate the template and the message will go away.
https://custom-fields.docs.bonify.io/article/131-could-not-find-asset-snippets-productscustomfieldsliquid
2022-01-29T04:38:57
CC-MAIN-2022-05
1642320299927.25
[]
custom-fields.docs.bonify.io
Release notes Creating payment links through the Customer Area requires no development work, and allows you to accept most payment methods, except for buy now, pay later payment methods. You can view the status of payment links within your Customer Area, and get payment notifications sent to your email. Using payment links to accept payments works as follows: On this page, you'll also learn how to implement additional use cases such as: - Tokenize payment details for subscription payments. - Customize the payment methods, style, and language shown on the payment page. Before you begin Before you begin to integrate, make sure you have followed the Get started with Adyen guide to: - Get an overview of the steps needed to accept live payments. - Create your test account. After you have created your test account: - Add your terms and conditions to the Pay by Link payment page. - Check that you have the required permissions for creating payment links. This includes API permissions, and your Customer Area user permissions. - Add payment methods to your account. Check your API permissions Check that you have the API permissions to create payment links: - Log in to your Customer Area. - Go to Developers > API credentials, and select the API credential ws@Company.[YourCompanyAccount]. Check your Customer Area user permissions Check that you have the user permissions to create payment links: - Log in to your Customer Area. - Go to Account > Users. - Select the user who will be creating payment links. This opens a page with the details for this user. - In the Roles and Associated Accounts pane, check that the user has the following user role: - Pay by Link Interface If the user does not have this role, contact your admin user. Add payment methods to your account If you haven't done so already, add payment methods to your merchant account. To see which payment methods are supported when you create payment links through the Customer Area, refer to Supported payment methods. - Log in to your Customer Area. - Switch to your merchant account. - Go to Account > Payment methods. - Select Add payment methods. - Start entering the name of the payment method, then select it from the drop-down list. - Select Submit. Step 1: Create a payment link To create a payment link: - Log in to your Customer Area. - Switch to your merchant account. - Go to Pay by Link > Payment links - Select Create payment link. Fill out the form with the payment information, specifying the following under Transaction details: Under Additional details, you can optionally choose to either: - Ask the shopper to fill in their personal details such as name, email, or address. - Manually enter shopper details if you have collected these already. If a payment method requires these details, the shopper is no longer asked to provide them on the payment page. - Select Create payment link. The next page confirms the payment link was created. Step 2: Send the payment link to your shopper On the payment link confirmation page, select Copy link. Below is an example payment link. - Send the payment link to your shopper. When the shopper selects the link, they are redirected to the Adyen-hosted payment form. The shopper can choose to pay with any payment method available in the Shopper country you provided in the form. Step 3: Get updates about the payment After the shopper completes the payment, you can check the payment result in your Customer Area, under Transactions > Payments. 
To keep track of the payment, you can also get payment updates sent to your email, and view payment links in your Customer Area. Alternatively, you can set up notification webhooks to get payment updates sent to your server. This requires development work. Get payment updates to your email To get payment status updates sent to email addresses: - Log in to your Customer Area. - Switch to your merchant account. - Go to Pay by Link > Settings. - Under Email notifications, enter one or more email addresses to receive updates for payment links manually created under this merchant account. To add more addresses, select the Enter key after each one. - Turn on the Creator email updates toggle to always send updates to the user who created the payment link. When a payment has been completed, you receive an email that contains information about the payment, including the Merchant reference, PSP reference, and payment method. View payment links in your Customer Area You can view payment links created within the last 90 days in your Customer Area. - Log in to your Customer Area. - Select Pay by Link > Payment Links. Payment links can have the following statuses: - Active: the payment link is active and can be used to make a payment. - Completed: the payment has been authorized. If you have enabled manual capture on your merchant account, you also need to capture the payment. - Payment pending: the final result of the payment is not yet known. - Expired: the payment link has expired. If you created a reusable payment link, only two statuses apply: Active and Expired. The status will not change to completed. Tokenize payment details When creating payment links through Customer Area, we only support storing payment details for recurring payments and not for one-off payments. To store your shopper's payment details for subsequent one-off payments, use the Pay by Link API. To tokenize the shopper's payment details for recurring payments, enable a setting in your account. - Log in to your Customer Area. - Switch to your merchant account. - Go to Account > Checkout settings. - Go to Tokenization. - Make sure that the Recurring toggle is turned on. When the shopper wants you to store their payment details, follow the instructions on creating a payment link, and additionally make sure to include the Shopper reference. Force the expiry of a payment link In some scenarios, you may want to force the expiry of a payment link. For example, if a shopper updates their order after you've sent them a payment link, you may want to create a new payment link with the updated amount. To force the expiry of a payment link: - Log in to your Customer Area. - Select Pay by Link > Payment Links. - Under the Payment link column, select the payment link you want to force expiry for. - In the Summary section, select the Manually expire link button under the Payment link. Customize the payment page You can customize: - Which payment methods are shown on the payment page, and in which order. - The appearance of the payment page, using themes. Payment methods The payment methods are ordered by popularity, the most popular payment methods in the shopper's country appearing at the top. You can configure which payment methods are rendered (and in which order) based on the shopper's country. To configure these settings, you must have the Change payment methods user role. - Log in to your Customer Area. - Go to Account > Checkout settings. - Select a Shopper country. 
- Drag the payment methods into the order you want them to appear to shoppers in this country. - To hide a payment method from shoppers in this country, drag it to the Other configured payment methods box. Themes Themes allow you to specify a background image and a brand logo to customize the appearance of the payment page. If you create multiple themes, you can choose a theme when you create the payment link. On the company account level, you can create only one theme. However, on the merchant account level, you can create multiple themes and set up a default one. If you don't create any themes on the merchant account, the one from the company account will be used for all payment pages. Personalize the payment page You can customize the Adyen-hosted payment page with your branding and include a link to your Terms and Conditions. Your Admin user has access to configure the payment page. If a different Customer Area user needs access to configure the payment page, ask your Admin user to assign the Pay by Link Settings role. Your Admin user can configure the page, but they need to have the same role assigned to them before they can assign it to others. If your Admin user does not have this role, contact our Support Team. To personalize the payment page: - Log in to your Customer Area. - Select Account > Pay by Link. - Select Appearance. - Enter the name of your company, store, or brand, upload your brand logo, and add a link to your Terms and Conditions. - Optionally, change the background color and upload a background image. - Select Save. You will receive a confirmation that the payment form has been updated. Test and go live Before going live, use our list of test cards and other payment methods to test your integration. We recommend testing each payment method that you intend to offer to your shoppers. You can check the test payments in your Customer Area, under Transactions > Payments. When you are ready to go live, you need to: - Apply for a live account. - Configure your live account. - Submit a request to add payment methods in your live Customer Area. - Add your terms and conditions to your live Customer Area.
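If you later want to create payment links programmatically instead of through the Customer Area, the Pay by Link API exposes a paymentLinks endpoint. The request below is an illustrative sketch only: the API version, field set, and values are assumptions, so check the API reference before relying on it.

```bash
# Hypothetical sketch: create a payment link via the Checkout API (test environment).
curl -s https://checkout-test.adyen.com/v68/paymentLinks \
  -H "x-API-key: <YOUR_API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "merchantAccount": "<YOUR_MERCHANT_ACCOUNT>",
    "reference": "ORDER-12345",
    "amount": { "currency": "EUR", "value": 1000 }
  }'
```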
https://docs.adyen.com/pt/unified-commerce/pay-by-link/payment-links/customer-area
2022-01-29T04:33:28
CC-MAIN-2022-05
1642320299927.25
[]
docs.adyen.com
Specify the locality for IP addresses By adding a CIDR block to the Network Localities page, you can classify traffic from these IP addresses as internal or external to your network. When you filter by network locality on the Assets page, detail metric data for internal or external devices is displayed. To remove the filter, click the x icon as shown in the following figure.
https://docs.extrahop.com/7.8/network-localities-specify/
2022-01-29T05:25:14
CC-MAIN-2022-05
1642320299927.25
[array(['/images/7.8/devices_network_locality_remove_filter.png', None], dtype=object) array(['/images/7.8/devices_network_locality_remove_filter.png', None], dtype=object) ]
docs.extrahop.com
Customized charts are then saved to dashboards. The following steps show you how to quickly create a blank custom chart: - Log in to the ExtraHop system through https://<extrahop-hostname-or-IP-address>. - Complete one of the following steps: - Click Dashboards at the top of the page. - Click Assets.
https://docs.extrahop.com/8.0/create-chart/
2022-01-29T05:11:15
CC-MAIN-2022-05
1642320299927.25
[]
docs.extrahop.com
Deactivated plugins and themes scan The scan This scan checks if any deactivated plugins and themes are installed. The fix SecuPress will ask you to select the deactivated plugins and themes to be deleted. What if the fix doesn't work? Several solutions are possible. - Try to manually delete the plugins and themes from the usual plugins page (/wp-admin/plugins.php), or directly from the /wp-content/plugins and /wp-content/themes directories.
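If you have shell access to the site, WP-CLI offers another way to remove inactive plugins and themes when the automatic fix fails; the slugs below are placeholders for whatever the scan flagged.

```bash
# Illustrative only: list and delete inactive plugins/themes with WP-CLI.
wp plugin list --status=inactive
wp plugin delete <plugin-slug>
wp theme list
wp theme delete <theme-slug>
```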
https://docs.secupress.me/article/118-deactivated-plugins-and-themes-scan
2022-01-29T04:04:45
CC-MAIN-2022-05
1642320299927.25
[]
docs.secupress.me
This guide describes how to upgrade an existing installation of CloudBees Accelerator. You can upgrade the software using a GUI or an interactive command-line interface (for Linux and Solaris), or by using a "silent" installation. Not all instructions are the same for each platform. Follow the instructions carefully for your platform.
https://docs.cloudbees.com/docs/cloudbees-build-acceleration/11.2/upgrade-guide/
2021-10-16T04:40:53
CC-MAIN-2021-43
1634323583423.96
[]
docs.cloudbees.com
Installing Cloudera Manager, CDH, and Managed Services The following diagram illustrates the phases required to install Cloudera Manager and a Cloudera Manager deployment of CDH and managed services. Every phase is required, but you can accomplish each phase in multiple ways, depending on your organization's policies and requirements. The six phases are grouped into three installation paths based on how the Cloudera Manager Server and database software are installed on the Cloudera Manager Server and cluster hosts. The criteria for choosing an installation path are discussed in Cloudera Manager Deployment. Cloudera Manager Installation Software Cloudera Manager provides the following software for the supported installation paths: - Installation path A - A small self-executing Cloudera Manager installation program to install the Cloudera Manager Server and other packages in preparation for host installation. The Cloudera Manager installer, which you install on the host where you want the Cloudera Manager Server to run, performs the following: - Installs the package repositories for Cloudera Manager and the Oracle Java Development Kit (JDK) - Installs the Cloudera Manager packages - Installs and configures an embedded PostgreSQL database for use by the Cloudera Manager Server, some Cloudera Management Service roles, some managed services, and Cloudera Navigator roles - Installation paths B and C - Cloudera Manager package repositories for manually installing the Cloudera Manager Server, Agent, and embedded database packages. - All installation paths - The Cloudera Manager Installation wizard for automating CDH and managed service installation and configuration on the cluster hosts. Cloudera Manager provides two methods for installing CDH and managed services: parcels and packages. Parcels simplify the installation process and allow you to download, distribute, and activate new versions of CDH and managed services from within Cloudera Manager. After you install Cloudera Manager and you connect to the Cloudera Manager Admin Console for the first time, use the Cloudera Manager Installation wizard to: - Discover cluster hosts - Optionally install the Oracle JDK - Optionally install CDH, managed service, and Cloudera Manager Agent software on cluster hosts - Select services - Map service roles to hosts - Edit service configurations - Start services
https://docs.cloudera.com/documentation/enterprise/5-3-x/topics/cm_ig_intro_to_cm_install.html
2021-10-16T05:15:14
CC-MAIN-2021-43
1634323583423.96
[array(['../images/cm_install_phases.jpg', None], dtype=object)]
docs.cloudera.com
At euroSKIES, we aim to provide our members with a great service for a great flight simulation experience. With your registration as a Pilot, you are automatically added to our mailing list. To be removed from the Newsletter, please fill in the form below. It may take up to 72 hours until the removal process is completed! After removal from our Newsletter System, you will be contacted in mandatory cases only. These are the following cases: - General euroSKIES issues - Account issues - Changes in Rules & Regulations and Privacy Policy - Personal contact requests and support tickets related to your account. You can re-subscribe to our Newsletter System at any time by clicking on the following link.
https://docs.euroskies.net/documents/member-service/account-requests/newsletter-removal/
2021-10-16T05:14:56
CC-MAIN-2021-43
1634323583423.96
[]
docs.euroskies.net
Aerospike Detailed information on the Aerospike state store component Component format To setup an Aerospike state store, create a component of type state.Aerospike. See this guide on how to create and apply a state store configuration.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: <NAME>
  namespace: <NAMESPACE>
spec:
  type: state.Aerospike
  version: v1
  metadata:
  - name: hosts
    value: <REPLACE-WITH-HOSTS> # Required. A comma delimited string of hosts. Example: "aerospike:3000,aerospike2:3000"
  - name: namespace
    value: <REPLACE-WITH-NAMESPACE> # Required. The aerospike namespace.
  - name: set
    value: <REPLACE-WITH-SET> # Optional

Warning: The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets as described here. Spec metadata fields Setup Aerospike You can run Aerospike locally using Docker:

docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -p 3003:3003 aerospike

You can then interact with the server using localhost:3000. The easiest way to install Aerospike on Kubernetes is by using the Helm chart:

helm repo add incubator
helm install --name my-aerospike --namespace aerospike stable/aerospike

This installs Aerospike into the aerospike namespace. To interact with Aerospike, find the service with: kubectl get svc aerospike -n aerospike. For example, if installing using the example above, the Aerospike host address would be: aerospike-my-aerospike.aerospike.svc.cluster.local:3000
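Once a component like the one above is applied and a Dapr sidecar is running, state can be saved and read through Dapr's state HTTP API. The sketch below uses Python's requests package; the component name statestore, the sidecar port 3500, and the key/value pair are placeholder assumptions, not values from this page.

import requests

# 'statestore' must match metadata.name in the component YAML above (assumed name).
DAPR_STATE_URL = "http://localhost:3500/v1.0/state/statestore"

# Save a key/value pair through the sidecar; Aerospike stores it in the configured namespace/set.
resp = requests.post(DAPR_STATE_URL, json=[{"key": "order-1", "value": {"status": "shipped"}}])
resp.raise_for_status()

# Read the value back by key.
print(requests.get(f"{DAPR_STATE_URL}/order-1").json())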
https://docs.dapr.io/reference/components-reference/supported-state-stores/setup-aerospike/
2021-10-16T06:42:43
CC-MAIN-2021-43
1634323583423.96
[]
docs.dapr.io
Troubleshooting Contents - 1 I cannot find TWONKY V4.4.* to run the Linn DS products - 2 I cannot see any Playlists on the Music listing - 3 The LINNGUI can see the DS, but cannot see any of the Music tracks - 4 I can only see some of my Music and the Music keeps pausing - 5 LINNCONFIG reports an error every time I run it - 6 On the LINNGUI, I cannot add anything to the Playlist window - 7 I cannot play some of my iTunes files - 8 A ripped CD is not showing on the Linn GUI - 9 The audio is dropping out - 10 The music has stopped and the DS player is unresponsive - 11 Linn Config is not showing the DS player - 12 Linn GUI is crashing at start-up - 13 The DS player has forgotten my playlist - 14 The Nokia N800/810 is only playing one track at a time on the Linn DS - 15 The power is on but nothing is happening - 16 Status of Linn GUI on a wireless control point fades - 17 The system has been upgraded to Bute, now my Nokia N800 is unable to control the DS - 18 Upgraded to Bute, now can't control Klimax Kontrol via handset pointed at Klimax DS - 19 When I use the Tablet PC, it does not have the scroll bar as per the PC running LinnGUI - 20 Using an ASUS EeePC, the LINNGUI comes up with a screen without a Mouse pointer - 21 Internet Radio does not work I cannot find TWONKY V4.4.* to run the Linn DS products. Twonky have recently changed their website. You can download Twonky V4.4.* from their website. You can also download TWONKY V4.4.11, which is available for different NAS devices. Note: you buy the Twonky Media SERVER licence, NOT the Twonky Media MANAGER licence. The Media Server licence can be purchased from [1]. TWONKY MEDIA V5: the current release of Twonky V5.0 does not support FLAC at this time, and we would recommend going back to Twonky Media SERVER V4.4.11 until this is resolved. - The Media server may only show a limited range of Music file types. Software such as Twonky Media V5 (Jan 09) does not display FLAC files. Linn recommend FLAC files and this may cause a disparity. In this case we would recommend using Twonky V4.4.11. I can only see some of my Music and the Music keeps pausing: This will tell Twonky to only rescan when you have added something new. On the LINNGUI, [3]. When I use the Tablet PC, it does not have the scroll bar as per the PC running LinnGUI See below... Using an ASUS EeePC, the LINNGUI Internet Radio does not work Shoutcast have changed their website setup. This has been fixed from Twonky Software version V4.4.9 [4]
https://docs.linn.co.uk/wiki/index.php?title=Troubleshooting&oldid=2072
2021-10-16T06:24:30
CC-MAIN-2021-43
1634323583423.96
[]
docs.linn.co.uk
Sources Management¶ JavaScript Sources Location¶ The JavaScript sources of an application must be located in the project folder src/main/js. All JavaScript files ( *.js) found in this folder, at any level, are processed. JavaScript Sources Load Order¶ When several JavaScript files are found in the sources folder, they are loaded in alphabetical order of their relative path. For example, the following source files:

src
└── main
    └── js
        ├── components
        │   ├── component1.js
        │   └── component2.js
        ├── ui
        │   └── widgets.js
        ├── app.js
        ├── feature1.js
        └── feature2.js

are loaded in this order: - app.js - components/component1.js - components/component2.js - feature1.js - feature2.js - ui/widgets.js JavaScript Sources Load Scope¶ All the code of the JavaScript source files is loaded in the same scope. It means a variable or function defined in a source file can be used in another one if it has been loaded first. For example, if the file src/main/js/lib.js defines a function sum, it is loaded before src/main/js/main.js, so the function sum can be used in src/main/js/main.js. JavaScript Sources Processing¶ JavaScript sources need to be processed before being executed. This processing is done in the following cases:
https://docs.microej.com/en/latest/ApplicationDeveloperGuide/js/sources.html
2021-10-16T06:44:33
CC-MAIN-2021-43
1634323583423.96
[]
docs.microej.com
Device Id #Overview The device ID is the unique identifier for your device. For IP-based devices provisioned on nRF Cloud, this id must be unique across all other devices in our system. It must also be used as the MQTT client id if your device is using MQTT. The recommended device ID format is a UUID. All nRF9160 SiPs have a factory generated UUID that can be used as the device ID. See the section below for details on how to obtain your nRF9160’s UUID. See also "Configuration options for device ID" for details on configuring the device ID in your application firmware. Nordic hardware products like the nRF9160 DK or Thingy:91 ship with a device ID in the format of nrf-[IMEI], for example, nrf-351358811330130. The IMEI is printed on the product's label. However, you are not restricted to using the nrf- format. To change the device ID, your device will require new credentials and properly configured application firmware. Bluetooth Low Energy devices are a little different. If using our iPhone gateway app, Bluetooth LE devices will show an id in UUID format. If using the Android gateway app, Bluetooth LE device ids are usually MAC addresses. These ids do not have to be globally unique because they are not provisioned in the cloud, but only connected to a gateway. Device ID's for Bluetooth LE gateways will have a UUID with a soft-gateway prefix sgw-. #How to Obtain the nRF9160's UUID Requires modem firmware v1.3.x and later. The nRF9160 contains a UUID which can be used as the nRF Cloud device ID (MQTT client ID). The UUID is found in the device identity attestation token, which is a base64 encoded CBOR object. To request an attestation token issue the following AT command: AT%ATTESTTOKEN The attestation token must then be decoded/parsed. This can be done using the modem_credentials_parser.py python3 script. See the README for additional details. The UUID will be displayed in the script's output on the line starting with Dev UUID:. The output of the KEYGEN AT command can be similarly parsed using modem_credentials_parser.py to display the UUID. The UUID is also included in JSON Web Tokens (JWTs) generated by the modem. To generate a JWT, use the JWT AT command. Decode the base64 output and the UUID can be found in the payload's "iss" claim after "nRF9160.". To obtain the UUID in your device's application code, use the Modem Attestation Token library or the Modem JWT library.
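As a rough illustration of the kind of work the official parser script does, the snippet below base64-decodes an attestation token and loads it as CBOR so its contents can be inspected. This is only a sketch and makes several assumptions: the token returned by AT%ATTESTTOKEN is assumed to consist of base64url segments separated by a dot, the third-party cbor2 package is assumed to be installed, and the exact position of the device UUID inside the decoded structure is not shown here — use modem_credentials_parser.py for authoritative output.

import base64
import cbor2  # third-party package: pip install cbor2

def b64url_decode(segment: str) -> bytes:
    # Re-add the padding that base64url encoding strips.
    padding = "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment + padding)

# Placeholder: paste the string printed by AT%ATTESTTOKEN here.
token = "<paste-attestation-token-here>"

for i, segment in enumerate(token.split(".")):
    decoded = cbor2.loads(b64url_decode(segment))
    print(f"segment {i}: {decoded!r}")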
https://docs.nrfcloud.com/Reference/Devices/DeviceId/
2021-10-16T04:41:32
CC-MAIN-2021-43
1634323583423.96
[]
docs.nrfcloud.com
Custom events Create your own specific events for tracking. What you can track Custom events information (custom user actions) will be displayed directly in recordings. Create your own events and we will track them for you. Events allow you to track user interactions other than clicks, page views (URL) and text inputs. With custom events you can get creative and track pretty much everything you want.

<script>
var eventName = 'UserOpenUpsellWindow';
var properties = {"type": "SmallDiscLimit"};
smartlook('track', eventName, properties);
</script>

The properties parameter is optional. If you only need to record that a specific user action happened, there is no need to pass any other parameters in your custom event. Have a look at this example where the user reached an app preset limit.

<script>
smartlook('track', 'UserLimitReached');
</script>
https://docs.smartlook.com/docs/web/custom-events/
2021-10-16T06:37:12
CC-MAIN-2021-43
1634323583423.96
[]
docs.smartlook.com
libEnsemble: A Python Library for Dynamic Ensemble-Based Computations David Bindel, Stephen Hudson, Jeffrey Larson, John-Luke Navarro and Stefan Wild A PDF poster version of this content is available on figshare. Overview libEnsemble is a Python library for coordinating the concurrent evaluation of dynamic ensembles of calculations. The library is developed to use massively parallel resources to accelerate the solution of design, decision, and inference problems and to expand the class of problems that can benefit from increased concurrency levels. libEnsemble aims for the following: Extreme scaling Resilience/fault tolerance Monitoring/killing of tasks (and recovering resources) Portability and flexibility Exploitation of persistent data/control flow libEnsemble is most commonly used to coordinate large numbers of parallel instances (ensembles) of simulations at huge scales. Using libEnsemble The user selects or supplies a gen_f function that generates simulation input and a sim_f function that performs and monitors simulations. The user parameterizes these functions and initiates libEnsemble in a calling script. Examples and templates of such scripts and functions are included in the library. For example, the gen_f may contain an optimization routine to generate new simulation parameters on-the-fly based on results from previous sim_f simulations. Other potential use-cases include: Manager and Workers libEnsemble employs a manager/worker scheme that can communicate through MPI, Python’s multiprocessing, or TCP. Each worker can control and monitor any level of work, from small sub-node tasks to huge many-node simulations. The manager allocates workers to asynchronously execute gen_f generation functions and sim_f simulation functions based on produced output, directed by a provided alloc_f allocation function. Flexible Run Mechanisms libEnsemble has been developed, supported, and tested on systems of highly varying scales, from laptops to machines with thousands of compute nodes. On multi-node systems, there are two basic modes of configuring libEnsemble to run and launch tasks (user applications) on available nodes. Distributed: Workers are distributed across allocated nodes and launch tasks in-place. Workers share nodes with their applications. Centralized: Workers run on one or more dedicated nodes and launch tasks to the remaining allocated nodes. Note Dividing up workers and tasks to allocated nodes is highly configurable. Multiple workers (and thus multiple tasks or user function instances) can be assigned to a single node. Alternatively, multiple nodes may be assigned to a single worker and each routine it performs. Executor Module An Executor interface is provided to ensure libEnsemble routines that coordinate user applications are portable, resilient, and flexible. The Executor automatically detects allocated nodes and available cores and can split up tasks if resource data isn’t supplied. The Executor is agnostic of both the job launch/management system and selected manager/worker communication method on each machine. The main functions are submit(), poll(), and kill(). On machines that do not support launches from compute nodes, libEnsemble’s Executor can interface with the Balsam library, which functions as a proxy job launcher that maintains and submits jobs from a database on front end launch nodes. 
Supported Research Machines libEnsemble is tested and supported on the following high-performance research machines: Running at Scale OPAL Simulations ALCF/Theta (Cray XC40) with Balsam, at Argonne National Laboratory 1030 node allocation, 511 workers, MPI communications. 2044 2-node simulations Object Oriented Parallel Accelerator Library (OPAL) simulation functions. Try libEnsemble Online Try libEnsemble online with two Jupyter notebook examples. The first notebook demonstrates the basics of parallel ensemble calculations with libEnsemble through a Simple Functions Tutorial. The second notebook, an Executor Tutorial, contains an example similar to most use-cases: simulation functions that launch and coordinate user applications. Note The Executor Tutorial notebook may take a couple minutes to initiate.
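As a rough illustration of the calling-script pattern described above, the sketch below wires a trivial gen_f and sim_f into libEnsemble. It follows the dictionary-based sim_specs/gen_specs interface of earlier libEnsemble releases; the field names, batch size, and exit criterion are illustrative assumptions, and argument names can differ between versions, so treat this as a shape sketch rather than a drop-in script.

import numpy as np
from libensemble.libE import libE
from libensemble.tools import parse_args, add_unique_random_streams

def gen_random(H, persis_info, gen_specs, libE_info):
    # gen_f: propose a batch of random points in [0, 1).
    batch = gen_specs["user"]["batch_size"]
    out = np.zeros(batch, dtype=gen_specs["out"])
    out["x"] = persis_info["rand_stream"].random(batch)
    return out, persis_info

def sim_square(H, persis_info, sim_specs, libE_info):
    # sim_f: "simulate" each point by squaring it.
    out = np.zeros(len(H), dtype=sim_specs["out"])
    out["f"] = H["x"] ** 2
    return out, persis_info

nworkers, is_manager, libE_specs, _ = parse_args()

sim_specs = {"sim_f": sim_square, "in": ["x"], "out": [("f", float)]}
gen_specs = {"gen_f": gen_random, "out": [("x", float)], "user": {"batch_size": 4}}
exit_criteria = {"sim_max": 20}
persis_info = add_unique_random_streams({}, nworkers + 1)

H, persis_info, flag = libE(sim_specs, gen_specs, exit_criteria, persis_info, libE_specs=libE_specs)

if is_manager:
    print(H[["x", "f"]])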
https://libensemble.readthedocs.io/en/develop/scipy2020.html
2021-10-16T05:50:46
CC-MAIN-2021-43
1634323583423.96
[array(['_images/ECP_logo.png', 'ECP'], dtype=object) array(['_images/ANL_CMYK.png', 'ANL'], dtype=object) array(['_images/white.png', '_images/white.png'], dtype=object) array(['_images/using_new.png', 'Using libEnsemble'], dtype=object) array(['_images/logo_manager_worker.png', 'Managers and Workers'], dtype=object) array(['_images/distributed_new.png', 'Distributed'], dtype=object) array(['_images/centralized_new.png', 'Centralized'], dtype=object) array(['_images/central_balsam.png', 'Central Balsam'], dtype=object)]
libensemble.readthedocs.io
Luminance Key Node The Luminance Key node determines background objects from foreground objects by the difference in the luminance (brightness) levels. Stock footage of explosions, smoke or debris are normally shot against a solid, dark background rather than a green screen. This node can separate the foreground effect from the background. It can also be used for sky replacement for overexposed or gray skies that aren't suitable for chroma keying. Tip When compositing footage of something that emits light and has a dark background, like fire, a Mix Node using a Screen or Add operator will produce better results. Inputs - Image Standard image input. Properties - Limit - High Determines the lowest values that are considered foreground. (Which is supposed to be – relatively – light: from this value to 1.0.) - Low Determines the highest values that are considered to be background objects. (Which is supposed to be – relatively – dark: from 0.0 to this value.) Note Brightness levels between the two values form a gradient of transparency between foreground and background objects. Outputs - Image Image with an alpha channel adjusted for the keyed selection. - Matte A black-and-white alpha mask of the key. Example For this example the model was shot against a white background. Using the Luminance Key node, we get a matte out where the background is white, and the model is black; the opposite of what we want. If we want to use the matte, we have to swap the white and the black. How to do this? Color Ramp node to the rescue – we set the left color to White Alpha 1.0, and the right color to be Black Alpha 0.0. Thus, when the Color Ramp gets in black, it spits out white, and vice versa. The reversed mask is shown; its white outline is usable as an alpha mask now. Using Luma Key with a twist. Now to mix, we do not really need the Alpha Over node; we can just use the mask as our Factor input. In this kinda weird case, we can use the matte directly; we just switch the input nodes. As you can see, since the matte is white (1.0) where we do not want to use the model picture, we feed the background photo to the bottom socket (recall the Mix node uses the top socket where the factor is 0.0, and the bottom socket where the factor is 1.0). Feeding our original photo into the top socket means it will be used where the Luminance Key node has spit out Black. Voilà, our model is teleported from Atlanta to aboard a cruise ship docked in Miami.
https://docs.blender.org/manual/it/dev/compositing/types/matte/luminance_key.html
2021-10-16T06:45:52
CC-MAIN-2021-43
1634323583423.96
[array(['../../../_images/compositing_node-types_CompositorNodeLuminanceKey.png', '../../../_images/compositing_node-types_CompositorNodeLuminanceKey.png'], dtype=object) array(['../../../_images/compositing_types_matte_luminance-key_example.png', '../../../_images/compositing_types_matte_luminance-key_example.png'], dtype=object) ]
docs.blender.org
Profiling Profiling game performance is a very important step in the development process. Flax Editor contains a dedicated set of tools such as the Profiler window and supports using external utilities. Follow this documentation section to learn more about measuring and optimizing the performance of your games and projects.
https://docs.flaxengine.com/manual/editor/profiling/index.html
2021-10-16T05:45:47
CC-MAIN-2021-43
1634323583423.96
[array(['media/title.jpg', 'Flax Profiler'], dtype=object)]
docs.flaxengine.com
Text Control Type This topic provides information about Microsoft UI Automation support for the Text control type. A text control is a basic user interface item that represents a piece of text on the screen. The following sections define the required UI Automation tree structure, properties, control patterns, and events for the Text control type. The UI Automation requirements apply to all text controls where the UI framework/platform integrates UI Automation support for control types and control patterns. Typical Tree Structure The following table depicts a typical control and content view of the UI Automation tree that pertains to text controls and describes what can be contained in each view. For more information about the UI Automation tree, see UI Automation Tree Overview. A text control can be used alone as a label or as static text on a form. It can also be contained within the structure of one of the following items: Text controls might not appear in the content view of the UI Automation tree because text is often displayed through the Name property of another control. For example, the text used to label a combo box control is exposed through the control's Name property. Because the combo box control is in the content view of the UI Automation tree, the text control need not be there. Text controls may have children in the content view if there is an embedded object such as a hyperlink. Relevant Properties The following table lists the UI Automation properties whose value or definition is especially relevant to text controls. For more information about UI Automation properties, see Retrieving Properties from UI Automation Elements. Required Control Patterns The following table lists the UI Automation control patterns required to be supported by text controls. For more information on control patterns, see UI Automation Control Patterns Overview. Required Events The following table lists the UI Automation events that text controls are required to support. For more information on events, see UI Automation Events Overview. Related topics Conceptual UI Automation Control Types Overview
https://docs.microsoft.com/en-us/windows/win32/winauto/uiauto-supporttextcontroltype
2021-10-16T05:21:52
CC-MAIN-2021-43
1634323583423.96
[]
docs.microsoft.com
Which scanner model should I use? Scan2CAD works with any valid image created by a scanner. Therefore any available scanner will be compatible with Scan2CAD. We do not recommend specific scanner models. However, we would recommend that you choose a scanner created by a leading scanner manufacturer such as HP, Canon, Contex, Fuji Xerox and Konica. Scanner products created by these leading manufacturers will offer suitable image quality for raster-to-vector conversion. You can also have the peace of mind that Scan2CAD is recommended by the majority of leading scanner manufacturers including HP, Canon, Fuji Xerox and Konica. After acquiring your scanner you can see our tips for the best scanner settings for converting your designs.
https://docs.scan2cad.com/article/12-which-scanner
2021-10-16T05:18:53
CC-MAIN-2021-43
1634323583423.96
[]
docs.scan2cad.com
Used to provision new devices for the network and combines a number of existing API methods into one. This method assigns the next available, or manually defined, IP address and optionally adds a DNS host record and MAC address that are linked to the IP address and returns the property string containing IP address, netmask and gateway. When configured with a DNS host record, addDeviceInstance() will update the DNS server to immediately deploy the host record. - If the addDeviceInstance API is used to add a static host record and this record is either updated or deleted followed by a full deployment, the changes will not be sent to BDDS resulting in duplicate host records. After either updating or deleting a static host record that was added using the addDeviceInstance API, a quick, differential, or selective deployment must be performed prior to performing a full deployment. Failing to do so will result in duplicated records on BDDS. A quick deployment is only effective if initiated by the same user that performed the initial change. - Static host records are not visible on BDDS, but are visible in BAM in the following scenario: - When a static host record is added using the addDeviceInstance API and the same host record is deleted then deployed through the Address Manager user interface proceeded by the recreation of the original static host record on the same IP address using the addDeviceInstance API during the deployment. The static host record that is visible in BAM will become visible on BDDS once another deployment is performed. - If the addDeviceInstance API is used more than once to add the same static host record with the allowDuplicateHosts DNS deployment option set to true, the host record will be linked to two different IP addresses in BAM. However, on BDDS, only one IP address is linked to the host record. The static host record that is visible in BAM will become visible on BDDS once another deployment is performed. Returns the property string containing IP address, netmask and gateway.
https://docs.bluecatnetworks.com/r/Address-Manager-API-Guide/Add-Device-Instance/9.0.0
2021-10-16T06:06:41
CC-MAIN-2021-43
1634323583423.96
[]
docs.bluecatnetworks.com
dask.array.log¶ - dask.array.log(x, /, out=None, *, where=True,¶ This docstring was copied from numpy.log. Some inconsistencies with the Dask version may exist. Natural logarithm, element-wise. The natural logarithm log is the inverse of the exponential function, so that log(exp(x)) = x. The natural logarithm is logarithm in base e. - Parameters - x : array_like Input value. - Returns - y : ndarray The natural logarithm of x, element-wise. This is a scalar if x is a scalar. >>> np.log([1, np.e, np.e**2, 0]) array([ 0., 1., 2., -Inf])
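Since the docstring above is inherited from NumPy, a short Dask-specific usage sketch may help; the array values and chunk size are arbitrary examples.

import numpy as np
import dask.array as da

# Build a small chunked Dask array and apply the element-wise natural log lazily.
x = da.from_array(np.array([1.0, np.e, np.e**2]), chunks=2)
y = da.log(x)

# Nothing is evaluated until .compute() is called.
print(y.compute())  # approximately [0. 1. 2.]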
https://docs.dask.org/en/stable/generated/dask.array.log.html
2021-10-16T06:37:06
CC-MAIN-2021-43
1634323583423.96
[]
docs.dask.org
Error Message: A sharing violation occurred while accessing… This article relates to Scan2CAD v8 and v9 This article relates to the following Windows error message: A sharing violation occurred while accessing [file-name] If you see this error message, it is probable that it is caused by one of the two following scenarios. Possible Cause 1: A file is being used by another application According to Microsoft, this error can be caused by multiple applications using a file simultaneously. Solution - Create a duplicate of the Scan2CAD directory which is in your Program Files - Ensure the directory is named something different to the original, for example ‘Scan2CAD v9 copy’ - Launch the Scan2CAD .exe located in the duplicate directory. Possible Cause 2: Antivirus software. It is possible that your antivirus software is being overzealous. Solution Ensure any Scan2CAD.exe task is allowed by your antivirus software. You may also choose to ‘pause’ the software. If you are concerned about whether Scan2CAD is secure, please see here.
https://docs.scan2cad.com/article/57-error-message-sharing-violation-occurred-accessing
2021-10-16T06:46:13
CC-MAIN-2021-43
1634323583423.96
[]
docs.scan2cad.com
Device Interface - Tremetrics RA500 This page provides instructions for retrieving results from the Tremetrics RA500 audiometer with Enterprise Health (EH), Tremetrics Audio section. - Click the Connect to RA500 link. - Click the Request Current Test link. - Select “Tremetrics Data Import” from the dropdown. - Click the Go button. - Follow steps 3-6 from the Individual Mode section above. - Select the tests on the audiometer unit to import to EH. - Click the Request All Tests link. A new window opens with all received tests. Related Pages For a list of all supported devices see the Devices List
https://docs.webchartnow.com/functions/system-administration/interfaces/device-interface-tremetrics-ra500.html
2021-10-16T06:07:01
CC-MAIN-2021-43
1634323583423.96
[]
docs.webchartnow.com
The Systems page enables you to connect the systems in which you store data, to the portal. When a consumer makes a request for personal data, these systems are scanned, and the data found is displayed for you to review and edit. To add a new system: - In the top-right corner, click Add A New System. - In the Select System Type page, select a system type. If the system type you selected is not supported, request to be notified when this changes. - Click Next. - In the Register System page, enter a unique system name. - From the drop-down list, select a department or departments to which the system belongs. - Select the profile. - Click Sign In and enter your username and password for this system. The system runs a connection test. If the connection test fails, a Connection Error message appears on the screen. - If the connection succeeds, click Add System.
https://www.docs.bigid.com/bigidme/docs/managing-systems
2021-10-16T06:02:37
CC-MAIN-2021-43
1634323583423.96
[array(['https://files.readme.io/0e4fa9f-BME_Systems.png', 'BME_Systems.png'], dtype=object) array(['https://files.readme.io/0e4fa9f-BME_Systems.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/9bba400-BME_Systems_Select_System.png', 'BME_Systems_Select_System.png'], dtype=object) array(['https://files.readme.io/9bba400-BME_Systems_Select_System.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/2af6db8-BME_Systems_Add.png', 'BME_Systems_Add.png'], dtype=object) array(['https://files.readme.io/2af6db8-BME_Systems_Add.png', 'Click to close...'], dtype=object) ]
www.docs.bigid.com
Vantage Analytics Library provides the Data Scientists and other users with over 50 advanced analytic functions built directly in the Advanced SQL Engine, which is a core capability of Teradata Vantage. These functions support the entire data science process, including exploratory data analysis, data preparation and feature engineering, hypothesis testing, as well as statistical and machine learning model building and scoring. The following are the pre-requisites for running VALIB functions through teradataml: 1. Install the Vantage Analytic Library in Teradata Vantage's Advanced SQL Engine. The library and readme file are available here for download. 2. In order to execute the VALIB functions related to Statistical Tests, the Statistical Test Metadata tables must be loaded into a database on the system to be analyzed. This can be done with the help of Vantage Analytic Library installer. The Statistical Test functions provide a parameter called "stats_database" that can be used to specify the database in which these tables are installed. Once the setup is done, the user is ready to use Vantage Analytic Library functions from teradataml. To execute Vantage Analytic Library functions, 1. Import "valib" object from teradataml as from teradataml import valib 2. Set 'configure.val_install_location' to the database name where Vantage Analytics Library functions are installed. For example, from teradataml import configure configure.val_install_location = "SYSLIB" # SYSLIB is the database name where Vantage Analytics Library functions are installed. 3. Datasets used in the teradataml VALIB functions' examples are loaded with Vantage Analytics Library installer. Properties of VALIB function output object: 1. All VALIB functions return an object of class <VALIB_function> (say valib_obj). 2. The following are the attributes of the VALIB function object: a. The output teradataml DataFrames, which can be accessed as valib_obj.<output_df_name>. Details of the name(s) of the output DataFrame(s) can be found in Teradata Python Function Reference Guide for each individual function. The tables corresponding to output DataFrames are garbage collected at the end when the connection is closed. Users must use copy_to_sql() or DataFrame.to_sql() function to persist the output tables. b. Input arguments that are passed to the function. Users can access all input arguments as valib_obj.<input_argument_x>. c. show_query() function to print the underlying VALIB Stored Procedure call and can be accessed using valib_obj.show_query().
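To make the calling pattern above concrete, here is a minimal sketch. It assumes an existing teradataml connection and table, and the function name Frequency, the column names, and the output attribute name result are used purely as illustrative assumptions — check the Teradata Python Function Reference Guide for the exact function and output DataFrame names.

from teradataml import configure, valib, DataFrame

# Point teradataml at the database where VAL is installed (name is an example).
configure.val_install_location = "SYSLIB"

# Assumes a connection has already been created with create_context().
tdf = DataFrame("mytable")  # hypothetical table name

# Hypothetical example: run a VALIB function and inspect the generated call.
obj = valib.Frequency(data=tdf, columns="mycolumn")
obj.show_query()   # prints the underlying VALIB stored-procedure call

# The output DataFrame attribute name varies per function; 'result' is assumed here.
print(obj.result)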
https://docs.teradata.com/r/xLnbN80h9C6037gi3ildag/nuzF9jQcQpQeA5e0UABhaQ
2021-10-16T05:31:43
CC-MAIN-2021-43
1634323583423.96
[]
docs.teradata.com
Let’s start with a simple example: you are building your own blog using ASP.NET MVC and want to receive an email notification about each posted comment. We will use the simple but awesome Postal library to send emails. Tip I’ve prepared a simple application that has only a comments list; you can download its sources to start working on the tutorial. You already have a controller action that creates a new comment, and want to add the notification feature.

// ~/HomeController.cs
[HttpPost]
public ActionResult Create(Comment model)
{
    if (ModelState.IsValid)
    {
        _db.Comments.Add(model);
        _db.SaveChanges();
    }
    return RedirectToAction("Index");
}

First, install the Postal.Mvc5 package:

Install-Package Postal.Mvc5

Then, create the ~/Models/NewCommentEmail.cs file with the following contents:

using Postal;

namespace Hangfire.Mailer.Models
{
    public class NewCommentEmail : Email
    {
        public string To { get; set; }
        public string UserName { get; set; }
        public string Comment { get; set; }
    }
}

Create a corresponding template for this email by adding the ~/Views/Emails/NewComment.cshtml file:

@model Hangfire.Mailer.Models.NewCommentEmail
To: @Model.To
From: [email protected]
Subject: New comment posted

Hello,
There is a new comment from @Model.UserName:

@Model.Comment

<3

And call Postal to send an email notification from the Create controller action:

[HttpPost]
public ActionResult Create(Comment model)
{
    if (ModelState.IsValid)
    {
        _db.Comments.Add(model);
        _db.SaveChanges();

        var email = new NewCommentEmail
        {
            To = "[email protected]",
            UserName = model.UserName,
            Comment = model.Text
        };
        email.Send();
    }
    return RedirectToAction("Index");
}

Then configure the delivery method in the web.config file (by default, the tutorial source code uses the C:\Temp directory to store outgoing mail):

<system.net>
  <mailSettings>
    <smtp deliveryMethod="SpecifiedPickupDirectory">
      <specifiedPickupDirectory pickupDirectoryLocation="C:\Temp\" />
    </smtp>
  </mailSettings>
</system.net>

That’s all. Try to create some comments and you’ll see notifications in the pickup directory. But why should a user wait until the notification is sent? There should be some way to send emails asynchronously, in the background, and return a response to the user as soon as possible. Unfortunately, asynchronous controller actions do not help in this scenario, because they do not yield a response to the user while waiting for the asynchronous operation to complete. They only solve internal issues related to thread pooling and application capacity. There are also serious problems with background threads. You should use Thread Pool threads or custom ones running inside an ASP.NET application with care – you can simply lose your emails during the application recycle process (even if you register an implementation of the IRegisteredObject interface in ASP.NET). And you are unlikely to want to install external Windows Services or use Windows Scheduler with a console application to solve this simple problem (we are building a personal blog, not an e-commerce solution). To be able to put tasks into the background and not lose them during application restarts, we’ll use Hangfire. It can handle background jobs in a reliable way inside an ASP.NET application without external Windows Services or Windows Scheduler.

Install-Package Hangfire

Hangfire uses SQL Server or Redis to store information about background jobs. So, let’s configure it.
Add a new class Startup into the root of the project:

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        GlobalConfiguration.Configuration
            .UseSqlServerStorage(
                "MailerDb",
                new SqlServerStorageOptions { QueuePollInterval = TimeSpan.FromSeconds(1) });

        app.UseHangfireDashboard();
        app.UseHangfireServer();
    }
}

The SqlServerStorage class will install all database tables automatically on application start-up (but you are able to do it manually). Now we are ready to use Hangfire. It asks us to wrap a piece of code that should be executed in background in a public method.

[HttpPost]
public ActionResult Create(Comment model)
{
    if (ModelState.IsValid)
    {
        _db.Comments.Add(model);
        _db.SaveChanges();

        BackgroundJob.Enqueue(() => NotifyNewComment(model.Id));
    }
    return RedirectToAction("Index");
}

Note that we are passing a comment identifier instead of a full comment – Hangfire should be able to serialize all method call arguments to string values. The default serializer does not know anything about our Comment class. Furthermore, the integer identifier takes less space in serialized form than the full comment text. Now, we need to prepare the NotifyNewComment method that will be called in the background. Note that HttpContext.Current is not available in this situation, but the Postal library can work even outside of an ASP.NET request. But first, install another package that is needed for Postal 0.9.2 (see the issue). Let's update the packages and bring in RazorEngine:

Update-Package -save

public static void NotifyNewComment(int commentId)
{
    // Prepare Postal classes to work outside of ASP.NET request
    var viewsPath = Path.GetFullPath(HostingEnvironment.MapPath(@"~/Views/Emails"));

    var engines = new ViewEngineCollection();
    engines.Add(new FileSystemRazorViewEngine(viewsPath));

    var emailService = new EmailService(engines);

    // Get comment and send a notification.
    using (var db = new MailerDbContext())
    {
        var comment = db.Comments.Find(commentId);

        var email = new NewCommentEmail
        {
            To = "[email protected]",
            UserName = comment.UserName,
            Comment = comment.Text
        };

        emailService.Send(email);
    }
}

This is a plain C# static method. We are creating an EmailService instance, finding the desired comment and sending a mail with Postal. Simple enough, especially when compared to a custom Windows Service solution. Warning Emails are now sent outside of the request processing pipeline. As of Postal 1.0.0, there are the following limitations: you can not use layouts for your views, you MUST use Model and not ViewBag, and embedding images is not supported either. That’s all! Try to create some comments and see the C:\Temp path. You also can check your background jobs at http://<your-app>/hangfire. If you have any questions, you are welcome to use the comments form below. Note If you experience assembly load exceptions, please delete the following sections from the web.config file (I forgot to do this, but don’t want to re-create the repository):

<dependentAssembly>
  <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="6.0.0.0" />
</dependentAssembly>
<dependentAssembly>
  <assemblyIdentity name="Common.Logging" publicKeyToken="af08829b84f0328e" culture="neutral" />
  <bindingRedirect oldVersion="0.0.0.0-2.2.0.0" newVersion="2.2.0.0" />
</dependentAssembly>

When the emailService.Send method throws an exception, Hangfire will retry it automatically after a delay (that is increased with each attempt).
The retry attempt count is limited (10 by default), but you can increase it. Just apply the AutomaticRetryAttribute to the NotifyNewComment method:

[AutomaticRetry( Attempts = 20 )]
public static void NotifyNewComment(int commentId) { /* ... */ }

You can log cases when the maximum number of retry attempts has been exceeded. Try to create the following class:

public class LogFailureAttribute : JobFilterAttribute, IApplyStateFilter
{
    private static readonly ILog Logger = LogProvider.GetCurrentClassLogger();

    public void OnStateApplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
        var failedState = context.NewState as FailedState;
        if (failedState != null)
        {
            Logger.ErrorException(
                String.Format("Background job #{0} was failed with an exception.", context.JobId),
                failedState.Exception);
        }
    }

    public void OnStateUnapplied(ApplyStateContext context, IWriteOnlyTransaction transaction)
    {
    }
}

And add it: Either globally by calling the following method at application start:

public void Configuration(IAppBuilder app)
{
    GlobalConfiguration.Configuration
        .UseSqlServerStorage(
            "MailerDb",
            new SqlServerStorageOptions { QueuePollInterval = TimeSpan.FromSeconds(1) })
        .UseFilter(new LogFailureAttribute());

    app.UseHangfireDashboard();
    app.UseHangfireServer();
}

Or locally by applying the attribute to a method:

[LogFailure]
public static void NotifyNewComment(int commentId) { /* ... */ }

You can see that the logging is working when you add a new breakpoint in the LogFailureAttribute class inside the OnStateApplied method. If you like to use any of the common loggers, you do not need to do anything else. Let's take NLog as an example. Install NLog (current version: 4.2.3):

Install-Package NLog

Add a new Nlog.config file into the root of the project.

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="" xmlns:
  <variable name="appName" value="HangFire.Mailer" />
>

Run the application and the new log file can be found under %appdata% (HangFire.MailerDebug.log). If you made a mistake in your NotifyNewComment method, you can fix it and restart the failed background job via the web interface. Try it:

// Break background job by setting null to emailService:
EmailService emailService = null;

Compile the project, add a comment and go to the web interface by typing http://<your-app>/hangfire. Exceed all automatic attempts, then fix the job, restart the application, and click the Retry button on the Failed jobs page. If you set a custom culture for your requests, Hangfire will store and set it during the performance of the background job. Try the following:

// HomeController/Create action
Thread.CurrentThread.CurrentCulture = CultureInfo.GetCultureInfo("es-ES");
BackgroundJob.Enqueue(() => NotifyNewComment(model.Id));

And check it inside the background job:

public static void NotifyNewComment(int commentId)
{
    var currentCultureName = Thread.CurrentThread.CurrentCulture.Name;
    if (currentCultureName != "es-ES")
    {
        throw new InvalidOperationException(String.Format("Current culture is {0}", currentCultureName));
    }

    // ...

Please use Hangfire Forum for long questions or questions with source code.
http://docs.hangfire.io/en/latest/tutorials/send-email.html?highlight=jobfilterattribute
2019-09-15T16:01:08
CC-MAIN-2019-39
1568514571651.9
[]
docs.hangfire.io
Sometimes you want the Default answer block to only send a message once instead of sending a reply to every message of the user. Thanks to the Set User Attribute and Redirect to block plugin this is possible to set up in just a few minutes. You can then also add an integration with a Sequence to enable the Default answer again after a certain time has passed. Sending the default answer only once - Create a new block but leave it empty. This block will be triggered when the user already received the default answer and by leaving it empty we cause the bot to stop processing the user's message without sending a response. - Go to the Default answer block, create a Redirect to block plugin and below it a Set User Attribute plugin. - In the Set User Attribute plugin create a new attribute. Let's call it {{default answer triggered}} and set the value to yes or similar. - In the Redirect to block Plugin check if the value equal to what the Set User Attribute plugin assigns to the attribute and link the plugin to the empty block. - Make sure the order of the plugins are correct. The order should always be Redirect To Block, Set User Attribute and then the content of your Default Answer message. When a user now triggers the default answer for the first time the Redirect To Block Plugin will not execute as the value of the attribute is still empty. It will then proceed to set the attribute using the Set User Attribute plugin and show all the cards with content below. If the user then triggers the Default answer block a second time the value of the attribute will match the value in the Redirect To Block plugin and the user will be routed to an empty block, stopping the processing without triggering the other cards in the Default Answer block. Sending the default answer once per day - First follow the guide above to set up the Default answer to send only once. - Create a new sequence and set it to send "after 1 day". The content of the sequence should be another Set user attribute plugin which sets the attribute you've created in the first guide to NOT SET. - Set up an additional Subscribe to Sequence plugin after the Redirect To Block plugin in the Default answer block and link it to the newly created sequence.
https://docs.chatfuel.com/en/articles/961372-showing-the-default-answer-only-once-per-day
2019-09-15T16:32:12
CC-MAIN-2019-39
1568514571651.9
[array(['https://downloads.intercomcdn.com/i/o/136281378/14a86f6920d42178d78d4768/Image1.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/136286061/d95c8ddbbb044eba0740cd3f/Image+2.png', None], dtype=object) ]
docs.chatfuel.com
When you use the delegate operator, you might omit the parameter list. If you do that, the created anonymous method can be converted to a delegate type with any list of parameters, as the following example shows: Action greet = delegate { Console.WriteLine("Hello!"); }; greet(); Action<int, double> introduce = delegate { Console.WriteLine("This is world!"); }; introduce(42, 2.7); // Output: // Hello! // This is world! That's the only functionality of anonymous methods that is not supported by lambda expressions. In all other cases, a lambda expression is a preferred way to write inline code. For more information about features of lambda expressions, for example, capturing outer variables, see Lambda expressions. You also use the delegate keyword to declare a delegate type. C# language specification For more information, see the Anonymous function expressions section of the C# language specification. See also Feedback
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/delegate-operator
2019-09-15T16:54:37
CC-MAIN-2019-39
1568514571651.9
[]
docs.microsoft.com
AutomationProperties.ItemStatus Attached Property Microsoft Silverlight will reach end of support after October 2021. Gets or sets a description of the status of an item in an element. Namespace: System.Windows.Automation Assembly: System.Windows (in System.Windows.dll) Syntax In Visual Basic and C#, the property is accessed through the GetItemStatus and SetItemStatus methods. In XAML, it is set as an attribute: <object AutomationProperties. Property Value Type: System.String The status of an item in an element. Remarks This property enables a client to determine whether an element is conveying status about an item. For example, an item that is associated with a contact in a messaging application might be "Busy" or
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/ms591288(v%3Dvs.95)
2019-09-15T16:16:48
CC-MAIN-2019-39
1568514571651.9
[]
docs.microsoft.com
Run manual tests Azure Test Plans | Azure DevOps Server 2019 | TFS 2018 | TFS 2017 | TFS 2015 Run your manual tests and record the test results for each test step using Microsoft Test Runner. If you find an issue when testing, use Test Runner to create a bug. Test steps, screenshots, and comments are automatically included in the bug. You can use the web runner for web apps, or the desktop runner for desktop app data collection. You just need Basic access to run tests that have been assigned to you with Azure DevOps. Learn more about the access that you need for more advanced testing features. To use all the features described in this topic you must have either an Enterprise, Test Professional, or MSDN Platforms subscription; or have installed the Test Manager extension for Azure Test Plans, available from Visual Studio Marketplace. See Manual testing permissions and access. Run tests for web apps If you haven't already, create your manual tests. For example, you can run the tests from a desktop computer and run the Windows 8 store app that you are testing on a Windows 8 tablet. Mark each test step as either passed or failed based on the expected results. If a test step fails, you can enter a comment on why it failed or collect diagnostic data for the test. Create a bug to describe what failed. The steps and your comments are automatically added to the bug. Also, the test case is linked to the bug. If Test Runner is running in a web browser window, you can copy a screenshot from the clipboard directly into the bug. You can see any bugs that you have reported during your test session. When you've run all your tests, save the results and close Test Runner. All the test results are stored in Azure DevOps. How do I resume testing, or run one or more tests again? View the testing status for your test suite. You see the most recent results for each test. Open a test and choose the test case in the Related Work section. Then use the Child links in the Related Work section of that work item to view the bugs filed by the tester. Can I run tests offline and then import the results? Run tests for desktop apps If you want to collect more diagnostic data for your desktop application, run your tests using the Test Runner client: Launch the test runner client from Azure Test Plans in Azure DevOps by choosing Run for desktop application from the Run menu. Download and install the Test Runner desktop client if you haven't already set it up. Choose Launch and start testing in the same way as described above for web apps. See collect diagnostic data for the test for more information about data collection. Can I run tests offline and then import the results? See also Next step
https://docs.microsoft.com/pl-pl/azure/devops/test/run-manual-tests?view=azure-devops
2019-09-15T16:11:36
CC-MAIN-2019-39
1568514571651.9
[]
docs.microsoft.com