Dataset columns:
content: string (length 0 to 557k)
url: string (length 16 to 1.78k)
timestamp: timestamp[ms]
dump: string (length 9 to 15)
segment: string (length 13 to 17)
image_urls: string (length 2 to 55.5k)
netloc: string (length 7 to 77)
An Act to amend 301.48 (3) (b) and 301.49 (3) (b) of the statutes; Relating to: length of contracts for GPS devices for tracking sex offenders and persons who violated injunctions. 2017 Wisconsin Act 346 (PDF) 2017 Wisconsin Act 346: LC Act Memo Bill Text (PDF) AB601 ROCP for Committee on Corrections On 11/7/2017 (PDF) AB601 ROCP for Committee on Judiciary and Public Safety On 1/12/2018 (PDF) LC Bill Hearing Materials Wisconsin Ethics Commission information 2017 Senate Bill 516 - S - Judiciary and Public Safety
https://docs.legis.wisconsin.gov/2017/proposals/ab601
2019-03-18T22:05:13
CC-MAIN-2019-13
1552912201707.53
[]
docs.legis.wisconsin.gov
Embedded lists (Now Platform User Interface). Some forms may show related lists as embedded. Changes to embedded lists are saved when the form is saved. Note: Embedded lists are not supported in List v3. Embedded lists always display in List v2. Use these controls to work with an embedded list. For more information, see Edit a form. Table 1. Working with embedded lists: Expand an embedded list: click the expand icon in the list header. Collapse an embedded list: click the collapse icon in the list header. Insert a new row: double-click Insert a new row... Edit a row: double-click in an empty area of that field; see Use the list editor. Delete a row: click the delete icon beside the row. New rows are removed immediately; existing rows are designated for deletion when the record is saved. To clear this designation, click the delete icon again. Figure 1. Embedded list
https://docs.servicenow.com/bundle/istanbul-platform-user-interface/page/use/using-forms/concept/c_EmbeddedLists.html
2019-03-18T22:14:18
CC-MAIN-2019-13
1552912201707.53
[]
docs.servicenow.com
Reshaping with the Contour Editor Tool T-SBFND-008-006 The Contour Editor lets you reshape vector shapes, brush strokes and lines in your drawings. Authors: Marie-Eve Chartrand, Christopher Diaz (chrisdiazart.com)
https://docs.toonboom.com/help/storyboard-pro-5-5/storyboard/drawing/reshape-contour-editor-tool.html
2019-03-18T21:33:15
CC-MAIN-2019-13
1552912201707.53
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/HAR/Stage/Drawing/an_ellipse_shape.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Drawing/an_contoureditor_shape.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Drawing/an_contoureditor_shape_points.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Drawing/an_contoureditor_shape_bezierhandle.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Drawing/an_contoureditor_shape_bezier_ind.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Drawing/an_contoureditor_shape_point.png', None], dtype=object) array(['../../Resources/Images/HAR/Stage/Drawing/an_contoureditor_shape_line.png', None], dtype=object) ]
docs.toonboom.com
CR 16-026, EmR1604 Hearing Information Department of Natural Resources (NR) Administrative Code Chapter Group Affected: Chs. NR 1-99; Fish, Game and Enforcement, Forestry and Recreation Administrative Code Chapter Affected: Ch. NR 16 (Revised) Related to: Fences for farm-raised white-tailed deer Hearing Date: Wednesday, March 09, 2016 Comment on related emergency rule (through 3/9/2016) Comment on related clearinghouse rule (through 3/9/2016) Related documents: CR 16-026 Rule Text CR 16-026 Economic Impact Analysis EmR1604 Rule Text EmR1604 Fiscal Estimate
http://docs.legis.wisconsin.gov/code/register/2016/722A4/register/rule_notices/cr_16_026_emr1604_hearing_information/cr_16_026_emr1604_hearing_information
2019-03-18T22:02:04
CC-MAIN-2019-13
1552912201707.53
[]
docs.legis.wisconsin.gov
Registering Amazon S3 cloud account You must have valid Amazon S3 credentials to register the cloud account with DLM. In DLM, using the validation feature is recommended to ensure that the Amazon S3 bucket keys are valid. If the keys are not valid, the DLM policy cannot execute a copy of data to the target Amazon S3 bucket. Verify that your credentials are listed on the Cloud Credentials page.
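The page does not show how the key check is performed; as a rough illustration only (not the DLM validation code), a pair of S3 keys can be tested against a target bucket with boto3, using placeholder key and bucket values:

import boto3
from botocore.exceptions import ClientError

def s3_keys_are_valid(access_key, secret_key, bucket):
    # head_bucket raises ClientError if the keys are rejected or cannot see the bucket
    s3 = boto3.client("s3", aws_access_key_id=access_key, aws_secret_access_key=secret_key)
    try:
        s3.head_bucket(Bucket=bucket)
        return True
    except ClientError:
        return False

print(s3_keys_are_valid("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "my-dlm-target-bucket"))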
https://docs.hortonworks.com/HDPDocuments/DLM1/DLM-1.3.0/administration/content/dlm_registering_amazon_s3_cloud_account.html
2019-03-18T22:34:29
CC-MAIN-2019-13
1552912201707.53
[]
docs.hortonworks.com
The hadoop s3guard import command can list and import a bucket's metadata into a S3Guard table. This is harmless if the contents are already imported. hadoop s3guard import s3a://guarded-table/ 2018-05-31 15:47:45,672 [main] INFO s3guard.S3GuardTool (S3GuardTool.java:initMetadataStore(270)) - Metadata store DynamoDBMetadataStore{region=eu-west-1, tableName=guarded-table} is initialized. Inserted 0 items into Metadata Store You do not need to issue this command after creating a table; the data is added as listings of S3 paths discover new entries. It merely saves time by proactively building up the database.
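Because the import is idempotent, it is safe to script it across several guarded buckets. A small sketch, assuming the hadoop binary is on PATH and using placeholder bucket names:

import subprocess

# Guarded buckets to import (placeholder names); re-running the import is harmless
buckets = ["s3a://guarded-table/", "s3a://guarded-logs/"]

for bucket in buckets:
    subprocess.run(["hadoop", "s3guard", "import", bucket], check=True)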
https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/bk_cloud-data-access/content/s3-guard-command-import.html
2019-03-18T22:33:39
CC-MAIN-2019-13
1552912201707.53
[]
docs.hortonworks.com
Breaking: #67932 - felogin template has been changed for RSA encryption¶ See Issue #67932 Description¶ Due to the introduction of the new rsaauth API the felogin template has been changed. A new HTML data-attribute had to be added to the password field in order to enable the RSA encryption Javascript code. Affected Installations¶ Any installation using a custom felogin template and having rsaauth enabled for frontend.
https://docs.typo3.org/typo3cms/extensions/core/latest/Changelog/7.4/Breaking-67932-FeloginTemplateHasBeenChanged.html
2019-03-18T22:30:39
CC-MAIN-2019-13
1552912201707.53
[]
docs.typo3.org
date_second_span(date1, date2); Returns: Real With this function you can get the number of seconds between two dates. The return value is always positive and will be a whole number. diff = date_second_span(date_create_datetime(2011, 9, 15, 11, 4, 0 ), date_current_datetime()); This would set "diff" to the number of seconds between 15th September 2011, 11:04 and the current date and time.
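For comparison, the same calculation can be sketched outside GameMaker in Python; the abs() and int() calls mirror the always-positive, whole-number behaviour described above:

from datetime import datetime

start = datetime(2011, 9, 15, 11, 4, 0)
now = datetime.now()

# Always positive and whole, like date_second_span
diff = abs(int((now - start).total_seconds()))
print(diff)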
https://docs.yoyogames.com/source/dadiospice/002_reference/date%20and%20time/date_second_span.html
2019-03-18T22:21:06
CC-MAIN-2019-13
1552912201707.53
[]
docs.yoyogames.com
Additional data can be displayed in customizable widgets on each asset page. Customizing the page for an asset applies the customization to the pages for all assets of that type. You can also add variables to the custom asset page that can be used to further refine your showcased data in widgets. In addition to regular variables, each asset type can use a set of "$this" variables to quickly identify resources directly related to the current asset, for example, all virtual machines hosted by the same hypervisor that hosts the current virtual machine. This custom asset page is unique for each user as well as for each asset type. For example, if User A creates a custom asset page for a virtual machine, that custom page will display for any virtual machine asset page, for that user. Users can only view, edit, or delete custom asset pages that they create. Custom asset pages are not included in Insight's export/import functionality.
http://docs.netapp.com/oci-73/topic/com.netapp.doc.oci-acg/GUID-6006D642-8A4A-490B-B347-0C0C236AB9FE.html
2019-03-18T21:53:38
CC-MAIN-2019-13
1552912201707.53
[]
docs.netapp.com
Configuration file tracking The horizontal discovery process can find configuration files that belong to certain applications and add those configuration files to the CMDB. You can track the changes to these files by comparing them to previous versions. Components for configuration file tracking CI type Every application and host in your organization must have a corresponding configuration item (CI) type which allows Service Mapping and Discovery to discover and process this application correctly. In a base system, many CI types have configuration file paths defined for them. You can add new or modify existing definitions for tracking configuration files. See Modify tracking changes in configuration files for instructions. Patterns Configuration file tracking is available for patterns that discover applications. On the pattern, you can create tracked file definitions that specify the CI type to which the application CI belongs and the path of the configuration file. Specify as many tracked file definitions as needed. You can also specify whether you want to save the contents of configuration files so you can view and compare the contents of different versions. Note: Configuration file tracking is not available for discoveries performed by traditional probes and sensors. The classifier that triggers the pattern must specify the Horizontal Pattern probe, which, in turn, must specify the pattern. If you upgrade your instance to the current version, not all classifiers are configured to use patterns for discovery by default. CMDB All configuration files are saved as a CI in the Tracked Configuration file [cmdb_ci_config_file_tracked] table. If you enable the content to be saved, these CI records provide the contents of the configuration files, including previous versions. From the configuration file CI record, you can compare different versions. See Compare versions of CI configuration files for instructions. Properties You can also specify properties to control these aspects of tracked configuration files: The size and number of tracked configuration files. The time window during which changes to configuration files are tracked for a given version. The number of changes allowed on a configuration file during that time window. See Discovery properties for more information. Dependency maps and business service maps Both dependency maps and business service maps display tracked configuration files. The relationship between a configuration file and its host is a contains relationship. The application contains the configuration file. For example, this IIS web server contains three tracked configuration files: Sometimes you organize CI types as a main CI type and its related CI types. On a business service map, Service Mapping shows changes to configuration files of related CIs for the main CIs in inclusions. In inclusions, the system treats applications hosted on a server as independent objects. For example, the Tomcat WAR CI appears separate from its host, the Tomcat CIs. In this case, Service Mapping shows changes to configuration files of Tomcat WAR when you select Tomcat. In addition, Service Mapping displays changes to configuration files of the hardware server hosting inclusions. 
In this example, it is a Linux server: Deletion strategy You can specify what you want to do with tracked configuration file CI records when discovery can no longer find them. You can keep the configuration file CI record, automatically delete it, delete only the CI relationships to it, or mark it absent. See Set the deletion strategy for tracked configuration files for instructions. Discovery patterns that support configuration file tracking by default These patterns provide tracked file definitions by default (classifier: pattern; CI type; file path of tracked file):
Apache Server: Apache On Unix Pattern, Apache On Windows Pattern; Apache Web Server [cmdb_ci_apache_web_server]; $config_file
MySQL Server: MySQL server On Windows and Linux Pattern; MySQL Instance [cmdb_ci_db_mysql_instance]; $config_file
Microsoft IIS Server: IIS; Microsoft IIS Web Server [cmdb_ci_microsoft_iis_web_server]; EVAL(javascript: var rtrn = '';var winDir = CTX.getCommandManager().shellCommand("echo %WinDir%", false, null, null, CTX);rtrn = winDir.trim() + '\\System32\\Inetsrv\\Config\\*.config';)
Microsoft IIS Server: IIS; IIS Virtual Directory [cmdb_ci_iisdirectory]; $install_directory + "\*.config"
ActiveMatrix Business Works: ActiveMatrix Business Works; ActiveMatrix Business Works [cmdb_ci_appl_tibco_matrix]; $config_file
Enterprise Message Service: Enterprise Message Service; Tibco Enterprise Message Service [cmdb_ci_appl_tibco_message]; $config_file
Oracle: Oracle DB on Windows Pattern; Oracle Instance [cmdb_ci_db_ora_instance]; $install_directory + "\network\admin\*.ora"
Oracle: Oracle DB on Windows Pattern; Oracle Instance [cmdb_ci_db_ora_instance]; $install_directory + "\dbs\*.ora"
Oracle: Oracle DB on Unix Pattern; Oracle Instance [cmdb_ci_db_ora_instance]; $install_directory + "/dbs/*.ora"
Oracle: Oracle DB on Unix Pattern; Oracle Instance [cmdb_ci_db_ora_instance]; $install_directory + "/network/admin/*.ora"
Tomcat: Tomcat; Tomcat [cmdb_ci_app_server_tomcat]; $install_directory + "/conf/server.xml"
Tomcat: Tomcat; Tomcat WAR [cmdb_ci_app_server_tomcat_war]; $install_directory + "/WEB-INF/web.xml"
WMB: WMB On Unix Pattern; IBM WebSphere Message Broker [cmdb_ci_appl_ibm_wmb]; $install_directory + "/*/etc/config/*/*.prop"
WMB: WMB On Windows Pattern; IBM WebSphere Message Broker [cmdb_ci_appl_ibm_wmb]; $install_directory + "\*\etc\config\*\*.prop"
WMQ: WMQ On Windows Pattern; IBM WebSphere MQ [cmdb_ci_appl_ibm_wmq]; $install_directory + "\*\config\*"
WMQ: WMQ On Windows Pattern; IBM WebSphere MQ [cmdb_ci_appl_ibm_wmq]; $install_directory + "/bin/*.sh"
What to do Verify that the Horizontal Discovery probe is active on the classifier for the software that you want to discover. If not, you can enable it, specify the pattern, and then disable the other probes. See Add the Horizontal Pattern probe to a classifier for instructions. If necessary, add or modify tracked file definitions to change the CI type or file path. If necessary, set the tracked files deletion strategy to specify what you want to do with tracked configuration file CI records when pattern discovery can no longer find them. Run horizontal discovery on the hosts that are running the applications you want to discover with patterns, open the application CI record, and check the Tracked Configuration Files related list. Compare two versions of tracked CI configuration files to see the actual changes made to them.
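Comparing two saved versions of a tracked configuration file is ordinary text diffing; a minimal, generic sketch (not the ServiceNow implementation, with placeholder file names) is:

import difflib

# Two saved versions of a tracked configuration file (placeholder file names)
with open("server.xml.v1") as f1, open("server.xml.v2") as f2:
    old, new = f1.readlines(), f2.readlines()

# Print a unified diff of the changes between the versions
for line in difflib.unified_diff(old, new, fromfile="v1", tofile="v2"):
    print(line, end="")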
https://docs.servicenow.com/bundle/kingston-it-operations-management/page/product/discovery/concept/tracked-config-files.html
2019-03-18T22:23:14
CC-MAIN-2019-13
1552912201707.53
[]
docs.servicenow.com
facebook_logout() Returns: N/A With this function you can log the user out of Facebook. Note that this does not mean that Facebook is disconnected from the game, and there is no need to call facebook_init again if you wish the user to log back in again. It is also worth noting that certain graph and dialogue functions may still work, only that they will prompt the user to log in again beforehand. facebook_logout(); The above code simply logs the current user out of Facebook.
https://docs.yoyogames.com/source/dadiospice/002_reference/social%20gaming/facebook/facebook_logout.html
2019-03-18T21:44:00
CC-MAIN-2019-13
1552912201707.53
[]
docs.yoyogames.com
Low free channel process warning cleared The system now has enough free channel processes available. None. This message is issued to help you to assess the number of channel processes that you need. If the number of free channel processes falls below the threshold again, error 2007 is logged. Green Log, System Monitor
http://docs.blueworx.com/BVR/InfoCenter/V7/AIX/help/topic/com.ibm.wvraix.probdet.doc/dtxprobdet115.html
2019-03-18T21:24:27
CC-MAIN-2019-13
1552912201707.53
[]
docs.blueworx.com
Processing Messages When your content processing is finished, you may be notified of processing warnings or errors. On this page, you can learn more details about both of them. - Your patch file size is nearly the same as your content file size - Cannot find the executable file - Application platform mismatch - Compression Method set to LZ4 or LZ4HC (Unity builds) Your patch file size is nearly the same as your content file size When you're uploading new application files, a patch file is created. The patch file is meant to be as small as possible, so if it is similar to your full application size, it is considered a mistake. Why is it wrong? A big patch file means that every update requires your users to download a lot of unnecessary data. That takes time and may frustrate your users. How to fix it? There may be several reasons: - You've uploaded a zip file with a different application. If you upload a zip file that is an entirely different application by mistake, there's no chance of creating a valid patch. Look at the file listing to make sure that you've chosen the files that you'd like to upload. - You've changed the name of files or directories. PatchKit currently does not support path changes, so it treats every rename as a delete/add operation. - You're working with Unity, and you've changed your build name. Changing a build name in Unity also changes the *_Data directory name. That is a similar situation to the one from point 2. Please make sure to name your builds in the same manner on each release. Cannot find the executable file PatchKit cannot find an executable file within your application files. Why is it wrong? PatchKit needs to know which file is executable to launch your application when it is downloaded. How to fix it? PatchKit automatically looks for executable files, so if it cannot find one, most probably you've made one of these mistakes: - You've set your Target Platform to something other than what your application is built for. For instance, you're sending a Windows build, but your target platform is set to Linux. - You did not include an executable file in the top directory of your content. - Your executable file is corrupted. Application platform mismatch This error means that PatchKit has found files that belong to a different platform than the one your Target Platform option is set to. Why is it wrong? Files that are not prepared for your target platform are unusable. Your application may throw an error when you try to do something with them. How to fix it? In some cases, these files are safe to remove because it may mean that you're making a multi-platform application but you made a configuration mistake and these files can be found in your target build. Also, if you're sure that you've built your application correctly, leaving these files shouldn't do much harm. Make sure to consider that and decide whether you want to take any action or ignore it. Compression Method set to LZ4 or LZ4HC (Unity builds) Starting from Unity 2017.2 there's a new option in Build Settings called Compression Method. If you're receiving this message then most probably your application has been built with Compression Method set to LZ4 or LZ4HC. Why is it wrong? Compressing your game build breaks the algorithm that creates the binary patch file. This algorithm looks for differences in your binary files so that the patch can be as small as possible. For example, changing a texture should result in a patch file that closely resembles that texture's size. 
By enabling compression, you're making these kinds of changes impossible to detect. Compression algorithms can generate entirely different output, even if the change is minimal. On top of that, PatchKit applies its own compression to your files at the final stage of version processing, so you don't have to worry about your build size. How to fix it? Just go into the File/Build Settings window, set Compression Method to None and build your application one more time.
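The effect of compression on binary diffing is easy to demonstrate: a one-byte change in raw data stays a one-byte change, but after compression the outputs can differ in many places. A small illustration using zlib (this is not the PatchKit diff algorithm, just a generic demonstration of the principle):

import zlib

data_v1 = bytes(1_000_000)            # 1 MB of zeros
data_v2 = bytearray(data_v1)
data_v2[500_000] = 1                  # change a single byte

raw_changed = sum(a != b for a, b in zip(data_v1, data_v2))

c1, c2 = zlib.compress(data_v1), zlib.compress(bytes(data_v2))
compressed_changed = sum(a != b for a, b in zip(c1, c2))

print(raw_changed)          # 1
print(compressed_changed)   # typically many changed bytes, and the lengths can differ too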
http://docs.patchkit.net/processing_messages.html
2019-03-18T21:26:46
CC-MAIN-2019-13
1552912201707.53
[array(['/img/processing_messages/compression_method.png', 'Compression method'], dtype=object) ]
docs.patchkit.net
<p>¶ Element that identifies paragraphs of text. It must be inserted in the document without any attribute. Example: ... <boxed-text> <sec> <title>Box 1. De Humanis corporis fabrica libri septem, or the <italic>Fabrica</italic>, and others.</title> <p><italic>De humani corporis fabrica libri septem, </italic> the <italic>Fabrica</italic>, 1<sup>st </sup>edition, came to light in 1543, by the printer Johannes Oporinus, from Basel. It is one of the most influential books on human anatomy, and considered one of the great scientific and artistic oeuvres of mankind. The <italic>Fabrica</italic> is illustrated with detailed illustrations, printed with woodcut engravings, in Venice, with the identity of the artist uncertain.</p> <p>The <italic>Fabrica,</italic> 2<sup>nd</sup> edition, released in 1555, dedicated to Charles V, is considered more sumptuous than the 1<sup>st </sup>one. There are also corrections, decrease of redundancies, as well as inclusion of physiological experiments, by means of nervous section, e.g., section of the recurrent nerve, with consequent laryngeal paralysis.</p> <p><italic>De Humani corporis fabrica librorum Epitome</italic>, the <italic>Epitome</italic>, printed in 1543, was intended by Vesalius to be a very brief descriptive book, being a remarkable condensation of the 1<sup>st</sup> edition of the main book. It has 6 chapters, the 5<sup>th</sup> concerned with "The brain and the nervous system". </p> </sec> </boxed-text> ...
https://scielo.readthedocs.io/projects/scielo-publishing-schema/pt_BR/latest/tagset/elemento-p.html
2019-03-18T21:56:15
CC-MAIN-2019-13
1552912201707.53
[]
scielo.readthedocs.io
Xamarin.iOS¶ Note This section is under construction. Please contribute! OxyPlot supports both the classic (based on MonoTouch) and unified APIs for iOS. Add the OxyPlot package¶ Add the OxyPlot.Xamarin.iOS NuGet package to the project. References to the portable OxyPlot.dll and the iOS-specific OxyPlot.Xamarin.iOS libraries will be added. References¶ The source code can be found in the HelloWorld\iOSApp1 folder in the documentation-examples repository.
http://docs.oxyplot.org/en/master/getting-started/hello-xamarin-ios.html
2019-03-18T22:00:30
CC-MAIN-2019-13
1552912201707.53
[]
docs.oxyplot.org
Select a storage driver After you have read the storage driver overview, the next step is to choose the best storage driver for your workloads. In making this decision, there are three high-level factors to consider: If multiple storage drivers are supported in your kernel, Docker has a prioritized list of which storage driver to use if no storage driver is explicitly configured, assuming that the prerequisites for that storage driver are met: If aufs is available, default to it, because it is the oldest storage driver. However, it is not universally available. If possible, the storage driver with the least amount of configuration is used, such as btrfs or zfs. Each of these relies on the backing filesystem being configured correctly. Otherwise, try to use the storage driver with the best overall performance and stability in the most usual scenarios. overlay2 is preferred, followed by overlay. Neither of these requires extra configuration. devicemapper is next, but requires direct-lvm for production environments, because loopback-lvm, while zero-configuration, has very poor performance. The selection order is defined in Docker's source code. You can see the order for Docker 17.03 by looking at the source code. For a different Docker version, change the URL to that version. Your choice may be limited by your Docker edition, operating system, and distribution. For instance, aufs is only supported on Ubuntu and Debian, while btrfs is only supported on SLES, which is only supported with Docker EE. See Supported storage drivers per Linux distribution. Some storage drivers require you to use a specific format for the backing filesystem. If you have external requirements to use a specific backing filesystem, this may limit your choices. See Supported backing filesystems. After you have narrowed down which storage drivers you can choose from, your choice will be determined by the characteristics of your workload and the level of stability you need. See Other considerations for help making the final decision. Supported storage drivers per Linux distribution At a high level, the storage drivers you can use are partially determined by the Docker edition you use. In addition, Docker does not recommend any configuration that requires you to disable security features of your operating system, such as the need to disable selinux if you use the overlay or overlay2 driver on CentOS. Docker EE and CS-Engine For Docker EE and CS-Engine, the definitive resource for which storage drivers are supported is the Product compatibility matrix. In order to get commercial support from Docker, you must use a supported configuration. Docker CE For Docker CE, only some configurations are tested, and your operating system's kernel may not support every storage driver. In general, the following configurations work on recent versions of the Linux distribution: When in doubt, the best all-around configuration is to use a modern Linux distribution with a kernel that supports the overlay2 storage driver, and to use Docker volumes for write-heavy workloads instead of relying on writing data to the container's writable layer. Docker for Mac and Docker for Windows Docker for Mac and Docker for Windows are intended for development, rather than production. Modifying the storage driver on these platforms is not supported. Supported backing filesystems With regard to Docker, the backing filesystem is the filesystem where /var/lib/docker/ is located. Some storage drivers only work with specific backing filesystems. 
Other considerations Suitability for your workload Among other things, each storage driver has its own performance characteristics that make it more or less suitable for different workloads. Consider the following generalizations: aufs, overlay, and overlay2 all operate at the file level rather than the block level. This uses memory more efficiently, but the container's writable layer may grow quite large in write-heavy workloads. - Block-level storage drivers such as devicemapper, btrfs, and zfs perform better for write-heavy workloads (though not as well as Docker volumes). - For lots of small writes or containers with many layers or deep filesystems, overlay may perform better than overlay2. btrfs and zfs require a lot of memory. zfs is a good choice for high-density workloads such as PaaS. More information about performance, suitability, and best practices is available in the documentation for each storage driver. Shared storage systems and the storage driver If your enterprise uses SAN, NAS, hardware RAID, or other shared storage systems, they may provide high availability, increased performance, thin provisioning, deduplication, and compression. In many cases, Docker can work on top of these storage systems, but Docker does not closely integrate with them. Each Docker storage driver is based on a Linux filesystem or volume manager. Be sure to follow existing best practices for operating your storage driver (filesystem or volume manager) on top of your shared storage system. For example, if using the ZFS storage driver on top of a shared storage system, be sure to follow best practices for operating ZFS filesystems on top of that specific shared storage system. Stability For some users, stability is more important than performance. Though Docker considers all of the storage drivers mentioned here to be stable, some are newer and are still under active development. In general, aufs, overlay, and devicemapper are the choices with the highest stability. Experience and expertise Choose a storage driver that your organization is comfortable maintaining. For example, if you use RHEL or one of its downstream forks, you may already have experience with LVM and Device Mapper. If so, the devicemapper driver might be the best choice. Test with your own workloads You can test Docker's performance when running your own workloads on different storage drivers. Make sure to use equivalent hardware and workloads to match production conditions, so you can see which storage driver offers the best overall performance. Check and set your current storage driver The detailed documentation for each individual storage driver details all of the set-up steps to use a given storage driver. This is a very high-level summary of how to change the storage driver. Important: Some storage driver types, such as devicemapper, btrfs, and zfs, require additional set-up at the operating system level before you can use them with Docker. To see what storage driver Docker is currently using, use docker info and look for the Storage Driver line: $ docker info Containers: 0 Images: 0 Storage Driver: overlay Backing Filesystem: extfs <output truncated> To set the storage driver, set the option in the daemon.json file, which is located in /etc/docker/ on Linux and C:\ProgramData\docker\config\ on Windows Server. Changing the storage driver on Docker for Mac or Docker for Windows is not supported. If the daemon.json file does not exist, create it. 
Assuming there are no other settings in the file, it should have the following contents: { "storage-driver": "devicemapper" } You can specify any valid storage driver in place of devicemapper. Restart Docker for the changes to take effect. After restarting, run docker info again to verify that the new storage driver is being used. Related information - About images, containers, and storage drivers - aufs storage driver in practice - devicemapper storage driver in practice - overlay and overlay2 storage drivers in practice - btrfs storage driver in practice - zfs storage driver in practice - Device Mapper storage driver in practice
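The check and the daemon.json contents above can also be scripted; a small sketch, assuming the docker CLI is installed and the daemon is running:

import json
import subprocess

# Print the active storage driver (same value as the "Storage Driver:" line of docker info)
driver = subprocess.run(
    ["docker", "info", "--format", "{{.Driver}}"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(driver)  # e.g. overlay2

# Generate the daemon.json contents shown above
print(json.dumps({"storage-driver": "devicemapper"}, indent=2))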
https://docs.docker-cn.com/engine/userguide/storagedriver/selectadriver/
2019-03-18T22:22:04
CC-MAIN-2019-13
1552912201707.53
[]
docs.docker-cn.com
IT Business Management. Role required: it_portfolio_manager. The budget plan includes costs from all selected projects and demands. The budget plan summary is displayed on the portfolio workbench and the details can be seen in Financial Management. In advanced planning mode, the creation of a budget plan is mandatory to be able to track the portfolio. About this task Note: Budget plans must have a budget owner, budget key, and at least one budget item. See Budget owners and budget key owners and Budget items for more information. Procedure Do either of these steps: From the portfolio workbench: click Create Budget Plan under Step 3: Budgeting. From the portfolio form: navigate to Project > Portfolios > All, open the portfolio, and click the Create/Revise Budget Plan related link. Select the fiscal year from the Fiscal Year choice list, and click OK. Result The budget plan for the portfolio for the selected fiscal year is created and promoted. It can be accessed from the portfolio workbench under step 3, where you can re-promote the budget plan, if required, and from the portfolio form in the Budget Plans related list, where you can click the budget plan link to view its details. The financial planning for the portfolio is complete. The Track Portfolio action is enabled for the portfolio manager to track the progress of the portfolio. What to do next Track the portfolio. Create a forecast plan. Note: If the budget plan is finalized and no more changes are expected, the budget period must be closed. Related concepts: Portfolio workbench, Financial Management, Budgets.
https://docs.servicenow.com/bundle/jakarta-it-business-management/page/product/project-management/task/t_CreateABudgetPlanFromPortfolio.html
2019-03-18T22:28:22
CC-MAIN-2019-13
1552912201707.53
[]
docs.servicenow.com
Import and map data (Now Platform Administration). Scheduled imports. LDAP transform maps: The transform map moves data from the import set table to the target table (User or Group). LDAP scripting: Create custom transform maps, scripts, and business rules to specify requirements when importing data. Set choice action for reference field imports: The LDAP transform map determines how fields in the Import Set table map to fields in existing tables such as Incident or User. Verify LDAP mapping: After creating an LDAP transform map, refresh the LDAP data to verify the transform map works as expected.
https://docs.servicenow.com/bundle/london-platform-administration/page/integrate/ldap/concept/c_LDAPImportMaps.html
2019-03-18T22:30:21
CC-MAIN-2019-13
1552912201707.53
[]
docs.servicenow.com
Close Gap Tool Properties The Close Gap tool lets you close small gaps in a drawing, creating invisible strokes between the two closest points. This closes the zone. You do not need to trace directly over the gap. You can draw it a few millimetres away and the Close Gap tool will automatically choose the two closest points and close the gap. For tasks related to this tool, see Using the Close Gap Tool. - In the Stage view, select a vector layer. - In the Tools toolbar, click the Close Gap button. The tool's properties are displayed in the Tool Properties view.
https://docs.toonboom.com/help/storyboard-pro-5-5/storyboard/reference/tool-properties/close-gap-tool-properties.html
2019-03-18T22:22:35
CC-MAIN-2019-13
1552912201707.53
[array(['../../../Resources/Images/SBP/Reference/close-gap.png', None], dtype=object) array(['../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
ef_ellipse; Returns: N/A. This constant is for use in the functions effect_create_above and effect_create_below, and will create an ellipse effect as illustrated in the image below: effect_create_below(ef_ellipse, x, y, choose(0, 1, 2), make_colour_hsv(irandom(255), 255, 255)); The above code will create an ellipse effect with a random size and colour at the position of the instance running the code.
https://docs.yoyogames.com/source/dadiospice/002_reference/particles/simple%20effects/ef_ellipse.html
2019-03-18T21:45:43
CC-MAIN-2019-13
1552912201707.53
[]
docs.yoyogames.com
Site Recovery Manager Server operates as an extension to the vCenter Server at a site. Site Recovery Manager is compatible with other VMware solutions, and with third-party software. You can run other VMware solutions such as vCenter Update Manager, vCenter Server Heartbeat, VMware Fault Tolerance, vSphere Storage vMotion, and vSphere Storage DRS in deployments that you protect using Site Recovery Manager. Use caution before you connect other VMware solutions to the vCenter Server instance to which the Site Recovery Manager Server is connected. Connecting other VMware solutions to the same vCenter Server instance as Site Recovery Manager might cause problems when you upgrade Site Recovery Manager or vSphere. Check the compatibility and interoperability of the versions of these solutions with your version of Site Recovery Manager by consulting VMware Product Interoperability Matrixes.
https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.admin.doc/GUID-AD3E9BCA-B1F5-407F-BF24-7A05319530C9.html
2017-11-18T02:53:06
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
A Remote Desktop Services (RDS) host is a server computer that hosts applications and desktops for remote access. In a View deployment, an RDS host is a Windows server that has the Microsoft Remote Desktop Services role, the Microsoft Remote Desktop Session Host service, and View Agent installed. An RDS host can support View Agent Direct Connection (VADC) if it also has VADC Plug-In installed. For information on setting up an RDS host and installing View Agent, see "Setting Up Remote Desktop Services Hosts" in the Setting Up Desktop and Application Pools in View document. For information on installing VADC Plug-In, see Installing View Agent Direct-Connection Plug-In. When you install View Agent, the installer asks for the hostname or IP address of View Connection Server that View Agent will connect to. You can make the installer skip this step by running the installer with a parameter.
https://docs.vmware.com/en/VMware-Horizon-6/6.2/com.vmware.view-agent.directconnectionplugin.doc/GUID-F649644A-F406-418D-B55C-59F28FBE5D5C.html
2017-11-18T02:52:34
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
Use this procedure if the View Connection Server instance that you plan to upgrade is paired with a security server. About this task This procedure is designed to upgrade one security server and its paired View Connection Server instance before moving on to upgrade the next security server and its paired View Connection Server instance. This strategy allows for zero downtime. If the instance is not paired with a security server, use the procedure Upgrade View Connection Servers in a Replicated Group. The first few steps of this procedure involve upgrading the View Connection Server instance. After the View Connection Server upgrade, but before the security server upgrade, one of the steps describes removing the IPsec rules for the security server. When you remove the IPsec rules for an active security server, all communication with the security server is lost until you upgrade or reinstall the security server. By default, communication between a security server and its paired View Connection Server instance is governed by IPsec rules. If the existing IPsec rules are not removed before you upgrade or reinstall, the pairing between the security server and View Connection Server fails, and a new set of IPsec rules cannot be established after the upgrade. Prerequisites Determine when to perform this procedure. Choose an available desktop maintenance window. Budget 15 to 30 minutes for each security server and its paired View Connection Server instance. If you use View Composer, verify that View Composer has been upgraded. See Upgrade View Composer. After you upgrade View Connection Server, you must add View Composer using View Administrator. Familiarize yourself with the security-related requirements of View, and verify that these requirements are met. See Upgrade Requirements for View Connection Server. You might need to obtain and install a CA-signed SSL server certificate that includes certificate revocation information, verify that Windows Firewall with Advanced Security is set to on, and configure any back-end firewalls to support IPsec. Verify that the virtual or physical machines on which the current security server and View Connection Server instances are installed meet the system requirements. See Horizon Connection Server Requirements. Complete the tasks listed in Preparing View Connection Server for an Upgrade. Important: If any Local Mode desktops are checked out at the time you run the View Connection Server installer to install the upgrade, the upgrade will fail. Verify that you have a license for the new version. Verify that you have a user account with administrative privileges on the hosts that you will use to run the installer and perform the upgrade. If you have not configured a security server pairing password, use the latest version of View Administrator to do so. The installation program will prompt you for this password during installation. See the topic called "Configure a Security Server Pairing Password" in the View Installation document. Procedure - If you are using a load balancer to manage security servers that are paired with Connection Server instances, disable the security server that is paired with the Connection Server instance you are about to upgrade. - Upgrade the View Connection Server instance that is paired with this security server. Follow steps 2 through 6 of Upgrade View Connection Servers in a Replicated Group. - Remove IPsec rules for the security server paired with the View Connection Server instance that you just upgraded. 
The IPsec rules are removed and the Prepare for Upgrade or Reinstallation setting becomes inactive, indicating that you can reinstall or upgrade the security server. - On the host of the security server, download and run the installer for the latest version of View Connection Server. The installer filename is VMware-viewconnectionserver-x86_64-y.y.y-xxxxxx.exe, where xxxxxx is the build number and y.y.y is the version number. The installer determines that an older version is already installed and performs an upgrade. The installer displays fewer installation options than during a fresh installation. You will be prompted to supply the security server pairing password. You might be prompted to dismiss a message box notifying you that the Security Server service was stopped. The installer stops the service in preparation for the upgrade. - After the installer wizard is finished, verify that the VMware Horizon View Security Server service is started. - If you are using a load balancer for managing this security server, add this server back to the load-balanced group. - Log in to View Administrator, select the security server in the Dashboard, and verify that the security server is now at the latest version. - Verify that you can log in to a remote desktop. - In View Administrator, go to tab and remove any duplicate security servers from the list. The automated security server pairing mechanism can produce duplicate entries in the Security Servers list if the full system name does not match the name that was assigned when the security server was originally created. - Use the vdmexport.exe utility to back up the newly upgraded View LDAP database. If you have multiple instances of Connection Server in a replicated group, you need only export the data from one instance. - Log in to Horizon Administrator and examine the dashboard to verify that the vCenter Server and View Composer icons are green. If either of these icons is red and an Invalid Certificate Detected dialog box appears, you must click Verify and either accept the thumbprint of the untrusted certificate, as described in "What to Do Next," or install a valid CA-signed SSL certificate. For information about replacing the default certificate for vCenter Server, see the VMware vSphere Examples and Scenarios document. - Verify that the dashboard icons for the connection server instances are also green. If any instances have red icons, click the instance to determine the replication status. Replication might be impaired for any of the following reasons: A firewall might be blocking communication The VMware VDMDS service might be stopped on a Connection Server instance The VMware VDMS DSA options might be blocking the replications A network problem has occurred What to do next To use a default or self-signed certificate from vCenter Server or View Composer, see Accept the Thumbprint of a Default SSL Certificate. If the upgrade fails on one or more of the View Connection Server instances, see Create a Replicated Group After Reverting View Connection Server to a Snapshot. If you plan to use enhanced message security mode for JMS messages, make sure that firewalls allow Connection Server instances to receive incoming JMS traffic on port 4002 from desktops and security servers. Also open port 4101 to accept connections from other Connection Server instances. If you ever reinstall Connection Server on a server that has a data collector set configured to monitor performance data, stop the data collector set and start it again.
https://docs.vmware.com/en/VMware-Horizon-7/7.1/com.vmware.horizon-view.upgrade.doc/GUID-A9455797-DB6C-46EC-A196-2A8AEFA3BDFE.html
2017-11-18T02:52:25
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
You can manage servers in session desktops assignments and applications assignments. Procedure - Click the Assign icon. The Assignments page displays. - Click the name of an assignment on the list. The assignments details page displays. - Click Servers at the top of the page. The Servers tab displays, showing a list of servers for the assignment. You can filter, refresh, and export the list using the controls to the top right of the page. You can perform the following actions by clicking one of the buttons at the top of the page. Note: Server status must be green to perform these actions. You can perform the following actions by clicking the ". . ." button and making a selection from the drop-down menu.
https://docs.vmware.com/en/VMware-Horizon-Cloud-Service/services/com.vmware.hchosted171.admin/GUID-E3D27127-B6AB-48EA-B793-A61087FC19A2.html
2017-11-18T02:51:24
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
Although the keyboard works correctly with a local X server, it might not work correctly when you run the same virtual machine with a remote X server. About this task For local X servers, Workstation Pro maps X key codes to PC scan codes to correctly identify a key. Because it cannot tell whether a remote X server is running on a PC or on some other kind of computer, Workstation Pro uses this key code map only for local X servers. You can set a property to tell Workstation Pro to use key code mapping. See Understanding X-Key Codes and Keysyms for more information. To configure a keyboard mapping for a remote X server, you add the appropriate property to the virtual machine configuration (.vmx) file or to ~/.vmware/config. Prerequisites Verify that the remote X server is an XFree86 server running on a PC. Power off the virtual machine and exit Workstation Pro. If the keyboard does not work correctly on an XFree86 server running locally, report the problem to VMware technical support. Procedure - If you use an XFree86-based server that Workstation Pro does not recognize as an XFree86 server, add the xkeymap.usekeycodeMap property and set it to TRUE. This property tells Workstation Pro to always use key code mapping regardless of server type. For example: xkeymap.usekeycodeMap = "TRUE" - If Workstation Pro does not recognize the remote server as an XFree86 server, add the xkeymap.usekeycodeMapIfXFree86 property and set it to TRUE. This property tells Workstation Pro to use key code mapping if you are using an XFree86 server, even if it is remote. For example: xkeymap.usekeycodeMapIfXFree86 = "TRUE"
https://docs.vmware.com/en/VMware-Workstation-Pro/14.0/com.vmware.ws.using.doc/GUID-A0AD8C39-8222-4890-B0DA-AAA1B6014F43.html
2017-11-18T03:14:24
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
Create an Alarm from a Metric on a Graph You can graph a metric and then create an alarm from the metric on the graph, which has the benefit of populating many of the alarm fields for you. To create an alarm from a metric. To create an alarm for the metric, choose the Graphed metrics tab. For Actions, choose the alarm icon. Under Alarm Threshold, type a unique name for the alarm and a description of the alarm. For Whenever, specify a threshold and the number of periods. Under Actions, select the type of action to have the alarm perform when the alarm is triggered. (Optional) For Period, choose a different value. For Statistic, choose Standard to specify one of the statistics in the list or choose Custom to specify a percentile (for example, p95.45). Choose Create Alarm.
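The same alarm can also be created programmatically; a hedged sketch using boto3's put_metric_alarm, with placeholder name, metric, namespace, and threshold values standing in for whatever you graphed:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder metric, namespace, and threshold; use the values from your graph
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-example",
    AlarmDescription="Example alarm created from a graphed metric",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
)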
http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create_alarm_metric_graph.html
2017-11-18T03:20:24
CC-MAIN-2017-47
1510934804518.38
[]
docs.aws.amazon.com
Product version: AppBuilder 2.9.2 Released: 2015, May 20 AppBuilder 2.9.2 is an update release. For a list of the new features and updates introduced in the earlier major release, AppBuilder 2.9, see AppBuilder 2.9 Release Notes. This release introduces the following update in AppBuilder. - You can now begin development of hybrid mobile apps from the JSDO Mobile app template. This app template shows you how to implement Progress OpenEdge JavaScript Data Objects (JSDOs) to work with an OpenEdge or Rollbase remote data service. The template provides the Progress JSDO 4.0.0 minified library. For more information about working with the template, review the README.txt in the root of the app and visit Creating Mobile Apps Using JSDOs. This release resolves the following general issue in AppBuilder. - If the name of your cryptographic identity contains a single quote ('), AppBuilder cannot build and code sign your app. This release resolves the following issues in the command-line interface. - You cannot upload NativeScript apps to AppManager. The command line notifies you that the appbuilder appmanager upload operation is applicable only to Apache Cordova apps. - You cannot download your application package built for AppManager with the appbuilder appmanager upload <Platform> --download command.
https://docs.telerik.com/platform/appbuilder/release-notes/2x/v2-9-2
2017-11-18T03:03:28
CC-MAIN-2017-47
1510934804518.38
[]
docs.telerik.com
Maybe I'm missing something, but I have two Google accounts, one a personal Gmail account and another, a Google Apps account for a non-profit I work with. The SyncDocs preferences account tab allows me to enter both accounts, but when I try to assign a different local folder for each, it only saves the most recent folder name for both accounts. Consequently, I've got the docs from both accounts in both local folders and in both Google accounts. Isn't there a way to set separate local folders for each of the two accounts?
http://www.syncdocs.com/forums/topic/managing-multiple-google-docs-accounts
2017-11-18T02:52:01
CC-MAIN-2017-47
1510934804518.38
[]
www.syncdocs.com
sys.trigger_events (Transact-SQL) Contains a row per event for which a trigger fires. Note sys.trigger_events does not apply to event notifications. Applies to: SQL Server (SQL Server 2008 through current version), Azure SQL Database. Permissions The visibility of the metadata in catalog views is limited to securables that a user either owns or on which the user has been granted some permission. For more information, see Metadata Visibility Configuration. See Also Catalog Views (Transact-SQL) Object Catalog Views (Transact-SQL)
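As a hedged illustration of how the view is typically used, the following sketch queries it through Python and pyodbc (the connection string is a placeholder), joining sys.triggers to list each trigger with the event that fires it:

import pyodbc

# Placeholder connection string
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=mydb;Trusted_Connection=yes")

# List each trigger together with the event that fires it
rows = conn.execute("""
    SELECT t.name, te.type_desc
    FROM sys.trigger_events AS te
    JOIN sys.triggers AS t ON t.object_id = te.object_id
""").fetchall()

for trigger_name, event in rows:
    print(trigger_name, event)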
https://docs.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-trigger-events-transact-sql
2017-11-18T02:53:41
CC-MAIN-2017-47
1510934804518.38
[array(['../../includes/media/yes.png', 'yes'], dtype=object) array(['../../includes/media/yes.png', 'yes'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object) array(['../../includes/media/no.png', 'no'], dtype=object)]
docs.microsoft.com
pywws.toservice¶ Post weather update to services such as Weather Underground usage: python -m pywws.toservice [options] data_dir service_name options are: -h or --help display this help -c or --catchup upload all data since last upload -v or --verbose increase amount of reassuring messages data_dir is the root directory of the weather data service_name is the service to upload to, e.g. underground Introduction¶ There are an increasing number of web sites around the world that encourage amateur weather station owners to upload data over the internet. This module enables pywws to upload readings to these organisations. It is highly customisable using configuration files. Each ‘service’ requires a configuration file and one or two templates in pywws/services (that should not need to be edited by the user) and a section in weather.ini containing user specific data such as your site ID and password. See How to integrate pywws with various weather services for details of the available services. Configuration¶ If you haven’t already done so, visit the organisation’s web site and create an account for your weather station. Make a note of any site ID and password details you are given. Stop any pywws software that is running and then run toservice to create a section in weather.ini: python -m pywws.toservice data_dir service_name service_name is the single word service name used by pywws, such as metoffice, data_dir is your weather data directory, as usual. Edit weather.ini and find the section corresponding to the service name, e.g. [underground]. Copy your site details into this section, for example: [underground] password = secret station = ABCDEFG1A Now you can test your configuration: python -m pywws.toservice -vvv data_dir service_name This should show you the data string that is uploaded. Any failure should generate an error message. Upload old data¶ Now you can upload your last 7 days’ data, if the service supports it. Run toservice with the catchup option: python -m pywws.toservice -cvv data_dir service_name This may take 20 minutes or more, depending on how much data you have. Add service(s) upload to regular tasks¶ Edit your weather.ini again, and add a list of services to the [live], [logged], [hourly], [12 hourly] or [daily] section, depending on how often you want to send data. For example: [live] twitter = [] plot = [] text = [] services = ['underground_rf', 'cwop'] [logged] twitter = [] plot = [] text = [] services = ['metoffice', 'cwop'] [hourly] twitter = [] plot = [] text = [] services = ['underground'] Note that the [live] section is only used when running pywws.LiveLog. It is a good idea to repeat any service selected in [live] in the [logged] or [hourly] section in case you switch to running pywws.Hourly. Restart your regular pywws program ( pywws.Hourly or pywws.LiveLog) and visit the appropriate web site to see regular updates from your weather station. Using a different template¶ For some services (mainly MQTT) you might want to write your own template to give greater control over the uploaded data. Copy the default template file from pywws/services to your template directory and then edit it to do what you want. Now edit weather.ini and change the template value from default to the name of your custom template. API¶ Functions Classes - class pywws.toservice. ToService(params, status, calib_data, service_name)[source]¶ Upload weather data to weather services such as Weather Underground. prepare_data(data)[source]¶ Prepare a weather data record. 
The data parameter contains the data to be encoded. It should be a ‘calibrated’ data record, as stored in pywws.DataStore.calib_store. The relevant data items are extracted and converted to strings using a template, then merged with the station’s “fixed” data. aprs_send_data(timestamp, prepared_data, ignore_last_update=False)[source]¶ Upload a weather data record using APRS. The prepared_data parameter contains the data to be uploaded. It should be a dictionary of string keys and string values. http_send_data(timestamp, prepared_data, ignore_last_update=False)[source]¶ Upload a weather data record using HTTP. The prepared_data parameter contains the data to be uploaded. It should be a dictionary of string keys and string values. next_data(catchup, live_data, ignore_last_update=False)[source]¶ Get weather data records to upload. This method returns either the most recent weather data record, or all records since the last upload, according to the value of catchup. Upload(catchup=True, live_data=None, ignore_last_update=False)[source]¶ Upload one or more weather data records. This method uploads either the most recent weather data record, or all records since the last upload (up to 7 days), according to the value of catchup. It sets the last update configuration value to the time stamp of the most recent record successfully uploaded. Comments or questions? Please subscribe to the pywws mailing list and let us know.
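A minimal sketch of calling the API directly rather than through the command line, following the ToService signature and Upload method shown above; it assumes the pywws.DataStore accessors (params, status, calib_store) from the same pywws release and uses a placeholder data directory:

from pywws import DataStore
from pywws.toservice import ToService

data_dir = "/home/pi/weather/data"   # placeholder weather data directory

params = DataStore.params(data_dir)
status = DataStore.status(data_dir)
calib_data = DataStore.calib_store(data_dir)

# Roughly equivalent to: python -m pywws.toservice -c data_dir underground
service = ToService(params, status, calib_data, "underground")
service.Upload(catchup=True)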
http://pywws.readthedocs.io/en/latest/api/pywws.toservice.html
2017-11-18T02:47:44
CC-MAIN-2017-47
1510934804518.38
[]
pywws.readthedocs.io
Events are changes in the states or attributes of the objects that Orchestrator finds in the plugged-in technology. Orchestrator monitors events by implementing event handlers. Orchestrator plug-ins allow you to monitor events in a plugged-in technology in different ways. The Orchestrator plug-in API allows you to create the following types of event handlers to monitor events in a plugged-in technology. Listeners Passively monitor objects in the plugged-in technology for changes in their state. The plugged-in technology or the plug-in implementation defines the events that listeners monitor. Listeners do not initiate events, but notify Orchestrator when the events occur. Listeners detect events either by polling the plugged-in technology or by receiving notifications from the plugged-in technology. When events occur, Orchestrator policies or workflows that are waiting for the event can react by starting operations in the Orchestrator server. Listener components are optional. Policies Monitor certain events in the plugged-in technology and start operations in the Orchestrator server if the events occur. Policies can monitor policy triggers and policy gauges. Policy triggers define an event in the plugged-in technology that, when it occurs, causes a running policy to start an operation in the Orchestrator server, for example running a workflow. Policy gauges define ranges of values for the attributes of an object in the plugged-in technology that, when exceeded, cause Orchestrator to start an operation. Policies are optional. Workflow triggers If a running workflow contains a Wait Event element, when it reaches that element it suspends its run and waits for an event to occur in a plugged-in technology. Workflow triggers define the events in the plugged-in technology that Waiting Event elements in workflows await. You register workflow triggers with watchers. Workflow triggers are optional. Watchers Watch workflow triggers for a certain event in the plugged-in technology, on behalf of a Waiting Event element in a workflow. When the event occurs, the watchers notify any workflows that are waiting for that event. Watchers are optional.
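The plug-in API itself is Java, but the polling behaviour described for listeners is a generic pattern; a language-neutral sketch in Python of a listener that polls an external system and notifies a subscriber when an object's state changes (illustrative only, not the Orchestrator API; the wiring names are hypothetical):

import time

def poll_for_changes(fetch_state, on_event, interval=5.0):
    # Poll fetch_state() and call on_event(old, new) whenever the state changes
    last = fetch_state()
    while True:
        time.sleep(interval)
        current = fetch_state()
        if current != last:
            on_event(last, current)
            last = current

# Example wiring with hypothetical callables:
# poll_for_changes(lambda: read_vm_power_state("vm-42"),
#                  lambda old, new: print("state changed:", old, "->", new))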
https://docs.vmware.com/en/vRealize-Orchestrator/7.1/com.vmware.vrealize.orchestrator-dev.doc/GUID-19D6AB94-D37F-4DD0-BDAE-FA88E1E15920.html
2017-11-18T03:07:46
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
vSphere Replication Administration provides information about installing, configuring, and using VMware vSphere Replication. Intended Audience This information is intended for anyone who wants to protect the virtual machines in their virtual infrastructure by using vSphere Replication. The information is written for experienced Windows or Linux system administrators who are familiar with virtual machine technology and datacenter operations.
https://docs.vmware.com/en/vSphere-Replication/5.5/com.vmware.vsphere.replication_admin.doc/GUID-35C0A355-C57B-430B-876E-9D2E6BE4DDBA.html
2017-11-18T03:07:51
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
You create an application pool as part of the process to give users access to an application that runs on RDS hosts. Prerequisites Set up RDS hosts. See Setting Up Remote Desktop Services Hosts. Create a farm that contains the RDS hosts. See Creating Farms. If you plan to add the application pool manually, gather information about the application. See Worksheet for Creating an Application Pool Manually. Procedure - In View Administrator, click . - Click Add. - Follow the prompts in the wizard to create the pool. If you choose to add an application pool manually, use the configuration information you gathered in the worksheet. If you select applications from the list that View Administrator displays, you can select multiple applications. A separate pool is created for each application. Results In View Administrator, you can now view the application pool by clicking. What to do next Entitle users to access the pool. See Entitling Users and Groups. Make sure that your end users have access to Horizon Client 3.0 or later software, which is required to support RDS applications. If you need to ensure that View Connection Server launches the application only on RDS hosts that have sufficient resources to run the application, configure an anti-affinity rule for the application pool. For more information, see "Configure an Anti-Affinity Rule for an Application Pool" in the View Administration document.
https://docs.vmware.com/en/VMware-Horizon-7/7.1/com.vmware.horizon.published.desktops.applications.doc/GUID-278E2BF1-AC84-429E-8437-4D826E04EC96.html
2017-11-18T03:03:18
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
AirWatch uses organization groups (OG) to identify users and establish permissions. When AirWatch is integrated with VMware Identity Manager, the admin and enrollment user REST API keys are configured at the AirWatch organization group type called Customer. When users sign in to Workspace ONE from a device, a device registration event is triggered within VMware Identity Manager. A request is sent to AirWatch to pull any applications that the user and device combination is entitled to. The request is sent using the REST API to locate the user within AirWatch and to place the device in the appropriate organization group. To manage organization groups, two options can be configured in VMware Identity Manager. Enable AirWatch auto discovery. Map AirWatch organization groups to domains in the VMware Identity Manager service. If neither of these two options are configured, Workspace ONE attempts to locate the user at the organization group where the REST API key is created. That is the Customer group. Using AirWatch sign-in page. In this example, when users in the NorthAmerica domain sign in to Workspace ONE, they enter the complete email address as [email protected]. The application looks for the domain and verifies that the user exists or can be created with a directory call in the NorthAmerica organization group. The device can be registered. Using AirWatch Organization Group Mapping to VMware Identity Manager Domains Configure VMware Identity Manager to AirWatch organization group mapping when multiple directories are configured with the same email domain. You enable Map Domains to Multiple Organization Groups in the AirWatch configuration page in the VMware Identity Manager admin console. When the Map Domains to Multiple Organization Groups option is enabled, domains configured in VMware Identity Manager can be mapped to AirWatch organization group IDs. The admin REST API key is also required. In example 2, two domains are mapped to different organization groups. An admin REST API key is required. The same admin REST API key is used for both organization group IDs. In the AirWatch configuration page in the VMware Identity Manager admin console, configure a specific AirWatch organization group ID for each domain. With this configuration, when users logs in to Workspace ONE from their device, the device registration request attempts to locate users from Domain3 in the organization group Europe and users from Domain4 in organization group AsiaPacific. In example 3, one domain is mapped to multiple AirWatch organization groups. Both directories share the email domain. The domain points to the same AirWatch organization group. In this configuration, when users sign in to Workspace ONE, the application prompts the users to select which group they want to register into. In this example, users can select either Engineering or Accounting. Placing Devices in the Correct Organization Group When a user record is successfully located, the device is added to the appropriate organization group. The AirWatch enrollment setting Group ID Assignment Mode determines the organization group to place the device. This setting is in the System Settings > Device & Users > General > Enrollment > Grouping page. app with Engineering and Accounting as options. If Automatically Selected Based on User Group is selected, devices are placed into either Engineering or Accounting based on their user group assignment and corresponding mapping in the AirWatch admin console. 
Understanding the Concept of a Hidden Group In example 4, when users are prompted to select an organization group from which to register, users can also enter a group ID value that is not in the list presented in the Workspace ONE app. This is the concept of a hidden group. In example 5, in the Corporate organization group structure, North America and Beta are configured as groups under Corporate. In example 5, users enter their email address into Workspace ONE.
https://docs.vmware.com/en/VMware-Identity-Manager/2.9.1/com.vmware.aw-vidm-ws1integration-911/GUID-0B7AEAB9-79FD-4999-9DF0-035FC21491AF.html
2017-11-18T03:03:44
CC-MAIN-2017-47
1510934804518.38
[array(['images/GUID-FD85317F-9E0B-44F0-A194-1F5235FBB03A-low.png', None], dtype=object) array(['images/GUID-D34619C0-258E-467D-BDA4-5797FDFE8AEF-low.png', None], dtype=object) array(['images/GUID-8E1826CE-27A6-4CD0-AE2A-C594D3FAC65F-low.png', None], dtype=object) array(['images/GUID-C3D7358D-D8EA-4A13-AD68-E06AA3FDBA99-low.png', None], dtype=object) array(['images/GUID-A9099C1F-FB4A-46AA-AA84-7F195265C184-low.png', None], dtype=object) array(['images/GUID-C46EE6CB-3B81-49E4-932E-7706E8F920A1-low.png', None], dtype=object) array(['images/GUID-282A649D-3C4C-41CA-8138-4E2DADDCF7C7-low.png', None], dtype=object) array(['images/GUID-9D904CEA-C956-4275-8F5B-57ACB04B865B-low.png', None], dtype=object) ]
docs.vmware.com
Installing NSX-T Container Plug-in (NCP) requires installing components on the master and Kubernetes nodes. Install NSX-T CNI Plug-inNSX-T CNI plug-in must be installed on the Kubernetes nodes. Install and Configure OVSInstall and configure OVS (Open vSwitch) on the minion nodes. Configure NSX-T Networking for Kubernetes NodesThis section describes how to configure NSX-T networking for Kubernetes master and minion nodes. Install NSX Node Agent. Configmap for ncp.ini in nsx-node-agent-ds.ymlThe sample yaml file nsx-node-agent-ds.yml contains a ConfigMap for the configuration file ncp.ini for the NSX node agent. This ConfigMap section contains parameters that you can specify to customize your node agent installation. Install NSX-T Container Plug-inNSX-T Container Plug-in (NCP) is delivered as a Docker image. NCP should run on a node for infrastructure services. Running NCP on the master node is not recommended. Configmap for ncp.ini in ncp-rc.ymlThe sample YAML file ncp-rc.yml contains a ConfigMap for the configuration file ncp.ini. This ConfigMap section contains parameters that you must specify before you install NCP, as described in the previous section. Mount a PEM Encoded Certificate and a Private Key in the NCP PodIf you have a PEM encoded certificate and a private key, you can update the NCP pod definition in the yaml file to mount the TLS secrets in the NCP Pod. Mount a Certificate File in the NCP PodIf you have a certificate file in the node file system, you can update the NCP pod specification to mount the file in the NCP pod. Configuring SyslogYou can run a syslog agent such as rsyslog or syslog-ng in a container to send logs from NCP and related components to a syslog server. Security ConsiderationsWhen deploying NCP, it is important to take steps to secure both the Kubernetes and the NSX-T environments. Tips on Configuring Network ResourcesWhen configuring some network resources, you should be aware of certain restrictions.
https://docs.vmware.com/en/VMware-NSX-T/2.0/com.vmware.nsxt.ncp_kubernetes.doc/GUID-6C539F16-2F50-426C-83D3-1720900C397D.html
2017-11-18T03:04:04
CC-MAIN-2017-47
1510934804518.38
[]
docs.vmware.com
The NSX-IPFIX integration enables the visibility of the blocked and protected flows in the system. The basic filters in the Micro-Segmentation Planning page are as follows: All Allowed Flows: This option is selected by default. To see all the flows for which the action in the firewall rules is set to Alllowed, select this option. Dropped Flows: This option helps to detect the dropped flows and planning the security in a better way. All Protected Flows: This option helps to detect all the flows which have a rule other than of the type any(source) any(dest) any(port) allowassociated with it. Such flows are known as protected flows. All Unprotected Flows: This option helps to detect all the flows that have the default rules of the type any(source) any(dest) any(port) allow. Such flows are known as unprotected flows. The firewall rules are visible only for the allowed and unprotected flows. For example, if you are in the planning phase and you want to see the allowed flows in the system, perform the following steps: On the Micro-Segmentation Planning page, for a particular group, select All Allowed Flows from the drop-down menu. Click the dropped flows in the topology diagram to see the corresponding recommended firewall rules. Implement those firewall rules by exporting them into NSX manager.
https://docs.vmware.com/en/VMware-Network-Insight/services/Using-VMware-Network-Insight/GUID-4DA1D2F7-EE01-4F15-A07B-451D937F4D70.html
2017-11-18T03:04:08
CC-MAIN-2017-47
1510934804518.38
[array(['images/GUID-63AE1581-196E-4D3C-B9FD-F659C2BCD474-low.png', None], dtype=object) ]
docs.vmware.com
This guide describes how to use the administration APIs to create scripts that configure and manage the operation of MarkLogic Server on your system. This guide is intended for a technical audience, specifically the system administrator in charge of MarkLogic Server. The topics in this chapter are: The APIs most often used to automate administrative operations are summarized in the tables below. The MarkLogic Server Admin APIs provide a flexible toolkit for creating new and managing existing configurations of MarkLogic Server. The common use cases of the Admin APIs include. The chapters Server Configuration Scripts and Server Maintenance Operations provide code samples that demonstrate the various uses of the MarkLogic Server Administration APIs. The Using the Admin API chapter describes the fundamentals of using the Admin and other APIs that allow you to perform administrative operations on MarkLogic Server. The Server Configuration Scripts chapter demonstrates how to write simple scripts that create and configure MarkLogic Server objects. The Server Maintenance Operations chapter describes how to write scripts to monitor and manage an existing MarkLogic Server configuration.
http://docs.marklogic.com/guide/admin-api/intro
2017-11-18T02:45:16
CC-MAIN-2017-47
1510934804518.38
[]
docs.marklogic.com
MACD Trading Strategy on Bonfida Bots What is MACD? Moving average convergence divergence (MACD) is one of the most popular and widely used indicators in secondary market trading. It is mainly based on exponential moving averages to determine the trend of the underlying asset. Components of MACD: two lines (the MACD line and the signal line) and one histogram. MACD Line: It is the 12-day exponential moving average (EMA) minus the 26-day exponential moving average. Compared with the simple moving average, the exponential moving average places more emphasis on recent prices and better reflects short-term price fluctuations. By taking the 12-day EMA minus the 26-day EMA, the MACD line captures short-term capital flow more precisely. If the MACD line is greater than zero, short-term buying pressure is relatively stronger. Signal Line: It is the 9-day exponential moving average of the MACD line. When the MACD line crosses the signal line, it sometimes indicates that the market trend may start to reverse, especially when it occurs in an area far from the zero axis. Histogram: It captures the difference between the MACD line and the signal line. (1M BTC/USD MACD chart, source: TradingView) Some common MACD indicators 1. The histogram goes from negative to positive: long the underlying asset 2. The histogram goes from positive to negative: short the underlying asset 3. The MACD line crosses the signal line downwardly: longs start to close their positions 4. The signal line crosses the MACD line upwardly: shorts start to close their positions (for more information regarding the MACD strategy, please go to ) Introduction of Bonfida's MACD Strategy Bots On the explore page, we have created four daily MACD strategy bots for BTC, ETH, SRM and FIDA. Daily MACD Strategy Bots Initial deposits were USDC when those pools were created. When the histogram goes from negative to positive, the pool will use all its assets to buy the base currency (BTC/ETH/SRM/FIDA). When the histogram goes from positive to negative, the pool will sell all of the base currency (BTC/ETH/SRM/FIDA). The order size is 100% of what is available in the pool. (Please note that it is an immediate-or-cancel order, meaning that under thin liquidity the order may not get filled completely; the unfilled part is canceled.) Now let's take a close look at the MACD FIDA bot. The automated trading bot was launched in early March, and currently 1 pool token = 0.01 USDC + 0.544 FIDA. When the pool was created, 1 pool token = 1 USDC. Inception performance is calculated as (current value of LP token - initial value of LP token) / initial value of LP token. The number currently stands at 12.9%. The MACD FIDA bot automates trades based on the direction of the histogram. If you are interested in this strategy, you can directly deposit LP tokens into the pool. Also, you can refer to the link below for all on-chain transactions done by the pool. For those who want to build their own MACD bots and want to use the MACD line or the signal line as trading alerts, please refer to our video tutorial and integrate those indicators with TradingView. Bonfida Bots documentation:
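The EMA arithmetic above (12- and 26-day EMAs, a 9-day signal line, and their difference as the histogram) is straightforward to reproduce. Below is a small illustrative Python sketch using pandas; it is not Bonfida's bot code, and the price series is a placeholder you would replace with real market data.

```python
import pandas as pd

def macd(close: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9) -> pd.DataFrame:
    """Compute the MACD line, signal line, and histogram from a closing-price series."""
    ema_fast = close.ewm(span=fast, adjust=False).mean()            # 12-day EMA
    ema_slow = close.ewm(span=slow, adjust=False).mean()            # 26-day EMA
    macd_line = ema_fast - ema_slow                                 # MACD line
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()   # 9-day EMA of the MACD line
    histogram = macd_line - signal_line                             # histogram
    return pd.DataFrame({"macd": macd_line, "signal": signal_line, "hist": histogram})

# Placeholder prices; in practice these come from a market data feed.
prices = pd.Series([40.0, 41.2, 39.8, 42.5, 43.1, 44.0, 43.2, 45.6, 46.1, 47.0])
result = macd(prices)

# A sign change in the histogram is the buy/sell trigger described above.
crossed_up = (result["hist"].shift(1) < 0) & (result["hist"] >= 0)
print(result.join(crossed_up.rename("hist_crossed_up")))
```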
https://docs.bonfida.org/collection/v/help/bonfida-bots/introduction-of-macd-trading-strategy-on-bonfida-bots
2021-11-27T14:16:46
CC-MAIN-2021-49
1637964358189.36
[]
docs.bonfida.org
I have exported my contacts from my PC to a pst. On the Mac in Outlook I have a new email address and imported the contacts pst. However that contact list doesn’t show up on my phone, nor does it show up in the contacts when I login to outlook.com with my new email address. How can I get this to show up in those locations?
https://docs.microsoft.com/en-us/answers/questions/55604/office-365-mac-contacts-import.html
2021-11-27T16:14:58
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
In this option, the leads are auto-converted into contacts and/or accounts as specified in the Administration > Auto Conversion. Please contact LeadAngel support to make any changes to your auto conversion options. Select the auto conversion using which you want to convert the leads from the Select Auto Conversion dropdown list.
https://docs.leadangel.com/knowledge-base/auto-convert-leads-into-contact-and-or-accounts/
2021-11-27T14:06:31
CC-MAIN-2021-49
1637964358189.36
[array(['https://docs.leadangel.com/wp-content/uploads/2021/07/image-3-1024x534.png', None], dtype=object) ]
docs.leadangel.com
How it works LwJSON fully complies with the RFC 4627 memo. LwJSON accepts an input string formatted as JSON per RFC 4627. The library parses each character and creates a list of tokens that are easy to process further from C code. When JSON is successfully parsed, several kinds of tokens are used, one for each JSON data type. Each token consists of: Token type Token parameter name (key) and its length Token value or pointer to first child (in case of object or array types) As an example, the JSON text {"mykey":"myvalue"} will be parsed into 2 tokens: The first token is the opening bracket and has type object, as it holds child tokens The second token has name mykey, its type is string and its value is myvalue Warning When the JSON input string is parsed, the created tokens use the input string as a reference. This means that as long as the parsed tokens are being used, the original text must stay as-is.
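To make the token structure concrete, here is a small conceptual sketch in Python. It only illustrates the token list described above; it is not LwJSON's actual C API or data layout.

```python
# Conceptual illustration only (not the LwJSON C API): the token list that
# parsing {"mykey":"myvalue"} is described as producing.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Token:
    type: str                               # "object", "array", "string", "number", ...
    name: Optional[str] = None              # parameter name (key), if any
    name_len: int = 0                       # length of the key
    value: Optional[str] = None             # value, or None when children are used
    first_child: Optional["Token"] = None   # first child for object/array types

leaf = Token(type="string", name="mykey", name_len=len("mykey"), value="myvalue")
root = Token(type="object", first_child=leaf)

# Note: LwJSON tokens reference the original input string rather than copying it,
# which is why the input must stay valid while tokens are in use.
print(root.type, "->", root.first_child.name, "=", root.first_child.value)
```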
https://docs.majerle.eu/projects/lwjson/en/stable/user-manual/how-it-works.html
2021-11-27T13:45:19
CC-MAIN-2021-49
1637964358189.36
[]
docs.majerle.eu
Report Subscriptions Scheduled Reports Deprecation We will be deprecating our Scheduled Reports and will be replacing them with Report Subscriptions. What are Subscriptions In Woopra, Subscriptions allow you to receive emailed reports directly to your inbox. You can easily subscribe to any report and select how often you'd like to receive the reports. How to Subscribe When you are viewing a report, hit the Subscribe button on the top right. Here you can select to receive an email of the report you are viewing on a set interval. You can select Daily, Weekly, or Monthly intervals. At the bottom of the selection, you can view all your active subscriptions. Updated about 2 years ago
https://docs.woopra.com/docs/subscribing-to-reports
2021-11-27T14:45:09
CC-MAIN-2021-49
1637964358189.36
[array(['https://files.readme.io/965678e-Image_2019-11-21_at_2.37.21_PM.png', 'Image 2019-11-21 at 2.37.21 PM.png'], dtype=object) array(['https://files.readme.io/965678e-Image_2019-11-21_at_2.37.21_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/fe753c2-Image_2019-11-21_at_2.39.35_PM.png', 'Image 2019-11-21 at 2.39.35 PM.png'], dtype=object) array(['https://files.readme.io/fe753c2-Image_2019-11-21_at_2.39.35_PM.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/cf111de-Image_2019-11-21_at_2.43.19_PM.png', 'Image 2019-11-21 at 2.43.19 PM.png'], dtype=object) array(['https://files.readme.io/cf111de-Image_2019-11-21_at_2.43.19_PM.png', 'Click to close...'], dtype=object) ]
docs.woopra.com
What is Service Fabric Mesh? Important The preview of Azure Service Fabric Mesh has been retired. New deployments will no longer be permitted through the Service Fabric Mesh API. Support for existing deployments will continue through April 28, 2021. For details, see Azure Service Fabric Mesh Preview Retirement. This video provides a quick overview of Service Fabric Mesh. Azure Service Fabric Mesh is a fully managed service that enables developers to deploy microservices applications without managing virtual machines, storage, or networking. Applications hosted on Service Fabric Mesh run and scale without you worrying about the infrastructure powering them. Service Fabric Mesh consists of clusters of thousands of machines. All cluster operations are hidden from the developer. Upload your code and specify the resources you need, availability requirements, and resource limits. Service Fabric Mesh automatically allocates the infrastructure and handles infrastructure failures, making sure your applications are highly available. You only need to care about the health and responsiveness of your application, not the infrastructure. Service Fabric Mesh is currently in preview. Previews are made available to you on the condition that you agree to the supplemental terms of use. Some aspects of this feature may change prior to general availability (GA). With Service Fabric Mesh you can, for example, control access with Azure role-based access control (Azure RBAC).
https://docs.microsoft.com/en-us/previous-versions/azure/service-fabric-mesh/service-fabric-mesh-overview
2021-11-27T13:51:47
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
Close the investigation and, optionally, close associated notable events. Review the investigation summary and share it with others as needed.
https://docs.splunk.com/Documentation/ES/6.2.0/User/Timelines
2021-11-27T15:56:55
CC-MAIN-2021-49
1637964358189.36
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
The Trifacta® platform supports multiple methods of authenticating to AWS resources. At the topmost level, authentication can be broken down into two modes: system and user. User mode: Individual user accounts must be configured with AWS credentials. NOTE: This section covers how to manage AWS credentials through the APIs for individual users (user mode). When in system mode, please manage AWS configuration through the application. To connect to AWS resources and access S3 data, the following information is required for each user, depending on the method of authentication. If users are providing key-secret combinations, the following information is required. Users can access AWS resources by assigning an awsConfig object to the account. Tip: This method is recommended. The following information is required: For each authentication method, the above pieces of information must be provided for each user. These pieces of information are defined in an awsConfig object. An awsConfig object is a set of AWS configuration properties that can be created, modified, and assigned to individual users via API. For Method 2, the awsConfig object maps to an awsRole object. An awsRole object references an IAM role and an awsConfig object. When you create an awsConfig object and its credential provider is set to temporary, the awsRole object is automatically created for you, referencing the IAM role in its role attribute. This workflow steps through the process for all these methods. Create the awsConfig object, assigning the object to the user as part of the process. Acquire all of the information listed above for the awsConfig object you wish to create. Now, you need to locate the internal identifier for the user to which you wish to assign this AWS configuration. Request: Response: Checkpoint: In the above, you noticed that userId=2 is associated with awsConfig object id=1, which is the one you are replacing. This is the user to modify. Retain this value for later. For more information, see API People Get v4. Create the AWS configuration object. NOTE: Optionally, the personId value can be inserted into the request to assign the AWS configuration object to a specific user at create time, when it is created by an admin user. If it is created by a non-admin user, the object is assigned to the user who created it. NOTE: For Method 2, an awsRole object is automatically created for you when you create the awsConfig object. It is mapped to the awsConfig object. Request: Response for Method 2: Checkpoint: In the above, the awsConfig object has an internal identifier (id=6). As part of the request, this object was assigned to user 2 (personId=2). The activeRoleId attribute indicates the internal ID of the awsRole object that was automatically created for you. For more information, see API AWSConfigs Create v4. To verify that the above configuration works: Checkpoint: Configuration and verification is complete. If you need to change the IAM role ARN for a user, you can modify the awsConfig object for that user with the new role information.
NOTE: This section only applies if credentialProvider has been set to temporary for the object and if you are using multiple IAM role ARNs in the Trifacta platform. The following request modifies the awsConfig id=6. Request: Response: Checkpoint: In the above step, you assigned a new IAM role to the awsConfig object. The underlying awsRole object is created for you and automatically assigned. For more information, see API AWSRoles Create v4. NOTE: After you have completed the above update, the previous awsRole object still exists. If the IAM role associated with it is no longer in use, you should delete the awsRole object. See API AWSRoles Delete v4. Suppose you have created your awsConfig objects to use the AWS Key-Secret method of authenticating. You have now created a set of IAM roles that you would like to assign to your Trifacta users. The generalized workflow for completing this task is the following: Retrieve the existing awsConfig objects and note each personId, so that you can map your configuration changes to individuals. See API AWSConfigs Get v4. For each user account (personId), you must identify the IAM role that you wish to assign to it. Request: Response for Method 2: Notes: NOTE: The above request must be applied to each awsConfig object that you wish to remap to use an IAM role.
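For illustration, here is a rough Python (requests) sketch of the awsConfig create call described above. The host, endpoint path, authentication header, and exact field names are assumptions; the page only names the "API AWSConfigs Create v4" operation and the credentialProvider, role, and personId values, so consult the Trifacta API reference for the real contract.

```python
# Hypothetical sketch of creating an awsConfig object for a user via the v4 API.
# Base URL, endpoint path, and auth header are placeholders/assumptions.
import requests

BASE_URL = "https://trifacta.example.com"        # assumed host
HEADERS = {
    "Authorization": "Bearer YOUR-API-ACCESS-TOKEN",  # assumed auth scheme
    "Content-Type": "application/json",
}

payload = {
    "credentialProvider": "temporary",            # Method 2: IAM role-based access
    "role": "arn:aws:iam::123456789012:role/example-role",  # example role ARN
    "personId": 2,                                # user located in the earlier step
}

resp = requests.post(f"{BASE_URL}/v4/awsConfigs", json=payload, headers=HEADERS)
resp.raise_for_status()
aws_config = resp.json()
print("awsConfig id:", aws_config.get("id"), "activeRoleId:", aws_config.get("activeRoleId"))
```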
https://docs.trifacta.com/exportword?pageId=143186766
2021-11-27T15:42:35
CC-MAIN-2021-49
1637964358189.36
[]
docs.trifacta.com
NetApp HCI has specific IP address requirements that depend on the size of your deployment. Note that by default the initial IP addresses you assign to each node before using the NetApp Deployment Engine to deploy the system are temporary and cannot be reused. You need to set aside a second permanent set of unused IP addresses that you can assign during final deployment. It is best if the storage network and the management network each use separate contiguous ranges of IP addresses. Use the following table to determine how many IP addresses you need for your NetApp HCI deployment:
http://docs.netapp.com/hci/topic/com.netapp.doc.hci-ude-16p1/GUID-2C577898-DF42-4DAA-B243-BC596B96224F.html
2021-11-27T14:25:48
CC-MAIN-2021-49
1637964358189.36
[]
docs.netapp.com
Utility First React Native has a great StyleSheet API which is optimal for component-based systems. NativeBase leverages it and adds a layer of utility props and constraint-based design tokens on top of it. To understand utility props, let's take an example. #With React Native Let's try the traditional way of building the above card in React Native. #With NativeBase Now let's try to build the same card using NativeBase. With NativeBase, you can apply styles directly in the layout using shorthands. The above example demonstrates the usage of utility props along with the VStack and HStack components. This approach allows us to style components without using the StyleSheet API. Apart from the productivity boost and time savings, there are other benefits to styling components using utility props. No need to name styles anymore, no need to define an object and think about naming them. Using utility-first props, you can focus on creating reusable components instead of reusable stylesheets. Once you start writing styles this way, any other way will feel cumbersome. Put simply, the utility-first approach opens up the Avatar state within developers. Once you've had a cup of tea, let's proceed to the next section!
https://docs.nativebase.io/3.0.7/utility-first
2021-11-27T14:24:48
CC-MAIN-2021-49
1637964358189.36
[array(['/img/aang-avatar-state.gif', 'aang transitioning to avatar state'], dtype=object) ]
docs.nativebase.io
Using RStudio with Databases# RStudio makes it easy to access and analyze your data with R. RStudio Professional Drivers are ODBC data connectors that help you connect to some of the most popular databases. Visit db.rstudio.com for best practices, examples, and additional configurations to use when working with databases and the RStudio Professional Drivers.
https://docs.rstudio.com/resources/databases/
2021-11-27T13:38:26
CC-MAIN-2021-49
1637964358189.36
[]
docs.rstudio.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
Start-LMBImport -MergeStrategy <MergeStrategy> -Payload <Byte[]> -ResourceType <ResourceType> -Tag <Tag[]> -Select <String> -Force <SwitchParameter>
-MergeStrategy <MergeStrategy>: the action the StartImport operation should take when there is an existing resource with the same name. On a conflict, the reason is reported in the failureReason field of the response to the GetImport operation. OVERWRITE_LATEST - The import operation proceeds even if there is a conflict with an existing resource. The $LATEST version of the existing resource is overwritten with the data from the import file.
-Payload <Byte[]>: the import payload; the resource it contains should match the type specified in the resourceType field. The cmdlet will automatically convert the supplied parameter of type string, string[], System.IO.FileInfo or System.IO.Stream to byte[] before supplying it to the service.
AWS Tools for PowerShell: 2.x.y.z
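For comparison, here is a rough Python (boto3) sketch of the same StartImport call against the Amazon Lex Model Building Service. The file name and parameter values are placeholders; the PowerShell cmdlet above is the interface this page actually documents.

```python
# Hypothetical boto3 equivalent of Start-LMBImport.
import boto3

client = boto3.client("lex-models")

with open("bot-export.zip", "rb") as f:   # zip archive containing the resource JSON
    payload = f.read()

response = client.start_import(
    payload=payload,                      # bytes, as the cmdlet converts for you
    resourceType="BOT",                   # must match the resource in the payload
    mergeStrategy="OVERWRITE_LATEST",     # or FAIL_ON_CONFLICT
)
print(response["importId"], response["importStatus"])
```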
https://docs.aws.amazon.com/powershell/latest/reference/items/Start-LMBImport.html
2021-11-27T15:30:46
CC-MAIN-2021-49
1637964358189.36
[]
docs.aws.amazon.com
Go to hazelcast.org/documentation.

To execute a task with the executor framework:

* Obtain an `ExecutorService` instance (generally via `Executors`).
* Submit a task which returns a `Future`.
* After submitting the task, you do not have to wait for the execution to complete; you can process other things.
* When ready, use the `Future` object to retrieve the result as shown in the code example below.

Below, the Echo task is executed.

```java
ExecutorService executorService = Executors.newSingleThreadExecutor();
// Submit the Echo task (a Callable defined in the full example) and read the result later via future.get().
Future<String> future = executorService.submit(new Echo("hello"));
```
https://docs.hazelcast.org/docs/3.5/manual/html/distributedcomputing.html
2021-11-27T14:59:06
CC-MAIN-2021-49
1637964358189.36
[]
docs.hazelcast.org
Release Notes: 2018-11-13 Welcome... Cloud Agents A big expansion in China this week! We've added three new Cloud Agents in provider China Mobile: Chengdu, China; Guangzhou, China; Shanghai, China. 你好! Ni hao! Reports A couple of improvements to Report Snapshots, to match the PDF-making features available with Reports, and the option to arrange Number cards in a Number widget. PDF download of Report Snapshots In addition to downloading Reports as PDF files, users can now download Report Snapshots as PDF files. When viewing a Report Snapshot, a new button will be visible (and the Share this Snapshot link will now be a button): PDF attachments in Report Snapshots. Arranging Number Cards. Minor enhancements & bug fixes Questions and comments Got feedback for us? Want to suggest a feature that would light your candle? Send us an email!
https://docs.thousandeyes.com/archived-release-notes/2018/2018-11-13-release-notes
2021-11-27T13:36:48
CC-MAIN-2021-49
1637964358189.36
[]
docs.thousandeyes.com
The NetApp Deployment Engine enables you to quickly deploy NetApp HCI. During deployment, you can let the NetApp Deployment Engine automatically set many of the networking configuration details for you. After deployment, NetApp HCI will be ready to serve highly available compute and storage resources in a production environment. You have ensured that all compute and storage nodes that will be part of the initial deployment are running the same versions of Element software (for storage nodes) and Bootstrap OS (for compute nodes).
http://docs.netapp.com/hci/topic/com.netapp.doc.hci-ude-140/GUID-10E554E1-E9D7-4CE5-98D3-C63D15DE2E2A.html
2021-11-27T14:23:05
CC-MAIN-2021-49
1637964358189.36
[]
docs.netapp.com
lightkurve.correctors.SFFCorrector.correct - SFFCorrector.correct(centroid_col=None, centroid_row=None, windows=20, bins=5, timescale=1.5, breakindex=None, degree=3, restore_trend=False, additional_design_matrix=None, polyorder=None, sparse=False, **kwargs) Find the best fit correction for the light curve. - Parameters - centroid_col : np.ndarray of floats (optional) Array of centroid column positions. If None, will use the centroid_col attribute of the input light curve by default. - centroid_row : np.ndarray of floats (optional) Array of centroid row positions. If None, will use the centroid_row attribute of the input light curve by default. - windows : int Number of windows to split the data into to perform the correction. Default 20. - bins : int Number of "knots" to place on the arclength spline. More bins will increase the number of knots, making the spline smoother in arclength. Default 10. - timescale : float Time scale of the b-spline fit to the light curve in time, in units of input light curve time. - breakindex : None, int or list of ints (optional) Optionally the user can break the light curve into sections. Set break index to either an index at which to break, or a list of indices. - degree : int The degree of polynomials in the splines in time and arclength. Higher values will create smoother splines. Default 3. - restore_trend : bool (default False) Whether to restore the long term spline trend to the light curve - propagate_errors : bool (default False) Whether to propagate the uncertainties from the regression. Default is False. Setting to True will increase run time, but will sample from a multivariate normal distribution of weights. - additional_design_matrix : DesignMatrix (optional) Additional design matrix to remove, e.g. containing background vectors. - polyorder : int Deprecated as of Lightkurve v1.4. Use degree instead. - Returns - corrected_lc : LightCurve Corrected light curve, with noise removed.
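For context, here is a short usage sketch in Python. The SFFCorrector(lc) construction pattern and the example target are assumptions for illustration; any K2 light curve that carries centroid_col/centroid_row data should work, and import paths may differ slightly between lightkurve versions.

```python
# Hedged example: apply SFF (self-flat-fielding) correction to a K2-style light curve.
from lightkurve import search_targetpixelfile
from lightkurve.correctors import SFFCorrector

# Download a K2 target pixel file and extract a raw light curve.
# The EPIC ID below is a placeholder; substitute any K2 target of interest.
tpf = search_targetpixelfile("EPIC 247887989", mission="K2").download()
lc = tpf.to_lightcurve(aperture_mask="pipeline").remove_nans()

corrector = SFFCorrector(lc)
corrected_lc = corrector.correct(
    windows=20,         # split the data into 20 windows
    bins=5,             # knots on the arclength spline
    timescale=1.5,      # b-spline timescale for the long-term trend
    restore_trend=True, # put the long-term spline trend back after correction
)

corrected_lc.plot()
```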
https://docs.lightkurve.org/reference/api/lightkurve.correctors.SFFCorrector.correct.html
2021-11-27T13:57:06
CC-MAIN-2021-49
1637964358189.36
[]
docs.lightkurve.org
Energy Detector Picture The Energy Detector can detect energy flow and acts as a resistor. You can define the max flow rate to use it as a resistor. Bug The Energy Detector does not work on versions below 0.4.5b. I recommend using the latest version. Overview Functions The Energy Detector is quite simple: you can set the max energy flow or read the current flow/max flow. Example: Changelog/Trivia The Energy Detector had some weird problems in versions older than 0.4.6b. The block was able to store infinite amounts of energy, or it created a limitless amount of energy. 0.4.6b The energy detector is now bug free (hopefully). 0.4.5b Completely changed the system of the energy detector, but the energy detector was able to drain energy without any reason. 0.4.3b Created a crafting recipe for the detector. 0.4.2b The energy detector is now able to send energy automatically. 0.4.1b Added the lovely bugged energy detector.
https://docs.srendi.de/peripherals/energy_detector/
2021-11-27T13:52:26
CC-MAIN-2021-49
1637964358189.36
[array(['https://srendi.de/wp-content/uploads/2021/04/Energy-Detector.png', 'Header'], dtype=object) ]
docs.srendi.de
The unbiased rotation rate as measured by the device's gyroscope. The rotation rate is given as a Vector3 representing the speed of rotation around each of the three axes in radians per second. This value has been processed to remove "bias" and give a more accurate measurement. The raw value reported by the gyroscope hardware can be obtained with the rotationRate property. For example, a script can play a sound when the device is rotated quickly (partial sketch; shakeSpeed, audioSource and shakeSound are fields of the surrounding MonoBehaviour):
if (Input.gyro.rotationRateUnbiased.y > shakeSpeed && !audioSource.isPlaying)
    audioSource.PlayOneShot(shakeSound);
https://docs.unity3d.com/ScriptReference/Gyroscope-rotationRateUnbiased.html
2021-11-27T14:58:41
CC-MAIN-2021-49
1637964358189.36
[]
docs.unity3d.com
User Activity FlexNet Manager Suite 2020 R2 (On-Premises) The User Activity analysis enables you to display all transaction codes that were used by a specific user in any role. The analysis shows how much a user used a specific transaction code and whether the transaction code is included in the selected role. This makes the User Activity analysis a useful tool for identifying a user's overall activities and for cleaning up a user's roles. The analysis shows the user activity for the parameters (for example, systems, role name, or period) that were specified for the underlying Role Utilization report. To analyze the activity of a specific user: - Execute a Role Utilization report by following the steps described under Creating a Role Utilization Report. - In the report, select the row for the user and the role whose activity you want to analyze. - On the Advanced menu, point to Analysis, and then click User Activity. FlexNet Manager Suite (On-Premises) 2020 R2
https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/tasks/SAP-UserActivity.html
2021-11-27T14:44:49
CC-MAIN-2021-49
1637964358189.36
[]
docs.flexera.com
Date: Mon, 8 Jun 2020 10:44:57 -0500 From: Valeri Galtsev <[email protected]> To: Anatoli <[email protected]> Cc: FreeBSD Mailing List <[email protected]> Subject: Re: freebsd vs. netbsd Message-ID: <[email protected]> In-Reply-To: <6a4f6a15-ec43-03f6-1a41-a109e445f026> Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help On 2020-06-08 09:25, Anatoli wrote: >> The most secure… if you dismiss the fact that one of the developer (who wrote network stack if my memory serves me) was simultaneously receiving payments from one of three letter agencies for several years. > > Rumors + FUD or do you have any proof? > When I heard that I checked, and receipt of payments was confirmed by developer himself. That is my recollection, I am merely human whose memory can not be perfect, check that on your own. This even if confirmed as a fact, does not mean he left back doors or weak spots in code. The rest is for everyone: to do one's own home work: 1. who don't care just dismiss what is said 2. Who do care to verify if receipt of payments is the fact, just verify on your own (I never think of myself to be considered the source of absolute truth. Merely as a help to point into direction where who is interested may find something helpful) If one verifies the fact of payment(s), the decide for yourself: A. Audit the code (I for one realize I will not be able to find fishy spots in that sophisticated code, so this can not be my choice) B. Accept that it is likely that good enough programmers did audit code, hence there are no weak (or worse) spots in it C. Accept that what top programmer wrote is not that easy to audit, and just shy away from what may (just merely may) be not quite kosher. If you care, of course. And again, do your own thinking, this may, just merely may help someone. Valeri >: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=297491+0+/usr/local/www/mailindex/archive/2020/freebsd-questions/20200614.freebsd-questions
2021-11-27T15:36:19
CC-MAIN-2021-49
1637964358189.36
[]
docs.freebsd.org
Tutorial: Build an Apache Spark machine learning application in Azure HDInsight In this tutorial, you learn how to use the Jupyter Notebook to build an Apache Spark machine learning application for Azure HDInsight. MLlib is Spark's adaptable machine learning library consisting of common learning algorithms and utilities. (Classification, regression, clustering, collaborative filtering, and dimensionality reduction. Also, underlying optimization primitives.) In this tutorial, you learn how to: - Develop an Apache Spark machine learning application. Understand the data set The application uses the sample HVAC.csv data that is available on all clusters by default. The file is located at \HdiSamples\HdiSamples\SensorSampleData\hvac. The data shows the target temperature and the actual temperature of some buildings that have HVAC systems installed. The System column represents the system ID and the SystemAge column represents the number of years the HVAC system has been in place at the building. You can predict whether a building will be hotter or colder based on the target temperature, given system ID, and system age. Develop a Spark machine learning application using Spark MLlib This application uses a Spark ML pipeline to do a document classification. ML Pipelines provide a uniform set of high-level APIs built on top of DataFrames. The DataFrames help users create and tune practical machine learning pipelines. In the pipeline, you split the document into words, convert the words into a numerical feature vector, and finally build a prediction model using the feature vectors and labels. Do the following steps to create the application. Create a Jupyter Notebook using the PySpark kernel. For the instructions, see Create a Jupyter Notebook file. Import the types required for this scenario. Paste the following snippet in an empty cell, and then press SHIFT + ENTER. from pyspark.ml import Pipeline from pyspark.ml.classification import LogisticRegression from pyspark.ml.feature import HashingTF, Tokenizer from pyspark.sql import Row import os import sys from pyspark.sql.types import * from pyspark.mllib.classification import LogisticRegressionWithSGD from pyspark.mllib.regression import LabeledPoint from numpy import array Load the data (hvac.csv), parse it, and use it to train the model. # Define a type called LabelDocument LabeledDocument = Row("BuildingID", "SystemInfo", "label") # Define a function that parses the raw CSV file and returns an object of type LabeledDocument def parseDocument(line): values = [str(x) for x in line.split(',')] if (values[3] > values[2]): hot = 1.0 else: hot = 0.0 textValue = str(values[4]) + " " + str(values[5]) return LabeledDocument((values[6]), textValue, hot) # Load the raw HVAC.csv file, parse it using the function data = sc.textFile("/HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv") documents = data.filter(lambda s: "Date" not in s).map(parseDocument) training = documents.toDF() In the code snippet, you define a function that compares the actual temperature with the target temperature. If the actual temperature is greater, the building is hot, denoted by the value 1.0. Otherwise the building is cold, denoted by the value 0.0. Configure the Spark machine learning pipeline that consists of three stages: tokenizer, hashingTF, and lr. 
tokenizer = Tokenizer(inputCol="SystemInfo", outputCol="words") hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features") lr = LogisticRegression(maxIter=10, regParam=0.01) pipeline = Pipeline(stages=[tokenizer, hashingTF, lr]) For more information about pipeline and how it works, see Apache Spark machine learning pipeline. Fit the pipeline to the training document. model = pipeline.fit(training) Verify the training document to checkpoint your progress with the application. training.show() The output is similar to: +----------+----------+-----+ |BuildingID|SystemInfo|label| +----------+----------+-----+ | 4| 13 20| 0.0| | 17| 3 20| 0.0| | 18| 17 20| 1.0| | 15| 2 23| 0.0| | 3| 16 9| 1.0| | 4| 13 28| 0.0| | 2| 12 24| 0.0| | 16| 20 26| 1.0| | 9| 16 9| 1.0| | 12| 6 5| 0.0| | 15| 10 17| 1.0| | 7| 2 11| 0.0| | 15| 14 2| 1.0| | 6| 3 2| 0.0| | 20| 19 22| 0.0| | 8| 19 11| 0.0| | 6| 15 7| 0.0| | 13| 12 5| 0.0| | 4| 8 22| 0.0| | 7| 17 5| 0.0| +----------+----------+-----+ Comparing the output against the raw CSV file. For example, the first row the CSV file has this data: Notice how the actual temperature is less than the target temperature suggesting the building is cold. The value for label in the first row is 0.0, which means the building isn't hot. Prepare a data set to run the trained model against. To do so, you pass on a system ID and system age (denoted as SystemInfo in the training output). The model predicts whether the building with that system ID and system age will be hotter (denoted by 1.0) or cooler (denoted by 0.0). # SystemInfo here is a combination of system ID followed by system age Document = Row("id", "SystemInfo") test = sc.parallelize([(1L, "20 25"), (2L, "4 15"), (3L, "16 9"), (4L, "9 22"), (5L, "17 10"), (6L, "7 22")]) \ .map(lambda x: Document(*x)).toDF() Finally, make predictions on the test data. # Make predictions on test documents and print columns of interest prediction = model.transform(test) selected = prediction.select("SystemInfo", "prediction", "probability") for row in selected.collect(): print row The output is similar to: Row(SystemInfo=u'20 25', prediction=1.0, probability=DenseVector([0.4999, 0.5001])) Row(SystemInfo=u'4 15', prediction=0.0, probability=DenseVector([0.5016, 0.4984])) Row(SystemInfo=u'16 9', prediction=1.0, probability=DenseVector([0.4785, 0.5215])) Row(SystemInfo=u'9 22', prediction=1.0, probability=DenseVector([0.4549, 0.5451])) Row(SystemInfo=u'17 10', prediction=1.0, probability=DenseVector([0.4925, 0.5075])) Row(SystemInfo=u'7 22', prediction=0.0, probability=DenseVector([0.5015, 0.4985])) Observe the first row in the prediction. For an HVAC system with ID 20 and system age of 25 years, the building is hot (prediction=1.0). The first value for DenseVector (0.49999) corresponds to the prediction 0.0 and the second value (0.5001) corresponds to the prediction 1.0. In the output, even though the second value is only marginally higher, the model shows prediction=1.0. Shut down the notebook to release the resources. To do so, from the File menu on the notebook, select Close and Halt. This action shuts down and closes the notebook. Use Anaconda scikit-learn library for Spark machine learning Apache Spark clusters in HDInsight include Anaconda libraries. It also includes the scikit-learn library for machine learning. The library also includes various data sets that you can use to build sample applications directly from a Jupyter Notebook. For examples on using the scikit-learn library, see. 
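As a minimal illustration of that option, the following generic scikit-learn snippet (not HDInsight-specific) trains and evaluates a classifier on one of the library's bundled sample data sets and can be run in a notebook cell:

```python
# Minimal scikit-learn example: train and evaluate a classifier on a bundled data set.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, predictions))
```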
Next steps In this tutorial, you learned how to use the Jupyter Notebook to build an Apache Spark machine learning application for Azure HDInsight. Advance to the next tutorial to learn how to use IntelliJ IDEA for Spark jobs.
https://docs.microsoft.com/en-in/azure/hdinsight/spark/apache-spark-ipython-notebook-machine-learning
2021-11-27T15:35:38
CC-MAIN-2021-49
1637964358189.36
[array(['media/apache-spark-ipython-notebook-machine-learning/spark-machine-learning-understand-data.png', 'Snapshot of data used for Spark machine learning example'], dtype=object) array(['media/apache-spark-ipython-notebook-machine-learning/hdinsight-azure-portal-delete-cluster.png', 'Azure portal deletes an HDInsight cluster'], dtype=object) ]
docs.microsoft.com
Hacking on Hy¶ Join our Hyve!¶ Please come hack on Hy! Please come hang out with us on #hy on irc.freenode.net! Please talk about it on Twitter with the #hy hashtag! Please blog about it! Please don't spraypaint it on your neighbor's fence (without asking nicely)! Hack!¶ Do this: Create a virtual environment: $ virtualenv venv and activate it: $ . venv/bin/activate or use virtualenvwrapper to create and manage your virtual environment: $ mkvirtualenv hy $ workon hy Get the source code: $ git clone or use your fork: $ git clone [email protected]:<YOUR_USERNAME>/hy.git Install for hacking: $ cd hy/ $ pip install -e . Install other develop-y requirements: $ pip install -r requirements-dev.txt Do awesome things; make someone shriek in delight/disgust at what you have wrought. Test!¶ Tests are located in tests/. We use pytest. To run the tests: $ pytest Write tests---tests are good! Also, it is good to run the tests for all the platforms supported and for PEP 8 compliant code. You can do so by running tox: $ tox Document!¶ Documentation is located in docs/. We use Sphinx. To build the docs in HTML: $ cd docs $ make html Write docs---docs are good! Even this doc! Contributor Guidelines¶ Contributions are welcome and greatly appreciated. Every little bit helps in making Hy better. Potential contributions include: - Reporting and fixing bugs. - Requesting features. - Adding features. - Writing tests for outstanding bugs or untested features. - You can mark tests that Hy can't pass yet as xfail. - Cleaning up the code. - Improving the documentation. - Answering questions on the IRC channel, the mailing list, or Stack Overflow. - Evangelizing for Hy in your organization, user group, conference, or bus stop. Issues¶ In order to report bugs or request features, search the issue tracker to check for a duplicate. (If you're reporting a bug, make sure you can reproduce it with the very latest, bleeding-edge version of Hy from the master branch on GitHub. Bugs in stable versions of Hy are fixed on master before the fix makes it into a new stable release.) If there aren't any duplicates, then you can make a new issue. It's totally acceptable to create an issue when you're unsure whether something is a bug or not. We'll help you figure it out. Use the same issue tracker to report problems with the documentation. Pull requests¶ Submit proposed changes to the code or documentation as pull requests (PRs) on GitHub. Git can be intimidating and confusing to the uninitiated. This getting-started guide may be helpful. However, if you're overwhelmed by Git, GitHub, or the rules below, don't sweat it. We want to keep the barrier to contribution low, so we're happy to help you with these finicky things or do them for you if necessary. Deciding what to do¶ Issues tagged good-first-bug are expected to be relatively easy to fix, so they may be good targets for your first PR for Hy. If you're proposing a major change to the Hy language, or you're unsure of the proposed change, create an issue to discuss it before you write any code. This will allow others to give feedback on your idea, and it can avoid wasted work. File headers¶ Every Python or Hy file in the source tree that is potentially ;; in place of # for Hy files): # Copyright [current year] the authors. # This file is part of Hy, which is free software licensed under the Expat # license. See the LICENSE. As a rule of thumb, a file can be considered potentially copyrightable if it includes at least 10 lines that contain something other than comments or whitespace. 
If in doubt, include the header. Commit formatting¶ Many PRs are small enough that only one commit is necessary, but bigger ones should be organized into logical units as separate commits. PRs should be free of merge commits and commits that fix or revert other commits in the same PR ( git rebase is your friend). Avoid committing spurious whitespace changes. The first line of a commit message should describe the overall change in 50 characters or less. If you wish to add more information, separate it from the first line with a blank line. Testing¶ New features and bug fixes should be tested. If you've caused an xfail test to start passing, remove the xfail mark. If you're testing a bug that has a GitHub issue, include a comment with the URL of the issue. No PR may be merged if it causes any tests to fail. You can run the test suite and check the style of your code with make d. The byte-compiled versions of the test files can be purged using git clean -dfx tests/. If you want to run the tests while skipping the slow ones in test_bin.py, use pytest --ignore=tests/test_bin.py. The PR itself¶ PRs should ask to merge a new branch that you created for the PR into hylang/hy's master branch, and they should have as their origin the most recent commit possible. If the PR fulfills one or more issues, then the body text of the PR (or the commit message for any of its commits) should say "Fixes #123" or "Closes #123" for each affected issue number. Use this exact (case-insensitive) wording, because when a PR containing such text is merged, GitHub automatically closes the mentioned issues, which is handy. Conversely, avoid this exact language if you want to mention an issue without closing it (because e.g. you've partly but not entirely fixed a bug). Before any PR is merged, it must be approved by members of Hy's core team other than the PR's author. Changes to the documentation, or trivial changes to code, need to be approved by one member; all other PRs need to be approved by two members. Anybody on the team may perform the merge. Merging should create a merge commit (don't squash unnecessarily, because that would remove separation between logically separate commits, and don't fast-forward, because that would throw away the history of the commits as a separate branch), which should include the PR number in the commit message. Contributor at. Core Team¶ The core development team of Hy consists of following developers:
https://hy.readthedocs.io/en/stable/hacking.html
2017-12-11T03:37:56
CC-MAIN-2017-51
1512948512121.15
[]
hy.readthedocs.io
In this section we will explain how to add music to the VR space. First of all, open the asset list and click on “Music”. Once you do this the music choice field will be displayed. Please choose the music you would like to upload. Once you have chosen a song hit the “Upload” button. Let’s choose “Music Player” as the accessory. Upon doing this your music will be added to the scene and will begin to sound. The music plays automatically.
http://docs.styly.cc/adding-music/inserting-mp3-files/
2017-12-11T04:00:04
CC-MAIN-2017-51
1512948512121.15
[array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) array(['http://docs.styly.cc/wp-content/plugins/lazy-load/images/1x1.trans.gif', None], dtype=object) ]
docs.styly.cc
Annotation Reference This document lists MuleSoft elements that start with an at sign (@). For more information on annotations, see java.lang.annotation.Annotation. For Mule annotations not in the DevKit, see Creating Flow Objects and Transformers Using Annotations. @BinaryParam Category: REST Mule Version: 3.5 and later Specifies if a payload is a binary type. @Category Category: General Mule Version: 3.5 and later Anypoint Studio and the doclet use the @Category annotation to organize message processors. You can use @Category annotation at class definition level ( @Connector or @Module) to select which category you want your extension listed in: Notes: You can only add the connector to one of the existing Studio categories (this means you cannot define your own category). The values for name and description attributes of @Categorycan only have these values: You can use the following to specify the permitted categories: @Configurable Category: General Mule Version: 3.5 and later. @Configuration Category: Connection Management Mule Version: 3.6 and later later Marks a method inside a @Connector scope as responsible for creating a connection. This method can have several parameters and can contain annotations such as @ConnectionKey or @Password. The @Connect annotation guarantees that the method is called before calling any message processor. This method designates If automatic connection management for username and password authentication is used, have exactly one method annotated @Connect; otherwise compilation fails The parameters cannot be of primitive type such as int, bool, short, etc. Example 1: Example 2:. @ConnectionIdentifier Category: Connection Management Mule Version: 3.5 and laterKey Category: Parameters Mule Version: 3.5 and later Marks a parameter inside the connect method as part of the key for the connector lookup. This only can be used as part of the @Connect method. @ConnectionManagement Category: Connection Management Mule Version: 3.6 and later Indicates a class that defines a connection strategy for basic username and password authentication. Examples The following example is for connectors with connection management and connectivity testing. The following example is for connectors with connection management and no connectivity testing: Indicates a connector strategy class. See @ConnectionStrategy for more examples. @ConnectionStrategy Category: Authentication Mule Version: 3.6 and later @ConnectionManagement Multiple Connection Strategies Each of the connection strategies above extends the BaseConnectionStrategy interface. The @ConnectorStrategy field type is the common interface. later and exists only for backward compatibility. Execution Time: Connector Pooling The simplest way is to maintain current DevKit connector’s architecture and continue having a pool of connectors per each configuration. Use the following example: later Parameters: @Default Category: Parameters Mule Version: 3.5 and later Specifies a default value to a @Configurable field or a @Processor or @Source parameter. Or: @Disconnect Category: Connection Management Mule Version: 3.5 and later later, the @Disconnect method only supports RuntimeException, any other exception causes a failure in a connector’s compilation: This method is invoked as part of the maintenance of the Connection Pool. The pool is configured with a maximum idle time value. 
When a connection lies in the pool without use for more than the configured time, then the method annotated with @Disconnect is invoked and subsequently the @Connect method. Also, when the @InvalidateConnectionOn annotation is used on a method to catch Exceptions, then the @Disconnect method likewise is invoked with the subsequent reconnect. @Dispose Category: LifeCycle Mule Version: 3.5 and later Mark a method to be disposed during a method’s org.mule.lifecycle.Disposable phase. Note: dispose is a reserved word that cannot be used as the method’s name. See also: @Initialise @Start @Stop Category: Parameters Mule Version: 3.5 and later Specifies a default email pattern. @ExceptionPayload Category: Parameters Mule Version: 3.5 and later Specifies the payload for an exception. @Expr Category: General Mule Version: 3.5 and later']: @ExpressionEnricher Category: General Mule Version: 3.5 and laterEvaluator Category: General Mule Version: 3.5 and later Marks a method inside an @ExpressionLanguage annotation as being responsible for evaluating expressions. @ExpressionLanguage Category: General Mule Version: 3.5 and later Defines a class that exports its functionality as a Mule Expression Language. @ExpressionLanguage restrictions on which types are valid: Cannot be an interface Must be public Cannot have a typed parameter (no generics) @Filter Category: General Mule Version: 3.5 and later. @FriendlyName Category: Display Mule Version: 3.5 and later: Another example illustrates how the friendlyName appears in the Anypoint Studio connector list: The example Barn connector appears in Anypoint Studio’s list of connectors as: See also: @Password @Path @Placement @Summary @Text @Handle Category: Exception Management Mule Version: 3.6 and later Indicates a method for handling and describing exceptions. There is one @Handle per @Handler class. later Indicates a class that handles an exception. Use with @OnException and @Handle. The constraints later Indicates an implementation of RFC-2617 "HTTP Authentication: Basic and Digest Access Authentication".. @Icons Category: General Mule Version: 3.5 and later Custom palette and flow editor icons. Use this annotation on the connector class to override the default location of one or more of the required icons. The path needs to be relative to the /src/main/java directory. @Ignore Category: General Mule Version: 3.5 and later Ignores a field inside a complex object. @InboundHeaders Category: Argument Passing Mule Version: 3.5 and later Passes inbound headers. @Initialise @InvalidateConnectionOn Category: Connection Management Mule version: 3.5 and later Used on a method to catch Exceptions - deprecated use @ReconnectOn instead. @InvocationHeaders Category: Argument Passing Mule Version: 3.5 and later. @Literal Category: Parameters Mule Version: 3.6 and later Specifies Mule Expression Language (MEL) as a method parameter without the DevKit resolving the expression. You can use any MEL code with this annotation. Problem Given the following Processor method: Given the following Mule XML: The enrich method receives the result of evaluating the following expression: This is because DevKit’s generated code tries to automatically resolve the expression. Solution The @Literal annotation flags a method parameter so that its value coming from Mule XML does not get resolved if it’s a Mule expression: In this case, expression evaluation does not apply to the value of the targetExpression parameter. 
Also, this annotation can be used for Lists of Strings, where each element is passed without evaluating the expression. For example: @MetaDataCategory Category: DataSense Mule Version: 3.5 and later Describes a grouping DataSense concrete class, which returns the types and descriptions of any of those types. Mule 3.6 and later supports @MetaDataCategory both in @Module and @Connector annotations. Use to annotate a class that groups methods used for providing metadata about a connector using DataSense. @MetaDataKeyParam Category: DataSense Mule Version: 3.5 and later Marks a parameter inside @Processor as the key for a metadata lookup. @MetaDataKeyRetriever Category: DataSense Mule Version: 3.5 and later. @MetaDataOutputRetriever Category: DataSense Mule Version: 3.5 and later Marks a method as a describer for @MetaData for output scenarios, for a given @MetaDataKey. @MetaDataRetriever Category: DataSense Mule Version: 3.5 and later The method annotated with @MetaDataRetriever describes the metadata for the received metadata key parameter. Uses the list of metadata keys retrieved by @MetadataKeyRetriever to retrieve the entity composition of each entity Type. @MetaDataScope Category: DataSense Mule Version: 3.5 and later @MetaDataStaticKey Category: Parameters Mule Version: 3.5 and later Defines the specific MetaData type of the annotated value. When applied to a @Processor it affects (by default) just the Output, otherwise it affects the field parameter. See also: @ConnectionKey, @Default, @Email, @ExceptionPayload, @Optional , @RefOnly @Mime Category: General Mule Version: 3.5 and later Generates the appropriate message header. @Module Category: General Mule Version: 3.5 and later @NoMetaData Category: DataSense Mule Version: 3.5 and later Marks a @Processor to avoid discovering metadata with @MetaDataRetriever and @MetaDataKeyRetriever mechanism. @OAuth Category: OAuth Mule Version: 3.5 and later Annotates connectors that uses the OAuth 1.0a protocol for authentication. @OAuth2 Category: OAuth Mule Version: 3.5 and later Annotates connectors that uses the OAuth 2 protocol for authentication. @OAuthAccessToken Category: OAuth Mule Version: 3.3 and laterIdentifier Category: OAuth Mule Version: 3.5 and later Marks a method as responsible for identifying the user of an access token. The method is called by a connector’s access token manager. This identification is used as a key to store access tokens. @OAuthAccessTokenSecret Category: OAuth Mule Version: 3.5 and later Holds an access token secret. @OAuthAuthorizationParameter Category: OAuth Mule Version: 3.5 and later Appends an authorization parameter to authorize a URL. @OAuthCallbackParameter Category: OAuth Mule Version: 3.5 and later Identifies the module attribute that represent each parameter on the service OAuth response. @OAuthConsumerKey Category: OAuth Mule Version: 3.5 and later Holds an OAuth consumer key. This field must contain the OAuth Consumer Key as provided by the Service Provider and described in the OAuth specification. @OAuthConsumerSecret Category: OAuth Mule Version: 3.5 and later Holds an OAuth consumer secret. This field must contain the OAuth Consumer Key as provided by the Service Provider and described in the OAuth specification. @OAuthInvalidateAccessTokenOn Category: OAuth Mule Version: 3.5 and later Marks a method which automatically refreshes the tokens. Note: This annotation is deprecated. Use @ReconnectOn instead. 
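As an illustration of how several of the OAuth2-related annotations above are typically combined, here is a minimal, hypothetical connector sketch. The connector name, service URLs, and the getUserProfile operation are placeholders invented for the example, and the import locations assume the usual DevKit 3.x package layout:

```java
import org.mule.api.annotations.Configurable;
import org.mule.api.annotations.Connector;
import org.mule.api.annotations.Processor;
import org.mule.api.annotations.oauth.OAuth2;
import org.mule.api.annotations.oauth.OAuthAccessToken;
import org.mule.api.annotations.oauth.OAuthConsumerKey;
import org.mule.api.annotations.oauth.OAuthConsumerSecret;
import org.mule.api.annotations.oauth.OAuthProtected;

@Connector(name = "acme", friendlyName = "Acme")
@OAuth2(authorizationUrl = "https://login.acme.example/oauth/authorize",
        accessTokenUrl = "https://login.acme.example/oauth/token")
public class AcmeConnector {

    @Configurable
    @OAuthConsumerKey
    private String consumerKey;        // consumer key issued by the service provider

    @Configurable
    @OAuthConsumerSecret
    private String consumerSecret;     // consumer secret issued by the service provider

    @OAuthAccessToken
    private String accessToken;        // populated once the OAuth dance completes

    @OAuthProtected
    @Processor
    public String getUserProfile() {
        // Placeholder: call the remote API using the access token.
        return "profile for token " + accessToken;
    }

    // Getters and setters for the @Configurable fields omitted for brevity.
}
```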
@OAuthPostAuthorization Category: OAuth Mule Version: 3.5 and later Marks a method inside OAuth as the responsible for setting up the connector after OAuth completes. @OAuthProtected Category: OAuth Mule Version: 3.5 and later Marks a method inside a Connector as requiring an OAuth access token. Such a method fails to execute while the connector is not authorized. Therefore, forcing the OAuth to happen first. @OAuthScope Category: OAuth Mule Version: 3.5 and later Indicates that access to the Protected Resources must be restricted in scope. A field annotated with @OAuthScope must be present and contain a String indicating the desired scope. @OnException Category: Exception Handling Mule Version: 3.6 and later later,: @Optional Category: Parameters Mule Version: 3.5 and later Marks a @Configurable field or a @Processor or @Source parameters as optional. @OutboundHeaders Category: Argument Passing Mule Version: 3.5 and later Used to pass outbound headers. @Paged Category: General Mule Version: 3.5 and later Marks a method inside a @Connector as an operation that returns a paged result set. Methods annotated with this interface must also be annotated with @Processor and must return an instance of @ProviderAwarePagingDelegate. Category: Display Mule Version: 3.5 and later Identifies a field or method parameter as being a password, or more generally as a variable which contains data that cannot be displayed as plain text. The following shows how the password appears in the Global Element Properties: See also: @FriendlyName @Path @Placement @Summary @Text @Path Category: Display Mule Version: 3.5 and later Identifies a field or method parameter as being a path to a file. This displays a window at Studio to choose a file from the filesystem. See also: @FriendlyName @Password @Placement @Summary @Text @Payload Category: Argument Passing Mule Version: 3.5 and later Marks arguments to receive the payload. @Placement Category: Display Mule Version: 3.5 and later Defines the placement of a configurable attribute in the Anypoint Studio configuration. Use this annotation to instance. The following code creates the General > Basic Settings for Consumer Key and Consumer Secret settings: The generated screen is: This code creates the Advanced Settings > Application Name setting under the General Information section: The generated screen is: See also: @FriendlyName @Password @Path @Summary @Text @Processor Category: General Mule Version: 3.5 and later Marks a method as an operation in a connector. A @Processor method generates a general purpose message processor. The parameters for this annotation are optional. The friendlyName lets you specify the display name for the Operation. @Query Category: DataSense Mule Version: 3.5 and later Supports easy query building by using DataSense Query Language (DSQL). Define @Query within an @Connector scope. @QueryPart Category: DataSense Mule Version: 3.5 and later Used in advanced @Query scenarios. @QueryTranslator Category: DataSense Mule Version: 3.5 and later Translates a DSQL query into a native one. @ReconnectOn Category: Connection Management Mule Version: 3.5 and later. @RefOnly Category: Parameters Mule Version: 3.5 and later Marks a @Configurable field, a @Processor parameter, or @Source parameter as being passed by reference only. @RequiresEntitlement Checks to see if a @Module or @Processor requires an Enterprise license with a particular entitlement. Works at connector level. Enterprise only. 
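To make the parameter and display annotations above more concrete, the following hypothetical fragment combines @Configurable, @Password, @Placement, @Processor, @Default, and @Optional. The connector name, group name, and operation are invented for the example, and the import locations assume the usual DevKit 3.x packages:

```java
import java.util.Collections;
import java.util.List;

import org.mule.api.annotations.Configurable;
import org.mule.api.annotations.Connector;
import org.mule.api.annotations.Processor;
import org.mule.api.annotations.display.Password;
import org.mule.api.annotations.display.Placement;
import org.mule.api.annotations.param.Default;
import org.mule.api.annotations.param.Optional;

@Connector(name = "inventory", friendlyName = "Inventory")
public class InventoryConnector {

    @Configurable
    @Password
    @Placement(group = "Credentials", order = 1)
    private String apiKey;             // rendered as a masked field in Studio

    @Processor(friendlyName = "Find Items")
    public List<String> findItems(String type,
                                  @Default("10") int limit,
                                  @Optional String warehouse) {
        // Placeholder implementation for the sketch.
        return Collections.emptyList();
    }

    public String getApiKey() { return apiKey; }
    public void setApiKey(String apiKey) { this.apiKey = apiKey; }
}
```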
@RequiresEnterpriseLicense Checks to see if a @Module or @Processor requires an Enterprise license. The license can be an evaluation license or not. Works at connector level. Enterprise only. @RestCall Category: REST Mule Version: 3.5 and later. Optional arguments: contentType: The content-type of the response from this method call. exceptions: A list of exceptions to throw, configured by pairing an exception type and an expression which is evaluated. In this case, the @RestExceptionOn annotation is used to throw an exception on a specified criteria. In the example above, if the HTTP status is not 200, an exception is thrown. @RestExceptionOn Category: REST Mule Version: 3.5 and later Throws an exception on specified criteria. @RestHeaderParam Category: REST Mule Version: 3.5 and later. When you use the @RestHeaderParam on a specific argument in a method, the header is only included if the method is called. @RestHttpClient Category: REST Mule Version: 3.5 and later An annotation to mark the HttpClient the module uses. This way, you avoid creating multiple clients and have the opportunity to perform your own calls or to configure the HttpClient to fulfill special needs: @RestPostParam Category: REST Mule Version: 3.5 and later: Another way is to annotate an @Configurable variable with @RestPostParam as follows: @RestQueryParam Category: REST Mule Version: 3.5 and later. When the getByType message processor is called with mule as a parameter, the resultant call would be: @RestTimeout Category: REST Mule Version: 3.5 and later. @RestUriParam Category: REST Mule Version: 3.5 and later Allows you to dynamically generate URIs by inserting parameters which are annotated with the @RestUriParam annotation.. When applying annotations to @Processor methods, specify a placeholder in the URI by surrounding the placeholder with curly braces, for example {type}. You can apply @RestUriParam to @Processor methods arguments as follows: Another way is to annotate the @Configurable variable with @RestUriParam as follows: The next example replaces the path: Reference the path argument: @SessionHeaders Category: Argument Passing Mule Version: 3.5 and later. @Start Category: LifeCycle Mule Version: 3.5 and later Mark a method to be started during a method’s org.mule.lifecycle.Startable phase. Note: start is a reserved word and cannot be used as the method’s name. See also: @Dispose @Initialise @Stop @Stop Category: LifeCycle Mule Version: 3.5 and later Mark a method to be stopped during a method’s org.mule.lifecycle.Stoppable phase. Note: stop is a reserved word and cannot be used as the method’s name. See also: @Dispose @Initialise @Start @Source Category: General Mule Version: 3.5 and later. Invoke this method as follows: This flow subscribes to a topic and when an update appears, invokes the logger message processor. @Summary Category: Display Mule Version: 3.5 and later Adds display information to a field or parameter. Use this annotation to instance variables and method parameters to provide a way to override the default inferred description for a @Configurable variable or a @Processor, @Source, @Transformer method parameter. See also: @FriendlyName @Password @Path @Placement @Text @TestConnectivity Category: Connection Management Mule Version: 3.6 and later Indicates a class for testing connection connectivity. @TestConnectivity makes a connector simpler and helps build better connection strategies. 
The following example is for connectors with connection management and connectivity testing: The following example is for connectors with connection management and no connectivity testing: The following example is for connectors without connection management and connectivity testing: later later Marks a method as a Transformer of data-types or as data formats in the context of the connector. This annotation identifies a method that becomes a Mule transformer. @TransformerResolver Category: General Mule Version: 3.5 and later. @ValidateConnection Category: Connection Management Mule Version: 3.5 and later.
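To round out the connection-management annotations (@Connect, @Disconnect, @ValidateConnection, @ConnectionIdentifier, @TestConnectivity), here is a hypothetical basic-authentication connection strategy sketch. SomeServiceClient and its methods are invented for the example, and the import locations assume the usual DevKit 3.x packages:

```java
import org.mule.api.ConnectionException;
import org.mule.api.annotations.Connect;
import org.mule.api.annotations.ConnectionIdentifier;
import org.mule.api.annotations.Disconnect;
import org.mule.api.annotations.TestConnectivity;
import org.mule.api.annotations.ValidateConnection;
import org.mule.api.annotations.components.ConnectionManagement;
import org.mule.api.annotations.display.Password;
import org.mule.api.annotations.param.ConnectionKey;

@ConnectionManagement(friendlyName = "Basic Authentication")
public class BasicAuthConnectionStrategy {

    private SomeServiceClient client;   // hypothetical API client

    @Connect
    @TestConnectivity
    public void connect(@ConnectionKey String username, @Password String password)
            throws ConnectionException {
        // Open the session; throw ConnectionException so the runtime can reconnect.
        client = new SomeServiceClient(username, password);
    }

    @Disconnect
    public void disconnect() {
        if (client != null) {
            client.close();
            client = null;
        }
    }

    @ValidateConnection
    public boolean isConnected() {
        return client != null && client.isAlive();
    }

    @ConnectionIdentifier
    public String connectionId() {
        return client == null ? "unconnected" : client.sessionId();
    }
}
```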
https://docs.mulesoft.com/anypoint-connector-devkit/v/3.6/annotation-reference
2017-12-11T03:51:23
CC-MAIN-2017-51
1512948512121.15
[array(['./_images/Screen+Shot+2014-12-30+at+1.06.11+PM.png', 'Screen+Shot+2014-12-30+at+1.06.11+PM'], dtype=object) array(['./_images/friendlyName-screenshot.png', 'friendlyName-screenshot'], dtype=object) array(['./_images/password-screenshot.png', 'password-screenshot'], dtype=object) array(['./_images/placement-1-screenshot.png', 'placement-1-screenshot'], dtype=object) array(['./_images/placement-2-screenshot.png', 'placement-2-screenshot'], dtype=object) ]
docs.mulesoft.com
Statistical Tests on. This is where statistical testing can help. It can identify tables that may contain interesting results. There are two different approaches to performing significance testing on tables with the goal of helping data exploration: column comparisons and cell comparisons. Contents Column comparisons As shown in the first row of the table above, 65% of people aged 18 to 24 preferred Coca-Cola, compared to 41% of people aged 25 to 29, 55% of people aged 30 to 39, etc. One approach to conducting significance tests on this table is, for each row, to compare the percentages all possible pairs of columns. That is, test to see if the 18 to 24 year olds preference to Coca-Cola is different to the preference of people aged 25 to 29, is different to the people aged 30 to 39, and so on. The table below shows P-Values computed between each of the columns' percentages in the first row. The p-values are bold where they are less than or equal to the significance level cut-off of 0.05. Each column's age category has been assigned a letter and the significant pairs of columns are: A-B, A-D, A-F, A-G, C-D and C-F. If we use greater than and less than signs to indicate which values are higher, we have: A>B, A>D, A>F, A>G, C>D and C>F. Although the six pairs are all significant at the 0.05 level, some have much lower p-values than others. If we use upper-case letters to indicate results significant at the 0.05 level and lower-case to indicate results significant at the 0.001 level we get: a>b, A>D, a>f, a>g, c>d and c>f. (Often commercial studies use upper-case for significant at the 0.05 level and lower case for significant at the 0.10 level.) The table below places the letters indicating significance onto the table. Letters are only shown beneath the higher of the comparisons. Thus, only the 18 to 24 and 30 to 39 categories have letters for Coca-Cola. Tests have been shown for all the rows on the table. Also known as This approach is sometimes referred to as pairwise comparisons, post hoc testing and multiple comparisons. Cell comparisons An alternative approach to testing is to compare each cell with the combined data from the other cells in the rows. For example, we can compare the 65% preference of Coca-Cola by 18 to 24 year olds with the preference of all the people in the other age groups. The table below shows the some data as above but with the unweight counts shown on each table (labeled as n). We can compute the preference for Coca-Cola amongst the people not aged 18 to 24 is (16+38+17+18+18+8)/(39+69+60+39+50+22)=42%. A significance test computes the p-value of 65% versus 42% as being 0.0001. In the same way, we can compare each of the age categories with the combined results from the other age categories. The table below shows the resulting p-values of the seven significance tests. The table below shows the significance tests for all the cells in the table. Arrows are used to indicate results significant at the 0.05 level. The length of the arrows is determined by the p-value. Smaller p-values are represented by longer arrows. In contrast to the column comparisons shown above, this approach to representing significance is a little easier to read as the arrows provide visual cues which highlight the nature of the patterns in the data and thus draws the reader's attention to exceptions. Also known as There is no standard name for this approach to showing statistical significance. 
Although it is referred to as cell comparisons on this site, it is also sometimes described as residuals analysis or exception reporting (both of these terms have other meanings as well).
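For readers who want to reproduce the arithmetic behind a cell comparison like the one above, here is a small sketch of an unweighted, uncorrected two-proportion z-test in Python. The 18-to-24 cell counts are placeholder values chosen to give roughly 65% (the extract only gives the combined counts for the other age groups, 115 of 279), so the printed p-value will not exactly match the 0.0001 quoted above; production crosstab software typically also applies weighting and multiple-comparison corrections.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Cell comparison: Coca-Cola preference of 18-24 year olds vs. everyone else.
# 28 of 43 (~65%) is a placeholder for the 18-24 cell; 115 of 279 (~42%) is the
# combined count for the other age groups described in the text above.
z, p = two_proportion_z_test(28, 43, 115, 279)
print(f"z = {z:.2f}, p = {p:.4f}")
```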
http://docs.displayr.com/wiki/Statistical_Tests_on_Tables
2017-12-11T04:01:53
CC-MAIN-2017-51
1512948512121.15
[]
docs.displayr.com
HP Business Availability Center (BAC) allows you to optimize the availability, performance, and effectiveness of business services and applications. It helps you understand the business impact an outage or degradation may have on business services and applications.
https://docs.bmc.com/docs/exportword?pageId=586752843
2020-07-02T23:08:57
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
BizTalk Health Monitor v2 released! BizTalk Health Monitor MMC snap-in is available since June 2014 with the release of BizTalk Server 2013 R2 and also as a standalone version for BizTalk Server 2010 and 2013. This first version was developed based on the known MBV engine to provide out of the box way to administer your BizTalk environment. BHM gives you a powerful dashboard to monitor the health of your BizTalk group. For more details on BHM and its features visit this post. We collected lot of feedbacks from BHM users to enrich BHM with new features, at the same time made it more reliable to provide a better experience. This post will concentrate on all the new features of BHM with v2. This version will be available to download for BizTalk Server 2010 and 2013 from BHM download link whereas it will be released for 2013 R2 with the upcoming cumulative update. Following is the list of new features with BHM v2. Click on each feature to get more details: - Customized Dashboard – Now you can customize your dashboard by adding\removing\resizing custom tiles. - Custom Queries – You can add your own queries to the BHM to make it more personalize and enrich the out of the box BHM query repository. - Custom Rules – BHM v2 will also allow you to add custom rules on your custom queries or existing BHM queries so you can easily monitor your environment specific information. - Profiles enhancements – You can create multiple profiles to monitor a single or multiple BizTalk groups. Based on customers feedback we have made some enhancements in the profile management: - Now you can add the option to specify a different user under which BHM should collect the report. - We also moved the report management from BHM to per profile level so that you can manage the BHM reports for each profile separately. - You can easily create a copy of a profile and reuse it for other group or some other modifications. - You now have an option to select if you want to create the HTML page. You can uncheck this option to preserve disk space. - Renamed it from “Group” to “Profile” Customized Dashboard First version of BHM Dashboard comes up with seven default tiles showing categorized information about the BizTalk group. With this version we have given an option to add your own custom tiles. These custom tiles can be created to display any entries from the Warnings, Summary or Query output sections of BHM. This can be done by just a right click on one or more selected entries of these views. You can either pin them to a new or existing custom tiles on your dashboard to display output of a specific query or the portion of the summary report or warnings. This feature is useful for a quick display of some critical information of your specific environment in the dashboard view of a given profile. Custom tiles are displayed in green color and can be resized, renamed, removed by just a right click on it. Another important point is that this customization is done per profile so it gives you a lot of flexibility in customization. Like default tiles, each list view (bottom second half of the dashboard) entry of a custom tile provides an hyperlink to the source query and also an hyperlink to the rule which produced it and you can in one click disable the rule. We haven’t put any limit on the number of custom tiles you can have on the dashboard and you can’t remove\rename the default tiles but you can resize them. 
This is a sample of dashboard customization: Let’s have a look at what you can pin and how you can pin custom tiles to the dashboard: Following are the items which can be pinned to the BHM dashboard: - Warnings and Critical Warning sections - Summary Report sections - Any query from the query report - Any specific row of the query from query report The steps to add a custom tile for first three are pretty much same. Here’s how you can do it: - Select the item that you would like to add as a custom tile on the dashboard. - Right click on the item and then select “Pin to the dashboard” - This gives you an option to either add a new tile or add to an existing tile Following are the snapshots of how it can be done in all the three cases: Warnings and Critical Warning sections Summary Report sections Any query output from the query report Notes: If you pin a query output in its own tile, the entire content of the query output will be displayed in the dashboard when selecting the custom tile. If however you add a query in an existing tile, only a link entry to the query will be added in the list view of the custom tile. Multiple query output links from the query report Any specific row of the query from query report You can also choose to display some specific row(s) from a query output by clicking on a custom tile. For example you can chose to display some specific send ports with their names and status by just selecting their rows in the Send ports query and adding them to the dashboard. To achieve that: - expand the Queries output node of an existing report and then expand the category node of the query. - Select the query to display its rows in the MMC detail view (right side) - Select and right click on one or more rows which you want to add as a custom tile. - Select “Pint to dashboard” to display the context menu. - Select which columns you want to show as Caption and its corresponding Values. In the end this is how the customized dashboard looks like: BHM Repository customization This is another important feature of BHM v2: the possibility to add your own queries and rules in the repository of BHM. You can very quickly add your own custom queries in the query repository of a given profile and create your own rules, selecting existing or custom source queries. Custom queries You can of course edit or remove custom queries” Custom Rules tree types of action: - Add an entry in the summary or warnings sections (if the information level field is a warning) - Add an entry in the topology section - Spawn a process Follow this to add a rule: - Right click on the Profile and select settings. - Select the tab “Rules” listing all the BHM rules and their source query. - Click on the button “Add custom rule” - Select the target query, click ok. - Fill the rule properties, conditions and actions in the rule Dialog Box - Test your rule : an HTML view will be displayed with the query output and the Summary & warnings sections - Validate that the rule is added to the rules list view - Validate at the settings page that the rule is persisted as a custom rule in your profile You can of course edit or remove custom rules Notes: - You can also add a rule by right clicking on a query in the queries output node -. - Some rules of BHM will not display conditions because they are coded in a custom assembly. -. Profile enhancements Profile nodes BHM v1 allowed you to create additional group nodes in the BHM MMC to target different BizTalk groups and specify different level of information to collect. 
We extended this notion and renamed it “Profiles”: you can still target different BizTalk groups, but you can also keep different sets of customization settings. You can therefore create multiple profiles that target the same BizTalk group but collect different levels of information (different queries created and selected, different rules enabled, and so on). This is the new settings dialog box of a given profile: Report management We have moved the BHM report management option from the MMC level to the profile level. You can now manage your BHM reports on a per-profile basis and specify a different retention period for the reports of each profile. This change is useful if you target different BizTalk groups in your profiles. Profile Duplication An existing profile can now be quickly copied over to create a new profile. This feature is useful if you want to reuse existing profile settings for monitoring other BizTalk groups. To do this, just right-click on the profile and select the menu item “Duplicate profile”. Do not generate HTML files BHM now provides an option in the “Information Level” tab of the profile settings dialog box to not generate BHM HTML files during a collect (analyze). This option preserves disk space for each report generated in the profile output folder, but you will not be able to open reports in the browser.
https://docs.microsoft.com/en-us/archive/blogs/biztalkhealthmonitor/biztalk-health-monitor-v2-released
2020-07-02T23:11:42
CC-MAIN-2020-29
1593655880243.25
[]
docs.microsoft.com
You can uninstall the Template Service Broker if you no longer require access to the template applications that it provides. The following procedure uninstalls the Template Service Broker and its Operator using the web console. Prerequisite: the Template Service Broker is installed. In the web console, open the list of installed Operators, find the Template Service Broker Operator, then click on it. On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu. When prompted by the Remove Operator Subscription window, optionally select the Also completely remove the Operator from the selected namespace check box if you want all components related to the installation to be removed. This removes the CSV, which in turn removes the Pods, Deployments, CRDs, and CRs associated with the Operator. Select Remove. This Operator will stop running and no longer receive updates. The Template Service Broker Operator is no longer installed in your cluster. After the Template Service Broker is uninstalled, users will no longer have access to the template applications provided by the Template Service Broker.
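If you prefer the CLI, the same removal can usually be performed with oc by deleting the Operator's Subscription and ClusterServiceVersion. The commands below are a sketch only: the namespace and resource names are placeholders that you must look up on your own cluster, and the console procedure above remains the documented path.

```bash
# List the Subscription and CSV for the Operator (namespace is a placeholder).
oc get subscriptions -n <operator-namespace>
oc get clusterserviceversions -n <operator-namespace>

# Delete the Subscription first, then the CSV it was tied to.
oc delete subscription <subscription-name> -n <operator-namespace>
oc delete clusterserviceversion <csv-name> -n <operator-namespace>
```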
https://docs.openshift.com/container-platform/4.2/applications/service_brokers/uninstalling-template-service-broker.html
2020-07-02T21:38:54
CC-MAIN-2020-29
1593655880243.25
[]
docs.openshift.com
MassFX Flow creates a simple Particle Flow setup that is wired for MassFX simulation; just play the animation to see the physics simulation. The flow comprises two events: one global and one local, similar to the Standard Flow setup. The global PF Source event has the Quantity Multiplier settings for both Viewport and Render set to 100% and Integration Step for both Viewport and Render set to Half Frame. This is done to have the same amount of particles in render and viewport; and have the same integration step, thus ensuring that the rendered simulation works the same as the viewport simulation. The local event, Event 001, contains the following operators: Birth Grid, Shape, MassFX Shape, Spin, MassFX World, and Display. The Birth Grid icon is positioned 60 units above the ground, and its size combined with the Grid Size setting are calculated to generate 100 particles. The Shape operator is set to the Cube type, with its Size value slightly smaller than the Grid Size setting in the Birth Grid operator. This prevents particles from colliding at the moment the simulation starts. The particles are set in MassFX Shape to collide as boxes. The Spin operator has a low Spin Rate value to create a random small spin for all particles. This enhances the simulation effect, because particles move slightly differently from one another and do not copy or clone each other in their relative motion. In addition to the MassFX World operator, the preset creates a MassFX World helper (the simulation driver) and associates the two. The MassFX World helper has both gravity and the floor (Ground Collision Plane) enabled, and is positioned at Z=0.0. The Display operator is set to Geometry, because with simulations it's important to see the actual particle shapes in the viewport. Last but not least, Real Time is turned off in the 3ds Max Time Configuration settings. So, again, you can just play the animation and see the simulation with optimal settings, immediately.
http://docs.autodesk.com/3DSMAX/15/ENU/3ds-Max-Help/files/GUID-5AC4E748-A134-4DCC-A749-E329C6C1F317.htm
2020-07-02T22:17:05
CC-MAIN-2020-29
1593655880243.25
[]
docs.autodesk.com
# Bucket sort

A bucket sort can be described as an ordered set of sorting criteria. All the documents are sorted by the first criterion; then the documents that cannot be distinguished are sorted using the second criterion, and so on. Thus, not every document is sorted against every criterion, which reduces compute time. MeiliSearch ships with an ordered list of default criteria that is applied in exactly this way.
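As a rough illustration of the mechanism — with made-up criteria rather than MeiliSearch's real defaults, and not the engine's actual implementation — a bucket sort over an ordered list of criteria can be modeled like this:

```python
def bucket_sort(documents, criteria):
    """Sort by the first criterion, then break ties with the next, and so on.
    Buckets that already contain a single document are never re-sorted."""
    buckets = [list(documents)]
    for criterion in criteria:
        next_buckets = []
        for bucket in buckets:
            if len(bucket) <= 1:              # already distinguished: no more work
                next_buckets.append(bucket)
                continue
            bucket = sorted(bucket, key=criterion, reverse=True)
            group = [bucket[0]]
            for doc in bucket[1:]:
                if criterion(doc) == criterion(group[-1]):
                    group.append(doc)         # still tied under this criterion
                else:
                    next_buckets.append(group)
                    group = [doc]
            next_buckets.append(group)
        buckets = next_buckets
    return [doc for bucket in buckets for doc in bucket]

# Made-up criteria: fewer typos first, then more matched query words.
docs = [
    {"id": 1, "typos": 1, "words": 2},
    {"id": 2, "typos": 0, "words": 1},
    {"id": 3, "typos": 0, "words": 3},
]
ranked = bucket_sort(docs, [lambda d: -d["typos"], lambda d: d["words"]])
print([d["id"] for d in ranked])   # -> [3, 2, 1]
```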
https://docs.meilisearch.com/guides/advanced_guides/bucket_sort.html
2020-07-02T21:16:48
CC-MAIN-2020-29
1593655880243.25
[]
docs.meilisearch.com
The C++ switch Statement The C++ switch statement allows selection among multiple sections of code, depending on the value of an expression. The expression enclosed in parentheses, the “controlling expression,” must be of an integral type or of a class type for which there is an unambiguous conversion to integral type. Integral promotion is performed as described in Integral Promotions in Chapter 3. The switch statement causes an unconditional jump to, into, or past the statement that is the “switch body,” depending on the value of the controlling expression, the values of the case labels, and the presence or absence of a default label. The switch body is normally a compound statement (although this is not a syntactic requirement). Usually, some of the statements in the switch body are labeled with case labels or with the default label. Labeled statements are not syntactic requirements, but the switch statement is meaningless without them. The default label can appear only once. Syntax caseconstant-expression:statement default:statement The constant-expression in the case label is converted to the type of the controlling expression and is then compared for equality. In a given switch statement, no two constant expressions in case statements can evaluate to the same value. The behavior is shown in Table 5.1. Table 5.1 Switch Statement Behavior An inner block of a switch statement can contain definitions with initializations as long as they are reachable — that is, not bypassed by all possible execution paths. Names introduced using these declarations have local scope. The following code fragment shows how the switch statement works: most deeply nested switch statements that enclose them. For example: switch( msg ) { case WM_COMMAND: // Windows command. Find out more. switch( wParam ) { case IDM_F_NEW: // File New menu command. delete wfile; wfile = new WinAppFile; break; case IDM_F_OPEN: // File Open menu command. wfile->FileOpenDlg(); break; ... } case WM_CREATE: // Create window. ... break; case WM_PAINT: // Window needs repainting. ... break; default: return DefWindowProc( hWnd, Message, wParam, lParam ); } The preceding code fragment from a Microsoft Windows® message loop shows how switch statements can be nested. The switch statement that selects on the value of wParam is executed only if msg is WM_COMMAND. The case labels for menu selections, IDM_F_NEW and IDM_F_OPEN, associate with the inner switch statement. Control is not impeded by case or default labels. To stop execution at the end of a part of the compound statement, insert a break statement. This transfers control to the statement after the switch statement. This example demonstrates how control “drops through” unless a break statement is used: BOOL fClosing = FALSE; ... switch( wParam ) { case IDM_F_CLOSE: // File close command. fClosing = TRUE; // fall through case IDM_F_SAVE: // File save command. if( document->IsDirty() ) if( document->Name() == "UNTITLED" ) FileSaveAs( document ); else FileSave( document ); if( fClosing ) document->Close(); break; } The preceding code shows how to take advantage of the fact that case labels do not impede the flow of control. If the switch statement transfers control to IDM_F_SAVE, fClosing is FALSE. Therefore, after the file is saved, the document is not closed. However, if the switch statement transfers control to IDM_F_CLOSE, fClosing is set to TRUE, and the code to save a file is executed.
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-6.0/aa278223(v=vs.60)
2018-10-15T10:16:07
CC-MAIN-2018-43
1539583509170.2
[]
docs.microsoft.com
Introduction to the nested user controls problem One of the issues most users of MVVM face is that “nested user controls” problem. The problem is that most (actually all that we’ve seen) MVVM Frameworks only support one view model for a window (or if you’re lucky, a user control). However, the “nested user controls” problem raises lots of questions: - What if the requirements are to build a dynamic UI where the nested user controls are loaded dynamically when they are required? - What about validation in nested user controls? - When should the nested user control view models be saved? Most MVVM developers just answer: “Put all the properties of the nested user controls on the main view model”. Say that again? Are you kidding me? That’s not a real world solution for a real world problem. So, we as developers of Catel offer you a real world solution for the “nested user controls” problem in the form of the UserControl. The real power of the UserControl class lays in the fact that it is able to construct view models dynamically based on its data context. So, the only thing the developers have to take care of is to set the right data context. Below is a graphical presentation of the “nested user controls” problem: As the images above show, the method that Catel uses to solve the problem is much more professional. Below are a few reasons: - Separation of concerns (each control has a view model only containing the information for itself, not for children); - User controls are built so they can be re-used. Without the user controls to be able to have their own view models, how should one actually use user controls with MVVM? The idea behind the user control is pretty complex, especially because XAML frameworks aren’t very good at runtime data context type changing. However, with a few workarounds (very well described in the source code of UserControl), it is possible to dynamically construct view models. The user control constructs the view model with or without a constructor as described earlier in this article. When the view model is constructed, the user control tries to find a (logical or visual) parent that implements the IViewModelContainer interface. Thanks to this interface, a view model can subscribe itself to a parent view model and the validation chain is created as shown below: As the image above shows, all children in the chain are validated, and when the last child is validated, the view model reports the result of its children and itself back to its parent. This way, it is still possible to disable a command when one of the nested user control view models has an error. Saving a chain of nested view models works exactly the same as the validation. First, the view model saves all children, then itself and finally reports back its result to the parent. Now, let’s go to some “real-life” example. I don’t want to make it too complex, but not too easy as well, but don’t want to put the focus on the content of the data, but on the user control and view model creation. Therefore, I have chosen for the data model below: The image shows that we have a house. In that house, we have multiple rooms. In each room, there can be several tables with chairs and beds. This shows a “complex” UI tree with lots of different user controls (each object has its own representation and thus user control). 
Now our goal is to create user controls that can be used in the window that shows the full house, but also in “sub-parts” and we want to be fully independent of the HouseWindowViewModel (which is the only view model that would be created in a regular MVVM Framework). The example below shows only the Room control and the corresponding view model. The full source code of this article is provided in the source code repository of Catel, so the whole example is available if you are interested or need a more complete example. First, we start with a simple model. For the model, we use the ModelBase class. By using the provided code snippets, this model is setup within a minute: /// <summary> /// Bed class which fully supports serialization, property changed notifications, /// backwards compatibility and error checking. /// </summary> [Serializable] public class Room : ModelBase<Room> { #region Constructor & destructor /// <summary> /// Initializes a new object from scratch. /// </summary> public Room() : this(NameProperty.GetDefaultValue<string>()) { } /// <summary> /// Initializes a new instance of the <see cref="Room"/> class. /// </summary> /// <param name="name">The name.</param> public Room(string name) { // Create collections Tables = new ObservableCollection<Table>(); Beds = new ObservableCollection<Bed>(); // Store values Name = name; } /// <summary> /// Initializes a new object based on <see cref="SerializationInfo"/>. /// </summary> /// <param name="info"><see cref="SerializationInfo"/> that contains the information.</param> /// <param name="context"><see cref="StreamingContext"/>.</param> protected Room(SerializationInfo info, StreamingContext context) : base(info, context) { } #endregion #region Properties /// ), "Room"); /// <summary> /// Gets or sets the table collection. /// </summary> public ObservableCollection<Table> Tables { get { return GetValue<ObservableCollection<Table>>(TablesProperty); } set { SetValue(TablesProperty, value); } } /// <summary> /// Register the Tables property so it is known in the class. /// </summary> public static readonly PropertyData TablesProperty = RegisterProperty("Tables", typeof(ObservableCollection<Table>)); /// <summary> /// Gets or sets the bed collection. /// </summary> public ObservableCollection<Bed> Beds { get { return GetValue<ObservableCollection<Bed>>(BedsProperty); } set { SetValue(BedsProperty, value); } } /// <summary> /// Register the Beds property so it is known in the class. /// </summary> public static readonly PropertyData BedsProperty = RegisterProperty("Beds", typeof(ObservableCollection<Bed>)); #endregion } Next, we are going to create the view model. Again, by the use of code snippets explained earlier in this article, the view model is set up within a few minutes: /// <summary> /// Room view model. /// </summary> public class RoomViewModel : ViewModelBase { #region Variables private int _bedIndex = 1; private int _tableIndex = 1; #endregion #region Constructor & destructor /// <summary> /// Initializes a new instance of the <see cref="RoomViewModel"/> class. /// </summary> public RoomViewModel(Models.Room room) { // Store values Room = room; // Create commands AddTable = new Command(OnAddTableExecuted); AddBed = new Command(OnAddBedExecuted); } #endregion #region Properties /// <summary> /// Gets the title of the view model. /// </summary> /// <value>The title.</value> public override string Title { get { return "Room"; } } #region Models /// <summary> /// Gets or sets the room. 
/// </summary> [Model] public Models.Room Room { get { return GetValue<Models.Room>(RoomProperty); } private set { SetValue(RoomProperty, value); } } /// <summary> /// Register the Room property so it is known in the class. /// </summary> public static readonly PropertyData RoomProperty = RegisterProperty("Room", typeof(Models.Room)); #endregion #region View model /// <summary> /// Gets or sets the name. /// </summary> [ViewModelToModel("Room")] public string Name { get { return GetValue<string>(NameProperty); } set { SetValue(NameProperty, value); } } /// <summary> /// Register the Name property so it is known in the class. /// </summary> public static readonly PropertyData NameProperty = RegisterProperty("Name", typeof(string)); /// <summary> /// Gets or sets the table collection. /// </summary> [ViewModelToModel("Room")] public ObservableCollection<Models.Table> Tables { get { return GetValue<ObservableCollection<Models.Table>>(TablesProperty); } set { SetValue(TablesProperty, value); } } /// <summary> /// Register the Tables property so it is known in the class. /// </summary> public static readonly PropertyData TablesProperty = RegisterProperty("Tables", typeof(ObservableCollection<Models.Table>)); /// <summary> /// Gets or sets the bed collection. /// </summary> [ViewModelToModel("Room")] public ObservableCollection<Models.Bed> Beds { get { return GetValue<ObservableCollection<Models.Bed>>(BedsProperty); } set { SetValue(BedsProperty, value); } } /// <summary> /// Register the Beds property so it is known in the class. /// </summary> public static readonly PropertyData BedsProperty = RegisterProperty("Beds", typeof(ObservableCollection<Models.Bed>)); #endregion #endregion #region Commands /// <summary> /// Gets the AddTable command. /// </summary> public Command AddTable { get; private set; } /// <summary> /// Method to invoke when the AddTable command is executed. /// </summary> private void OnAddTableExecuted() { Tables.Add(new Models.Table(string.Format("Table {0}", _tableIndex++))); } /// <summary> /// Gets the AddBed command. /// </summary> public Command AddBed { get; private set; } /// <summary> /// Method to invoke when the AddBed command is executed. /// </summary> private void OnAddBedExecuted() { Beds.Add(new Models.Bed(string.Format("Bed {0}", _bedIndex++))); } #endregion } As you can see, the view model can only be constructed by passing a Room model object. It is very important to be aware of this construction. The reason that there is no empty constructor is because there is no support for views that do not represent a Room model. In the view model, the properties of the Room model are mapped by the use of the Model attribute and the ViewModelToModel attribute. Last but not least, commands are defined to be able to add new tables and beds to the Room model. Another way to add a new user control is to use the item templates Now the model and the view model are fully set up, the last thing to do is to create the actual view. To accomplish this, add a new WPF user control to the project. First thing to do is to implement the code-behind, since that is the easiest to do: <summary> /// Interaction logic for Room.xaml /// </summary> public partial class Room : UserControl { /// <summary> /// Initializes a new instance of the <see cref="Room"/> class. 
/// </summary> public Room() { // Initialize component InitializeComponent(); } } The only thing we changed from the default user control template is that the user control now derives from Catel.Windows.Controls.UserControl control instead of the default System.Windows.Controls.UserControl control. This is it for the code-behind, let’s move up to the view. The last thing to do now is the actual xaml view. For the sake of simplicity, the actual content is left out (it’s just a grid with a textbox and itemscontrols for the children): <catel:UserControl x: <!-- For the sake of simplicity, the content is left out --> </catel:UserControl> A few things are very important to notice in the xaml code shown above. The first thing to notice is that (like the code-behind), the base class is now catel:UserControl instead of UserControl. That’s all that can be learned about solving the “nested user control” problem. We have set up the model, view model and finally the view. Now, let’s take a look at how it looks in a screenshot (and notice the construction time of the view model, they are really constructed on-demand): The red border is the control that we just created. It shows the name of the room, the view model construction time and the child objects (inside expanders). Have a question about Catel? Use StackOverflow with the Catel tag!
http://docs.catelproject.com/5.4/introduction/mvvm/introduction-to-nested-user-controls-problem/
2018-10-15T11:03:31
CC-MAIN-2018-43
1539583509170.2
[array(['../../../images/introduction/mvvm/introduction-to-nested-user-controls-problem/overview.png', None], dtype=object) array(['../../../images/introduction/mvvm/introduction-to-nested-user-controls-problem/validation.png', None], dtype=object) array(['../../../images/introduction/mvvm/introduction-to-nested-user-controls-problem/hierarchy.png', None], dtype=object) array(['../../../images/introduction/mvvm/introduction-to-nested-user-controls-problem/example.png', None], dtype=object) ]
docs.catelproject.com
Activate legacy chat You can activate the Chat plugin within the instance if you have the admin role. Before you begin Role required: admin About this task Before activating the Chat plugin, consider the installed components, dependencies, and impact. Installed Components: Include tables, a field, business rules, a script include, an application, a user role, properties, an event, and an email notification. For more information, review the components that are installed with chat. Dependencies (installed automatically): Social IT Infrastructure. Impact: The plugin installs new features; it does not overwrite or impact current configurations. It has minimal impact on the system. However, when the system is configured to use short polling (see the Properties in Installed with legacy chat) and the client is in debug mode, users may experience a performance impact. Polling also keeps the session alive when the chat desktop is open.
https://docs.servicenow.com/bundle/helsinki-servicenow-platform/page/use/using-social-it/task/t_ActivateTheChatPlugin.html
2018-10-15T11:16:04
CC-MAIN-2018-43
1539583509170.2
[]
docs.servicenow.com
BZZ URL schemes

Swarm offers 6 distinct URL schemes.

bzz-resource

bzz-resource allows you to receive hash pointers to the content that the ENS entry resolved to at different versions:

bzz-resource://<id> - get the latest update
bzz-resource://<id>/<n> - get the latest update on period n
bzz-resource://<id>/<n>/<m> - get update version m of period n

<id> = ENS name
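As a usage sketch only — the host, port, and ENS name are assumptions for the example, so adjust them to your own node — these URLs can be fetched through a local Swarm HTTP gateway with curl:

```bash
curl http://localhost:8500/bzz-resource:/mysite.eth       # latest update
curl http://localhost:8500/bzz-resource:/mysite.eth/5     # latest update in period 5
curl http://localhost:8500/bzz-resource:/mysite.eth/5/2   # version 2 of period 5
```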
https://swarm-guide.readthedocs.io/en/latest/usage/bzz.html
2018-10-15T10:19:57
CC-MAIN-2018-43
1539583509170.2
[]
swarm-guide.readthedocs.io
Best Practices for Working with AWS Lambda Functions The following are recommended best practices for using AWS Lambda: Topics Function Code Separate the Lambda handler (entry point) from your core logic. This allows you to make a more unit-testable function. In Node.js this may look like: exports.myHandler = function(event, context, callback) { var foo = event.foo; var bar = event.bar; var result = MyLambdaFunction (foo, bar); callback(null, result); } function MyLambdaFunction (foo, bar) { // MyLambdaFunction logic here }. Use Environment Variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable. Control the dependencies in your function's deployment package. The AWS Lambda execution environment contains a number of libraries such as the AWS SDK for the Node.js and Python runtimes (a full list can be found here: Lambda Execution Environment and Available Libraries). To enable the latest set of features and security updates, Lambda will periodically update these libraries. These updates may introduce subtle changes to the behavior of your Lambda function. To have full control of the dependencies your function uses, we recommend packaging all your dependencies with your deployment package. Minimize your deployment package size to its runtime necessities. This will reduce the amount of time that it takes for your deployment package to be downloaded and unpacked ahead of invocation. For functions authored in Java or .NET Core, avoid uploading the entire AWS SDK library as part of your deployment package. Instead, selectively depend on the modules which pick up components of the SDK you need (e.g. DynamoDB, Amazon S3 SDK modules and Lambda core libraries). Reduce the time it takes Lambda to unpack deployment packages authored in Java by putting your dependency .jarfiles in a separate /lib directory. This is faster than putting all your function’s code in a single jar with a large number of .classfiles. Minimize the complexity of your dependencies. Prefer simpler frameworks that load quickly on Execution Context startup. For example, prefer simpler Java dependency injection (IoC) frameworks like Dagger or Guice, over more complex ones like Spring Framework. Avoid using recursive code in your Lambda function, wherein the function automatically calls itself until some arbitrary criteria is met. This could lead to unintended volume of function invocations and escalated costs. If you do accidentally do so, set the function concurrent execution limit to 0immediately to throttle all invocations to the function, while you update the code. Function Configuration Performance testing your Lambda function is a crucial part in ensuring you pick the optimum memory size configuration. Any increase in memory size triggers an equivalent increase in CPU available to your function. The memory usage for your function is determined per-invoke and can be viewed in AWS CloudWatch Logs. On each invoke a REPORT:entry will be made, as shown below: REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 Duration: 12.34 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 18 MB By analyzing the Max Memory Used:field, you can determine if your function needs more memory or if you over-provisioned your function's memory size. Load test your Lambda function to determine an optimum timeout value. 
It is important to analyze how long your function runs so that you can better determine any problems with a dependency service that may increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling. Use most-restrictive permissions when setting IAM policies. Understand the resources and operations your Lambda function needs, and limit the execution role to these permissions. For more information, see Authentication and Access Control for AWS Lambda. Be familiar with AWS Lambda Limits. Payload size, file descriptors and /tmp space are often overlooked when determining runtime resource limits. Delete Lambda functions that you are no longer using. By doing so, the unused functions won't needlessly count against your deployment package size limit. If you are using Amazon Simple Queue Service as an event source, make sure the value of the function's expected execution time does not exceed the Visibility Timeout value on the queue. This applies both to CreateFunction and UpdateFunctionConfiguration. In the case of CreateFunction, AWS Lambda will fail the function creation process. In the case of UpdateFunctionConfiguration, it could result in duplicate invocations of the function. Alarming and Metrics Use AWS Lambda Metrics and CloudWatch Alarms instead of creating or updating a metric from within your Lambda function code. It's a much more efficient way to track the health of your Lambda functions, allowing you to catch issues early in the development process. For instance, you can configure an alarm based on the expected duration of your Lambda function execution time in order to address any bottlenecks or latencies attributable to your function code. Leverage your logging library and AWS Lambda Metrics and Dimensions to catch app errors (e.g. ERR, ERROR, WARNING, etc.) Stream Event Invokes Test with different batch and record sizes so that the polling frequency of each event source is tuned to how quickly your function is able to complete its task. BatchSize controls the maximum number of records that can be sent to your function with each invoke. A larger batch size can often more efficiently absorb the invoke overhead across a larger set of records, increasing your throughput. Note When there are not enough records to process, instead of waiting, the stream processing function will be invoked with a smaller number of records. Increase Kinesis stream processing throughput by adding shards. A Kinesis stream is composed of one or more shards. Lambda will poll each shard with at most one concurrent invocation. For example, if your stream has 100 active shards, there will be at most 100 Lambda function invocations running concurrently. Increasing the number of shards will directly increase the number of maximum concurrent Lambda function invocations and can increase your Kinesis stream processing throughput. If you are increasing the number of shards in a Kinesis stream, make sure you have picked a good partition key (see Partition Keys) for your data, so that related records end up on the same shards and your data is well distributed. Use Amazon CloudWatch on IteratorAge to determine if your Kinesis stream is being processed. For example, configure a CloudWatch alarm with a maximum setting to 30000 (30 seconds). Async Invokes Create and use Dead Letter Queues to address and replay async function errors. 
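As a sketch of two of the recommendations above — tuning BatchSize per event source and routing failed asynchronous invocations to a dead letter queue — the AWS CLI calls might look like this (the function names, ARNs, and region are placeholders):

```bash
# Tune how many stream records are sent to the function per invocation.
aws lambda create-event-source-mapping \
    --function-name my-stream-processor \
    --event-source-arn arn:aws:kinesis:us-east-1:123456789012:stream/my-stream \
    --starting-position LATEST \
    --batch-size 100

# Send failed asynchronous invocations to an SQS dead letter queue.
aws lambda update-function-configuration \
    --function-name my-async-function \
    --dead-letter-config TargetArn=arn:aws:sqs:us-east-1:123456789012:my-dlq
```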
Lambda VPC The following diagram guides you through a decision tree as to whether you should use a VPC (Virtual Private Cloud): Don't put your Lambda function in a VPC unless you have to. There is no benefit outside of using this to access resources you cannot expose publicly, like a private Amazon Relational Database instance. Services like Amazon Elasticsearch Service can be secured over IAM with access policies, so exposing the endpoint publicly is safe and wouldn't require you to run your function in the VPC to secure it. Lambda creates elastic network interfaces (ENIs) in your VPC to access your internal resources. Before requesting a concurrency increase, ensure you have enough ENI capacity (the formula for this can be found here: Configuring a Lambda Function to Access Resources in an Amazon VPC) and IP address space. If you do not have enough ENI capacity, you will need to request an increase. If you do not have enough IP address space, you may need to create a larger subnet. Create dedicated Lambda subnets in your VPC: This will make it easier to apply a custom route table for NAT Gateway traffic without changing your other private/public subnets. For more information, see Configuring a Lambda Function to Access Resources in an Amazon VPC This also allows you to dedicate an address space to Lambda without sharing it with other resources.
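To make the ENI capacity planning above concrete, the following Python sketch applies the capacity formula as we understand it from the linked VPC documentation (roughly, projected peak concurrent executions multiplied by memory in GB divided by 3 GB). Treat the formula itself as an assumption and verify it against that page before relying on the numbers.

def projected_eni_capacity(peak_concurrent_executions, memory_mb):
    # Assumed formula: peak concurrent executions * (memory in GB / 3 GB).
    return int(round(peak_concurrent_executions * (memory_mb / 1024.0) / 3.0))

# Example: 600 peak concurrent executions at 1536 MB comes out to roughly 300 ENIs,
# which also implies at least that many free IP addresses across the chosen subnets.
print(projected_eni_capacity(600, 1536))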
https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
2018-10-15T10:46:41
CC-MAIN-2018-43
1539583509170.2
[]
docs.aws.amazon.com
Deploying PAS on OpenStack - Step 1: Add PAS to Ops Manager - Step 2: Assign Availability Zones and Networks - Step 3: Configure Domains - Step 4: Configure Networking - Step 5: Configure Application Containers - Step 6: Configure Application Developer Controls - Step 7: Review Application Security Groups - Step 8: Configure UAA - Step 9: Configure CredHub - Step 10: Configure Authentication and Enterprise SSO - Step 11: Configure System Databases - Step 12: (Optional) Configure Internal MySQL - Step 13: Configure File Storage - Step 14: (Optional) Configure System Logging - Step 15: (Optional) Customize Apps Manager - Step 16: (Optional) Configure Email Notifications - Step 17: (Optional) Configure App Autoscaler - Step 18: Configure Cloud Controller - Step 19: Configure Smoke Tests - Step 20: (Optional) Enable Advanced Features - Step 21: Configure Errands - Step 22: Enable Traffic to Private Subnet - Step 23: (Optional) Scale Down and Disable Resources - Step 24: Complete PAS Installation Page last updated: This topic describes how to install and configure Pivotal Application Service (PAS) after deploying Pivotal Cloud Foundry (PCF) on OpenStack. Use this topic when Installing Pivotal Cloud Foundry on OpenStack. Before beginning this procedure, ensure that you have successfully completed all steps in the Provisioning the OpenStack Infrastructure topic and the Configuring BOSH Director on OpenStack topics. Step 1: Add PAS to Ops Manager. Navigate to the Pivotal Cloud Foundry Operations Manager Installation Dashboard. Click the Pivotal Network link on the left to add PAS to Ops Manager. For more information, refer to the Adding and Deleting Products topic. Step 2: Assign Availability Zones and Networks Note: Pivotal recommends at least three Availability Zones for a highly available installation of PAS. Select Assign AZ and Networks. These are the Availability Zones that you create when configuring BOSH PAS. Click Save. Note: When you save this form, a verification error displays because the PCF security group blocks ICMP. You can ignore this error. Step 3: Configure Domains Select Domains. Enter the system and application domains. - The System Domain defines your target when you push apps to PAS. - The Apps Domain defines where PAS. : - Protected Domains: Enter a comma-separated list of domains from which PCF can receive traffic. - Plugin Interface,. By default, containers use the same DNS servers as the host. If you want to override the DNS servers to be used in containers, enter a comma-separated list of servers in DNS Servers. Note: If your deployment uses BOSH DNS, which is the default, you cannot use this field to override the DNS servers used in containers.. (Optional) To disable TCP routing, click Select this option if you prefer to enable TCP Routing at a later time. For more information, see the Configuring TCP Routing in PAS topic.. If you want to disable the Garden Root filesystem (GrootFS), deselect the Enable the GrootFS container image plugin for Garden RunC checkbox. Pivotal recommends using this plugin, so it is enabled by default. However, some external components are sensitive to dependencies with filesystems such as GrootFS. If you experience issues, such as antivirus or firewall compatibility problems, deselect the checkbox to roll back to the plugin that is built into Garden RunC. For more information about GrootFS, see Component: Garden and Container Mechanics. is enabled by default. 
In an upgrade, NFSv3 volume services is set to the same setting as it was named volume-servicesthat belongs to organizational units (OU) named service-accountsand my-company, and your domain is named. Click Save. Step 6:. Select the Allow Space Developers to manage network policies checkbox to permit developers to manage their own network policies for their applications. Select the Enable Service Discovery for Apps checkbox to let apps find each others’ internal addresses to communicate directly, container-to-container. Click Save. Step 7: menu When you configure the UAA to use an internal MySQL database, it uses the type of database selected in the Databases pane. See the Configure Internal Databases section for details. Select Internal MySQL. Note: If you configure your system databases as external in the Databases pane, selecting Internal MySQL in the UAA pane has no effect. Click Save. Ensure that you complete the “Configure Internal MySQL” step later in this topic to configure high availability for your internal MySQL databases. External Database Configuration From the UAA section in Pivotal Application Service (PAS), CredHub Note: Enabling CredHub is not required. However, you cannot leave the fields under Encryption Keys blank. If you do not intend to use CredHub, enter any text in the Name and Key fields as placeholder values. - Select CredHub. Choose the location of your CredHub database. PAS includes this CredHub database for services to store their service instance credentials. If you chose External, enter the following: - Hostname. This is the IP address of your database server. - TCP Port. This is the port of your database server, such as 3306. - Username. This is a unique username that can access your CredHub database on the database server. - Password. This is the password for the provided username. - Database CA Certificate. This certificate is used when encrypting traffic to and from the database., set the number of instances to 2. This is the minimum instance count required for high availability. Click Save. For more information about using CredHub for securing service instance credentials, see Securing Service Instance Credentials with Runtime CredHub. Step 11: Configure System Databases You can configure PAS to use the internal MySQL database provided with PCF, or you can configure an external database provider 12: (Optional) Configure Internal MySQL to configure high availability for your internal MySQL databases. External Database Configuration WARNING: Protect whichever database you use in your deployment with a password. To create your Pivotal Application Service (PAS) databases, follow the procedure below. Note: Exact configurations depend on your database provider. The following procedure uses AWS RDS as an example. PAS PAS, select Databases. Select the External Databases option. Note: If you configure databases as external, you cannot configure an internal database in the UAA pane.. Note: Ensure that the networkpolicyserver database user has the ALL PRIVILEGESpermission. Click Save. Step 12: . Warning: You must configure a load balancer to achieve complete high-availability., disable 13: Pivotal Application Service (PAS) tile, select File Storage. Select Internal WebDAV, and click Save. External S3 or Ceph Filestore To use an external S3-compatible filestore for PAS file storage, perform the following steps: - In the PAS tile, select File Storage. 
Select the External S3-Compatible Filestore option and complete the following fields: - Enter the Endpoint for your region. For example, in the us-west-2 region, enter. - Enter the Access Key and Secret Key of the pcf-useryou created when configuring AWS for PCF. - From the S3 Signature Version dropdown, select V4 Signature. For more information about S4 signatures, see Signing AWS API Requests in the AWS documentation. - For Region, enter the region in which your S3 buckets are located. us-west-2is an example of an acceptable value for this field. - Select Server-side Encryption to encrypt the contents of your S3 filestore. This option is only available for AWS S3. - (Optional) If you selected Server-side Encryption, you can also specify a KMS Key ID. PAS uses the KMS key to encrypt files uploaded to the blobstore. If you do not provide a KMS Key ID, PAS uses the default AWS key. For more information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS). (Optional) Enable Versioning is enabled for all buckets listed below for S3 blobstore backup and restore, if your buckets have versioning enabled. Note: Backup and restore only supports versioned S3-compatible blobstores. For more information about setting up versioning for your blobstore, see Enable Versioning on Your External Blobstore in the Backup and Restore with External Blobstores topic of the Cloud Foundry documentation. - Enter the following values for the remaining fields: Click Save. Note: For more information regarding AWS S3 Signatures, see the Authenticating Requests topic in the AWS documentation. Note: To enable backup and restore of your PAS tile that uses unversioned S3 compatible blobstore, see Enable External Blobstore Backups. Step 14: (Optional) Configure System Logging If you forward logging messages to an external Reliable Event Logging Protocol (RELP) server, complete the following steps: -, uncheck specify a custom syslog rule, enter it in the Custom rsyslog Configuration field in RainerScript syntax. For more information about customizing syslog rules, see Customizing Syslog Rules. - Click Save. Step 16: (Optional) Configure Email Notifications PAS 17: 15 seconds, and the maximum interval is 120 seconds. The default value is 35.. Click Save. Step 20: . Step 21: PAS, the Smoke Test Errand defaults to always run. The PAS_1<< deselecting the checkbox for this errand PAS. See the Enabling NFS Volume Services topic for more information. Step 22: Enable Traffic to Private Subnet Unless you are using your own load balancer, you must enable traffic flow to the OpenStack private subnet as follows. Give each HAProxy a way of routing traffic into the private subnet by providing public IP addresses as floating IP addresses. Click Resource Config. Enter one or more IP addresses in Floating IPs for each HAProxy. (Optional) If you have enabled the TCP Routing feature, enter one or more IP addresses in Floating IPs column for each TCP Router. Click Save. Step 23: -down menus under Instances for each job. By default, PAS also uses an internal filestore and internal databases. If you configure PAS to use external resources, you can disable the corresponding system-provided resources in Ops Manager to reduce costs and administrative overhead. Complete the following procedures to disable specific VMs in Ops Manager:. - Backup Prepare Node: Enter 0in Instances. If you disabled TCP routing, enter 0Instances in the TCP Router field. If you are not using HAProxy, enter 0Instances in the HAProxy field. 
Click Save. Step 24: Complete PAS Installation Click the Installation Dashboard link to return to the Installation Dashboard. Click Apply Changes. If the following ICMP error message appears, click Ignore errors and start the install. PAS installs. The image shows the Changes Applied message that Ops Manager displays when the installation process successfully completes. Return to Installing Pivotal Cloud Foundry on OpenStack.
https://docs.pivotal.io/pivotalcf/2-1/customizing/openstack-er-config.html
2018-10-15T11:13:24
CC-MAIN-2018-43
1539583509170.2
[array(['images/asg.png', 'Asg'], dtype=object) array(['images/errands-on.png', 'Errands on'], dtype=object)]
docs.pivotal.io
Product hierarchy in Release Management v2 Once a product has releases defined, the Product Hierarchy related link displays the hierarchy of releases and features associated with the product. Figure 1. Release Management v2 Product Hierarchy
https://docs.servicenow.com/bundle/istanbul-it-service-management/page/product/release-management/concept/c_ProductHierarchy.html
2018-10-15T11:05:13
CC-MAIN-2018-43
1539583509170.2
[]
docs.servicenow.com
Effects of Data Type Changes on Existing Attributes Data Type Change Behavior If the type of an existing attribute is changed in the Modeler, in most cases the existing column will be dropped and a new column will be created. For some attribute type changes, Mendix tries to convert existing data in the database to the new type. If the data should NOT be converted to the new type, you must remove the attribute in the Modeler and create a new attribute (with the same name) instead of only changing the data type. Even if you change the type and rename the column, Mendix remembers the old column name and will try to convert the column values if possible. Conversion Table In the table below, for each data type change, you can see whether Mendix will convert the values. Manual Conversion Even if Mendix cannot convert the values of a specific column to another type, you can still manage that manually. First, change the name of the attribute (for example, append the text “Deleted” to its name). Second, create a new attribute with the same name and the new data type. Next, look up occurrences of the old (renamed) attribute in the whole model and change these to the new attribute. Make sure that no microflow or form still uses the old attribute, then create a microflow that copies the values of the old attribute to the new one and add a button to a form that calls this microflow. When you deploy, you have to run this microflow one time, after which you can remove both the microflow and the button pointing to it. Then you can also remove the old attribute.
https://docs.mendix.com/refguide6/attributes-type-migration
2018-10-15T11:37:17
CC-MAIN-2018-43
1539583509170.2
[]
docs.mendix.com
Msg Applies To: Windows Vista, Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, Windows 8 Sends a message to a user on a Remote Desktop Session Host (RD Session Host) server. For examples of how to use this command, see Examples. Note In Windows Server 2008 R2, Terminal Services was renamed Remote Desktop Services. To find out what's new in the latest version, see What’s New in Remote Desktop Services in Windows Server 2012 in the Windows Server TechNet Library.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc771903%28v%3Dws.11%29
2018-10-15T10:29:39
CC-MAIN-2018-43
1539583509170.2
[]
docs.microsoft.com
A transparent proxy configuration is useful if you want to avoid having to configure the proxy server in the application programs (e.g., in the web browser). In this case, all web queries sent from a client are automatically rerouted to and answered by the proxy server. The prerequisite for such a configuration is that the proxy server is entered as the default gateway in the network configuration on all clients. The LDAP authentication on the proxy server must not be enabled. If the Univention Configuration Registry variable squid/transparentproxy is set to yes, packet filter rules are automatically included. These rules redirect all queries for the ports specified in the Univention Configuration Registry variable squid/webports that are routed over the UCS system to the proxy server. After setting the variable, the Univention Firewall component needs to be restarted with /etc/init.d/univention-firewall restart.
http://docs.software-univention.de/networks-3.1.html
2018-10-15T10:42:22
CC-MAIN-2018-43
1539583509170.2
[]
docs.software-univention.de
You can set specific responses to host failures that occur in your vSphere HA cluster. This page is editable only if you have enabled vSphere HA. Procedure - In the vSphere Web Client, browse to the vSphere HA cluster. - Click the Configure tab. - Select vSphere Availability and click Edit. - Click Failures and Responses and then expand Host Failure Response. - Select from the following configuration options. - Click OK. Results Your settings for the host failure response take effect.
https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.avail.doc/GUID-C26254A3-9A4F-4D32-96EC-4A08314753B1.html
2018-10-15T10:10:43
CC-MAIN-2018-43
1539583509170.2
[]
docs.vmware.com
Multi-User Account To allow other people to edit your documents, or to allow specific people to view your documents (without the ability to edit them), you need to switch to a multi-user account. After you add a person to your account, they will be able to see your account's documents list and edit or view documents, based on permissions you assign. Before getting started, you need to know the person's email address. Contents Switch to a Multi-User Account - Go to your Account Settings page. - Go to the Settings section. - Do you see a list of Users and Groups? Your account is already multi-user. - Look for the label Do you want to invite users to join (your company)? - Click the Expand button next to the label. Add Users to Your Account - Go to your Account Settings page. - Go to the Settings section. - Click the New User button. - Type in the person's name and email. - For Group membership: - Select Administrators if you want this person to administer your account with the same permissions you have - including adding/deleting users, purchasing licenses, etc. Edit and View membership is automatically inherited. - Select Edit Projects if you want this person to be able to edit any document in your account. View membership is automatically inherited. This user will need to be assigned an annual license in order to edit documents. - Select View Projects if you want this person to be able to view any document in your account. This user does not need a license. When the user views a document, their usage will be deducted from the pool of view-mode time. - Click Save. - An email is sent to the person to invite them into your account. If the person doesn't already have a Displayr account, they will be instructed to sign up first. Allow Users to Edit Documents Assuming you have added a user with permission to edit documents, you still need to assign a license to this user so they can use edit-mode time. Without this, when they try to edit a document they will see an error message. Purchase a License for the New User - Go to your Account Settings page. - Go to the Licenses section. - Do you already have an Unassigned PAYG or annual license? If so, you can skip the remaining steps and instead assign this license to the new user. - Click Add next to the type of license to purchase. You can choose either a PAYG or an annual license. See Account_Settings#Licenses for guidance. Assign a License to the New User - Go to your Account Settings page. - Go to the Licenses section. - Next to the Unassigned PAYG or annual license, click Assign. - Select the new user and click OK. - The new user can now edit your documents. The system will not allow more than 1 annual license to be assigned to a single user. This is because having an annual licence already enables access to the company's shared pool of view- and edit-mode time. Transfer a License from Another User If you have previously assigned an annual license to a user, and that user no longer needs to edit documents, you can remove the license assignment by clicking the Unassign button. This will return the license to the Unassigned row, whence it can be assigned to a different user. Please note: - If you unassign a License by mistake, you can immediately assign it back to the same user provided that you have not made any other changes in the meantime. - Except as noted in the previous dot point, you cannot assign an annual license to a user who had one within the preceding 7 days.
http://docs.displayr.com/wiki/Multi-User_Account
2017-12-11T04:03:17
CC-MAIN-2017-51
1512948512121.15
[]
docs.displayr.com
Why not let us know how to help you most effectively? New ideas and suggested improvements are always welcome. Please share your thoughts with the Customer Success Team through one of the following mechanisms: - Feedback Button - on the home page of the Support Portal, click the Feedback button and provide us with your comments. - Survey - sent to you when your support case is closed. There is a feedback section as part of completing the survey. - Log a support case within the Support Portal. - Provide your comments to your Customer Success Manager. Please take the time to tell us what you think. The feedback is used to continuously improve our service. Thank you for your comments and business.
http://docs.alfresco.com/support/concepts/su-feedback.html
2017-12-11T03:49:35
CC-MAIN-2017-51
1512948512121.15
[]
docs.alfresco.com
DataStax Enterprise times out when starting When starting DataStax Enterprise as a service a timed out message is displayed. When starting DataStax Enterprise (DSE) as a service, a script sets up the environment and launches the service. After the DSE service is launched, the script verifies if the service is running. The service takes a few seconds to start, and might display: WARNING: Timed out while waiting for DSE to start. This error does not necessarily mean that the DSE service failed to start. Verify by checking the log files in /var/log/cassandra/system.out. The start script checks if the DSE service is running once per second, so the number of checks is equal to the number of seconds. To increase the time until the service is declared not to launch successfully, uncomment and edit the WAIT_FOR_START option in the /etc/default/dse file, and then restart the DataStax Enterprise: # Uncomment if you want longer/shorter waits checking if the service is up WAIT_FOR_START=14
https://docs.datastax.com/en/dse-trblshoot/doc/troubleshooting/dseTimesOut.html
2017-12-11T04:08:57
CC-MAIN-2017-51
1512948512121.15
[]
docs.datastax.com
How to manage marketplace listing The video below will show you how to manage marketplace listing: You can also follow the steps below: - Login to the Admincp - From the left menu, click on “Manage Features” - Click on “Marketplace” - Click on “Listings” - Select category - Click on “Edit” to edit a category - Click on delete icon to delete a category Thanks for Reading
http://docs.crea8social.com/docs/site-management/how-to-manage-marketplace-listing/
2019-10-13T22:20:46
CC-MAIN-2019-43
1570986648343.8
[array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-17.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-41.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-9-23.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-14-16.png', None], dtype=object) array(['http://docs.crea8social.com/wp-content/uploads/2018/02/download-6-12.png', None], dtype=object) ]
docs.crea8social.com
DynaMesh Resolution and Details The geometry resolution generated by DynaMesh is limited to a cube of 2048×2048 (about 4 million polygons per cube face and approximately 24 million polygons per DynaMesh/Subtool). Remember this is a maximum – DynaMesh is intended as a concept tool and works best at lower resolutions – working with many millions of polygons will slow down your computer. However, also bear in mind that when the mesh bounding box is pushed out of this maximum resolution, the geometry can start lose details. When you start with a low resolution, a larger model will be allowed before it starts losing details. A higher resolution will allow more small details, but it will be limited in terms of how much you can expand the bounding box size – in other words, how far you can push the surface between remesh operations. DynaMesh Options: - Resolution Slider: Defines the resolution of the DynaMesh, controlling the overall polygon density of the model. A low value will create a low resolution mesh with a low polygon count, while using a higher value will create a high resolution mesh that will retain more details at the cost of a higher polygon count. A low resolution DynaMesh will update faster while a high resolution one will take more time to update. As long as the DynaMesh remains in a 1024x1024x1024 resolution cube all details will be maintained as you remesh. If your sculpting causes the DynaMesh to exceed a 1024x1024x1024 space, the mesh will be updated to once again fit with the cube. At this point it could begin losing details. - Group mode: When enabled, any DynaMesh with multiple PolyGroups will be split into separate pieces. It will still be kept as one SubTool. - Project mode: When enabled, the current details of the model will be projected onto the DynaMesh automatically. This can be useful when converting a polymesh with existing details to a DynaMesh. Remember that the Resolution setting will play a big part in the amount of detail that can be retained. - Blur Slider: Applies a smoothing effect to the DynaMesh when Project is enabled. A low value generates a small amount of smoothness while a high value will smooth all major details on the model. - Polish Mode: When enabled, this option applies the various ClayPolish settings each time you update the DynaMesh. This is meant to smooth sharp corners. - Thickness slider: Defines the thickness of the shell in relation to the resolution of the DynaMesh. DynaMesh with TransPose TransPose can be highly useful when working with DynaMesh. See the TransPose section of this documentation to learn about actions such as duplicating an inserted mesh (both positive and negative) and working with masks.
http://docs.pixologic.com/user-guide/3d-modeling/modeling-basics/creating-meshes/dynamesh/options/
2019-10-13T22:35:04
CC-MAIN-2019-43
1570986648343.8
[]
docs.pixologic.com
Administration Lab on Demand Administration Introduction to Lab on Demand UI Explanation of Lab on Demand UI. Lab Interface Frequently Asked Questions Frequently asked questions about the Lab on Demand lab interface. Lab Profile Creation and Explanation Lab profile creation, and explanation of lab profile configuration. Lab Series Creation and Explanation Lab Series creation, and explanation of lab series configuration. Organization RAM Limits Max RAM usage, max active lab instances and max RAM per lab profile. Themes Create themes to customize the look and feel of labs using CSS, and JavaScript. Virtual Machine Profile Creation and Explanation Virtual machine profile creation, and explanation of virtual machine profile configuration.
https://docs.learnondemandsystems.com/lod/home-landing-pages/lod-admin-landing.md
2019-10-13T23:54:39
CC-MAIN-2019-43
1570986648343.8
[]
docs.learnondemandsystems.com
Invoke-Web Request Syntax Invoke-WebRequest [-UseBasicParsing] [-Uri] <Uri> [-WebSession <WebRequestSession>] [-SessionVariable <String>] [-Credential <PSCredential>] [-UseDefaultCredentials] [-CertificateThumbprint <String>] [-Certificate <X509Certificate>] [-UserAgent <String>] [-DisableKeepAlive] [-TimeoutSec <Int32>] [-Headers <IDictionary>] [-MaximumRedirection <Int32>] [-Method <WebRequestMethod>] [-Proxy <Uri>] [-ProxyCredential <PSCredential>] [-ProxyUseDefaultCredentials] [-Body <Object>] [-ContentType <String>] [-TransferEncoding <String>] [-InFile <String>] [-OutFile <String>] [-PassThru] [. Note By default, script code in the web page may be run when the page is being parsed to populate the ParsedHtml property. Use the -UseBasicParsing switch to suppress this. Examples Example 1: Send a web request This command uses the Invoke-WebRequest cmdlet to send a web request to the Bing.com site. $R = Invoke-WebRequest -URI $R.AllElements | Where-Object { $_.name -like "* Value" -and $_.tagName -eq "INPUT" } | Select-Object Name, Value name value ---- ----- From Value 1 To Value 5280 The first command issues the request and saves the response in the $R variable. The second command filters the objects in the AllElements property where the name property is like "* Value" and the tagName is "INPUT". The filtered results are piped to Select-Object to select the name and value properties. Example 2: Use a stateful web service This example shows how to use the Invoke-WebRequest cmdlet with a stateful web service, such as Facebook. $R = Invoke-WebRequest -SessionVariable fb # This command stores the first form in the Forms property of the $R variable in the $Form variable. $Form = $R.Forms[0] # This command shows the fields available in the Form. $Form.fields Key Value --- ----- ... email pass ... # These commands populate the username and password of the respective Form fields. $Form.Fields["email"]="[email protected]" $Form.Fields["pass"]="P@ssw0rd" # This command creates the Uri that will be used to log in to facebook. # The value of the Uri parameter is the value of the Action property of the form. $Uri = "" + $Form.Action # Now the Invoke-WebRequest cmdlet is used to sign into the Facebook web service. # The WebRequestSession object in the $FB variable is passed as the value of the WebSession parameter. # The value of the Body parameter is the hash table in the Fields property of the form. # The value of the *Method* parameter is POST. The command saves the output in the $R variable. $R = Invoke-WebRequest -Uri $Uri -WebSession $FB -Method POST -Body $Form.Fields $R.StatusDescription The first command uses the Invoke-WebRequest cmdlet to send a sign-in request. The command specifies a value of "FB" for the value of the SessionVariable parameter, and saves the result in the $R variable. When the command completes, the $R variable contains an HtmlWebResponseObject and the $FB variable contains a WebRequestSession object. After the Invoke-WebRequest cmdlet signs in to facebook, the StatusDescription property of the web response object in the $R variable indicates that the user is signed in successfully. Example 3: Get links from a web page This command gets the links in a web page. (Invoke-WebRequest -Uri "").Links.Href The Invoke-WebRequest cmdlet gets the web page content. Then the Links property of the returned HtmlWebResponseObject is used to display the Href property of each link. 
Example 4: Catch non success messages from Invoke-WebRequest When Invoke-WebRequest encounters a non-success HTTP message (404, 500, etc.), it returns no output and throws a terminating error. To catch the error and view the StatusCode you can enclose execution in a try/catch block. The following example shows how to accomplish this. try { $response = Invoke-WebRequest -Uri "" -ErrorAction Stop # This will only execute if the Invoke-WebRequest is successful. $StatusCode = $Response.StatusCode } catch { $StatusCode = $_.Exception.Response.StatusCode.value__ } $StatusCode 404 The first command calls Invoke-WebRequest with an ErrorAction of Stop, which forces Invoke-WebRequest to throw a terminating error on any failed requests. The terminating error is caught by the catch block which retrieves the StatusCode from the Exception object. Parameters Specifies the body of the request. The body is the content of the request that follows the headers. You can also pipe a body value to Invoke-WebRequest. The Body parameter can be used to specify a list of query parameters or specify the content of the response. When the input is a GET request and the body is an IDictionary (typically, a hash table), the body is added to the URI as query parameters. For other GET requests, the body is set as the value of the request body in the standard name=value format. When the body is a form, or it is the output of an Invoke-WebRequest call, PowerShell sets the request content to the form fields. For example: $r = Invoke-WebRequest $r.Forms\[0\].Name = "MyName" $r.Forms\[0\].Password = "MyPassword" Invoke-RestMethod -Body $r - or - Invoke-RestMethod -Body $r.Forms\[0\] Specifies the client certificate that is used for a secure web request. Enter a variable that contains a certificate or a command or expression that gets the certificate. To find a certificate, use Get-PfxCertificate or use the Get-ChildItem cmdlet in the Certificate ( Cert:) drive. If the certificate is not valid or does not have sufficient authority, the command fails. Specifies the digital public key certificate (X509) of a user account that has permission to send the request. content type of the web request. If this parameter is omitted and the request method is POST, Invoke-WebRequest sets the content type to application/x-www-form-urlencoded. Otherwise, the content type is not specified in the call. Specifies a user account that has permission to send the request. The default is the current user. Type a user name, such as User01 or Domain01\User01, or enter a PSCredential object, such as one generated by the Get-Credential cmdlet. Indicates that the cmdlet sets the KeepAlive value in the HTTP header to False. By default, KeepAlive is True. KeepAlive establishes a persistent connection to the server to facilitate subsequent requests. Specifies the headers of the web request. Enter a hash table or dictionary. To set UserAgent headers, use the UserAgent parameter. You cannot use this parameter to specify UserAgent or cookie headers. Gets the content of the web request from a file. Enter a path and file name. If you omit the path, the default is the current location. Specifies how many times PowerShell redirects a connection to an alternate Uniform Resource Identifier (URI) before the connection fails. The default value is 5. A value of 0 (zero) prevents all redirection. Specifies the method used for the web request. 
The acceptable values for this parameter are: - Default - - Get - Head - Merge - Options - Patch - Put - Trace Specifies the output file for which this cmdlet saves the response body. Enter a path and file name. If you omit the path, the default is the current location. By default, Invoke-WebRequest returns the results to the pipeline. To send the results to a file and to the pipeline, use the Passthru parameter. Indicates that the cmdlet returns the results, in addition to writing them to a file. This parameter is valid only when the OutFile parameter is also used in the command. Specifies a proxy server for the request, rather than connecting directly to the Internet resource. Enter the URI of a network proxy server. Specifies a user account that has permission to use the proxy server that is specified by the Proxy parameter. The default is the current user. Type a user name, such as User01 or Domain01\User01, or enter a PSCredential object, such as one generated by the Get-Credential cmdlet. This parameter is valid only when the Proxy parameter is also used in the command. You cannot use the ProxyCredential and ProxyUseDefaultCredentials parameters in the same command. Indicates that the cmdlet uses the credentials of the current user to access the proxy server that is specified by the Proxy parameter. This parameter is valid only when the Proxy parameter is also used in the command. You cannot use the ProxyCredential and ProxyUseDefaultCredentials parameters in the same command. Specifies a variable for which this cmdlet creates a web request session and saves it in the value. Enter a variable name without the dollar sign ( $) symbol. When you specify a session variable, Invoke-WebRequest creates a web request session object and assigns it to a variable with the specified name in your PowerShell session. You can use the variable in your session as soon as the command completes. use the web request session in subsequent web requests, specify the session variable in the value of the WebSession parameter. PowerShell uses the data in the web request session object when establishing the new connection. To override a value in the web request session, use a cmdlet parameter, such as UserAgent or Credential. Parameter values take precedence over values in the web request session. You cannot use the SessionVariable and WebSession parameters in the same command. Specifies how long the request can be pending before it times out. Enter a value in seconds. The default value, 0, specifies an indefinite time-out. A Domain Name System (DNS) query can take up to 15 seconds to return or time out. If your request contains a host name that requires resolution, and you set TimeoutSec to a value greater than zero, but less than 15 seconds, it can take 15 seconds or more before a WebException is thrown, and your request times out. Specifies a value for the transfer-encoding HTTP response header. The acceptable values for this parameter are: - Chunked - Compress - Deflate - GZip - Identity Specifies the Uniform Resource Identifier (URI) of the Internet resource to which the web request is sent. Enter a URI. This parameter supports HTTP, HTTPS, FTP, and FILE values. This parameter is required. Indicates that the cmdlet uses the response object for HTML content without Document Object Model (DOM) parsing. This parameter is required when Internet Explorer is not installed on the computers, such as on a Server Core installation of a Windows Server operating system. 
-UseDefaultCredentials Indicates that the cmdlet uses the credentials of the current user to send the web request. -UserAgent Specifies a user agent string for the web request. The default user agent is similar to Mozilla/5.0 (Windows NT; Windows NT 6.1; en-US) WindowsPowerShell/3.0 with slight variations for each operating system and platform. To test a website with the standard user agent string that is used by most Internet browsers, use the properties of the PSUserAgent class, such as Chrome, FireFox, InternetExplorer, Opera, and Safari. For example, you can pass the PSUserAgent user agent string for Internet Explorer to this parameter. -WebSession Specifies a web request session. Enter the variable name, including the dollar sign ( $). To override a value in the web request session, use a cmdlet parameter, such as UserAgent or Credential. Parameter values take precedence over values in the web request session. To create a web request session, enter a variable name (without a dollar sign) in the value of the SessionVariable parameter of an Invoke-WebRequest command. Invoke-WebRequest creates the session and saves it in the variable. In subsequent commands, use the variable as the value of the WebSession parameter. You cannot use the SessionVariable and WebSession parameters in the same command. Inputs System.Object You can pipe the body of a web request to Invoke-WebRequest. Outputs Microsoft.PowerShell.Commands.HtmlWebResponseObject
https://docs.microsoft.com/en-us/powershell/module/Microsoft.PowerShell.Utility/invoke-webrequest?view=powershell-5.1
2019-10-13T23:16:21
CC-MAIN-2019-43
1570986648343.8
[]
docs.microsoft.com
. Known Issues Compiling Library Projects: // Extension DLL Header file: __declspec( dllexport ) void EnsureManagedInitialization () { // managed code that won't be optimized away System::GC::KeepAlive(System::Int32::MaxValue); } Compile with Visual C++. Versions Prior to Visual C++ 2003 If you are upgrading to Visual Studio 2010 from a version prior to Visual C++ 2003, you may see compiler errors related to the enhanced C++ standard conformance in Visual C++ 2003 Upgrading from Visual C++ 2003 Projects previous built with Visual C++ 2003 should also first be compiled without /clr as Visual Studio now has increased ANSI/ISO compliance and some breaking changes. The change that is likely to require the most attention is Security Features in the CRT. Code that uses the CRT is very likely to produce deprecation warnings. These warnings can be suppressed, but migrating to the new Security-Enhanced Versions of CRT Functions Extensions for C++ won't compile under /clr. Use /clr:oldSyntax instead. Convert C Code to C++ Although Visual Studio will compile C files, it is necessary to convert them to C++ for a /clr compilation. The actual filename doesn't have to be changed; you can use /Tp (see /Tc, /Tp, /TC, /TP (Specify Source File Type).): Reconfigure Project Settings After your project compiles and runs in Visual Studio 2010 New Project Configuration Dialog Box /clr (Common Language Runtime Compilation). As mentioned previously, this step will automatically disable conflicting project settings. Note When upgrading a managed library or web service project from Visual C++ 2003, the /Zl compiler option will added to the Command Line property page. This will cause LNK2001. Remove /Zl from the Command Line property page to resolve. See /Zl (Omit Default Library Name) and How to: Open Project Property Pages for more information. Or, add msvcrt.lib and msvcmrt.lib to the linker's Additional Dependencies property. For projects built with makefiles, incompatible compiler options must be disabled manually once /clr is added. See //clr Restrictions Studio 2010, #using Directive (C/C++).. The common language runtime starts COM as MTA by default; use /CLRTHREADATTRIBUTE (Set CLR Thread Attribute). Using New Visual C++ Features details on converting Managed Extensions for C++, see C++/CLI Migration Primer. For information on .NET programming in Visual C++ see: See Also Concepts Mixed (Native and Managed) Assemblies
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/ms173265%28v%3Dvs.100%29
2019-10-13T22:59:03
CC-MAIN-2019-43
1570986648343.8
[]
docs.microsoft.com
Tools There are many tools you can use to create a diagram, and contributors are free to use the tool of their choice. However, elements within a diagram must be easy to edit or to reproduce with a different tool. Follow the file saving conventions outlined in Use recommended file formats. This ensures that a diagram can be reviewed and edited as OpenStack projects continue to change. Open source tools can help streamline the review process by making diagrams easy to edit. Many open-source tools contain shapes and stencils that can be used in OpenStack. The following is a list of recommended open-source tools:
https://docs.openstack.org/doc-contrib-guide/diagram-guidelines/tools.html
2019-10-13T22:51:46
CC-MAIN-2019-43
1570986648343.8
[]
docs.openstack.org
All public logs Combined display of all available logs of docs. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive). - 18:37, 21 April 2014 Jeff Epperson (talk | contribs) uploaded File:Email settings2.jpg (Select Accounts to view/edit/add email accounts to your CallProof account.)
http://docs.callproof.com/index.php?title=Special:Log&page=File%3AEmail+settings2.jpg
2019-10-13T22:18:53
CC-MAIN-2019-43
1570986648343.8
[]
docs.callproof.com
InterPlay 2.2.1 User Guide About external databases or internal tables: see Datasources. General configuration for the system and user interface: see Global settings. Object Type Selections. Collection Type Selections. The configuration editor is implemented via InterPlay, and configuration objects are handled as Objects.
https://docs.axway.com/bundle/InterPlay_221_UserGuide_allOS_en_HTML5/page/Content/UserGuide/Common/Concepts/Configuration_objects/Configuration_objects_overview.htm
2019-10-13T23:13:10
CC-MAIN-2019-43
1570986648343.8
[]
docs.axway.com
- Microsoft Dynamics 365 Source Microsoft Dynamics 365 sources are created in your Coveo Cloud organization and managed by Coveo for Microsoft Dynamics 365 (see What Is Coveo for Microsoft Dynamics 365? and Installing Coveo for Microsoft Dynamics 365). A Microsoft Dynamics 365 source supports the Microsoft Dynamics 365 security scheme so that in the search results, users can only see the items they have access to in Microsoft Dynamics 365 (see Coveo, Dynamics, and Security). In the Coveo Cloud administration console, administrators and content managers can view the Microsoft Dynamics 365 source edition panel where they can modify the mappings (see Administration Console). Source Features Summary Edit a Microsoft Dynamics 365 Source If not already in the Edit a Microsoft Dynamics 365 Source panel, go to the panel (in the main menu, under Content, select Sources > Microsoft Dynamics 365 source row > Edit in the Action bar). In the Configuration tab, no actions are available. You must manage your Microsoft Dynamics 365 source from the Coveo for Microsoft Dynamics 365 configuration interface (see About the Configuration Interface and Add/Edit a Microsoft Dynamics 365 Source Panel). Optionally, consider editing or adding mappings (see Adding and Managing Source Mappings). You can only manage mapping rules once you build the source (see Refresh, Rescan, or Rebuild Sources). Click Save and Rebuild when you want to save your source configuration. Rebuilding is required to take into account changes made to the field mapping rules. If you do not rebuild, changes will only apply to new or modified items. What’s Next? Review the default update schedule in which a source refresh starts every 15 minutes (see Edit a Source Schedule).
https://docs.coveo.com/en/1915/
2019-10-13T22:25:01
CC-MAIN-2019-43
1570986648343.8
[]
docs.coveo.com
Fixtures To test functionality correctly, we must use consistent data. If we are testing our code with the same data each time, we can trust our tests to yield reliable results and to identify when the logic changes. Each test run in SilverStripe starts with a fresh database containing no records. Fixtures provide a way to describe the initial data to load into the database. The SapphireTest class takes care of populating a test database with data from fixtures - all we have to do is define them. To include your fixture file in your tests, you should define it as your $fixture_file: app/tests/MyNewTest.php use SilverStripe\Dev\SapphireTest; class MyNewTest extends SapphireTest { protected static $fixture_file = 'fixtures.yml'; } You can also use an array of fixture files, if you want to use parts of multiple other tests. If you are using SilverStripe\Dev\TestOnly dataobjects in your fixtures, you must declare these classes within the $extra_dataobjects variable. app/tests/MyNewTest.php use SilverStripe\Dev\SapphireTest; class MyNewTest extends SapphireTest { protected static $fixture_file = [ 'fixtures.yml', 'otherfixtures.yml' ]; protected static $extra_dataobjects = [ Player::class, Team::class, ]; } Typically, you'd have a separate fixture file for each class you are testing - although overlap between tests is common. Fixtures are defined in YAML. YAML is a markup language which is deliberately simple and easy to read, so it is ideal for fixture generation. Say we have the following two DataObjects: use SilverStripe\ORM\DataObject; use SilverStripe\Dev\TestOnly; class Player extends DataObject implements TestOnly { private static $db = [ 'Name' => 'Varchar(255)' ]; private static $has_one = [ 'Team' => 'Team' ]; } class Team extends DataObject implements TestOnly { private static $db = [ 'Name' => 'Varchar(255)', 'Origin' => 'Varchar(255)' ]; private static $has_many = [ 'Players' => 'Player' ]; } We can represent multiple instances of them in YAML as follows: app/tests/fixtures.yml Team: hurricanes: Name: The Hurricanes Origin: Wellington crusaders: Name: The Crusaders Origin: Canterbury Player: john: Name: John Team: =>Team.hurricanes joe: Name: Joe Team: =>Team.crusaders jack: Name: Jack Team: =>Team.crusaders This YAML is broken up into three levels, signified by the indentation of each line. In the first level of indentation, Player and Team, represent the class names of the objects we want to be created. The second level, john/ joe/ jack & hurricanes/ crusaders, are identifiers. Each identifier you specify represents a new object and can be referenced in the PHP using objFromFixture $player = $this->objFromFixture('Player', 'jack'); in our example YAML, his team is the Hurricanes which is represented by =>Team.hurricanes. This sets the has_one relationship for John with with the Team object hurricanes. Note that we use the name of the relationship (Team), and not the name of the database field (TeamID). Also be aware the target of a relationship must be defined before it is referenced, for example the hurricanes team must appear in the fixture file before the line Team: =>Team.hurricanes. This style of relationship declaration can be used for any type of relationship (i.e has_one, has_many, many_many). We can also declare the relationships conversely. 
Another way we could write the previous example is: Player: john: Name: John joe: Name: Joe jack: Name: Jack Team: hurricanes: Name: Hurricanes Origin: Wellington Players: =>Player.john crusaders: Name: Crusaders Origin: Canterbury Players: =>Player.joe,=>Player.jack The database is populated by instantiating DataObject objects and setting the fields declared in the YAML, then calling write() on those objects. Take for instance the hurricances record in the YAML. It is equivalent to writing: $team = new Team([ 'Name' => 'Hurricanes', 'Origin' => 'Wellington' ]); $team->write(); $team->Players()->add($john); As the YAML fixtures will call write, any onBeforeWrite() or default value logic will be executed as part of the test. Fixtures for namespaced classes As of SilverStripe 4 you will need to use fully qualfied class names in your YAML fixture files. In the above examples, they belong to the global namespace so there is nothing requires, but if you have a deeper DataObject, or it has a relationship to models that are part of the framework for example, you will need to include their namespaces: MyProject\Model\Player: john: Name: join MyProject\Model\Team: crusaders: Name: Crusaders Origin: Canterbury Players: =>MyProject\Model\Player.john If your tests are failing and your database has table names that follow the fully qualified class names, you've probably forgotten to implement private static $table_name = 'Player'; on your namespaced class. This property was introduced in SilverStripe 4 to reduce data migration work. See DataObject for an example. Defining many_many_extraFields many_many relations can have additional database fields attached to the relationship. For example we may want to declare the role each player has in the team. use SilverStripe\ORM\DataObject; class Player extends DataObject { private static $db = [ 'Name' => 'Varchar(255)' ]; private static $belongs_many_many = [ 'Teams' => 'Team' ]; } class Team extends DataObject { private static $db = [ 'Name' => 'Varchar(255)' ]; private static $many_many = [ 'Players' => 'Player' ]; private static $many_many_extraFields = [ 'Players' => [ Fixture Factories While manually defined fixtures provide full flexibility, they offer very little in terms of structure and convention. Alternatively, you can use the FixtureFactory class, which allows you to set default values, callbacks on object creation, and dynamic/lazy value setting. SapphireTest uses FixtureFactory under the hood when it is provided with YAML based fixtures. The idea is that rather than instantiating objects directly, we'll have a factory class for them. This factory can have blueprints defined on it, which tells the factory how to instantiate an object of a specific type. Blueprints need a name, which is usually set to the class it creates such as Member or Page. Blueprints are auto-created for all available DataObject subclasses, you only need to instantiate a factory to start using them. use SilverStripe\Core\Injector\Injector; $factory = Injector::inst()->create('FixtureFactory'); $obj = $factory->createObject('Team', 'hurricanes'); In order to create an object with certain properties, just add a third argument: $obj = $factory->createObject('Team', 'hurricanes', [ 'Name' => 'My Value' ]); It is important to remember that fixtures are referenced by arbitrary identifiers ('hurricanes'). These are internally mapped to their database identifiers. After we've created this object in the factory, getId is used to retrieve it by the identifier. 
$databaseId = $factory->getId('Team', 'hurricanes'); Default Properties Blueprints can be overwritten in order to customise their behavior. For example, if a Fixture does not provide a Team name, we can set the default to be Unknown Team. $factory->define('Team', [ 'Name' => 'Unknown Team' ]); Dependent Properties Values can be set on demand through anonymous functions, which can either generate random defaults, or create composite values based on other fixture data. $factory->define('Member', [ 'Email' => function($obj, $data, $fixtures) { if(isset($data['FirstName']) { $obj->Email = strtolower($data['FirstName']) . '@example.com'; } }, 'Score' => function($obj, $data, $fixtures) { $obj->Score = rand(0,10); } )]; Relations Model relations can be expressed through the same notation as in the YAML fixture format described earlier, through the => prefix on data values. $obj = $factory->createObject('Team', 'hurricanes', [ 'MyHasManyRelation' => '=>Player.john,=>Player.jo->copyVersionToStage(Versioned::DRAFT, Versioned:
https://docs.silverstripe.org/en/4/developer_guides/testing/fixtures/
2019-10-13T23:49:21
CC-MAIN-2019-43
1570986648343.8
[]
docs.silverstripe.org
Overview RightScale easily handles the discovery and inventory of existing resources in public or private clouds. This document summarizes how to: - Connect to clouds - Review your existing cloud resources - Enable additional management functions Connect to Clouds It is easy to connect to public and private clouds or vSphere environments: - Add a public cloud to a RightScale account. - Register a private cloud. - Use RightScale Cloud Appliance for vSphere to connect to and manage your vSphere environments. Review Your Existing Cloud Resources To see your existing cloud resources in Cloud Management, navigate to Manage > Instances and Servers. Here you can see all of your existing cloud resources, including those that were just discovered and those that have been launched through RightScale. You can customize the display in a number of ways: - Use the Filter Within The Table field to find instances on particular cloud providers or in particular regions or datacenters. - Sort columns in ascending or descending order. Columns are customizable via the Show/Hide Column button. Learn more about the Instances and Servers page. Enable Additional Management Functions Once you have connected clouds and discovered running instances, you can organize your resources and also perform basic management tasks. For example, once an instance has been discovered, you can click on the actions bar to do things including: - Add tag(s) - Remove tag(s) - Add to a Deployment - Reboot - Terminate You can also access Audit Trails for each instance. To perform additional management such as monitoring, alerting, running operations scripts, and managed SSH, you will need to install the lightweight RightLink agent. RightScale RightLink™ helps you manage workloads already running in the cloud without relaunching, re-architecting or disrupting running applications. The RightLink agent runs on each server, connects to the RightScale platform, and facilitates two-way communication - the agent can report status or local state changes to the RightScale platform, and the RightScale platform can provide data to the agent as well as send it commands. To install RightScale RightLink:
http://docs.rightscale.com/cm/rs101/discovery_inventory_in_rightscale.html
2018-04-19T15:26:32
CC-MAIN-2018-17
1524125936981.24
[array(['/img/cm-instances-servers-filter.png', 'cm-instances-servers-filter.png'], dtype=object) array(['/img/cm-audit-trails.png', 'cm-audit-trails.png'], dtype=object)]
docs.rightscale.com
Launching an AWS Marketplace Instance You can subscribe to an AWS Marketplace product and launch an instance from the product's AMI using the Amazon EC2 launch wizard. For more information about paid AMIs, see Paid AMIs. To cancel your subscription after launch, you first have to terminate all instances running from it. For more information, see Managing Your AWS Marketplace Subscriptions. To launch an instance from the AWS Marketplace using the launch wizard Open the Amazon EC2 console at. From the Amazon EC2 dashboard, choose Launch Instance. On the Choose an Amazon Machine Image (AMI) page, choose the AWS Marketplace category on the left. Find a suitable AMI by browsing the categories, or using the search functionality. Choose Select to choose your product. A dialog displays an overview of the product you've selected. You can view the pricing information, as well as any other information that the vendor has provided. When you're ready, choose Continue. Note You are not charged for using the product until you have launched an instance with the AMI. Take note of the pricing for each supported instance type, as you will be prompted to select an instance type on the next page of the wizard. Additional taxes may also apply to the product. On the Choose an Instance Type page, select the hardware configuration and size of the instance to launch. When you're done, choose Next: Configure Instance Details. On the next pages of the wizard, you can configure your instance, add storage, and add tags. For more information about the different options you can configure, see Launching an Instance Using the Launch Instance Wizard. Choose Next until you reach the Configure Security Group page. The wizard creates a new security group according to the vendor's specifications for the product. The security group may include rules that allow all IPv4 addresses ( 0.0.0.0/0) access on SSH (port 22) on Linux or RDP (port 3389) on Windows. We recommend that you adjust these rules to allow only a specific address or range of addresses to access your instance over those ports. When you are ready, choose Review and Launch. On the Review Instance Launch page, check the details of the AMI from which you're about to launch the instance, as well as the other configuration details you set up in the wizard. When you're ready, choose Launch to select or create a key pair, and launch your instance. Depending on the product you've subscribed to, the instance may take a few minutes or more to launch. You are first subscribed to the product before your instance can launch. If there are any problems with your credit card details, you will be asked to update your account details. When the launch confirmation page displays, choose View Instances to go to the Instances page. Note You are charged the subscription price as long as your instance is running, even if it is idle. If your instance is stopped, you may still be charged for storage. When your instance is in the running state, you can connect to it. To do this, select your instance in the list and choose Connect. Follow the instructions in the dialog. For more information about connecting to your instance, see Connecting to Your Windows Instance. Important Check the vendor's usage instructions carefully, as you may need to use a specific user name to log in to the instance. For more information about accessing your subscription details, see Managing Your AWS Marketplace Subscriptions. 
Launching an AWS Marketplace AMI Instance Using the API and CLI To launch instances from AWS Marketplace products using the API or command line tools, first ensure that you are subscribed to the product. You can then launch an instance with the product's AMI ID using the following methods:
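For illustration only, a minimal AWS CLI sketch of such a launch - the AMI ID, key pair, security group, and subnet values below are placeholders rather than values taken from this guide, and the AMI must be one you are already subscribed to:
aws ec2 run-instances --image-id ami-0abcd1234example --instance-type t2.micro --key-name MyKeyPair --security-group-ids sg-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0
You can then poll the instance state with aws ec2 describe-instances --instance-ids <instance-id> until it reports running.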
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/launch-marketplace-console.html
2018-04-19T15:35:39
CC-MAIN-2018-17
1524125936981.24
[]
docs.aws.amazon.com
Note: This document is updated based on Jelastic version 4.8 Living up to its billing as a Cloud Platform with no constraints, Jelastic lets you easily move environments between clouds to find out which one meets your requirements best. The procedure consists of 2 main stages - export of the existing environment (we’ll consider this operation in detail below) and its subsequent import into the target Jelastic installation. Both of these operations take just a few minutes. As a result, you’ll receive an identical, ready-to-work copy of your environment running at another Jelastic installation.
https://docs.jelastic.com/environment-export
2018-04-19T15:38:58
CC-MAIN-2018-17
1524125936981.24
[]
docs.jelastic.com
Troubleshooting¶ Basics¶ - Have you searched known issues? - Have you tried rebooting? (Kidding!) - Did you read all the items on this page? - Then you should contact support! Bug Reproduction¶ Running Binary Ninja with debug logging will make your bug report more useful. ./binaryninja --debug --stderr-log Alternatively, it might be easier to save debug logs to a file instead: ./binaryninja -d -l logfile.txt (note that both the long and short forms of the command-line arguments are demonstrated in the above examples) Plugin Troubleshooting¶ While third party plugins are not officially supported, there are a number of troubleshooting tips that can help identify the cause. The most important is to enable debug logging as suggested in the previous section. This will often highlight problems with python paths or any other issues that prevent plugins from running. Additionally, if you're having trouble running a plugin in headless mode (without a GUI, calling directly into the core), make sure you're running the Commercial version of Binary Ninja, as the Student/Non-Commercial edition does not support headless processing. Next, if running a python plugin, make sure the python requirements are met by your existing installation. Note that on Windows, the bundled python is used and python requirements should be installed by manually copying the modules to the plugins folder. License Problems¶ - If experiencing problems with Windows UAC permissions during an update, the easiest fix is to completely un-install and recover the latest installer and license. Preferences are saved outside the installation folder and are preserved, though you might want to remove your license. - If you need to change the email address on your license, contact support. OS X¶ While OS X is generally the most trouble-free environment for Binary Ninja, very old versions may have problems with the RPATH for our binaries and libraries. There are two solutions. First, run Binary Ninja with: DYLD_LIBRARY_PATH="/Applications/Binary Ninja.app/Contents/MacOS" /Applications/Binary\ Ninja.app/Contents/MacOS/binaryninja Or second, modify the binary itself using the install_name_tool. Linux¶ Given the diversity of Linux distributions, some work-arounds are required to run Binary Ninja on platforms that are not officially supported. 
Headless Ubuntu¶ If you're having trouble getting Binary Ninja installed in a headless server install where you want to be able to X-Forward the GUI on a remote machine, the following should meet requirements (for at least 14.04 LTS): apt-get install libgl1-mesa-glx libfontconfig1 libxrender1 libegl1-mesa libxi6 libnspr4 libsm6 Arch Linux¶ - Install python2 from the official repositories (sudo pacman -S python2) and create a symlink: sudo ln -s /usr/lib/libpython2.7.so.1.0 /usr/lib/libpython2.7.so.1 - Install the libcurl-compat library with sudo pacman -S libcurl-compat, and run Binary Ninja via LD_PRELOAD=libcurl.so.3 ~/binaryninja/binaryninja KDE¶ To run Binary Ninja in a KDE-based environment, set the QT_PLUGIN_PATH to the QT sub-folder: cd ~/binaryninja QT_PLUGIN_PATH=./qt ./binaryninja Debian¶ For Debian variants (e.g. Kali) that don't match packages with Ubuntu LTS or the latest stable, the following might fix problems with libssl and libcrypto: $ cd binaryninja $ ln -s plugins/libssl.so libssl.so.1.0.0 $ ln -s plugins/libcrypto.so libcrypto.so.1.0.0 Alternatively, you might need to (as root): apt-get install libssl-dev ln -s /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.2 /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 ln -s /usr/lib/x86_64-linux-gnu/libssl.so.1.0.2 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 Gentoo¶ One Gentoo user reported a failed SSL certificate when trying to update. The solution was to copy over /etc/ssl/certs/ca-certificates.crt from another Linux distribution.
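If the GUI still fails to appear over X-forwarding after installing the libraries above, it can help to rule out the forwarding channel itself; a minimal sanity check, with the hostname and install path as placeholders:
ssh -X user@remote-host
echo $DISPLAY
~/binaryninja/binaryninja
If echo $DISPLAY prints nothing, the problem is the SSH/X11 setup rather than Binary Ninja.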
http://docs.binary.ninja/guide/troubleshooting/index.html
2018-04-19T15:40:46
CC-MAIN-2018-17
1524125936981.24
[]
docs.binary.ninja
Table of Contents Synchronize your network with NTP NTP (Network Time Protocol) allows clock synchronization between computer systems. The following HOWTO describes: - configuring an NTP server on Slackware Linux; - synchronizing client PCs with your local NTP server. Introduction When several users manipulate shared data on different client PCs on a network, it's important that these machines are all synchronized. This is especially true if you share files over NFS, or if you use NIS for centralized authentication. You'll get all sorts of weird errors if your clocks are out of sync. Unfortunately, the clients' onboard clocks aren't sufficiently precise. That's where NTP (Network Time Protocol) comes in handy. It allows networked machines to adjust their clocks so as to be perfectly synchronized. A series of public time servers on the Internet provide the exact time. From this point, we can use NTP in several ways. - The ntpdate command makes an initial correction of the BIOS clock. - This one-time adjustment isn't sufficient for a server that is supposed to be up 24/7, since its clock will drift away gradually from the exact time. In that case, we have to configure the ntpd daemon (shipping with the ntp package). This daemon contacts public time servers at regular intervals and proceeds with incremental corrections of the local clock. - The ntpd daemon can in its turn be configured as a time server for the local client machines. It's considered good practice to use ntpdate for the initial adjustment and ntpd for regular time synchronization. Firewall considerations The NTP service uses UDP port 123. Open this port if you want to allow remote machines to connect to your NTP server. Synchronize a LAN server or a public root server with an NTP server on the Internet Create an empty log file: # touch /var/log/ntp.log Visit and choose a list of servers according to your country. Configure the NTP service by editing /etc/ntp.conf. You might back up the existing ntp.conf file and start from scratch. In the example below, the list of four servers is chosen for my company's location (France): # /etc/ntp.conf driftfile /etc/ntp/drift logfile /var/log/ntp.log server 0.fr.pool.ntp.org server 1.fr.pool.ntp.org server 2.fr.pool.ntp.org server 3.fr.pool.ntp.org server 127.127.1.0 fudge 127.127.1.0 stratum 10 restrict default nomodify nopeer notrap restrict 127.0.0.1 mask 255.0.0.0 Here's a little explanation for some options: - The fudge 127.127.1.0 stratum 10 directive is a “dummy” server acting as fallback IP in case the external time source becomes momentarily unreachable. When this happens, NTP will continue to work and base itself on this “internal” server. - NTP has its own arsenal of rules to limit access to the service, which can be used independently from a firewall. The restrict directives in the above configuration prevent distant computers from changing the servers' configuration (first restrict statement), and the machine is configured to trust itself (second restrict statement). - A restrict statement without any argument but followed by the hostname boils down to an allow all. Manage the NTP service Before starting the service, proceed to an initial adjustment of your system clock: # ntpdate pool.ntp.org The ntpdate command is normally considered obsolete, but it still comes in handy when performing important time adjustments. 
The “orthodox” way would be to use the ntpd -g command - the official replacement for ntpdate - but its use will fail if your system clock is off by more than half an hour. Activate the NTP service: # chmod +x /etc/rc.d/rc.ntpd Manage the NTP service: # /etc/rc.d/rc.ntpd start|stop|restart|status Now display the list of servers your machine is actually connected to: # ntpq -p remote refid st t when poll reach delay offset jitter ============================================================================== *panopea.unstabl 213.251.128.249 2 u 30 64 377 56.136 -249.48 80.680 +88-190-17-126.r 145.238.203.14 2 u 29 64 377 77.571 -205.94 94.278 +62.210.255.117 192.93.2.20 2 u 29 64 377 77.097 -249.57 85.641 -ntp.univ-poitie 145.238.203.10 3 u 29 64 377 57.747 -191.58 107.002 LOCAL(0) .LOCL. 10 l 164 64 374 0.000 0.000 0.001 The little * asterisk preceding one of the above lines means your machine is effectively synchronized with the respective NTP server. Synchronize your client PC(s) with your local NTP server In a LAN, it is considered good practice to synchronize only one machine - the server - with a public NTP server, and the client PCs with the local server. This saves bandwidth and takes some load off the public NTP servers. As above, proceed to an initial adjustment of the system clock: # ntpdate pool.ntp.org Create an empty logfile: # touch /var/log/ntp.log Now configure NTP to synchronize with the LAN server. Replace the example's IP (192.168.2.1) with your real server's IP: # /etc/ntp.conf driftfile /etc/ntp/drift logfile /var/log/ntp.log server 192.168.2.1 server 127.127.1.0 fudge 127.127.1.0 stratum 10 restrict default ignore restrict 127.0.0.1 mask 255.0.0.0 restrict 192.168.2.1 mask 255.255.255.255 - The three restrict statements mean we're blocking all NTP traffic except for the client itself and the server. Activate and start the NTP service: # chmod +x /etc/rc.d/rc.ntpd # /etc/rc.d/rc.ntpd start As above, use the ntpq -p command to check if the synchronization went well: # ntpq -p remote refid st t when poll reach delay offset jitter ============================================================================== *192.168.2.1 81.19.16.225 3 u 916 1024 377 0.367 7.897 2.552 LOCAL(0) .LOCL. 10 l 10h 64 0 0.000 0.000 0.000 Monitor the performance of ntpd You will notice that the logfile /var/log/ntp.log does not contain any information about the actual accuracy of your system clock. If it's important to you, you can log the statistics of time corrections applied by the NTP daemon to the system clock. To do this, add the following lines to /etc/ntp.conf: statsdir /var/log/ntp/ statistics loopstats filegen loopstats file loops type day link enable You have to create the statsdir manually. Once the configuration changes are in effect, ntpd will create files named loops.YYYYMMDD in that directory. Below is an example line from one of these files: 56690 3950.569 0.001199636 2.297 0.001830770 0.571576 10 The first and second number are the UTC time (expressed as Modified Julian Date and seconds elapsed since midnight). The third and fourth number are the offsets of time (in seconds) and of frequency (in parts per million). The fifth and sixth number are their respective uncertainties. To monitor the performance of ntpd, you can examine the plot of clock offset or frequency offset vs.
time: $ awk '{printf("%f %f %f\n", $1+$2/86400, $3, $5)}' /var/log/ntp/loops.* > time $ awk '{printf("%f %f %f\n", $1+$2/86400, $4, $6)}' /var/log/ntp/loops.* > freq $ gnuplot gnuplot> set xzeroaxis gnuplot> plot 'time' with yerror gnuplot> plot 'freq' with yerror Given enough data, visual examination of the plots will allow you to see peculiarities in ntpd performance, should they arise. For example, in the case illustrated by the figure below, the rapid decrease of the frequency offset was caused by replacing the power supply unit of the machine. Sources - Originally written by Niki Kovacs - Performance monitoring section contributed by Dominik Drobek
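Regarding the firewall note earlier in this article, a minimal iptables sketch for opening UDP port 123 to the LAN - this assumes a plain iptables setup rather than a distribution-specific front end, and reuses the same 192.168.2.0/24 network as the examples above:
# iptables -A INPUT -p udp -s 192.168.2.0/24 --dport 123 -j ACCEPT
Remember to save the rule with your distribution's usual mechanism so it survives a reboot.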
http://docs.slackware.com/howtos:network_services:ntp
2018-04-19T15:21:58
CC-MAIN-2018-17
1524125936981.24
[array(['https://docs.slackware.com/lib/plugins/bookcreator/images/add.png', None], dtype=object) array(['https://docs.slackware.com/lib/plugins/bookcreator/images/del.png', None], dtype=object) ]
docs.slackware.com
Start the Server¶ Last Updated: May 2020 The Tethys Portal production deployment uses NGINX and Daphne servers. Rather than manage these processes individually, you should use the supervisorctl command to perform start, stop, and restart operations: Start: sudo supervisorctl start all Stop: sudo supervisorctl stop all Restart: sudo supervisorctl restart all You can also start, stop, or restart nginx: sudo supervisorctl restart nginx You can also start, stop, and restart all of the Daphne processes: sudo supervisorctl restart asgi:*
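To verify that the NGINX and Daphne processes are actually running under supervisor, a status check is often useful; this is plain supervisorctl usage rather than anything specific to Tethys:
sudo supervisorctl status
Each process should be reported as RUNNING; anything in FATAL or BACKOFF state points at a configuration problem worth investigating in the supervisor logs.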
http://docs.tethysplatform.org/en/latest/installation/production/start_stop.html
2020-09-18T23:52:39
CC-MAIN-2020-40
1600400189264.5
[]
docs.tethysplatform.org
Uses the acl subresource to set the access control list (ACL) permissions for an object that already exists in an S3 bucket. You must have WRITE_ACP permission to set the ACL of an object. For more information, see What permissions can I grant? in the Amazon Simple Storage Service Developer Guide. Depending on your application needs, you can choose to set the ACL on an object using either the request body or the headers. For example, if you have an existing application that updates a bucket ACL using the request body, you can continue to use that approach. For more information, see Access Control List (ACL) Overview in the Amazon S3 Developer Guide. Access Permissions You can set access permissions using one of the following methods: Specify a canned ACL with the x-amz-acl request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. Specify the canned ACL name as the value of x-amz-acl. If you use this header, you cannot use other access control-specific headers in your request. For more information, see Canned ACL. Specify access permissions explicitly with the x-amz-grant-read, x-amz-grant-read-acp, x-amz-grant-write-acp, and x-amz-grant-full-control headers. When using these headers, you specify explicit access permissions and grantees (AWS accounts or Amazon S3 groups) who will receive the permission. If you use these ACL-specific headers, you cannot use the x-amz-acl header to set a canned ACL. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. You specify each grantee as a type=value pair, where the type is one of the following: id, uri, or emailAddress. For example, the following x-amz-grant-read header grants list objects permission to the two AWS accounts identified by their email addresses. x-amz-grant-read: emailAddress="[email protected]", emailAddress="[email protected]" You can use either a canned ACL or specify access permissions explicitly. You cannot do both. Grantee Values You can specify the person (grantee) to whom you're assigning access rights (using request elements) by canonical user ID, by email address, or by predefined group URI. Versioning The ACL of an object is set at the object version level. By default, PUT sets the ACL of the current version of an object. To set the ACL of a different version, use the versionId subresource. Related Resources See also: AWS API Documentation See 'aws help' for descriptions of global parameters. put-object-acl [--acl <value>] [--access-control-policy <value>] --bucket <value> [--content-md5 <value>] [--grant-full-control <value>] [--grant-read <value>] [--grant-read-acp <value>] [--grant-write <value>] [--grant-write-acp <value>] --key <value> [--request-payer <value>] [--version-id <value>] [--expected-bucket-owner <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>] --acl (string) The canned ACL to apply to the object. For more information, see Canned ACL. Possible values: - private - public-read - public-read-write - authenticated-read - aws-exec-read - bucket-owner-read - bucket-owner-full-control --access-control-policy (structure) Contains the elements that set the ACL permissions for an object per grantee. Grants -> (list) A list of grants. Owner -> (structure) Container for the bucket owner's display name and ID. DisplayName -> (string) Container for the display name of the owner. ID -> (string) Container for the ID of the owner. 
JSON Syntax: { "Grants": [ { "Grantee": { "DisplayName": "string", "EmailAddress": "string", "ID": "string", "Type": "CanonicalUser"|"AmazonCustomerByEmail"|"Group", "URI": "string" }, "Permission": "FULL_CONTROL"|"WRITE"|"WRITE_ACP"|"READ"|"READ_ACP" } ... ], "Owner": { "DisplayName": "string", "ID": "string" } } --bucket (string) The bucket name that contains the object to which you want to attach the ACL. --content-md5 (string) --grant-full-control (string) Allows grantee the read, write, read ACP, and write ACP permissions on the bucket. --grant-read (string) Allows grantee to list the objects in the bucket. --grant-read-acp (string) Allows grantee to read the bucket ACL. --grant-write (string) Allows grantee to create, overwrite, and delete any object in the bucket. --grant-write-acp (string) Allows grantee to write the ACL for the applicable bucket. --key (string) Key for which the PUT operation was initiated. --version-id (string) VersionId used to reference a specific version of the object. The following example grants full control to two AWS users ([email protected] and [email protected]) and read permission to everyone: aws s3api put-object-acl --bucket MyBucket --key file.txt --grant-full-control [email protected],[email protected] --grant-read uri= See for details on custom ACLs (the s3api ACL commands, such as put-object-acl, use the same shorthand argument notation).
https://docs.aws.amazon.com/ja_jp/cli/latest/reference/s3api/put-object-acl.html
2020-09-19T00:34:34
CC-MAIN-2020-40
1600400189264.5
[]
docs.aws.amazon.com
Designing globally available services using Azure SQL Database Azure SQL Database When building and deploying cloud services with Azure SQL Database, you use active geo-replication or auto-failover groups to provide resilience to regional outages and catastrophic failures. The same feature allows you to create globally distributed applications optimized for local access to the data. This article discusses common application patterns, including the benefits and trade-offs of each option. Note If you are using Premium or Business Critical databases and elastic pools, you can make them resilient to regional outages by converting them to zone redundant deployment configuration. See Zone-redundant databases. Scenario 1: Using two Azure regions for business continuity with minimal downtime In this scenario, the applications have the following characteristics: - Application is active in one Azure region - All database sessions require read and write access (RW) to data - Web tier and data tier must be collocated to reduce latency and traffic cost - Fundamentally, downtime is a higher business risk for these applications than data loss In this case, the application deployment topology is optimized for handling regional disasters when all application components need to fail over together. The diagram below shows this topology. For geographic redundancy, the application’s resources are deployed to Region A and B. However, the resources in Region B are not utilized until Region A fails. A failover group is configured between the two regions to manage database connectivity, replication and failover. The web service in both regions is configured to access the database via the read-write listener <failover-group-name>.database.windows.net (1). Azure Traffic Manager is set up to use priority routing method (2). Note Azure Traffic Manager is used throughout this article for illustration purposes only. You can use any load-balancing solution that supports priority routing method. The following diagram shows this configuration before an outage: After an outage in the primary region, SQL Database detects that the primary database is not accessible and triggers failover to the secondary region based on the parameters of the automatic failover policy (1). Depending on your application SLA, you can configure a grace period that controls the time between the detection of the outage and the failover itself. It is possible that Azure Traffic Manager initiates the endpoint failover before the failover group triggers the failover of the database. In that case the web application cannot immediately reconnect to the database. But the reconnections will automatically succeed as soon as the database failover completes. When the failed region is restored and back online, the old primary automatically reconnects as a new secondary. The diagram below illustrates the configuration after failover. Note All transactions committed after the failover are lost during the reconnection. After the failover is completed, the application in region B is able to reconnect and restart processing the user requests. Both the web application and the primary database are now in region B and remain co-located. If an outage happens in region B, the replication process between the primary and the secondary database gets suspended but the link between the two remains intact (1). Traffic Manager detects that connectivity to Region B is broken and marks the endpoint web app 2 as Degraded (2). 
The application's performance is not impacted in this case, but the database becomes exposed and therefore at higher risk of data loss in case region A fails in succession. Note For disaster recovery, we recommend the configuration with application deployment limited to two regions. This is because most of the Azure geographies have only two regions. This configuration does not protect your application from a simultaneous catastrophic failure of both regions. In the unlikely event of such a failure, you can recover your databases in a third region using the geo-restore operation. Once the outage is mitigated, the secondary database automatically resynchronizes with the primary. During synchronization, performance of the primary can be impacted. The specific impact depends on the amount of data the new primary acquired since the failover. Note After the outage is mitigated, Traffic Manager will start routing the connections to the application in Region A as a higher priority end-point. If you intend to keep the primary in Region B for a while, you should change the priority table in the Traffic Manager profile accordingly. The following diagram illustrates an outage in the secondary region: The key advantages of this design pattern are: - The same web application is deployed to both regions without any region-specific configuration and doesn’t require additional logic to manage failover. - Application performance is not impacted by failover as the web application and the database are always co-located. The main tradeoff is that the application resources in Region B are underutilized most of the time. Scenario 2: Azure regions for business continuity with maximum data preservation This option is best suited for applications with the following characteristics: - Any data loss is high business risk. The database failover can only be used as a last resort if the outage is caused by a catastrophic failure. - The application supports read-only and read-write modes of operations and can operate in "read-only mode" for a period of time. In this pattern, the application switches to read-only mode when the read-write connections start getting time-out errors. The web application is deployed to both regions and includes a connection to the read-write listener endpoint and a different connection to the read-only listener endpoint (1). The Traffic Manager profile should use priority routing. End point monitoring should be enabled for the application endpoint in each region (2). The following diagram illustrates this configuration before an outage: When Traffic Manager detects a connectivity failure to region A, it automatically switches user traffic to the application instance in region B. With this pattern, it is important that you set the grace period with data loss to a sufficiently high value, for example 24 hours. It ensures that data loss is prevented if the outage is mitigated within that time. When the web application in region B is activated, the read-write operations start failing. At that point, it should switch to the read-only mode (1). In this mode the requests are automatically routed to the secondary database. If the outage is caused by a catastrophic failure, most likely it cannot be mitigated within the grace period. When it expires, the failover group triggers the failover. After that the read-write listener becomes available and the connections to it stop failing (2). The following diagram illustrates the two stages of the recovery process. 
Note If the outage in the primary region is mitigated within the grace period, Traffic Manager detects the restoration of connectivity in the primary region and switches user traffic back to the application instance in region A. That application instance resumes and operates in read-write mode using the primary database in region A as illustrated by the previous diagram. If an outage happens in region B, Traffic Manager detects the failure of the end point web-app-2 in region B and marks it degraded (1). In the meantime, the failover group switches the read-only listener to region A (2). This outage does not impact the end-user experience but the primary database is exposed during the outage. The following diagram illustrates a failure in the secondary region: Once the outage is mitigated, the secondary database is immediately synchronized with the primary and the read-only listener is switched back to the secondary database in region B. During synchronization, performance of the primary could be slightly impacted depending on the amount of data that needs to be synchronized. This design pattern has several advantages: - It avoids data loss during the temporary outages. - Downtime depends only on how quickly Traffic Manager detects the connectivity failure, which is configurable. The tradeoff is that the application must be able to operate in read-only mode. Scenario 3: Application relocation to a different geography without data loss and near zero downtime In this scenario the application has the following characteristics: - The end users access the application from different geographies - The application includes read-only workloads that do not depend on full synchronization with the latest updates - Write access to data should be supported in the same geography for the majority of the users - Read latency is critical for the end-user experience In order to meet these requirements you need to guarantee that the user device always connects to the application deployed in the same geography for the read-only operations, such as browsing data, analytics, etc., whereas the OLTP operations are processed in the same geography most of the time. For example, during the day time OLTP operations are processed in the same geography, but during the off hours they could be processed in a different geography. If the end-user activity mostly happens during the working hours, you can guarantee the optimal performance for most of the users most of the time. The following diagram shows this topology. The application’s resources should be deployed in each geography where you have substantial usage demand. For example, if your application is actively used in the United States, European Union and South East Asia the application should be deployed to all of these geographies. The primary database should be dynamically switched from one geography to the next at the end of the working hours. This method is called “follow the sun”. The OLTP workload always connects to the database via the read-write listener <failover-group-name>.database.windows.net (1). The read-only workload connects to the local database directly using the database server endpoint <server-name>.database.windows.net (2). Traffic Manager is configured with the performance routing method. It ensures that the end-user’s device is connected to the web service in the closest region. Traffic Manager should be set up with end point monitoring enabled for each web service end point (3). 
Note The failover group configuration defines which region is used for failover. Because the new primary is in a different geography the failover results in longer latency for both OLTP and read-only workloads until the impacted region is back online. At the end of the day, for example at 11 PM local time, the active databases should be switched to the next region (North Europe). This task can be fully automated by using Azure Logic Apps. The task involves the following steps: - Switch primary server in the failover group to North Europe using friendly failover (1) - Remove the failover group between East US and North Europe - Create a new failover group with the same name but between North Europe and East Asia (2). - Add the primary in North Europe and secondary in East Asia to this failover group (3). The following diagram illustrates the new configuration after the planned failover: If an outage happens in North Europe for example, the automatic database failover is initiated by the failover group, which effectively results in moving the application to the next region ahead of schedule (1). In that case the US East is the only remaining secondary region until North Europe is back online. The remaining two regions serve the customers in all three geographies by switching roles. Azure Logic Apps has to be adjusted accordingly. Because the remaining regions get additional user traffic from Europe, the application's performance is impacted not only by additional latency but also by an increased number of end-user connections. Once the outage is mitigated in North Europe, the secondary database there is immediately synchronized with the current primary. The following diagram illustrates an outage in North Europe: Note You can reduce the time when the end user's experience in Europe is degraded by the long latency. To do that you should proactively deploy an application copy and create the secondary database(s) in another local region (West Europe) as a replacement of the offline application instance in North Europe. When the latter is back online you can decide whether to continue using West Europe or to remove the copy of the application there and switch back to using North Europe. The key benefits of this design are: - The read-only application workload accesses data in the closest region at all times. - The read-write application workload accesses data in the closest region during the period of the highest activity in each geography - Because the application is deployed to multiple regions, it can survive a loss of one of the regions without any significant downtime. But there are some tradeoffs: - A regional outage results in the geography being impacted by longer latency. Both read-write and read-only workloads are served by the application in a different geography. - The read-only workloads must connect to a different end point in each region. Business continuity planning: Choose an application design for cloud disaster recovery Your specific cloud disaster recovery strategy can combine or extend these design patterns to best meet the needs of your application. As mentioned earlier, the strategy you choose is based on the SLA you want to offer to your customers and the application deployment topology. To help guide your decision, the following table compares the choices based on recovery point objective (RPO) and estimated recovery time (ERT). 
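The scheduled friendly failover described in Scenario 3 can also be scripted outside of Azure Logic Apps; a minimal sketch using the Azure CLI, with placeholder resource names - the command is run against the server that should become the new primary:
az sql failover-group set-primary --resource-group myResourceGroup --server mySecondaryServer --name myFailoverGroup
Because this is a friendly (planned) failover, replication is fully synchronized before the roles switch, so no data is lost.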
Next steps - For a business continuity overview and scenarios, see Business continuity overview - To learn about active geo-replication, see Active geo-replication. - To learn about auto-failover groups, see Auto-failover groups. - For information about active geo-replication with elastic pools, see Elastic pool disaster recovery strategies.
https://docs.microsoft.com/en-us/azure/azure-sql/database/designing-cloud-solutions-for-disaster-recovery
2020-09-18T22:31:09
CC-MAIN-2020-40
1600400189264.5
[array(['media/designing-cloud-solutions-for-disaster-recovery/scenario1-a.png', 'Scenario 1. Configuration before the outage.'], dtype=object) array(['media/designing-cloud-solutions-for-disaster-recovery/scenario1-b.png', 'Scenario 1. Configuration after failover'], dtype=object) array(['media/designing-cloud-solutions-for-disaster-recovery/scenario1-c.png', 'Scenario 1. Configuration after an outage in the secondary region.'], dtype=object) array(['media/designing-cloud-solutions-for-disaster-recovery/scenario2-a.png', 'Scenario 2. Configuration before the outage.'], dtype=object) array(['media/designing-cloud-solutions-for-disaster-recovery/scenario2-b.png', 'Scenario 2. Disaster recovery stages.'], dtype=object) array(['media/designing-cloud-solutions-for-disaster-recovery/scenario2-c.png', 'Scenario 2. Outage of the secondary region.'], dtype=object) array(['media/designing-cloud-solutions-for-disaster-recovery/scenario3-a.png', 'Scenario 3. Configuration with primary in East US.'], dtype=object) array(['media/designing-cloud-solutions-for-disaster-recovery/scenario3-b.png', 'Scenario 3. Transitioning the primary to North Europe.'], dtype=object) array(['media/designing-cloud-solutions-for-disaster-recovery/scenario3-c.png', 'Scenario 3. Outage in North Europe.'], dtype=object) ]
docs.microsoft.com
To install Jottacloud, simply go to our Download Page. Install Jottacloud To install Jottacloud, double click the jottacloud.dmg file. This opens the Jottacloud installer archive. Drag and drop the Jottacloud icon onto the Applications folder. Step 1 - Start Jottacloud Launch the Jottacloud application by double clicking the Jottacloud icon inside the Applications folder. Step 2 - Login and Select Computer Login with your username and password. Step 3: Provide a name for your computer. It can be “your name” pc or any other variation you would like. The computer name must be at least six characters long and cannot contain spaces. Click “Next.” Step 4: The setup will automatically choose a default synchronization location. Click “Next” to accept the default location or “…” to select a setup location of your choice. Step 5: Tick the checkboxes in front of the folders you want to back up. A checkmark will appear next to the folders you have chosen. Click “Next” when you complete your selections. You can add more folders to this list at any point. Step 6 – Jottacloud will now take you on a five-screen tour of the software. After you review each screen, click “Next.” Once the tour is complete, click “Start Jottacloud.”
https://docs.jottacloud.com/en/articles/1292926-jottacloud-for-macos
2019-08-17T22:59:10
CC-MAIN-2019-35
1566027313501.0
[array(['https://downloads.intercomcdn.com/i/o/38354883/6b67da0c9069bc317db9d677/Skjermbilde_2014-06-25_kl._10.29.36.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/38354902/4e38cc94b95821a499065b63/Skjermbilde_2014-06-18_kl._13.29.13.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/38354928/926c3b0b566aac3ed27f0a41/Skjermbilde_2014-06-18_kl._13.28.58.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/38354949/c3580b1207bbf3527197fb62/Skjermbilde_2014-06-18_kl._13.40.00.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/38354977/e0cb9b3d87b7a33bbc1d02f6/Skjermbilde_2014-06-18_kl._13.43.22.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/38354993/87a9b684e1e7ae9cc5e4c500/Skjermbilde_2014-06-18_kl._13.44.47.png', None], dtype=object) ]
docs.jottacloud.com