Columns: content (string, 0–557k chars), url (string, 16–1.78k chars), timestamp (timestamp[ms]), dump (string, 9–15 chars), segment (string, 13–17 chars), image_urls (string, 2–55.5k chars), netloc (string, 7–77 chars)
Our Icon Fonts Manager plugin was revolutionary when we first introduced it. It makes adding new icon fonts to a WordPress site quick and accessible to non-coders: you simply download the .zip file from IcoMoon and upload it to the Icon Fonts Manager plugin, and the rest is figured out and managed for you automatically. This article explains everything that happens behind the scenes when you upload a new icon font zip in the Icon Fonts Manager. 1. Our plugin uploads the .zip file to the appropriate directory under wp-content/uploads/. 2. When you click “Insert Fonts Zip file”, the zip is extracted and the files are placed in /wp-content/uploads/smile_fonts/smile_temp. 3. The “smile_temp” directory is renamed to the font name. 4. While creating the font directory, a config file, “charmap.php”, is generated dynamically. The contents of this file depend on the zip file you generated on IcoMoon and generally look like this – You can customize these contents (if you wish) for every font before downloading the .zip file from IcoMoon. The name & tags are used for the search functionality. 5. If the permissions are not sufficient to execute these operations (less than 755 on directories), our script returns an error and the font directory is deleted from the smile_fonts directory. To learn more about file permissions in WordPress, you may go through this article. 6. A database entry is added: a smile_fonts row in the wp_options table containing an array. The array holds one entry for each font zip, and the array is serialized, so if you want to add entries to the database manually you need to serialize them first. 7. The CSS for the font icons is enqueued on the front end, on the font manager page in the admin area, and on the edit/new post page in the admin area.
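The 755 check in step 5 can be pictured as a simple mode comparison. The sketch below is a minimal Python illustration of that idea (the plugin itself is written in PHP, so this is not its actual code); the has_min_permissions helper and the example path are hypothetical.

    # Illustrative sketch of the step-5 idea: verify a font directory grants at least 755.
    import os
    import stat

    def has_min_permissions(path: str, minimum: int = 0o755) -> bool:
        """Return True if `path` grants at least the permission bits in `minimum`."""
        mode = stat.S_IMODE(os.stat(path).st_mode)
        return (mode & minimum) == minimum

    font_dir = "wp-content/uploads/smile_fonts/my-font"  # hypothetical path
    if not has_min_permissions(font_dir):
        print("Insufficient permissions; the plugin would report an error and remove", font_dir)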
https://docs.brainstormforce.com/icon-fonts-manager-behind-the-scene/
2019-10-14T03:06:01
CC-MAIN-2019-43
1570986649035.4
[array(['https://docs.brainstormforce.com/wp-content/uploads/2015/08/Screen-Shot-2015-08-05-at-11.07.33-AM.png', None], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2015/08/icon-font-screenshot.png', None], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2015/08/charmap.png', None], dtype=object) array(['https://docs.brainstormforce.com/wp-content/uploads/2015/08/error.png', None], dtype=object) ]
docs.brainstormforce.com
You may change your account settings, including where your email notifications are sent. You can change settings such as the organization name, the email address used for notifications, and your account country. Country Setting: For tax reporting it is important to add your country and any other information required for taxation in your country. FeelBack is an LLC established in the US, and for US customers we need your EIN and state information. We offer the option to cancel your account through our customer support chat; feel free to send us an inquiry via the chat icon letting us know you need to delete your account.
https://docs.feelback.io/docs/account-settings
2019-10-14T03:54:09
CC-MAIN-2019-43
1570986649035.4
[]
docs.feelback.io
Oracle software models In order to count Oracle software licenses, you must create software models for your Oracle software. For more information, see Manage software models.
https://docs.servicenow.com/bundle/kingston-it-service-management/page/product/asset-management/concept/c_CreatingOracleSoftwareModels.html
2019-10-14T04:41:08
CC-MAIN-2019-43
1570986649035.4
[]
docs.servicenow.com
Do I keep my trial content when I add a seat? Yes! If you add a seat before your trial seat ends then any content and zapcodes that you created during your trial will remain live and in your account. If your trial seat ends and you don't add a paid seat then your content may be archived then deleted. For more information see What happens if I let my seat expire?
https://docs.zap.works/accounts-billing/business/do-i-keep-my-trial-content-when-i-add-a-seat/
2019-10-14T03:51:47
CC-MAIN-2019-43
1570986649035.4
[]
docs.zap.works
Welcome to part 8 of the DC/OS 101 Tutorial. Prerequisites - A running DC/OS cluster with the DC/OS CLI installed. - app2 and Marathon-LB deployed and running in your cluster. Objective In this session, you will scale your application to multiple instances and learn how internal and external services choose which instance to use once the application has been scaled. Steps Load balancers decide which instance of an app internal or external services should use. With DC/OS, you have two different built-in load-balancer options: Marathon-LB and named VIPs. You have already explored these load balancing mechanisms in the context of service discovery, and in a previous tutorial you used Marathon-LB to publicly expose app2. Now let’s explore them a bit more. First, scale app2 to two instances: dcos marathon app update /dcos-101/app2 instances=2 - Marathon-LB - Check app2 as before via http://<public-node>:10000. When you do this repeatedly you should see the request being served by different instances of app2. - You can also check the Marathon-LB stats via http://<public-node>:9090/haproxy?stats - Named VIPs SSH to the leading master node: dcos node ssh --master-proxy --leader Use curl to get the raw HTML output from the app: curl dcos-101app2.marathon.l4lb.thisdcos.directory:10000 When you do this repeatedly you should see the request being served by different instances. Scale app2 back to one instance: dcos marathon app update /dcos-101/app2 instances=1 Outcome You used Marathon-LB and VIPs to load balance requests across two different instances of your app. Deep Dive Consider these features and benefits when choosing a load balancing mechanism. - Named VIPs are a layer 4 load balancing mechanism used for internal TCP traffic. As they are tightly integrated with the kernel, they provide a load-balanced IP address which can be used from anywhere within the cluster.
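To see the round-robin behaviour the steps describe, you can hit the named VIP repeatedly and tally which instance answers. This is a minimal Python sketch, assuming app2's response body identifies the serving instance (as in this tutorial); run it from a node inside the cluster, since the l4lb VIP is only resolvable there.

    # Minimal sketch: poll the named VIP from inside the cluster and count responders.
    import collections
    import requests

    VIP_URL = "http://dcos-101app2.marathon.l4lb.thisdcos.directory:10000"  # VIP from the tutorial

    counts = collections.Counter()
    for _ in range(20):
        body = requests.get(VIP_URL, timeout=5).text.strip()
        counts[body] += 1

    # With app2 scaled to two instances you should see two distinct responders.
    for responder, hits in counts.items():
        print(f"{hits:>3}  {responder}")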
http://docs-staging.mesosphere.com/mesosphere/dcos/1.12/tutorials/dcos-101/loadbalancing/
2019-10-14T03:25:18
CC-MAIN-2019-43
1570986649035.4
[]
docs-staging.mesosphere.com
This document describes the structural elements of a Report Wizard that is used to create new reports and bind existing reports to data in the End-User Report Designer for WinForms. To learn how to customize the Report Wizard of the End-User Report Designer for WPF, see Wizard Customization Overview. The Data Source Wizard is used to bind an existing report to a data source. The Report wizard re-uses all Data Source wizard pages and adds extra pages to configure the report layout, as well as pages to create a label and inherited reports. You can specify a custom list of labels available in the Label Report wizard. To do this, assign a path to an XML file containing custom label definitions to the static LabelWizardCustomization.ExternalLabelProductRepository property. The Report wizard architecture is based on the MVP (Model-View-Presenter) design pattern. Every wizard page is defined by a presenter and view. Model Settings defined on the Report Wizard pages are stored by the ReportModel class. These settings are translated to the XtraReportModel class that also stores data-related settings defined on the Data Source Wizard pages. To save any additional data to a model object from custom wizard pages, use the XtraReportModel.Tag property. When adding custom fields to this model, make sure that they implement the Equals method. Presenters define the logic behind a specific wizard page. Each presenter defines how a page is initialized, how the user-specified data is processed in the context of the current page as well as how settings specified by an end-user are submitted to the report model. Each page presenter should descend from the abstract WizardPageBase<TView, TModel> class that implements the IWizardPage<TWizardModel> interface. The TView type parameter of this class allows you to associate a page presenter with an appropriate view. The following documents list the default page views and presenters used in the Data Source and Report wizards. To define wizard customization logic for the Report and Data Source wizards of a WinForms End-User Report Designer, implement the IWizardCustomizationService interface. This interface contains the following four methods, which you need to implement. Both the CustomizeDataSourceWizard and CustomizeReportWizard methods receive an object implementing the IWizardCustomization<TModel> interface. This interface exposes methods covering different aspects of wizard customization. For example, it allows you to register custom wizard pages and obtain various wizard resources from an internal container. To apply your wizard customization logic to the End-User Report Designer for WinForms, pass an instance of your IWizardCustomizationService implementation to the report designer's XRDesignMdiController.AddService method. The following code examples illustrate the wizard customization API.
https://docs.devexpress.com/XtraReports/118388/create-end-user-reporting-applications/winforms-reporting/end-user-report-designer/api-and-customization/wizard-customization-overview
2019-10-14T03:30:09
CC-MAIN-2019-43
1570986649035.4
[]
docs.devexpress.com
Return Values for the CStr Function (Visual Basic) The following table describes the return values for CStr for different data types of expression. CStr and Date. Note The CStr function performs its conversion based on the current culture settings for the application. To get the string representation of a number in a particular culture, use the number's ToString(IFormatProvider) method. For example, use Double.ToString when converting a value of type Double to a String. See also Feedback
https://docs.microsoft.com/en-us/dotnet/visual-basic/language-reference/functions/return-values-for-the-cstr-function
2019-10-14T04:51:39
CC-MAIN-2019-43
1570986649035.4
[]
docs.microsoft.com
Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
https://docs.microsoft.com/en-us/office/vba/api/access.optiongroup.inselection
2019-10-14T04:21:57
CC-MAIN-2019-43
1570986649035.4
[]
docs.microsoft.com
Keyboard Support Microsoft Silverlight will reach end of support after October 2021. Learn more.. - Keyboard Events - Keyboard Event Handlers - Attaching a Keyboard Event Handler - Defining a Keyboard Event Handler - Using KeyEventArgs - Keyboard Routed Events - Text Input and Controls - Keyboard Event Handling and Reentrancy - Input Method Editors - Related Topics Keyboard Events. Keyboard Event Handlers. Attaching a Keyboard Event Handler You can attach keyboard event-handler functions for any Silverlight object that includes the event as a member (any UIElement derived class). The following XAML example shows how to attach handlers for the KeyUp event for a Canvas. <Canvas KeyUp="Canvas_KeyUp"> ... </Canvas> Attaching a keyboard event handler using code is not covered here. See Events Overview for Silverlight. Defining a Keyboard Event Handler The following example shows the incomplete event handler definition for the OnKeyUp method attached above. void Canvas_KeyUp(object sender, KeyEventArgs e) { //handling code here } Private Sub Canvas_KeyUp(ByVal sender As Object, ByVal e As KeyEventArgs) 'handling code here End Sub Using KeyEventArgs. void Canvas_KeyUp(object sender, KeyEventArgs e) { //check for the specific 'v' key, then check modifiers if (e.Key==Key.V) { if ((Keyboard.Modifiers & ModifierKeys.Control) == ModifierKeys.Control) { //specific Ctrl+V action here } } // else ignore the keystroke } Private Sub Canvas_KeyUp(ByVal sender As Object, ByVal e As KeyEventArgs) 'check for the specific v key, then check modifiers If (e.Key = Key.V) Then If ((Keyboard.Modifiers And ModifierKeys.Control) = ModifierKeys.Control) Then 'specific Ctrl+V action here Else 'ignore the keystroke End If End If End Sub Keyboard Routed Events. <StackPanel KeyUp="TopLevelKB"> <Button Name="ButtonA" Content="Button A"/> <Button Name="ButtonB" Content="Button B"/> <TextBlock Name="statusTextBlock"/> </StackPanel> The following example shows how to implement the KeyUp event handler for the corresponding XAML content in the preceding example. void TopLevelKB(object sender, KeyEventArgs e) { if (e.Key!=Key.Unknown) { String msg = "The key " + e.Key.ToString(); msg += " was pressed while focus was on " + (e.OriginalSource as FrameworkElement).Name; statusTextBlock.Text = msg; } } Private Sub TopLevelKB(ByVal sender As Object, ByVal e As KeyEventArgs) If Not (e.Key = Key.Unknown) Then Dim fe As FrameworkElement = e.OriginalSource Dim msg As String = "The key " & e.Key.ToString() msg = msg & " was pressed while focus was on " msg = msg & fe.Name statusTextBlock.Text = msg End If End Sub? OriginalSource is how you. void ButtonKB(object sender, KeyEventArgs e) { if (e.Key!=Key.Unknown) { e.Handled=true; String msg = "The key " + e.Key.ToString(); msg += " was handled while focus was on " + (sender as FrameworkElement).Name; statusTextBlock.Text = msg; } } Private Sub ButtonKB(ByVal sender As Object, ByVal e As KeyEventArgs) If Not (e.Key = Key.Unknown) Then e.Handled=True Dim fe As FrameworkElement = sender Dim msg As String = "The key " & e.Key.ToString() msg = msg & " was handled while focus was on " msg = msg & fe.Name statusTextBlock.Text = msg End If End Sub. Text Input and Controls Certain controls react to keyboard events with their own handling. For instance, a TextBox is a control that is designed to capture and then visually represent text that was entered by using the keyboard, and, and. Keyboard Event Handling and Reentrancy). Input Method Editors. See Also
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/cc189015%28v%3Dvs.95%29
2019-10-14T04:28:21
CC-MAIN-2019-43
1570986649035.4
[]
docs.microsoft.com
Performance Tuning for Oracle Publishers Applies to: SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Parallel Data Warehouse. The Oracle publishing architecture is similar to the Microsoft SQL Server publishing architecture; therefore, the first step in tuning Oracle replication for performance is to follow the general tuning recommendations found in Enhance General Replication Performance. In addition, there are two performance-related options for Oracle Publishers: Specifying the appropriate publishing option: Oracle or Oracle Gateway. Configuring the transaction set job to process changes on the Publisher at an appropriate interval. Specifying the Appropriate Publishing Option: Specify this option when identifying the Oracle Publisher at the SQL Server Distributor. For more information, see Create a Publication from an Oracle Database. Configuring the Transaction Set Job: For more information, see Configure the Transaction Set Job for an Oracle Publisher (Replication Transact-SQL Programming). See Also Configure an Oracle Publisher Oracle Publishing Overview Feedback
https://docs.microsoft.com/en-us/sql/relational-databases/replication/non-sql/performance-tuning-for-oracle-publishers?view=sql-server-2017
2019-10-14T03:59:25
CC-MAIN-2019-43
1570986649035.4
[]
docs.microsoft.com
The Color Editor opens when you click the + icon on the properties panel, or the Change or New Shade option on the Theme Color context menu. Here is the Color Editor... Here I’m editing Theme Color 1. The black crosshair shows the new color I’ve chosen, which is the New red color. I can easily compare it with the Old red color. This is great when making subtle changes, to compare against the previously used color. You can edit color names: just tap on the name field and edit. You can also copy and edit the hex RGB value from the Color value field at the bottom. Or I could select a totally different color.
https://docs.xara.com/en/articles/1129777-the-color-editor
2019-10-14T04:48:02
CC-MAIN-2019-43
1570986649035.4
[array(['https://downloads.intercomcdn.com/i/o/52719141/504d4cf4dc922ea934e3a6bd/ColEd.jpg', None], dtype=object) ]
docs.xara.com
Tracking errors Error handling provides visual cues within the workflow. This information is available in a tooltip when you point to a workflow activity in an error state.
https://docs.servicenow.com/bundle/geneva-servicenow-platform/page/administer/workflow_administration/concept/c_WorkflowErrorHandling.html
2019-10-14T04:03:03
CC-MAIN-2019-43
1570986649035.4
[]
docs.servicenow.com
Configure Splunk forwarding to use the default certificate The default root certificate that ships with Splunk software is the same root certificate in every download. That means that anyone who has downloaded Splunk software has server certificates that have been signed by the same root certificate. To configure forwarding with the default certificate (requireClientCert=false): 1. In outputs.conf, set: [tcpout] server = 10.1.12.112:9997 sslVerifyServerCert = false sslRootCAPath = $SPLUNK_HOME/etc/auth/... Splunk Enterprise recommends setting sslVerifyServerCert to false (which is the default). 2. Restart splunkd. Next steps Next, you should check your connection to make sure your configuration works. See "Validate your configuration". I don't believe these default certificates live in these file paths anymore on 6.3.2: rootCA = $SPLUNK_HOME/etc/auth/cacert.pem serverCert = $SPLUNK_HOME/etc/auth/server.pem
https://docs.splunk.com/Documentation/Splunk/6.3.7/Security/ConfigureSplunkforwardingtousethedefaultcertificate
2019-10-14T03:49:25
CC-MAIN-2019-43
1570986649035.4
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Marketmakers The DutchX is an open, decentralized trading protocol for ERC20 tokens that uses the Dutch auction mechanism to determine a fair value for the tokens. The mechanisms used on the DutchX differ from those of orderbook-based exchanges, and the DutchX has a number of inherent benefits for high-volume traders. Check out some details in this slide deck: Looking for Marketmakers? In case you would like technical help listing your token on the DutchX Protocol and running minimal liquidity bots, we have collected a handful of market makers that are able to facilitate this. We have been in touch with all of them, have explained the exact mechanism, and are confident that they are technically able to interact with the DutchX protocol. We have no contractual relations with them and do not know what proposal they will make or which services they include. In no particular order, feel free to reach out to: - Keyrock: write an email to [email protected] with the subject Market Making for DutchX + [Your Name] - POINT95 Global Trading Limited: write an email to [email protected] - Prycto: write an email to [email protected]
https://dutchx.readthedocs.io/en/latest/market-makers.html
2019-10-14T03:28:20
CC-MAIN-2019-43
1570986649035.4
[]
dutchx.readthedocs.io
Choose how often emails are sent and whether to combine alerts for multiple items in a single message. Alerts are processed with a SharePoint Timer Job. Timer jobs are monitored in SharePoint Central Administration. If desired, the schedules for the default jobs can be modified in Central Admin. There are five available choices when configuring an alert. Four of them are associated with a default timer job that come with the Alert Plus Web Part out-of-the-box and the fifth choice causes a custom timer job to be created. Learn more about each choice in the table below.
https://docs.bamboosolutions.com/document/how_often_should_e-mails_be_sent/
2020-10-20T02:52:32
CC-MAIN-2020-45
1603107869785.9
[]
docs.bamboosolutions.com
{"metadata":{"image":[],"title":"","description":""},"api":{"url":"/v3/transfers/","auth":"required","apiSetting":"5d1d8bb82d46d1004a02581d","examples":{"codes":[{"language":"curl","code":"curl -X POST \\\n -H 'Content-Type: application/json' \\\n -H 'Authorization: Bearer YOUR_TOKEN' \\\n -H 'Accept: application/json' \\\n -d '{\"target_address\": \"1a2b3c\", \"amount\": 0.0001, \"account\": \"234abc\"}' \\\n\n"},{"language":"python","code":"import json\nimport requests\n\nTOKEN = 'YOUR TOKEN'\nurl = ''\n\nheaders = {\n 'Authorization': 'Bearer {}'.format(TOKEN),\n 'Content-Type': 'application/json;charset=UTF-8',\n 'Accept': 'application/json'\n}\n\nbody = json.dumps({\n \"account\": \"234abc\",\n \"target_address\": \"1a2b3c\",\n \"amount\": 0.0001\n})\n\n\nrequests.post(url, headers=headers, data=body)"}]},"results":{"codes":[{"status":201,"language":"json","code":"{\n 'transfer': {\n 'status': 'pending',\n 'account': '234abc',\n 'exchange': null,\n 'created_at': '2015-05-28T02:58:12.984219Z',\n 'payment': 'x45b1',\n 'target_address': '1a2b3c',\n 'amount': '0.00010000',\n 'id': '1ba2b'\n }\n}","name":""}]},"params":[{"name":"account","type":"string","default":"","desc":"The [account](doc:crypto-accounts) id of the sender.","required":true,"in":"body","ref":"","_id":"5d1d9a69184c8e025e303aaf"},{"name":"target_address","type":"string","default":"","desc":"The phone number, email address, or wallet address of the recipient.","required":true,"in":"body","ref":"","_id":"5d1d9a69184c8e025e303aae"},{"name":"amount","type":"string","default":"","desc":"The amount to transfer, denominated in the currency of the source account.","required":true,"in":"body","ref":"","_id":"5d1d9a69184c8e025e303aad"},{"name":"currency","type":"string","default":"","desc":`.","required":false,"in":"body","ref":"","_id":"5d1d9a69184c8e025e303aac"},{"name":"verification_code","type":"string","default":"","desc":"A two factor code from the user. Only used when the sender has enabled two factor authentication.","required":false,"in":"body","ref":"","_id":"5d1d9a69184c8e025e303aab"},{"name":"message","type":"string","default":"","desc":"Optional comment to for the transaction","required":false,"in":"body","ref":"","_id":"5d1d9a69184c8e025e303aa9"}],"method":"post","settings":"5d2c16e9ab606a0056359c3c"},"next":{"description":"","pages":[]},"title":"transfers","type":"endpoint","slug":"transfers-create","excerpt":"Transfer funds between two accounts","body":"[block:callout]\n{\n \"type\": \"warning\",\n \"body\": \"Please note that you must be business verified to be able to make a transfer. For more information, you may contact [business:::at:::coins.ph](mailto:[email protected]).\"\n}\n[/block]\nThe transfers API allows sending funds between coins wallet accounts of different users. It allows sending funds with different currencies (ie. 
`PHP->BTC`, `BTC->PHP`, `PHP->PHP`, `BTC->BTC`) and takes care of the necessary conversion between currencies.\n\n## Properties\n\n* **id** - ID of the transfer record.\n* **account** - [Account](doc:crypto-accounts) ID of the user making the transfer.\n* **target_address** - The wallet address of the recipient.\n* **payment** - Specifies the outgoing payment ID of the transfer if the transfer was successful.\n* **exchange** - The exchange ID for non BTC->BTC transfers, such as BTC->PHP.\n* **status** - Specifies whether the transaction is `pending`, `success`, or `failed`.\n* **created_at** - Specifies the date when the transfer was created.\n\n## Errors\n\n### Fee amount errors\n\n* HTTP 400 - **Whoops! We had to update the Blockchain Fee to 0.0001. Please confirm before continuing.** Fee amount is optional parameter, but when it's provided, we compare it with the current fee 0.0001.\n\n### Amount errors\n\n* HTTP 400 - **Ensure this value is greater than or equal to X** The error depends on currency limits which are set in the system.\n* HTTP 400 - **Ensure this value is lower than or equal to X.** The error depends on currency limits which are set in the system.\n\n### Currency errors\n\n* HTTP 400 - **Unsupported currency.** The error appears when you specified a currency that is different from currency of sender account or target address.\n\n### Target address errors\n\n* HTTP 400 - **You cannot transfer to yourself.** This happens when the specified target address is owned by the sender.\n* HTTP 400 - **Target account not found.** This happens when the specified target address does not exist.\n* HTTP 400 - **Invalid address format.** API failed to recognise a type of target address.\n* HTTP 400 - **You cannot transfer between addresses of the same account.**\n* HTTP 400 - **Target account not found.** In case we find out that target address is \"internalcoin\" (for example PBTC) and it does not exist, we return the error.\n* HTTP 400 - **Currency mismatch.** It happens when user tries to send funds from account to target address and its currency is different from the account.\n* HTTP 400 - **Direct transfer is not allowed.** We limit ability to send money from/to non Bitcoin accounts except the system ones.\n\n### Non field errors\n\n* HTTP 400 - **Sending funds from your account is temporarily disabled, please contact our customer support team.** This happens when the sender's account is currently disabled.\n- HTTP 400 - **Insufficient balance.** It happens when account does not have enough funds\n to create a payment. Historically it's part of non field errors.\n- HTTP 400 - **Phone or email verification is required.**\n- HTTP 400 - **This is a flagged Bitcoin address used in Phishing attacks, for the safety of your funds, we have locked down your account.Please contact our support team to reactivate.** We block users who try to send Bitcoins to a blacklisted list of addresses. Original motivation was that our customers got a virus which replaced a Bitcoin address and funds were \"stolen\". 
To prevent that ASAP, we have added user blocking on API level.\n- HTTP 400 - **Sending bitcoins is temporarily disabled.**\n- HTTP 400 - **To transfer with low priority, please send at least 0.001 BTC.**\n- HTTP 400 - **Please update your application in order to process this payment.**\n- HTTP 400 - **This payment would result in exceeding recipient's limit.**\n- HTTP 400 - **This account is no longer active.** Sending to inactive accounts is not allowed.\n\n### Activity limit errors\n\n* HTTP 400 - **The amount you are sending will exceed your recipient's daily limits. Please try sending a smaller amount or wait 24 hours to send the funds.**\n* HTTP 400 - **This recipient's daily limits have been met. Please wait 24 hours to send funds.**\n* HTTP 400 - **The amount you are sending will exceed your recipient's limits. Please try sending a smaller amount.**\n* HTTP 400 - **Unfortuately you cannot send to this recipient at this time because their limits have been met.**","updates":["5f51e3fe0d514c00428f8ad5"],"order":0,"isReference":true,"hidden":false,"sync_unique":"","link_url":"","link_external":false,"_id":"5d1d9a69184c8e025e303aa:06:08.234Z","createdAt":"2019-07-04T06:19:21.118Z","project":"544fc17e698ab40800b4f891","category":{"sync":{"isSync":false,"url":""},"pages":[],"title":"Wallets","slug":"wallets-1","order":11,"from_sync":false,"reference":true,"_id":"5d1d99810b2e4600500eb5ff","project":"544fc17e698ab40800b4f891","version":"56326e9cdf556c0d00cd08ca","isAPI":false,"createdAt":"2019-07-04T06:15:29.680Z","__v":0},"user":"5d19a189b4596f0072f571d4","__v":7,"parentDoc":null} posttransfers Transfer funds between two accounts Definition {{ api_url }}{{ page_api_url }} Parameters Body Params account: required string The [account](doc:crypto-accounts) id of the sender. target_address: required string The phone number, email address, or wallet address of the recipient. amount: required string The amount to transfer, denominated in the currency of the source account. currency: string`. verification_code: string A two factor code from the user. Only used when the sender has enabled two factor authentication. string Optional comment to for the transaction
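Since every failure case above is reported as HTTP 400 with a descriptive message, client code can simply branch on the status code. The sketch below illustrates that pattern; the base URL and the account/address values are placeholders rather than real coins.ph identifiers, and the exact shape of the error payload may differ.

    # Hedged sketch: create a transfer and surface any HTTP 400 error message.
    import json
    import requests

    TOKEN = "YOUR_TOKEN"
    url = "https://example.invalid/v3/transfers/"  # placeholder base URL

    resp = requests.post(
        url,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        data=json.dumps({"account": "234abc", "target_address": "1a2b3c", "amount": "0.0001"}),
    )

    if resp.status_code == 201:
        print("transfer created:", resp.json()["transfer"]["id"])
    elif resp.status_code == 400:
        # e.g. "Insufficient balance." or "Unsupported currency."
        print("request rejected:", resp.text)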
https://docs.coins.asia/docs/transfers-create
2020-10-20T02:26:54
CC-MAIN-2020-45
1603107869785.9
[]
docs.coins.asia
Covered in this doc High-level summary of what Percy does Overview of how to get started with visual testing Percy is an all-in-one visual testing and review platform. We help teams automate manual QA to catch visual bugs and gain insight into UI changes on each commit. Our goal is to give you and your team confidence in the visual integrity of your UI every time you deploy. Learn more about the benefits of visual testing. How it works Percy handles everything from capturing and rendering screenshots, to detecting and notifying your team of visual changes. Step 1: Integrate Percy into your stack To integrate Percy, start by installing one of our SDKs to add snapshot commands where you'd like visual coverage. We’ve built several SDKs to help you get started with visual testing for your websites, web applications, components, and end-to-end testing frameworks. You can also build your own SDK. If you're unsure about which SDK you should use, feel free to reach out to support. Step 2: Run visual tests Percy is designed to integrate with your tests and CI environment. Just set the PERCY_TOKEN environment variable in your CI environment. Note: Although we strongly recommend running Percy via your CI workflow, you can also run Percy locally and may find it helpful to do so while you’re first getting started. Regardless of how you’re running your visual tests, Percy handles all asset discovery, rendering, and visual change detection. When you push changes to your codebase, Percy captures DOM snapshots, renders screenshots, and compares them to previously generated screenshots to see if any pixels have changed. Screenshots for individual components and pages are grouped into snapshots, and snapshots are grouped into builds to be reviewed. In the image above, the left area is the baseline screenshot and the area on the right is the new screenshot overlayed with the visual diff—the changed pixels highlighted in red. Percy uses a variety of strategies to determine the optimal baseline for comparison, including any installed source code integrations, the default branch of the project, and which commits have previously finished builds. Step 3: Review and approve visual changes Our review workflow is designed to keep your team up-to-date with visual changes on every commit. With our source code integrations, we manage your pull/merge request statuses, notifying you when visual changes are detected and changes are requested. With a source code integration enabled, one click will take you from your pull request to Percy where you can review all visual changes. Our side-by-side UI makes it easy to tell what exactly has changed, and to spot visual regressions across responsive widths and browsers. Your team is alerted when changes are requested or when changes are approved and ready to ship! Visual testing with Percy gives teams complete insight and confidence on each and every deploy. You can easily catch visual bugs, review design changes, and be sure that every pixel you deploy to customers is correct, before they even see it. Updated 4 months ago What's next Learn more about the platform, check out our 2-minute tutorials, or jump right into integrating an SDK.
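As a concrete illustration of steps 1 and 2, the sketch below shows roughly what adding a snapshot call to an existing Selenium test looks like. It assumes Percy's Python Selenium SDK (with a percy_snapshot helper) and a test run wrapped by the Percy CLI with PERCY_TOKEN set; check the SDK docs for the exact package name and import, since this is a sketch rather than copy-paste instructions.

    # Sketch of a Percy snapshot inside a Selenium test (assumes the Python Selenium SDK).
    # Run under the Percy CLI with PERCY_TOKEN exported, e.g.:
    #   export PERCY_TOKEN=<your project token>
    #   npx percy exec -- python test_homepage.py
    from selenium import webdriver
    from percy import percy_snapshot  # helper name assumed from the Python SDK

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")      # page under test (placeholder URL)
        percy_snapshot(driver, "Homepage")     # captures a DOM snapshot for this build
    finally:
        driver.quit()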
https://docs.percy.io/docs/getting-started
2020-10-20T03:48:28
CC-MAIN-2020-45
1603107869785.9
[array(['https://files.readme.io/fdae6c8-changes-requested-docs-1.png', 'changes-requested-docs-1.png'], dtype=object) array(['https://files.readme.io/fdae6c8-changes-requested-docs-1.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/db45e94-github-approved_copy.png', 'github-approved copy.png'], dtype=object) array(['https://files.readme.io/db45e94-github-approved_copy.png', 'Click to close...'], dtype=object) ]
docs.percy.io
The release notes cover the following topics: What's New in this Release With Smart Assurance 10.1.2 release, the following features, enhancements, and changes are introduced: Miscellaneous enhancements: [VM-ER-8]: MBIM server logging has been enhanced to capture the device details with Device name and duration of maintenance windows. Henceforth, the details of the scheduled maintenance device can be retrieved later for any debugging or auditing purposes. [ER-1063]: The processing of the Dying Gasp trap for SMARTS has been enhanced, so that it can process the trap from Cisco Device running on the IOS release 12.2(60)EZ10 and later. [VM-ER-3]: SAM has been enhanced to allow operators to add any device component into the maintenance windows if the device component is present in the SAM topology. [VM-ER-54]: The VeloCloud VEdge instance in the ESM server has a new attribute “activationTime” that captures the activation time for the edge. - [VM-ER-59] [VM-ER-102]: The VeloCloud VEdge instance in the ESM server has been enhanced with new additional attributes: - Model: Represents the model number of the Edge. The “modelNumber” property that is returned by the getEdge REST API is used to populate this field. - Type: Represents the device family of the Edge. The “deviceFamily” property that is returned by the getEdge REST API is used to populate this field. - Location: It is of the form <country>,<city>,<street>. Represents the location of the Edge. - haSerialNumber: Represents the HA serial number for this edge. The haSerialNumber property that is returned by the getEdge REST API is used to populate this field. - [VM-ER-81]: The VeloCloud VEdge instance in the ESM server has the below two attributes populated with the proper value: - id: Represents the Edge id as maintained by the VCO. - TenantId: Represents the tenant id for the Edge. - [VM-ER-63]: If the user changes the configuration at a Tenant/Edge level, then to get the new changes, the incremental discovery can be used. The incremental discovery can be triggered for an existing Tenant/Edge. The rediscovery can only be triggered using a remote API client or dmctl command. Refer to, VeloCloud SD-WAN Monitoring section in the Deployment Scenario guide for more information. - [VM-ER-95]: In SAM, the Element Name attribute of the SAM notification is the same as the device name whenever there is a notification on AggregatePort. This behavior needs to be explicitly enabled by setting the “ElementNameMapToDevice” attribute of the ICS_NotificationFactory::ICS-NotificationFactory and reattaching the underlying domain or restarting the SAM server. For security vulnerabilities addressed in Smarts, see Smart Security Update for Multiple Vulnerabilities. Third-party support 10.1.2 changes - Java is upgraded to OpenJDK Runtime Environment Zulu 11.0.7. - ASAM java upgraded to 1.8.0_252. - SAM console java upgraded to 1.8.0_252. - Tomcat upgraded to 9.0.37 (for SAM 64 bit and 32 bit). - Jackson-databind upgraded to 2.11 (for IP only). For OSL (Open Source License) file (open_source_license_VMware_Smart_Assurance_GA.txt), navigate to <BASEDIR>/smarts/setup/osl. Platform Support The VMware Smart Assurance SAM, IP, ESM, MPLS, and NPM Managers Support Matrix available from the VMware Support website provides the latest platform and interoperability information. For detailed information about platform support and interoperability, refer support matrix for your release. Note: In SMARTS 10.1.2 release, some of the document(s) do not require modification. 
The older version document(s) are released as it is. Resolved Issues - SMARTA-329 / SR-19008628706 Issue with InputPacketBroadcastRate calculation query. Resolution:The InputPacketBroadcastRate formula has been modified, to calculate rate, on the basis of ifHCInBroadcastPkts packets. - SMARTA-642 / SR-20093959701 ESM with VeloCloud feature enabled consumes 100% CPU, and the domain becomes unusable once this happens. Resolution:The FD leak in the ESM monitoring subsystem has been fixed to resolve this issue. - SMARTA-700 / SR-20110190003 ESM with VeloCloud feature enabled reports “java.lang.OutOfMemoryError: Java heap space”. Resolution: ESM server is changed to consume the kafka message at a configurable rate. This is achieved by two configurable parameters specified in esm-param.conf. The parameter and its description are given below. When these parameters are not set the fix is not enabled and ESM server consumes all the outstanding messages in the kafka bus. The below parameter is for specifying the maximum size of the kafka adapter internal queue size at which the consumer is stopped: # to minimize the memory footprint of the ESM server. #VCO_KAFKA_ADAPTER_QUEUE_HIGH_WATERMARK 50000 # The below parameter is for specifying the minimum size of the kafka adapter internal queue size at which the consumer is restarted. #VCO_KAFKA_ADAPTER_QUEUE_LOW_WATERMARK 5000 - SMARTA-556 / SR- 19080029711 VeloCloud discovery for the MSP user takes longer time. Resolution: The redundant loops has been fixed, which brought down the discovery time to an acceptable range. - SMARTA-716 / SR- 20093544701 Changing the VeloCloud Edge/Enterprise name in the VCO breaks the monitoring for that Edge. Resolution: Instead of using the Edge name and Enterprise name in naming the VEdge instance the Edge id and Enterprise id are used. The DisplayName property of the VEdge will still have useful reference to the enterprise name. - SMARTA-684 / SR- 20103429002 Partitions are having devices from other customers, despite using IP tagging. Resolution: During the partition creation, SMARTS invoke getNeighbors() method, and this method has been altered not to return IPNetwork tagged neighbors. Hence, code has been modified to ensure partition does contain members from non-tagged IPNetworks. - SMARTA-648 / SR- 20095306401 Smarts IP slow startup and dull performance. Resolution: As part of the fix, a new flag (DisableIPNetworkGuessPattern) has been introduced in tpmgr-param.conf. Customer can update this flag with the respective IPNetwork (in the current case its 10.0.0.0) for which SMARTS will not guess the Netmask for the same and will be excluded being part of IPNetwork. The default value of the flag is : 10.107.119.0|FDCC:0:0:BD:0:0:0:0. The User can set any value to it for the first time via the tpmgr-param.conf. Post which; if the flag value needs to be changed, the below steps can be followed: Step 1: To reset the Flag, use the below command: ./dmctl -s <Incharge_domain_manager_Name> -b <broker>:<port> invoke ICF_TopologyManager::ICFTopologyManager insertParameter DisableIPNetworkGuessPattern " " Step 2: Execute the below command to check if the value is set properly. 
./dmctl -s <Incharge_domain_manager_Name> -b <broker>:<port> invoke ICF_TopologyManager::ICF-TopologyManager findParameter DisableIPNetworkGuessPattern Step 3: A new value can be set again or override by executing the below command: ./dmctl -s <Incharge_domain_manager_Name> -b <broker>:<port> invoke ICF_TopologyManager::ICF-TopologyManager insertParameter DisableIPNetworkGuessPattern "10.0.0.0" - SMARTA-549 / SR- 1907832321 The failover is not able to activate ESM domain due to errors in actions logs. Resolution: The obsoleted properties of the ESM server are removed from the relevant failover scripts. The changes are available in both ESM and SAM. - SMARTA-501 / SR- 19068962310 The IP Domain manager running for more than a day for a single device discovery. - SMARTA-693 / SR-20103165702 The SM_Config utility exports the details to default /local instead of SM_WRITEABLE. Resolution: When Initiating a remote action, the environment variable “SM_WRITEABLE” is set via SSH. - SMARTA-780 / SR 20115216704 The filter pattern is not working in System Name or IP Pattern tab in Application Signatures. Resolution: The code has been modified to enhance the filter pattern to neutralize the systems for the application creation code block. - SMARTA-384 To incorporate the LoadBalancer class available into NPM and to effectively discover and monitor LoadBalancers on NPM domain manager. Resolution: Code has been modified to add the LoadBalancer class to the NPM domain manager, to import classes to create protocol endpoints and monitor them. - SMARTA-630 / SR- 19089925812 Multiple Fiberlink Line Failure alerts have been reported for same port. Resolution: Upstream/Downstream RCA correlation has been enhanced to deduce a single root cause for LOS ( Loss Of Signal) failures at multiple WDM (OTS-OMS-OCH) layers. - SMARTA-766 / SR- 20119403804 Multiple Fiberlink notifications generated for IsSignalDegradeDetected event. Resolution: Upstream/Downstream RCA correlation has been enhanced to deduce a single root cause for SD (Signal Degrade) and LOS failures at multiple WDM (OTS-OMS-OCH) layers . Known Issues The known issues are grouped as follows. - Known Issues in SAM - Known Issues in ESM - Known Issues in IP - Known Issues in MPLS - Known Issues in NPM - Known Issues in Smart Assurance UI - Common Issues to all products - Known Issues in Smart Assurance Management Pack - Known Issues in ASAM - Known Issues in ACM - SMAR-1567 Installing SAM console on the Linux platform is not creating applications "VMware Smarts Global Console" icon for quick launch. User needs to go to install directory and type sm_gui command only, the icon option is not available. This appears when previous version 9.5.1. or 9.6 console is already installed. - SA1-883 The map icons for VEdge and VGateway are missing for “SDWAN VCO Connectivity” in the 10.0 SAM console. Workaround: User has to either use 10.1 SAM console or edit map icons for VEdge and VGateway classes. - VSAC-209 Default DCF Password is not working for DCF SAM-OI Communication. Notification will not be pulled from vROPS to SAM-OI. Workaround: Create a new user and password in DCF at <DCF Location>/Tools/Webservice-Gateway/Default/conf/users. For example: admin1:admin123at users file, and restart the Webservice-Gateway in DCF. - SMAR-1444 All Services are not installed post upgrade for SAM and SAM-Console. Workaround: Refer Services for the Service Assurance Manager section in VMware Smart Assurance Installation Guide to install services manually. 
- SMAR-1727 Incase of Console mode installation and upgrade, below irrelevant message is displayed in command prompt which does not have any functional impact: cwd: C:\Users\Administrator\AppData\Local\Temp\2\I1559131575\Windows cmd: "C:\Users\Administrator\AppData\Local\Temp\2\I1559131575\Windows\resource\jre\bin\java.exe" --add-opens java.base/jdk.internal.loader=ALL-UNNAMED -Xms16777216 -Xmx50331648 -classpath "C:\Users\Administrator\AppData\Local\Temp\2\I1559131575\InstallerData\IAClasses.zip;C:\Users\Administrator\AppData\Local\Temp\2\I1559131575\InstallerData\Execute.zip;C:\Users\Administrator\AppData\Local\Temp\2\I1559131575\Windows\InstallerData\Execute.zip;C:\Users\Administrator\AppData\Local\Temp\2\I1559131575\InstallerData\Resource1.zip;C:\Users\Administrator\AppData\Local\Temp\2\I1559131575\Windows\InstallerData\Resource1.zip;C:\Users\Administrator\AppData\Local\Temp\2\I1559131575\InstallerData;C:\Users\Administrator\AppData\Local\Temp\2\I1559131575\Windows\InstallerData;" com.zerog.lax.LAX "C:/Users/Administrator/AppData/Local/Temp/2/I1559131575/Windows/setup-CONSOLE-10_0_0_0-win.lax" "C:/Users/Administrator/AppData/Local/Temp/2/lax827D.tmp" -i console - VSAC-32 The following map error warning message appears for INCHARGE-SA and INCHARGE-OI in respective logs, while starting the server. WARNING: register package 'map_error' version 468026d2 failed (0x276f4e00) [May 11, 2020 2:50:50 AM EDT +804ms] t@375936768 InCharge Framework FDXAM-*-DXA_TOPOCONC-TopologySync is running in CONCURENT mode with minimum delay=120 seconds. WARNING: register package 'Map_mm' version a1894c1c failed (0x276f4e00) [May 11, 2020 2:50:52 AM EDT +072ms] t@375936768 InCharge Framework There is no functional impact due to this issue. - VSAC-200 The following ASL error message appears for INCHARGE-SA and INCHARGE-SA-PRES server, while starting the servers. [May 11, 2020 2:50:50 AM EDT +710ms] t@881858304 InCharge Framework ASL-W-ERROR_RULE_SOURCE-While executing rule set '/opt/INCHARGE10120/SAM/smarts/rules/import.asl' ASL-ERROR_ACTION-While executing action at: ASL-CALL_STACK_RULE- RuleName: INSERT, Line: 181 ASL-CALL_STACK_RULE- RuleName: OBJECT_PROPERTY, Line: 177 ASL-CALL_STACK_RULE- RuleName: OBJECT_PROPERTIES, Line: 171 ASL-CALL_STACK_RULE- RuleName: OBJECT_VALUE, Line: 138 ASL-ERROR_INSERT-While attempting to insert into property 'MapViews' of object 'Map_Containment::Map-Containment' MR-CALL_OBJ_PROP-Object: Map-Containment property: Map-Containment; in file "/work/redcurrent/DMT-10.1.2.0/3/smarts/repos/mr/obj.c" at line 2102 MR-KEY_DUPLICATION-violation of uniqueness requirement for a key [May 11, 2020 2:50:52 AM EDT +095ms] t@881858304 InCharge Framework There is no functional impact due to this issue. - VSAC-250 The Rabbit MQ service is not started in SAM, due to the following error: [root@vmwbgc052 bin]# ./sm_rabbitmq --ignoreme ERROR: epmd error for host vmwbgc052: address (cannot connect to host/port) [root@vmwbgc052 bin]# Workaround: If the RabitMQ process fails to start, you need to add the following mapping in the /etc/hosts file: 127.0.0.1 <FQDN of the host> - VSAC-253 In some hosts, the pre-installed services such as smarts-tomcat and smarts-elasticsearch may fail to start. Workaround: Restart the ic-serviced daemon process by running the following commands: /etc/init.d/ic-serviced stop /etc/init.d/ic-serviced stop Note: If the ic-business-dashboard service in SAM Console does not start, then restart the ic-serviced daemon process by running the earlier commands. 
- VSAC-223 After SAM 10.1.2.0 upgrade, epmd service-related errors appear in the SAM upgrade log: /opt/InCharge/SAM/smarts/toolbox/OTP/erts-9.2/bin/epmd (Text file busy) at com.zerog.util.expanders.ExpandToDiskZip.ag(Unknown Source) at com.zerog.util.expanders.ExpandToDiskPMZ.ab(Unknown Source) - VSAC-261 SMARTS NCM adapter fails to connect to VSA domain managers when there are a higher number of domains registered with the broker. As broker.getDomainManagers() API returning all domain managers attached to the broker and due to the character variable size restriction at the NCM adapter code, there are failures in processing the domain manager list. The acceptable characters are limited to 256. Workaround: Connect to the broker where there is less number of domain managers registered. - SMAR-1558 Clearwater SNMP collector does not support SNMP V3, Clearwater supports SNMP v2c and UDP protocol. - SMAR-1539 Whenever user performs discoverAll in ESM, it triggers the discovery for all the hosts in the ESM Server and as well all the underlying INCHARGE-AM-PM servers. The host discovery triggers an Orchestrator discovery. Also the INCHARGE-AM-PM discovery triggers another Orchestrator discovery. So if there is a Orchestrator discovery which is InProgress, then if there another request for the same orchestrator discovery, then this request will be blocked and the device will be moved to the pending list. There is no functionality impact as one of the discovery is completed successfully. - SMAR-1121 Badge Reports are having negative value in vIMS performance reports in vROps. - SMAR-1112 Manage/Unmanage option is not available for NetworkConnection Class instances. Workaround: To Unmanage the NetworkConnection, user needs to Unmanage one of the connected interface of that NetworkConnection. - SMAR-1509 When all the EdgeNodes of NSX-T goes down which has Tier 0 Router due to ESX(Host)Down or VM going down then Compute VMs host Unresponsive will not be explained by the Host(ESX) / VM Down RCA alert. Currently there is no workaround available however RCA alert will be shown only impact to Host unresponsive will be not available for NSX-T Compute VMs. In the impact list EdgeNode Down, LogicalRouter(T0) Down will be shown. - SMAR-1668 User needs to mandatorily discover ESX Servers for getting Virtual Machine Down event. Currently the Virtual Machine Down event is not generated if the corresponding ESX Servers are not discovered in IP Server. So, its recommended to discover Virtual Machines to get proper Root cause events. - SMAR-1666 Manage / Unmanage option is not available for network connection class instances, However the Instrumented by relationship is available with NetworkConnection_Fault class. - SMAR-1470 While deploying clearwater collector through DCC, user have an option to provide Community string and port through clientConnect.conf, but it is limited to only one. The user can update the community string in agents-groups.xml file of DCF collector, however string does not get encrypted. - SA1-576 When Kafka is not reachable, monitoring failure event appears in SAM after reconfiguration of ESM Server. Workaround: - If Kafka server goes down, a notification for VCD monitoring failure appears in SAM once ESM Server is reconfigured. - When Kafka is brought back, the failure notification is cleared from SAM once ESM server is reconfigured. - SMAR-1848 Exception appears when selecting "Tiered application map" by right clicking on any map icon. 
- VSAC-126 Mapping of ElementName attribute to the device for notification on AggregatePort is missing. ElementName appears as Aggregate Port instance name instead of Device name. Workaround: Detach the INCHARGE-AM-PM from SA and underly it again. Once you re-attach INCHARGE-SA-PRES to the domain manager, you may lose all the active notifications. - SA11-1244 The following error message appears in the ESM log while discovering VCD and its dependant components like NSX-T, vCenter, etc.. NV_MESSAGE-*-NV_GENERIC-MSG ERR : [Thread-8 DmtObjectFactory]:insert(TransportNodeInterface::10.107.146.14/10.107.146.69::ConnectedVia += Tunnel:): SVIF-E-EREMOTE-Remote error occurred. See exception chain for detail.; in file "/work/redcurrent/DMT-10.1.2.0/3/smarts/skclient/linux_rhAS50-x86-64/optimize/SmLocalInterfaceHandler.c" at line 3623' should be empty. - VSAC-201 In the VeloCloud use case, the 'haSerialNumber' attribute is not getting updated for the incremental discovery of VEdge in VSA10.1.2. - SMAR-1555 Objects deleted in IP are not getting deleted in vROps even after object deletion time interval. Workaround: To delete the objects manually: - Go to Administration > Configuration > Inventory Explorer. - Select the objects to be deleted and click delete button (Bulk deletion is also supported). - SMAR-1544 A set of reports are created by default when user installs the Smart Assurance Adapter pack. These reports are retained in the dashboard even after uninstalling the management pack. Ideally, these reports must be deleted as part of the management pack uninstallation process. Workaround: User can manually delete the reports from the dashboard. - SMAR-1543 When a new credentials added without any username and password while configuring an instance of the adapter and edit it later with correct username and password, the password field wont be updated resulting the connection failure. Workaround: Edit the credentials again and update the password filed again and save it. - SMAR-1441 Special characters are present after migration from Windows to Linux. - SA1-958 Patch files are getting migrated, when performing migration on ESM 10.0.0.1 and in IP 10.0.0.2 to 10.1.0.0 respectively. Workaround: This is applicable only if migration is triggered on the below two products and patch versions: - ESM - 10.0.0.1 (ESM 10.0 patch 1) - IP - 10.0.0.2 (IP 10.0 Patch 2) When performing migration only on the above two versions, you need to follow the below procedure: Step 1: Collect the backup from the old installation using sm_migrate utility. -Run 'sm_perl sm_migrate.pl --old=<dir> --archive=<tar/zip> --sitemod=<local_directories>' from the old installation. Step 2: Perform manual migration on the customized files. -Copy the archive file from ‘step1’ to new host. -Run 'sm_perl sm_migrate.pl --archive=<tar/zip> --new=<dir>' from the new installation. -Once the sm_migrate utility backs up all the customizations under smarts directory (Eg: /opt/InCharge/IP/smarts/.migrate.bkp.10.0.0.2) and prompts for merging the customizations with below options, the user has to select “N” to not merge. a. Press 'n' to skip FileMergeUtilty. b. Press any other key to start FileMergeUtilty...[y]n Step 3: You need to manually copy customizations from backup directory (Eg: /opt/InCharge/IP/smarts/.migrate.bkp.10.0.0.2) to respective local directory and rename the files to remove ‘.local’ extension. - SA1-380 Null pointer exceptions are observed in IP server log file after Cisco ACI Discovery. 
- SMAR-1535 ASL error present in INCHARGE-MPLS-TOPOLOGY after starting server from fresh install. - SMAR-1512 Post migration from Windows to Linux, EIGRP server is not coming up. Workaround: While starting the EIGRP server use option --ignore-restore-errors. - SMAR-1365 EIGRP classes are not populated in NPM EIGRP, due to ASL error. - SMAR-1118 Null Value filtering is not possible form log view Filter. - SMAR-1108 When we scroll the scroll bar in notification window, the column name must not be hide, it must be fridge. Working fine in Chrome, but in IE, the column name is hiding. - SMAR-932 In case number of notification is 0 or 1 in Smart Assurance UI, the edit field in filter pop up is not visible. - SMAR-1615 The Smart Assurance UI fails to display live notifications above 10,000. - SMAR-1223 Expanded notification view does not provide distinguish between parent and child(impacted) notifications. - SMAR-1671 During Smarts Metric collector installation, if user wants to collect only Metrics or Topology data separately, and selects option 2 or 3 as displayed below: [1] AM/PM Topology & Metrics [2] AM/PM Metrics [3] AM/PM Topology The installation fails with errors. Workaround: During the collector installation user needs to select the default Option 1 to collect both metric and topology data. Once the installation is complete: - To collect only the Metric data, open <DCF Install Location>/Collecting/Collector-Manager/<Collector Name>/conf/collecting.xml - Change the value - To collect only the Topology data, open <DCF Install Location>/Collecting/Collector-Manager/<Collector Name>/conf/collecting.xml - Change the value - Restart the collector process. - SMAR-1377 During product upgrade, patch folder is not highlighted. - VSAC-220 On RHEL 7.8 version, if you start any domain manager as a service, the domain gets registered to a broker using both v4 and v6 IP address space. Due to this issue domain manager v6 entry will go to DEAD state in brcontrol output and the communication between the servers is failing sometimes due to this issue. Note: Issue also detected on some machines with RHEL 7.2 and 7.6 Workaround: To avoid a domain running in v6 mode, allow only v4, by setting the below flag in runcmd_env.sh file: SM_IP_VERSIONS=v4 Restart the domain manager, after updating runcmd_env.sh file. - SMAR-1569 "-help" command installer displays following invalid options: - -i [gui] - -jvmxms <size> - -jvmxmx <size> - -add <feature_name_1> [<feature_name_2 ...] - -remove <feature_name_1> [<feature_name_2 ...] - -repair - -uninstall - SMAR-1463 Smart Assurance products in silent mode are getting installed in root folder, when user disable or comment the user install directory (<Products>SUITE.installLocation=/opt/InCharge) in silent response file. - SA1-419 Exception appears in Collector logs when Orchestrator closes connection during discovery. - SMAR-1583 Few Log messages are tagged as ERROR in the collector log incorrectly. - SA1-719 The vCenter Management Pack does not have any API to detect floating IP. If a VNF (sprout or bono) is configured with floating IP and private IP, only the private IP is used to establish the relationship between VNFs (P-CSCF or I/S-CSCF) and VirtualMachines. - VSAC-143 ASAM server is not running in service way, but the user able to run via the Server way. When the user installs any application with java 11 enabled on the VM, ic-serviced (sm_serviced) is also installed, which is compiled with java 11. 
And when the user then installs another application that runs on Java 8 (in this case ASAM), that application crashes while its service (ic-asam) is starting. This is because all the executables installed under the ASAM bin directory are compiled with Java 8, which is not compatible with ic-serviced, so the crash occurs only when starting via the service. Workaround: Stop ic-serviced and start it again from the ASAM bin directory, and then try to start the server as a service again. Also, ASAM must be installed on a fresh VM without any other application installed alongside it (except the SAM CONSOLE for Linux, which is also on Java 8). - VSAC-258 When ACM is upgraded to 10.1.2, an ASL error appears while performing a full discovery (DiscoverALL) and after restarting the INCHARGE-AM, INCHARGE-OI, and ACM servers. The following error message appears in the ACM log: ASL-W-ERROR_RULE_SOURCE-While executing rule set '/opt/InCharge1012ACM/ACM/smarts/rules/app-sig/standard-probe.asl' ASL-ERROR_ACTION-While executing action at: ASL-CALL_STACK_RULE- RuleName: CREATE_TOPOLOGY, Line: 224 ASL-CALL_STACK_RULE- RuleName: START, Line: 73 ASL-ERROR_INVOKE-While attempting to invoke operation 'makeSoftwareServiceOnHost' of object 'Application_SourceObjectFactory::Application-SourceObjectFactory-AS-IANA-smarts-broker_hostname=10.62.72.138-STANDARD' APPF-NULL_OBJECT-Null Object 'DXA_TopologySource::Application-SourceObjectFactory-AS-IANA-smarts-broker_hostname=10.62.72.138-STANDARD'
https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.2/rn/smart-assurance-1012-release-notes.html
2020-10-20T03:54:13
CC-MAIN-2020-45
1603107869785.9
[]
docs.vmware.com
Selecting the user accounts to move to the destination MDM domain You can select the user accounts that you want to migrate using any of the following methods: - Search for and select the user accounts that are associated with a specific MDM domain, BlackBerry Enterprise Server, group, or IT policy. - Specify the email addresses of each user account. - Import a CR-delimited, flat text file that contains the email addresses of the user accounts. - If you want to run the tool in bulk mode, select a source BlackBerry Enterprise Server; the tool will migrate all of the users associated with that BlackBerry Enterprise Server.
http://docs.blackberry.com/en/admin/deliverables/46352/Selecting_users_to_move_to_destn_BBConfigDB_555386_11.jsp
2014-11-21T10:42:51
CC-MAIN-2014-49
1416400372819.5
[]
docs.blackberry.com
This group is currently inactive. If you are interested in working on this issue, contact any PLT member. The Update and Migration Working Group met at J and Beyond 2012, Saturday May 19, 09:15-10:15 (GMT+1). A recap of the one-hour session includes: Action items include:
http://docs.joomla.org/index.php?title=Update_and_Migration_Working_Group&diff=104995&oldid=67312
2014-11-21T10:55:21
CC-MAIN-2014-49
1416400372819.5
[]
docs.joomla.org
5) Where any municipality employs an electronic voting system which utilizes automatic tabulating equipment, either at the polling place or at a central counting location, the municipal clerk shall, on any day not more than 10 days prior to the election day on which the equipment is to be utilized, have the equipment tested to ascertain that it will correctly count the votes cast for all offices and on all measures. Public notice of the time and place of the test shall be given by the clerk at least 48 hours prior to the test by publication of a class 1 notice under ch. 985 in one or more newspapers published within the municipality if a newspaper is published therein, otherwise in a newspaper of general circulation therein. The test shall be open to the public. The test shall be conducted by processing a preaudited group of ballots so punched or marked as to record a predetermined number of valid votes for each candidate and on each referendum. The test shall include for each office one or more ballots which have votes in excess of the number allowed by law and, for a partisan primary election, one or more ballots which have votes cast for candidates of more than one recognized political party, in order to test the ability of the automatic tabulating equipment to reject such votes. If any error is detected, the municipal clerk shall ascertain the cause and correct the error. The clerk shall make an errorless count before the automatic tabulating equipment is approved by the clerk for use in the election.
http://docs.legis.wisconsin.gov/2001/related/acts/16/29p
2014-11-21T10:33:01
CC-MAIN-2014-49
1416400372819.5
[]
docs.legis.wisconsin.gov
Caution: Buildbot no longer supports Python 2.7 on the Buildbot master. 2.5.14. GitLabStatusPush - class buildbot.reporters.gitlab.GitLabStatusPush(token, startDescription=None, endDescription=None, context=None, baseURL=None, generators=None, verbose=False) - Parameters: token (string) – Private token of user permitted to update status for commits (can be a Secret). startDescription (string) – Description used when build starts. This parameter is deprecated, use generators instead. endDescription (string) – Description used when build ends. This parameter is deprecated, use generators instead. context (string) – Name of your build system, eg.
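For readers who want to see where this reporter goes, the fragment below shows the usual way of registering it in master.cfg. It is a sketch only: the secret name, context label, and GitLab base URL are placeholder assumptions, and in a real master.cfg the c dictionary already exists and carries many other required keys.

# master.cfg fragment: push build status back to GitLab commits
from buildbot.plugins import reporters, util

c = BuildmasterConfig = {}  # already defined in a real master.cfg

c['services'] = [
    reporters.GitLabStatusPush(
        token=util.Secret("gitlab-token"),          # assumed secret name
        context="continuous-integration/buildbot",  # assumed status context label
        baseURL="https://gitlab.example.com",       # assumed GitLab instance URL
        verbose=False,
    ),
]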
https://docs.buildbot.net/2.10.1/manual/configuration/reporters/gitlab_status.html
2021-05-06T13:36:56
CC-MAIN-2021-21
1620243988753.97
[]
docs.buildbot.net
Transitioning the Sentry service to Apache Ranger Before transitioning your cluster to CDP Private Cloud Base, you must prepare the Apache Sentry authorization privileges so they can be converted to Apache Ranger permissions. Apache Ranger supports components such as HDFS, Hive, and YARN. Apache Ranger functions as a centralized security administrator and provides greater access controls and auditing capabilities. Perform the following steps after you have upgraded Cloudera Manager to version 7.1 or higher: - Export Sentry Permissions. In the Cloudera Manager Admin Console, go to the Sentry service and select. The authzmigrator tool creates the /user/sentry/export-permissions/permissions.json file in HDFS. This file contains the Sentry metadata required for Ranger to recreate the roles and permissions. - Make sure a MySQL, Oracle, or PostgreSQL database instance is running and available to be used by Ranger before you create a new cluster or upgrade your cluster from CDH to Cloudera Runtime. See the links below for procedures to set up these databases. - After you have set up the database, you can continue upgrading the cluster. After the upgrade, Sentry privileges are converted into Ranger service policies. For more information about how these privileges appear in Ranger, see Sentry to Ranger Permissions.
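As a quick sanity check after the export step, you can confirm that the exported file exists and is readable in HDFS. This verification command is a suggestion rather than part of the documented procedure; run it as a user with access to the /user/sentry path:

# Verify that the Sentry export produced the expected JSON file in HDFS
hdfs dfs -ls /user/sentry/export-permissions/
hdfs dfs -cat /user/sentry/export-permissions/permissions.json | head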
https://docs.cloudera.com/cdp-private-cloud/latest/upgrade-cdh/topics/cdpdc-sentry-pre-upgrade-migration-to-ranger.html
2021-05-06T14:23:04
CC-MAIN-2021-21
1620243988753.97
[]
docs.cloudera.com
In addition to the immediate money they brought in, coupons should be monitored over time too. How did they influence the customers who used them for their first-ever purchase from your store? Did these coupons bring in quality customers that stick with your brand? You can find out with Metrilo's cohort analysis. Coupon cohort analysis Go to the Retention Analysis tab and switch to Cohort view. Then, choose cohorts by Coupon codes (other options are months and products). Now, you're looking at customer cohorts segmented by the coupon they used with their first order. The number of people included in the cohort is just below its name. Compare their stats over time: revenue (total over time), share of returning customers, average order value, and average revenue per customer. Some things you can find with such an analysis (just a few suggestions): Even with a similar number of people in the beginning, one coupon can greatly outperform another if its retention rate is higher. Good coupons should bring a high average order value because they make people shop more, not just discount products. Holiday-themed coupons may not be the best way to win loyal customers. Understanding how coupons fit in the overall shopping experience of your customers, what works and what doesn't, will save you time and money when preparing your next coupon promo. Maybe you don't need to give out so many coupons and such big discounts. Or maybe a permanent free shipping coupon is all you need? You'll find out thanks to data.
https://docs.metrilo.com/en/articles/936911-evaluate-and-optimize-your-use-of-coupons-for-greater-returns
2021-05-06T13:19:38
CC-MAIN-2021-21
1620243988753.97
[]
docs.metrilo.com
Filtered association loading and re-creating an entity graph across a web service boundary... Roger Jennings, in his recent post Controlling the Depth and Order of EntitySets for 1:Many Associations, makes a case for the importance of two features in an O/RM if you want to build data-centric web services using it: the ability to do a filtered relationship navigation, and the ability to either serialize a graph or at least re-create the graph on the other side of a web service boundary. Well, to be fair, he probably is asking for more than these two features, but there are two key things which are possible with the entity framework today (as of beta 3). So I thought I'd take a few minutes to explain how. I’m going to take these two topics out of order, though, because there are some important concepts related to re-creating an entity graph which I hope will help clarify things when it comes to filtered association loading. Re-creating an entity graph across a web service boundary While this is by no means a simple problem, for an important sub-set of the overall scenarios it’s pretty easy to solve this with the entity framework today. The thing which makes this possible is the beta 3 feature of serializable EntityReferences. In my previous post about Entity Relationship Concepts, I gave some background about relationship “stub” entries in the object state manager, and these are effectively what gets serialized when an EntityReference is serialized. Stubs are very versatile because they automatically upgrade to real entities either if the real entity is already present when the system tries to load a stub into the state manager or if a stub is present and a real entity is loaded. When you add to this the fact that the framework ensures that the state manager and its entries are automatically kept in sync with the collections and references on the objects, the result is a pretty simple mechanism for breaking a graph into pieces and then cleanly reassembling the whole. Maybe a picture will help: So, if you want to remote an object graph across a web service and you do not have many-many relationships, then you can just remote each of the entities which participate in the object graph individually (maybe just as an array of objects) and then attach all of them to an ObjectContext on the other side at which point the EF will recreate the graph for you. This is, in fact, part of how my general purpose container sample (promised as an illustration of strategy #4 in this post but not quite yet ready for sharing) works. Filtered Association Loading While it might be nice if there were a way to pass a predicate to either the Load method on EntityCollection and EntityReference or to the Include method on ObjectQuery, it is possible to accomplish these things today. If you want to filter a relationship load, one way to do it is with EntityCollection’s Attach method. As a simple example, if I wanted to load into a customer’s orders collection all orders with date > January 1, 2007, then I could do something like this: customer.Orders.Attach(customer.Orders.CreateSourceQuery().Where(order => order.Date >= new DateTime(2007, 1, 1))); The CreateSourceQuery method returns an ObjectQuery<T> which will retrieve the set of entities that Load would retrieve, I then refine that query to filter to the subset of orders we really want and call Attach which tells the collection to incorporate the retrieved orders as though they were Loaded. 
Another interesting trick is the fact that object services automatically rewrites ObjectQueries (except those with merge option of NoTracking) to automatically bring along the EntityKey for EntityReferences and then creates the relationship entries and stubs as needed when the results of the query are attached to the context. What this means in practice is that the above operation could also have been accomplished without the attach call just by creating the filtered query and enumerating its results—as the results are enumerated, objects are materialized and attached to the state manager and (like in the diagram above) the orders bring along the keys of their corresponding customers and cause the graph to be fixed up to match. What this means for the include statement is that there are multiple ways to load a set of customers with a filtered set of related orders: 1) You could query the customers and as you iterate over each one you could use the trick above to query the filtered set of orders for that customer and load them in. This would, unfortunately, result in n+1 queries (where n is the number of customers). foreach(Customer c in db.Customers) { c.Orders.Attach(c.Orders.CreateSourceQuery().Where(o => o.Date >= new DateTime(2007, 1, 1))); // do stuff } 2) You could query the customers and then once you were done iterating over all of them you could just query for all orders since January 1, 2007. This would require only 2 queries, and it would recreate the graph for you automatically. The only downside is that if you filtered to a subset of all the customers in your first query, then you would have to apply a similar filter to your orders query or else you would retrieve orders for customers you weren’t interested in. var customers = new List<Customer>(db.Customers); var orders = new List<Order>(db.Orders.Where(o => o.Date >= new DateTime(2007, 1, 1))); foreach (Customer c in customers) { // do stuff } 3) You could create a projection query which retrieved customers in the first column and collections of orders for that customer in the second column and then attach the collection in the second column to the collection property on the customers. This would use only one query. foreach (var customerAndOrders in from c in db.Customers select new { Customer = c, Orders = (from o in c.Orders where o.Date >= new DateTime(2007, 1, 1) select o) }) { customerAndOrders.Customer.Orders.Attach(customerAndOrders.Orders); // do stuff } Well, it’s growing late in the afternoon on the Friday before Christmas, and my family is calling me home. So I’ll leave you with this. Here’s hoping that all of you have a wonderful holiday and new year! - Danny
https://docs.microsoft.com/en-us/archive/blogs/dsimmons/filtered-association-loading-and-re-creating-an-entity-graph-across-a-web-service-boundary
2021-05-06T13:54:25
CC-MAIN-2021-21
1620243988753.97
[array(['http://www.the-simmons.net/images/stubs.gif', 'Relationship Stubs Relationship Stubs'], dtype=object)]
docs.microsoft.com
Docker Registry v2 (including Docker Hub) 2. Scan images on Docker Registry v2 (including Docker Hub) Most vendors' registries comply with the Docker Registry version 2 API, including Docker Hub. To scan a Docker Registry v2 repository, create a new registry scan rule. For Docker Hub repositories: To specify an official Docker Hub repository, enter library/, followed by the short string used to designate the repo. For example, to scan the images in the official Alpine Linux repository, enter library/alpine. To specify non-official repositories, enter the user name or organization name, followed by a slash, followed by the name of the repo. For example, to specify the alpine repository in onescience’s account, enter onescience/alpine. To scan all repos from a user or organization, simply enter the user or organization name, followed by a wildcard (*). For example, to scan all repos created by onescience, enter onescience*. Prerequisites: You have installed a Defender somewhere in your environment. Open Console, and then go to Defend > Vulnerabilities > Registry. Click Add registry settings. In the dialog, enter the following information: In the Version drop-down list, select Docker Registry v2. Leave the Registry field blank. An empty field specifies Docker Hub (hub.docker.com). In Repository name, enter the name of the repo to scan. For example, enter library/alpine to scan the official Alpine image. If the repo is part of an organization, use the organization/repository format. For example, bitnami/nginx. In Credential, select the credentials to use. If you are scanning a public repository, leave this field blank. If you are scanning a private repository, and Console doesn’t have your credentials yet, click Add New. Select either Basic authentication or Certificate-based authentication, and fill out the rest of the fields. For certificate-based authentication, provide a client certificate with private key, and an optional CA certificate. The default value of 5 will scan the most recent 5.
https://docs.twistlock.com/docs/enterprise_edition/vulnerability_management/registry_scanning/scan_docker_registry_v2.html
2021-05-06T12:16:15
CC-MAIN-2021-21
1620243988753.97
[]
docs.twistlock.com
DELETE Statement (CDH 5.10 or higher only) Deletes an arbitrary number of rows from a Kudu table. This statement only works for Impala tables that use the Kudu storage engine. Syntax: DELETE [FROM] [database_name.]table_name [ WHERE where_conditions ] DELETE table_ref FROM [joined_table_refs] [ WHERE where_conditions ] The first form evaluates rows from one table against an optional WHERE clause, and deletes all the rows that match the WHERE conditions, or all rows if WHERE is omitted. The second form evaluates one or more join clauses, and deletes all matching rows from one of the tables. The join clauses can include non-Kudu tables, but the table from which the rows are deleted must be a Kudu table. The FROM keyword is required in this case, to separate the name of the table whose rows are being deleted from the table names of the join clauses. Usage notes: The conditions in the WHERE clause are the same ones allowed for the SELECT statement. See SELECT Statement for details. The conditions in the WHERE clause can refer to any combination of primary key columns or other columns. Referring to primary key columns in the WHERE clause is more efficient than referring to non-primary key columns. If the WHERE clause is omitted, all rows are removed from the table. Because Kudu currently does not enforce strong consistency during concurrent DML operations, be aware that the results after this statement finishes might be different than you intuitively expect: If some rows cannot be deleted because their primary key columns are not found, due to their being deleted by a concurrent DELETE operation, the statement succeeds but returns a warning. A DELETE statement might also overlap with INSERT, UPDATE, or UPSERT statements running concurrently on the same table. After the statement finishes, there might be more or fewer rows than expected in the table because it is undefined whether the DELETE applies to rows that are inserted or updated while the DELETE is in progress. The number of affected rows is reported in an impala-shell message and in the query profile. Statement type: DML Examples: The following examples show how to delete rows from a specified table, either all rows or rows that match a WHERE clause: -- Deletes all rows. The FROM keyword is optional. DELETE FROM kudu_table; DELETE kudu_table; -- Deletes 0, 1, or more rows. -- (If c1 is a single-column primary key, the statement could only -- delete 0 or 1 rows.) DELETE FROM kudu_table WHERE c1 = 100; -- Deletes all rows that match all the WHERE conditions. DELETE FROM kudu_table WHERE (c1 > c2 OR c3 IN ('hello','world')) AND c4 IS NOT NULL; DELETE FROM t1 WHERE (c1 IN (1,2,3) AND c2 > c3) OR c4 IS NOT NULL; DELETE FROM time_series WHERE year = 2016 AND month IN (11,12) AND day > 15; -- WHERE condition with a subquery. DELETE FROM t1 WHERE c5 IN (SELECT DISTINCT other_col FROM other_table); -- Does not delete any rows, because the WHERE condition is always false. DELETE FROM kudu_table WHERE 1 = 0; The following examples show how to delete rows that are part of the result set from a join: -- Remove _all_ rows from t1 that have a matching X value in t2. DELETE t1 FROM t1 JOIN t2 ON t1.x = t2.x; -- Remove _some_ rows from t1 that have a matching X value in t2. DELETE t1 FROM t1 JOIN t2 ON t1.x = t2.x WHERE t1.y = FALSE and t2.z > 100; -- Delete from a Kudu table based on a join with a non-Kudu table. 
DELETE t1 FROM kudu_table t1 JOIN non_kudu_table t2 ON t1.x = t2.x; -- The tables can be joined in any order as long as the Kudu table -- is specified as the deletion target. DELETE t2 FROM non_kudu_table t1 JOIN kudu_table t2 ON t1.x = t2.x; Related information: Using Impala to Query Kudu Tables, INSERT Statement, UPDATE Statement (CDH 5.10 or higher only), UPSERT Statement (CDH 5.10 or higher only)
https://docs.cloudera.com/documentation/enterprise/5-10-x/topics/impala_delete.html
2021-05-06T14:26:34
CC-MAIN-2021-21
1620243988753.97
[]
docs.cloudera.com
Fedora CoreOS Frequently Asked Questions If you have other questions than are mentioned here or want to discuss further, join us in our IRC channel, irc://irc.freenode.org/#fedora-coreos, or on our new discussion board. Please refer back here as some questions and answers will likely get updated. What is Fedora CoreOS?. How does Fedora CoreOS relate to Red Hat CoreOS? Fedora CoreOS is a freely available, community distribution that is the upstream basis for Red Hat CoreOS. While Fedora CoreOS embraces a variety of containerized use cases, Red Hat CoreOS provides a focused OS for OpenShift, released and life-cycled in tandem with the platform. Does Fedora CoreOS replace Container Linux? What happens to CL? Fedora CoreOS is the official successor to CoreOS Container Linux, but it is not a drop-in replacement. CoreOS Container Linux will reach its end of life on May 26, 2020, and will no longer receive updates after that date. For notes on migrating from Container Linux to Fedora CoreOS, see the migration guide. Does Fedora CoreOS replace Fedora Atomic Host? What happens to Fedora Atomic Host and CentOS Atomic Host? Fedora CoreOS is the official successor to Fedora Atomic Host. The last Fedora Atomic Host release was version 29, which has now reached end-of-life. CentOS Atomic Host will continue producing downstream rebuilds of RHEL Atomic Host and will align with the end-of-life. The Fedora CoreOS project will be the consolidation point for the community distributions. Users are encouraged to move there in the future. What happens to Project Atomic? Project Atomic is an umbrella project consisting of two flavors of Atomic Host (Fedora and CentOS) as well as various other container-related projects. Project Atomic as a project name will be sunset by the end of 2018 with a stronger individual focus on its successful projects such as Buildah and Cockpit. This merges the community side of the operating system more effectively with Fedora and allows for a clearer communication for other community-supported projects, specifically the well-adopted #nobigfatdaemons approach of Buildah and the versatile GUI server manager Cockpit. What are the communication channels around Fedora CoreOS? We have the following new communication channels around Fedora CoreOS: mailing list: [email protected] #fedora-coreos on IRC Freenode forum at website at Twitter at @fedora (all Fedora and other relevant news) There is a community meeting that happens every week. See the Fedora CoreOS fedocal for the most up-to-date information. If you think you have found a problem with Fedora CoreOS, file an issue in our issue tracker. Technical FAQ Where can I download Fedora CoreOS? Fedora CoreOS artifacts are available at getfedora.org. Does Fedora CoreOS embrace the Container Linux Update Philosophy? Yes, Fedora CoreOS comes with automatic updates and regular releases. Multiple update channels are provided catering to different users' needs. It introduces a new node-update service based on rpm-ostree technologies, with a server component that can be optionally self-hosted. Failures that prevent an update from booting will automatically be reverted. How are Fedora CoreOS nodes provisioned? Can I re-use existing cloud-init configurations? Fedora CoreOS is provisioned with Ignition. However, existing Ignition configurations will require changes, as the OS configuration will be different from Container Linux. Existing cloud-init configurations are not supported and will need to be migrated into their Ignition equivalent. 
What data persists across upgrades and reboots? The directories /etc and /var are mounted as read-write which lets users write and modify files. The directory /etc may be changed by deployments, but will not override user made changes. The content under /var is left untouched by rpm-ostree when applying upgrades or rollbacks. For more information, refer to the Mounted Filesystems section. How do I migrate from Container Linux to Fedora CoreOS? Migrations can be accomplished by re-provisioning the machine with Fedora CoreOS. For notes on migrating from Container Linux to Fedora CoreOS, see the migration guide. How do I migrate from Fedora Atomic Host to Fedora CoreOS? As with Container Linux, the best practice is to re-provision the node, due to the cloud-init/Ignition transition at least. Since Fedora CoreOS uses rpm-ostree technology, it may be possible to rebase from Fedora Atomic Host to Fedora CoreOS, but it is not recommended. It’s preferable to gain experience deploying systems using Ignition so that they can be re-provisioned easily if needed. For notes on migrating from Atomic Host to Fedora CoreOS, see the migration guide. Which container runtimes are available on Fedora CoreOS? Fedora CoreOS includes Docker and podman by default. Based on community engagement and support this list could change over time. Which platforms does Fedora CoreOS support? Fedora CoreOS runs on at least Alibaba Cloud, AWS, Azure, GCP, OpenStack, QEMU, VMware, and bare-metal systems if installed to disk or network-booted. Can I run Kubernetes on Fedora CoreOS? Yes. However, we envision Fedora CoreOS as not including a specific container orchestrator (or version of Kubernetes) by default — just like Container Linux and Atomic Host. We will work with the upstream Kubernetes community on tools (e.g. kubeadm) and best practices for installing Kubernetes on Fedora CoreOS. How do I run custom applications on Fedora CoreOS? On Fedora CoreOS, containers are the way to install and configure any software not provided by the base operating system. The package layering mechanism provided by rpm-ostree will continue to exist for use in debugging a Fedora CoreOS machine, but we strongly discourage its use. For more about this, please refer to documentation. Where is my preferred tool for troubleshooting? The FCOS image is kept minimal by design. Not every troubleshooting tool are included by default. Instead, it is recommended to use the toolbox utility. How do I coordinate cluster-wide OS updates? Is locksmith or the Container Linux Update Operator available for Fedora CoreOS? The etcd-lock feature from locksmith has been directly ported to Zincati, as a lock-based updates strategy. It has also been augmented to support multiple backends, not being anymore constrained to etcd2 only. The capabilities of Container Linux Update Operator (CLUO) have been embedded into the Machine Config Operator (MCO), which is a core component of OKD. The MCO additionally covers reconciliation of machine configuration changes. How do I upload Fedora CoreOS to private AWS EC2 regions? Fedora CoreOS today is only uploaded to the standard AWS regions. For regions in other AWS partitions like GovCloud and AWS China, you must upload the images yourself. Note that Fedora CoreOS uses a unified BIOS/UEFI partition layout. As such, it is not compatible with the aws ec2 import-image API (for more information, see related discussions). Instead, you must use aws ec2 import-snapshot combined with aws ec2 register-image. 
To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs. Can I run containers via docker and podman at the same time? No. Running containers via docker and podman at the same time can cause issues and unexpected behavior. We highly recommend against trying to use them both at the same time. It is worth noting that in Fedora CoreOS we have docker.service disabled by default but it is easily started if anything communicates with the /var/run/docker.sock because docker.socket is enabled by default. This means that if a user runs any docker command (via sudo docker) then the daemon will be activated. We did this to make the transition easier for users of Container Linux. In coreos/fedora-coreos-tracker#408 it was pointed out that because of socket activation users who are using podman for containers could unintentionally start the docker daemon. This could weaken the security of the system because of the interaction of both container runtimes with the firewall on the system. To prevent making this mistake you can disable docker completely by masking the docker.service systemd unit. variant: fcos version: 1.3.0 systemd: units: - name: docker.service mask: true Are Fedora CoreOS x86_64 disk images hybrid BIOS+UEFI bootable? The x86_64 images we provide can be used for either BIOS (legacy) boot or UEFI boot. They contain a hybrid BIOS/UEFI partition setup that allows them to be used for either. The exception to that is the metal4k 4k native image, which is targeted at disks with 4k sectors and does not have a BIOS boot partition because 4k native disks are only supported with UEFI. What’s the difference between Ignition and Butane configurations? Ignition configuration is a low-level interface used to define the whole set of customizations for an instance. It is primarily meant as a machine-friendly interface, with content encoded as JSON and a fixed structure defined via JSON Schema. This JSON configuration is processed by each FCOS instance upon first boot. Many high-level tools exist that can produce an Ignition configuration starting from their own specific input formats, such as terraform, matchbox, openshift-installer, and Butane. Butane is one such high-level tool. It is primarily meant as a human-friendly interface, thus defining its own richer configuration entries and using YAML documents as input. This YAML configuration is never directly processed by FCOS instances (only the resulting Ignition configuration is). Although similar, Ignition configurations and Butane ones do not have the same structure; thus, converting between them is not just a direct YAML-to-JSON translation, but it involves additional logic. Butane exposes several customization helpers (e.g. distribution specific entries and common abstractions) that are not present in Ignition and make the formats not interchangeable. Additionally, the different formats (YAML for Butane, JSON for Ignition) help to avoid mixing up inputs by mistake. What is the format of the version number? This is covered in detail in the design docs. The summary is that Fedora CoreOS uses the format X.Y.Z.A, where: X is the Fedora major version (i.e. 32), Y is the datestamp that the package set was snapshotted from Fedora (i.e. 20200715), Z is a code number used by official builds (1 for the next stream, 2 for the testing stream, 3 for the stable stream), and A is a revision number that is incremented for each new build with the same X.Y.Z parameters. The version numbering scheme is subject to change and is not intended to be parsed by machine. Why is the dnsmasq.service systemd unit masked? We have found that the dnsmasq binary can be used for several host applications, including podman and NetworkManager. For this reason we include the dnsmasq package in the base OSTree layer, but we discourage the use of the dnsmasq.service in the host by masking it with systemctl mask dnsmasq.service. "Why do you mask the service?" dnsmasq is useful for running a DHCP/DNS/TFTP server for external clients (i.e. not local to the host), too, but that is something we’d prefer users to do in a container. Putting the service in a container insulates the hosted service from breakage as a result of host layer changes. For example, if NetworkManager and podman stopped using dnsmasq, we would remove it from the host and the service you depend on would cease to work. "But, I really want to use it!" We don’t recommend it, but if you really want to use it you can just unmask and enable it: variant: fcos version: 1.3.0 systemd: units: - name: dnsmasq.service mask: false enabled: true For more information see the tracker issue discussion. Why does SSH stop working after upgrading to Fedora 33? In Fedora 33 there was a change to implement stronger crypto defaults. Part of this included taking the advice of OpenSSH upstream and disabling the use of the ssh-rsa public key signature algorithm. You may hit issues if you use RSA keys and: use an old version of the SSH client, or use tooling/software libraries that don’t support using RSA SHA2 public key signatures. For example, Go has an open issue to solve this problem in its SSH implementation, but has yet to resolve it. This has been hit and worked around by the FCOS community in our build tooling and also our higher level projects: If you run into this problem and need to work around the issue, you have a few options: Switch to a newer non-RSA key type. Provide a configuration to your machine that re-enables the insecure key signatures: variant: fcos version: 1.3.0 storage: files: - path: /etc/ssh/sshd_config.d/10-insecure-rsa-keysig.conf mode: 0600 contents: inline: | PubkeyAcceptedKeyTypes=+ssh-rsa Why do I get SELinux denials after updates if I have local policy modifications? Currently the OSTree and SELinux tooling conflict a bit. If you have permanently applied local policy modifications then policy updates delivered by the OS will no longer apply; your policy stays frozen. This means any policy "fixes" needed to enable new functionality will not get applied. See coreos/fedora-coreos-tracker#701 for more details. This means you may see denials like the following, which can take down critical parts of a system like in coreos/fedora-coreos-tracker#700: systemd-resolved[755]: Failed to symlink /run/systemd/resolve/stub-resolv.conf: Permission denied audit[755]: AVC avc: denied { create } for pid=755 comm="systemd-resolve" name=".#stub-resolv.confc418434d59d7d93a" scontext=system_u:system_r:systemd_resolved_t:s0 tcontext=system_u:object_r:systemd_resolved_var_run_t:s0 tclass=lnk_file permissive=0 To see if your system currently has local policy modifications you can run ostree admin config-diff. 
The following system has a modified policy: $ sudo ostree admin config-diff | grep selinux/targeted/policy M selinux/targeted/policy/policy.32 To work around this incompatibility, please attempt to apply policy modifications dynamically. For example, for an SELinux boolean you can use the following systemd unit that executes on every boot: variant: fcos version: 1.3.0 systemd: units: - name: setsebool.service enabled: true contents: | [Service] Type=oneshot ExecStart=setsebool container_manage_cgroup true RemainAfterExit=yes [Install] WantedBy=multi-user.target If your system’s basic functionality has stopped working because of SELinux denials check to see if your system currently has local policy modifications. You can check with ostree admin config-diff: $ sudo ostree admin config-diff | grep selinux/targeted/policy M selinux/targeted/policy/policy.32 If your system is in this state you have two options: Re-deploy starting with the latest image artifacts. This means you start with the latest policy. Follow the workaround in coreos/fedora-coreos-tracker#701 to restore the base policy. Why is the systemd-repart.service systemd unit masked? system-repart is a tool to grow and add partitions to a partition table. On Fedora CoreOS, we only support using Ignition to create partitions, filesystems and mount points, thus systemd-repart is masked by default. Ignition runs on first boot in the initramfs and is aware of Fedora CoreOS specific disk layout. It is also capable of reconfiguring the root filesystem (from xfs to ext4 for example), setting up LUKS, etc… See the Configuring Storage page for examples. See the Why is the dnsmasq.service systemd unit masked entry for an example config to unmask this unit.
https://docs.fedoraproject.org/es/fedora-coreos/faq/
2021-05-06T13:07:12
CC-MAIN-2021-21
1620243988753.97
[]
docs.fedoraproject.org
Image Pyramid Plugin There is a series of interesting raster “formats” that make use of a bunch of raster files and present the result as a single image. This format uses a magic directory structure combined with a property file describing what goes where. Reference - - Maven: <dependency> <groupId>org.geotools</groupId> <artifactId>gt-imagepyramid</artifactId> <version>${geotools.version}</version> </dependency> Example On disk an image pyramid is going to look a bit like the following (you can use any format for the tiles, from MrSID to TIFF): directory/ directory/pyramid.properties directory/0/mosaic metadata files directory/0/mosaic_file_0.tiff directory/0/... directory/0/mosaic_file_n.tiff directory/... directory/0/32/mosaic metadata files directory/0/32/mosaic_file_0.tiff directory/0/32/... directory/0/32/mosaic_file_n.tiff The format of that pyramid.properties file is magic; while we can look at the javadocs (and the following example), you are going to have to read the source code on this one: # Pyramid Description # # Name of the coverage Name=ikonos #different resolution levels available Levels=1.2218682749859724E-5,9.220132503102996E-6 2.4428817977683634E-5,1.844026500620314E-5 4.8840552865873626E-5,3.686350299024973E-5 9.781791400307775E-5,7.372700598049946E-5 1.956358280061555E-4,1.4786360643866836E-4 3.901787184256844E-4,2.9572721287731037E-4 #where all the levels reside LevelsDirs=0 2 4 8 16 32 #number of levels available LevelsNum=6 #envelope for this pyramid Envelope2D=13.398228477973406,43.591366397808976 13.537912459169803,43.67121274528585
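As a sketch of how the plugin is typically driven from code, the snippet below opens a pyramid directory and reads it as a single coverage. The directory path is a placeholder, and the usage follows the standard GeoTools grid coverage reader pattern rather than anything taken from this page, so treat it as an assumption to verify against the javadocs.

// Minimal sketch: read an image pyramid directory as a single GridCoverage2D
import java.io.File;
import org.geotools.coverage.grid.GridCoverage2D;
import org.geotools.gce.imagepyramid.ImagePyramidReader;

public class PyramidExample {
    public static void main(String[] args) throws Exception {
        File pyramidDir = new File("/data/pyramid");      // placeholder path to the pyramid directory
        ImagePyramidReader reader = new ImagePyramidReader(pyramidDir);
        GridCoverage2D coverage = reader.read(null);      // null = default read parameters
        System.out.println("Envelope: " + coverage.getEnvelope());
        reader.dispose();
    }
}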
https://docs.geotools.org/maintenance/userguide/library/coverage/pyramid.html
2021-05-06T12:41:39
CC-MAIN-2021-21
1620243988753.97
[]
docs.geotools.org
https://docs.jboss.org/resteasy/docs/4.5.2.Final/userguide/html/Securing_JAX-RS_and_RESTeasy.html
2021-05-06T13:17:16
CC-MAIN-2021-21
1620243988753.97
[]
docs.jboss.org
Resync the KV store When a KV store member fails to transform its data with all of the write operations, then the KV store member might be stale. To resolve this issue, you must resynchronize the member. Before downgrading Splunk Enterprise to version 7.1 or earlier, you must use the REST API to resynchronize the KV store. Identify a stale KV store member You can check the status of the KV store using the command line. - Log into the shell of any KV store member. - Navigate to the binsubdirectory in the Splunk Enterprise installation directory. - Type ./splunk show kvstore-status. The command line returns a summary of the KV store member you are logged into, as well as information about every other member in the KV store cluster. - Look at the replicationStatusfield and identify any members that have neither "KV store captain" nor "Non-captain KV store member" as values. Resync stale KV store members If more than half of the members are stale, you can either recreate the cluster or resync it from one of the members. See Back up KV store for details about restoring from backup. To resync the cluster from one of the members, use the following procedure. This procedure triggers the recreation of the KV store cluster, when all of the members of current existing KV store cluster resynchronize all data from the current member (or from the member specified in -source sourceId). The command to resync the KV store cluster can be invoked only from the node that is operating as search head cluster captain. - Determine which node is currently the search head cluster captain. Use the CLI command splunk show shcluster-status. - Log into the shell on the search head cluster captain node. - Run the command splunk resync kvstore [-source sourceId]. The source is an optional parameter, if you want to use a member other than the search head cluster captain as the source. SourceIdrefers to the GUID of the search head member that you want to use. - Enter your admin login credentials. - Wait for a confirmation message on the command line. - Use the splunk show kvstore-statuscommand to verify that the cluster is resynced. If fewer than half of the members are stale, resync each member individually. - Stop the search head that has the stale KV store member. - Run the command splunk clean kvstore --local. - Restart the search head. This triggers the initial synchronization from other KV store members. - Run the command splunk show kvstore-statusto verify synchronization. Prevent stale members by increasing operations log size If you find yourself resyncing KV store frequently because KV store members are transitioning to stale mode frequently (daily or maybe even hourly), this means that apps or users are writing a lot of data to the KV store and the operations log is too small. Increasing the size of the operations log (or oplog) might help. After initial synchronization, noncaptain KV store members no longer access the captain collection. Instead, new entries in the KV store collection are inserted in the operations log. The members replicate the newly inserted data from there. When the operations log reaches its allocation (1 GB by default), it overwrites the beginning of the oplog. Consider a lookup that is close to the size of the allocation. The KV store rolls the data (and overwrites starting from the beginning of the oplog) only after the majority of the members have accessed it, for example, three out of five members in a KV store cluster. 
But once that happens, it rolls, so a minority member (one of the two remaining members in this example) cannot access the beginning of the oplog. Then that minority member becomes stale and needs to be resynced, which means reading from the entire collection (which is likely much larger than the operations log). To decide whether to increase the operations log size, visit the Monitoring Console KV store: Instance dashboard or use the command line as follows: - Determine which search head cluster member is currently the captain by running splunk show shcluster-status from any cluster member. - On the captain, run splunk show kvstore-status. - Compare the oplog start and end timestamps. The start is the oldest change, and the end is the newest one. If the difference is on the order of a minute, you should probably increase the operations log size. While keeping your operations log too small has obvious negative effects (like members becoming stale), setting an oplog size much larger than your needs might not be ideal either. The KV store takes the full log size that you allocate right away, regardless of how much data is actually being written to the log. Reading the oplog can take a fair bit of RAM, too, although it is loosely bound. Work with Splunk Support to determine an appropriate operations log size for your KV store use. The operations log is 1 GB by default. To increase the log size: - Determine which search head cluster member is currently the captain by running splunk show shcluster-status from any cluster member. - On the captain, edit the server.conf file, located in $SPLUNK_HOME/etc/system/local/. Increase the oplogSize setting in the [kvstore] stanza. The default value is 1000 (in units of MB). - Restart the captain. - For each of the other cluster members: - Stop the member. - Run splunk clean kvstore --local. - Restart the member. This triggers the initial synchronization from other KV store members. - Run splunk show kvstore-status to verify synchronization. Downgrading Splunk Enterprise Before downgrading Splunk Enterprise to version 7.1 or earlier, resync the KV store with the following command: curl -u username:password -XPOST https://<splunk>:8089/services/kvstore/resync/resync?featureCompatibilityVersion=3.4 If you use this command and then restart Splunk before downgrading, run this command again before downgrading.
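For reference, the oplogSize change described in the steps above ends up as a plain stanza in server.conf. The snippet below is only an illustration; the 3000 MB value is an arbitrary example, not a sizing recommendation.

[kvstore]
# Size of the KV store operations log in MB (default is 1000)
oplogSize = 3000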
https://docs.splunk.com/Documentation/Splunk/7.3.0/Admin/ResyncKVstore
2021-05-06T13:53:50
CC-MAIN-2021-21
1620243988753.97
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
RiotKit-Do (RKD) usage and development manual RKD is a stable, open-source, multi-purpose automation tool which balances flexibility with simplicity. The primary language is Python and YAML syntax. RiotKit-Do can be compared to Gradle and to GNU Make, by allowing both Python and Makefile-like YAML syntax. What can I do with RKD? - Simplify the scripts - Put your Python and Bash scripts inside a YAML file (like in GNU Makefile) - Do not reinvent the wheel (argument parsing, logs, error handling for example) - Share the code across projects and organizations, use native Python Packaging to share tasks (like in Gradle) - Natively integrate scripts with .env files - Automatically generate documentation for your scripts - Maintain your scripts in a good standard RKD can be used on PRODUCTION, for development, for testing, to replace some of Bash scripts inside docker containers, and for many more, where Makefile was used. Install RKD RiotKit-Do is delivered as a Python package that can be installed system-wide or in a virtual environment. The virtual environment installation is similar in concept to the Gradle wrapper (gradlew) # 1) via PIP pip install rkd # 2) Create project (will create a virtual env and commit files to GIT) rkd :rkd:create-structure --commit Getting started in freshly created structure The “Quick start” section ends Create your first task with Beginners guide - on YAML syntax example Check how to use commandline to run tasks in RKD with Commandline basics See how to import existing tasks to your Makefile with Importing tasks page Keep learning - YAML syntax is described also in Tasks development section - Writing Python code in makefile.yaml requires looking up the Tasks API - Learn how to import installed tasks via pip - Importing tasks - You can also write tasks code in pure Python and redistribute those tasks via Python’s PIP - see Tasks development - With RKD you can create interactive installers - check the Creating installer wizards with RKD section Contents: - Beginners guide - on YAML syntax example - Extended usage - Makefile in Python syntax - Commandline basics - Importing tasks - Detailed usage - Importing tasks - Troubleshooting - Tasks development - Global environment variables - Custom distribution - Tasks API - Working with YAML files - Creating installer wizards with RKD - Good practices - Process isolation and permissions changing with sudo - Docker entrypoints under control - Built-in tasks
https://riotkit-do.readthedocs.io/en/v2.1.3/
2021-05-06T13:20:49
CC-MAIN-2021-21
1620243988753.97
[array(['_images/syntax.png', '_images/syntax.png'], dtype=object)]
riotkit-do.readthedocs.io
Copy the transitioned configuration to the upgrade metadata directory After you have successfully migrated the Solr configuration and verified that the updated configuration works in Cloudera Runtime 7.1.1 or higher, you have to copy it to the upgrade metadata directory and change its ownership to the Solr service superuser. In this example, the upgrade metadata directory is /cr7-solr-metadata/migrated-config. - Copy the upgraded configuration to the upgrade metadata directory: sudo mkdir -p /cr7-solr-metadata/migrated-config sudo cp -r $HOME/cr7-migrated-solr-config/* /cr7-solr-metadata/migrated-config - Change ownership to the Solr service superuser (solr in this example): sudo chown -R solr:solr /cr7-solr-metadata - If you have made any changes to the configuration after testing on a Cloudera Runtime 7.1.1 or higher test cluster, you must copy the updated configuration from the Cloudera Runtime 7.1.1 or higher test cluster to the CDH 5 cluster you are upgrading. After copying the upgraded configuration to the upgrade metadata directory, you are ready to upgrade to Cloudera Runtime 7.1.1 or higher following the regular process as documented in Upgrading a Cluster. After the upgrade is complete, continue to Cloudera Search post-upgrade tasks.
https://docs.cloudera.com/cdp-private-cloud/latest/upgrade-cdh/topics/search-upgrade-copy-migrated-config-to-metadata-dir.html
2021-05-06T12:05:57
CC-MAIN-2021-21
1620243988753.97
[]
docs.cloudera.com
Purpose The values task is used to get the value of keys in an object. It returns an array whose elements are the enumerable property values found in the given object. Potential Use Case The values task could be used to create an array that you may want to use in a loop at a later time. This task would most likely be used along with other tasks in the workflow of an automation. Properties Input and output parameters are shown below. Examples Example 1 In this IAP example the obj keys are "first" and "last" and the values for each key are "myFirstName" and "myLastName", respectively. The expected values that return are shown below in an array. Example 2 In this example, the obj key is now "name" and the values for the object key are "first": "myFirstName", "last": "myLastName", respectively. The values that return are shown in the following array.
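Because the screenshots referenced above do not render in text form, a tiny script can illustrate the same behavior. This is not IAP code; it is only a plain JavaScript sketch of what the values task returns for the two example objects, using the names from the examples above.

// Example 1: flat object -> array of its enumerable property values
const obj1 = { first: "myFirstName", last: "myLastName" };
console.log(Object.values(obj1));   // ["myFirstName", "myLastName"]

// Example 2: nested object under a single "name" key
const obj2 = { name: { first: "myFirstName", last: "myLastName" } };
console.log(Object.values(obj2));   // [{ first: "myFirstName", last: "myLastName" }]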
https://docs.itential.io/user/Itential%20Automation%20Platform/Automation%20Studio/Automations/Task%20References/values/
2021-05-06T12:22:55
CC-MAIN-2021-21
1620243988753.97
[array(['image/valuestask-ex01.png', 'valueskeyinput'], dtype=object) array(['image/valuestask-ex02.png', 'valueskeyresult'], dtype=object) array(['image/valuestask-ex03.png', 'valueskeyinputalt'], dtype=object) array(['image/valuestask-ex04.png', 'valueskeyresultalt'], dtype=object)]
docs.itential.io
The Endpoint Agent is custom-built for each account. It contains a built-in ThousandEyes account key, allowing performance data collected by the agent to be routed to the correct account group in the ThousandEyes platform. Ensure you are in the correct account group/account before proceeding. Access to the Endpoint Agent installer from the ThousandEyes application requires a ThousandEyes user account with the Download Endpoint Agents permission for the account group (for example, the built-in Account Admin or Organization Admin roles). To download the Endpoint Agent: In the ThousandEyes web application, navigate to Endpoint Agents -> Agent Settings. Select the relevant Endpoint Agent radio button: Endpoint Agent Endpoint Agent Pulse Note: For more information about Endpoint Agent vs Endpoint Agent Pulse, see Endpoint Agent. Optional: Toggle the Allow anyone with the link to download switch to generate shareable download links for Endpoint Agent users that do not have/need access to the ThousandEyes web application. Note: If this link gets compromised or requires refreshing, resetting it will render any previous links issued invalid. Download the Endpoint Agent via either the installation file or generated download link: a. Click the download button for the relevant operating system to download the Endpoint Agent installation file. For Windows Users: After clicking the download button, a drop-down menu will open. Select the relevant architecture to download the installation file: x64 Windows MSI x86 Windows MSI Note: After clicking the download button, a pop-up notification may appear (depending on the local security settings). Click Keep to continue the download process. For Mac OS Users: After clicking the download button, a pop-up notification may appear (depending on the local security settings). Click Keep to continue the download process. b. Click the generated download link to download the Endpoint Agent installation file. Once the Endpoint Agent installation file has been downloaded, see Install the Endpoint Agent.
https://docs.thousandeyes.com/product-documentation/global-vantage-points/endpoint-agents/installing/download-the-installer
2021-05-06T12:01:03
CC-MAIN-2021-21
1620243988753.97
[]
docs.thousandeyes.com
1. INTRODUCTION Product FAQs are convenient to read as they appear in a pull-out manner. It’s possible that one or more FAQs can be applied to multiple Categories. You can add these common product FAQs to multiple Categories by simply assigning them to the Category. With the Rootways Product FAQ extension, you can manage the Product FAQ page within a few clicks from your Magento admin. No technical knowledge required! Flexibly set up the Products FAQ functionality for your products. You don’t need to worry about your site speed. The Rootways Products FAQ extension doesn’t affect your site speed. Let’s take a look at its features below: - Allows your customers to quickly get an overview of the product details by going through the Product FAQs. - Facility to customize the text of the answer to the FAQs, as it allows adding HTML tags in the answer text box. - Choice to add one or more questions on the Product detail page as well as the Category page. - Admin can delete more than one FAQ at a time from the admin panel. - Easy to install and configure. - Improves user experience. - Reliable and prompt support by Rootways to help you solve any difficulties in using Product FAQ. 2. HOW TO USE AND CONFIGURE This section will show you how to configure the Rootways Product FAQs extension. It’s very easy and fast! 2.1 General Settings: To manage the Products FAQ extension from Configuration Settings, do as follows. Log in to the Admin Panel and then click System -> Configuration -> Rootways (Left sidebar) -> Products FAQ. Or you can click on Products FAQ -> Configuration from the navigation pane. Below is a screenshot of the Products FAQ settings and a detailed description of each setting. - Enable: Enable/Disable Products FAQ. Products FAQ Management: This section will show you how to add a Product FAQ from the Magento Admin Panel. Go to Admin -> Products FAQ -> Manage Products FAQ. To add a Products FAQ: – Click on ‘Add Question’. You will be directed to its General settings. – Now enter the Question and Answer for any product. – The Answer text box provides customization of text by allowing HTML tags. – Also set the sort order of the FAQ in the Sort Order parameter. – You can set the status of the question by selecting ‘Enabled’ or ‘Disabled’ from the Status parameter. – To add the Q&A to any products, click on ‘Sharing Products’ under General settings from the left sidebar. – Now enter the SKU of the product on which you want this Q&A to be displayed. Click the checkbox of the product name. You can select multiple products too. – To add the same Q&A to multiple categories, simply click on ‘Sharing Categories’ from the left sidebar. – Now select all the Categories in which you want this Q&A to be displayed. After filling in all required information for the Products FAQ parameters, click on ‘Save’ or ‘Save And Continue Edit’. Now check the front-end of your website, reload it, and you’ll be able to see the FAQ on the product details page that you selected from admin. These FAQs are shown in a very user-friendly pull-out manner, which also saves space and allows more FAQs to be added. FRONT END After successfully installing and setting the parameters for the Products FAQ extension, and after reloading the front-end of your website, you’ll be able to view FAQs for products. For any query regarding product details, users can simply click on the Question and the Answer will be displayed in the format you have added from the admin panel. 
To stop seeing the answer, click on the question again and the answer will slide up. Rootways Products FAQ allows HTML tags in the answer, so users can see customized answer text with different font colors, sizes, etc. To temporarily remove a question from the products FAQ list, the admin can set its status to ‘Disabled’ and the question will stop displaying on the front end of your website. RESPONSIVENESS: Rootways Products FAQ is fully responsive and flexible. Our fully responsive Products FAQ extension means that your website will have the same user experience on any device, regardless of screen size. That’s how easy it is to use the Product FAQ extension by Rootways. Please contact us for any queries regarding Magento and custom web development of e-commerce websites. Our Shop: Phone: 1-855-766-8929 Our team is working on a newer version of the Rootways Product FAQ extension with an extraordinary feature that you’ve never seen in any extension!
https://docs.rootways.com/product-faq-extension/
2021-05-06T13:02:38
CC-MAIN-2021-21
1620243988753.97
[]
docs.rootways.com
ThoughtSpot version 4.5.1 is now available. For a complete list of issues that we fixed in recent releases, see Fixed issues. Supported Upgrade Paths If you are running the following version, you can upgrade to the 4.5.1.x release directly: - 4.4.1.3 to 4.5.1.x If you are running 4.4.1.2 or earlier, you must do a multiple pass upgrade: - Upgrade your cluster to 4.4.1.3. - Upgrade from 4.4.1.3 to 4.5.1.x. 4.5.1 New Features and Functionality These are the new and enhanced features in this release. For a complete list of issues that we fixed in this release, see 4.5.1 Fixed issues. Admin chart color palettes auto update based on primary colors In the Admin Style Customization "Chart Color Palettes", secondary color gradients are now based off of the primary colors selected. When a different primary color is chosen, the associated secondary color gradients below it automatically update. Custom color palettes are reflected in users' chart color picker Color palette changes made by the Admin in Style Customization are now reflected in users' chart configuration color palettes (not just in the auto-generated chart colors, as in previous releases). Admins can enable or disable auto color rotation in Style Customization Chart Color Palettes When there is a single color on a chart (no legend), ThoughtSpot auto-rotates through primary colors to render the chart for visual variety. If an Admin does not want this behavior, they can disable it by choosing Disable Color Rotation on the "Chart Color Palettes", in which case ThoughtSpot will always use the default color on single-color charts. 4.5 New Features and Functionality These are the new and enhanced features in this release. For a complete list of issues that we fixed in this release, see 4.5 Fixed issues. UI Reports query cancellation Beginning in this release, ThoughtSpot reports queries which exceed system resource limits: Query cancelled due to memory limits being exceeded (OOM). Data type information available on hover The data type information is now available when a user hovers over a column in the left panel. Improved session security New improvements in security reduce the amount of information made available by the UI during a user session. Improved memory management logs This release includes improvements to how the system logs memory situations. The logs now record when a situation begins and ends, plus information about which request triggered the situation. The system also now keeps a tally of how many distinct clients experienced a rejection. Improvements in upgrade This release includes significant improvements in the performance of upgrades, particularly for installations with large objects. Multiple date/time formatted columns in data import Your imported data can now include columns with different date/time formats. New commands to install R packages This release includes the `tscli rpackage` command. This command allows users to manage R packages for use with SpotIQ. Setting user feedback email Users in ThoughtSpot may be asked for feedback for new or BETA features in the system. By default, feedback goes directly to ThoughtSpot support. Alternatively, you can send feedback to someone in your company. See the tscli commands reference for details. SpotIQ profile preferences In this release, you can configure your SpotIQ preferences in your user profile. These preferences control notifications and allow you to exclude nulls or zero measures from your analysis. 
Flying our new colors In this release, we are changing our primary navigation bar from black to light gray. Screen captures in the documentation may show the older color scheme. Expect them to update over all. Expand RLS configuration to include all underlying tables. Higher bulk filter limit Users can now have up to 10K values in a bulk filter. Additionally, bulk filtering no longer requires validation of filtered values. Values in the bulk filter that do not exist in the data are allowed in the filter. This allows a filter to anticipate data that may be present in the future. New home page This release includes updates to the application home page. It now contains several new sections intended to encourage users to explore and learn about your company data: - All time popular/Recently trending answers and pinboards - Recently viewed answers and pinboards - Recent team activity Answers, pinboards, worksheets, and tables people in your company have created or edited recently. - Did you know? Auto analysis results from SpotIQ The areas are restricted by privileges just as other areas. For example, if a user doesn't have the ability to use SpotIQ, that option does not appear. Stricter column sharing feature This release includes the ability to apply strict column level security. Under the standard column sharing, users without access to a specific table column can still see the column's data if subsequent worksheets relying on that data were shared with them. Now, you can for your installation, prevent this permissive sharing and prevent users from ever seeing the data. Speak with ThoughtSpot Customer Support for information on enabling this feature. Grant Download/Upload to All This release includes two APIs ( v1/group/addprivilege or v1/group/removeprivilege) that allow you to add or remove the DATADOWNLOADING or USERDATAUPLOADING privilege to/from the system default ALL_GROUP group. New date functions for formulas This release includes several new date functions for formulas: - day_number_of_quarter - day_number_of_week - month_number_of_quarter - week_number_of_month - week_number_of_quarter - week_number_of_year Metrics pipeline improvements Included in this release are metrics pipelines that empower both our team and yours to enrich the ThoughtSpot product experience. The new metric pipelines enable: - Faster issue resolution: ThoughtSpot collects the diagnostic information from your system on an ongoing basis: there is no time needed to collect diagnostic information after a problem is reported. Our support team can begin working to remediate any issue with you at once. - Failure prevention: Metrics provides direct visibility to the ThoughtSpot team on your system's limits. Therefore, our Support team can proactively identify critical threshold issues and work to prevent failures. Metrics also help reduce SLA times as the team can debug much faster. - Improved Search: ThoughtSpot can tune search algorithms by studying search history and schema. - Improve Performance: ThoughtSpot analyzes expensive and complex query patterns to look for performance optimizations. - Improved Browser Performance: Finally, the metrics pipeline allows ThoughtSpot to identify application-use patterns that contribute to performance bottlenecks with specific browsers and help your team prevent or alleviate them. Relative time filtering This release includes support for filtering with relative time frame. 
The syntax for this filter is: last [N] <period> for each <period> For example, this filter presents results for the last two months for all the years available in the data. last 2 months for each year Gridlines for charts with x/y axis Users can now enable the display of gridlines in charts that have an x and y axis. Improvements to Growth-over-time queries This release includes improvements with queries that use growth of queries with formats such as the following: growth of <measure_column> by <date_column> <bucket> <period-over-period> This table shows the possible buckets and the period-over keywords you can combine: New period keywords This release includes expansions to the time-series keywords. The quarter of year and day of month keywords were added. Ability to set table load prioritization You can now use tql to set table load priority. You can set priority values between 1-100. The default priority is 50. A lower number indicates a higher priority, with 1 being the highest priority. Tables set to a load priority of 1, load before tables set to higher numbers. The following illustrates examples of the new commands for setting and changing table load priority: alter table 't1' set load priority [value] alter table 't1' remove load priority
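To make the load-priority syntax above concrete, here is a short sketch that follows the command forms shown in these notes; the table name 'sales' and the value 10 are placeholders, not part of the release:

alter table 'sales' set load priority 10
alter table 'sales' remove load priority

With the first command in place, 'sales' (priority 10) loads before tables left at the default priority of 50; the second command returns the table to the default.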
https://docs.thoughtspot.com/4.5/release/notes.html
2021-05-06T12:30:11
CC-MAIN-2021-21
1620243988753.97
[array(['/4.5/images/notes/custom_help_item.png', None], dtype=object) array(['/4.5/images/notes/customize_help.png', None], dtype=object) array(['/4.5/images/notes/chart-color-palette-admin-style-customize.png', None], dtype=object) array(['/4.5/images/notes/chart-color-user-config-inherit-palette.png', None], dtype=object) array(['/4.5/images/notes/chart-color-palette-admin-rotation-on-off.png', None], dtype=object) ]
docs.thoughtspot.com
October, October. The leaves are turning color across the country, Halloween is approaching, and the Fall Classic is winding up, hopefully with our hometown Giants on top for the third time in just five years. It's a good month. Here at ThousandEyes, we've been hard at work and have some exciting updates to announce, particularly in the area of alerting. Check the release notes below for details. We've added a Cloud Agent in Dubai, United Emirates. To request access to the Dubai agent, send us a note. HTTP Server tests now support setting a Desired Response code. Useful for service-targeted monitoring that doesn't return an HTTP/200 response (for example, targeting a site that requires client-side certificates), Administrators can change the desired response code from the default (2xx or 3xx responses) to a specific response code. This expected response code is used to determine the availability of an agent. To set the desired response code, uncheck the Default (2xx or 3xx) option and set the response code, using Test Settings > Advanced Settings > HTTP Response section of the test settings interface. The desired response code must be a single response code; neither ranges nor wildcards are supported. Note: this setting is independent of the Alert Settings option for specific response. The default alerting condition for HTTP Server tests "ERROR is ANY" will respect the Desired Status Code entered here. This is the first of two releases covering significant changes in our alerting. Based on feedback from our customers (remember, you can make use of the "Make a Wish" at the bottom of each page in the platform to request), we have added the following new features: Our alerting has long supported the concept of consecutive alerts before notification. In this release we've changed the behavior of this option, and moved it from the notification rule logic into the alert triggering logic. This change was made to reduce alerting noise in the system, and make alert notifications. Now, you can set the number of locations, and the number of times they must meet this criteria before the alert is triggered in the alert settings page. For accounts who were using the Consecutive Violations before Notification setting (which has been removed in light of this new option), we have moved the value used in notification into the alert rule triggering criteria. We've introduced the ability to use PagerDuty to handle notification and escalation of your alerts generated by ThousandEyes. From PagerDuty's website: "PagerDuty is an alerting system for IT professionals. PagerDuty connects with a variety of systems (including monitoring tools and ticketing systems) and dispatches alerts via automated phone call, SMS and email." Alert rules can be configured to notify a PagerDuty Service, via the Settings > Alerts > Notifications tab. This article outlines configuring PagerDuty to receive alerts from ThousandEyes. Based on the reverse DNS name of layer 3 devices found in the path between agents and test targets, we now show the Interface type and Vendor (if available) of devices transited during Path Trace attempts. This is shown when hovering over a node in the Path Visualization view, for a device matching patterns determined by our development efforts. We've added the roundsBeforeTrigger field (introduced in today's release) to the alert-rules endpoint for APIv4 and higher, as well as corrected an issue with retrieval of Saved Event data for Page Load Tests. Check our Developer Reference site for more information. 
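For readers who want to confirm the new roundsBeforeTrigger field over the API, a minimal request sketch is shown below. The host name, version path, response format, and basic-auth style are assumptions inferred from the endpoint name mentioned above, so treat the Developer Reference as authoritative.

curl https://api.thousandeyes.com/v4/alert-rules.json \
  -u [email protected]:API_TOKEN

Each alert rule returned should now include roundsBeforeTrigger alongside its other triggering criteria.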
As always, send us a note if you have a question, a comment, or just want to say hi.
https://docs.thousandeyes.com/release-notes/2014/2014-10-29-release-notes
2021-05-06T13:44:26
CC-MAIN-2021-21
1620243988753.97
[]
docs.thousandeyes.com
] Removes a user from a group so that the user is no longer a member of the group. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. delete-group-membership --member-name <value> --group-name <value> --aws-account-id <value> --namespace <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>] --member-name (string) The name of the user that you want to delete from the group membership. --group-name (string) The name of the group that you want to delete the user from. -.
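A typical invocation, using placeholder values for the member, group, account ID, and namespace, might look like the following; all four options come from the synopsis above, and "default" is shown as the namespace value purely as an example:

aws quicksight delete-group-membership \
    --member-name Pat \
    --group-name Marketing \
    --aws-account-id 111122223333 \
    --namespace default

The user is only removed from the group; the user itself is not deleted from the account.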
https://docs.aws.amazon.com/cli/latest/reference/quicksight/delete-group-membership.html
2021-05-06T14:17:36
CC-MAIN-2021-21
1620243988753.97
[]
docs.aws.amazon.com
Spark Guide
Two cluster managers are supported: YARN and Spark Standalone. When run on YARN, Spark application processes are managed by the YARN ResourceManager and NodeManager roles. When run on Spark Standalone, Spark application processes are managed by the Spark Master and Worker roles.
Unsupported Features
The following Spark features are not supported:
- Spark SQL: Thrift JDBC/ODBC server and Spark SQL CLI
- Spark Dataset API
- SparkR
- GraphX
- Spark on Scala 2.11
- Mesos cluster manager
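To show how an application selects one of the two supported cluster managers at submission time, here is a generic spark-submit sketch; the application class, JAR name, and Standalone master URL are placeholders rather than values from this guide:

# Submit to YARN in cluster deploy mode
spark-submit --class com.example.MyApp --master yarn --deploy-mode cluster myapp.jar

# Submit to a Spark Standalone master
spark-submit --class com.example.MyApp --master spark://master-host:7077 myapp.jar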
https://docs.cloudera.com/documentation/enterprise/5-10-x/topics/spark.html
2021-05-06T14:23:44
CC-MAIN-2021-21
1620243988753.97
[]
docs.cloudera.com
SchedulerControl.ResourceNavigatorVerticalStyle Property
Gets or sets the style of the vertical resource navigator displayed in the scheduler control when appointments are grouped by dates or resources.
The ResourceNavigatorVerticalStyle property specifies a style that groups together properties, resources, and event handlers and shares them between instances of the ResourceNavigatorControl type.
Target Type: ResourceNavigatorControl
To learn more, see Styles and Templates.
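A minimal XAML sketch of assigning this property is shown below. The dxsch namespace prefix and the Background setter are illustrative assumptions (they are not part of the reference above); map the prefix to the DevExpress Scheduler namespace used in your project.

<dxsch:SchedulerControl>
    <dxsch:SchedulerControl.ResourceNavigatorVerticalStyle>
        <!-- Style shared by all vertical ResourceNavigatorControl instances -->
        <Style TargetType="{x:Type dxsch:ResourceNavigatorControl}">
            <Setter Property="Background" Value="LightGray" />
        </Style>
    </dxsch:SchedulerControl.ResourceNavigatorVerticalStyle>
</dxsch:SchedulerControl>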
https://docs.devexpress.com/WPF/DevExpress.Xpf.Scheduler.SchedulerControl.ResourceNavigatorVerticalStyle?v=19.1
2021-05-06T12:45:14
CC-MAIN-2021-21
1620243988753.97
[]
docs.devexpress.com
New in version 2016.3.0. Salt supports minion blackouts. When a minion is in blackout mode, all remote execution commands are disabled. This allows production minions to be put "on hold", eliminating the risk of an untimely configuration change. Minion blackouts are configured via a special pillar key, minion_blackout. If this key is set to True, then the minion will reject all incoming commands, except for saltutil.refresh_pillar. (The exception is important, so minions can be brought out of blackout mode) Salt also supports an explicit whitelist of additional functions that will be allowed during blackout. This is configured with the special pillar key minion_blackout_whitelist, which is formed as a list: minion_blackout_whitelist: - test.version - pillar.get
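Putting the pieces together, a pillar sketch that puts a minion into blackout while still allowing a couple of whitelisted functions might look like the following (the pillar file path and targeting below are assumptions for illustration, not part of this page):

# /srv/pillar/blackout.sls (hypothetical pillar file)
minion_blackout: True

minion_blackout_whitelist:
  - test.version
  - pillar.get

After changing the pillar, refresh it on the minions with saltutil.refresh_pillar — the one function that is always allowed — which is also how a minion is later brought back out of blackout once the key is set to False:

salt '*' saltutil.refresh_pillar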
https://docs.saltproject.io/en/latest/topics/blackout/index.html
2021-05-06T12:55:29
CC-MAIN-2021-21
1620243988753.97
[]
docs.saltproject.io
Manage Directories for Amazon WorkSpaces Amazon WorkSpaces uses a directory to store and manage information for your WorkSpaces and users. You can use one of the following options: AD Connector — Use your existing on-premises Microsoft Active Directory. Users can sign into their WorkSpaces using their on-premises credentials and access on-premises resources from their WorkSpaces. AWS Managed Microsoft AD — Create a Microsoft Active Directory hosted on AWS. Simple AD — Create a directory that is compatible with Microsoft Active Directory, powered by Samba 4, and hosted on AWS. Cross trust — Create a trust relationship between your AWS Managed Microsoft AD directory and your on-premises domain. For tutorials that demonstrate how to set up these directories and launch WorkSpaces, see Launch a Virtual Desktop Using Amazon WorkSpaces. For a detailed exploration of directory and virtual private cloud (VPC) design considerations for various deployment scenarios, see the Best Practices for Deploying Amazon WorkSpaces After you create a directory, you'll perform most directory administration tasks using tools such as the Active Directory Administration Tools. You can perform some directory administration tasks using the Amazon WorkSpaces console and other tasks using Group Policy. For more information about managing users and groups, see Manage WorkSpaces Users and Set Up Active Directory Administration Tools for Amazon WorkSpaces. Shared directories are not currently supported for use with Amazon WorkSpaces. If you configure your AWS Managed Microsoft AD directory for multi-Region replication, only the directory in the primary Region can be registered for use with Amazon WorkSpaces. Attempts to register the directory in a replicated Region for use with Amazon WorkSpaces will fail. Multi-Region replication with AWS. Contents
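Although the sections above describe working through the Amazon WorkSpaces console, directory registration can also be scripted. The sketch below uses the AWS CLI with a placeholder directory ID; treat the exact option set as an assumption and verify it against the WorkSpaces CLI reference before use.

aws workspaces register-workspace-directory \
    --directory-id d-1234567890 \
    --no-enable-work-docs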
https://docs.aws.amazon.com/workspaces/latest/adminguide/manage-workspaces-directory.html
2021-05-06T12:07:47
CC-MAIN-2021-21
1620243988753.97
[]
docs.aws.amazon.com
Create a SnapMirror endpoint
You must create a SnapMirror endpoint in the NetApp Element UI before you can create a relationship. A SnapMirror endpoint is an ONTAP cluster that serves as a replication target for a cluster running Element software. You can create and manage up to four SnapMirror endpoints on a storage cluster running Element software. For details about API methods, see Manage storage with the Element API.
Before you begin:
- You should have enabled SnapMirror in the Element UI for the storage cluster.
- You know the ONTAP credentials for the endpoint.
Steps:
1. Click Data Protection > SnapMirror Endpoints.
2. Click Create Endpoint.
3. In the Create a New Endpoint dialog box, enter the cluster management IP address of the ONTAP system.
4. Enter the ONTAP administrator credentials associated with the endpoint.
5. Review additional details: LIFs lists the ONTAP intercluster logical interfaces used to communicate with Element; Status shows the current status of the SnapMirror endpoint (possible values are connected, disconnected, and unmanaged).
6. Click Create Endpoint.
https://docs.netapp.com/us-en/element-software/storage/task_snapmirror_create_an_endpoint.html
2021-05-06T12:55:37
CC-MAIN-2021-21
1620243988753.97
[]
docs.netapp.com
Source code for libqtile.widget.memory

# -*- coding: utf-8 -*-
# Copyright (c) 2015 Jörg Thalheim (Mic92)

from libqtile.widget import base


def get_meminfo():
    val = {}
    with open('/proc/meminfo') as file:
        for line in file:
            key, tail = line.split(':')
            uv = tail.split()
            val[key] = int(uv[0]) // 1000
    val['MemUsed'] = val['MemTotal'] - val['MemFree']
    return val


class Memory(base.InLoopPollText):
    """Displays memory usage."""
    orientations = base.ORIENTATION_HORIZONTAL
    defaults = [
        ("fmt", "{MemUsed}M/{MemTotal}M", "see /proc/meminfo for field names")
    ]

    def __init__(self, **config):
        super(Memory, self).__init__(**config)
        self.add_defaults(Memory.defaults)

    def poll(self):
        return self.fmt.format(**get_meminfo())
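For context, the widget above is used from a qtile configuration like any other bar widget. The following is only a small usage sketch (the bar placement, height, and format string are illustrative, not taken from this page):

from libqtile import bar, widget
from libqtile.config import Screen

screens = [
    Screen(
        bottom=bar.Bar(
            [
                # Render used/total memory, matching the widget's default fmt
                widget.Memory(fmt="{MemUsed}M/{MemTotal}M"),
            ],
            24,  # bar height in pixels
        ),
    ),
]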
https://docs.qtile.org/en/v0.10.4/_modules/libqtile/widget/memory.html
2021-05-06T12:29:59
CC-MAIN-2021-21
1620243988753.97
[]
docs.qtile.org
ContentTypes / Taxonomies Note: You are currently reading the documentation for Bolt 3.6. Looking for the documentation for Bolt 4.0 instead? Most content can be structured in logical ways, using categorisation or labelling. Basically, taxonomies can be used to create 'groupings' or 'classifications' between different content, regardless of their ContentTypes. Common examples of taxonomies on websites are 'categories' or 'tags'. For example, a website about movies might have "Romance", "Science Fiction" and "Comedy" as categories for its reviews. Any classification like this is broadly called a taxonomy. Bolt allows you to define and use your own taxonomies to classify and structure your content. You can create Bolt Taxonomies by adding them to taxonomy.yml. Bolt can use the common 'tags' and 'categories' out of the box, but it also allows you to customize them to your needs. You can define your own Taxonomies, and choose how they behave. There are three main types of Taxonomy: tags: Tags are a sort of 'freeform' labeling. Each record can have several tags, that do not have to be selected from a predefined list. Just add tags, as you go! Examples of websites that use tags extensively are Flickr or Delicious. The Taxonomy can be set up to allow spaces in tag names or not. categories: Categories are chosen predefined categorizations for your record. These are often found on weblogging sites, to define the different types of blogpostings. The Taxonomy can be limited to either one or more categories for each record. grouping: Grouping is like categories but it is - by definition - more strict. When a grouping applies to a certain record, that record should be viewed as a part of the other records with the same grouping. As such, a record can have only one 'grouping' at most. The default taxonomy.yml has good examples of all three types. If name and singular_name are omitted, they are generated automatically by Bolt. tags: slug: tags singular_slug: tag behaves_like: tags allow_spaces: false #listing_template: tag-listing.twig #custom template chapters: slug: chapters singular_slug: chapter behaves_like: grouping options: [ main, meta, other ] categories: name: Categories slug: categories singular_name: Category singular_slug: category behaves_like: categories multiple: 1 options: [ news, events, movies, music, books, life, love, fun ] The common options are: Setting options¶ Both the grouping as well as the categories Taxonomies use a number of set options. You can set these possible options in your taxonomy.yml, after which the editor can select one or more of them when they are editing the content. Yaml allows us to specify these options in a few different ways, depending on your needs. Simple sequence¶ categories: … options: [ news, events, movies ] Mapping¶ If you like more control over the display names for the taxonomies, you can specify the options using a mapping in your Yaml: categories: … options: news: Latest News events: Current Events movies: Cool Movies Sorting order¶ Bolt ContentTypes can have their own sorting order. Usually this is defined as something like sort: title in the ContentType to sort alphabetically by title. Sometimes it might make more sense to use a grouping Taxonomy, and sort within those groups. To do this, you can add has_sortorder, which allows the editor to not only select a group to classify a record, but it also allows them to set a sorting order by which the records inside that specific group are sorted. 
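A sketch of how this could look in taxonomy.yml, assuming has_sortorder is simply added to the grouping taxonomy definition (verify the exact key name and placement against your Bolt version):

chapters:
    slug: chapters
    singular_slug: chapter
    behaves_like: grouping
    has_sortorder: true
    options: [ main, meta, other ]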
In Bolt's backend listing for the content-type, the content will be organised by the selected group, and it will be sorted by the sorting order: Note that the sorting is done inside the group only. Adding Taxonomies to ContentTypes¶ Once the Taxonomies are added, you need to add them to your ContentTypes in contenttypes.yml, so you can use them in your content. For example: entries: name: Pages singular_name: Page fields: … taxonomy: chapters If you'd like to use more than one Taxonomy for a ContentType, be sure to use an array: pages: … taxonomy: [ categories, tags ] Displaying Taxonomies in templates¶ If you'd like to show only one specific Taxonomy, for example 'tags', use something like this: {% for tag in record|taxonomy.tags|default %} {{ tag }}{% if not loop.last %}, {% endif %} {% endfor %} If you're using a listing, and would like to access the taxonomy name, simply use {{ slug }}. Displaying all used taxonomies¶ If you'd like to just display the used Taxonomies in your templates, you can either write some Twig code to output all of them in sequence, but for a quick fix, you can use a snippet like the following: After updating your content with Taxonomies, you can edit your templates to show the Taxonomies and to link to automatically generated listing pages: {% for type, values in record|taxonomy %} <em>{{ type }}:</em> {% for link, value in values %} <a href="{{ link }}">{{ value }}</a>{% if not loop.last %}, {% endif %} {% endfor %} {% if not loop.last %} - {% endif %} {% endfor %} For a slightly more sophisticated example, see the default taxonomy links template _sub_taxonomylinks.twig, or even use it directly in your own theme: {% include '_sub_taxonomylinks.twig' with {record: record} %} Couldn't find what you were looking for? We are happy to help you in the forum, on Slack or on Github.
https://docs.bolt.cm/3.6/contenttypes/taxonomies
2021-05-06T12:55:11
CC-MAIN-2021-21
1620243988753.97
[]
docs.bolt.cm
The Create IJC shared project action is used to generate Java Web Start files for one or more selected schemas. The selection is prefilled from the main screen when entering the wizard. The user has to enter the destination folder where the shared project files should be generated and, optionally, a database user name and password. When the checkbox “Generate files for different database user/pass then in current connection” is checked, the user has to enter a user name and password for this operation. If the checkbox is left unchecked, the user doesn't have to fill in a database username and password because the ones from the source connection are used for the operation.
https://docs.chemaxon.com/display/docs/create-jws-files-operation.md
2021-05-06T12:01:09
CC-MAIN-2021-21
1620243988753.97
[]
docs.chemaxon.com
Schema Modeler
Schema Modeler is a powerful tool provided with Data Abstract, aimed at developers, designers, and database administrators so they can define schemas, edit column mappings, write Business Rules, maintain an existing schema, and more. How the tool works is largely identical between the two OS platforms; however, there are some differences, particularly in how you actually start the tool. The following pages cover the version available for each operating system.
https://docs.dataabstract.com/Tools/SchemaModeler/
2021-05-06T13:20:02
CC-MAIN-2021-21
1620243988753.97
[]
docs.dataabstract.com
To add a second form of identity verification to your sign in process, you need to configure the two-factor authentication requirements for your account. You can configure a requirement for all users and for individual users as needed. For accounts configured to use the Client Certificate or the One-Time Password (OTP) option, you can only configure requirements for individual users. These types of account configurations require all account members to use their username and password and a second form of authentication to sign in to their account: client certificate or one-time password. In your CertCentral account, in the left main menu, go to Settings > Authentication Settings. In the Two-Factor Authentication Requirements section, click Add New Requirement. Authentication Type On the Add Two Factor Requirement page, under Authentication Type, select the second form of authentication you want to require: Apply Rule To Under Apply Rule To, select who you want the rule to apply to: Click Create Requirement. On the Authentication Settings page (in the left main menu, go to Settings > Authentication Settings), in the Two-Factor Authentication Requirements section, each new two-factor authentication rule/requirement is added to the table. Additionally, as users sign in and generate client certificates and initialize OTP apps or devices, they are added to the applicable table—One-Time Password (OTP) Devices or Issued Client Certificates.
https://docs.digicert.com/zh-tw/manage-account/certcentral-two-factor-authentication/configure-two-factor-authentication-requirements/
2021-05-06T13:38:30
CC-MAIN-2021-21
1620243988753.97
[]
docs.digicert.com
User Guide Version 1.0.0
Table of Contents
1. INTRODUCTION
- The Magento PSiGate payment gateway extension specializes in capturing credit card payments into your PSiGate account.
- The Rootways PSiGate Extension is quite easy to install and configure. Just enable the payment method, add your PSiGate information, and you can use the PSiGate Payment Method for your website or Magento store.
- No technical knowledge required!
Features Listing:
- Choose the payment action as Authorize Only or Authorize and Capture (Sale)
- Refund payments from the Magento admin.
- Secure payment method
- Easy to install and configure.
- User-friendly interface.
- Reliable and prompt support by Rootways
2. HOW TO USE AND CONFIGURE
This section will guide you through configuring the Rootways PSiGate Payment Gateway Extension. It is quite easy and fast!
2.1 License Key Settings:
- Log in to the Admin Panel and then click STORES -> Configuration -> ROOTWAYS EXTENSIONS -> PSiGate Payment -> Settings
- Below is the screenshot of the PSiGate Payment Gateway license key settings. Please enter the license key for this extension emailed to you by us after your purchase.
2.2 PSiGate Payment Method Management:
- Go to STORES -> Configuration -> SALES -> Payment Methods -> PSiGate Payment Method – By Rootways Inc.
- You can see the screen below. Fill in all required details and save it.
- Enable: Enable or disable this payment method.
- Title: Title of the payment method visible on the front end.
- PSiGate Store ID: Add the store ID of your PSiGate account.
- PSiGate Passphrase: Add the passphrase of your PSiGate account.
- Test Mode: Choose whether to enable or disable test mode.
- Credit Card Verification: If set to ‘No’, the CVC will not be checked while placing an order.
- Payment Action: Choose whether it is Authorize Only or Sale.
- Credit Card Types: Choose the types of credit cards that you want to display on the front end.
- Email Customer: If set to ‘Yes’, an email will be sent to the customer.
- Merchant Email: Add the merchant email.
- New order status: Choose whether the new order status is Pending or Processing.
- Debug: If set to ‘Yes’, you can check the log for this payment method.
- Payment from Applicable Countries: Choose whether to allow this payment method for all countries or specific countries only.
- Payment from Specific Countries: Select the countries in which you want to allow this payment method.
Front End:
- After setting up the PSiGate Payment Method, you can see it on the checkout page, as in the screenshot below. Now you can capture a large volume of orders using the PSiGate extension.
- That’s how easy it is to use the PSiGate Payment Gateway extension by Rootways. Please contact us for any queries regarding Magento and custom web development of e-commerce websites.
https://docs.rootways.com/magento-2-psigate-payment-module-user-guide/
2021-05-06T13:42:52
CC-MAIN-2021-21
1620243988753.97
[]
docs.rootways.com
Sometimes the notification "Please login your Ryviu account" appears even though you have already logged in to your Ryviu account. The reason you are facing this issue is that your browser is blocking cookies from the Ryviu application. Fix it by following these instructions: go to Chrome Settings > Advanced > Content Settings > Cookies and add the addresses below to the Allow field. [*.]ryviu.com [*.]aliexpress.com If you have any further questions, please let us know via live chat.
https://docs.ryviu.com/en/articles/2491040-solve-issue-please-login-your-ryviu-account
2021-05-06T12:24:03
CC-MAIN-2021-21
1620243988753.97
[]
docs.ryviu.com
You can configure multicast on a tier-0 gateway and optionally on a tier-1 gateway for an IPv4 network to send the same multicast data to a group of recipients. In a multicast environment, any host, regardless of whether it is a member of a group, can send to a group. However, only the members of a group will receive packets sent to that group. The multicast feature has the following capabilities and limitations: - PIM Sparse Mode with IGMPv2. - No Rendezvous Point (RP) or Bootstrap Router (BSR) functionality on NSX-T. However, RP information can be learned via PIM Bootstrap Messages (BSMs). In addition, multiple Static RPs can be configured. When a Static RP is configured, it serves as the RP for all multicast groups (224/4). If candidate RPs learned from BSMs advertise candidacy for the same group range, the Static RP is preferred. However, if candidate RPs advertise candidacy for a specific group or range of groups, they are preferred as the RP for those groups. - The Reverse Path Forwarding (RPF) check for all multicast-specific IPs (senders of data traffic, BSRs, RPs) requires that a route to each of them exists. - The RPF check requires a route to each multicast-specific IP with an IP address as the next hop. Reachability via device routes, where the next hop is an interface index, is not supported. - Both tier-0 and tier-1 gateways are supported. To enable multicast on a tier-1 gateway, an Edge cluster must be selected and the tier-1 gateway must be linked to a tier-0 gateway that also has multicast enabled. - All uplinks on a tier-0 gateway are supported. - Multiple Static RPs with discontinuous group ranges are supported. - IGMP local groups on uplink interfaces are supported. - PIM Hello Interval and Hold Time are supported. - Active-Cold Standby only is supported. - The NSX Edge cluster can be in active-active or active-standby mode. When the cluster is in active-active mode, two of the cluster members will run multicast in active-cold standby mode. You can run the CLI command get mcast high-availability role on each Edge to identify the two nodes participating in multicast. Also note that since unicast reachability to NSX-T in an active-active cluster is via ECMP, it is imperative that the northbound PIM router selects the ECMP path that matches a PIM neighbor to send PIM Join/Prune messages to NSX-T. In this way it will select the active Edge which is running PIM. - East-west multicast replication: up to 4 VTEP segments for maximum replication efficiency. - ESXi host and NSX Edge only (KVM not supported). - Layer 2 bridge attached to a downlink segment not supported. - Edge Firewall services are not supported for multicast. Distributed Firewall is supported. - Multi-site (NSX Federation) not supported. - Multi-VRF not supported. Multicast Configuration Prerequisites Underlay network configurations: - Acquire a multicast address range from your network administrator. This will be used to configure the Multicast Replication Range when you configure multicast on a tier-0 gateway (see Configure Multicast). - Enable IGMP snooping on the layer 2 switches to which GENEVE participating transport nodes are attached. If IGMP snooping is enabled on layer 2, IGMP querier must be enabled on the router or layer 3 switch with connectivity to multicast enabled networks. Multicast Configuration Steps - Create an IGMP profile. See Create an IGMP Profile. - Optionally create a PIM profile to configure a Static Rendezvous Point (RP). See Create a PIM Profile. 
- Configure a tier-0 gateway to support multicast. See Add a Tier-0 Gateway and Configure Multicast. - Optionally configure tier-1 gateways to support multicast. See Add a Tier-1 Gateway.
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AAC3360-4F79-4FBF-BCC1-0D8C220B0621.html
2021-05-06T13:27:24
CC-MAIN-2021-21
1620243988753.97
[]
docs.vmware.com
Description Moodle is an open source online Learning Management System (LMS) widely used at universities, schools, and corporations worldwide. It is modular and highly adaptable to any type of online learning. First steps with the Bitnami Moodle? This is required so your application can send notifications via email. Find below an example of configuring the email using a Gmail account from the Moodle Administration Panel. Log in as the administrative user and go to the "Site administration -> Server -> Email -> Outgoing mail configuration" section. Replace USERNAME and PASSWORD with your Gmail account username and password respectively. SMTP hosts = smtp.gmail.com:465 SMTP security = SSL SMTP username: [email protected] SMTP password: PASSWORD To configure the application to use other third-party SMTP services for outgoing email, such as SendGrid or Mandrill, refer to the FAQ. How to install a plugin on Moodle?. How to create a full backup of Moodle? Backup The Bitnami Mood: $/ - Navigate to the application from a browser and follow the steps to upgrade the database to the latest Moodle/moodle/bnconfig --machine_hostname NEW_DOMAIN If you have configured your machine to use a static domain name or IP address, you should rename or remove the /opt/bitnami/apps/moodle/bnconfig file. $ sudo mv /opt/bitnami/apps/moodle/bnconfig /opt/bitnami/apps/mood access phpMyAdmin on Windows through an SSH tunnel using a private key:). You have two options to configure the SSH tunnel: connect to the server using a private key (recommended) or connect to the server using a SSH password. Follow the instructions below per each option: Option 1: Connect to the server using a private key - Option 2: Connect to the server using a SSH password BigBlueButtonBn and RecordingBn plugins for Moodle? Copy the folders to /opt/bitnami/apps/moodle/htdocs/mod/: $ cp -R bigbluebuttonbn/ /opt/bitnami/apps/moodle/htdocs/mod/ $ cp -R recordingsbn/ /opt/bitnami/apps/moodle/htdocs/mod/ Login to your Moodle site as administrator. Moodle. How to disable the admin web interface option for installing add-ons? To hide this option from the admin web interface, edit the configuration file at /opt/bitnami/apps/moodle/htdocs/config.php and set: $CFG->disableonclickaddoninstall = true; On Linux systems, also remove the write permissions from the web server group in the Moodle directories: $ find /opt/bitnami/apps/moodle/htdocs/ -type d 755 How to install a cron task for Moodle? Moodle requires a cron task that must be run regularly. The administrator can do this from the Admin panel's "Site Administration -> Notifications" menu. To run this task in the background, create a cron job. To edit the crontab, run the following command: $ sudo crontab -e Add the following line to it: */15 * * * * sudo su daemon -s /bin/sh -c "/opt/bitnami/php/bin/php /opt/bitnami/apps/moodle/htdocs/admin/cli/cron.php > /dev/null" This cron entry will run the script every 15 minutes. How to migrate. How to secure. Troubleshooting You may see the following error: PHP Notice: Undefined index: HTTP_HOST in /opt/bitnami/apps/moodle/htdocs/config.php To resolve this specify your domain name or IP address in the /opt/bitnami/apps/moodle/htdocs/config.php file, as shown below: $CFG->wwwroot = '';
https://docs.bitnami.com/azure/apps/moodle/
2018-07-15T20:44:16
CC-MAIN-2018-30
1531676588972.37
[]
docs.bitnami.com
TRAME¶ TRAME (Texts and Manuscript Transmission of the Middle Ages in Europe) is a research infrastructure project for the development and interoperability of web databases about medieval manuscript tradition. 1. What is it?¶ TRAME is a metasearch tool for medieval manuscripts, hosted by the Fondazione Ezio Franceschini and SISMEL. It aims to assist users in finding primary sources for the study of medieval culture. Originally developed in 2011 to allow a combined search on a group of different Italian repositories and databases, it has increasingly widened its scope to become a global research infrastructure for all scholars of the Middle Ages. Its main aim is to build a research infrastructure that focuses on promoting interoperability and fostering discoverability among the different digital resources available in the medieval digital ecosystem. Since 2014 TRAME has focused primarily on extending the meta-search approach to other web resources, using the user’s interaction with the research tool in an attempt to define a medieval manuscripts ontology, and redesigning the portal with the aim of improving the accessibility and usability of the site. Currently it implements a number of features (including simple search, shelf-mark search, and advanced search) on more than 80 selected scholarly digital resources devoted to western medieval manuscripts, authors, and texts across the EU and USA. These resources include digital libraries, research databases and other projects from leading research institutions. TRAME is a research tool rooted in the international medieval scholarly community and an ongoing collaborative international effort. Its development is in line with the Memorandum of Understanding of the COST Action IS1005 “Medieval Europe Medieval Cultures and Technological Resources”, representing 260 researchers coming from 39 leading institutions in 24 countries across the European Union. 2. What can I find there?¶ TRAME allows the user to search simultaneously in library catalogues, project databases and research portals. It combines online resources that are inside the TRAME network as well as external sites. The infrastructure combines both: Bibliographies pertaining to manuscripts, e.g. - MEL – Medioevo latino. Bollettino bibliografico della cultura europea da Boezio e Erasmo (secoli VI-XV) - MEM – Medioevo musicale. Bollettino bibliografico della musica medievale - TLION mss – Bibliografia dei manoscritti citati in rivista - BibMan – Bibliografia dei manoscritti in alfabeto latino conservati in Italia Repertories of texts and manuscripts, e.g. - LIO – Repertorio della lirica italiana delle Origini - BAI – Biblioteca Agiografica Italiana - MAFRA – Repertorio dei manoscritti gallo-romanzi esemplati in Italia - MAGIS – Manoscritti agiografici dell’Italia del Sud - IRHT – Jonas : Répertoire des textes et des manuscrits médiévaux d’oc et d’oïl - JONAS - Ramon Llull Database - Repertorium Biblicum Medii Aevii - BHL Biblioteca Hagiographica Latina manuscrita Catalogues of manuscripts (with images), e.g. - CODEX – Inventario dei manoscritti medievali della Toscana - MDI – Manoscritti datati d’Italia - Manuscriptorium - Enluminures Repertories pertaining to the history of traditions, e.g. - TETRA – La trasmissione dei testi latini del Medioevo - TLION – Tradizione della letteratura italiana online Bio-bibliographies - BISLAM – Bibliotheca Scriptorum Latinorum Medii Recentiorisque Aevi - CALMA – Compendium Auctorum Latinorum Medii Aevi (500-1500) Library catalogues, e.g. 
- Beinecke Digital Collections - Bodleian Library - Bibliothèque nationale de France - Trinity College - Bayerische Staatsbibliothek - Hill Museum & Manuscript Library - Munchen (BSB) - MDZ - Early Manuscripts at Oxford University - Bibliothèque Municipale de Lyon - Mazarinum - Les collections numeriques de la Bibliotheque Mazarine - Mazarinum 4. TRAME version 2¶ A completely revised TRAME user interface is currently in development. Not all of the search functions are already operational. In the new version of TRAME 2 you can perform searches as a normal user or as a registered user. The main difference is that as registered user you can build and save your own searches in ‘search packages’ (meaning your own selection of databases). You will have the opportunity to makes these packages public, so others can use them, or to keep them private. The search options remain the same as in the old version: a simple or freetext search, shelf-mark and advanced search. At this at this stage the only search fully working is the freetext. Video User interface TRAME 2 5. Technical Background¶ TRAME’s development has been influenced by changes regarding the nature of information available in the WWW. TRAME has developed from a basic meta-search approach towards an attempt to establish a Medieval Semantic Knowledge base, by using custom applications for information collection and integration (i.e.: web crawler, data miner). The application is written in OO-PHP, the design follows the MVC Pattern, the RDBMS is MySql and the front-end combines Xhtml and Javascript. The search engine scans a set of sources for searched query terms and retrieves links to provide a wide range of information, including simple references, detailed manuscript record, and full-text transcriptions. Currently, it is possible to perform queries by freetext, shelf-mark, author, title, date, copyist or incipit, on more than 80 selected scholarly digital resources across the EU and the USA. Advantages of TRAME’s search of remote resources: - TRAME has light and flexible infrastructure, as both data indexes are not stored in a central database. Actually no information is stored except for a few technical metadata. - TRAME will send the user query across a vast number of repositories and present the results in a single list. - TRAME can send a user query across a number or remote systems over HTTP protocol, it’s also supporting OAI-PMH on selected repositories and (if available) specific APIs - The results will be divided in groups according to their provenance or type (the original data provider) - All search results found by TRAME’s meta-search engine are accessible via the original provider’s web site, with their own policies and licensing methods - A user query is sent simultaneously over a wide number of connected systems in order to collect a unique list of results. The search results will have all the information needed to identify each individual manuscript, such as localization (City, Library and Holding), shelf-mark and the link to the actual digital resource (URI: uniform resource identifier) To learn more about the technical background of TRAME and TRAME 2 please have a look at the source code documentation.
https://docs.cendari.dariah.eu/user/trame.html
2018-07-15T21:07:15
CC-MAIN-2018-30
1531676588972.37
[array(['../_images/TRAME_01.png', '../_images/TRAME_01.png'], dtype=object)]
docs.cendari.dariah.eu
Reference for the Web Application Package by Tali Smith Introduction Every application in the Web Application Gallery has at least two XML files that enable the Web Platform Installer (Web PI) to use the Web Deployment Tool (WDT) to deploy the application on Windows® operating systems. These files are the Manifest.xml and Parameters.xml files. In addition, many applications add a SQL script to be run by the WDT as part of the pre-setup installation. PHP applications also include a Web.config file. Manifest.xml The Manifest.xml file (or the manifest) is the main file that tells the WDT what to do with an application. It is an XML file made up of "providers". Each of the providers can be modified by user input, which is described in the Parameters.xml file. There should generally be at least one parameter for each provider in the manifest. The available providers include: - iisapp - This is the only required provider in the manifest. The iisapp element has a single required attribute - "path". The path specifies the sub folder within the application package (compressed [Zip] file) that contains the files and directories required for running the application. When the application is installed, the contents of the specified folder are placed in the root of the Web site application folder. - dbfullsql - This provider identifies a connection to a Microsoft® SQL Server® or Microsoft® SQL Server® Express database instance. The "path" attribute is required and identifies a SQL script to be executed against the database. The credentials and database information required to connect to the database are provided by the user through the parameters. The default behavior of this provider is to treat each SQL file as if it were a single database transaction which can be committed or rolled-back as needed. If the contents of your script maintain their own transactions, or have components which must run outside of a transaction, you can specify a "transacted" attribute and set its value to "false". - dbmysql - This provider identifies a connection to a MySQL database engine. The "path" attribute is required and identifies a SQL script to be executed against the database. The credentials and database information required to connect to the database are provided by the user through the parameters. If your SQL scripts contain stored procedures which require you to define a different command delimiter from the MySQL default of a semicolon (;), you can specify the new delimiter with the optional "commandDelimiter" attribute. If you specify a commandDelimiter, you can have it removed from the SQL by specifying a "removeCommandDelimiter" attribute and setting its value to "true". Use removeCommandDelimiter if you are using a delimiter other than a semicolon (;). - setacl - This provider is used to apply an access control list (ACL) to a file or directory. ACLs are used to define the access and permissions that a user or an application has on a file or directory. By default, everything installed by the WDT is installed without changing permissions. In most cases, this means that files will be readable by PHP (and Microsoft® ASP.NET), but not writable. If your application needs additional permissions, you can use this provider to set them. - alias - Alias is used when you need to copy a file from the application package to another name or location. Typically this is used for applications that provide sample configuration files. 
Prior to the WDT, these applications often required the user to copy the sample file and edit it by hand. With the alias provider, you can have the WDT make the copy and use the parameters to fill in any necessary values. Parameters.xml The Parameters.xml file defines the WDT and Web PI interactions with the user. Parameters.xml is an XML file made up of "parameters", presented to the user as a form that needs to be filled out to customize the application installation for the user's environment. Each parameter represents a data item that the WDT needs to collect to perform the installation. These parameters can be used to modify the attributes of a manifest provider or to replace targeted text in any file in the application distribution, or they can be used as components of the value of other parameters. The parameters have a number of attributes, including: - name - This is a required attribute which uniquely identifies the parameter within the scope of the Parameters.xml file. It is also used as the parameter's variable name. - friendlyName - The friendlyName is used by most user interfaces (UIs) as the name above the form element that the parameter defines. If there is no friendlyName defined, the UIs will default to the parameter name. - description - The description provides the caption for the form elements. These are most often used to provide instructions to the user about what is expected for the parameter and how it will be used. - defaultValue - For parameters that are shown to the user, the defaultValue gets placed in the form as a suggestion for the value of the parameter. For parameters that are hidden from the user (see the description of the "hidden" tag below), the defaultValue is used to calculate the value for the parameter. - tags - Tags are a comma-separated list of words or phrases that provide extra information for the WDT and Web PI to categorize a parameter. If your parameter is tagged with one of the ten well-known tags from the table below, the UI will automatically provide a friendlyName and description for the parameter. If you specify these elements, you override the defaults. Do not override the defaults unless there is a significant reason to do so. Common parameter tags include: - MySqlConnectionString - SqlConnectionString - DbAdminPassword - DbAdminUsername - DbName - DbServer - DbUsername - DbUserPassword - IisApp - SetAcl In addition, each parameter can have sub elements that perform various functions. ParameterEntry types There are four distinct ParameterEntry types available to specify parameter substitutions, each providing a different way of identifying the target. Three attributes are required for each instance: - type—specifies the mechanism for identifying the target of the parameter replacement. - scope—identifies the file or component that will be searched for the target. - match—specifies the search parameters for identifying the target Each ParameterEntry type has a specific format for defining the scope and the match attributes. The combination of all three elements should always identify at least one match within the application package. You may have as many ParameterEntrys as you need for your application; for more than one type of substitution or more than one target for the substitutions (for example, if two different files both need the substitution), you will need a distinct ParameterEntry element for each one. 
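Before looking at each ParameterEntry type in detail, it may help to see how a manifest provider and its matching parameter line up. The fragment below is only a sketch: the root element name, casing, folder name, and parameter name are illustrative and should be checked against a real gallery package rather than taken as the required schema.

<!-- Manifest.xml (sketch) -->
<MSDeploy.iisApp>
  <iisApp path="myapp" />
  <dbMySql path="install.sql" />
</MSDeploy.iisApp>

<!-- Parameters.xml (sketch): the ProviderPath entry targets the iisapp provider above,
     so the path the user chooses replaces "myapp" at install time -->
<parameters>
  <parameter name="Application Path" defaultValue="Default Web Site/myapp" tags="iisApp">
    <parameterEntry type="ProviderPath" scope="iisapp" match="myapp" />
  </parameter>
</parameters>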
TextFile ParameterEntry types With the TextFile type, the scope is a regular expression used to identify a file or set of files to target. The match is a regular expression that represents the string to be replaced. The WDT applies the parameter substitution to all strings that are found by the match in all of the files identified by the scope. When defining regular expressions for TextFile ParameterEntry types, it is important to make sure that you limit both your match and your scope to the most precise expressions you can define. TextFile gives you the most reliable method for identifying targets within a file when you have complete control over the source and you are able to define the strings to be targeted. If you do not have complete control over the source, you can use the TextFilePosition parameterEntry type to uniquely identify the target. This most commonly happens with configuration files that have other aspects of the application that expect specific data in both modified and unmodified versions of the file. For example, you could specify that a database user name needs to be placed within a SQL script that was written for use with the WDT. The WDT would look for the string "PlaceholderForDbUsername" and substitute the user's parameter input for that string: <parameter name="dbUsername" defaultValue="appUser" tags="MySQL"> <parameterEntry type="TextFile" scope="install.sql" match="PlaceholderForDbUsername" /> </parameter> XmlFile Many applications use XML files for data or configuration. All Microsoft® .NET applications, and many PHP applications, use a Web.config file for storing Web site and application configuration information. When identifying a target within a XML file, the most reliable mechanism to use is an XPath query. The scope for an XmlFile entry type is defined the same way as for TextFile, using a regular expression to identify the target file(s). The match is specified as an XPath query. An XmlFile entry will replace every existence of a matching XPath in a file. This will frequently mean that you need a detailed XPath query to make sure you replace only the targeted match. For example, an application might have a SQL connectionString in a Web.config file. There may be more than one connectionString in the file to allow for different database types: <connectionStrings> <clear /> <add name="SQLiteDbConnection" connectionString="Data Source=|DataDirectory|application_data.sqlite;Version=3;" /> <add name="SqlServerDbConnection" connectionString="Data Source=(local);Initial Catalog=Application;Integrated Security=true;Application Name=Application" /> </connectionStrings> You could use the generated connection string parameter in the following example to replace the value in the configuration file with the values that the user entered. In this case, the XPath query searches from the root of the XML file following the element path in the query for the first "add" element under the "connectionStrings" element under the "configuration" element with an attribute of "name" that equals "SqlServerDbConnection". It then replaces the value of the "connectionString" attribute of that element with the parameterized value. 
<parameter name="ConnectionString For Config" defaultValue="DataSource={dbServer};Database={dbName};uid={dbUsername};Pwd={dbPassword};" tags="Hidden"> <parameterEntry type="XmlFile" scope="\\web.config$" match="/configuration/connectionStrings/add[@name='SqlServerDbConnection']/@connectionString" /> </parameter> For more information, see the article "XPath Syntax." ProviderPath The ProviderPath type is used for applying parameters to providers specified in the Manifest.xml file. When you are developing your application package, you cannot know all of the specifics about your users' system environments. So when you specify each of the providers, you associate defaults for each one that matches the expectations of the provider type. There will frequently be one ParameterEntry element in the Parameters.xml file for each directive in the Manifest.xml file. The scope of a ProviderPath entry refers to the type of provider being parameterized. The match specifies a regular expression that uniquely identifies the value of the "path" attribute for that provider element. The table below describes the substitutions for each of the providers that are allowed in the manifest. TextFilePosition ParameterEntry type There may be times when using a regular expression to replace a text string in a file may not be precise enough. For example, there may be a string that you want to replace in one portion of the file but not in another. Or you may not be able to use a regular expression to uniquely identify the string to be changed. For those situations, you can use the TextFilePosition ParameterEntry type. The scope for a TextFilePositionEntry type is defined the same way as for TextFile, using a regular expression to identify the target file(s). The match is specified as a series of three integers separated by semicolons (;). The first integer specifies the line number in the file. The second integer specifies the number of characters from the beginning of the line. The third integer specifies the number of characters to replace. You do need to be careful when specifying your target, as the parameter replacement will extend beyond the end of a line if the match numbers specify a target that would include end-of-line characters within the target length. For example, you could specify that the database user name from the TextFile example also needs to be included in a configuration file. The target in this file cannot be uniquely identified with a regular expression. Using TextFilePosition, we can target the specific string in the file based on its location instead of its content. <parameter name="dbUsername" defaultValue="appUser" tags="MySQL"> <parameterEntry type="TextFile" scope="install.sql" match="PlaceholderForDbUsername" /> <parameterEntry type="TextFilePosition" scope="application\\config.php" match="22;20;12" /> </parameter> Parameter Tags Parameter Tags are used to tell various UIs how to display and use parameters. Some of these tags are mandatory in certain situations. The descriptions that follow specify when specific tags are mandatory and how the tags should be used. iisapp- This identities a parameter as the application path to install the application. The defaultValue will be displayed by most installers. The defaultValue should be set to something like "Default Web Site/application1", where "Default Web Site" is the default Web site for the server and "application1" is the subdirectory under that Web site. 
Web PI and other installers will use this data to suggest a Web site location for the installation when they don't already know where the user wants to install the application. This is a required tag. There must be at least one parameter that has this tag and that specifies the iisApp provider as its target. Note On IIS 5.1, the Web site portion of this will always be "Default Web Site". The user will be able to select a directory under the default for the Web site to keep applications separate. - Hidden- A Hidden parameter is not shown to the user as part of the installation UI. A Hidden parameter must have a defaultValue set. These parameters are used either to set a hard-coded default value or to set a computed default value. Hard-coded defaults are sometimes used when establishing a parameter for future use. Computed values are used to construct a parameter's value from other parameters. When constructing computed values, you can refer to other parameters by putting the other parameter name surrounded by {}s in the place you want the value. Please refer to the "Connection String" parameter in the example for a common usage of this tag. - SQL or MySQL- These parameters are used in relation to a specific database. If the manifest contains parameters for both SQL and MySQL, the UI can choose which database to use and the user will only be presented with parameters relevant for that database. - Password- This identifies a field that will be used as a password that is already known. The UI will hide the value of that password. - New- When used together with the Password tag, this identifies a field that will be used to set a new password. The UI will hide the value of the password and ask the user to confirm it (for example, "New,Password"). - dbUsername, dbUserPassword, dbAdminUsername, dbAdminPassword, dbServer, dbName- Some UIs that install application packages will handle the database creation themselves. In the case where the user already has a database created, the UIs will seamlessly hide and fill in the administrative credentials correctly to avoid requiring the user to enter data two times. These tags identify the parameters that must be modified if the UI will handle database creation. > [!NOTE] > If there is a SQL or MySQL provider in the Manifest.xml, then there must be six parameters in the Parameters.xml file, and each parameter must have one of these tags. - SQLConnectionString, MySQLConnectionString- This identifies that a field (usually a Hidden one) is being used as a connection string to a database. Some UIs will use the connection string in conjunction with the dbXxxx tags above to display a specific dialog box. - Validate- Validate is only allowed when specified with one of the connection string tags. Validate specifies a connection string that must be valid for the installation to succeed and for any SQL scripts to run as part of the installation. UIs will have a choice on how they want to implement this function. The Web PI will validate the database credentials before allowing the user to proceed through the rest of the installation. - VistaDB, SQLite, FlatFile - These tags identify a parameter that is used with these flat file data types. There is no corresponding provider in the Manifest.xml file for these. The WebPI and other UIs will recognize these tags in the Parameters.xml file. 
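Tying a few of these tags together, a database-server parameter for a MySQL-backed application might be declared as in the sketch below; the parameter name, default value, and placeholder string are illustrative, following the placeholder convention used in the earlier examples:

<parameter name="Database Server" defaultValue="localhost" tags="MySQL,dbServer">
  <parameterEntry type="TextFile" scope="install.sql" match="PlaceholderForDbServer" />
</parameter>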
If there is more than one type of database tag (for example, SQL, MySQL, SQLite, and VistaDB all in the same Parameters.xml file), the UI will present the user with a choice of database engines for the application. When the user selects the database engine, all parameters that are tagged for a different engine and not the selected one will be bypassed. Parameter Validation There are several types of parameter validation that are available. If none of these are specified, the user is presented with a simple text box to enter the parameter's value: - AllowEmpty - Most UIs will require a value for all parameters that are not hidden. You can specify a validation type of "AllowEmpty" to instruct the UI that a blank or empty value is acceptable. AllowEmpty can be used in conjunction with any of the other parameter validation types, or on its own. The syntax for AllowEmpty is: <parameter name="AllowEmptyParameter"> <parameterValidation type="AllowEmpty" /> </parameter> Boolean- Boolean parameters are simple True / False questions. Depending on the UI, the user might be presented with a check box or two option buttons to select their choice. Booleans replace values in the same way as other parameters. For Booleans, the replacement value is either "True" or "False". If you need to have Boolean values other than "True" or "False", use an enumeration with only two values. The syntax for Booleans is: <parameter name="BooleanParameter"> <parameterValidation type="Boolean" /> </parameter> Enumeration- Enumeration allows you to limit the user's input to a list of discrete possible values. Most UIs will implement this as a drop-down list box, where the user will have the ability to choose one value from the list. Any whitespace in the validationString will be included as part of the possible values. Therefore, there should be no whitespace on either side of a comma, unless you want that whitespace to be included in the parameter substitution. The syntax for Enumeration is: <parameter name="Enumeration Parameter"> <parameterValidation type="Enumeration" validationString="value1,value2,value3,value4" /> </parameter> Currently, there is no way to escape a comma (,) so that it may be included as part of one of the values of an enumeration. Regular Expression- With Regular Expression validation, the user is presented with a simple text box the way a non-validated parameter would be. Then, when the user goes to submit the form and move on to the next part of the installation, the entry in the text box will be compared to the validationString in the RegularExpression. For more information about specifying a regular expression, please refer to the Microsoft® Developer Network (MSDN®) Regular Expression Language Elements or the Regular Expressions Info Web site. The syntax for Regular Expression validation is: <parameter name="RegEx Parameter"> <parameterValidation type="RegularExpression" validationString=".+" /> </parameter> For Regular Expressions, even if you specify a RegEx that allows for empty values, most UIs will require a value anyway. 
To specify that a value can be blank, or empty, add 'allowEmpty' to the type of the 'parameterValidation', as in the example below: <parameter name="RegEx Parameter2"> <parameterValidation type="RegularExpression,allowEmpty" validationString="^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}$" /> </parameter> The regular expressions in the two previous examples are explained here: - ".+" - Specifies that the parameter entry must contain at least one of any type of character. Alternatively, use ".*" to specify that the parameter entry may contain at least one of any type of character, which means that it can be empty. - "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+.[a-zA-Z]{2,4}$" - This is a simple e-mail address validation expression. It does not catch every incorrect address, but it catches most. This regular expression does not allow blank or empty parameters. However, since "allowEmpty" was included in the "type" attribute, empty parameters will be accepted. Database Credentials Validation If your application package contains a SQL Server or MySQL database provider, you can specify a set of database credentials to be validated by the UI prior to other steps in the installation process. Web PI will validate the tagged connection string when all of the parameters have been filled out for the application and the user presses the Next button. This validation consists of Web PI using the provided credentials to connect to the specified database and server. If the validation fails, the user will be informed and returned to the previous screen to update the database parameters as needed. Other UIs will provide some mechanism, such as a validate button or a validation dialog, to perform a similar function. The parameter to use for the validation is identified with the "Validate" tag. Parameter Internationalization Web PI and the Web Deployment Tool are available in versions localized for ten languages, including English and the following: - Japanese - Spanish - German - Italian - Korean - Russian - Chinese (China) - Chinese (Taiwan) If Web PI detects that the user's system default location is something other than "en", the user will be presented with parameters using the user's system locale, for every parameter that has a translation available. The WDT has translations for the "friendlyName" for each parameter, which is used as the parameter's caption. The WDT also has translations for the "description" for each parameter, which is displayed below the caption to inform the user of the purpose of the parameter. Certain parameters are translated automatically into one of those languages based on the user's system language. The parameters that are automatically translated are ones that have one of the tags in the table below. Note that if there is a "description" or "friendlyName" attribute present in the parameter, the translation of that parameter will not occur. - MySqlConnectionString - SqlConnectionString - DbAdminPassword - DbAdminUsername - DbName - DbServer - DbUsername - DbUserPassword - IisApp - SetAcl For the ten well-known parameters above, best practice is to use the parameter descriptions and friendly names that the WDT automatically generates. This leads to a more consistent experience for users and lets your application take advantage of additional languages as they are added. If necessary, specify your own translations for any of those parameters or for any parameters that do not have a default translation. 
This is done by providing alternate descriptions and friendlyNames in the parameter elements of the Parameters.xml file. The default description for a parameter is provided as one of the attributes of the parameter element, while the translations are provided as distinct elements. For example: <parameters> <parameter name="AppPath" friendlyName="Application Path" description="Full site path where you want to install your application (for example, Default Web Site/Application)." tags="iisApp" > <description culture="en">Full site path where you want to install your application (for example, Default Web Site/Application).</description> <description culture="de">Vollständiger Pfad, unter dem Sie Ihre Anwendung installieren möchten (z.B. Standardwebsite/Anwendung).</description> <friendlyName culture="en">Application Path</friendlyName> <friendlyName culture="de">Anwendungs Pfad</friendlyName> </parameter> </parameters> In the example, note: - The UI displays the description where the "culture" attribute matches the system locale. If there is no specific match, the UI will display the default description. - When only the language is specified (as in the case of "de" above), all cultures that include "de" as their language component will use that translation (for example, "de-DE" and "de-AT"). - iisApp is one of the 10 automatically translated parameters; do not specify translations for this parameter unless the default translations were insufficient for your needs. - Specify translations for both the "friendlyName" and the "description". If you do not specify a friendlyName, the UI will use the parameter name in all cases. For a table of common International Organization for Standardization (ISO) culture codes on MSDN, see the Table of Language Culture Names, Codes, and ISO Values Method [C++]. xxxx.sql An application package can have any number of SQL scripts that will be executed as part of the installation package. These SQL scripts can contain any valid commands for the specified database engine, including all DDL, DML, and stored procedures. For details about the WDT interaction with the database, see the article "Database Notes for packaging applications for use with the Web Application Gallery." Web.config A Web.config file can be placed at any level in an application's directory tree. To learn more about the Web.config file, see the IIS 7.0 Configuration Reference. Integration Samples Samples of Web App Gallery integration are available for reference. Note This article is based on information from "Application Packaging Guide for the Windows Web Application Gallery" by the IIS team, published on September 24, 2009. Links for Further Information - Regular Expression Language Elements. - XPath syntax. - Table of Language Culture Names, Codes, and ISO Values Method [C++]. Web App Gallery Integration Samples.
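To tie these pieces together, here is a minimal, hypothetical Parameters.xml sketch (the application name, file names, tag combinations and default values are illustrative, not taken from a real package; the remaining db* parameters would follow the same pattern):
<parameters>
  <parameter name="Application Path" defaultValue="Default Web Site/myapp" tags="iisApp">
    <parameterEntry type="ProviderPath" scope="iisApp" match="myapp" />
  </parameter>
  <parameter name="Database Username" defaultValue="appUser" tags="SQL,dbUsername">
    <parameterEntry type="TextFile" scope="install.sql" match="PlaceholderForDbUsername" />
  </parameter>
  <parameter name="Database Password" tags="SQL,dbUserPassword,Password,New">
    <parameterEntry type="TextFile" scope="install.sql" match="PlaceholderForDbPassword" />
  </parameter>
  <!-- Hidden, computed from the other parameters; validated before any SQL scripts run -->
  <parameter name="Connection String" defaultValue="Data Source={dbServer};Database={dbName};uid={dbUsername};Pwd={dbUserPassword};" tags="Hidden,SQLConnectionString,Validate">
    <parameterEntry type="XmlFile" scope="\\web.config$" match="//connectionStrings/add[@name='DefaultConnection']/@connectionString" />
  </parameter>
</parameters>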
https://docs.microsoft.com/en-us/iis/develop/windows-web-application-gallery/reference-for-the-web-application-package
2018-07-15T21:21:11
CC-MAIN-2018-30
1531676588972.37
[]
docs.microsoft.com
Overview Importing the relationship between the users and the CI allows you to add the users that use a CI. It is important to have previously added the users and the CI before doing this type of import. References Required Fields - CI – Text (125) - Name of CI to be related - User - The identifier of the user to be related - See section User identification method Configuration File (XML) The declaration of the source is done by indicating the CIUser value in the <Content> tag. <?xml version="1.0" encoding="utf-8" ?> <Sources> <Source Name="User CI Import"> <ConnectionString>Provider=Microsoft.ACE.OLEDB.12.0;Data Source=c:\Import\CI.xlsx;Extended Properties="Excel 12.0 Xml;HDR=YES";</ConnectionString> <ViewName>[Import CI User$]</ViewName> <UserIdentificationMethod>UserByWindowsUsername</UserIdentificationMethod> <ManageRelations>False</ManageRelations> <Content>CIUser</Content> </Source> </Sources> Information on Additional Tags To import CI users, the XML file used can contain 1 additional tag. This tag is not mandatory and if it's not specified, the default value will be used. User Identification Method In the XML file to import the relationship between the users and CI, it is possible to specify how the user will be found. This value becomes the unique key to the import. If this tag is not specified, the default value (UserByWindowsUsername) will be used. Permitted values for the UserIdentificationMethod tag: - UserByID: User employee number - UserByName: First and last name of user (in the following format John Smith) - UserByWindowsUsername (Default value): Windows username of the user To use this tag, add the following line to the XML file: <UserIdentificationMethod>VALUE</UserIdentificationMethod>
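For illustration only (the CI names and usernames below are invented), the [Import CI User$] worksheet referenced above would contain one row per relation, with the two required columns; because UserIdentificationMethod is UserByWindowsUsername, the User column holds Windows usernames:
CI            User
LAPTOP-0042   jsmith
LAPTOP-0042   adoe
PRINTER-2F    jsmith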
https://docs.octopus-itsm.com/en/articles/dataimporter-import-relations-between-users-and-ci
2018-07-15T20:56:57
CC-MAIN-2018-30
1531676588972.37
[array(['/sites/all/files/DataImporter - CI-User/RelationEntreCI_User_EN.png', None], dtype=object) array(['https://wiki.octopus-itsm.com/sites/all/files/checmark.png', None], dtype=object) ]
docs.octopus-itsm.com
A fully-featured inventory file can serve as the source of truth for your network. Using an inventory file, a single playbook can maintain hundreds of network devices with a single command. This page shows you how to build an inventory file, step by step. First, group your inventory logically. Best practice is to group servers and network devices by their What (application, stack or microservice), Where (datacenter or region), and When (development stage): Avoid spaces, hyphens, and preceding numbers (use floor_19, not 19th_floor) in your group names. Group names are case sensitive. This tiny example data center illustrates a basic group structure. You can group groups using the syntax [metagroupname:children] and listing groups as members of the metagroup. Here, the group network includes all leafs and all spines; the group datacenter includes all network devices plus all webservers. [leafs] leaf01 leaf02 [spines] spine01 spine02 [network:children] leafs spines [webservers] webserver01 webserver02 [datacenter:children] network webservers Next, you can set values for many of the variables you needed in your first Ansible command in the inventory, so you can skip them in the ansible-playbook command. In this example, the inventory includes each network device’s IP, OS, and SSH user. If your network devices are only accessible by IP, you must add the IP to the inventory file. If you access your network devices using hostnames, the IP is not necessary. [leafs] leaf01 ansible_host=10.16.10.11 ansible_network_os=vyos ansible_user=my_vyos_user leaf02 ansible_host=10.16.10.12 ansible_network_os=vyos ansible_user=my_vyos_user [spines] spine01 ansible_host=10.16.10.13 ansible_network_os=vyos ansible_user=my_vyos_user spine02 ansible_host=10.16.10.14 ansible_network_os=vyos ansible_user=my_vyos_user [network:children] leafs spines [servers] server01 ansible_host=10.16.10.15 ansible_user=my_server_user server02 ansible_host=10.16.10.16 ansible_user=my_server_user [datacenter:children] leafs spines servers When devices in a group share the same variable values, such as OS or SSH user, you can reduce duplication and simplify maintenance by consolidating these into group variables: [leafs] leaf01 ansible_host=10.16.10.11 leaf02 ansible_host=10.16.10.12 [leafs:vars] ansible_network_os=vyos ansible_user=my_vyos_user [spines] spine01 ansible_host=10.16.10.13 spine02 ansible_host=10.16.10.14 [spines:vars] ansible_network_os=vyos ansible_user=my_vyos_user [network:children] leafs spines [servers] server01 ansible_host=10.16.10.15 server02 ansible_host=10.16.10.16 [datacenter:children] leafs spines servers The syntax for variable values is different in inventory, in playbooks and in group_vars files, which are covered below. Even though playbook and group_vars files are both written in YAML, you use variables differently in each. In the ini-style inventory file, use the syntax key=value for variable values: ansible_network_os=vyos. In any file with the .yml or .yaml extension, including playbooks and group_vars files, you must use YAML syntax: key: value. In group_vars files, use the full keyname: ansible_network_os: vyos. In playbooks, use the short-form keyname, which drops the ansible prefix: network_os: vyos. As your inventory grows, you may want to group devices by platform. 
This allows you to specify platform-specific variables easily for all devices on that platform: [vyos_leafs] leaf01 ansible_host=10.16.10.11 leaf02 ansible_host=10.16.10.12 [vyos_spines] spine01 ansible_host=10.16.10.13 spine02 ansible_host=10.16.10.14 [vyos:children] vyos_leafs vyos_spines [vyos:vars] ansible_connection=network_cli ansible_network_os=vyos ansible_user=my_vyos_user [network:children] vyos [servers] server01 ansible_host=10.16.10.15 server02 ansible_host=10.16.10.16 [datacenter:children] vyos servers With this setup, you can run first_playbook.yml with only two flags: ansible-playbook -i inventory -k first_playbook.yml With the -k flag, you provide the SSH password(s) at the prompt. Alternatively, you can store SSH and other secrets and passwords securely in your group_vars files with ansible-vault. ansible-vault¶ The ansible-vault command provides encryption for files and/or individual variables like passwords. This tutorial will show you how to encrypt a single SSH password. You can use the commands below to encrypt other sensitive information, such as database passwords, privilege-escalation passwords and more. First you must create a password for ansible-vault itself. It is used as the encryption key, and with this you can encrypt dozens of different passwords across your Ansible project. You can access all those secrets (encrypted values) with a single password (the ansible-vault password) when you run your playbooks. Here’s a simple example. Create a file and write your password for ansible-vault to it: echo "my-ansible-vault-pw" > ~/my-ansible-vault-pw-file Create the encrypted ssh password for your VyOS network devices, pulling your ansible-vault password from the file you just created: ansible-vault encrypt_string --vault-id my_user@~/my-ansible-vault-pw-file 'VyOS_SSH_password' --name 'ansible_ssh_pass' If you prefer to type your ansible-vault password rather than store it in a file, you can request a prompt: ansible-vault encrypt_string --vault-id my_user@prompt 'VyOS_SSH_password' --name 'ansible_ssh_pass' and type in the vault password for my_user. The --vault-id flag allows different vault passwords for different users or different levels of access. The output includes the user name my_user from your ansible-vault command and uses the YAML syntax key: value, finishing with the message: Encryption successful This is an example using an extract from a YAML inventory, as the INI format does not support inline vaults: ... vyos: # this is a group in yaml inventory, but you can also do under a host vars: ansible_connection: network_cli ansible_network_os: vyos ansible_user: my_vyos_user ... To use inline vaulted variables with an INI inventory you need to store it in a 'vars' file in YAML format; it can reside in host_vars/ or group_vars/ to be automatically picked up or referenced from a play via vars_files or include_vars. To run a playbook with this setup, drop the -k flag and add a flag for your vault-id: ansible-playbook -i inventory --vault-id my_user@~/my-ansible-vault-pw-file first_playbook.yml Or with a prompt instead of the vault password file: ansible-playbook -i inventory --vault-id my_user@prompt first_playbook.yml To see the original value, you can use the debug module. Please note if your YAML file defines the ansible_connection variable (as we used in our example), it will take effect when you execute the command below. To prevent this, please make a copy of the file without the ansible_connection variable. 
cat vyos.yml | grep -v ansible_connection >> vyos_no_connection.yml ansible localhost -m debug -a var="ansible_ssh_pass" -e "@vyos_no_connection.yml" --ask-vault-pass Vault password: localhost | SUCCESS => { "ansible_ssh_pass": "VyOS_SSH_password" } Warning Vault content can only be decrypted with the password that was used to encrypt it. If you want to stop using one password and move to a new one, you can update and re-encrypt existing vault content with ansible-vault rekey myfile, then provide the old password and the new password. Copies of vault content still encrypted with the old password can still be decrypted with the old password. For more details on building inventory files, see the introduction to inventory; for more details on ansible-vault, see the full Ansible Vault documentation. Now that you understand the basics of commands, playbooks, and inventory, it’s time to explore some more complex Ansible Network examples.
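As a small sketch of the group_vars syntax described above (the file name follows the vyos group used in these examples), the same connection variables can live in a YAML file instead of the INI inventory:
# group_vars/vyos.yml -- full keynames, YAML key: value syntax
ansible_connection: network_cli
ansible_network_os: vyos
ansible_user: my_vyos_user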
https://docs.ansible.com/ansible/devel/network/getting_started/first_inventory.html
2018-11-13T01:42:59
CC-MAIN-2018-47
1542039741176.4
[]
docs.ansible.com
[ aws . servicecatalog ] Lists the resources associated with the specified TagOption. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. list-resources-for-tag-option is a paginated operation; when using --output text and the --query argument on a paginated response, the --query argument must extract data from the results of the following query expression: ResourceDetails. Synopsis: list-resources-for-tag-option --tag-option-id <value> [--resource-type <value>] [--page-size <value>] [--cli-input-json <value>] [--starting-token <value>] [--max-items <value>] [--generate-cli-skeleton <value>] Options: --tag-option-id (string) The TagOption identifier. --resource-type (string) The resource type. - Portfolio - Product Output: ResourceDetails -> (list) Information about the resources. (structure) Information about a resource. Id -> (string) The identifier of the resource. ARN -> (string) The ARN of the resource. Name -> (string) The name of the resource. Description -> (string) The description of the resource. CreatedTime -> (timestamp) The creation time of the resource. PageToken -> (string) The page token for the next set of results. To retrieve the first set of results, use null.
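For example (the TagOption identifier below is a placeholder), a typical call that lists the portfolios associated with a TagOption looks like this:
aws servicecatalog list-resources-for-tag-option \
    --tag-option-id tag-examp1eid123 \
    --resource-type Portfolio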
https://docs.aws.amazon.com/cli/latest/reference/servicecatalog/list-resources-for-tag-option.html
2018-11-13T00:52:04
CC-MAIN-2018-47
1542039741176.4
[]
docs.aws.amazon.com
[ aws . machinelearning . wait ] Wait until JMESPath query Results[].Status returns COMPLETED for all elements when polling with describe-evaluations. It will poll every 30 seconds until a successful state has been reached. This will exit with a return code of 255 after 60 failed checks. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. The waiter accepts the same filtering and sorting options as describe-evaluations, for example --sort-order (string): - asc - Arranges the list in ascending order (A-Z, 0-9). - dsc - Arranges the list in descending order (Z-A, 9-0). Results are sorted by FilterVariable . Possible values: - asc - dsc
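As a minimal illustration, the waiter is invoked like any other wait subcommand; the bare form below simply blocks until the polled evaluations report COMPLETED (filter arguments, if needed, would mirror those of describe-evaluations):
aws machinelearning wait evaluation-available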
https://docs.aws.amazon.com/cli/latest/reference/machinelearning/wait/evaluation-available.html
2018-11-13T00:58:02
CC-MAIN-2018-47
1542039741176.4
[]
docs.aws.amazon.com
PoolByteArray¶ Category: Built-In Types Description¶ Raw byte array. Contains bytes. Optimized for memory usage, can’t fragment the memory. Note that this type is passed by value and not by reference. Method Descriptions¶ - PoolByteArray PoolByteArray ( Array from ) Create from a generic array. - PoolByteArray compress ( int compression_mode=0 ) Returns a new PoolByteArray with the data compressed. Set the compression mode using one of File’s COMPRESS_* constants. - PoolByteArray decompress ( int buffer_size, int compression_mode=0 ) Returns a new PoolByteArray with the data decompressed. Set buffer_size to the size of the uncompressed data. Set the compression mode using one of File’s COMPRESS_* constants. - String get_string_from_ascii ( ) Returns a copy of the array’s contents as String. Fast alternative to PoolByteArray.get_string_from_utf8 if the content is ASCII-only. Unlike the UTF-8 function this function maps every byte to a character in the array. Multibyte sequences will not be interpreted correctly. For parsing user input always use PoolByteArray.get_string_from_utf8. - String get_string_from_utf8 ( ) Returns a copy of the array’s contents as String. Slower than PoolByteArray.get_string_from_ascii but supports UTF-8 encoded data.
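A short GDScript sketch (illustrative only; the compression constant name is taken from the Godot 3.x File class) showing how these methods combine:
var bytes = "hello world".to_utf8()                      # String -> PoolByteArray
var packed = bytes.compress(File.COMPRESSION_GZIP)       # smaller PoolByteArray
var restored = packed.decompress(bytes.size(), File.COMPRESSION_GZIP)
print(restored.get_string_from_utf8())                   # prints "hello world"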
https://godot.readthedocs.io/en/latest/classes/class_poolbytearray.html
2018-11-13T01:32:52
CC-MAIN-2018-47
1542039741176.4
[]
godot.readthedocs.io
Mongo DB Enabling MongoDB In Your Workspace Mongo DB databases are available to add as a KintoBlock dependency. In order to add a database to a project, you must first enable it in your workspace. Go to Services via the sidebar, and select Mongo DB. Details of the service are available underneath. If you select this service, you cannot disable it, but this service is currently free and you will not be charged without notification. If you would like to use the service please select Enable MongoDB. Once you have enabled the service it is possible to add it to a KintoBlock by enabling the toggle on the KintoBlock Manage Page. Enabling MongoDB in Applications You can also enable services on the application manage page. If a KintoBlock in your application requires a MongoDB database you must make sure it is enabled either here or via the services page. Referring To a Database In Your Code The MongoDB Connection String will be provided behind the scenes, and in your code you can refer to it as the environment variable MONGO_CONNECTION_STRING. This is an example of a connection string: mongodb://user:password@<mongo-host>:27017/auth-database Points To Note If you add a database service to a KintoBlock that is already deployed, you will need to trigger a new build by adding a new commit. In the future there will be a button to trigger a new build. You have one database available per Application. Once you have enabled a service it is available to every account in the workspace. Allow at least one minute between activating MongoDB and using MongoDB.
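As a rough sketch (assuming a Node.js KintoBlock and the standard mongodb driver, neither of which is mandated by this page), the injected environment variable can be consumed like this:
const { MongoClient } = require('mongodb');

async function getDb() {
  // MONGO_CONNECTION_STRING is provided by the platform at runtime
  const client = await MongoClient.connect(process.env.MONGO_CONNECTION_STRING);
  return client.db(); // database name comes from the connection string itself
}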
http://docs.kintohub.com/docs/mongo-db
2018-11-13T00:45:42
CC-MAIN-2018-47
1542039741176.4
[array(['/docs/assets/services_sidebar_screenshot.png', 'Screenshot - Docs - Examples'], dtype=object) array(['/docs/assets/kintoblock_services_screenshot.png', 'Screenshot - Docs - Examples'], dtype=object) ]
docs.kintohub.com
Logging¶ Logging - class DIRAC.FrameworkSystem.private.standardLogging.Logging. Logging(father=None, fatherName='', name='', customName='')¶ Logging is a wrapper of the logger object from the standard “logging” library which integrates some DIRAC concepts. It is the equivalent of the old gLogger object. It is used like an interface to the logger object of the “logging” library. Its purpose is to replace transparently the old gLogger object in the existing code in order to minimize the changes. In this way, each Logging embeds a logger of “logging”. It is possible to create subloggers, set and get the level of the embedded logger and create log messages with it. Logging could delegate the initialization and the configuration to a factory of the root logger, but it cannot because it has to wrap the old gLogger. Logging should not be instantiated directly. It is LoggingRoot which is instantiated and which instantiates Logging objects. __init__(father=None, fatherName='', name='', customName='')¶ Initialization of the Logging object. By default, ‘fatherName’ and ‘name’ are empty, because getChild accepts only strings and the first empty string corresponds to the root logger. Example: logging.getLogger(‘’) == logging.getLogger(‘root’) == root logger logging.getLogger(‘root’).getChild(‘log’) == root.log == log child of root getSubLogger(subName, child=True)¶ Create a new Logging object, child of this Logging, if it does not exist.
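A small usage sketch (the sublogger name is illustrative; gLogger is the LoggingRoot instance mentioned above):
from DIRAC import gLogger

# create a child Logging object and emit messages through it
log = gLogger.getSubLogger("MyAgent")
log.setLevel("debug")
log.info("agent started")
log.debug("configuration loaded")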
https://dirac.readthedocs.io/en/latest/CodeDocumentation/FrameworkSystem/private/standardLogging/Logging.html
2018-11-13T00:55:49
CC-MAIN-2018-47
1542039741176.4
[]
dirac.readthedocs.io
API Gateway 7.6.2 Authentication and Authorization Integration Guide About this guide API Gateway contains a set of message filters that directly or indirectly restrict access to resources or web services. Filters that directly control access include XML-signature verification, CA certificate chain verification, and SAML assertion verification. With these filters, policy decisions are made and enforced within API Gateway itself. Filters that indirectly control access offload the policy decision to an external access management system. With these filters, the policy decision is made by the external system but then enforced by API Gateway. Through these integrations, the API Gateway can authorize users, look up user attributes, and validate certificates against third-party Identity Management servers. This guide describes how to configure API Gateway to integrate with the following products: LDAP servers — see LDAP identity manager integration. CA SiteMinder — see CA SiteMinder integration. RSA Access Manager — see RSA Access Manager integration. Oracle Access Manager - see Oracle Access Manager 11gR2 integration. Oracle Entitlements Server - see Oracle Entitlements Server 11g and 11gR2 integration. Who should read this guide The intended audience for this guide is personnel in charge of the technical integration of an API Gateway solution.
https://docs.axway.com/bundle/APIGateway_762_AuthAuthIntegrationGuide_allOS_en_HTML5/page/Content/AuthAuthIntegrationTopics/about_auth_integrtn.htm
2018-11-13T00:11:53
CC-MAIN-2018-47
1542039741176.4
[]
docs.axway.com
DomainKeys Protection Enabling DomainKeys Protection on the Server To enable DomainKeys spam protection on your server, go to Tools & Settings > Mail Server Settings (in the Mail group) and scroll down to the DomainKeys spam protection section. The two options found there enable you to manage DomainKeys on your server: - Allow signing outgoing mail. This enables customers to switch on the DomainKeys signing of outgoing email on a per-domain basis. It does not automatically enable signing of outgoing email messages. - Verify incoming mail. This enables DomainKeys checking for all incoming email. Messages sent from domains supporting DomainKeys signing are checked and, if the check fails, marked with the DomainKey-Status: 'bad' header. Messages sent from domains that do not support DomainKeys are accepted without being checked. Note that both options can be selected independently of each other. You can choose to enable signing of outgoing email, checking of incoming email, or both. Select the checkboxes corresponding to the chosen options and click OK. Enabling DomainKeys Email Signing for a Domain To enable DomainKeys signing of outgoing email for an individual domain, open the corresponding subscription for managing, go to Websites & Domains > Mail Settings, select the Use DomainKeys spam protection system to sign outgoing email messages checkbox and click OK. Note: DomainKeys signing will function only for domains that use the Plesk DNS server.
https://docs.plesk.com/en-US/12.5/administrator-guide/mail/antispam-tools/domainkeys-protection.59433/
2018-11-13T01:00:07
CC-MAIN-2018-47
1542039741176.4
[]
docs.plesk.com
Project Management for Non-Project Managers (2018 Estimating Resource Needs) Start Date : September 12, 2018 End Date : September 12, 2018 Time : 9:00 am to 11:00 am Phone : 800-447-9407 Location : 161 Mission Falls Lane, Suite 216, Fremont, CA 94539, USA
http://meetings4docs.com/event/project-management-for-non-project-managers-2018-estimating-resource-needs/
2018-11-13T00:43:33
CC-MAIN-2018-47
1542039741176.4
[]
meetings4docs.com
Running uWSGI in a Linux CGroup¶ Linux cgroups are an amazing feature available in recent Linux kernels. They allow you to “jail” your processes in constrained environments with limited CPU, memory, scheduling priority, IO, etc. Note uWSGI has to be run as root to use cgroups. uid and gid are very, very necessary. Enabling cgroups¶ First you need to enable cgroup support in your system. Create the /cgroup directory and add this to your /etc/fstab: none /cgroup cgroup cpu,cpuacct,memory Then mount /cgroup and you’ll have jails with controlled CPU and memory usage. There are other Cgroup subsystems, but CPU and memory usage are the most useful to constrain. Let’s run uWSGI in a cgroup: ./uwsgi -M -p 8 --cgroup /cgroup/jail001 -w simple_app -m --http :9090 Cgroups are simple directories. With this command your uWSGI server and its workers are “jailed” in the ‘cgroup/jail001’ cgroup. If you make a bunch of requests to the server, you will see usage counters – cpuacct.* and memory.* files – in the cgroup directory growing. You can also use pre-existing cgroups by specifying a directory that already exists. A real world example: Scheduling QoS for your customers¶ Suppose you’re hosting apps for 4 customers. Two of them are paying you $100 a month, one is paying $200, and the last is paying $400. To have a good Quality of Service implementation, the $100 apps should get 1/8, or 12.5% of your CPU power, the $200 app should get 1/4 (25%) and the last should get 50%. To implement this, we have to create 4 cgroups, one for each app, and limit their scheduling weights. ./uwsgi --uid 1001 --gid 1001 -s /tmp/app1 -w app1 --cgroup /cgroup/app1 --cgroup-opt cpu.shares=125 ./uwsgi --uid 1002 --gid 1002 -s /tmp/app2 -w app1 --cgroup /cgroup/app2 --cgroup-opt cpu.shares=125 ./uwsgi --uid 1003 --gid 1003 -s /tmp/app3 -w app1 --cgroup /cgroup/app3 --cgroup-opt cpu.shares=250 ./uwsgi --uid 1004 --gid 1004 -s /tmp/app4 -w app1 --cgroup /cgroup/app4 --cgroup-opt cpu.shares=500 The cpu.shares values are simply computed relative to each other, so you can use whatever scheme you like, such as (125, 125, 250, 500) or even (1, 1, 2, 4). With CPU handled, we turn to limiting memory. Let’s use the same scheme as before, with a maximum of 2 GB for all apps altogether. ./uwsgi --uid 1001 --gid 1001 -s /tmp/app1 -w app1 --cgroup /cgroup/app1 --cgroup-opt cpu.shares=125 --cgroup-opt memory.limit_in_bytes=268435456 ./uwsgi --uid 1002 --gid 1002 -s /tmp/app2 -w app1 --cgroup /cgroup/app2 --cgroup-opt cpu.shares=125 --cgroup-opt memory.limit_in_bytes=268435456 ./uwsgi --uid 1003 --gid 1003 -s /tmp/app3 -w app1 --cgroup /cgroup/app3 --cgroup-opt cpu.shares=250 --cgroup-opt memory.limit_in_bytes=536870912 ./uwsgi --uid 1004 --gid 1004 -s /tmp/app4 -w app1 --cgroup /cgroup/app4 --cgroup-opt cpu.shares=500 --cgroup-opt memory.limit_in_bytes=1067459584
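To sanity-check the limits after launching (paths follow the /cgroup mount point used above), you can read the control files directly:
cat /cgroup/app4/cpu.shares              # should print 500
cat /cgroup/app4/memory.limit_in_bytes   # should print 1067459584
cat /cgroup/app4/tasks                   # PIDs of the jailed master and workers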
http://uwsgi.readthedocs.io/en/latest/Cgroups.html
2017-06-22T20:37:23
CC-MAIN-2017-26
1498128319902.52
[]
uwsgi.readthedocs.io
Constructing a Notification Object Unless you want to send a basic generic message to all registered devices, you need to construct a JSON notification object to pass as a payload to the send request. You can include the following sections in the push notification object: - Filter expression: The Filter option is a generic field that allows you to target a subset (segment) of your devices with the push notification. For detailed discussion of filters and targeting, see Targeting Push Notifications. - Generic options: Options that apply to all types of devices. Push Notification Object Field Reference provides a list of all supported generic options. - Vendor-specific options: Options that apply only to certain devices such as iOS or Android. Push Notification Object Field Reference provides a list of all supported vendor-specific options. The next JSON object includes all three sections: Filter, the UseLocalTime generic option, and a section for each supported device vendor: { "Filter": "{\"Parameters.City\":\"London\"}", "UseLocalTime": true, "Android": { "data": { "title": "Push Title", "message": "Push message for Android", "customData": "Custom data for Android" } }, "IOS": { "aps": { "alert": "Push message for iOS", "badge": "+1", "sound": "default", "category": "MY_CATEGORY" }, "customData": "Custom data for iOS" }, "WindowsPhone": { "Toast": { "Title": "Push title", "Message": "Push message for Windows Phone" } }, "Windows": { "Toast": { "template": "ToastText01", "text": ["Push message for Windows"] } } } Telerik Platform uses the information to send the appropriate push notification to each targeted device. If you fail to specify options for a given vendor, its devices are served from the generic options. For example, the next object sends a generic message to all devices except for iOS devices. Note that in this case the Message option is required. { "Message": "This text will appear on all devices except for iOS devices", "IOS": { "aps": { "alert": "This text will override the generic Message on iOS", "badge": "+1", "sound": "default", "category": "MY_CATEGORY" }, "customData": "Custom data for iOS" } }
http://docs.telerik.com/platform/backend-services/javascript/push-notifications/send-and-target/push-notfication-object
2017-06-22T20:34:56
CC-MAIN-2017-26
1498128319902.52
[]
docs.telerik.com
On demand vassals (socket activation)¶ Inspired by the venerable xinetd/inetd approach, you can spawn your vassals only after the first connection to a specific socket. This feature is available as of 1.9.1. Combined with --idle/--die-on-idle options, you can have truly on-demand applications. When on demand is active for a particular vassal, the emperor won’t spawn it on start (or when its config changes), but will rather create a socket for that vassal and monitor whether anything connects to it. At the first connection, the vassal is spawned and the socket passed as the file descriptor 0. File descriptor 0 is always checked by uWSGI so you do not need to specify a --socket option in the vassal file. This works automagically for uwsgi sockets; if you use other protocols (like http or fastcgi) you have to specify it with the --protocol option. Important If you define in your vassal config the same socket as used by the emperor for the on demand action, the vassal will override that socket file. That could lead to unexpected behaviour, for example on demand activation of that vassal will work only once. On demand vassals with filesystem-based imperial monitors¶ For filesystem-based imperial monitors, such as dir:// or glob://, defining on demand vassals involves defining one of three additional settings for your emperor: --emperor-on-demand-extension <ext>¶ this will instruct the Emperor to check for a file named <vassal>+<ext>; if the file is available it will be read and its content used as the socket to wait for: uwsgi --emperor /etc/uwsgi/vassals --emperor-on-demand-extension .socket supposing a myapp.ini file in /etc/uwsgi/vassals, a /etc/uwsgi/vassals/myapp.ini.socket will be searched for (and its content used as the socket name). Note that myapp.ini.socket isn’t a socket! This file only contains the path to the actual socket (tcp or unix). --emperor-on-demand-directory <dir>¶ This is a less-versatile approach supporting only UNIX sockets. Basically the name (without extension and path) of the vassal is appended to the specified directory + the .socket extension and used as the on-demand socket: uwsgi --emperor /etc/uwsgi/vassals --emperor-on-demand-directory /var/tmp using the previous example, the socket /var/tmp/myapp.socket will be automatically bound. --emperor-on-demand-exec <cmd>¶ This is the most flexible solution for defining the socket for the on demand action and (very probably) you will use it in very big deployments. Every time a new vassal is added, the supplied command is run, passing the full path to the vassal config file as the first argument. The STDOUT of the command is used as the socket name. Using on demand vassals with other imperial monitors¶ For some imperial monitors, such as pg://, mongodb:// and zmq://, the socket for on demand activation is returned by the imperial monitor itself. For example, for pg://, if the executed database query returns more than 5 fields, the 6th field will be used as the socket for on demand activation. Check Imperial monitors for more information. For some imperial monitors, such as amqp://, socket activation is not possible yet. Combining on demand vassals with --idle and --die-on-idle¶ For truly on demand applications, you can add to each vassal the --idle and --die-on-idle options. This will allow suspending or completely turning off applications that are no longer requested. --idle without --die-on-idle will work pretty much like without the emperor, but adding --die-on-idle will give you the superpower of completely shutting down applications and returning to on-demand mode. 
The emperor will simply put the vassal back into on-demand mode when it dies gracefully, and turn it back on when there are any requests waiting on the socket. Important As mentioned before, you should never put in your vassal config file the socket that was passed to the emperor for on-demand mode. For unix sockets, the file path that the socket lives on will be rewritten with the new socket, but the old socket will still be connected to your emperor. The emperor will listen for connections on that old socket, but all requests will arrive at the new one. That means that if your vassal is shut down because of the idle state, it will never be put back on (the emperor won't receive any connections on the on-demand socket). For tcp sockets, that can cause each request to be handled twice.
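Putting the pieces together, a minimal illustrative setup with the extension-based monitor could look like this (file names and the socket path are examples):
# emperor: watch the vassal directory and look for <vassal>.socket files
uwsgi --emperor /etc/uwsgi/vassals --emperor-on-demand-extension .socket
# /etc/uwsgi/vassals/myapp.ini.socket contains only the socket to wait on
echo "/var/run/myapp.sock" > /etc/uwsgi/vassals/myapp.ini.socket
# /etc/uwsgi/vassals/myapp.ini -- the vassal itself, returning to on-demand mode when idle
[uwsgi]
protocol = uwsgi
processes = 2
idle = 60
die-on-idle = true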
http://uwsgi.readthedocs.io/en/latest/OnDemandVassals.html
2017-06-22T20:40:12
CC-MAIN-2017-26
1498128319902.52
[]
uwsgi.readthedocs.io
After you have the latest version of WordPress, save the downloaded Blossom Travel Pro theme file (blossom-travel-pro.zip) on your computer. The Blossom Travel Pro theme package includes: A WordPress Theme File (in .zip format)— This is the blossom-travel-pro.zip file that you upload to install the theme. - Log in to WordPress Dashboard - Go to Appearance > Themes - Click on Add New button - Click on Upload Theme - Click on “Choose File…”, select the “blossom-travel-pro.zip” file from your computer and click Open - Click Install Now - After the theme is installed, click on “Activate” to use the theme on your website
https://docs.blossomthemes.com/docs/blossom-travel-pro/theme-installation-and-activation/how-to-install-blossom-travel-pro-wordpress-theme/
2019-06-16T01:49:50
CC-MAIN-2019-26
1560627997508.21
[array(['https://docs.blossomthemes.com/wp-content/uploads/2019/05/Blossom-Travel-Pro-Installation.gif', 'Blossom Travel Pro Installation'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/05/Add-New-theme-blossom-themes.png', None], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/05/Upload-new-theme-blossom-themes.png', 'Upload new theme blossom themes'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/05/Click-on-Choose-file-blossom-themes.png', 'Click on Choose file blossom themes'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/05/Install-Now-Blossom-Travel-Pro.png', 'Install Now Blossom Travel Pro'], dtype=object) array(['https://docs.blossomthemes.com/wp-content/uploads/2019/05/Activate-Blossom-Travel-pro.png', 'Activate Blossom Travel pro'], dtype=object) ]
docs.blossomthemes.com
Upgrade Windows devices to Windows Pro Creators Update Note If you have Windows devices running Windows 7 Pro, Windows 8 Pro, or Windows 8.1 Pro, your Microsoft 365 Business subscription entitles you to a Windows 10 upgrade - you don't require a Product Key. See Set up Windows devices for Microsoft 365 Business users to complete setting up Windows 10 devices. See Set up mobile devices for Microsoft 365 Business users to complete setting up Android and iOS devices.
https://docs.microsoft.com/en-us/microsoft-365/business/upgrade-to-windows-pro-creators-update
2019-06-16T02:28:03
CC-MAIN-2019-26
1560627997508.21
[]
docs.microsoft.com
Launchpad bugs and StoryBoard stories are used to track known issues and defects in OpenStack software. Here are the different fields available in Launchpad bugs and StoryBoard tasks, and how we use them within the OpenStack project. Bugs go through multiple stages before final resolution. When you find a bug, you should file it against the proper OpenStack project. If you are a reviewer or a member of the project bug supervision team, you should also prioritize the bug (Importance: <Bug impact>, see above) and confirm the triage, which is left for a reviewer or bug supervisor to set: Status: Triaged. Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents.
https://docs.openstack.org/project-team-guide/bugs.html
2019-06-16T01:25:41
CC-MAIN-2019-26
1560627997508.21
[]
docs.openstack.org
tvOS Support SimpleMDM supports the following features for tvOS devices: - Device Actions: Push assigned apps, wipe device, restart device, re-enroll/unenroll device, update OS version. - Device Details: Device information, wireless MAC address, enrollment information, installed application inventory. - Apps: Volume Purchase Program (VPP), Enterprise tvOS app deployment. - Configurations: App Restrictions, Global HTTP Proxy, Home Screen Layout, Single App Lock, OS Auto Update Policy. All require tvOS supervision. - Enrollment: URL, Apple Configurator, and DEP. - Rules & Quarantine
https://docs.simplemdm.com/article/91-tvos-support
2019-06-16T01:07:33
CC-MAIN-2019-26
1560627997508.21
[]
docs.simplemdm.com
Breaking: #80628 - Extension rtehmlarea moved to TER¶ See Issue #80628 Description¶ The legacy extension EXT:rtehtmlarea has been removed from the TYPO3 CMS core and is only available as a TER extension. Impact¶ The new extension EXT:rte_ckeditor is loaded by default. If you need features of the old rtehtmlarea extension, you have to install EXT:rtehtmlarea from TER. An upgrade wizard can do this for you in the upgrade process of the install tool. If you have allowed images in RTE, you should install the rtehtmlarea extension, as the ckeditor extension does not support images in RTE. Affected Installations¶ Most installations are not affected. Instances are only affected if a loaded extension has a dependency on the EXT:rtehtmlarea extension, or if the instance has used special plugins.
https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/8.7/Breaking-80628-ExtensionRtehmlareaMovedToTER.html
2019-06-16T01:50:54
CC-MAIN-2019-26
1560627997508.21
[]
docs.typo3.org
After setting up the ESB samples, you can run them using the sample clients. The sample clients can be executed from the <ESB_HOME>/samples/axis2Client directory by specifying the relevant ant command. You can execute ant from the <ESB_HOME>/samples/axis2Client directory, in order to view the available sample clients and some of the sample options used to configure them. This section describes each of the sample clients. Note You can apply a WS-Policy to the request using the policy property in order to enforce QoS aspects such as WS-Security on the request. The policy specifies details such as timestamps, signatures, and encryption. Setting the transport URL to the ESB ensures that any required mediation takes place: ant stockquote -Dtrpurl=<esb> Proxy client mode The client uses the prxurl as an HTTP proxy to send the request. Therefore, by setting the prxurl to the ESB, the client can ensure that the message would reach the ESB for mediation. The client can optionally set a WS-Addressing EPR if required. ant stockquote -Dprxurl=<esb> Note The JMS client assumes the existence of a default ActiveMQ (5.2.0) installation on your local machine in order to run some of the JMS samples.
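For instance, assuming the ESB is running locally on its default HTTP port 8280 (an assumption, not something this page states), the placeholders could be filled in as:
ant stockquote -Dtrpurl=http://localhost:8280/
ant stockquote -Dprxurl=http://localhost:8280/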
https://docs.wso2.com/display/ESB490/Using+the+Sample+Clients
2019-06-16T00:59:36
CC-MAIN-2019-26
1560627997508.21
[]
docs.wso2.com
librbd Settings¶ See Block Device for additional details. Cache Settings¶ Read-ahead Settings¶ RBD Features¶ RBD supports advanced features which can be specified via the command line when creating images or the default features can be specified via Ceph config file via ‘rbd_default_features = <sum of feature numeric values>’ or ‘rbd_default_features = <comma-delimited list of CLI values>’ Layering Striping v2 Exclusive locking Object map Fast-diff Deep-flatten Journaling Data pool Operations Migrating RBD QOS Settings¶ RBD supports limiting per image IO, controlled by the following settings. rbd qos iops limit rbd qos bps limit rbd qos read iops limit rbd qos write iops limit rbd qos read bps limit rbd qos write bps limit rbd qos iops burst rbd qos bps burst rbd qos read iops burst rbd qos write iops burst rbd qos read bps burst rbd qos write bps burst rbd qos schedule tick min
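As an illustrative ceph.conf fragment (the feature list and throttle values are arbitrary examples), the defaults and QoS settings listed above can be set on the client side like this:
[client]
rbd default features = layering,exclusive-lock,object-map,fast-diff,deep-flatten
rbd qos iops limit = 2000
rbd qos iops burst = 4000
rbd qos read bps limit = 104857600   # 100 MiB/s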
http://docs.ceph.com/docs/master/rbd/rbd-config-ref/
2019-06-16T01:48:21
CC-MAIN-2019-26
1560627997508.21
[]
docs.ceph.com
Your organization profile gives you a chance to share information about your entire organization, including its mission. This will help people learning about your organization's service year opportunities see how those fit in with your organization's broader mission. Before you can submit a position for certification, you will need to complete your profile. The basic sections you'll complete in your organization profile are: Organization Details: This section mostly includes information that will be added to your public profile, including: A logo and cover photo, mission statement, description of your organization's overall work and impact, your organization type, and address. Employees & Financial Data: This section includes general information about your organization's size and financial information. The fields you complete in this section will not be included in your public profile. If you are adding AmeriCorps positions, you do not need to complete this section. Submitting your profile: Your organization profile will be submitted to our team when you submit a position for certification. Until you have a certified position, your profile will not be public on ServiceYear.org. Learn more about adding a position to become certified.
http://docs.serviceyear.org/articles/694659-organization-profile
2019-06-16T01:44:27
CC-MAIN-2019-26
1560627997508.21
[]
docs.serviceyear.org
FileMaker Server When looking at FileMaker Server, you'll see any tables you've mapped in DayBack's settings: learn how to map FileMaker tables. Salesforce Objects When you're looking at Salesforce sources, you'll see your Tasks, Events, and Campaigns by default, and you can customize these mappings or add your own Salesforce objects to the calendar.
https://docs.dayback.com/article/133-calendar-sources
2019-06-16T01:19:41
CC-MAIN-2019-26
1560627997508.21
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568d5975c69791436155c1b3/images/5b0aea662c7d3a2f9011f394/file-p9yrCNohkp.png', None], dtype=object) ]
docs.dayback.com
Keeping All the Acronyms Straight As you read through this guide, you’ll notice a variety of acronyms and ecommerce terminology that may be unfamiliar if you're relatively new to ecommerce design. Let’s review some of these conventions now so that you’ll be well-equipped going forward. There is no way to predict where the shopper will land on the PWA experience (more on this later). They may be directed from an email newsletter, they may follow a link from a social media newsfeed, or perhaps they will type in the URL directly; they could begin their journey at any page! Even though shoppers will not always conform to a specific pattern, we typically refer to the “buyer’s journey” as the act of navigating through a particular flow, namely Home > Category Listing Page (CLP) > Product Listing Page (PLP) > Product Detail Page (PDP) > Cart > Checkout. Let’s review each of these core templates. An ecommerce homepage is usually reserved for displaying promotions and recent news and developments. Additionally, it should offer a clear overview of the site structure, and give shoppers a launchpad into products and offerings. It’s crucial that the homepage allows users to quickly infer the purpose of the site, in addition to learning the kinds of products available. This is usually accomplished with a visual category list navigation. Category Listing Page (CLP) The Category Listing Page (CLP) shows a list of product categories. Often in ecommerce sites, products are organized into a tree structure; for example, a running shoe may be categorized in the following manner: Women > Footwear > Sports. A CLP can be top-level or a sub-category. It is different from a PLP, described next, as it does not primarily showcase products. Key Interactions: - Can sort and filter product results. Product Listing Page (PLP) The Product Listing Page (PLP) shows all products available within a certain category. A PLP is a sub-category. It is different from a CLP as it exists to showcase products, not additional sub-categories. Key Interactions: - View product variations (e.g. colour or size) with quick preview. - Sort and filter product results. - Buy products instantly (Quick Pay with Apple Pay / Android Pay). Product Detail Page (PDP) The Product Detail Page, commonly known as a PDP, is a page or screen that offers full details on a specific product, along with purchasing options and controls to add to cart, wishlist, or buy immediately. PDPs are traditionally the primary purchasing interface for a product. They're often an endpoint for PLPs, search results, and sales tactics such as sales promotions, or push notifications. Key Interactions: - Scroll through product images in an Image Carousel. - Zoom in on product images with the Product Zoom pattern. - Disclose product details. - Select Product Options. - Add product to a wishlist. - Add a product to cart. - Buy product immediately (Quick Pay with Apple Pay / Android Pay). Cart The cart (often named “bag” or “shopping bag”) is a listing of all items the shopper has reserved for purchasing, along with functions to checkout. The cart is typically accessible as a singular template (e.g. /cart), but sometimes available from every page/screen as a mini-cart. Key Interactions: - Interactions are variable and depend on the customer’s business logic. - Adjust quantity of items in cart. - Adjust options for items in cart. - Remove items from cart. - Add a promotional code. - Select shipping option. - Choose gift wrapping. - Begin checkout flow with various payment options. 
Checkout The checkout flow is arguably the most critical area of an ecommerce website or app — in fact, optimizing a checkout flow design has been shown by the Baymard Institute to increase conversion rates by up to 35.2%. (NOTE: Appleseed, Jamie, Christian Holst, Thomas Grønne, Edward Scott, Lauryn Smith, and Christian Vind. “ecommerce Usability: Checkout.” Accessed December 13, 2016.) The user experience of an ecommerce checkout also has lasting effects on brand perception and repeat business potential. The primary purpose of an ecommerce checkout is to collect billing and shipping information from the user to complete the purchase. Key Interactions: The majority of interactions within a Checkout are form input fields such as: - Full Name - Billing Address - Shipping Address - Phone Number - Credit Card Details The confirmation action is the final critical action required by the user to complete the purchase.
https://docs.mobify.com/design/getting-started/mobile-ecommerce-conventions/
2019-06-16T01:08:02
CC-MAIN-2019-26
1560627997508.21
[array(['images/design.001.jpeg', 'Major templates of a mobile ecommerce experience'], dtype=object) array(['images/home-example.png', 'Example homepage layout'], dtype=object) array(['images/plp-example.png', 'Merlins Potions PLP page'], dtype=object) array(['images/pdp-example.png', 'A product detail page example'], dtype=object) array(['images/cart-example.png', 'Examples of shopping carts'], dtype=object) ]
docs.mobify.com
Note: The Designing App-Like Patterns series is intended for interaction designers. This series will show you how to use the Mobify Platform to design performant shopping experiences, with app-like user experience best practices. Introduction Performant product loading is a strategy that can help you improve the user's perception of how quickly your Progressive Web App (PWA) loads. This strategy uses a combination of lazy loading and "Load More" buttons to alter the user's perception of speed. Lazy loading is a technique that delays rendering of off-screen content until it’s actually needed. This technique has many benefits. It reduces initial load times, speeds up the perceived performance, and decreases overall data usage. In the Mobify SDK, you can use the LazyLoader component to implement a lazy loading effect. "Load More" buttons are buttons located at the bottom of the loaded content which will load more content in when pressed. Appropriate uses for lazy loading Lazy loading can be applied: - To any large file size assets, such as images, to speed up the initial page load. - On long scrolling pages with many instances of similar items, such as search results pages or product list pages. - Alongside a "Load More" button on product list pages as an alternative to pagination for loading the next set of products. Best practices Use lazy loading for large assets Lazy loading is recommended for all large assets, such as images, that occur below the initial page load view. This will speed up the initial page load. Use lazy loading for long scrolling pages Use lazy loading on long scrolling pages with many instances of similar items, such as search results pages or product listing pages. With long scrolling pages, be cautious about the number of products you choose to display on one page. A never-ending page of results can seem overwhelming, and it’s difficult to navigate. Combine lazy loading with a "Load More" button Research by Baymard has shown that users generally browsed more products on websites with lazy loading and a "Load More" button. According to Baymard, it’s best to implement the following design pattern: "display 10–30 products on initial page load and then to lazy-load another 10–30 products until reaching 50–100 products – then display a “Load More” button. Once clicked, another 10–30 products are loaded in, resuming the lazy-loading until the next 50–100 products have loaded, at which point the 'Load more' button once again appears." The lazy loading that comes in Mobify’s SDK can be used to achieve this design. We suggest applying this best practice from Baymard if your ecommerce site has a long list of products in one category, with many visual product images. As stated earlier, be cautious of displaying too many products on one page. It can be daunting for your user to browse through an infinite page of results.
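For readers curious about the underlying mechanics, here is a generic browser-side sketch of the lazy-loading idea (plain IntersectionObserver code, not the Mobify SDK's LazyLoader API):
// Defer loading of any <img data-src="..."> until it approaches the viewport
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;   // swap in the real image URL
      observer.unobserve(img);     // load each image only once
    }
  });
}, { rootMargin: '200px' });       // begin loading shortly before it scrolls into view

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));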
https://docs.mobify.com/design/performant-product-loading-strategy/
2019-06-16T01:32:09
CC-MAIN-2019-26
1560627997508.21
[array(['images/lancome-scroll.gif', 'image_tooltip alt_text'], dtype=object) array(['images/load-more-button.png', 'image_loadmorebutton_bestpractice Load More Button'], dtype=object)]
docs.mobify.com
The Cable GuyDNS Enhancements in Windows Server 2008 Joseph Davies This article is based on a prerelease version of Windows Server 2008. All information herein is subject to change. Microsoft has included a Domain Name System (DNS) Server service in versions of Windows Server since Windows NT 4.0. DNS is a hierarchical, distributed database that contains mappings of DNS domain names to various types of data, such as IP addresses. With Windows Server 2008, the DNS Server service includes new background zone loading, enhancements to support IPv6, support for read-only domain controllers (RODCs), and the ability to host global single-label names. Background Zone Loading The DNS Server service in Windows Server® 2008 makes data retrieval faster by implementing background zone loading. In the past, enterprises with zones containing large numbers of records in Active Directory® experienced delays of up to an hour or more when the DNS Server service in Windows Server 2003 tried to retrieve the DNS data from Active Directory on restart. During these delays, the DNS server was unavailable to service DNS client requests for any of its hosted zones. To address this issue, the DNS Server service in Windows Server 2008 retrieves zone data from Active Directory in the background after it starts so that it can respond to requests for data from other zones. When the service starts, it creates one or more threads of execution to load the zones that are stored in Active Directory. Because there are separate threads for loading the Active Directory-based zones, the DNS Server service can respond to queries while zone loading is in progress. If a DNS client requests data in a zone that has already been loaded, the DNS server responds appropriately. If the request is for data in a zone that has not yet been entirely retrieved, the DNS server retrieves the specific data from Active Directory instead. This ability to retrieve specific data from Active Directory during zone loading provides an additional advantage over storing zone information in files—namely that the DNS Server service has the ability to respond to requests immediately. When the zone is stored in files, the service must sequentially read through the file until the data is found. Enhanced Support for IPv6 IPv6, which has been covered in previous editions of this column, is a new suite of Internet standard protocols. IPv6 is designed to address many of the issues of the current version—IPv4—such as address depletion, security, autoconfiguration, and the need for extensibility. One difference in IPv6 is that its addresses are 128 bits long, while IPv4 addresses are only 32 bits. IPv6 addresses are expressed in colon-hexadecimal notation. Each hexadecimal digit is 4 bits of the IPv6 address. A fully expressed IPv6 address is 32 hexadecimal digits in 8 blocks, separated by colons. An example of a fully expressed IPv6 address is FD91:2ADD:715A:2111:DD48:AB34:D07C:3914. Forward name resolution for IPv6 addresses uses the IPv6 Host DNS record, known as the AAAA record (pronounced "quad-A"). For reverse name resolution, IPv6 uses the IP6.ARPA domain, and each hexadecimal digit in the 32-digit IPv6 address becomes a separate level in the reverse domain hierarchy in inverse order. For example, the reverse lookup domain name for the address FD91:2ADD:715A:2111:DD48:AB34:D07C:3914 is 4.1.9.3.C.7.0.D.4.3.B.A.8.4.D.D.1.1.1.2.A.5.1.7.D.D.A.2.1.9.D.F.IP6.ARPA. 
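As a quick illustration of how the IP6.ARPA name above is derived from the 32 hexadecimal digits, the following snippet (Python standard library only; it is not part of Windows Server or the DNS Server service) reproduces the reverse lookup domain name for the example address.

```python
# Derive the IP6.ARPA reverse lookup name for the example address in the text.
import ipaddress

addr = ipaddress.ip_address("FD91:2ADD:715A:2111:DD48:AB34:D07C:3914")
print(addr.reverse_pointer)
# 4.1.9.3.c.7.0.d.4.3.b.a.8.4.d.d.1.1.1.2.a.5.1.7.d.d.a.2.1.9.d.f.ip6.arpa
# (Python prints the hexadecimal digits in lowercase; DNS names are not
# case sensitive, so this matches the IP6.ARPA name shown above.)
```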
The DNS Server service in Windows Server 2003 supports forward and reverse name resolution for IPv6; however, the support is not fully integrated. For example, to create an IPv6 address record (the AAAA record we just discussed) in the Windows Server 2003 DNS Manager snap-in, you must right-click the zone, click Other New Records, and then double-click IPv6 Host (AAAA) as the resource record type. To add a AAAA record in the DNS Manager snap-in for Windows Server 2008, right-click the zone name, and then click New Host (A or AAAA). In the New Host dialog box, you can type an IPv4 or IPv6 address. Figure 1 shows an example. Figure 1** New Host dialog box ** Another example of better support for IPv6 is for reverse IPv6 zones. To create a reverse lookup zone in the DNS Manager snap-in for Windows Server 2003, you have to manually type the reverse zone name in the Reverse Zone Lookup Name page of the New Zone Wizard. An example of a DNS reverse zone name is 1.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa (for the IPv6 subnet prefix 2001:db8:0:1::/64, fully expressed as 2001:0db8:0000:0001::/64). IPv6 reverse zones in the DNS Manager snap-in for Windows Server 2008 are now fully integrated into the New Zone wizard. There is a new page of the wizard that prompts you to select an IPv4 reverse lookup zone or an IPv6 reverse lookup zone. For an IPv6 reverse lookup zone, you just need to type the IPv6 subnet prefix and the wizard automatically creates the zone for you. Figure 2 shows an example. Figure 2** Naming an IPv6 reverse lookup zone **(Click the image for a larger view) Another enhancement for reverse zones is the way in which the DNS Manager snap-in displays IPv6 pointer (PTR) records. Figure 3 shows how the DNS Manager snap-in for Windows Server 2003 displays a PTR record. Figure 3** PTR record for IPv6 in Windows Server 2003 **(Click the image for a larger view) Although this display accurately reflects the structure of the DNS namespace for IPv6 reverse domain names, it makes PTR record management for IPv6 addresses more difficult. Figure 4 shows how the DNS Manager snap-in for Windows Server 2008 displays a PTR record. Figure 4** PTR record for IPv6 in Windows Server 2008 **(Click the image for a larger view) The DNS Server service in Windows Server 2003 supports operation over IPv6, but it must be manually enabled with the dnscmd /config /EnableIPv6 1 command. Windows Server 2008, conversely, supports operation over IPv6 by default. The Dnscmd.exe command-line tool has been updated to accept IPv6 addresses in command-line options. Additionally, the DNS Server service can now send recursive queries to IPv6-only servers, and the server forwarder list can contain both IPv4 and IPv6 addresses. For more information about IPv6 and how it is supported in Windows®, see microsoft.com/ipv6. Read-Only Domain Controller Support Windows Server 2008 also introduces the RODC, a new type of domain controller that contains a read-only copy of Active Directory information and can perform Active Directory functions but cannot be directly configured. RODCs are less vulnerable to attack and can be placed in locations where the physical security of the domain controller cannot be guaranteed or where the network contains potentially malicious hosts. For RODCs, the DNS Server service in Windows Server 2008 supports the new primary read-only zone type. 
When a computer becomes an RODC, it replicates a full read-only copy of all of the application directory partitions that DNS uses, including the domain partition, ForestDNSZones, and DomainDNSZones. This ensures that the DNS Server service running on the RODC has a full read-only copy of any DNS zones stored in the directory partitions of a domain controller that is not an RODC. You can view the contents of a primary read-only zone on an RODC, but you cannot change them. You must change the contents of the zone on a domain controller that is not an RODC. GlobalNames Zone Name Resolution with the GlobalNames Zone After the GlobalNames zone is deployed, when a Windows Vista-based DNS client attempts to resolve a single-label name, it appends the primary DNS suffix to the single-label name and submits the name query request to its DNS server. If the name is not found, the DNS client sends additional name query requests for the combination of the single-label name with the suffixes in its DNS suffix search list (if configured). If none of those names resolve, the client requests resolution using the single-label name. The DNS server searches for the single-label name in the GlobalNames zone. If it appears there, the DNS server sends the resolved IPv4 address or FQDN back to the DNS client. Otherwise, the DNS client computer converts the name to a NetBIOS name and uses NetBIOS name resolution techniques, including WINS. No changes to the DNS Client service are required to enable single-label name resolution in the GlobalNames zone. Windows Server 2008 and Windows Vista® support the NetBIOS over TCP/IP (NetBT) protocol. NetBT uses NetBIOS names to identify Session-layer NetBIOS applications. Although NetBIOS name resolution with WINS is not required for current versions of Windows that rely on Windows Sockets-based network applications and DNS for name resolution, many Microsoft customers deploy WINS in their networks to support older NetBT applications and to provide name resolution for single-label names across their organizations. Single-label names typically refer to important, well-known, and widely used servers for an organization, such as e-mail servers, central Web servers, or the servers for line-of-business applications. In order to allow these single-label names to be resolved across an organization using only DNS, you might find it necessary to add A records to the multiple DNS domains of your organization so that a Windows-based DNS client can resolve the name regardless of their assigned DNS domain suffix or suffix search list. Suppose, for example, that the contoso.com organization has a central Web server named CWEB that is a member of the central.contoso.com domain. To implement a single-label name for the server CWEB when DNS clients can be assigned the DNS domain suffix wcoast.contoso.com, central.contoso.com, or ecoast.contoso.com, the network administrator must create two additional A records for both cweb.wcoast.contoso.com and cweb.ecoast.contoso.com. However, don't forget that manually created A records for single-label names must be maintained for changes in IPv4 address assignment or for new names. If contoso.com is already using WINS for older NetBT applications, a network administrator can implement name resolution for the single-label name CWEB by adding a single static WINS record to their WINS infrastructure. If the IPv4 address changes, only the single static WINS record needs to be changed. 
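The resolution order described above (primary DNS suffix first, then the suffix search list, then the single-label name against the GlobalNames zone) can be sketched as follows. This is only an illustration of the documented client behavior, not actual resolver code; the contoso.com suffixes are the ones used in the example.

```python
# Order in which a client expands the single-label name CWEB, as described
# above, before falling back to NetBIOS/WINS. Illustrative only.
primary_suffix = "central.contoso.com"
suffix_search_list = ["wcoast.contoso.com", "central.contoso.com", "ecoast.contoso.com"]

def candidate_queries(single_label, primary, search_list):
    names = [f"{single_label}.{primary}"]
    for suffix in search_list:
        candidate = f"{single_label}.{suffix}"
        if candidate not in names:
            names.append(candidate)
    names.append(single_label)  # resolved against the GlobalNames zone
    return names

print(candidate_queries("CWEB", primary_suffix, suffix_search_list))
# ['CWEB.central.contoso.com', 'CWEB.wcoast.contoso.com',
#  'CWEB.ecoast.contoso.com', 'CWEB']
```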
Because single-label names are easier to manage in WINS, many Windows-based networks use static WINS records for single-label names. To provide a single-label name solution on DNS that's as easily managed as static WINS records, the DNS Server service in Windows Server 2008 supports a new zone called GlobalNames to store single-label names. The replication scope of this zone is typically a forest, which provides single-label name resolution across an entire Active Directory forest. Additionally, the GlobalNames zone is intended to hold single-label names only for the central and critical servers of an organization that are managed by its IT department. The GlobalNames zone is not intended to be used to store the names of desktop computers or other servers whose IPv4 addresses can change, and under no circumstances does it support DNS dynamic updates. It is most commonly used to hold alias (CNAME) resource records to map a single-label name to a Fully Qualified Domain Name (FQDN). For networks that are currently using WINS, the GlobalNames zone usually contains resource records for IT-managed names that are already statically configured in WINS. The GlobalNames zone provides single-label name resolution only when all authoritative DNS servers are running Windows Server 2008. However, other DNS servers that are not authoritative for any zone can be running older versions of Windows or other operating systems. The GlobalNames zone must be unique in the forest. To provide maximum performance and scalability, the GlobalNames zone should be integrated with Active Directory and you should configure each authoritative DNS server with a local copy of it. This is also required to support deployment of the GlobalNames zone across multiple forests. For more information about DNS support in Windows and about deploying the GlobalNames zone, see the Microsoft DNS Web page at microsoft.com/dns.
https://docs.microsoft.com/en-us/previous-versions/technet-magazine/cc137727(v=msdn.10)
2019-06-16T00:44:28
CC-MAIN-2019-26
1560627997508.21
[array(['images/cc137727.fig01.gif', 'Figure 1 New Host dialog box'], dtype=object) array(['images/cc137727.fig02.gif', 'Figure 2 Naming an IPv6 reverse lookup zone'], dtype=object) array(['images/cc137727.fig03.gif', 'Figure 3 PTR record for IPv6 in Windows Server 2003'], dtype=object) array(['images/cc137727.fig04.gif', 'Figure 4 PTR record for IPv6 in Windows Server 2008'], dtype=object) ]
docs.microsoft.com
Note: This class is incubating and may change in a future version of Gradle. Assembles a static library from object files. The file where the output binary will be located. The source object files to be passed to the archiver. Additional arguments passed to the archiver. The tool chain used for creating the static library. Note: This method is incubating and may change in a future version of Gradle. Adds a set of object files to be linked. The provided source object is evaluated as per Project.files().
https://docs.gradle.org/2.10/dsl/org.gradle.nativeplatform.tasks.CreateStaticLibrary.html
2017-04-23T12:01:12
CC-MAIN-2017-17
1492917118552.28
[]
docs.gradle.org
Can I buy just one membership site? If you want to have reseller rights to just one membership site, you can do this by signing up as a free Silver member, then upgrading to GOLD and then to PLATINUM. That way you will have reseller rights to that one membership site, for example Traffic Generation Club. Choose the site you wish to be the reseller for from the list in the Related Article: What sites are included in Membership Command? Then replace trafficgenerationclub in the URL with the name of your chosen club.
http://docs.promotelabs.com/article/686-can-i-buy-just-one-membership-site
2017-04-23T11:48:57
CC-MAIN-2017-17
1492917118552.28
[]
docs.promotelabs.com
Configuration - Configuring Buildbot - Global Configuration - Change Sources - Schedulers - Buildslaves - Builder Configuration - Build Factories - Properties - Build Steps - Common Parameters - Source Checkout - Source Checkout (Slave-Side) - ShellCommand - Slave Filesystem Steps - Python BuildSteps - Transferring Files - Transferring Strings - Running Commands on the Master - Setting Properties - Setting Buildslave Info - Triggering Schedulers - RPM-Related Steps - Debian Build Steps - Miscellaneous BuildSteps - Interlocks - Status Targets
https://buildbot.readthedocs.io/en/v0.8.12/manual/configuration.html
2017-04-23T11:56:05
CC-MAIN-2017-17
1492917118552.28
[]
buildbot.readthedocs.io
1 Introduction The Application Performance Monitor (APM) is a solution that helps to analyze performance issues as well as support users in analyzing runtime behaviour. This introduction provides a short explanation about what APM is, which tools are in the APM suite and what they are used for. The APM tool consists of the following tools: - Statistics Tool - Performance Tool - Trap Tool - Measurements Tool - JVM Browser - Query Tool - Log Tool 2 Definition of APM Wikipedia provides a good definition of Application Performance Management. APM is the monitoring and management of performance and availability of software applications. APM strives to detect and diagnose application performance problems to maintain an expected level of service. APM is the translation of IT metrics into business meaning (value). Of course you need the basic infrastructure probes to measure hardware parts like the CPU, memory and disk, as well as components like the database and the web server. However, for higher quality support you should also look at the application and how it is performing, especially linking this to the user’s business perspective. We all know software contains bugs, and of course we all test before we bring something into production. For users, an error is a sign that the application is not functioning. If the error appears unexpectedly, the user loses trust in the system. The standard reaction of support was always to ask questions, including whether the customer can reproduce the issue, to turn on logging and to ask for a database dump, so support can investigate the issue in a safe environment. 3 The Statistics Tool to See Performance Issues Coming We know that performance in applications is difficult to test in the initial stage. The data set is small and the exact usage, for example the search behavior of users, is unknown. Therefore, even if all performance best practices are applied in building a Mendix application, some issues still appear in production. Usually they appear over time, so the question is how to see them coming. The APM Statistics tool collects statistical data about microflows and client API requests. These statistics are stored periodically (usually daily, configurable) and used to see a trend long before the users sound the alarm. Also, as a good habit, support frequently checks the longest and most frequently running microflows to see if they can be improved. This is the statistics tool (for load balanced environments you see the server where the microflow runs): 4 The Performance Tool to Record Microflows When support wants to investigate a performance issue, either proactively through the statistics tool or reactively when a customer reports an issue, they use the APM Performance tool. With it they can see the duration of the steps in the microflow on the action level. They can drill down to see individual SQL statements. They can even ask the database for an explain plan that shows how the database processes the query, which indexes it uses, and more. This tool quickly helps to pinpoint the issue. This is the call tree, which provides an overview of what happens, showing the called microflows and one iteration of a loop, filtered by duration: Below is the performance tool output. You can double-click on all actions and in the case of a microflow call, you can browse to the next microflow. In the case of loops you will see the individual iterations.
These are the SQL statements during an action: 5 The Trap Tool Is Your Flight Data Recorder The APM Trap tool is always listening on all levels of logging up to the highest TRACE level and remembers the last couple of seconds (configurable!). When an error occurs, the last few seconds of logging leading up to the error are saved in the database. The APM Measurements tool can also monitor memory usage or CPU and trap logging when a threshold is reached. In this way, the information is collected the first time something occurs, which speeds up the problem resolution considerably. 6 The Measurements Tool for Collecting More Information and Triggering an Alarm When Needed The APM Measurements tool links measurements to business logic, bridging the gap between the model and infrastructure metrics like CPU usage. The APM Measurements tool gets information from several sources. First, a simple APM JVM Browser presents information similar to what is shown in standard Java management tools like JConsole, VisualVM and JMC. Second, the APM Query Tool performs business and database queries to monitor database-specific meta information and/or business values. Third, the internal metrics of Mendix and the APM tool are available as well via the APM JVM Browser or other Java JVM management platforms. The measurements can be used to trigger events on thresholds. For example, if more than 80% memory is used or if more than 80% of the CPU is used, a trigger fires. At some customers, support has configured a trigger on a model change so they are informed when a new deployment is done. The trigger can be to trap logging, or to execute a microflow, for example, to send an email or to make a heap dump. 7 JVM Browser The JVM browser can be used to see JVM information similar to tools like JConsole, JVisualVM or JMC. This gives the information to more functional people without the need for specialists and technical access to the machines running the Mendix application. 8 Query Tool The query tool lets you perform XPath, OQL and JDBC queries to collect either business statistics (like reports), application statistics (number of scheduled events running at the same time) or database-specific statistics like the number of sessions. This module is also used for the explain plan functionality of the performance tool. 9 Log Tool The log tool is used to collect Mendix Runtime log messages and store them in the database. This gives remote access to log information, makes it available to consultants, and allows for easy searching and analyzing. The log rerouting makes sure that Java messages that are sent to the Java console, to the java.util logging library or to the log4j library, are rerouted to the Mendix log. For example, javamail sends debug output to the console and with this option you can collect that information and make it visible in the Mendix log as well as in the APM Log Tool and APM Trap Tool. This helped support a lot in solving email issues and issues with web services security and certificates.
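As a generic illustration of the threshold-based triggering described above (this is not the Mendix APM implementation; the metric names, thresholds, and callback are all made up), the idea can be sketched as follows: when a measurement crosses its threshold, an action such as trapping the log buffer or running a microflow fires.

```python
# Generic sketch of threshold-based triggering; illustrative values only.
THRESHOLDS = {"memory_pct": 80, "cpu_pct": 80}

def check_measurements(measurements, on_trigger):
    """Fire on_trigger for every measurement that exceeds its threshold."""
    for name, value in measurements.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value > limit:
            on_trigger(name, value, limit)

check_measurements(
    {"memory_pct": 85, "cpu_pct": 40},
    on_trigger=lambda n, v, l: print(f"trigger: {n}={v}% exceeds {l}%"),
)
```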
https://docs.mendix.com/addons/APM/
2017-04-23T11:56:45
CC-MAIN-2017-17
1492917118552.28
[array(['attachments/Introduction/Statistics_Tool.png', None], dtype=object) array(['attachments/Introduction/Performance_Tool_Tree_View.png', None], dtype=object) array(['attachments/Introduction/Performance_Tool_Browse_Microflow.png', None], dtype=object) array(['attachments/Introduction/Performance_Tool_Browse_Actions.png', None], dtype=object) array(['attachments/Introduction/Measurements_Tool.png', None], dtype=object) ]
docs.mendix.com
unpackUnorm2x16, unpackSnorm2x16, unpackUnorm4x8, unpackSnorm4x8 — unpack floating-point values from an unsigned integer p Specifies an unsigned integer containing packed floating-point values. unpackUnorm2x16, unpackSnorm2x16, unpackUnorm4x8, and unpackSnorm4x8 unpack a single 32-bit unsigned integer, specified in the parameter p, into a pair of 16-bit unsigned integers, a pair of 16-bit signed integers, four 8-bit unsigned integers, or four 8-bit signed integers, respectively. The packed values are then converted to normalized floating-point values as follows: unpackUnorm2x16: f / 65535.0 unpackSnorm2x16: clamp(f / 32767.0, -1.0, 1.0) unpackUnorm4x8: f / 255.0 unpackSnorm4x8: clamp(f / 127.0, -1.0, 1.0) The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits. See Also: clamp, packUnorm2x16, packSnorm2x16, packUnorm4x8, packSnorm4x8 Copyright © 2011-2014 Khronos Group. This material may be distributed subject to the terms and conditions set forth in the Open Publication License, v 1.0, 8 June 1999.
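For readers who want to check the conversions above outside of GLSL, here is a small reference sketch (Python standard library only; the test values are arbitrary) mirroring unpackUnorm4x8 and unpackSnorm4x8, including the rule that the first component comes from the least significant bits.

```python
# Reference sketch (not GLSL) of unpackUnorm4x8 / unpackSnorm4x8 semantics.
import struct

def unpack_unorm_4x8(p: int):
    bytes_le = p.to_bytes(4, "little")            # least significant byte first
    return tuple(b / 255.0 for b in bytes_le)

def unpack_snorm_4x8(p: int):
    signed = struct.unpack("<4b", p.to_bytes(4, "little"))  # signed 8-bit ints
    return tuple(max(-1.0, min(1.0, b / 127.0)) for b in signed)

print(unpack_unorm_4x8(0xFF800000))  # (0.0, 0.0, ~0.502, 1.0)
print(unpack_snorm_4x8(0x7F81FF00))  # (0.0, ~-0.008, -1.0, 1.0)
```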
http://docs.gl/el3/unpackUnorm
2017-04-23T11:59:47
CC-MAIN-2017-17
1492917118552.28
[]
docs.gl
V3 Multi-Datacenter Replication Reference: Fullsync via Active Anti-Entropy Note: Technical preview The active anti-entropy fullsync strategy, as it pertains to replication, is currently in technical preview mode. This means that it hasn’t been tested at large scale and that there may be issues that Basho must address prior to a general release. Please don’t use this feature on a production system without professional services or customer service engineering support. Overview Riak Enterprise Multi-Datacenter (MDC) Replication version 3 (Riak Enterprise version 1.4.0+) can now take advantage of Riak’s active anti-entropy (AAE) subsystem, which was first introduced as a technology preview in Riak 1.3.0. AAE plus Replication uses existing Riak AAE hash trees stored in LevelDB, so if AAE is already active, there is no additional startup delay for enabling the aae fullsync strategy. AAE can also be enabled for the first time on a cluster, although some custom settings can enhance performance in this case to help AAE trees be built more quickly. See Configuration/AAE Tree Build Optimization. Requirements: - Riak Enterprise version 1.4.0 or later installed on source and sink clusters - Riak Enterprise MDC Replication Version 3 enabled on source and sink clusters - Both source and sink clusters must be of the same ring size - AAE must be enabled on both source and sink clusters fullsync_strategyin the riak_replsection of the advanced.configconfiguration file must be set to aaeon both source and sink clusters - AAE trees must have been built on both source and sink clusters. In the event that an AAE tree is not built on both the source and sink, fullsync will default to the keylistfullsync strategy for that partition. Configuration If you are using Riak Enterprise version 2.0, configuration is managed using the advanced.config files on each node. The semantics of the advanced.config file are similar to the formerly used app.config file. For more information and for a list of configurable parameters, see our documentation on Advanced Configuration. Enable Active Anti-Entropy To enable active anti-entropy (AAE) in Riak Enterprise, you must enable it in Riak in both source and sink clusters. If it is not enabled, the keylist strategy will be used. To enable AAE in Riak KV: anti_entropy = active By default, it could take a couple of days for the cluster to build all of the necessary hash trees because the default build rate of trees is to build 1 partition per hour, per node. With a ring size of 256 and 5 nodes, that is 2 days. Changing the rate of tree building can speed up this process, with the caveat that rebuilding a tree takes processing time from the cluster, and this should not be done without assessing the possible impact on get/put latencies for normal cluster operations. For a production cluster, we recommend leaving the default in place. For a test cluster, the build rate can be changed in riak.conf. If a partition has not had its AAE tree built yet, it will default to using the keylist replication strategy. Instructions on these settings can be found in the section directly below. AAE Tree Build Optimization You can speed up the build rate for AAE-related hash trees by adjusting the anti_entropy.tree.build_limit.* and anti_entropy.concurrency_limit settings. 
anti_entropy.tree.build_limit.number = 10
anti_entropy.tree.build_limit.per_timespan = 1h
anti_entropy.concurrency_limit = 10
Enable AAE Fullsync Replication Strategy Finally, the replication fullsync strategy must be set to use aae on both source and sink clusters. If not, the keylist replication strategy will be used. To enable AAE w/ Version 3 MDC Replication:
{riak_repl, [
  % ...
  {fullsync_strategy, aae},
  % ...
]}
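For context, the arithmetic behind the "couple of days" estimate above, and the effect of raising the build limit to 10 trees per hour as in the snippet, works out as follows (a rough sketch that assumes trees build in parallel on every node and ignores the concurrency limit and cluster load).

```python
# Rough arithmetic behind the tree build time quoted above.
ring_size = 256
nodes = 5

def build_days(trees_per_hour_per_node):
    partitions_per_node = ring_size / nodes
    return partitions_per_node / trees_per_hour_per_node / 24

print(round(build_days(1), 1))   # default rate: ~2.1 days
print(round(build_days(10), 2))  # tuned rate:   ~0.21 days (about 5 hours)
```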
http://docs.basho.com/riak/kv/2.2.1/using/reference/v3-multi-datacenter/aae/
2017-04-23T11:50:51
CC-MAIN-2017-17
1492917118552.28
[]
docs.basho.com
Saving and Loading from Web Easy Save allows you to save to a MySQL server on the web using the PHP file and MySQL database provided with Easy Save. It saves to the database using Easy Save’s own format. However, you can use ES2Web.UploadRaw(string data) and ES2Web.LoadRaw() to upload and download a raw string to and from a database. Setup 5. Place the ES2.php file on your web server, and then enter the URL to the file into a web browser. If you have followed the previous steps correctly, you should see the message ‘ES2.php and MySQL database are working correctly‘. You are now ready to use ES2Web in Unity. Make sure that when using the ES2Web functions, you supply the same username and password as specified in your ES2PHP file. Saving to Web We use coroutines to upload and download from web as this allows us to download data over multiple frames. We advise you familiarise yourself with coroutines before attempting to save or load from web. In this example we create a coroutine to upload a Mesh to web. You can then start the coroutine using the StartCoroutine() method. - First, we create an ES2Web object, using the URL to our ES2.php file as the path. We can also provide path parameters after the URL. An important parameter is webfilename, which specifies which logical file we would like to save to on our MySQL server. - Other important parameters include webusername and webpassword, which are the username and password specified in our ES2.php file. We can even use the tag and encrypt parameter. - Now we yield ES2Web.Upload(data) to upload our data. - Finally, we check for any errors returned by ES2Web using the ES2Web.errorCode and ES2Web.error variables. A list of errors can be found in the Error Codes section of the ES2Web page. C# JS In this example we download a piece of data from web and load it. You may also download an entire file from web by not specifying a tag as a parameter. The URL and parameters work in the exact same way as in the Saving to Web section above. - First, create our ES2Web object with our URL. - Now instead of yielding ES2.Upload(), we yield ES2.Download() to download the data from the server. - Once downloaded, and we’ve checked for errors, we can do one of two things. - Save the data to a local file using ES2Web.SaveToFile(path). - Load directly from the downloaded data using ES2Web.Load(tag), or one of the other load functions. C# JS Delete from Web Deleting data using ES2Web.Delete(path) is much like uploading or downloading data. C# JS Integrating with a Login System It is strongly advised that you integrate ES2.php into a login system of your choosing. The ES2.php file contains an Authenticate($username, $password) method which can be modified to integrate into your login system. It has two parameters: username is the webusername specified in Unity. password is the webPassword specified in Unity. By default Easy Save 2 sends the password as an MD5 hash, so you may need to convert your password to an MD5 hash using PHP’s MD5($str) method. Alternatively you can get ES2 to send your password in plain text. To do this, set the hashType variable of your ES2Web objects to ES2Web.HashType.None. However, it is not advised that you do this unless you are using HTTPS. The Authenticate method should return false if either the username or password do not match, or true if they both match. Error Codes For a list of error codes, see the Error Codes section of the ES2Web documentation.
http://docs.moodkie.com/easy-save-2/guides/saving-and-loading-from-web/
2017-04-23T11:53:00
CC-MAIN-2017-17
1492917118552.28
[]
docs.moodkie.com
You can arrange your folders/labels in alphabetical or any custom order. The option for this arrangement is located in Window > Arrange Folders Alphabetically. Once you have unchecked the option, you can drag and drop folders to the top or bottom, or even drop one into another folder to make it a sub-folder.
http://docs.airmailapp.com/airmail-for-mac/order-of-folderslabels-airmail-for-macos
2017-04-23T11:58:05
CC-MAIN-2017-17
1492917118552.28
[array(['https://uploads.intercomcdn.com/i/o/8524997/9e23d3c59c5c2874c8cff7af/Screen%2520Shot%25202016-07-22%2520at%25208.20.15%2520PM.png', None], dtype=object) ]
docs.airmailapp.com
Airmail implements gesture features that enable you to swipe left and right quickly on the Inbox folder and other main folders without fuss. This is also a great way to improve daily productivity for all types of users. Screenshot & Instruction Two-finger swipe right to left to move the message into the trash. Two-finger swipe left to right to move the message into the archive. You can edit the Left and Right Swipe in Airmail Preferences > Actions as shown in the screenshot below. Please contact support if you encounter any difficulties and we will be glad to help you out!
http://docs.airmailapp.com/airmail-for-mac/multitouch-gestures
2017-04-23T11:59:14
CC-MAIN-2017-17
1492917118552.28
[array(['https://uploads.intercomcdn.com/i/o/8544083/f600683a5e4e28c6b8811608/Screen%2520Shot%25202016-07-23%2520at%252011.15.21%2520AM.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/8544085/910aa6ba39bb9238fb23130c/Screen%2520Shot%25202016-07-23%2520at%252011.14.07%2520AM.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/21970304/7578d616e8566fea3b0f5b35/Screen+Shot+2017-04-11+at+10.57.28+PM.png', None], dtype=object) ]
docs.airmailapp.com
The container for data that determines its storage site (or sites when there is redundancy), and the unit of migration for rebalancing. A relationship between two tables whereby the buckets that correspond to the same values of their partitioning fields are guaranteed to be physically located in the same server or peer client. In GemFire XD, a table configured to be colocated with another table has a dependency on the other table. If the other table needs to be dropped, then the colocated tables must be dropped first. A server or peer client process that is connected to the distributed system and has the host-data property set to true. A data store is automatically part of the default server group, and may be configured to be part of other server groups. A colocation relationship that is set up automatically between tables when there is no COLOCATED WITH clause in the CREATE TABLE statement. The anonymous server group that implicitly includes all servers in the distributed system. This is the server group that hosts the data for a table where there is no SERVER GROUPS clause in the CREATE TABLE statement, and there were no server groups specified in the CREATE SCHEMA statement for the schema that this table belongs to. A typical GemFire XD deployment is made up of a number of distributed processes that connect to each other to form a peer-to-peer network. These member processes may or may not host any data. The JDBC client process and all servers are always peer members of the distributed system. The members discover each other dynamically through a built-in multicast based discovery mechanism or using the GemFire XD locator service when TCP is more desirable. Sometimes a distributed system is also referred to as a GemFire XD cluster. A partitioning strategy based on the hashcode of one or more fields, such that all the values with the same hashcode are placed into the same bucket. A parallel SQL query engine that can read and write data to HDFS. HAWQ uses standards compliant SQL. GemFire XD utilizes the PXF driver installed with HAWQ to provide HDFS table data to HAWQ external tables. Hadoop Distributed File System. GemFire XD supports the HDFS implementation provided with Pivotal HD Enterprise. Memory allocated for use by the JVM. Heap memory undergoes garbage collection. Horizontal partitioning refers to partitioning strategies where a table is split by rows so that a bucket always contains entire rows. Vertical partitioning refers to strategies where a table is split by columns so that a bucket always contains entire columns. GemFire XD currently only supports horizontal partitioning strategies. A partitioning strategy based on specified lists of values of one or more fields. For example, a table could be list-partitioned on a string-valued field so that all the values for a specified list of string values are placed in the same bucket. A locator facilitates discovery of all members in a distributed system. This is a component that maintain a registry of all peer members in the distributed system at any given moment. Though typically started as a separate process (with redundancy for HA), a locator can also be embedded in any peer member (like a server). This opens a TCP port and all new members connect to this process to get initial membership information for the distributed system. Memory that is not part of the JVM's allocated heap but is allocated upon server startup for data storage. Off-heap memory is not managed by JVM garbage collection processes. 
A table that manages large volumes of data by partitioning it into manageable chunks and distributing it across all the servers in its hosting server groups. Partitioning attributes, including the partitioning strategy can be specified by supplying a PARTITION BY clause in a CREATE TABLE statement. See also replicated table, partitioning strategy. The policy used to determine the specific bucket for a field in a partitioned table. GemFire XD currently only supports horizontal partitioning , so an entire row is stored in the same bucket. You can hash-partition a table based on its primary key or on an internally-generated unique row id if the table has no primary key. Other partitioning strategies can be specified in the PARTITION BY clause in a CREATE TABLE statement. The strategies that are supported by GemFire XD include hash-partitioning on columns other than the primary key, range-partitioning , and list-partitioning. Also known as the embedded client, this is a process that is connected to the distributed system using the GemFire Peer Driver. The member may or may not host any data depending on the configuration property host-data. By default, all peer clients will host data. Configuration describes how this property can be set at connection time. Essentially, the peer client can be configured to just be a "pure" client or can be a client as well as a data store. When hosting data, the member can be part of one or more server groups. JDBC driver packaged in gemfirexd.jar. The client connects to the distributed system using the GemFire XD driver with the URL jdbc:gemfirexd: and doesn't specify a host and port in the URL. This driver provides single-hop access to all the data managed in the distributed members. (The GemFire XD JDBC thin-client driver also supports one-hop access for lightweight client applications.) A driver plug-in that enables HAWQ to query HDFS table data as an external table. The PXF driver is installed with HAWQ. The process that executes the query and determines the overall plan. It may distribute the query to the appropriate servers that host the data. When using a peer client, the query coordinator is the peer client itself. When using a thin client, the query coordinator is the server member to which the client is connected. A partitioning strategy based on specified contiguous ranges of values of one or more fields. For example, a table could be range-partitioned on a date field so that all the values within a range of years are placed into the same bucket. A table that keeps a copy of its entire dataset locally on every data store in its server groups. GemFire XD creates replicated tables by default if you do not specify a PARTITION BY clause. See also partitioned table. A JVM started with the gfxd server command, or any JVM that calls the FabricServer.start method. A GemFire XD server may or may not also be a data store, and may or may not also be a network server. A logical grouping of servers used for specifying which members will host data for table. Also used for load balancing thin client connections. A process that is not part of the distributed system but is connected to the distributed system through a thin driver. The thin client connects to a single server in the distributed system which in turn may delegate requests to other members of the distributed system. JDBC thin clients can also be configured to provide one-hop access to data for lightweight client applications. The JDBC thin driver bundled in the product (gemfirexd-client.jar). 
A process that is not part of the distributed system but is connected to it through a thin driver. The connection URL for this driver is of the form jdbc:gemfirexd://hostname:port/.
http://gemfirexd.docs.pivotal.io/docs/1.3.1/userguide/reference/glossary/glossary.html
2017-04-23T11:57:58
CC-MAIN-2017-17
1492917118552.28
[]
gemfirexd.docs.pivotal.io
PIN messages About PIN messages Find your PIN Compose and send a PIN message Create a link for a PIN Reply to or forward an email or PIN message Set the importance level for an email or a PIN message that you send Change the display color of PIN messages Parent topic: How to: Messages application
http://docs.blackberry.com/en/smartphone_users/deliverables/44780/26346.html
2014-10-20T13:45:34
CC-MAIN-2014-42
1413507442900.2
[]
docs.blackberry.com
... Config. "org.mortbay.plus.naming.Link"for link between a web.xml resource name and a NamingEntry. See Configuring Links for more info. There are 3 places in which you can define naming entries: ... This example will define a virtual <env-entry> called mySpecialValue with value 4000 that is unique within the whole jvm. It will be put into JNDI at java:comp/env/mySpecialValue for every webapp deployed.. ...
http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=45914&selectedPageVersions=55&selectedPageVersions=56
2014-10-20T13:27:53
CC-MAIN-2014-42
1413507442900.2
[]
docs.codehaus.org
The theme of this release is Refactoring, Refactoring, Formatting. Extract Local Variable Refactoring - Initial code: - This refactoring also supports replacing all occurrences of the expression: - Preview: - See the results: The keyboard binding for this refactoring is the same as it is for the Java variant of the refactoring (e.g., CMD-Alt-L on Macs). Extract Constant Refactoring Extract Constant allows a programmer to select a static expression and pull it out as a constant in the current class. This refactoring is also available from the refactoring context menu. - To invoke this refactoring, select a static expression and select the Extract Constant... refactoring: - In the refactoring wizard, you can choose to replace all occurrences with the constant, and you can also choose to qualify references to the constant with the class name. Note that the expression to be extracted must be static. Extract Method Refactoring - This refactoring supports reordering and renaming of the parameters of the extracted method: - As expected, the result properly extracts the selected statements into a new method, taking into account new parameters and the return statement: The limitations of this refactoring are the following: - Only complete statements or groups of complete statements can be refactored. - Multiple return values are not supported. Rename Refactoring Although originally introduced in the 2.0.1 version, rename refactoring is now available from the context menu: And the menu bar: or keyboard commands (e.g., Alt-CMD-R on Macs). Groovy Editor Code Formatting and Indentation In this release, we focused on fixing issues that directly affect a user's day-to-day experience, such as typing, copying and pasting. We have integrated a number of "smart" editing transformations into the Groovy Editor. In the past, we inherited some of the Java-like editing transformations, which would sometimes fail due to incompatibilities between the Groovy and Java grammars. In this release we are providing implementations that rely on heuristics rather than a full parser, so they should work even when the document in the editor is not parsable. These implementations are brand new and we are looking forward to getting your feedback on how to make them work better and polish them up. Besides these "on-line" algorithms that get invoked while typing and pasting, the Groovy editor also provides an "off-line" formatter that can be invoked explicitly by pressing CTRL-I or CTRL-SHIFT-F. Numerous bugfixes and small improvements have been made to the off-line Groovy formatter and indentor as well. Better AST Transform support In previous versions of Groovy-Eclipse, AST Transforms would often mangle source code and source locations to the extent that the Groovy Editor could no longer find the lexical location of Groovy code elements. We have solved this problem in 2.0.2. What's next? The Groovy-Eclipse team will continue to work on improving the development experience of writing Groovy code in Eclipse as we move towards the 2.1.0 release in early Autumn.
http://docs.codehaus.org/pages/viewpage.action?pageId=157646984
2014-10-20T13:45:34
CC-MAIN-2014-42
1413507442900.2
[array(['/download/attachments/156827657/extractConstant1.gif?version=1&modificationDate=1276644779842&api=v2', None], dtype=object) array(['/download/attachments/156827657/extractConstant2.gif?version=1&modificationDate=1276644780149&api=v2', None], dtype=object) array(['/download/attachments/156827657/extractConstant3.gif?version=1&modificationDate=1276644779842&api=v2', None], dtype=object) array(['/download/attachments/156827657/extractConstant4.gif?version=1&modificationDate=1276644780229&api=v2', None], dtype=object) array(['/download/attachments/156827657/extractMethod1.gif?version=1&modificationDate=1276645612624&api=v2', None], dtype=object) array(['/download/attachments/156827657/extractMethod2.gif?version=1&modificationDate=1276645612625&api=v2', None], dtype=object) array(['/download/attachments/156827657/extractMethod3.gif?version=1&modificationDate=1276645613029&api=v2', None], dtype=object) array(['/download/attachments/156827657/rename1.gif?version=1&modificationDate=1276645805296&api=v2', None], dtype=object) array(['/download/attachments/156827657/rename2.gif?version=1&modificationDate=1276645805402&api=v2', None], dtype=object) array(['/download/attachments/156827657/javadoc1.gif?version=1&modificationDate=1277147350251&api=v2', None], dtype=object) array(['/download/attachments/156827657/expandSelection1.gif?version=1&modificationDate=1277156128048&api=v2', None], dtype=object) ]
docs.codehaus.org
There are 2 plugins implementing the framework: - GRIB - NetCDF Unsupported Plugins Two other plugins are present but not currently supported: - HDF4 - GeoTiff
http://docs.codehaus.org/pages/viewpage.action?pageId=244875671
2014-10-20T13:03:54
CC-MAIN-2014-42
1413507442900.2
[array(['/download/attachments/135856262/coverage-api.gif?version=1&modificationDate=1260370105284&api=v2', None], dtype=object) array(['/download/attachments/135856262/coverage-core_new.gif?version=1&modificationDate=1405422185347&api=v2', None], dtype=object) ]
docs.codehaus.org
Configuring Test Lists To edit existing test lists or create new test lists, click on Test Lists in the QA section on the main admin page. Create a new test list by clicking the Add Test List link on the Test Lists admin page. The fields for defining a new test list are described below. Slug A slug is a URL friendly short label for a test list. It should consist of only letters, numbers, underscores and hyphens. Warning Message Use this field to define the message shown when a test within this test list is at Action level. If the message is left blank, the default (“Do not treat”) is used. Javascript You can enter arbitrary Javascript snippets here, and they will be injected into the page when performing this test list. Javascript Injection Test List Members Test Lists are composed from Tests and/or other Test Lists, called Sublists. In order to add a test to your test list, click the magnifying glass beside the first text box under the Test header. This will bring up a dialogue box that you can use to search for the test you want to add (or create a new one). Search results Once you’ve found the test you want to add, clicking on its name will close the dialogue box and add the test to your test list. Test added to list Likewise, you can select another Test List to include by using the Sublist entry field: Adding a Sublist to a Test List If you need to add more Tests or Sublists, then click the Add another Test List Membership or Add another Sublist links. You can choose to have Sublists distinguished visually by checking the corresponding Show Outline Around Sublist checkbox. If you now click the Save and continue editing button at the bottom of the page, you will see the name of your test next to its id number. Test added and saved Continue to add tests in this fashion (you can add many tests without saving in between) until you have added all the required tests. When you are finished, click Save and continue editing and confirm that all your tests are present. See the section below for instructions on how to reorder the tests if required. Reordering Tests and Sublists within a Test List Tests and Sublists can easily be reordered by dragging and dropping a test's row into a new position in the list. After you have finished reordering, click Save and continue editing and ensure all your tests are now in the correct order. Reorder Tests & Sublists
http://docs.qatrackplus.com/en/latest/admin/qa/test_lists.html
2018-10-15T19:58:44
CC-MAIN-2018-43
1539583509690.35
[array(['../../_images/javascript.png', 'Javascript injection'], dtype=object) array(['../../_images/search_for_test.png', 'search for test'], dtype=object) array(['../../_images/search_results.png', 'Search results'], dtype=object) array(['../../_images/test_added_to_list.png', 'Test added to list'], dtype=object) array(['../../_images/select_sublist.png', 'Adding a Sublist to a Test List'], dtype=object) array(['../../_images/test_added_and_saved.png', 'Test added and saved'], dtype=object) array(['../../_images/drag_and_drop.png', 'Reorder Tests & Sublists'], dtype=object) ]
docs.qatrackplus.com
Developers Guide Contents: Installing QATrack+ For Development Due to the huge volume of tutorials already written on developing software using Python, Django, and git, only a brief high level overview of getting started developing for the QATrack+ project will be given here. That said, there are lots of steps involved which can be intimidating to newcomers (especially git!). Try not to get discouraged, and if you get stuck on anything or have questions about using git or contributing code then please post to the mailing list so we can help you out! In order to develop for QATrack+ you first need to make sure you have a few requirements installed. Python 3.4+ QATrack+ is developed using Python 3 (at least Python 3.4, preferably 3.5+). Depending on your operating system, Python 3 may already be installed, but if not you can find instructions for installing the proper version on the official Python website. Git QATrack+ uses the git version control system. While it is possible to download and modify QATrack+ without git, if you want to contribute code back to the QATrack+ project, or keep track of your changes, you will need to learn about git. You can download and install git from the official git website. After you have git installed it is recommended you go through a git tutorial to learn about git branches, committing code and pull requests. There are many tutorials available online, including a tutorial by the Django team as well as tutorials on BitBucket and GitHub. BitBucket The QATrack+ project currently uses BitBucket for hosting its source code repository. In general, to contribute code to QATrack+ you will need to create a fork of QATrack+ on BitBucket, make your changes, then make a pull request to the main QATrack+ project. Creating a fork of QATrack+ Creating a fork of QATrack+ is explained in the BitBucket documentation. Cloning your fork to your local system Once you have created a fork of QATrack+ on BitBucket, you will want to download your fork to your local system to work on. This can either be done using the command line or one of the graphical git apps that are available. This page assumes you are using bash on Linux or the Git Bash shell on Windows. Setting up your development environment In order to keep your QATrack+ development environment separate from your system Python installation, you will want to set up a virtual environment and install QATrack+’s Python dependencies in it. Using the command line, change to the directory where you installed QATrack+, create a new virtual environment, and activate the virtual environment:
cd /path/to/qatrackplus
python3 -m venv env
source env/bin/activate
Then install the development libraries:
pip install -r requirements.dev.txt
Creating your development database Rather than using a full blown database server for development work, you can use SQLite3, which is included with Python. Once you have the requirements installed, copy the debug local_settings.py file from the deploy subdirectory and then create your database:
cp deploy/local_settings.dev.py qatrack/local_settings.py
mkdir db
python manage.py migrate
This will put a database called default.db in the db subdirectory. Running the development server After the database is created, create a superuser so you can log into QATrack+:
python manage.py createsuperuser
and then run the development server:
python manage.py runserver
Once the development server is running you should be able to visit http://127.0.0.1:8000/ in your browser and log into QATrack+.
Next Steps Now that you have the development server running, you are ready to begin modifying the code! If you have never used Django before it is highly recommended that you go through the official Django tutorial, which is an excellent introduction to writing Django applications. Once you are happy with your modifications, commit them to your source code repository, push your changes back to your online repository and make a pull request! If those terms mean nothing to you…read a git tutorial! QATrack+ Development Guidelines The following lists some guidelines to keep in mind when developing for QATrack+. Internationalization & Translation As of version 0.3.1, all strings and templates in QATrack+ should be marked for translation. This will allow for QATrack+ to be made available in multiple languages. For discussion of how to mark templates and strings for translation please read the Django docs on translation. Tool Tips And User Hints Where possible all links, buttons and other “actionable” items should have a tooltip (via a title attribute or using one of the bootstrap tool tip libraries) which provides a concise description of what clicking the item will do. For example: <a class="..." title="Click this link to perform XYZ" href="..." > Foo </a> Other areas where tooltips are very useful are e.g. badges and labels where wording is abbreviated for display. For example: <i class="fa fa-badge" title="There are 7 widgets for review">7</i> <span title="This X has Y and Z for T">Foo baz qux</span> Formatting & Style Guide General formatting In general, any code you write should be PEP 8 compatible with a few exceptions. It is highly recommended that you use flake8 to check your code for PEP 8 violations. A flake8 config file is included with QATrack+; to view any flake8 violations run:
make flake8
# or
flake8 .
You may also want to use yapf, which can automatically format your code to conform with QATrack+’s style guide. A yapf configuration section is included in the setup.cfg file. To run yapf:
make yapf
Import Order Imports in your Python code should be split into three sections: - Standard library imports - Third party imports - QATrack+ specific imports and each section should be in alphabetical order. For example:
import math
import re
import sys
from django.apps import apps
from django.conf import settings
from django.contrib.auth.models import Group, User
from django.contrib.contenttypes.fields import (
    GenericForeignKey,
    GenericRelation,
)
from django_comments.models import Comment
import matplotlib
from matplotlib.backends.backend_agg import FigureCanvasAgg
import numpy
import scipy
from qatrack.qa import utils
from qatrack.units.models import Unit
isort is a simple tool for automatically ordering your imports, and an isort configuration is included in the setup.cfg file. Running The Test Suite Once you have QATrack+ and its dependencies installed you can run the test suite from the root QATrack+ directory using the py.test command:
./qatrackplus> py.test
Test session starts (platform: linux, Python 3.6.5, pytest 3.5.0, pytest-sugar 0.9.1)
Django settings: qatrack.settings (from ini file)
rootdir: /home/randlet/projects/qatrack/qatrackplus, inifile: pytest.ini
plugins: sugar-0.9.1, django-3.1.2, cov-2.5.1
qatrack/accounts/tests.py ✓✓✓
For more information on using py.test, refer to the py.test documentation. Important All new code you write should have tests written for it.
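As a concrete (and entirely hypothetical) illustration of the kind of test expected for new code, the snippet below defines a toy function and pytest tests for it; real tests should live alongside the app they exercise and follow the patterns already used in files such as qatrack/accounts/tests.py.

```python
# Hypothetical example only: the function and values below are made up to
# show the shape of a pytest test, not actual QATrack+ code.
import pytest

def percent_difference(measured, reference):
    """Toy function standing in for the new code you are contributing."""
    return 100.0 * (measured - reference) / reference

def test_percent_difference():
    assert percent_difference(101.0, 100.0) == pytest.approx(1.0)

def test_percent_difference_requires_nonzero_reference():
    with pytest.raises(ZeroDivisionError):
        percent_difference(1.0, 0.0)
```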
Any non-trivial code you wish to contribute back to QATrack+ will require you to write tests for the code, providing as high a code coverage as possible. You can measure code coverage in the following way:
make cover
Writing Documentation As well as writing tests for your new code, it will be extremely helpful for you to include documentation for the features you have built. The documentation for QATrack+ is located in the docs/ folder and is separated into the following sections: - User guide: Documentation for normal users of the QATrack+ installation. - Admin guide: Documentation for users of QATrack+ who are responsible for configuring and maintaining Test Lists, Units etc. - Tutorials: Complete examples of how to make use of QATrack+ features. - Install: Documentation for the people responsible for installing, upgrading, and otherwise maintaining the QATrack+ server. - Developers guide: You are reading it :) Please browse through the docs and decide where the most appropriate place to document your new feature is. While writing documentation, you can view the documentation locally in your web browser (at http://localhost:8008) by running one of the following commands:
make docs-autobuild
# -or-
sphinx-autobuild docs docs/_build/html -p 8008
The author of the code (or potentially their employer) retains the copyright of their work even when contributing code to QATrack+. However, unless specified otherwise, by submitting code to the QATrack+ project you agree to have it distributed using the same MIT license as QATrack+ uses. I’m not a developer, how can I help out? Not everyone has development experience or the desire to contribute code to QATrack+ but still wants to help the project out. Here are a few ways that you can contribute to the QATrack+ project without doing any software development: - Translations: Starting in QATrack+ v0.3.1, QATrack+ will have the infrastructure in place to support languages other than English. We will be making translation files available so that the community can create translation files for their native languages. Please get in touch with [email protected] if you are able to help out with this task! - Tutorials: Tutorials are a great way for newcomers to learn their way around QATrack+. If you have an idea for a tutorial, we would love to include it in our tutorials section! - Mailing List: QATrack+ has a mailing list which QATrack+ users and administrators may find useful for getting support and discussing bugs and/or features. Join the list and chime in! - Spread the word: The QATrack+ community has grown primarily through word of mouth. Please let others know about QATrack+ when discussing QA/QC software :) - Other: Have any ideas for acquiring development funding for the QATrack+ project? We’d love to hear them!
http://docs.qatrackplus.com/en/latest/developer/guide.html
2018-10-15T19:58:59
CC-MAIN-2018-43
1539583509690.35
[]
docs.qatrackplus.com
Change Management release notes

ServiceNow Change Management product enhancements and updates in the London release.

New in the London release

Changed in this release

Change Request Approval Records
Enhanced the approval summarizer in approval records associated with a change request to include additional information from the change request record.

Refresh Impacted Services
The Refresh Impacted Services option was previously available only for the Change Request table. In this release, this option is also available for tables that extend the task table. The list of these tables is driven by the com.snc.task.refresh_impacted_services property. This option populates the Impacted Services or Impacted CIs related list based on the primary CI, that is, the CI that you specify in the Configuration Item reference field on the Change Request form.

Activation information
To view the Change Schedules page, you must activate the Change Management - Change Schedule plugin (com.snc.change_management.soc). For more information, see Change Advisory Board (CAB) workbench.
https://docs.servicenow.com/bundle/london-release-notes/page/release-notes/it-service-management/change-management-rn.html
2018-10-15T19:46:28
CC-MAIN-2018-43
1539583509690.35
[]
docs.servicenow.com
Network Address Translation (NAT) and port mapping configuration are required if Horizon Clients connect to virtual machine-based desktops on different networks. In the examples included here, you must configure external addressing information on the desktop so that Horizon Client can use this information to connect to the desktop by using NAT or a port mapping device. This external addressing information serves the same purpose as the External URL and PCoIP External URL settings on Horizon 7 Connection Server and security server.

When Horizon Client is on a different network and a NAT device is between Horizon Client and the desktop running the plug-in, a NAT or port mapping configuration is required. For example, if there is a firewall between the Horizon Client and the desktop, the firewall is acting as a NAT or port mapping device.

An example deployment of a desktop whose IP address is 192.168.1.1 illustrates the configuration of NAT and port mapping. A Horizon Client system with an IP address of 192.168.1.9 on the same network establishes a PCoIP connection by using TCP and UDP. This connection is direct, without any NAT or port mapping configuration.

If you add a NAT device between the client and desktop so that they are operating in different address spaces and do not make any configuration changes to the plug-in, the PCoIP packets will not be routed correctly and will fail. In this example, the client is using a different address space and has an IP address of 10.1.1.9. This setup fails because the client will use the address of the desktop to send the TCP and UDP PCoIP packets. The destination address of 192.168.1.1 will not work from the client network and might cause the client to display a blank screen.

To resolve this problem, you must configure the plug-in to use an external IP address. If externalIPAddress is configured as 10.1.1.1 for this desktop, the plug-in gives the client an IP address of 10.1.1.1 when making desktop protocol connections to the desktop. For PCoIP, the PCoIP Secure Gateway service must be started on the desktop for this setup.

Port mapping is needed when the desktop uses the standard PCoIP port 4172 but the client must use a different destination port that is mapped to port 4172 at the port mapping device. If the port mapping device maps port 14172 to 4172, the client must use a destination port of 14172 for PCoIP. To configure the plug-in for this setup, set externalPCoIPPort to 14172.

In a configuration which uses both NAT and port mapping, externalIPAddress is set to 10.1.1.1, which is network translated to 192.168.1.1, and externalPCoIPPort is set to 14172, which is port mapped to 4172.

As with the external PCoIP TCP/UDP port configuration for PCoIP, if the RDP port (3389) or the Framework Channel port (32111) is port mapped, you must configure externalRDPPort and externalFrameworkChannelPort to specify the TCP port numbers that the client will use to make these connections through a port mapping device.
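To summarize the combined NAT and port-mapping example as name/value pairs: where and how these settings are stored depends on how the View Agent Direct-Connection plug-in is configured in your deployment (consult the plug-in configuration documentation), so the lines below only restate the values discussed above rather than showing an exact configuration file format:

    externalIPAddress = 10.1.1.1   # NAT device translates this to the desktop address 192.168.1.1
    externalPCoIPPort = 14172      # port mapping device maps this to the standard PCoIP port 4172

externalRDPPort and externalFrameworkChannelPort would be set the same way, but only if the RDP (3389) or Framework Channel (32111) ports are actually remapped by the port mapping device.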
https://docs.vmware.com/en/VMware-Horizon-7/7.6/view-agent-direct-connection-plugin-administration/GUID-04525A3D-2A7C-48EE-861D-F2EBC0C50636.html
2018-10-15T19:47:26
CC-MAIN-2018-43
1539583509690.35
[array(['images/GUID-8435862C-C61E-4925-A895-0EC9279E9741-high.png', 'This graphic illustrates the connection between a PCoIP client and PCoIPserver on the same network.'], dtype=object) array(['images/GUID-754ABC59-A340-4E29-A14F-E3E5582B057A-high.png', 'This graphic illustrates a failure on a connection between the PCoIP client and server using a NAT Device.'], dtype=object) array(['images/GUID-1E02817D-8CAF-4634-8E3E-B78B1584E456-high.png', 'This graphic illustrates setting up PCoIP client, security gateway, and server using a NAT Device and Port Mapping.'], dtype=object) ]
docs.vmware.com
Constants must be initialized as they are declared. For example:

    class Calendar1
    {
        public const int months = 12;
    }

Note: Use caution when you refer to constant values defined in other code such as DLLs. If a new version of the DLL defines a new value for the constant, your program will still hold the old literal value until it is recompiled against the new version.

Multiple constants of the same type can be declared at the same time, for example:

    class Calendar2
    {
        const int months = 12, weeks = 52, days = 365;
    }

The expression that is used to initialize a constant can refer to another constant if it does not create a circular reference. For example:

    class Calendar3
    {
        const int months = 12;
        const int weeks = 52;
        const int days = 365;

        const double daysPerWeek = (double) days / (double) weeks;
        const double daysPerMonth = (double) days / (double) months;
    }

Constants can be marked as public, private, protected, internal, protected internal or private protected. These access modifiers define how users of the class can access the constant. For more information, see Access Modifiers.

Because a constant's value is the same for every instance of its type, code outside the defining class accesses it through the class name rather than through an instance, for example:

    int birthstones = Calendar.months;

C# Language Specification

For more information, see the C# Language Specification. The language specification is the definitive source for C# syntax and usage.
https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/constants
2018-10-15T19:24:20
CC-MAIN-2018-43
1539583509690.35
[]
docs.microsoft.com
Setting up, performing, and reviewing your first test list step-by-step¶

This guide will take you through:

- Logging in and accessing the admin interface
- Changing the name displayed at the top of your QATrack+ website
- Creating a new Unit
- Creating different user groups
- Configuring test statuses
- Setting up a test list for performing a linac output measurement
- Assigning the test list to a unit
- Setting references and tolerances for your tests
- Performing the Test List
- Reviewing the Test List Instance

This tutorial assumes your website administrator has QATrack+ configured and running on a server somewhere and has assigned you a username/password with superuser status.

Logging in and accessing the admin interface¶

Access the URL for QATrack+ (your website administrator should have provided you with a URL) in your favourite web browser. QATrack+ looks and works best with Chrome or Firefox, but Internet Explorer versions 11 and up are also supported. If you are not already logged in, you will be taken to the login page:

[Figure: The QATrack+ login screen]

Enter your username and password and click the Log In button. After logging in, you should see a screen similar to the one below:

[Figure: QATrack+ home screen]

Click the little dropdown arrow next to your name in the top right hand corner, and then select the Admin option from the dropdown menu.

[Figure: Accessing the QATrack+ admin section]

This will take you to the main admin page where you will be doing all of your QATrack+ configuration.

Setting up the name of your QATrack+ website¶

As you may have noticed in earlier screen shots, the words “example.com” are displayed at the top of your website. Let’s change that to “Your Hospital Name”.

From the main admin page find the Websites section and click on the Websites link.

[Figure: Sites section of the admin]

Click on the example.com entry under the Domain name column header.

[Figure: The example.com site object]

Set the relevant fields (ask your admin if you’re not sure of the domain name to use) and click Save when you’re finished.

[Figure: Setting the website name]

If you now return to the main site (at any time you can click the QATrack+ administration header at the top of the admin pages to return to the main QATrack+ site) you should see your site now says Your Hospital Name+ at the top rather than example.com.

[Figure: Changed site name]

In the next step of this tutorial we will configure a new Unit.

Creating a new Unit¶

In order to prevent duplicating information here, please follow the instructions in the Units administration docs to create a new Unit before continuing.

Creating a New User Group¶

After you’ve created your Unit, return to the main admin page, click on the Groups link under the Auth section and then click the Add group button in the top right.
Set the Name field to Physics and choose the following set of permissions:

- attachments | attachment | Can add attachment
- qa | frequency | Choose QA by Frequency
- qa | test instance | Can review & approve tests
- qa | test instance | Can review & approve self-performed tests
- qa | test instance | Can skip tests without comment
- qa | test instance | Can view charts of test history
- qa | test instance | Can see test history when performing QA
- qa | test list instance | Can add test list instance
- qa | test list instance | Can override date
- qa | test list instance | Can perform subset of tests
- qa | test list instance | Can save test lists as ‘In Progress’
- qa | test list instance | Can view previously completed instances
- qa | test list instance | Can change test list instance
- qa | unit test collection | Can view tli and utc not visible to user’s groups
- qa | unit test collection | Can view program overview
- qa | unit test info | Can view Refs and Tols
- service_log | return to service qa | Can add return to service qa
- service_log | return to service qa | Can perform return to service qa
- service_log | return to service qa | Can view return to service qa
- service_log | service event | Can add service event
- service_log | service event | Can review service event
- service_log | service event | Can view service event
- units | unit available time | Can change unit available time
- units | unit available time edit | Can add unit available time edit
- units | unit available time edit | Can change unit available time edit

Your group should look like the following:

[Figure: Defining a physics group]

Click Save and you will now see your new group in the listings page.

[Figure: Group listings]

The last step for this section is to add yourself to the Physics group. Visit your user profile by going to the Users section under the Auth section and clicking on your username.

[Figure: Choose a user to edit]

On the next page find the Groups field and add Physics to the Chosen Groups list.

[Figure: Selecting a group]

Click on Save to finalize the addition of yourself to the Physics group.

Creating Test Statuses¶

We are next going to create three test statuses: an Unreviewed status, which will be the default Test Instance Status given to test data when they are performed; an Approved status that can be applied to tests after they have been reviewed; and a Rejected status for tests that were performed incorrectly.

From the main admin page click on the Statuses link under the QA section and click the Add test instance status button in the top right. Give the status a Name of Unreviewed, a Slug of unreviewed and a description of "Default status for tests which have just been completed." Next, check off the Is default checkbox and then click Save and add another.

[Figure: Unreviewed status]

Create an Approved status similar to the Unreviewed status, but this time leave the Is default box unchecked and also uncheck the Requires review checkbox. You should also select a new colour for the status (e.g. green for Approved). Click Save when you’re finished.

[Figure: Approved status]

And finally create a Rejected status similar to the Unreviewed status, but this time leave the Is default box unchecked and also uncheck the Requires review and Is Valid checkboxes. You should also select a new colour for the status (e.g. red for Rejected). Click Save when you’re finished.

[Figure: Rejected status]

You should now see your three statuses in the listing.

[Figure: Test status listings]

Now that we have done the initial configuration we can begin to cover test and test list configuration.
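As an optional aside, this kind of initial group and permission configuration can also be scripted from a Django shell (python manage.py shell) instead of clicking through the admin. The snippet below is only a rough sketch: the permission codename shown is an assumption based on the permission labels above, so verify the actual codenames in your database before relying on it.

    from django.contrib.auth.models import Group, Permission

    # create (or fetch) the Physics group
    physics, _ = Group.objects.get_or_create(name="Physics")

    # look up a permission by its codename and attach it to the group;
    # "add_testlistinstance" is assumed here to correspond to the
    # "qa | test list instance | Can add test list instance" entry above
    perm = Permission.objects.get(codename="add_testlistinstance")
    physics.permissions.add(perm)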
Creating a new Test List¶

Creating the required Tests¶

In this part of the tutorial we will create the tests required to calculate a dose. This will demonstrate a number of different test types including composite calculations. In order to calculate dose we need to create 6 tests (k_Q and other corrections such as P_pol, P_ion etc. are ignored for simplicity):

- Temperature (numerical test)
- Pressure (numerical test)
- Temperature-pressure correction (Ftp - a composite calculation)
- Ion chamber reading (numerical test)
- Ndw for converting our ion chamber reading to dose (constant value)
- Dose (the final result we are interested in - a composite calculation)

To create the tests, visit the Tests link under the QA section from the main admin page and then click on Add test in the top right hand corner.

Give our first test the name Temperature °C and the macro name of temperature. We will leave the description blank for now. Since we haven’t created any categories yet, we will do so now. Click the green cross next to the Category: drop down and create a Dosimetry category.

[Figure: Creating a dosimetry category]

For the Type field choose Simple Numerical, indicating that this test will require the user to enter a number. You may also check the Allow auto review of this test? checkbox if you intend to use auto review.

[Figure: Full temperature test]

Click Save and add another when you are done. Follow the same procedure to define 1) a Pressure (mmHg) test, ensuring that you use the macro name pressure (you can use the Dosimetry category we created earlier for the pressure and all of the remaining tests), and 2) an Ion Chamber Reading (nC) test using the macro name reading.

Next we will create our first composite test for our temperature-pressure correction. Give this test a name of Temperature-Pressure Correction and a macro name of p_tp. From the Type: dropdown select Composite and you will notice that a new Calculation Procedure text box will appear below. In that text box enter the following Python snippet:

    p_tp = (temperature + 273.15)/295.15*760/pressure

Note here that we used the macro names temperature and pressure from our previously defined tests to define how our Temperature-Pressure Correction test result will be calculated.

[Figure: P_TP calculation]

When you are finished, click Save and add another. Define a test called N_DW with the macro name n_dw. This time choose a Type of Constant and enter a value of 0.05 in the Constant value field that appears.

[Figure: N_DW]

Once that is finished we will add our final test for calculating dose. Create a composite test with the name Dose, the macro name dose and a calculation procedure defined as:

    corrected_reading = reading*p_tp
    dose = n_dw*corrected_reading

[Figure: Dose test]

Note that the dose calculation is a composite test based on a previous composite result (p_tp). QATrack+ has a dependency resolution system to allow this sort of composite-of-composite calculation.

Once that is complete, click on Save which will take you back to the test listings. If all the steps have been completed correctly you should see 6 tests listed:

[Figure: Test listings for dose calculations]

In the next step of the tutorial we will group these tests into a test list.

Creating the Test List¶

To create the test list, visit the Test Lists link under the QA section from the main admin page and then click on Add test list in the top right hand corner. Give the test list the name Machine Output and slug machine-output. We will ignore the description fields for now.
Under the Test List Members section, click on the green cross / Add another Test list membership link at the bottom to make a 6th Test text box appear (you can ignore the Sublist text box; it allows you to include other Test Lists within a parent Test List). Now click the first magnifying glass and click on the Temperature test in the window that pops up:

[Figure: Selecting a test]

Repeat this step for the other 5 tests we defined, at which point the Test list memberships section should look like:

[Figure: Test list memberships]

Now click Save and that’s it! Now that we’ve created our tests and test list we can assign it to the unit we created earlier. This is covered in the next step of this tutorial.

Assigning the test list¶

In this part of the tutorial we will assign our test list to a unit and ensure that it is functioning correctly on the main site.

To assign the test list to a unit, visit the Assign Test Lists to Units link under the QA section from the main admin page and then click on Add unit test collection in the top right hand corner.

Select the Test Unit from the Unit: dropdown, and then create a new frequency by clicking on the green cross next to the Frequency dropdown. Give the frequency the name Monthly, slug monthly and enter 28, 28, 35 for Nominal interval, Due Interval and Overdue interval, respectively.

[Figure: Creating a new frequency]

Select the Physics option from the Assigned to: dropdown and add the Physics group to the Visible to section. Next select test list from the Test List or Test List Cycle dropdown. After selecting test list you will be able to select Machine Output from the Tests collection dropdown.

[Figure: Assigning to a unit]

When you’re finished click Save.

We can now set a reference and tolerance value for the dose calculated by our Test List. To do this, visit the Set References and Tolerances link under the QA section from the main admin page and then click on the Dose link for the Test Unit.

Create a new Tolerance by clicking the green cross beside the Tolerance field. Select Percentage for the Type and set Action Low = -3, Tolerance Low = -2, Tolerance High = 2, and Action High = 3 and then click Save. This will create a Tolerance which signals the user if a test is outside of tolerance (2%) or action (3%) levels relative to the reference value.

[Figure: Creating a new tolerance]

Set the New reference value to 1 and then click Save. We are now ready to perform the test list.

Performing the Test List¶

Visit the main site (you can click the QATrack+ administration header at the top of the admin page) and select the Choose Unit link from the Perform QA dropdown at the top of the page. On the next page choose the Monthly option from the Test Unit drop down.

[Figure: Choosing Monthly]

On the next page click Perform beside the Machine Output test list.

[Figure: Monthly test listings]

You should now see the test list you defined:

[Figure: Final test list]

Fill in sample values of:

- Temperature = 24
- Pressure = 760
- Ion Chamber Reading = 20.2

and you should see the Temperature-Pressure Correction and Dose values calculated as 1.007 and 1.017 respectively. The Status column next to Dose should indicate the Test is within tolerance.

[Figure: Calculated results]

Notice that the Status for all the other tests shows No Tol Set. This is because we haven’t set reference values and tolerance/action levels for those tests. For more information on Reference & Tolerance values see here.

You may now click Submit QA Results and you will be returned to the previous page.
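Before moving on to review: if you want to sanity-check the numbers calculated above outside of QATrack+, the two composite procedures can be evaluated directly in a plain Python interpreter with the sample values (this is ordinary arithmetic, not QATrack+ code):

    >>> temperature, pressure, reading, n_dw = 24, 760, 20.2, 0.05
    >>> p_tp = (temperature + 273.15) / 295.15 * 760 / pressure
    >>> round(p_tp, 3)
    1.007
    >>> dose = n_dw * reading * p_tp
    >>> round(dose, 3)
    1.017

The 1.017 result is 1.7% above the reference value of 1 that we set, which is why it falls inside the 2% tolerance level but close to it.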
You should notice at the top of the page that there is now an indication that there is 1 unreviewed Test List Instance:

[Figure: Visual indicators of review queue]

Reviewing the Data¶

Periodically, whoever is responsible for ensuring QA has been completed satisfactorily should go through all unreviewed Test List Instances and update their status to either Approved or Rejected (note that Rejected is to be used if a Test was performed incorrectly, not if it was performed correctly but gave a failing result).

Select the Unreviewed - All Groups menu item from the Review Data menu and then click Review beside the Test List Instance we just performed:

[Figure: Unreviewed Test Lists]

On the next page you will see details of the Test List Instance. Select the Approved status from the Status drop down to change the status from Unreviewed. Add a comment at the bottom of the page if desired and then click Update Test Statuses.

[Figure: Reviewing a test list]

That Test List Instance will now be removed from the Unreviewed queue. Note that it is also possible to automate review and approval.

Wrapping Up¶

We have now gone through the basics of taking QATrack+ from a blank installation all the way to performing and reviewing our first Test List! Check out the admin guide (for configuration) and users guide (for end user instructions) for more information.
http://docs.qatrackplus.com/en/latest/tutorials/step_by_step/index.html
2018-10-15T19:59:19
CC-MAIN-2018-43
1539583509690.35
[array(['../../_images/login.png', 'The QATrack+ login screen'], dtype=object) array(['../../_images/after_login.png', 'QATrack+ home screen'], dtype=object) array(['../../_images/access_admin.png', 'Accessing the QATrack+ admin section'], dtype=object) array(['../../_images/sites_section.png', 'Sites section of the admin'], dtype=object) array(['../../_images/example_dot_com.png', 'The example.com site object'], dtype=object) array(['../../_images/set_name.png', 'Setting the website name'], dtype=object) array(['../../_images/changed_name.png', 'Changed site name'], dtype=object) array(['../../_images/physics_group.png', 'Defining a physics group'], dtype=object) array(['../../_images/group_listing.png', 'Group listings'], dtype=object) array(['../../_images/edit_user.png', 'Choose a user to edit'], dtype=object) array(['../../_images/select_group.png', 'selecting a group'], dtype=object) array(['../../_images/unreviewed1.png', 'Unreviewed status'], dtype=object) array(['../../_images/approved1.png', 'Approved status'], dtype=object) array(['../../_images/rejected.png', 'Rejected status'], dtype=object) array(['../../_images/test_statuses.png', 'Test status listings'], dtype=object) array(['../../_images/category1.png', 'creating a dosimetry category'], dtype=object) array(['../../_images/temperature.png', 'Full temperature test'], dtype=object) array(['../../_images/p_tp.png', 'P_TP calculation'], dtype=object) array(['../../_images/n_dw.png', 'N_DW'], dtype=object) array(['../../_images/dose.png', 'dose test'], dtype=object) array(['../../_images/dose_tests.png', 'Test listings for dose calculations'], dtype=object) array(['../../_images/select_test1.png', 'Selecting a test'], dtype=object) array(['../../_images/memberships.png', 'Test list memberships'], dtype=object) array(['../../_images/new_frequency.png', 'creating a new frequency'], dtype=object) array(['../../_images/assign_to_unit2.png', 'Assigning to a unit'], dtype=object) array(['../../_images/new_tolerance.png', 'Creating a new tolerance'], dtype=object) array(['../../_images/choose_monthly.png', 'Choosing Monthly'], dtype=object) array(['../../_images/test_list_listing.png', 'Monthly test listings'], dtype=object) array(['../../_images/final_test_list.png', 'Final test list'], dtype=object) array(['../../_images/calculated_results.png', 'Calculated results'], dtype=object) array(['../../_images/unreviewed_indicators.png', 'Visual indicators of review queue'], dtype=object) array(['../../_images/unreviewed_lists.png', 'Unreviewed Test Lists'], dtype=object) array(['../../_images/review_list.png', 'Reviewing a test list'], dtype=object) ]
docs.qatrackplus.com