Dataset schema:
- content: string (length 0 to 557k)
- url: string (length 16 to 1.78k)
- timestamp: timestamp[ms]
- dump: string (length 9 to 15)
- segment: string (length 13 to 17)
- image_urls: string (length 2 to 55.5k)
- netloc: string (length 7 to 77)
The Chart from Table macro has been available since the release of Table Filter and Charts 3.0.0. This page lists the features available in the native Chart macro supplied with Confluence and in the Chart from Table macro bundled with the Table Filter and Charts app. Despite the similar objectives both macros were created for, they have several differences, which can be found in the table below.
https://docs.stiltsoft.com/display/public/TFAC/Comparison+of+Chart+macro+against+Chart+from+Table+macro
2021-05-06T01:44:27
CC-MAIN-2021-21
1620243988724.75
[]
docs.stiltsoft.com
Gradient: The gradient edited by the user.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEditor;

public class EditorGUIGradientField : EditorWindow
{
    Gradient gradient = new Gradient();

    [MenuItem("Examples/Gradient Field demo")]
    static void Init()
    {
        EditorWindow window = GetWindow(typeof(EditorGUIGradientField));
        window.position = new Rect(0, 0, 400, 199);
        window.Show();
    }

    void OnGUI()
    {
        gradient = EditorGUI.GradientField(new Rect(3, 3, position.width - 6, 50), "Gradient", gradient);
    }
}
https://docs.unity3d.com/2020.1/Documentation/ScriptReference/EditorGUI.GradientField.html
2021-05-06T00:37:30
CC-MAIN-2021-21
1620243988724.75
[]
docs.unity3d.com
In this section of the slide settings you can find options related to the animations between two slides. With Duration, you can set the time interval between slide changes: the slide will stay visible for the time specified here. This value is in milliseconds, so a value of 1000 means 1 second. Please don't use 0 or very low values. With the Time shift option you can control the timing of the layer animations when the slider changes to this slide with a 3D/2D transition. Zero means that the layers of this slide will animate in when the slide transition ends. You can shift the starting time of the layer animations with positive or negative values. By clicking on the Select transition button you can select your desired slide transitions. There are many predefined transitions (more than 200), separated into two categories, 2D and 3D. You can also select more than one; in this case a randomly selected transition will be used each time. If you hover over a transition name, a sample will play in a modal to show you what that transition looks like. You can also set a Custom duration for the slide transition instead of the default one (1000 ms).
http://docs.offlajn.com/creative-slider/53-slide-settings/221-timing-and-transition
2021-05-06T01:11:06
CC-MAIN-2021-21
1620243988724.75
[]
docs.offlajn.com
Instructions for a supported install of Homebrew are on the homepage. This script installs Homebrew to its preferred prefix (/usr/local for macOS Intel, /opt/homebrew for Apple Silicon) so that you don't need sudo when you brew install. It is a careful script; it can be run even if you have stuff installed in /usr/local already. It tells you exactly what it will do before it does it, and you have to confirm everything it will do before it starts.

Requirements: a 64-bit Intel CPU or Apple Silicon CPU,1 macOS 10.14 or higher,2 Command Line Tools (CLT) for Xcode (from xcode-select --install or developer.apple.com/downloads) or Xcode,3 and a Bourne-compatible shell for installation (e.g. bash or zsh).4

You can set HOMEBREW_BREW_GIT_REMOTE and/or HOMEBREW_CORE_GIT_REMOTE in your shell environment to use geolocalized Git mirrors to speed up Homebrew's installation with this script and, after installation, brew update.

export HOMEBREW_BREW_GIT_REMOTE="..." # put your Git mirror of Homebrew/brew here
export HOMEBREW_CORE_GIT_REMOTE="..." # put your Git mirror of Homebrew/homebrew-core here
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

The default Git remote will be used if the corresponding environment variable is unset.

Alternatively, just extract (or git clone) Homebrew wherever you want. Just avoid: /tmp subdirectories, because Homebrew gets upset; and /sw and /opt/local, because build scripts get confused when Homebrew is there instead of Fink or MacPorts, respectively. However, do yourself a favour and install to /usr/local on macOS Intel, /opt/homebrew on macOS ARM, and /home/linuxbrew/.linuxbrew on Linux. Some things may not build when installed elsewhere. One of the reasons Homebrew just works relative to the competition is because we recommend installing here. Pick another prefix at your peril!

mkdir homebrew && curl -L https://github.com/Homebrew/brew/tarball/master | tar xz --strip 1 -C homebrew

This creates a Homebrew installation wherever you extract the tarball. Whichever brew command is called is where the packages will be installed. You can use this as you see fit, e.g. a system set of libs in /usr/local and tweaked formulae for development in ~/homebrew. Uninstallation is documented in the FAQ.

1 For 32-bit or PPC support see Tigerbrew. 2 10.14 or higher is recommended; 10.9–10.13 are supported on a best-effort basis. For 10.4–10.6 see Tigerbrew. 3 Most formulae require a compiler. A handful require a full Xcode installation. You can install Xcode, the CLT, or both; Homebrew supports all three configurations. Downloading Xcode may require an Apple Developer account on older versions of Mac OS X. Sign up for free here. 4 The one-liner installation method found on brew.sh requires a Bourne-compatible shell (e.g. bash or zsh). Notably, fish, tcsh and csh will not work.
https://docs.brew.sh/Installation
2021-05-06T00:09:52
CC-MAIN-2021-21
1620243988724.75
[]
docs.brew.sh
DigitalOcean Droplets are Linux-based virtual machines (VMs). They run on top of virtualized hardware. Each Droplet that you create is a new server you can use, either standalone or as part of a larger, cloud-based infrastructure. We will be setting one up as a HYDRA staking node/wallet. This same method may be used with other VPS providers. To access the DigitalOcean Control Panel and create a Droplet, you need a DigitalOcean account. You can create one from the DigitalOcean new account registration page if you don't already have one. After you log in to the control panel, let's set up some of the security for our VPS. We will create our droplet and then resume the Firewall setup. It is highly recommended to enable 2FA with Google Authenticator as well as use the SMS backup method. You can find in-depth instructions here: We will begin by creating and importing SSH keys for logging into our HYDRA node. Please follow THIS guide to create and import SSH keys created specifically for DigitalOcean: After you log in to the control panel, click the green Create button in the top right, or click Droplets on the left panel and then click 'Create droplet' to open the create menu. In the create menu you can configure your Droplet and choose additional options to enable (like backups and monitoring). The most popular defaults are pre-selected, so you can scroll to the bottom of the page and create a Droplet immediately, or you can customize any of the options in each section. Choose the price plan that works for you. Users have been successful staking with the minimum-sized droplet (currently listed under the 'Basic' plan: $5/month, 1 GB RAM, 1 CPU, 25 GB SSD, 1000 GB transfer). However, it is recommended to use the higher-tier $20/month option if it is within your budget, since performance will be far more stable with the added memory, and the faster transfer speeds may be needed down the road as the HYDRA blockchain grows. For this guide we will be using Ubuntu 20.04 (LTS) x64 on the Basic plan for $5/month. Scrolling further down in droplet creation there is the option to choose your datacenter region. This depends on your own personal preference, as each area should be similar in stability. Under authentication it is recommended to select the SSH keys that we set up earlier. If you have created more than one key you can select them all for use. Finally, click 'Create droplet' to complete the droplet creation. You have the option of enabling system backups if you require them, at an extra fee. Initialization will take about a minute. Afterwards you will see your new droplet as well as the IP address you will connect to it with. Please take note of the IP address. The address will generally not change and it will be your way of accessing the droplet. We'll continue with our security setup by enabling some custom rules for our Firewall. We'll need to set up the Firewall and allow port TCP/3338 for node connectivity, as well as allowing Network Time Protocol (NTP) to sync the time accurately on port UDP/123. Lastly we will allow port TCP/22 for logging in over SSH using our previously created keys. On the left panel under 'MANAGE' click on 'Networking', then click 'Firewalls', and then click 'Create Firewall'. Give the new firewall rule a name. We will be adding TCP port 3338 and TCP port 22, as well as UDP port 123 for the network time protocol sync service, under Inbound Rules. Later on, as needed, we can delete or add back this SSH rule whenever we require access to our node.
Under Type enter SSH, and under Protocol enter TCP; for Port Range enter 22, as in the image below. In the New Rule box add a custom rule and allow port 3338: set Type to Custom, under Protocol enter TCP, and for Port Range enter 3338. In a second New Rule box add another custom rule and allow port 123: set Type to Custom, under Protocol enter UDP, and for Port Range enter 123. Below this section, under 'Apply to Droplets', you can search for your droplet's name (usually it will show your list of droplets when you type in 'ubuntu', and you can then select one by clicking on it). Apply the firewall rules to the specific droplet and then click Create Firewall. Later, after the initial setup, the port 22 SSH rule can be deleted for added security and only added back when you want to access the droplet remotely. You can now log into the VPS with PuTTY as well as use WinSCP to access files and upload/download your wallet.dat as needed. You can download PuTTY from their official site here: Here is a quick guide to configuring PuTTY. By default the username for your droplet will be root, and you will not need a password unless you set one during the SSH key setup earlier. If you have already set up your SSH keys you can scroll to 'Configuring Putty' for a quick reference on configuring your droplet for access: Accessing your server with MobaXterm. We highly recommend MobaXterm as an SSH client. It's a quantum leap ahead of default PuTTY and solves lots of issues (e.g. copy/paste, or support for multi-tab Linux apps such as byobu). Such issues arise often when using SSH from Windows and can be really frustrating for new users. Below is a quick guide to connecting to your droplet using MobaXterm as an alternative to using PuTTY: MobaXterm can be downloaded from their official site HERE: After installation the MobaXterm program will be in your Windows start menu and can be found by pressing the Windows button and typing 'MobaXterm'. On first initialization you will be prompted to allow the program through the firewall; you can safely click Yes and OK to allow it through public networks. Once the program has started we will click 'Session' and then select 'SSH'. Enter the IP address of your droplet in the 'Remote Host' box and then click 'Advanced SSH settings'. Check the box 'Use private key' and select the private key (with .ppk extension) that you generated earlier. Then click 'Open' and press 'OK'. You can now continue with enabling the internal firewall and the installation of the HYDRA wallet below. The next time you start MobaXterm your server will be accessible by simply selecting its IP from the left panel. To learn more about further capabilities, such as copying and downloading files as well as X server display access, please read the MobaXterm documentation here: We will enable UFW (Uncomplicated Firewall) and allow ports TCP 22, UDP 123 and TCP 3338. If you have been successful in connecting to your droplet through SSH then you should be looking at a terminal screen. First we will add some rules to the UFW firewall and then enable it. Allow connections to ports 22, 123 and 3338 by entering these commands on separate lines:

ufw allow 22/tcp
ufw allow 3338/tcp
ufw allow 123/udp

Enable the firewall with this command:

ufw enable

If you have previously generated a wallet.dat file and wish to use it, you can place it in /root/.hydra/wallets/ (you will need to create this directory if this is a new installation).
If you are importing your wallet from a private key, this step is not necessary. Please read the documentation about SCP file transfer in MobaXterm: or by using WinSCP: if you wish to upload or download files such as your wallet.dat later using a file explorer. It is highly recommended that you ensure backups of wallet.dat are made and stored in a safe location. The latest build updates can be found at We will be using an install script here to speed up the installation process. The script is made for Ubuntu 20.04; however, other operating systems and their scripts are available HERE. If you run into any problems or wish to install manually, you can follow the guide from here: To use the script for Ubuntu 20.04, begin with the following two commands to update the system.

sudo apt update

Answer yes when required (updating first, separately from the script, helps avoid an issue with prompts for ssh configuration settings where the menu is inactive):

sudo apt upgrade

If any dialogs come up requesting changes to locally modified settings, choose "keep the local version" or press Enter for the default choice. When the update completes, paste this command to initialize the script:

wget -O - | bash

After several minutes the script should finish copying all files and setting up directories, and start up the node. The node will take some time to sync to the latest block. If you run into any errors you can try manually updating (remember to enter y and Enter to continue if requested):

sudo apt upgrade -y

Then try running the script again. If there are still issues, please follow the manual installation guide or contact us on Telegram. After successful setup the node will continue to run even after you log out from the shell, provided that the server is not shut down or restarted. If you restart the server, you can start the node again by navigating to ~/Hydra/bin/ and typing ./hydrad -daemon - this will initialize the daemon, with which you can use the cli to send commands. For example, if you have already created a wallet and have the private keys, you can import them with:

./hydra-cli importprivkey <key>

You can enable staking as well as perform different types of transactions and chain commands. Please see more information and full documentation of the available commands HERE. To see information about your running node, navigate to ~/Hydra/bin with the command cd ~/Hydra/bin and then enter ./hydra-cli getinfo - here you can see which block you are currently synced to, as well as wallet version, connections, and balances. For information about staking status and more, please see HERE for full documentation of wallet commands. To clear your terminal history of any stored keys and passwords use:

history -c && history -w

At this point your wallet is hopefully running and set up to start staking. The wallet.dat and blockchain data are stored in /root/.hydra/ or /home/username/.hydra/ depending on your setup. It is recommended to lock your wallet with a passphrase from the /Hydra/bin/ directory:

./hydra-cli encryptwallet [yourpassphrase]

We can unlock it only for staking when it is left running on any system or VPS:

./hydra-cli walletpassphrase [yourpassphrase] 9999999 true

If you lock the wallet, you will then need to enter the passphrase to use the wallet to send out transactions.
The following command unlocks the wallet for five minutes: ./hydra-cli walletpassphrase [yourpassphrase] 300 For further instructions on operating the node and unlocking the wallet for staking, please see the documentation here: If your node is running and staking and you have imported the keys where your HYDRA coins are stored, you may now choose to delete the SSH rule from the DigitalOcean firewall section and add it back at a later time when you wish to access the Droplet again. This adds an extra layer of security, as there is no way for anyone to access your server without two-factor authentication with DigitalOcean and manually reopening the port. You can visit explorer.hydrachain.org to check on the staking status of your node by searching for your address and seeing how many blocks have been mined, as well as your balances and transactions. Good luck staking!
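If you later want to script simple status checks from the droplet itself, here is a minimal, hypothetical Python wrapper around the hydra-cli commands shown above. It is only a sketch: it assumes the ~/Hydra/bin layout from this guide, and the helper name cli is invented for illustration.

import os
import subprocess

HYDRA_CLI = os.path.expanduser("~/Hydra/bin/hydra-cli")  # path assumed from this guide

def cli(*args):
    # Run one of the hydra-cli commands shown in this guide and return its raw output.
    result = subprocess.run([HYDRA_CLI, *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Same information as running ./hydra-cli getinfo manually:
# current block, wallet version, connections, balances.
print(cli("getinfo"))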
https://docs.hydrachain.org/staking-hydra-coins/staking-with-linux-digitalocean
2021-05-06T00:49:55
CC-MAIN-2021-21
1620243988724.75
[]
docs.hydrachain.org
Shutdown and Draining The default behavior of most SDKs is to send out events over the network asynchronously in the background. This means that some events might be lost if the application shuts down unexpectedly. The SDKs provide mechanisms to cope with this. The Apple SDK automatically stores the Sentry events on the device's disk before shutdown.
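The Apple SDK handles this automatically, but the underlying idea generalizes to SDKs where you drain the queue yourself: block briefly before a planned shutdown so buffered events get sent. A minimal sketch using Sentry's Python SDK, shown purely to illustrate the draining concept (the DSN value is a placeholder):

import sentry_sdk

sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")  # placeholder DSN

sentry_sdk.capture_message("about to shut down")

# Block for up to 2 seconds so queued events are sent before the process exits.
sentry_sdk.flush(timeout=2.0)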
https://docs.sentry.io/platforms/apple/configuration/draining/
2021-05-06T01:13:13
CC-MAIN-2021-21
1620243988724.75
[]
docs.sentry.io
Bitbucket Cloud

Site admins can sync Git repositories hosted on Bitbucket Cloud with Sourcegraph so that users can search and navigate the repositories. To connect Bitbucket Cloud to Sourcegraph: - Depending on whether you are a site admin or user: - Site admin: Go to Site admin > Manage repositories > Add repositories - User: Go to Settings > Manage repositories. - Select Bitbucket.org. - Configure the connection to Bitbucket Cloud using the action buttons above the text field; additional fields can be added using Cmd/Ctrl+Space for auto-completion. See the configuration documentation below. - Press Add repositories. NOTE: Adding code hosts as a user is currently in private beta.

Repository syncing

Currently, all repositories belonging to the configured user will be synced. In addition, there is one more field for configuring which repositories are mirrored: teams - a list of teams that the configured user has access to whose repositories should be synced; exclude - a list of repositories to exclude, which takes precedence over the teams field.

HTTPS cloning

Sourcegraph clones repositories from Bitbucket Cloud via HTTP(S), using the required username and appPassword fields you provide in the configuration.

Internal rate limits

Internal rate limiting can be configured to limit the rate at which requests are made from Sourcegraph to Bitbucket Cloud. If enabled, the default rate is set at 7,200 per hour (2 per second). Bitbucket Cloud connections support the following configuration options, which are specified in the JSON editor in the site admin "Manage repositories" area.

admin/external_service/bitbucket_cloud.schema.json

{
  // The API URL of Bitbucket Cloud, such as https://api.bitbucket.org. Generally, admins should not modify the value of this option because Bitbucket Cloud is a public hosting platform.
  "apiURL": null,

  // The app password to use when authenticating to Bitbucket Cloud. Also set the corresponding "username" field.
  "appPassword": null,

  // A list of repositories to never mirror from Bitbucket Cloud. Takes precedence over "teams" configuration.
  //
  // Supports excluding by name ({"name": "myorg/myrepo"}) or by UUID ({"uuid": "{fceb73c7-cef6-4abe-956d-e471281126bd}"}).
  "exclude": null,
  // Other example values:
  // - [
  //     {"name": "myorg/myrepo"},
  //     {"uuid": "{fceb73c7-cef6-4abe-956d-e471281126bc}"}
  //   ]
  // - [
  //     {"name": "myorg/myrepo"},
  //     {"name": "myorg/myotherrepo"},
  //     {"pattern": "^topsecretproject/.*"}
  //   ]

  // The type of Git URLs to use for cloning and fetching Git repositories on this Bitbucket Cloud.
  //
  // If "http", Sourcegraph will access Bitbucket Cloud repositories using Git URLs of the form https://bitbucket.org/myteam/myproject.git.
  //
  // If "ssh", Sourcegraph will access Bitbucket Cloud repositories using Git URLs of the form git@bitbucket.org:myteam/myproject.git. See the documentation for how to provide SSH private keys and known_hosts.
  "gitURLType": "http",
  // Other example values:
  // - "ssh"

  // Rate limit applied when making background API requests to Bitbucket Cloud.
  "rateLimit": {
    "enabled": true,
    "requestsPerHour": 7200
  },

  // The pattern used to generate the corresponding Sourcegraph repository name for a Bitbucket Cloud repository.
  //
  // "{host}" is replaced with the Bitbucket Cloud URL's host (such as bitbucket.org), and "{nameWithOwner}" is replaced with the Bitbucket Cloud repository's "owner/path" (such as "myorg/myrepo"). For example, a repositoryPathPattern of "{host}/{nameWithOwner}" means that a Bitbucket Cloud repository is mirrored on Sourcegraph under a path like bitbucket.org/myorg/myrepo.
  "repositoryPathPattern": "{host}/{nameWithOwner}",

  // An array of team names identifying Bitbucket Cloud teams whose repositories should be mirrored on Sourcegraph.
  "teams": null,
  // Other example values:
  // - ["name"]
  // - ["kubernetes", "golang", "facebook"]

  // URL of Bitbucket Cloud, such as https://bitbucket.org. Generally, admins should not modify the value of this option because Bitbucket Cloud is a public hosting platform.
  "url": null,

  // The username to use when authenticating to Bitbucket Cloud. Also set the corresponding "appPassword" field.
  "username": null
}
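If you prefer to assemble a config of this shape programmatically before pasting it into the "Manage repositories" editor, here is a minimal sketch. The field names follow the schema above; the username, app password, and team names are hypothetical placeholders.

import json

config = {
    "url": "https://bitbucket.org",
    "username": "my-sourcegraph-bot",            # hypothetical service account
    "appPassword": "REDACTED-APP-PASSWORD",      # placeholder; never commit real secrets
    "teams": ["myorg"],                          # teams whose repositories should be synced
    "exclude": [{"name": "myorg/legacy-repo"}],  # takes precedence over "teams"
    "rateLimit": {"enabled": True, "requestsPerHour": 7200},
}

# Print the JSON to paste into the site admin editor.
print(json.dumps(config, indent=2))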
https://docs.sourcegraph.com/admin/external_service/bitbucket_cloud
2021-05-06T00:44:16
CC-MAIN-2021-21
1620243988724.75
[]
docs.sourcegraph.com
5) Add Subscription or individual product to opportunity Once you are done with "Configure Account and Opportunity page layouts", you can select Add Subscription or Individual Products. These two options add either an independent plan/charge or a subscription (selecting a bundle) to the Opportunity. All these plans and charges will be added to the Opportunity Line Item product list. We can edit these lists after adding them if we want to. Individual Products: This section gives us the option to select only individual plans. This is a product section where we can add a product which is not a bundle and which does not have any parent product; in other words, the plan itself is a charge. NOTE: This setup uses a custom-filter Visualforce page. Custom filters are not supported in Lightning yet. Even though the org is in Lightning mode, this button asks the user to switch to Classic in a new tab. 1) Click on the Add Products button under the Individual Products related list. fig 1: Add Products button 2) Select/check the required products to add to the Opportunity and click SELECT. fig 2: Select individual plans page 3) Enter the desired quantity, amount, and Opportunity line item product date information on the next screen. fig 3: Add the selected individual plan's quantity and amount 4) These products will be added to the Opportunity. We can create an order with these individual products. ADD SUBSCRIPTION: Subscription provides the option to add a plan and its charges at the same time. If we have multiple charges on a single plan, this button gives us the option to choose those respective charges on a single screen. 1) Click on the Add Subscription button under the Subscriptions related list. 2) You will get a custom page where you can select the plan. Once a plan is selected, you can see the respective charge with other details shown at the bottom of the page. fig 4: Adding subscription plans These selected subscriptions will be added to the Opportunity as a product list. Automating Configure billing account: Instead of configuring the billing contact/account manually using the "Configure billing contact" button from the Account view page, we have an automation option under the Blusynergy tab. fig 5: Blusynergy tab Select "When Opportunity goes to close/won" for Export Accounts to BluBilling. Once this option is selected, manually clicking "Configure billing account" on the Account is not required. Once any opportunity of that Account goes to Closed Won, that Account will automatically be synced to BluBilling. If "Contact with BLUSYNERGY BILLING CONTACT checked" is selected for Billing Contact Selection, the process checks whether there are any contacts with Blusynergy Billing Contact checked and syncs them once the opportunity is Closed Won. Here is the contact field that references this option. fig 6: Contact page layout
http://docs.blusynergy.com/salesforce-crm/add-subscription-or-individual-product-to-opportunity
2021-05-06T01:08:06
CC-MAIN-2021-21
1620243988724.75
[]
docs.blusynergy.com
Resume SnapMirror operations You can resume SnapMirror transfers that were quiesced before the upgrade and resume the SnapMirror relationships. The updates run on schedule after the upgrade is completed. Steps 1. Verify the SnapMirror status on the destination: snapmirror show 2. Resume the SnapMirror relationship: snapmirror resume -destination-vserver <vserver_name>
https://docs.netapp.com/us-en/ontap-systems/upgrade-arl-auto/resume_snapmirror_operations.html
2021-05-06T00:03:18
CC-MAIN-2021-21
1620243988724.75
[]
docs.netapp.com
Once you have received an email with your login credentials and set a new password, the following URL can be used to access Oliasoft WellDesign: To get started in the application, you have to select the company you are representing. This is done easily by clicking on the name of the organization. As seen in the figure above, some users can represent several companies. In Oliasoft administration, company administrators can provide external consultants or experts with access to certain datasets within their own organization. This allows competent people from outside the organization to contribute without the hassle of having to provide them with internal company credentials. When working in Oliasoft WellDesign for the first time, you have to create a new project before you can start using the application. A new project can be created by clicking the Create Project button, which is found in the upper right corner. As seen in the figure, the application also lets you import projects by clicking the Import Project button. To import a file, it must be in the JSON file format. If you are not able to find the import/create project buttons, you have to navigate back to the tab called Projects; see the next section for a further description. After you have created your first project, new projects can be created in three different ways: clicking the + button next to Country when in Browse view; clicking the Create Project button, as explained in the previous subsection; or clicking the Import Project button, which imports a project from a JSON file. To create a new Field, Site, Well, Wellbore or Design within an existing project, you have to be in Browse view. This is a sorting mechanism in Projects which is described further in the next section. In the sorting mechanism called Browse, the projects are structured in a hierarchical way, following the structure: Country -> Field -> Site -> Well -> Wellbore -> Design. A + button is found next to each of these and can be used to add a field, site, well, wellbore or design, respectively, within a project. All projects are stored in your database and can be viewed and retrieved by clicking the Projects tab, which is marked with a one in the figure below. There are three different methods that can be applied to view and retrieve the projects in a database. As seen in the figure (2), these are shown as sub-tabs within the Projects tab and are further explained in the following table. To learn how to navigate within a project in Oliasoft WellDesign, please go to the following section, named "How to Navigate in the Application".
https://docs.oliasoft.com/user-guides-1/getting-started
2021-05-05T23:48:02
CC-MAIN-2021-21
1620243988724.75
[]
docs.oliasoft.com
1. What is Atlassian Marketplace? Atlassian Marketplace is a business ecosystem for purchasing Atlassian products and third-party add-ons and extensions for them. Marketplace is the preferred way of buying commercial licenses for installed add-ons and extensions. You buy from Atlassian directly, and then Atlassian shares the revenue with the developer of the add-on. 2. Who can I buy the add-on from? You can purchase this add-on directly from Atlassian Marketplace. 3. What are the pros of buying on Atlassian Marketplace? PROs: - Performing purchases on Atlassian Marketplace means only dealing with Atlassian instead of dealing with separate vendors. - You will buy from Atlassian on the same terms and using the same purchase system you've already used to buy Atlassian products (for example, JIRA or Confluence). - We are looking forward to migrating all our products to Atlassian Marketplace to preserve the identical customer experience as if purchasing native Atlassian products. We also plan to migrate the currently purchased commercial licenses from our reseller to Atlassian Marketplace, so if you buy from Marketplace now you'll save time on switching in the future. - All transactions are secured from any financial fraud. CONs: - We have not found any yet! 4. I'm an existing customer, buying from you. What changes for me? You will not notice any changes as of now. Your current commercial license will continue to work as before. 5. I'm an existing customer; can I move to Marketplace? Please send us an email at [email protected] or submit a ticket. The license cannot be migrated to Marketplace within the 30-day period after purchase; after this period it can be moved. 6. Where do I enter the evaluation / commercial license received via your website? - Navigate to Confluence Admin. - From the Configuration section, select Talk Add-on License. - Click Upload License. - Enter your license key. - Click Save. 7. Where do I enter the evaluation / commercial license received via Atlassian? By default, you are prompted to automatically apply the license after purchase. If this did not happen, follow these steps: - Navigate to Confluence Admin. - From the Atlassian Marketplace section, select Manage Add-ons. - In the list of installed add-ons, locate Talk - Inline Comments for Confluence and click it to expand. - In the expanded area, locate the License key field. - Click the Edit icon. - Enter your license key. - Click Update. 8. Which license should I purchase? The Talk license policy copies that of Atlassian Confluence. Check what Confluence license you currently have and purchase the corresponding Talk license. 9. I have a 100-user Confluence license; can I purchase the Talk add-on for fewer users? Sorry, but it's impossible. You should purchase the add-on license for the entire number of users that your Confluence instance is licensed for. 10. My Confluence instance is used under a Community/OpenSource/Classroom license. Can I get Talk for free? Yes, in this case you can request a free Community/OpenSource/Classroom license for Talk directly from Atlassian. To do that, please follow this link. Your free Talk license is valid only if your organization is approved by Atlassian for a Community/OpenSource/Classroom license type. 11. What payment methods are supported? If you purchase the Talk add-on from our reseller, you can perform a transaction via a credit card, PayPal, or a bank transfer.
If you purchase the Talk add-on from Atlassian, you can perform a transaction via a credit card, by check, or by bank transfer. 12. Can I get any discount? If you use Confluence under the academic license, you can purchase Talk at half price. Also, every customer (except Starter license owners) gets a 50% discount when renewing their license. 13. Are there any discounts for Atlassian Experts? Yes, we provide a 20% discount for Atlassian Experts. You can get this discount when purchasing via Atlassian Marketplace. In case you purchase from our reseller, you first need to contact our support team and get a coupon code for the discount. 14. How is my online transaction secured? All fund transactions are secured. If you purchase a license on my.atlassian.com, your transactions are secured by Atlassian. For details on product purchases via Atlassian, refer to Atlassian's Licensing & Purchasing FAQ. If you make a purchase on our website, all online transactions are protected by software developed by the Avangate company, a globally recognized software reseller. Avangate is certified to PCI DSS (Payment Card Industry Data Security Standard). Please see the Avangate shoppers FAQ for more information. 15. I started buying the add-on, but something went wrong. Where can I get help? If you experience problems when buying a license on our site, please contact our reseller Avangate. For problems with buying the add-on on Marketplace, please contact Atlassian. 16. I have more questions about purchasing and licensing the Talk add-on. Please contact our support team.
https://docs.stiltsoft.com/display/public/Talk/Purchasing
2021-05-06T00:11:52
CC-MAIN-2021-21
1620243988724.75
[]
docs.stiltsoft.com
Product-Led Growth Overview As product transitions from a supporting role to lead actor, companies are transforming the way they communicate with users, nurture relationships, and understand user behavior. To stay ahead of the curve, innovative companies in every vertical have been transitioning away from traditional business methodologies and embracing the opportunities and challenges of product-led growth. Part of this transition involves rethinking the user journey and the strategies teams use to affect it at every stage. To this end, leading companies have been embracing the shift from funnel to flywheel in order to fully realize the potential of product-led growth. We interviewed the fastest-growing B2B and B2C companies to learn and codify the commonalities in their approach. Our efforts culminated in the development of the Product-Led Growth Flywheel. The Product-Led Growth Flywheel is a framework for growing your business by investing in a product-led user experience. In this framework, the experience is designed to generate higher user satisfaction and increased advocacy, which in turn drives compounding growth of new user acquisition. It depicts 4 sequential user segments that correlate with stages in the user journey from awareness to evangelism—evaluator, beginner, regular, and champion—and the key actions that users need to take to graduate to the next stage—activate, adopt, adore, and advocate. The goal is to focus company- and team-level strategies on optimizing the user experience to move users from one stage to the next. As the rate of users completing each action increases, your flywheel will spin faster, increasing the rate that users move from one segment to the next. This creates a positive feedback loop: as more users become advocates, they drive more acquisition and growth increases exponentially. In this playbook, we’ll walk you through how you can use the Appcues Product-Led Growth Platform to create delightful in-product experiences at every stage of the user journey to help your flywheel spin faster and generate compounding growth. Each segment is discussed in more detail below. Evaluators Evaluators are new or free users with no prior experience with your product. These users are cautiously excited about your product as a solution to their problems. They’re probably evaluating a variety of solutions—including your competitors. Evaluators are typically: - In a trial or demo phase—they’ve just started playing around with your product - Haven’t connected their tech stack with your product - Not using your product in their current workflows - Still searching for a solution to a problem they are trying to solve Your Goal To guide users to their aha moment in a personalized way as quickly as possible. Let evaluators experience your product in action or show them around to help them get a basic understanding of its core functionality. When evaluators complete an activation event in your product and have realized your product’s initially promised value on their own, they seamlessly graduate to the next segment in the flywheel and become beginners. How to measure success - Activation rate - Time to value - Product-qualified leads - Free trial conversion (if applicable) Beginners Beginners are activated users who understand how your product can meet their needs and deliver value—and they’re excited about it! They’re eager to learn more and are starting to explore your product’s features and functionality more deeply. 
Beginners are typically: - Starting to use real data and receiving tangible value - Not using advanced functionality or implementing sophisticated use cases - Feeling confident that your product is the best solution to solve their problem Your Goal To facilitate product adoption by helping users form habits and getting them to think of your product as the go-to solution for a certain problem or task. Product adoption means full buy-in—it’s when a user really understands the power of your product and depends on it regularly. Once they adopt your product, users become regulars. How to measure success - Feature adoption - Time to value - Free trial conversion (if applicable) - Usage and retention (daily, weekly, monthly) Regulars Regulars are the bread and butter of your user base. They log in frequently and rely on your product for multiple use cases. Regulars may not always get excited about using your product, but it has become key to achieving their goals. Switching to another solution would be costly because they have already invested time, effort, and data in your product. Regular users are typically: - Incorporating your product into their workflows - Spending more time in your app - Using your product to complete core parts of their job - Defaulting to your product as a possible solution when new problems arise - Exploring deeper layers of your product to see what else it can help them do Your Goal To turn customer satisfaction into delight. You want to keep these users healthy and engaged by encouraging them to adopt new features, expand to new use cases, and provide feedback. The goal here is more than just habitual usage or product adoption—it’s emotional. To move users through the flywheel, you need them to adore your product. How to measure success - Monthly active users - Monthly retention rate - Feature adoption and usage - Feedback frequency - NPS score Champions Champions are the users who recommend your product to their colleagues, friends, and social media followers. They have formed an emotional connection with your brand and your product—at this point in the relationship, you are providing value outside of the job to be done. These users want more capabilities and more power from your product—not because they necessarily require them, but because they love your product and are actively invested in your success. If you were to shut the doors tomorrow, they would be devastated. Champions are typically: - Actively participating in the future of your product by providing thorough feedback - Pushing the limits of your product with new use cases - NPS promoters - Wearing your brand’s t-shirt Your Goal To close the loop by turning positive customer sentiment into tangible social proof, making it easier to acquire new users and keep the flywheel spinning. The goal is to get your champions to advocate for your product. How to measure success - NPS score: percentage of promoters - Product feedback - Number of online reviews - Referrals - New user acquisition - Case study opportunities
https://docs.appcues.com/article/377-product-led-growth-overview
2021-05-06T01:08:06
CC-MAIN-2021-21
1620243988724.75
[array(['https://assets.website-files.com/5c7fdbdd4e3feeee8dd96dd2/5d2496f396f8ad6540052c14_Productledgrowth-Flywheel-Appcues-Final.png', None], dtype=object) ]
docs.appcues.com
Use the following information to understand basic concepts and scenarios that you can use while adding redundancy to your system. The following video (1:26) illustrates the need for applying redundancy in your environment. Data moves through TrueSight IT Data Analytics via different channels, as depicted in the following figure. The channels depicted in the preceding figure are explained as follows. This scenario is a simple depiction of how data moves across the various product components. For a more advanced architecture, see Multiple-server deployment. TrueSight IT Data Analytics provides a mechanism for collecting and searching data. In a multiple-server deployment, you can have the Collection Station, Indexer, and Search components installed on various nodes. If one of these nodes goes down, you can start losing data, and thereby valuable knowledge that might be crucial to your business. Loss of data can occur at different stages of data flow within the TrueSight IT Data Analytics framework. Depending on which node goes down, you can experience data loss at the data collection stage, the indexing stage, or the search stage. For instance, if the Collection Station goes down, data collected by the Collection Agents will not reach the Indexer and therefore will not be available for search. If the Indexer goes down, the data collected will not be indexed and therefore will not be available for search. Similarly, if the Search node goes down, the data collected and indexed will not be searchable. This problem can be solved by adding redundancy to your system. Redundancy means that if one node goes down, another node in your system takes up the job of the first node. In this way, data continues to be collected, indexed, and searched. The need for redundancy depends on your business needs. If you want to increase data continuity and availability, you need to apply redundancy. Also, if the data you are collecting is critical, you must apply redundancy. Redundancy can only be enabled if you are operating in a multiple-server deployment. This is because redundancy is only applicable when you are operating in an environment with multiple Collection Stations and Indexers. Note: Enabling redundancy has a cost in terms of the hardware resources required. However, in the long run redundancy can save the cost associated with losing data. Depending on the data availability needs of your business, you need to manage the trade-off between the benefits of data availability and the cost of the hardware resources required for enabling redundancy. For more information, see Sizing drivers and their impact. The following redundancy scenarios are supported:
https://docs.bmc.com/docs/display/itda27/Adding+redundancy+for+data+availability
2021-05-06T01:24:16
CC-MAIN-2021-21
1620243988724.75
[]
docs.bmc.com
Configuring product servers After you install the BMC License Usage Collection Utility, it must be able to connect to the remote servers that host BMC products in order to successfully complete the license usage collection process. The utility uses a WMI connection to connect to Windows-based servers and an SSH connection to connect to UNIX/Linux-based servers. In both cases it uses the deployment details, such as server name and user credentials, that the user has entered into the utility on the Add Deployment screen to connect to the remote servers. The utility does not require its users to have administrator privileges on the servers. However, as a one-time setup activity, the user must complete the following tasks: - With the assistance of your server administrator, ensure that the user has the required permissions for WMI/SSH connections on remote servers. See Configuring user settings. - Verify that the environment settings are in place. See Configuring environment settings. Related topics
https://docs.bmc.com/docs/lucu3600/configuring-product-servers-717102170.html
2021-05-06T01:13:14
CC-MAIN-2021-21
1620243988724.75
[]
docs.bmc.com
Kodular Chamaeleon

1.2 Chamaeleon | 27 October 2018

Major Changes
- Renamed Makeroid to Kodular
- Deployed a new custom Rendezvous server
- Improved the APK signing procedure
- Added 64-bit support for apps

Companion
- Fixed a little bug when downloading APKs through HTTPS

UI Changes
- Halloween Easter eggs are hidden in our Creator. Share the ones you find using the #halloween18 tag

Bugs Fixed
- Critical issue with libraries that was causing apps to not open
- User Interface - BottomSheet: registering components was sometimes causing an error

New Components
- Added new Chat View component to the User Interface category
- Added new Lottie component to the Drawing and Animation category
- Added new Cryptography component to the Storage category

New Events
- Web View component: After JS Evaluated: triggered after the JS Inject function; Page Loaded: triggered after the page has been loaded at 100%; On Console Message: triggered after a message is pushed to the console; On Download Start: triggered after a download starts

New Functions
- Web View component: Load HTML: loads and displays an HTML text; Evaluate JS: executes a piece of JS in the website; Go Back or Forward: goes back the given steps (negative number) or forward (positive number); Reload: refreshes the current page

New Properties

1.2.1 Chamaeleon | 23 November 2018

Improvements
- Removed the Subscribe to Notifications button in Creator (it was added on our main website)
- Changed the Donations link in Creator (now points to Donate Kodular)
- Stop propagation of mouse events on the warning toggle button (mit-cml/[email protected])

UI Changes
- Reverted the Pumpkin Backpack
- Reverted the Halloween Dark Theme
- Changed a few more references to Kodular

Bugs Fixed
- Activation link was not visible
- Cryptography component using Base64 on low APIs (it was crashing)
- Lottie component using Fill Parent properties (it was not working)
- GeoJSON Source processing in MockFeatureCollection (#1248) (mit-cml/[email protected])
- Clearing of MockFeatureCollection when Source is None (mit-cml/[email protected])
- Removed BOM from GeoJSON files (mit-cml/[email protected])
- Little bug in AARLibrary (mit-cml/[email protected])
- Force MockCircle default position to (0, 0) (mit-cml/[email protected])

Last update: April 13, 2021
https://docs.kodular.io/release-notes/chamaeleon/
2021-05-06T01:04:31
CC-MAIN-2021-21
1620243988724.75
[array(['https://assets.kodular.io/images/creator/versions/chamaeleon.png', 'Kodular Chamaeleon'], dtype=object) ]
docs.kodular.io
Supervisely is 100% free as long as you don't use it for commercial projects. If you want to launch a commercial project for your company, please contact us at [email protected]. Your data is yours. We respect your privacy, and when you create an account you don't grant us any rights to your data, except for the ones needed for the application to function. We don't use your data for any commercial or non-commercial purposes, and we share it with nobody. You can learn more in the terms of service. We do! Drop us an email at [email protected] or fill in the form here. The name Supervisely comes from the machine learning term supervised learning, when we use a known dataset (called the training dataset) to make predictions. And, well, Supervisely is all about datasets and using them to build models. 🎉
https://docs.supervise.ly/getting-started/faq
2021-05-05T23:58:31
CC-MAIN-2021-21
1620243988724.75
[]
docs.supervise.ly
If your ribbon is not customized, a Paste button is available in both the Standard and the Advanced menu sets of JChem for Excel, Word, PowerPoint, and Outlook. Select a few structures and data in IJC, right-click on the selected area, and click Copy to MS Office. Press the Paste button in any of the JChem for Office applications. The data copied from IJC is displayed on the selected sheet or document. Note: The paste will start in the Excel cell that was selected on the sheet before pasting.
https://docs.chemaxon.com/display/lts-fermium/from-instant-jchem-to-jchem-for-office.md
2021-09-16T19:25:16
CC-MAIN-2021-39
1631780053717.37
[]
docs.chemaxon.com
Developing on-line stores in MVC With Kentico, you can create on-line stores with the ASP.NET MVC framework. You use Kentico as a content platform with e-commerce data and develop an MVC application separately. Both applications then access the same database. In the end, you can build an MVC website based on your needs and only connect it to Kentico to use the data. To leverage the e-commerce functionality of Kentico, use the e-commerce integration package (Kentico.Ecommerce), available in the NuGet Package Manager, to simplify the process of creating the store. The Kentico.Ecommerce integration package contains supporting models and services you may need to build an on-line store. Supported e-commerce integration with MVC: - Displaying product listings on MVC sites - Displaying product details on MVC sites - Displaying discount values on MVC sites - Displaying and updating orders on MVC sites All supported MVC objects: You can learn what is and what is not supported on MVC sites in Supported and unsupported Kentico features on MVC sites.
https://docs.xperience.io/k10/developing-websites/developing-sites-using-asp-net-mvc/developing-on-line-stores-in-mvc
2021-09-16T19:04:12
CC-MAIN-2021-39
1631780053717.37
[]
docs.xperience.io
NBeats

class pytorch_forecasting.models.nbeats.NBeats(stack_types: List[str] = ['trend', 'seasonality'], num_blocks=[3, 3], num_block_layers=[3, 3], widths=[32, 512], sharing: List[int] = [True, True], expansion_coefficient_lengths: List[int] = [3, 7], prediction_length: int = 1, context_length: int = 1, dropout: float = 0.1, learning_rate: float = 0.01, log_interval: int = -1, log_gradient_flow: bool = False, log_val_interval: Optional[int] = None, weight_decay: float = 0.001, loss: Optional[pytorch_forecasting.metrics.MultiHorizonMetric] = None, reduce_on_plateau_patience: int = 1000, backcast_loss_ratio: float = 0.0, logging_metrics: Optional[torch.nn.modules.container.ModuleList] = None, **kwargs)[source]

Bases: pytorch_forecasting.models.base_model.BaseModel

Initialize NBeats Model - use its from_dataset() method if possible.

Based on the article N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. The network has (if used as an ensemble) outperformed all other methods, including ensembles of traditional statistical methods, in the M4 competition. The M4 competition is arguably the most important benchmark for univariate time series forecasting.

Parameters

stack_types – One of the following values: "generic", "seasonality" or "trend". A list of strings of length 1 or 'num_stacks'. Default and recommended value for generic mode: ["generic"]. Recommended value for interpretable mode: ["trend", "seasonality"].

num_blocks – The number of blocks per stack. A list of ints of length 1 or 'num_stacks'. Default and recommended value for generic mode: [1]. Recommended value for interpretable mode: [3].

num_block_layers – Number of fully connected layers with ReLU activation per block. A list of ints of length 1 or 'num_stacks'. Default and recommended value for generic mode: [4]. Recommended value for interpretable mode: [4].

widths – Widths of the fully connected layers with ReLU activation in the blocks. A list of ints of length 1 or 'num_stacks'. Default and recommended value for generic mode: [512]. Recommended value for interpretable mode: [256, 2048].

sharing – Whether the weights are shared with the other blocks per stack. A list of ints of length 1 or 'num_stacks'. Default and recommended value for generic mode: [False]. Recommended value for interpretable mode: [True].

expansion_coefficient_lengths – If the type is "G" (generic), then the length of the expansion coefficient. If the type is "T" (trend), then it corresponds to the degree of the polynomial. If the type is "S" (seasonal), then this is the minimum period allowed, e.g. 2 for changes every timestep. A list of ints of length 1 or 'num_stacks'. Default value for generic mode: [32]. Recommended value for interpretable mode: [3].

prediction_length – Length of the prediction. Also known as 'horizon'.

context_length – Number of time units that condition the predictions. Also known as 'lookback period'. Should be between 1-10 times the prediction length.

backcast_loss_ratio – Weight of backcast in comparison to forecast when calculating the loss. A weight of 1.0 means that forecast and backcast loss are weighted the same (regardless of backcast and forecast lengths). Defaults to 0.0, i.e. no weight.

loss – Loss to optimize. Defaults to MASE().

log_gradient_flow – Whether to log gradient flow; this takes time and should only be done to diagnose training failures.

reduce_on_plateau_patience (int) – Patience after which the learning rate is reduced by a factor of 10.

logging_metrics (nn.ModuleList[MultiHorizonMetric]) – List of metrics that are logged during training. Defaults to nn.ModuleList([SMAPE(), MAE(), RMSE(), MAPE(), MASE()]).

**kwargs – Additional arguments to BaseModel.

Methods

forward(x: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor] [source]
Pass forward of network.
Parameters: x (Dict[str, torch.Tensor]) – input from dataloader generated from TimeSeriesDataSet.
Returns: output of model. Return type: Dict[str, torch.Tensor]

classmethod from_dataset(dataset: pytorch_forecasting.data.timeseries.TimeSeriesDataSet, **kwargs)[source]
Convenience function to create a network from a TimeSeriesDataSet.
Parameters: dataset (TimeSeriesDataSet) – dataset where the sole predictor is the target. **kwargs – additional arguments to be passed to the __init__ method.
Returns: NBeats

log_interpretation(x, out, batch_idx)[source]
Log interpretation of network predictions in tensorboard.

plot_interpretation(x: Dict[str, torch.Tensor], output: Dict[str, torch.Tensor], idx: int, ax=None, plot_seasonality_and_generic_on_secondary_axis: bool = False) -> matplotlib.figure.Figure [source]
Plot interpretation. Plots two panels: prediction and backcast vs. actuals, and decomposition of the prediction into trend, seasonality and generic forecast.
Parameters: x (Dict[str, torch.Tensor]) – network input. output (Dict[str, torch.Tensor]) – network output. idx (int) – index of the sample for which to plot the interpretation. ax (List[matplotlib axes], optional) – list of two matplotlib axes onto which to plot the interpretation. Defaults to None. plot_seasonality_and_generic_on_secondary_axis (bool, optional) – whether to plot seasonality and generic forecast on a secondary axis in the second panel. Defaults to False.
Returns: matplotlib figure. Return type: plt.Figure
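For orientation, here is a minimal end-to-end sketch of the from_dataset() workflow on a synthetic univariate series. The hyperparameter values are illustrative only, and dataloader/Trainer details may differ slightly between pytorch-forecasting and PyTorch Lightning versions.

import numpy as np
import pandas as pd
import pytorch_lightning as pl
from pytorch_forecasting import NBeats, TimeSeriesDataSet

# Synthetic univariate series; NBeats expects a single target and no covariates.
n = 400
data = pd.DataFrame({
    "time_idx": np.arange(n),
    "value": np.sin(np.arange(n) / 10) + np.random.normal(0, 0.1, n),
    "series": "s1",
})

training_cutoff = 300
context_length = 60      # 'lookback period', 1-10x the prediction length
prediction_length = 20   # 'horizon'

training = TimeSeriesDataSet(
    data[data.time_idx <= training_cutoff],
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    time_varying_unknown_reals=["value"],
    max_encoder_length=context_length,
    max_prediction_length=prediction_length,
)
validation = TimeSeriesDataSet.from_dataset(training, data, min_prediction_idx=training_cutoff + 1)

train_dataloader = training.to_dataloader(train=True, batch_size=64)
val_dataloader = validation.to_dataloader(train=False, batch_size=64)

# from_dataset() infers prediction/context lengths from the dataset definition.
net = NBeats.from_dataset(training, learning_rate=3e-2, weight_decay=1e-2, backcast_loss_ratio=0.1)

trainer = pl.Trainer(max_epochs=3, gradient_clip_val=0.01)
trainer.fit(net, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader)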
https://pytorch-forecasting.readthedocs.io/en/latest/api/pytorch_forecasting.models.nbeats.NBeats.html
2021-09-16T19:47:37
CC-MAIN-2021-39
1631780053717.37
[]
pytorch-forecasting.readthedocs.io
Default.
http://docs.imsidesign.com/projects/Turbocad-2018-User-Guide-Publication/TurboCAD-2018-User-Guide-Publication/Creating-3D-Objects/Smooth-Mesh-SMesh/Edit-Tool-and-SMeshes/SMesh-Merging/
2021-09-16T18:40:57
CC-MAIN-2021-39
1631780053717.37
[array(['../../Storage/turbocad-2018-user-guide-publication/smesh-merging-img0001.png', 'img'], dtype=object) array(['../../Storage/turbocad-2018-user-guide-publication/smesh-merging-img0002.png', 'img'], dtype=object) array(['../../Storage/turbocad-2018-user-guide-publication/smesh-merging-img0003.png', 'img'], dtype=object) array(['../../Storage/turbocad-2018-user-guide-publication/smesh-merging-img0004.png', 'img'], dtype=object) array(['../../Storage/turbocad-2018-user-guide-publication/smesh-merging-img0005.png', 'img'], dtype=object) array(['../../Storage/turbocad-2018-user-guide-publication/smesh-merging-img0006.png', 'img'], dtype=object) ]
docs.imsidesign.com
SupportTicketDetails.EnrollmentId Property Definition Important: Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Gets the enrollment Id associated with the support ticket.

[Newtonsoft.Json.JsonProperty(PropertyName="properties.enrollmentId")]
public string EnrollmentId { get; }

[<Newtonsoft.Json.JsonProperty(PropertyName="properties.enrollmentId")>]
member this.EnrollmentId : string

Public ReadOnly Property EnrollmentId As String

Property Value: String
Attributes: Newtonsoft.Json.JsonPropertyAttribute
https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.management.support.models.supportticketdetails.enrollmentid?view=azure-dotnet
2021-09-16T18:58:14
CC-MAIN-2021-39
1631780053717.37
[]
docs.microsoft.com
The release notes cover the following topics:
Product Description
VMware Smart Assurance Network Configuration Manager (NCM) is:
- An automated compliance, change, and configuration management solution that delivers industry-recognized best practices.
- A collaborative network infrastructure design that controls change processes, provides network device and service configuration transparency, and ensures compliance with corporate and regulatory requirements — to enable you to ensure the security, availability, and operational efficiency of your network.
- Automated support for all facets of the network infrastructure lifecycle, seamlessly integrating critical design, change, and compliance management requirements.
What's New in this Release
VMware Smart Assurance 9.6.1 introduces a new version of Network Configuration Manager. This is a minor release, targeted at new deployments only. Existing Network Configuration Manager customers are recommended to stay on version 9.6.
With the Smart Assurance Network Configuration Manager 9.6.1 release, we introduce the following changes:
- Report Advisor (RA) and Compliance Advisor (CA) will not be part of NCM from 9.6.1 onwards. New reporting capabilities will be added in a future release of Smart Assurance.
- Upgrades and migrations to 9.6.1 are not supported; no new functionality is added in 9.6.1.
For older known issues, fixed issues, and configuration, refer to the previous Release Notes.
Known Issues
There are no known issues in VMware Smart Assurance Network Configuration Manager 9.6.1. The known issues are grouped as follows.
Known issues in NCM 9.6.0.0
- SND-7346 The VMware rebranding has been initiated. However, a few files/documents still have the "EMC" branding. This has no functionality impact. This will be further addressed in future releases.
- IS-12916 A Java JVM error occurs during NCM uninstallation. Note: This error appears only when NCM is installed in Silent mode on Windows. Workaround: Continue clicking OK until the prompted dialog box disappears.
https://docs.vmware.com/en/VMware-Smart-Assurance/9.6.1/rn/Smart-Assurance-Network-Configuration-Manager-Release-Note-961.html
2021-09-16T20:03:45
CC-MAIN-2021-39
1631780053717.37
[]
docs.vmware.com
This operation performs addition, subtraction, division, and multiplication with the fields in the form.
Mathematical Operations operation in the Property Panel
Properties
Description: A description of the action is written here.
Heading: When added as a Form Action, specifies the name shown in the action list.
Linked Field: The field the result is written to.
Calculate: The calculation is defined in this area (e.g. "$PValue1$ - $PValue2$"). The +, -, / and * operators can be used.
Suppose a few numeric boxes are in a form, and subtraction is desired on the values in these controls. A button is added to the form, and a Mathematical Operations action with a When clicked event is created for the button. Linked Field contains the control that the result is automatically written to by the operation. The calculation is defined in the Calculate area as below. When values are entered in the controls and the button is clicked, the result appears in the control named Subtraction; for example, if PValue1 is 10 and PValue2 is 4, Subtraction shows 6.
https://docs.xpoda.com/hc/en-us/articles/360015677500-Mathematical-Operations
2021-09-16T18:02:37
CC-MAIN-2021-39
1631780053717.37
[array(['/hc/article_attachments/360015109259/mceclip0.png', 'mceclip0.png'], dtype=object) array(['/hc/article_attachments/360015109319/mceclip1.png', 'mceclip1.png'], dtype=object) array(['/hc/article_attachments/360015122320/mceclip2.png', 'mceclip2.png'], dtype=object) array(['/hc/article_attachments/360015122380/mceclip3.png', 'mceclip3.png'], dtype=object) ]
docs.xpoda.com
Light Paths
Reference
- Panel
Ray Types
See also: The object ray visibility settings.
Bounce Control
Transparency
Settings
Max Bounces
- Total
Maximum number of light bounces. For best quality, this should be set to the maximum. However, in practice, it may be good to set it to lower values for faster rendering. A value of 0 bounces results in direct lighting only.
- Diffuse
Maximum number of diffuse bounces.
- Glossy
Maximum number of glossy bounces.
- Transmission
Maximum number of transmission bounces.
- Volume
Maximum number of volume scattering bounces.
- Transparent
Maximum number of transparency bounces. Note, the maximum number of transparent bounces is controlled separately from other bounces. It is also possible to use probabilistic termination of transparent bounces, which might help when rendering many layers of transparency.
Clamping
- Direct Light
Note
- Indirect Light
The same as Direct Light, but for rays which have bounced multiple times.
Caustics
A common source of noise is caustics.
See also: Reducing Noise for examples of the clamp settings.
- Caustics
- Reflective
While in principle path tracing supports rendering of caustics with a sufficient number of samples, in practice it may be inefficient to the point that there is just too much noise. This option can be unchecked to disable reflective caustics.
- Refractive
The same as above, but for refractive caustics.
Fast GI Approximation
Reference
- Panel
Approximate diffuse indirect light with background-tinted ambient occlusion. This provides a fast alternative to full global illumination (GI), for interactive viewport rendering or final renders with reduced quality.
- Viewport Bounces
Replace global illumination with ambient occlusion after the specified number of bounces when rendering in the 3D Viewport. This can reduce noise in interior scenes with little visual difference.
- Render Bounces
Number of bounces when rendering final renders.
- Distance
Distance from the shading point to trace rays. A shorter distance emphasizes nearby features, while longer distances make it also take objects farther away into account.
Note, this setting is stored per World Environment.
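For scripted setups, the same Light Paths settings are exposed through Blender's Python API. A minimal sketch follows; it assumes Cycles is the active render engine, and the values are illustrative, not recommendations:

import bpy

cycles = bpy.context.scene.cycles
cycles.max_bounces = 12              # Total
cycles.diffuse_bounces = 4           # Diffuse
cycles.glossy_bounces = 4            # Glossy
cycles.transmission_bounces = 12     # Transmission
cycles.volume_bounces = 0            # Volume
cycles.transparent_max_bounces = 8   # Transparent
cycles.sample_clamp_direct = 0.0     # Direct Light clamp (0 disables clamping)
cycles.sample_clamp_indirect = 10.0  # Indirect Light clamp
cycles.caustics_reflective = False   # disable reflective caustics
cycles.caustics_refractive = False   # disable refractive caustics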
https://docs.blender.org/manual/ja/dev/render/cycles/render_settings/light_paths.html
2021-09-16T19:08:50
CC-MAIN-2021-39
1631780053717.37
[]
docs.blender.org
Install and provision Azure IoT Edge for Linux on a Windows device Applies to: IoT Edge 1.1 The Azure IoT Edge runtime is what turns a device into an IoT Edge device. The runtime can be deployed on devices from PC class to industrial servers. Once a device is configured with the IoT Edge runtime, you can start deploying business logic to it from the cloud. To learn more, see Understand the Azure IoT Edge runtime and its architecture. Azure IoT Edge for Linux on Windows allows you to install IoT Edge on Linux virtual machines that run on Windows devices. The Linux version of Azure IoT Edge and any Linux modules deployed with it run on the virtual machine. From there, Windows applications and code and the IoT Edge runtime and modules can freely interact with each other. This article lists the steps to set up IoT Edge on a Windows device. These steps deploy a Linux virtual machine that contains the IoT Edge runtime to run on your Windows device, then provision the device with its IoT Hub device identity. Note IoT Edge for Linux on Windows is the recommended experience for using Azure IoT Edge in a Windows environment. However, Windows containers are still available. If you prefer to use Windows containers, see Install and manage Azure IoT Edge with Windows containers. Prerequisites An Azure account with a valid subscription. If you don't have an Azure subscription, create a free account before you begin. A free or standard tier IoT Hub in Azure. A Windows device with the following minimum system requirements: - Windows 10 Version 1809 or later; build 17763 or later - Professional, Enterprise, or Server editions - Minimum Free Memory: 1 GB - Minimum Free Disk Space: 10 GB - Virtualization support - On Windows 10, enable Hyper-V. For more information, see Install Hyper-V on Windows 10. - On Windows Server, install the Hyper-V role and create a default network switch. For more information, see Nested virtualization for Azure IoT Edge for Linux on Windows. - On a virtual machine, configure nested virtualization. For more information, see nested virtualization. - Networking support - Windows Server does not come with a default switch. Before you can deploy EFLOW to a Windows Server device, you need to create a virtual switch. For more information, see Create virtual switch for Linux on Windows. - Windows Desktop versions come with a default switch that can be used for EFLOW installation. If needed, you can create your own custom virtual switch. If you want to install and manage IoT Edge device using Windows Admin Center, make sure you have access to Windows Admin Center and have the Azure IoT Edge extension installed: Download and run the Windows Admin Center installer. Follow the install wizard prompts to install Windows Admin Center. Once installed, use a supported browser to open Windows Admin Center. Supported browsers include Microsoft Edge (Windows 10, version 1709 or later), Google Chrome, and Microsoft Edge Insider. On the first use of Windows Admin Center, you will be prompted to select a certificate to use. Select Windows Admin Center Client as your certificate. Install the Azure IoT Edge extension. Select the gear icon in the top right of the Windows Admin Center dashboard. On the Settings menu, under Gateway, select Extensions. On the Available extensions tab, find Azure IoT Edge in the list of extensions. Choose it, and select the Install prompt above the list of extensions. 
After the installation completes, you should see Azure IoT Edge in the list of installed extensions on the Installed extensions tab.
If you want to use GPU-accelerated Linux modules in your Azure IoT Edge for Linux on Windows deployment, there are several configuration options to consider. You will need to install the correct drivers depending on your GPU architecture, and you may need access to a Windows Insider Program build. To determine your configuration needs and satisfy these prerequisites, see GPU acceleration for Azure IoT Edge for Linux on Windows.
Choose your provisioning method
Azure IoT Edge for Linux on Windows supports the following provisioning methods:
Manual provisioning for a single device.
- To prepare for manual provisioning, follow the steps in Register an IoT Edge device in IoT Hub. Choose either symmetric key authentication or X.509 certificate authentication, then return to this article to install and provision IoT Edge.
Automatic provisioning using the IoT Hub Device Provisioning Service (DPS) for one or many devices.
Choose the authentication method you want to use, and then follow the steps in the appropriate article to set up an instance of DPS and create an enrollment to provision your device or devices. For more information about the enrollment types, visit the Azure IoT Hub Device Provisioning Service concepts.
Create a new deployment
Deploy Azure IoT Edge for Linux on Windows on your target device. Install IoT Edge for Linux on Windows onto your target device if you have not already.
Note
The following PowerShell process outlines how to deploy IoT Edge for Linux on Windows onto the local device. To deploy to a remote target device using PowerShell, you can use Remote PowerShell to establish a connection to a remote device and run these commands remotely on that device.
In an elevated PowerShell session, run each of the following commands to download IoT Edge for Linux on Windows.
You can specify custom IoT Edge for Linux on Windows installation and VHDX directories by adding INSTALLDIR="<FULLY_QUALIFIED_PATH>" and VHDXDIR="<FULLY_QUALIFIED_PATH>" parameters to the install command.
Tip
By default, the Deploy-Eflow command creates your Linux virtual machine with 1 GB of RAM, 1 vCPU core, and 16 GB of disk space. However, the resources your VM needs are highly dependent on the workloads you deploy. If your VM does not have sufficient memory to support your workloads, it will fail to start. You can customize the virtual machine's available resources using the Deploy-Eflow command's optional parameters. For example, the command below creates a virtual machine with 4 vCPU cores, 4 GB of RAM, and 20 GB of disk space:
Deploy-Eflow -cpuCount 4 -memoryInMB 4096 -vmDiskSize 20
For information about all the optional parameters available, see PowerShell functions for IoT Edge for Linux on Windows.
You can assign a GPU to your deployment to enable GPU-accelerated Linux modules. To gain access to these features, you will need to install the prerequisites detailed in GPU acceleration for Azure IoT Edge for Linux on Windows.
To use a GPU passthrough, you will need to add the gpuName, gpuPassthroughType, and gpuCount parameters to your Deploy-Eflow command. For information about all the optional parameters available, see PowerShell functions for IoT Edge for Linux on Windows.
Warning
Enabling hardware device passthrough may increase security risks. Microsoft recommends a device mitigation driver from your GPU's vendor, when applicable.
For more information, see Deploy graphics devices using discrete device assignment.
Enter 'Y' to accept the license terms.
Enter 'O' or 'R' to toggle Optional diagnostic data on or off, depending on your preference.
Once the deployment is complete, the PowerShell window reports Deployment successful.
Once your deployment is complete, you are ready to provision your device.
Provision your device
Choose a method for provisioning your device and follow the instructions in the appropriate section. This article provides the steps for manually provisioning your device with either symmetric keys or X.509 certificates. If you are using automatic provisioning with DPS, follow the appropriate links to complete provisioning. You can use the Windows Admin Center or an elevated PowerShell session to provision your devices.
Manual provisioning:
Automatic provisioning:
Manual provisioning using the connection string
This section covers provisioning your device manually using your IoT Edge device's connection string.
If you haven't already, follow the steps in Register an IoT Edge device in IoT Hub to register your device and retrieve its connection string.
Run the following command in an elevated PowerShell session on your target device. Replace the placeholder text with your own values.
Provision-EflowVm -provisioningType ManualConnectionString -devConnString "<CONNECTION_STRING_HERE>"
For more information about the Provision-EflowVm command, see PowerShell functions for IoT Edge for Linux on Windows.
Manual provisioning using X.509 certificates
This section covers provisioning your device manually using X.509 certificates on your IoT Edge device.
If you haven't already, follow the steps in Register an IoT Edge device in IoT Hub to prepare the necessary certificates and register your device.
Have the device identity certificate and its matching private key ready on your target device. Know the absolute path to both files.
Run the following command in an elevated PowerShell session on your target device. Replace the placeholder text with your own values.
Provision-EflowVm -provisioningType ManualX509 -iotHubHostname "<HUB HOSTNAME>" -deviceId "<DEVICE ID>" -identityCertPath "<ABSOLUTE PATH TO IDENTITY CERT>" -identityPrivKeyPath "<ABSOLUTE PATH TO PRIVATE KEY>"
For more information about the Provision-EflowVm command, see PowerShell functions for IoT Edge for Linux on Windows.
Verify successful configuration
Verify that IoT Edge for Linux on Windows was successfully installed and configured on your IoT Edge device.
Important
If you're using IoT Edge for Linux on Windows PowerShell public functions, be sure to set the execution policy on the target device to AllSigned. Ensure that all prerequisites for PowerShell functions for IoT Edge for Linux on Windows are met.
If you need to troubleshoot the IoT Edge service, use the following Linux commands.
If you need to troubleshoot the service, retrieve the service logs.
sudo journalctl -u iotedge
Use the check tool to verify the configuration and connection status of the device.
sudo iotedge check
When you create a new IoT Edge device, it will display the status code 417 -- The device's deployment configuration is not set in the Azure portal. This status is normal, and means that the device is ready to receive a module deployment.
Next steps
- Continue to deploy IoT Edge modules to learn how to deploy modules onto your device.
- Learn how to manage certificates on your IoT Edge for Linux on Windows virtual machine and transfer files from the host OS to your Linux virtual machine.
- Learn how to configure your IoT Edge devices to communicate through a proxy server.
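To recap the cmdlets referenced above in one place, here is a minimal end-to-end sketch; the resource sizes are illustrative and the connection string is a placeholder:

# Deploy the EFLOW virtual machine, then provision it against IoT Hub.
Deploy-Eflow -cpuCount 2 -memoryInMB 2048
Provision-EflowVm -provisioningType ManualConnectionString -devConnString "<CONNECTION_STRING_HERE>"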
https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-on-windows?WT.mc_id=AZ-MVP-5003408&view=iotedge-2018-06
2021-09-16T20:28:36
CC-MAIN-2021-39
1631780053717.37
[]
docs.microsoft.com
The .udt.csv format is another representation of the .udt.json object that allows easier viewing and editing of data in spreadsheet software. The UDT CSV format is capable of specifying all the same things as the UDT JSON format, but the UDT JSON format is the "canonical form" because .udt.csv files are very flexible and can be written in different but equivalent ways. UDT CSV files are generated by converting the JSON format into a CSV using JSON as CSV (JAC). They can be converted back to JSON using the jac-format npm module or the jac_format pip module. The UDT CSV format is really easy to use with libraries like pandas.
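For example, a minimal sketch of loading a UDT CSV with pandas; the filename is a hypothetical placeholder:

import pandas as pd

# A .udt.csv loads like any other CSV, which is what makes spreadsheet-style
# viewing and editing straightforward.
df = pd.read_csv("dataset.udt.csv")
print(df.head())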
https://docs.universaldatatool.com/the-format-.udt.json/what-is-the-.udt.csv-format
2021-09-16T18:17:17
CC-MAIN-2021-39
1631780053717.37
[]
docs.universaldatatool.com
Unlike traditional software development languages, Xpoda is a new-generation software development method without long and laborious coding processes.
Because software development processes are slow, many methodologies emerged to complete software projects on time, within a planned budget, and with planned resource usage. In particular, ERP manufacturers developed their own methods for the controlled implementation of their ERP products at customers: SAP called its development and adaptation method ASAP (As Soon As Possible), while Microsoft called its AX method Sure Step. Of course, the goal of each was to determine requirements correctly and to direct time and project stakeholders correctly, and thus to achieve the most accurate results. However, serious problems and unsuccessful projects were encountered because the software was developed with classical methods, which consume the most resources and time, and because projects were inevitably subject to change over such a long period.
A Different Way of Managing the Process
Xpoda provides an alternative to software development languages and enables codeless software development, which has brought a new perspective to development and implementation processes. In this method, which enables all stakeholders to be involved at every point of the project, the sales consultant can be a software developer while also providing testing or user training. With this method, all projects, small and large, can be managed by very small teams in a way that fully adapts to users' needs and can be quickly tested and implemented at every stage. The software team, project consultant team, user side, documentation, and testing are not performed separately but with the full participation and control of each stakeholder under the same roof.
MultiPOD
All the steps of the project are shared within the project team, and all stakeholders can back each other up.
Process Steps:
- Determination of needs / Problem identification
- Solution proposal
- Application development
- Test / Request for change / User training
- Application development (changes / new demands)
- Test / Request for change / User training
- Go live
…
https://docs.xpoda.com/hc/en-us/articles/360011497100-A-New-Way-of-Development-with-MultiPOD-methodology
2021-09-16T18:07:16
CC-MAIN-2021-39
1631780053717.37
[]
docs.xpoda.com
An OR Gateway directs incoming flows to one of many possible output paths, based on the condition(s) you set. The node can have multiple incoming and outgoing paths. The incoming paths are evaluated in the order they arrive. Each outgoing path is assigned a condition. Conditions are evaluated in bulk, without regard to the order in which you list them. If all conditions evaluate as False, you can also specify a path to follow. If any condition has an output that can't be evaluated as either True or False, the node does not open any output paths.
pv!Credit_Score > 100
Click Save and Close. The expression is displayed in the condition row with the prefix (=).
pv!Name = Value
Specifying an expression that evaluates to neither true nor false results in the process pausing at this gateway.
When multiple flows enter an OR node, the Gateway node pauses after the first instance token passes through – until all other incoming flows arrive. Work around this issue by placing an empty Script Task node between the incoming flows and the Gateway node.
https://docs.appian.com/suite/help/21.2/OR_Gateway.html
2021-09-16T18:13:16
CC-MAIN-2021-39
1631780053717.37
[]
docs.appian.com
dad.resource.timer This configuration parameter specifies how often (in seconds) the dad resource monitor checks dad resource usage. The default value of this parameter is 600 (10 minutes), meaning that the dad monitor checks dad resource usage every 10 minutes. For example: dad.resource.timer: 600 The dad resource monitor automatically checks the usage of various dad resources during runtime. For each resource that is monitored, you can configure the threshold value that triggers a dad restart or a log entry. When dad is restarted, the client is purged, and counters for resources such as CPU usage, file descriptors, and memory are reset. See dad.resource.restart for more information about the advantages of setting a threshold that is lower than the default system value.
https://docs.centrify.com/Content/config-unix/dad_resource_timer.htm
2021-09-16T19:43:36
CC-MAIN-2021-39
1631780053717.37
[]
docs.centrify.com
Date: Fri, 7 May 2004 13:17:22 [email protected]>
In-Reply-To: <[email protected]>>

On Thu, 6 May 2004, Luigi Rizzo wrote:

> On Fri, May 07, 2004 at 01:35:11AM +0400, Oleg Bulyzhin wrote:
> ...
> > i see.
> > There is a little bug (i'll PR it as soon i'll get enough time), you can
> > try attached patch(built on RELENG_4).
>
> very interesting that you found out what the bug was -- i
> couldn't realize it myself. Thanks!
>
> However, i believe the fix is incorrect and in principle can
> still trigger the problem (which is innocuous).
>
> The bug your patch addresses is the following:
> when a packet is stored in a pipe, dummynet keeps a pointer
> to the matching rule so if one_pass=0 it can restart processing
> from the following one.
>
> if the matching rule goes away while a packet is queued,
> my intention was to use the default rule as the next one,
> but i mistakenly used the default rule as the _matching_ one.
>
>?

To my mind problem is not log pollution with 'skip past' messages but
dropping packets which _should_ be further processed. If we have no
default_to_accept option in kernel and our next rule is default rule -
packet should be dropped. If we are here (ip_fw2.c:1452):

----------------
| if (fw_one_pass)
|     return 0;
|
| f = args->rule->next_rule; <---
| if (f == NULL)
|     f = lookup_next_rule(args->rule);

it means we got a packet which passed all rules up to pipe/queue rule and
which was not dropped inside dummynet (i.e. packet already passed rule which
was deleted). Why we should just drop such 'die hard' packet? ;))

>
> I believe a proper fix is right before the main loop in
> ipfw_chk(), check if we enter with a NULL rule pointer and use the
> default rule instead.

Hmm... consider following ruleset:

net.inet.ip.fw.one_pass=0
ipfw pipe 1 config bw 1Mbit/s queue 8Kbytes
10 pipe 1 ip from any to 10.0.0.2 out
20 count ip from any to 10.0.0.2 out
65535 allow ip from any to any

i.e. we are limiting user's bandwidth and want to know how much was
downloaded through traffic shaper. Then user pay us for 10Mbit/s bandwidth:

ipfw pipe 2 config bw 10Mbit/s queue 16Kbytes
ipfw set disable 30
ipfw add 10 set 30 pipe 2 ip from any to 10.0.0.2 out
ipfw set swap 30 0
#### NB! ####
ipfw delete set 30

There are 2 cases:
1) if we delete temporary set 30 as fast as possible: With current code up
to 8Kbytes data will be dropped. With your suggestion default rule will be
used (packets will be passed in our case) but rule 20 will not catch em.
2) if we delete temporary set 30 after 5 seconds (i.e. when queue became
empty): packets will be passed & counted by rule 20

> Alternatively, if we believe that when a rule goes away
> we should also drop queued packets because the resulting
> behaviour would be unpredictable or unsafe, then what happens
> now is basically correct (not by design, just pure luck),
> and we just need to remove the 'ouch...' message

Well, i think packets should not be dropped until we delete corresponding
pipe or queue.

> cheers
> luigi

--
SY, Oleg.
================================================================
=== Oleg Bulyzhin -- OBUL-RIPN -- OBUL-RIPE -- [email protected] ===
================================================================
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=37767+0+archive/2004/freebsd-ipfw/20040509.freebsd-ipfw
2021-09-16T19:56:33
CC-MAIN-2021-39
1631780053717.37
[]
docs.freebsd.org
Why must I trust the documents as well as code?
Spurred on by the knowledge that at least 5 people read my blog and the fact that it's 2:40 am the morning before I'm supposed to give a talk on Office security, I thought I'd post some more stuff.
For those of you who didn't go to TechEd and learn all about VSTO, we have this new product called VSTO (technically, it is "Microsoft Visual Studio Tools for the Microsoft Office System", but that's a mouthful) and it lets you build managed code behind Word and Excel docs. Much like the WordBlogX sample I have on GotDotNet (downloaded 111 times so far...). Anyway, security is of course a huge concern with Office development, so we had to get it right (for some interpretation of the word "right").
The CLR has a very rich security system, but up until now most developers haven't really had to deal with it. Why? Well, mostly because they are...
- Using ASP .NET, where you have FullTrust by default; or
- Building local EXEs, where you have FullTrust by default; or
- Building EXEs or DLLs on an Intranet, where they only have partial trust but can work within those limits
Now we come along with our little product and say 'You MUST have FullTrust to use this code. Oh, and by the way, you MUST have a better reason than "I've been copied to the local machine!" in order to do that.' The main motivation behind this is that it's very easy to socially engineer people into downloading content to their local machine. There have also been bugs in the past whereby attackers could place files in well-known locations on your hard drive, ready for future exploitation, and we wanted to avoid these scenarios.
So anyway, back to the point of this post... most people understand that you need to trust CODE before it is allowed to execute, because we don't want arbitrary code from ne'er-do-wellers running amuck on our machines, now do we? Right. But why do we have to trust VSTO documents as well in order to get stuff to work?
Look at it this way. You probably have some sharp knives in your kitchen drawer, and you probably trust at least some of them to cut through various food substances without hurting you or damaging the aforementioned foodstuff too much. (You may also have some blunt old knives that you wouldn't even trust to cut through hot butter -- time for some spring cleaning, donchathink?). Even though these knives are very sharp and could potentially do lots of harm, when used correctly by an educated, non-malicious person, they are quite useful.
OK, so you trust the knives. But do you trust your (perhaps hypothetical) young children with the knives? No, because they are not aware of the dangers associated with knives and may accidentally hurt themselves or other people. Respect the knife! Similarly, if a big bad guy in a trench coat breaks down your front door and demands all your money, would you trust him with your knife? No, because he's probably going to do something quite nasty with it. (I once wrote an internal e-mail about trust vs safety that ended in a comment something like "No matter how safe your scissors are, it's never a good idea to hand them to a criminal as he's breaking through your front door." I thought it was quite funny at the time.)
See the link here? Just because the knife is trustworthy when handled by a trustworthy person, doesn't mean it will be trustworthy when handled by an untrustworthy person.
And just because code is trustworthy when invoked by a trustworthy document, doesn't mean it will be trustworthy when invoked by an untrustworthy document.
Another quick analogy: it's OK for me to use the del command to delete files, but it's not OK for random web pages to use the del command to delete files. The same idea applies to the Restricted Sites zone in Internet Explorer (and subsequently any HTML e-mail you receive) -- just because an ActiveX control (such as MSXML or Flash) is "safe" when used by a trustworthy web page, doesn't necessarily mean it is "safe" when used by an untrustworthy web page. (Obviously both MSXML and Flash try to be safe at all times, but both have had security problems in the past that let the bad guys do bad things). So it's never a good idea to enable Flash in e-mail (or any other "active" content for that matter), because the bad guys could send you a mal-formed Flash file that takes over your machine.
So in an attempt to try and minimise this kind of thing (we call it a "re-purposing attack"), VSTO requires that a document must be trusted before it can host code, and furthermore that the code it is trying to host must be trusted. Now unlike code, which is NOT trusted merely by being copied to the local machine, documents ARE trusted if they're copied to the local machine because, basically, there's no other good way to secure documents. You can't use a signature because documents change constantly, and although you could use a specific directory (eg "My Documents") people tend to place documents in all manner of random places, so it would be too hard to manage.
What about documents on the network, you ask? Well, here's another problem. Say you have a SharePoint site at and you want to let people upload VSTO documents to the site. You've already followed best practices for the actual code, so it is signed with the corporate key and stored on a secure, read-only server somewhere else (eg). Great, no-one can tamper with the assembly unless they compromise the server AND steal your private key, in which case it's time for you to take an emergency extended vacation anyways. But what about the documents? You have to grant them FullTrust, otherwise they won't be able to load code. But you have some problems:
- You don't want to grant FullTrust to the entire site, because anyone can upload junk to the server (including EXEs or DLLs) and you don't want that junk to get permissions
- You can't use hashes or signatures or anything else, because docs don't lend themselves to those kinds of evidence
- You can't really trust each document individually (by name) because that's a management nightmare
Pop Quiz, hotshot: You've got a document on a share and you want to give it FullTrust. Whadda you do? Answer: Shoot it in the leg. No, wait...
What you do is use the new OfficeDocumentMembershipCondition that ships with VSTO. With this you can set up a rule in policy that says "Grant Office Documents FullTrust" (obviously you would scope this to your MyDocs server or your Local Intranet to avoid leaking permissions out to the world...). When Office loads a VSTO document, it creates the AppDomain passing in the location of the document and the special OfficeDocument evidence. This additional piece of evidence can be used to grant permissions to documents (using the ODMC mentioned above) without granting them to EXEs or other code, since no other code will present the OfficeDocument evidence.
:> cd "%programfiles%\Microsoft Office\Office11\Addins"
:> gacutil -i msosec.dll
:> caspol -m -ag LocalIntranet_Zone -site MyDocs Nothing -n MyDocs_Site -d "Container for MyDocs HTTP server"
:> caspol -m -ag MyDocs_Site -custom MSOSEC.XML FullTrust -n OfficeDocuments -d "Allows Office Documents to host code"
See, it's easy really. You can even do it through the GUI (mscorcfg.msc) if you want. Bonus points for anyone who can tell me why we don't install ODMC in the GAC by default...
[And now it's 3:40 am...]
https://docs.microsoft.com/en-us/archive/blogs/ptorr/why-must-i-trust-the-documents-as-well-as-code
2021-09-16T19:28:39
CC-MAIN-2021-39
1631780053717.37
[]
docs.microsoft.com
Data¶ The power of Skuid lies in the ability to bring together all of your data within a cohesive user experience. To bring that data into your pages, you’ll first connect to that data in the Skuid UI. There are three key concepts in connecting to data with Skuid: Data source types (DSTs) are Skuid “helpers” that facilitate the communication between Skuid and a data system. These bundles of code allow Skuid to speak with other systems, and new DSTs generally are included with every major release of Skuid. Skuid contains several pre-configured data source types, which allow for plug-and-play connections to specific data services. Some of these include Salesforce, Google Drive, and the others listed in this section’s table of contents. But to allow admins and developers the flexibility of connecting to other services, there are data source types that are not attached to specific products—such as the REST and OData DSTs. While these require additional configuration to use, they facilitate connections to a plethora of services. Authentication providers are used to authenticate to data systems and are often the first step in connecting to data. They are configured by admins in the Skuid UI to coordinate with—and authenticate to—an external system. Additionally, admins often make adjustments within the external system to properly configure permissions and create the necessary credentials for authentication. For systems produced by the same company—such as Google—it is often sufficient to create one authentication provider for multiple data sources. Data sources are the individual connections between Skuid and a specific service. They are configured by admins in the Skuid UI and depend on both of the above concepts. Skuid data sources use data source types to speak a service’s language and authentication providers to authenticate to a service. Once you’ve created your data sources, you can implement as many as you want in each Skuid page through Skuid models; you can mix and match data from Salesforce orgs, REST data sources, and various other sources all in one page. Any headers or parameters that should be sent with every request (such as API keys and/or authentication) are configured on the data source.
https://docs.skuid.com/latest/v2/en/data/index.html
2021-09-16T18:16:42
CC-MAIN-2021-39
1631780053717.37
[]
docs.skuid.com
This endpoint deletes a specific Organization.
Request
To delete an Organization, please make a DELETE request to the following URL:
Path Parameters
Query Parameters
Header
$ curl -X DELETE '<organization_key>/' \
-H 'X-Auth-Token: oaXBo6ODhIjPsusNRPUGIK4d72bc73' \
Example error responses:
{ "code": 400001, "message": "Validation Error.", "detail": { .... } }
{ "code": 401001, "message": "Authentication credentials were not provided.", "detail": "Authentication credentials were not provided." }
Response
If the Organization was deleted successfully, the endpoint returns status code 204.
Attention
Deleting an Organization cannot be undone. Please back up any data before deleting an Organization.
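An equivalent sketch in Python using the requests library; the base URL is a placeholder (use the organizations endpoint URL from the docs), and the token is the sample one from the curl call above:

import requests

# Placeholder host and path; substitute the real organizations endpoint.
BASE_URL = "https://<ubidots-api-host>/organizations"
resp = requests.delete(
    f"{BASE_URL}/<organization_key>/",
    headers={"X-Auth-Token": "oaXBo6ODhIjPsusNRPUGIK4d72bc73"},
)
print(resp.status_code)  # expect 204 on success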
https://docs.ubidots.com/reference/delete-organization
2021-09-16T17:49:40
CC-MAIN-2021-39
1631780053717.37
[]
docs.ubidots.com
Meeting Rescheduled for Monday, August 3rd, 2009
(Terry is going to start researching this topic)
Open Question: what process should we use to "vet" new developers for commit access?
Terry will ask the OpenDS community about the process they follow.
Items expected from a developer member of the community (not all are required, but these are valuable skills):
The following list of tasks was discussed as either in process or in planning stages for version 2. The priority is not a measure of the importance of the feature, just the order in which features are expected to be completed, as there are many dependencies.
Server Hosting of openptk.org
The server we were using was taken away. Terry is looking into potentially forwarding the domain to:
We will look into alternative options if DNS forwarding is not viable.
http://docs.openptk.org/references/minutes/minutes-2009-08-03
2021-09-16T18:03:35
CC-MAIN-2021-39
1631780053717.37
[]
docs.openptk.org
Microsoft delivers Windows Azure Platform updates
As previously announced at PDC 2010, Microsoft is now delivering updates to the Windows Azure platform to further enable Platform as a Service (PaaS), where developers and businesses will ultimately see the true value of the cloud. Today, Microsoft is delivering:
To make it easier to move existing applications and run them more efficiently, Microsoft is providing a bridge to PaaS from IaaS.
- Windows Azure Virtual Machine Role beta eases the migration of existing Windows Server 2008 R2 applications to Windows Azure by eliminating the need to make costly application changes, and enables customers to quickly access their existing business data from the cloud.
To enhance applications and workloads with rich new services and features:
- Database Manager for SQL Azure General Availability: Database Manager for SQL Azure is a lightweight, Web-based database management and querying capability for SQL Azure. This capability was formerly referred to as "Project Houston," and allows customers to have a streamlined experience within the Web browser without having to download any tools.
- Windows Azure Marketplace beta: The first "aisle" in the Windows Azure Marketplace is DataMarket, which provides developers and information workers with access to premium third-party data, Web services, and self-service business intelligence and analytics, which they can use to build rich applications. DataMarket was released to web at PDC 2010 and is currently available.
- Windows Azure Virtual Network Connect CTP: The first Windows Azure Virtual Network feature is called Windows Azure Connect. Windows Azure Connect enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources.
- Extra Small Windows Azure Instance beta: The Extra Small Instance is priced at $0.05 per compute hour in order to make the process of development, testing, and trial easier for developers. This will make it affordable for developers interested in running smaller applications on the platform.
- Remote Desktop General Availability: Remote Desktop enables IT professionals to connect to a running instance of their application or service to monitor activity and troubleshoot common problems.
- Elevated Privileges General Availability.
- Full IIS Support General Availability: enables development of more complete applications using Windows Azure. Full IIS functionality enables multiple IIS sites per Web role and the ability to install IIS modules, letting developers get more value out of a Windows Azure instance.
- Windows Server 2008 R2 Roles General Availability.
- Multiple Admins General Availability: Windows Azure now supports multiple Windows Live IDs with administrator privileges on the same Windows Azure account. The objective is to make it easy for a team to work on the same Windows Azure account while using their individual Windows Live IDs.
To transform applications to do new things in new ways, which are highly scalable and highly available:
- Windows Azure Enhancements General Availability: delivering developer and operator enhancements.
For more information, visit the Windows Azure blog, or attend the overview webcast happening on Wednesday, December 1st at 9AM.
All of these new features can be found at the new Windows Azure Management Portal and by downloading the Windows Azure SDK and Windows Azure Tools for Visual Studio release 1.3.
https://docs.microsoft.com/en-us/archive/blogs/stbnewsbytes/microsoft-delivers-windows-azure-platform-updates
2021-09-16T18:57:38
CC-MAIN-2021-39
1631780053717.37
[]
docs.microsoft.com
Stake your PEFI for iPEFI and maximize your yield. No impermanent loss.
iPEFI is an ARC20 token that is always exchangeable for PEFI. When you stake your PEFI into the Nest, you effectively exchange your PEFI for iPEFI. The iPEFI Nest is the main staking mechanism in the Penguin Finance ecosystem. Over time, you'll earn PEFI by HODLing iPEFI tokens and will also gain access to exclusive dApps inside the ecosystem.
Once you've bought iPEFI with PEFI, the value of your tokens will appreciate against PEFI over time. This is because a percentage of the Liquidity and Staking allocation is sent to the Nest every day. Fees collected in other dApps are also distributed among iPEFI holders. Additionally, Paper Hands Penalties are collected from paper-handed polar bears and distributed to the Nest, significantly increasing APY.
Check out our How to Use the Nest step-by-step guide to get started with staking.
Yield farming provides attractive returns at the cost of being subject to impermanent loss. For example, if you're farming with AVAX-PEFI, you might lose out on potential gains if PEFI goes up and AVAX stays still. Farming in our Igloos is attractive because the rewards on these pools can appear big, but it comes with added risk. Instead, you may stake your PEFI tokens for iPEFI, which has no impermanent loss (IL) and maximizes your profits when PEFI tokens go up. Rewards compound automatically, and you don't need to do anything once you've exchanged your PEFI for iPEFI.
Our Nest-exclusive dApps require users to use iPEFI on our platform, effectively making it a valuable token to hold as an investor. So not only are you collecting fees from these applications just by being an iPEFI holder, you're getting access to the entirety of our ecosystem. Some of the applications that require iPEFI include:
Penguin Emperor. The bidding system of this entertaining king-of-the-hill dApp requires iPEFI.
Penguin Launchpad. Every tier of our upcoming IDO platform requires you to hold a certain amount of iPEFI, not regular PEFI, to participate in token sales.
Club Penguin. Our upcoming initiative to earn new tokens and help projects gain recognition in Avalanche requires Penguins to stake iPEFI.
To receive the rewards, Penguins commit to locking their $PEFI tokens inside the Penguin Nests, which will generate a passive stream of income and magnify the APY of all our Penguins.
There is a 6% Paper Hands Penalty on the total withdrawn amount to protect our long-term investors and reward them with more PEFI. Over time, your balance will surpass this initial 6% fee as the PHP is distributed among all iPEFI holders, and you're rewarded exponentially the longer you stay in the Nest. 100% of all penalties are redistributed among iPEFI holders, increasing token value and rewarding long-term investors.
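A quick worked sketch of the 6% penalty arithmetic described above; the amounts are illustrative:

# Withdrawing 100 PEFI worth of iPEFI under a 6% Paper Hands Penalty.
withdrawn = 100.0
penalty = 0.06 * withdrawn        # 6.0 PEFI redistributed to remaining iPEFI holders
received = withdrawn - penalty    # 94.0 PEFI reaches the withdrawing wallet
print(received)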
https://docs.penguinfinance.io/summary/penguin-nests-staking-and-fee-collection
2021-09-16T19:38:30
CC-MAIN-2021-39
1631780053717.37
[]
docs.penguinfinance.io
String conversion and formatting¶
Functions for number conversion and formatted string output.
- int PyOS_snprintf(char *str, size_t size, const char *format, ...)
Part of the Stable ABI. Output not more than size bytes to str according to the format string format and the extra arguments. See the Unix man page snprintf(3).
- int PyOS_vsnprintf(char *str, size_t size, const char *format, va_list va)
Part of the Stable ABI. Output not more than size bytes to str according to the format string format and the variable argument list va. See the Unix man page vsnprintf(3).
Both functions require that str != NULL, size > 0, format != NULL and size < INT_MAX. The return value (rv) for these functions should be interpreted as follows: when 0 <= rv < size, the conversion succeeded and rv characters were written to str (excluding the trailing '\0' byte); when rv >= size, the output was truncated and a buffer of rv + 1 bytes would have been needed; when rv < 0, an error occurred.
- double PyOS_string_to_double(const char *s, char **endptr, PyObject *overflow_exception)
Part of the Stable ABI. Convert the string s to a double, raising a Python exception on failure.
- char *PyOS_double_to_string(double val, char format_code, int precision, int flags, int *ptype)
Part of the Stable ABI. Convert val to a string using the given format_code, precision, and flags. flags can be zero or more of the following values:
Py_DTSF_SIGN means to always precede the returned string with a sign character, even if val is non-negative.
Py_DTSF_ADD_DOT_0 means to ensure that the returned string will not look like an integer.
Py_DTSF_ALT means to apply "alternate" formatting rules. See the documentation for the PyOS_snprintf() '#' specifier for details.
The return value is NULL if the conversion failed. The caller is responsible for freeing the returned string by calling PyMem_Free().
New in version 3.1.
- int PyOS_stricmp(const char *s1, const char *s2)
Case insensitive comparison of strings. The function works almost identically to strcmp() except that it ignores the case.
https://docs.python.org/3.10/c-api/conversion.html
2021-09-16T19:38:22
CC-MAIN-2021-39
1631780053717.37
[]
docs.python.org
Source code for libqtile.extension.window_list

# Copyright (C) 2016, zordsdavini

from libqtile.extension.dmenu import Dmenu
from libqtile.scratchpad import ScratchPad


class WindowList(Dmenu):
    """
    Give vertical list of all open windows in dmenu. Switch to selected.
    """

    defaults = [
        ("item_format", "{group}.{id}: {window}",
         "the format for the menu items"),
        ("all_groups", True,
         "If True, list windows from all groups; otherwise only from the current group"),
        ("dmenu_lines", "80", "Give lines vertically. Set to None to get inline"),
    ]

    def __init__(self, **config):
        Dmenu.__init__(self, **config)
        self.add_defaults(WindowList.defaults)

    def list_windows(self):
        id = 0
        self.item_to_win = {}

        if self.all_groups:
            windows = self.qtile.windows_map.values()
        else:
            windows = self.qtile.current_group.windows

        for win in windows:
            if win.group and not isinstance(win.group, ScratchPad):
                item = self.item_format.format(
                    group=win.group.label or win.group.name,
                    id=id,
                    window=win.name)
                self.item_to_win[item] = win
                id += 1

    def run(self):
        self.list_windows()
        out = super().run(items=self.item_to_win.keys())

        try:
            sout = out.rstrip('\n')
        except AttributeError:
            # out is not a string (for example it's a Popen object returned
            # by super(WindowList, self).run() when there are no menu items to
            # list)
            return

        try:
            win = self.item_to_win[sout]
        except KeyError:
            # The selected window got closed while the menu was open?
            return

        screen = self.qtile.current_screen
        screen.set_group(win.group)
        win.group.focus(win)
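A hypothetical usage sketch for a qtile config.py, binding the extension to a key; the modifier and key choice are illustrative:

from libqtile import extension
from libqtile.config import Key
from libqtile.lazy import lazy

keys = [
    # Mod4+w pops up the dmenu window list defined above.
    Key(["mod4"], "w",
        lazy.run_extension(extension.WindowList(all_groups=True))),
]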
https://docs.qtile.org/en/latest/_modules/libqtile/extension/window_list.html
2021-09-16T19:45:04
CC-MAIN-2021-39
1631780053717.37
[]
docs.qtile.org
Plan the configuration of flash devices for vSAN cache and all-flash capacity to provide high performance and required storage space, and to accommodate future growth. Choosing Between PCIe or SSD Flash Devices Choose PCIe or SSD flash devices according to the requirements for performance, capacity, write endurance, and cost of the vSAN storage. - Compatibility. The model of the PCIe or SSD devices must be listed in the vSAN section of the VMware Compatibility Guide. - Performance. PCIe devices generally have faster performance than SSD devices. - Capacity. The maximum capacity that is available for PCIe devices is generally greater than the maximum capacity that is currently listed for SSD devices for vSAN in the VMware Compatibility Guide. - Write endurance. The write endurance of the PCIe or SSD devices must meet the requirements for capacity or for cache in all-flash configurations, and for cache in hybrid configurations. For information about the write endurance requirements for all-flash and hybrid configurations, see the VMware vSAN Design and Sizing Guide. For information about the write endurance class of PCIe and SSD devices, see the vSAN section of the VMware Compatibility Guide. - Cost. PCIe devices generally have higher cost than SSD devices. Flash Devices as vSAN Cache Design the configuration of flash cache for vSAN for write endurance, performance, and potential growth based on these considerations.
https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vsan-planning.doc/GUID-1D6AD25A-459A-43D6-8FF5-52475499D6A2.html
2021-09-16T20:06:39
CC-MAIN-2021-39
1631780053717.37
[]
docs.vmware.com
The following hotfix is available for Appian 21.2 installations not currently up to date with the latest hotfixes. After installing, you will be running on Appian 21.2.510.0. See the Installation section at the bottom of this page for instructions on how to install this hotfix.
Security Updates - Low
AN-190453 - Medium Fixed an issue where expressions with references to many different record types with data sync enabled could cause a performance degradation.
Security Updates - Low
AN-189023 - Medium Fixed an issue that caused the concurrent session limit to be enforced even for SAML authentication.
AN-190068 - Medium Fixed an issue that applied an incorrect memory limit to certain record queries and prevented them from completing when run on large data sets.
Security Updates - Medium
AN-187458 - Medium Fixed an issue where the user record list did not always show all rows when navigating to the last page and then returning to the first page.
AN-188525 - Medium The caching mechanism for data stores has been improved to optimize resource utilization.
AN-189656 - Medium File upload fields no longer produce an error when a percent (%) character is present in the filename of an uploaded file.
AN-189739 - Medium Fixed an issue that could cause the screen to flicker on Chrome when using a character limit on text and paragraph fields.
AN-189222 - Low The a!isUserMemberOfGroup() function now works as expected when used directly in the process modeler as an exception flow trigger. Prior to this fix, the function would cause an evaluation error and pause the process.
Security Updates - Medium
AN-188233 - High Fixed an issue where a hidden data type was displayed as a missing precedent that could be added to an application for objects that contained an expression that used a record data type constructor.
AN-189419 - High Fixed an issue where the PDF Doc From Template Smart Service would not use the checkbox style defined in the PDF template.
AN-188590 - Medium Fixed an issue where a!queryRecordType() did not return the sync error code when only querying for related record fields and the primary key field of a record type with a failed sync.
AN-189035 - Medium On Appian Cloud MariaDB database, the optimizer_switch variable has been updated to disable the derived_merge option.
AN-189014 - Low Fixed an issue upon site startup where user sync with RPA prevented other activities from occurring.
AN-189393 - Low Appian tasks are now properly displayed in Microsoft Outlook emails when using the Task Viewer Add-in.
AN-189093 - Critical New Process Analytics engines can now be added to environments on Appian 21.1 and later without any issues. Prior to this fix, this could cause the site to create numerous large incremental update .l files and eventually run out of disk space. This updates behavior to be consistent with Appian 20.4 and earlier.
AN-189188 - High Local variables referenced within a!aggregationFields() for the field parameter of a!queryRecordType() are now correctly refreshing with refreshOnReferencedVarChanges.
AN-189420 - High The Deploy to Another Environment action no longer triggers an error when you reuse a deployment related to an application with associated packages.
AN-187502 - Medium A data migration of process history from the process execution engine no longer causes the engines to restart. Prior to this fix, a username exception caused the engines to crash and restart frequently.
AN-188784 - Medium Fixed an issue that created null users without a UUID and caused the application server deployment to fail. AN-180424 - Low Fixed an issue in which the "List of" portion of a data type name was not being translated in an expression rule. This issue occurred when evaluating a list of record maps, so the record map is now appropriately internationalized. Security Updates - High AN-188886 - High Fixed an issue which caused an error to be displayed after completing a record action in a dialog. AN-186606 - Medium Fixed an issue which could cause forms to erroneously scroll back to the top of the page. AN-186759 - Medium Fixed an issue where record fields and related record fields with the same name were not evaluated as different field references. AN-186971 - Medium CastInvalid error no longer displays for a!gridField() when using a!groupMembers() in the data parameter. AN-187950 - Medium Fixed an issue in which duplicate rows of data were causing an error on the record list for DSE-backed record types that were created prior to 20.3 and imported into 20.3 or later. Duplicate rows now correctly display without an error. AN-188062 - Medium The automatic background cleanup of the type cache no longer impacts site performance or user experience. AN-188067 - Medium Fixed an issue which prevented updating data types due to an error on locked types clean up. AN-188324 - Medium Fixed an issue which prevented updating data types due to an error on locked types clean up. AN-188661 - Medium TrimTables script now allows you to trim excess table entries from the Content engines in order to reduce memory usage and improve performance. AN-189012 - Medium Fixed an issue that caused a potential thread deadlock when writing access log response data to the Health Check zip file. AN-185257 - Low Fixed an issue to address high memory utilization in a server-side component. AN-188601 - Low Enabled additional Info-level logging for WebService invocation to help troubleshooting. Security Updates - Medium AN-180216 - High A timer which calculates the memory usage of process instances no longer causes errors in the process execution engine. Prior to this fix, the timer would cause the process execution engine to rollback in certain scenarios. AN-185653 - Medium Improved grid load time in response to grid field pipelines showing a slowdown of 200ms to 260ms. AN-187958 - Medium Fixed an issue that caused an error when viewing the record summary of a record with an empty summary interface. AN-186789 - Low Fixed an issue where the missing domain warning triggered too often for expressions containing a function and rule input with the same name, such as "user". AN-187231 - Low Adding a comment /* at the start of an expression no longer causes an error in interfaces. Security Updates - Low AN-187851 - High New Process Execution and Analytics engines can now be added to environments on Appian 21.1 and later without any issues. Prior to this fix, this could cause rollbacks on the newly provisioned engines. This updates behavior to be consistent with Appian 20.4 and earlier. AN-187970 - High Resolved an issue that could cause a replica engine to get stuck in a REPLAYING state 2 to 3 months after a site restart. AN-172311 - Medium Fixed a race condition during an engine failover and promotion event, that could result in an engine shutting down instead of being promoted to primary. 
AN-178689 - Medium Fixed an issue where export to Excel would fail for record types that have not been updated if no user filters were applied. AN-186747 - Medium Fixed an issue which caused the union function to return an error when provided a list of maps. AN-187090 - Medium Fixed an issue where clicking the cancel button multiple times for an edit expression dialog in the interface designer would cause an error to occur. AN-136621 - Low Increased the maximum number of replicas of an engine from 5 to 9. AN-186217 - Low For Appian Cloud database in high availability configuration, additional logging has been introduced to capture information during a database failover. AN-186874 - Low Fixed an issue that prevented the search server from stopping when installed as a windows service. Security Updates - Low AN-186025 - Medium Records-powered grids created with the interface designer now show the correct expression for the record type after switching to expression mode. AN-187353 - Medium Fixed an issue where a process model could not be selected as the source for a record type if any visible process model did not have a name in the user's current language. AN-177356 - Low Improved email security. AN-187146 - High Fixed an issue that caused Db2 database connection string validation to fail during the data source creation in the admin console. AN-187265 - High Fixed an issue that prevented self-managed customers from properly configuring Google credentials in the Document Extraction page of the Administration Console. AN-180918 - Medium Fixed an issue where record data sync fails if the primary key field is renamed within the record type for web service backed records. AN-182753 - Medium Show generic error messages for errors encountered during user authentication. AN-186938 - Medium A type of crash of the process design engines caused by an error with a specific activity class parameter type in a process is now prevented. AN-184062 - Low Record response time and record sync status metrics on the Health Dashboard are now reported regardless of record count. Security Updates - Low AN-185288 - Low Resolved an issue with RedisCache trying to unlock a lock that it never acquired that caused repetitive logged errors. Security Updates - High AN-182515 - High Fixed an issue that caused non-synced entity-backed record types that were manually updated to versions 20.4 and above to fail upon export to Excel. The impacted record types were those containing complex data structures (i.e., nested custom data types). AN-185647 - High Repeated wildcard queries caused in pickers and search boxes no longer cause site performance issues. AN-52587 - Medium Provided a configuration to increase the size of the email messages that can be sent out. AN-181512 - Medium The automatic background cleanup of old transient system files no longer impacts site performance or user experience. For self-managing customers, if you had previously set conf.content.max.temporary.uploaded.files.age in custom.properties, you should now remove that setting. Appian Cloud customers do not need to take any action. AN-184459 - Medium Fixes an issue where closing the settings window from the Tempo news feed would reset the scroll position to the top of the page. AN-185856 - Medium Fixed an issue where date fields showed the improper format for the Swedish locale. AN-186125 - Medium EN-GB locale now displays the correct format in the Date component. 
AN-115868 - Low Searching by Process Model ID in Appian Designer now correctly returns the matching process model. Security Updates - Low AN-176614 - High Upgraded OpenJDK to version 8u292b10. AN-182130 - High Multiple CDTs with a parent-child relationship can now be edited and saved at the same time without any issues. Prior to this fix, making changes and saving these CDTs at the same time would result in an error. AN-184317 - High Fixed an issue on self-managed Appian systems that prevented syncing records after restoring from a backup if a sync occurred less than 10 minutes before an unplanned outage. AN-182677 - Medium Fixed a race condition which occasionally caused outdated expression design guidance to appear on High Availability environments. AN-182784 - Medium A data migration of process history from the process execution engine is now faster and optimized for performance. AN-182887 - Medium Fixed an issue where each field was marked as unique in a synced record if multiple columns were included in a unique index in the database. AN-182898 - Medium The performance of browsing to tables from Oracle when selecting a source for synced records has been improved. AN-183364 - Medium Fixed an issue that caused the content engines to stop in some scenarios when the content logger is configured to output log entries at the DEBUG level. AN-183865 - Medium Clicking on a Document Download Link component now properly refreshes the idle timeout. AN-183875 - Medium Design guidance for sites and web APIs is now properly re-calculated when a precedent object is modified. AN-184600 - Medium Fixed an issue that caused some users to not be able to log in to Appian. AN-185226 - Medium New Process Execution and Analytics engines can now be added to environments on Appian 21.1 and later without any issues. Prior to this fix, this could cause frequent errors and rollbacks on the newly provisioned engines. This updates behavior to be consistent with Appian 20.4 and earlier. AN-185842 - Medium On-premise customers can re-enable TLS 1.0 and 1.1 support for Tomcat. AN-182814 - Low Task emails no longer fail when an invalid locale is provided. AN-185013 - Low Fixed the validations on the color picker component for Design Mode so that it no longer renders duplicative validation messages. AN-185040 - Low Fixed an issue with the display of event nodes in process model documentation. AN-185400 - Low Selecting related record data via relationship references no longer incorrectly fails in certain scenarios. AN-185431 - Low Fixed an issue where the design_errors.csv log file could have multiple headers. Security Updates - Critical AN-184783 - High Fixed logging issues when using Oracle. Security Updates - Critical AN-182771 - High Memory optimizations for migrating process history from the process execution engines. Prior to these optimizations, the migration could cause an out of memory error. AN-184729 - High Fixed an issue that caused submission of certain forms with uploaded files to be slow. AN-183790 - Medium Compare and deploy no longer triggers an OutOfMemory error when inspecting large deployments. Security Updates - Medium AN-143471 - Medium Interfaces no longer break when typing an incorrect format into a date component. AN-182668 - Medium Users will now see the appropriate error, instead of an internal error, when a record view contains an image to which the user doesn't have at least 'viewer' permission.
AN-183012 - Medium Improved performance when syncing record data from Oracle, IBM DB2, and SQL Server. AN-184083 - Medium Improved performance in record action dialogs that contain a grid with data sourced from a record type. AN-184470 - Medium In web API expressions, the "invalid parameter" design guidance no longer incorrectly triggers for evolved functions. AN-182704 - Low Fixed an issue that resulted in document extraction tables opening to the incorrect page. AN-183516 - Low Component Plugins now appear disabled on unassigned tasks. Security Updates - Low AN-182083 - High Fixed a performance issue where related action start forms would re-query the record data on every interface evaluation. AN-182179 - High Execution of processes with very large process variables now uses less memory. Prior to this fix, such operations could cause the site to go out of memory in certain scenarios. AN-182195 - High Fixed an issue that prevented the application server from starting up due to an error in acquiring the change log lock. AN-180764 - Medium Fixed a bug that caused double slashes '//' to be interpreted as single slashes '/' in the URL of HTTP integrations. AN-180967 - Medium Updated Apache PDFBox to 2.0.23 to address the out of memory exception experienced when loading certain PDF files. AN-181036 - Medium In Internet Explorer, the in-line design guidance banner now displays correctly when the language is set to Arabic. AN-181111 - Medium Fixed an issue with the metric collection process. AN-181252 - Medium Upgraded ZooKeeper to 3.7.0. AN-181352 - Medium Sort icons now display for legacy record list grids created with Appian 20.2 and earlier. AN-181881 - Medium When a plug-in is selected in the precedents view of an object, the option to "Remove from Package" is now correctly disabled. AN-181942 - Medium The stability of the process execution engine has been improved. Prior to this fix, certain actions like starting a process, archiving or unarchiving a process could cause the engine to rollback, causing a temporary service outage for process execution. AN-182236 - Medium Fixed an issue that prevented the OAuth Provider from redirecting back to the mobile app after authorizing the user. AN-183783 - Medium AN-41128 - Low Fixed an issue with the process calendar which marked the wrong non-working days with the en_GB locale. AN-178286 - Low Fixed an issue which caused outdated validation errors to appear when uploading an XSD to create a new version of a data type. AN-179783 - Low Fixed an issue where inline documentation did not display in instructions or tooltips when editing the grid-style record list in the record type designer. AN-181051 - Low Mid-tone accent colored text no longer changes to black on light colored backgrounds. AN-181060 - Low Fixed an issue where a paging grid did not automatically return to the first page when refreshOnReferencedVarChange was false and a referenced variable in refreshOnVarChange was changed. AN-182313 - Low Fixed an issue where the unused rule input recommendation was incorrectly triggering in a specific case where the input was used as a rule reference. Perform the following steps to apply the hotfix: <APPIAN_HOME> directory. <APPIAN_HOME> directory. <APPIAN_HOME> directory. Copy <APPIAN_HOME>/deployment/web.war to the folder where the Web server is getting the static resources. See Copy Static Resources to the Web Server for more information. To determine if the Appian 21.2 Hotfix is deployed, open the build.info file located in <APPIAN_HOME>/conf/.
The contents of this file should match the following code sample: build.revision=fc0e25af69a77e0c0f11277eca24953101611fed build.version=21.2.510.0
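From a shell, the same check can be scripted (substitute your actual installation path for <APPIAN_HOME>):

# Print the deployed build info; compare against the expected values shown above
cat <APPIAN_HOME>/conf/build.info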
https://docs.appian.com/suite/help/21.2/Hotfixes.html
2021-09-16T19:41:13
CC-MAIN-2021-39
1631780053717.37
[]
docs.appian.com
In this guide you will learn step by step how to add text capture to your application. Roughly, the steps are: Include the ScanditTextCapture library and its dependencies in your project, if any. Create a new data capture context instance, initialized with your license key. Create a text capture settings instance. Create a new text capture mode instance and initialize it with the settings created above. Register a text capture listener to receive events when a new text is captured. First, create the text capture settings from JSON: TextCaptureSettings textCaptureSettings = TextCaptureSettings.FromJson(json); Next, create a TextCapture instance with the settings from the previous step: textCapture = TextCapture.Create(context, textCaptureSettings); Register the Text Capture Listener To get informed whenever a new text has been captured, add an ITextCaptureListener through TextCapture.AddListener() and implement the listener methods to suit your application's needs. First implement the ITextCaptureListener interface. For example: public void OnObservationStarted(TextCapture textCapture) { } public void OnObservationStopped(TextCapture textCapture) { } public void OnTextCaptured(TextCapture textCapture, TextCaptureSession session, IFrameData data) { // Do something with the captured text. } Then add the listener (here, this is the class implementing the interface): textCapture.AddListener(this); Use the recommended camera settings as a starting point: CameraSettings cameraSettings = TextCapture.RecommendedCameraSettings; // Depending on the use case further camera settings adjustments can be made here. Camera camera = Camera.GetDefaultCamera(); if (camera != null) { camera.ApplySettingsAsync(cameraSettings); } To visualize text capture, the following overlay can be added: TextCaptureOverlay overlay = TextCaptureOverlay.Create(this.textCapture, this.dataCaptureView); Disabling Text Capture To disable text capture, for instance as a consequence of a text being captured, set TextCapture.Enabled to false.
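To actually receive frames, the camera must also be set as the context's frame source and switched on; a minimal sketch, assuming the usual Scandit Xamarin API names (SetFrameSourceAsync, SwitchToDesiredStateAsync) which you should verify against your SDK version:

// Sketch: use the camera as the context's frame source and start streaming.
if (camera != null)
{
    camera.ApplySettingsAsync(cameraSettings);
    context.SetFrameSourceAsync(camera);
    // Turn the camera on once the capture view is visible.
    camera.SwitchToDesiredStateAsync(FrameSourceState.On);
}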
https://docs.scandit.com/data-capture-sdk/xamarin.android/get-started-text-capture.html
2021-09-16T17:52:27
CC-MAIN-2021-39
1631780053717.37
[]
docs.scandit.com
(Available in Pro Platinum) Default UI Menu: Addons/SDK Samples/Insert/Weld Symbol Ribbon UI Menu: Use the Weld Symbol window to enter the symbol parameters. Click OK when finished, and locate the symbol in your drawing. To edit a weld symbol, open its Properties window. Open the Other page, click Weld Symbol, and click Go To Page. This opens the original design window, in which you can change any symbol parameters.
http://docs.imsidesign.com/projects/Turbocad-2018-User-Guide-Publication/TurboCAD-2018-User-Guide-Publication/Annotation/Drawing-Symbols/Weld-Symbols/
2021-09-16T19:34:32
CC-MAIN-2021-39
1631780053717.37
[array(['../../Storage/turbocad-2018-user-guide-publication/weld-symbols-img0001.png', 'img'], dtype=object) array(['../../Storage/turbocad-2018-user-guide-publication/weld-symbols-img0003.png', 'img'], dtype=object) ]
docs.imsidesign.com
Header(bpy_struct) base class — bpy_struct class bpy.types.Header(bpy_struct) Editor header containing UI elements. bl_idname Type: string, default "", (never None) bl_region_type The region where the header is going to be used in (defaults to header region). Type: enum in ['WINDOW', 'HEADER', 'CHANNELS', 'TEMPORARY', 'UI', 'TOOLS', 'TOOL_PROPS', 'PREVIEW', 'HUD', 'NAVIGATION_BAR', 'EXECUTE', 'FOOTER', 'TOOL_HEADER'], default 'HEADER' bl_space_type The space where the header is going to be used in. EMPTY Empty. VIEW_3D 3D Viewport, Manipulate objects in a 3D environment. IMAGE_EDITOR UV/Image Editor, View and edit images and UV Maps. NODE_EDITOR Node Editor, Editor for node-based shading and compositing tools. SEQUENCE_EDITOR Video Sequencer, Video editing tools. CLIP_EDITOR Movie Clip Editor, Motion tracking tools. DOPESHEET_EDITOR Dope Sheet, Adjust timing of keyframes. GRAPH_EDITOR Graph Editor, Edit drivers and keyframe interpolation. NLA_EDITOR Nonlinear Animation, Combine and layer Actions. TEXT_EDITOR Text Editor, Edit scripts and in-file documentation. CONSOLE Python Console, Interactive programmatic console for advanced editing and script development. INFO Info, Log of operations, warnings and error messages. TOPBAR Top Bar, Global bar at the top of the screen for global per-window settings. STATUSBAR Status Bar, Global bar at the bottom of the screen for general status information. OUTLINER Outliner, Overview of scene graph and all available data-blocks. PROPERTIES Properties, Edit properties of active object and related data-blocks. FILE_BROWSER File Browser, Browse for files and assets. SPREADSHEET Spreadsheet, Explore geometry data in a table. PREFERENCES Preferences, Edit persistent configuration settings. Type: enum in ['EMPTY', 'VIEW_3D', 'IMAGE_EDITOR', 'NODE_EDITOR', 'SEQUENCE_EDITOR', 'CLIP_EDITOR', 'DOPESHEET_EDITOR', 'GRAPH_EDITOR', 'NLA_EDITOR', 'TEXT_EDITOR', 'CONSOLE', 'INFO', 'TOPBAR', 'STATUSBAR', 'OUTLINER', 'PROPERTIES', 'FILE_BROWSER', 'SPREADSHEET', 'PREFERENCES'], default 'EMPTY' classmethod bl_rna_get_subclass(id, default=None) Parameters: id (string) – The RNA type identifier. Returns: The RNA type or default when not found. Return type: bpy.types.Struct subclass Inherited Properties Inherited Functions
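The reference above only lists the class's properties; as a usage illustration, a minimal custom header might look like this (the draw() callback and the register calls are standard bpy idioms not shown on this page):

import bpy

class TEXT_HT_example_header(bpy.types.Header):
    # Draw into the Text Editor's header; any space type listed above works.
    bl_space_type = 'TEXT_EDITOR'

    def draw(self, context):
        layout = self.layout
        layout.label(text="Hello from a custom header")

def register():
    bpy.utils.register_class(TEXT_HT_example_header)

def unregister():
    bpy.utils.unregister_class(TEXT_HT_example_header)

if __name__ == "__main__":
    register()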
https://docs.blender.org/api/current/bpy.types.Header.html
2021-09-16T19:14:58
CC-MAIN-2021-39
1631780053717.37
[]
docs.blender.org
If you join Kindcow, you definitely want to get more KIND, right? We have good news for you! Kindcow Finance allows you to provide liquidity by adding your tokens to liquidity pools, or "LPs", making it easy to earn plenty of rewards. The Kindcow reward per block is 1.1 KIND; read about the KIND distribution here. Besides that, we have many farm options for you: FSXU - BNB WHIRL - KIND KIND - ETH KIND - KEBAB KIND - Cake WBST - BUSD KIND - BNB BNB - BUSD KIND - BUSD ANI - BNB WHIRL - BNB ORE - KIND OGC - BNB ORE - ORM OGC - BUSD OGC - KIND DUSA - BNB You can choose whichever one you like and add your LP tokens to it. After that, you just need to wait for the KIND rewards to come to you! 🎉 Every farm has a different APR and AllocPoint, so it's up to your strategy.
https://docs.kindcow.finance/yield-farming/farms
2021-09-16T19:03:52
CC-MAIN-2021-39
1631780053717.37
[]
docs.kindcow.finance
If you no longer require Site Recovery Manager, you must follow the correct procedure to cleanly unregister Site Recovery Manager. Deploying Site Recovery Manager, creating inventory mappings, protecting virtual machines by creating protection groups, and creating and running recovery plans makes significant changes on both Site Recovery Manager sites. Before you unregister Site Recovery Manager, you must remove all Site Recovery Manager configurations from both sites in the correct order. If you do not remove all configurations before unregistering Site Recovery Manager, some Site Recovery Manager components, such as placeholder virtual machines, might remain in your infrastructure. If you use Site Recovery Manager with vSphere Replication, you can continue to use vSphere Replication after you unregister Site Recovery Manager. - (Optional) If you use array-based replication, remove all array pairs. - Select an array pair, click Array Pair, and click Disable. - Click Array Manager Pair and click Remove. - Click Break Site Pair. Breaking the site pairing removes all information related to registering Site Recovery Manager with Site Recovery Manager, vCenter Server, and the Platform Services Controller on the remote site. - Log in to the Site Recovery Manager Appliance Management Interface as admin. - Click Summary, and click Unregister. - Provide the required credentials, review the information, and click Unregister. Important: Unregistering the Site Recovery Manager Appliance deletes the embedded database. This process cannot be reversed. - Repeat the procedure on the other site.
https://docs.vmware.com/en/Site-Recovery-Manager/8.4/com.vmware.srm.install_config.doc/GUID-14953B96-43EA-4691-96FF-0657B489EAA5.html?hWord=N4IghgNiBcIK4DsBOBTA5gSwM4BcVJAF8g
2021-09-16T20:08:23
CC-MAIN-2021-39
1631780053717.37
[]
docs.vmware.com
Displaying color legend In an interactive map visual, CDP Data Visualization enables you to display a color legend for circles. - On the right side of Visual Designer, click the Settings menu. - In the Settings menu, click Circles. - To show the color legend for the circles, select the Add Circles Color Legend option. This option is off by default. Make sure you have one aggregate field in the Colors shelf. Here is the Google Map with Circles, plotting two measures: elevation and count of features. Notice that the first measure appears as colors, which you can check in the color legend, while the second measure displays as size, which you can see in the area legend. Similarly, the Mapbox map with Circles plots the same two measures: elevation and count of features. Elevation appears as colors, and feature count is represented by the area of the circle.
https://docs.cloudera.com/data-visualization/cdsw/howto-customize-visuals/topics/viz-display-circles-color-legend.html
2021-05-06T10:25:17
CC-MAIN-2021-21
1620243988753.91
[]
docs.cloudera.com
crdctl Utility scripts/crdctl is a utility for managing the lifecycle of the GitLab CRD. It helps you to create or delete the CRD. You may find it useful for more advanced use-cases such as development or CI-managed environments. Usage crdctl ACTION [PREFIX] Currently only create and delete actions are supported, which respectively create/update or delete the GitLab CRD. You can pass an optional prefix for the GitLab CRD. This prefix will be added to the group name of the GitLab CRD. It can be used to distinguish different CRDs in a cluster. For example, GitLab Chart CI uses this feature to separate CRDs of different pipelines. When you decide to use a CRD prefix, you need to pass it to the Chart as well, so the Operator will be able to work with the expected CRD. To do so, use the gitlab.operator.crdPrefix value. You need kubectl. For versions prior to v1.14 you also need kustomize. To use an external kustomize, set the KUSTOMIZE_CMD environment variable, e.g. export KUSTOMIZE_CMD="kustomize build".
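Putting the pieces together, a typical prefixed workflow might look like this (the prefix value and the Helm release name are arbitrary examples):

# Create (or update) the GitLab CRD with a "ci-1234" prefix on its group name
scripts/crdctl create ci-1234

# Point the chart, and therefore the Operator, at the prefixed CRD
helm upgrade --install gitlab gitlab/gitlab --set gitlab.operator.crdPrefix=ci-1234

# Tear down: delete the prefixed CRD when the environment is removed
scripts/crdctl delete ci-1234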
https://docs.gitlab.com/charts/installation/crdctl.html
2021-05-06T10:39:27
CC-MAIN-2021-21
1620243988753.91
[]
docs.gitlab.com
Creating an OVHcloud DNS zone for a domain name Find out how to create an OVHcloud DNS zone for your domain name via the OVHcloud Control Panel Last updated 5th May 2020 A Domain Name System (DNS) zone is a domain name’s config file. It is composed of technical information, otherwise called ‘records’. DNS zones are usually used to link your domain name to the server (or servers) that host your website and email addresses. For a number of reasons, you may need to create a DNS zone for your domain name at OVHcloud. Find out how to create an OVHcloud DNS zone for your domain name via the OVHcloud Control Panel. First of all, log in to the OVHcloud Control Panel. Click Order in the services bar on the left-hand side, then DNS zone. In the page that pops up, enter the domain name you would like to create an OVHcloud DNS for. Then wait a few moments for the tool to carry out its verifications on the domain name. If a message appears notifying you that the DNS zone cannot be created, check that the domain name follows the requirements listed above, or ask the person managing it to do this for you. Once you have ensured that the domain name meets all requirements and is correctly configured, try again. Once the verifications are complete, you must choose whether to enable the minimal records for the DNS zone you are going to create. The way you set your DNS records is not permanent. You can change the records after you have created the DNS zone. Once you have selected an option, continue following the next steps until you have created the DNS zone. Now that your domain name’s DNS zone has been created, you can edit it. This step is optional, but it may be essential if you want to ensure that any services linked to your domain name do not experience any downtime (e.g. your website and email services). If you would like to edit this DNS zone, in the OVHcloud Control Panel, click Domains in the services bar on the left-hand side, then choose the domain name concerned. Go to the DNS Zone tab. If you have just created the DNS zone but the domain name doesn’t appear under the list of services in the Domains section, please wait a few moments, then reload the page. Once it appears, make the required changes. To learn more about how to edit a DNS zone, please read our guide to Editing an OVHcloud DNS zone. Once you have modified your domain name’s OVHcloud DNS zone, you will need to allow 4-24 hours for the changes to fully propagate and take effect. Once the OVHcloud DNS zone is ready to be used, you can then link it to your domain name. To do this, you will need to retrieve the details for the OVHcloud DNS servers activated for your domain name in the OVHcloud Control Panel. The servers will appear below Name Servers. Once you have the details, edit your domain name’s DNS servers using the interface supplied by your domain name’s service provider. Once you have modified the DNS zone configuration, you will need to allow 48 hours for the changes to fully propagate.
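After updating the DNS servers with your provider, you can check propagation from a terminal; a quick sketch, assuming the dig utility is installed (replace example.com with your domain and use the name servers shown in your control panel):

# List the name servers currently published for the domain
dig NS example.com +short

# Query one of the OVHcloud DNS servers (as shown under Name Servers) directly
dig @<name-server-from-control-panel> example.com A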
https://docs.ovh.com/sg/en/domains/create_a_dns_zone_for_a_domain_which_is_not_registered_at_ovh/
2021-05-06T10:07:33
CC-MAIN-2021-21
1620243988753.91
[]
docs.ovh.com
Version 2021.1 (7.1.0) Substance Painter 2021.1 (7.1.0) introduces several new features and improvements such as the geometry mask and the copy and paste of effects in the layer stack. Release date: 28 January 2021 This release raises the minimum supported version of Ubuntu to 18.04 and macOS to 10.14. For more details see the technical requirements. Major Features New Geometry Mask The Geometry Mask is a new masking tool in the layer stack that allows you to hide geometry based on mesh names or UV Tiles. It is an evolution of the previously named UV Tile Mask, which masked geometry based on UDIM numbers. This new tool is a better way of masking geometry than regular painting (or using the Polygon Fill) because it benefits from several engine optimizations. It is also non-destructive, as it doesn't store geometry information (like faces or vertices) but instead just the mesh name or the UV Tile number, so re-importing a mesh won't break the mask. Another benefit is that hiding geometry permits painting on surfaces that weren't accessible before within a Texture Set; this avoids the need to split an object into several Texture Sets, for example. - New geometry mask on layers The geometry mask is automatically available on any layer in the layer stack. By default it has no effect, meaning the layer is fully visible. The Geometry Mask has its own contextual menu that allows you to quickly select or deselect all its items, but also to copy its values to another layer. - Editing the geometry mask properties The geometry mask follows the same logic as the other layer contexts (like editing a mask or instanced properties). To enter the geometry mask editing mode, simply click on the dotted square at the right of a layer. To exit the geometry mask, click on the content or paint mask of the same layer. - Masking by mesh names or by UV Tiles At the top of the Geometry Mask properties is a dropdown that controls the masking mode. It is possible to choose between masking by UV Tile number or by mesh name. This dropdown is disabled and set to mesh name only in case a project doesn't use the UV Tile workflow. - Masking geometry via the properties When editing the Geometry Mask, the properties window will display a list of the mesh names (or UV tiles) based on the geometry related to the current Texture Set. - The number above the list indicates how many meshes/UV tiles are unmasked over the total available. - The menu next to the number gives quick controls to select all or none of the items, and even invert the current selection. - The list below defines which items are masked or not. Like other lists in the application, it is possible to click and drag to enable/disable several items at once, or even use ALT+Click to isolate an item. - Masking geometry via the viewport The Geometry Mask selection can also be changed in the 2D and 3D views. Simply move the mouse over the part that should be visible/hidden and click on it to toggle its state. When editing the Geometry Mask, masked geometry is displayed with a gray, diagonal-lines effect. It is also possible to make rectangular selections by clicking and dragging to select multiple items at once. Painting hidden / unreachable geometry. After selecting geometry to mask out in the Geometry Mask, it is possible to enable the Hide/Ignore excluded geometry button at the top of the viewport (or by pressing the ALT+H shortcut).
When enabled, excluded geometry will be hidden (as well as other Texture Sets) to only show geometry that is included/paintable with the current layer. This option allows you to paint areas that were previously blocked or out of reach. This option also applies to any kind of layer. Painting with masked-out geometry remains dynamic: if geometry blocking the painting was hidden when the brush stroke was made and is later unmasked, it will block the previously made brush stroke again. New Copy and Paste of Layer Stack Effects Effects can now be copied across layers and layer stacks, the same way as regular layers. Multi-selection is also possible, so multiple effects can be copied and pasted at once. For convenience, copying or moving a mask effect to a layer without a mask will automatically add one. This is because effects from the layer content and mask are not compatible with each other. This means copying an effect from a mask into the content of a layer will automatically switch to the mask (or create one). - Copy and paste via the contextual menu Right-click on any effect in the layer stack of a Texture Set and choose the cut or copy action. Then right-click again on any layer and choose paste to move or create a copy of the desired effects. We also took the opportunity to rework the contextual menu and give access to more functionalities: - Copy and paste with the keyboard shortcuts Same as with any layer, the keyboard shortcuts CTRL+C (copy)/CTRL+X (cut) and CTRL+V (paste) can be used to copy effects based on the current selection. Like with layers, effects are inserted above the current selection. - Quickly duplicate effects with keyboard shortcuts Use CTRL+D to duplicate the current selection, or press and hold ALT while dragging any effect to duplicate it at a desired location: - Move effects across layers In addition to copying and duplication, it is now possible to move an effect by simply dragging and dropping it from one layer to another: New General Features and Improvements Several improvements have been made in this release: - Add a description per UV Tile A description can now be added for each UV Tile via the Texture Set List. This makes the project easier to navigate, especially when exporting and baking, as the descriptions can also be seen in these contexts. To add or edit a description, simply click on a UV Tile in the Texture Set List window and then go into the Texture Set Settings window to edit it. - New layer stack thumbnails The optimized layer stack thumbnails have been improved. A material sphere is now displayed for fill layers, making it easier to navigate and see the main properties of each layer, even when working with the UV Tiles workflow. The thumbnail is generated from the layer information but doesn't take effects into account, to avoid being recomputed too often. - Improved Geometry Mask exit in the layer stack Exiting the Geometry Mask (formerly UV Tile Mask) could prove difficult with folders in the layer stack if the folder didn't have a mask. This is because there was no other context to switch to other than selecting another layer. It is now possible to click on the folder thumbnail to exit the Geometry Mask. It is also possible to drag and drop materials or smart materials from the Shelf into the viewport while editing the Geometry Mask.
- New Alt-Click selection in Baking window The list of mesh maps in the baking window can now be filtered with Alt + mouse click to isolate a specific map to bake, instead of having to exclude maps manually. The same shortcut can be used to re-enable all the mesh maps. - New bake current Texture Set button A new button has been added at the bottom of the Baking window to make it quick and easy to re-bake a Texture Set. Using this button won't affect the custom selection that was previously defined and instead will bake the whole Texture Set (including all its UV Tiles if any are available). New Substance Engine Update The Substance Engine has been updated to version 8 to support the latest Substance file format and its functionalities. For more details on the new Substance Engine features, take a look at this documentation page. New Nvidia RTX 3000 Support in Iray The Iray renderer has been updated to its latest version and now supports the new Nvidia Ampere GPUs (RTX 3000 Series and Quadro A Series). With this update, Kepler GPUs (GeForce 600 and 700 series) are no longer supported; rendering in Iray will be done on the CPU instead. New Content Three new stitch tools have been added in this release and can be used to create complex patterns and realistic stitches. To find them, simply go to the Tool section of the Shelf and look for: - Stitches Complex - Stitches Cross Seam - Stitches Straight It is recommended to activate the Lazy Mouse feature in the contextual toolbar to increase the quality of the painted stitches. Below is an overview of all the presets contained inside these new tools: New Python Functionalities The Python API received several new functionalities. The documentation has also been reworked, notably its examples, to make it easier to understand and learn the API. - Resources and shelves management The resource module has been improved and can now: - Create and manage shelves. - Search or import resources in shelves and projects. - Know if a shelf is being crawled (allowing to know when resources are ready to use). - Assign custom thumbnails to resources in a shelf. - UV Tiles information It is now possible to query the UV Tile list of a Texture Set. This opens up the possibility of creating custom exports on a specific range of UDIM tiles, for example. - Project editing status A new function and events have been added to know if a project can be edited. This is useful to know when a computation is in progress and modifying the properties of a project is not possible. The Python API documentation is accessible from the help menu of the application.
Tutorials Below are our video tutorials covering the new features: Release Notes 2021.1 (Released January 28, 2021) Summary: Major release, new Geometry Mask which allows to select and paint parts of the geometry, copy/paste effects in the layer stack, improvement of UV Tile workflow, update of Iray, Bakers, Substance Engine and new content Added: - New geometry mask to select and paint parts of the geometry - [Geometry Mask] Allow to paint selected parts of geometry by mesh names - [Geometry Mask] Rectangular selection in both viewports - [Geometry Mask] Allow to hide/ignore excluded geometry on any layer - [Geometry Mask][Properties] Quick selection for checkboxes with click and drag - [Geometry Mask][Properties][UI] Include/Exclude all with a dropdown in Properties window - [Geometry Mask][Properties] Allow to quickly select one item in a list with ALT+LEFT CLICK - [Geometry Mask][Properties] Overlay in viewports when hovering Mesh names/UV Tiles in Properties window - [Geometry Mask][Layer Stack] Add Copy/Paste options to the geometry mask - [Geometry Mask] New icon for Hide/ignore excluded geometry button - [Geometry Mask] New tooltip for Hide/ignore excluded geometry - [Geometry Mask] Keyboard shortcut ALT+H to toggle on/off "hide ignore excluded geometry" button - [UV Tiles][Layer Stack] New Fill layer sphere preview thumbnail for UV Tiles and simplified mode - [UV Tiles][Layer Stack] Allow to easily exit the UV Tile mask - [UV Tiles][Texture Set List] Allow to give a description per UV Tile - [UV Tiles][Texture Set Settings][UI] Two new section titles in the dropdown menu to change UV Tile resolution - [UV Tiles][Viewport] Exit UV Tile Mask when dragging a material into the viewport - [Layer Stack] Add Copy/Paste options for effects - [Layer Stack] Allow to copy/paste effects from one Texture Set to another - [Layer Stack] Allow multi-selection of effects - [Layer Stack] Add copy/paste options as shortcuts for layer effects - [Layer Stack] Automatically switch between mask and content when dragging effects to another layer - [Layer Stack] Automatically create a mask when pasting a mask from another layer - [Layer Stack] Add move effect actions inside the effects' contextual right click menu - [Layer Stack] Allow to drag and drop effects from one layer to another - [Layer Stack] Dragging items onto a folder places them on the top of the folder - Update Iray to version 2020.1.0 - [Bakers] Update Bakers to version 2.5.4 - [Bakers] Display individual UV Tiles in the baking progress window - [Bakers][UI] Allow to quickly bake the current Texture Set with a new button - [Bakers] Allow user to quickly select one of the bakers with ALT+LEFT CLICK - Update Substance Engine to version 8.0.8 - [Substance Engine] Support Default Color in new .sbsar files - [Auto Unwrap] Performance improvement - [Export] Add visual feedback to indicate which UV Tile's resolution differs from project's default - [Export] Add scene size factor into exported shader json file - [Language] Add Japanese translation - [UI] Update About window with versioning of internal dependencies - [Scripting][Python] Allow to manage Shelf resources - [Scripting][Python] Allow to know when a project is ready for baking and exporting - [Scripting][Python] Allow to know when a Shelf has finished crawling resources on disk - [Scripting][Python] Allow to query the list of UV tiles per Texture Sets - [Scripting][Python] Allow to assign custom preview to Shelf resources - [Scripting][Python] Allow to manage custom shelves - 
[Scripting][Python] Add a method index in each submodule in the documentation - [Scripting][Python] New style for the documentation - [Scripting][Python] Improvement of resources and Shelf documentation - [Content] Three new tool presets to make stitches - [Shelf] Temporarily remove "Export to Substance Share" while transitioning to the new Substance Share platform Fixed: - Crash when using monitors with different resolutions - Crash in Substance Engine with some rare projects - Viewport refresh fails with Hide/Ignore Excluded Geometry when switching layers - [2D View] 2D Viewport can be missing on some projects - [Baking] "Match by mesh name" ignores parts of the object - [Layer Stack] Clicking on a layer effect opens folder - [Geometry Mask] UV Tile is still counted in mask even when reimporting the mesh without it - [Geometry Mask] Right click menu in the viewport does not provide the correct tools - [Engine] Heavy lags on particular projects - [Scripting] High latency with remote JSON POST requests on Windows - [Linux] VRAM amount is not detected properly with specific integrated GPUs - [Auto Unwrap] Crashes or long unwrap on some projects
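As an illustration of the UV Tile query mentioned above, a sketch follows; the module and function names are assumptions inferred from the notes, not verified API, so check the bundled Python documentation before relying on them:

# Hypothetical sketch; exact names may differ from the real API.
import substance_painter.textureset

for ts in substance_painter.textureset.all_texture_sets():
    # Query the UV Tile list of this Texture Set (method names assumed).
    tiles = ts.all_uv_tiles()
    print(ts.name(), [(tile.u, tile.v) for tile in tiles])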
https://docs.substance3d.com/spdoc/version-2021-1-7-1-0-205358288.html
2021-05-06T10:10:35
CC-MAIN-2021-21
1620243988753.91
[]
docs.substance3d.com
Dbrandt From Xojo Documentation ContainerControl I recently added a table of Events to the ContainerControl page. Then you edited it to say that ContainerControl inherits from Window, and you modified the events table to point to Window event definitions. The event table now does not match the IDE in 3 ways: - the event table does not include GotFocus, but the IDE does - the event table does not include LostFocus, but the IDE does - the event table includes CancelClose, but the IDE does not I feel like this material is now incorrect, but maybe I am missing something... Eliza and other new pages Hi, I'm quite impressed with the amount of recent pages added to this wiki. Are they copied from the old guide or designed anew? I noticed a few things that should probably be improved: - If they're all part of a tutorial, then please add a link, maybe at the very bottom, of all such pages, that brings one back to an entry page that lists all the tutorial pages, or something like this. E.g., the Eliza page just stands there on its own, although it suggests at the start that it's part of several lessons. - The Eliza page has a few literal "" appearances. - The Eliza sample code is a bit outdated. E.g., it uses the now-deprecated "f.OpenAsTextFile". It should be using TextInputStream.Open(f) with a try/catch wrapper instead now. TTemplmann 15:29, 8 March 2011 (UTC) Hi, yes, yes, and yes. Geoff asked me to update the Curriculum; it has not gotten any attention in 4 years. I don't know whether he has plans to use it more generally than as a set of lessons for students. I think it covers useful material that is beyond the tutorial, and it is a different approach than is in the Users' Guide. So probably it should be made generally available, but not yet! I left off somewhere in the Eliza lesson last night, so I'm not close to being ready to add more links to it. Yesterday, I built the lessons up to the Eliza chapter in RB2011 and noted several items that have been deprecated. I know: they used EditFields and StaticTexts in the apps in addition to that deprecated method. I saw that. A relatively subtle point was that the ListBox was named "WordList" but "WordList" is now a reserved word, as it is used by the MS Office Automation system. This was all wikified before me, and some chapters use the "#" symbol for numbered instructions, while others use the "<li>" tag. I am going to convert them all to the "#" convention, like the other docs. Also, the code was all enclosed in "<div>" statements, indicating that the work was done before they had the "<rbcode>" tag. I am updating everything to "<rbcode>". You can tell where I left off by where the "<div>" tags start appearing for code snippets. After an initial formatting pass, I will go through and work through the lessons, making any content changes that are still remaining. Dave Cool. BTW, it may help if you use an editor that supports Regular Expressions, such as TextWrangler, so that you can more easily convert all those tags. TTemplmann 18:03, 8 March 2011 (UTC) I'm using BBEdit, but there's no substitute for actually reading it for content! Especially knowing that it was written 4 years ago and there may be deprecated items lurking anywhere. I'm happy to read the code afterwards and give some comments on what needs changing if you want to do the final work. Just let me know if the pages are done and I'll add my comments to them. TTemplmann 18:19, 8 March 2011 (UTC) Thanks! It's not ready yet, though.
Let me walk through a provisional copy first, so you are not wasting time correcting things that I hadn't gotten to yet.
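For reference, the modernized pattern mentioned above (TextInputStream.Open with a try/catch wrapper) would look roughly like this; a sketch, assuming a plain-text file picker:

Dim f As FolderItem = GetOpenFolderItem("text/plain")
If f <> Nil Then
  Try
    Dim stream As TextInputStream = TextInputStream.Open(f)
    Dim contents As String = stream.ReadAll
    stream.Close
  Catch e As IOException
    MsgBox("The file could not be opened: " + e.Message)
  End Try
End If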
https://docs.xojo.com/User_talk:Dbrandt
2021-05-06T09:24:55
CC-MAIN-2021-21
1620243988753.91
[]
docs.xojo.com
Blender is a free and open-source 3D computer graphics software toolset (maintained by the Blender Foundation) used for creating animated films, visual effects, art, 3D-printed models, motion graphics, interactive 3D applications, virtual reality, and computer games. The tool is widely accepted in the VFX and 3D content industries and has seen significant market penetration with its growing user base, at over 1M downloads a month as of 2019. The following pages outline how it can be leveraged to produce compelling multi-view renders for Leia displays:
https://docs.leialoft.com/developer/blender-sdk/using-blender-for-litbyleia-displays
2021-05-06T09:49:09
CC-MAIN-2021-21
1620243988753.91
[]
docs.leialoft.com
Store Demo Notice. [Figure: Storefront demo notice] Set the store demo notice: On the Admin sidebar, go to Content > Design > Configuration. In the grid, find the store view that you want to configure and click Edit in the Action column. Under Other Settings, expand the HTML Head section. Scroll down to the bottom and set Display Demo Store Notice to your preference. When complete, click Save Configuration. If prompted to update the cache, click Cache Management in the system message and follow the instructions to refresh the cache.
https://docs.magento.com/user-guide/design/demo-notice.html
2021-05-06T10:22:06
CC-MAIN-2021-21
1620243988753.91
[]
docs.magento.com
Satellite to satellite tracking in the space-wise approach Sharifi, Mohammad A. Univ. Stuttgart Monograph, published version, English Sharifi, Mohammad A., 2006: Satellite to satellite tracking in the space-wise approach. Univ. Stuttgart, 172 S., DOI 10.23689/fidgeo-330. The launch of the CHAMP mission in 2000 has renewed interest in the recovery of the geopotential field from satellite observations, which has been a challenging research issue for decades. It was the first dedicated gravity field mission, and it was followed by the GRACE spacecraft. In the GRACE mission, the high-low (HL-SST) and the low-low satellite-to-satellite tracking (LL-SST) observations are combined and the resultant observables are expressed in terms of the gravity gradient at the barycenter of the two satellites. Each observation at its respective evaluation point can be written in terms of the spherical harmonic coefficients. Consequently, the observations are a sequence of discrete time series which are mathematically related to the unknown coefficients via the corresponding positions of the satellites at the evaluation epoch. In this approach, which is called the time-wise approach, the determination of the unknown coefficients becomes possible after plugging the observations into the mathematical model. Fulfilling the sampling theorem, however, leads to a huge linear system of equations with a large number of unknowns. As an alternative, one can employ the semi-analytical approach, which is derived from the time-wise approach by imposing some approximations. Observations are still considered as discrete time series on an ideal geometry with a constant radius and/or constant inclination. The coefficients are reordered and then computed via the lumped coefficients or using a 2D FFT.
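For context, the spherical harmonic coefficients referred to in the abstract are those of the standard expansion of the Earth's gravitational potential (a textbook formula, not quoted from the monograph itself):

V(r,\theta,\lambda) = \frac{GM}{r} \sum_{n=0}^{N} \left(\frac{R}{r}\right)^{n} \sum_{m=0}^{n} \left( \bar{C}_{nm} \cos m\lambda + \bar{S}_{nm} \sin m\lambda \right) \bar{P}_{nm}(\cos\theta)

where R is the Earth's reference radius, \bar{P}_{nm} are the fully normalized associated Legendre functions, and \bar{C}_{nm}, \bar{S}_{nm} are the unknown coefficients to be estimated.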
https://e-docs.geo-leo.de/handle/11858/00-1735-0000-0001-31D7-D
2021-05-06T10:00:18
CC-MAIN-2021-21
1620243988753.91
[]
e-docs.geo-leo.de
- Summary of major changes - Problematic Helm 2.15 - Upgrade path from 2.x - Upgrade from 2.6.x - Major Changes - Upgrade path from 1.x - Known issues and limitations - Release cadence - Kubernetes deployment support - Technical support GitLab Cloud Native Chart 3.0 We have bumped the chart version to 3.0 in order to take in several major changes in our chart dependencies, some of which require manual actions to complete the upgrade. Summary of major changes - The bundled PostgreSQL database is upgraded to 10.9 - The NGINX Ingress has been updated to work with Helm 3.0 - The version of Redis has changed from 4.x to 5.x and switched to the upstream Redis chart - The Prometheus chart has been upgraded from 9.0.x to 10.0.x - The Sidekiq deployments have a new name and label selector - Other minor changes that also required the version bump can be found linked in our 3.0 release epic Problematic Helm 2.15 Helm v2.15.x has a severe bug, and absolutely should not be used. If you are to use Helm 2, use 2.14.3 or >= 2.16.1. Upgrade path from 2.x In order to upgrade to the 3.0 version of the chart, you first need to upgrade to the latest 2.6.x release of the chart. Check the version mapping details for the latest patch. If you don't first upgrade to the latest 2.6.x patch, the upgrade to 3.0.0 (GitLab 12.7.0) will fail. It is required to upgrade to the last minor version in a major version series first before jumping to the next major version. Please follow the upgrade documentation and upgrade to GitLab Helm Chart version 2.6.0 before upgrading to 3.0.0. Upgrade from 2.6.x Upgrading to the 3.0 chart requires manual upgrade steps in order to update some of the components. Please follow the upgrade steps for the 3.0 release. Major Changes PostgreSQL As part of the 3.0.0 release of this chart, we upgraded the bundled PostgreSQL chart from 0.11.0 to 7.7.3. This updates the database version from 9.6 to 10.9. This is not a drop-in replacement. Manual steps need to be performed to upgrade the database. The 3.0 upgrade steps include the manual steps required during upgrade. Further details can be found in our PostgreSQL upgrade issue. 9.6 is still supported, though we recommend upgrading to PostgreSQL 10. NGINX Ingress We addressed issue #1710, and that change will fix future upgrades, but it requires a manual intervention when upgrading from a version of the chart prior to 3.0. The 3.0 upgrade steps include the manual steps required during upgrade. Further details on this can be found in our troubleshooting documentation, under Immutable Field Error, spec.clusterIP. Redis As part of our Redis upgrade we've dropped our fork of the Redis and Redis HA charts and have instead switched to using a newer version of the upstream Redis chart. This brings with it an update to Redis 5.x, with improved performance. - For users of the previous bundled Redis chart, there will be no changes required to upgrade to the new Redis version. - For users of the previous Redis HA chart, there are some additional flags you need to enable to put Redis in an HA configuration. - For users of an external Redis database, the syntax for disabling the bundled database has changed to redis.install=false (from redis.enabled=false). Prometheus The Prometheus chart has been updated to 10.0.0. This brings in the latest changes for the chart, which include removing deprecated APIs that were preventing installation into Kubernetes 1.16.
This component does not require any manual upgrade steps, but it is required that users have already upgraded to the 9.0 Prometheus chart before upgrading further. We included 9.0 in GitLab Helm chart 2.5.0, so we placed the new version in this 3.0 release, which requires users who are upgrading to be on the GitLab Helm chart 2.6.0 release or newer. See our Prometheus upgrade issue for further details. Sidekiq Selectors Previously, the Sidekiq chart did not assign unique selectors to deployments. This prevented deployments from being able to properly identify their Sidekiq pods and clean up as necessary. These selectors are immutable fields in the Deployment Spec, so in order to update them, the Sidekiq deployments need to be deleted, then recreated. As part of the 3.0.0 release, this is done automatically by Helm by appending -v1 to the name of the Sidekiq Deployments, HPAs, and Pods. Additional details can be found in the troubleshooting documentation for Immutable Field Error, spec.selector. Upgrade path from 1.x You first need to upgrade to the 2.6.x release of the charts before upgrading to 3.0. For Kubernetes deployment support, we use 1.12.10 in our automated tests, and 1.13.11 is also supported.
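For example, a user running an external Redis would switch the disable flag as part of the 3.0 upgrade; a sketch (the release name, chart repository, and Redis host are illustrative):

# Before (2.x), the bundled Redis was disabled with: --set redis.enabled=false
# With 3.0, the equivalent is:
helm upgrade gitlab gitlab/gitlab \
  --version 3.0.0 \
  --set redis.install=false \
  --set global.redis.host=redis.example.com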
https://docs.gitlab.com/charts/releases/3_0.html
2021-05-06T09:30:31
CC-MAIN-2021-21
1620243988753.91
[]
docs.gitlab.com
Direct3D 12 variable-rate shading sample This sample illustrates how to use variable-rate shading (VRS) to improve application performance. VRS adds the concept of subsampling, where shading can be computed at a level coarser than a pixel. For example, a group of pixels can be shaded as a single unit, and the result is then broadcast to all samples in the group. This is great for areas of the image where extra detail doesn't help—such as those behind HUD elements, transparencies, blurs (depth-of-field, motion, etc.), and optical distortions due to VR optics. Requirements GPU and driver with support for DirectX 12 Ultimate Controls SPACE: Toggles light animation. ALT + ENTER: Toggles between windowed and fullscreen modes. [+/-]: Increments/decrements the glass refraction scale. CTRL + [+/-]: Increments/decrements the fog density. [F1-F5]: Selects a preset for multiple Shading Rates. [1-7]: Selects Shading Rate for the Refraction pass. SHIFT + [1-7]: Selects the Shading Rate for the Scene pass. CTRL + [1-7]: Selects the Shading Rate for the Postprocess pass. Recommended scenarios to try Hit SPACE to stop the light animating and then use the [F1-F5] keys to toggle between presets for Shading Rates. Can you spot the visual difference between F1 and F2? Try experimenting with the various controls to find an acceptable balance between degraded-visuals and performance.
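For reference, the core VRS API the sample exercises is small. A minimal sketch of querying support and setting a per-draw shading rate (this assumes an existing device and an ID3D12GraphicsCommandList5 command list; error handling is elided):

#include <d3d12.h>

// Query hardware support for variable-rate shading (Tier 1 / Tier 2).
D3D12_FEATURE_DATA_D3D12_OPTIONS6 options6 = {};
device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6, &options6, sizeof(options6));

if (options6.VariableShadingRateTier != D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED)
{
    // Shade 2x2 pixel blocks as a single unit for coarse passes
    // (blurs, refraction, postprocess), then restore full rate.
    commandList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    // ... record draws for the coarse pass ...
    commandList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
}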
https://docs.microsoft.com/en-us/samples/microsoft/directx-graphics-samples/d3d12-variable-rate-shading-sample-win32/
2021-05-06T11:28:23
CC-MAIN-2021-21
1620243988753.91
[array(['media/screenshot.png', 'Variable Rate Shading GUI'], dtype=object)]
docs.microsoft.com
Data Migration Considerations Overview Migrating data is a near-universal requirement when migrating to a cloud solution of any type. While Admins are responsible for migrating data into their Virtual Desktops, NetApp's experience is available and has proven invaluable for innumerable Customer migrations. The Virtual Desktop environment is simply a hosted Windows environment, so any methods desired can likely be accommodated. User profiles (Desktop, Documents, Favorites, etc.) File Server Shares Data Shares (App data, databases, backup caches) The User (typically H:\) drive: This is the mapped drive visible for each User. This is mapped back to the <DRIVE>:\home\CustomerCode\user.name\ path. Each user has their own H:\ drive and cannot see another User's. The Shared (typically I:\) drive: This is the shared mapped drive visible for all users. This is mapped back to the <DRIVE>:\data\CustomerCode\ path. All users can access this drive. Their level of access to contained folders/files is managed in the Folders section of VDS. Generic migration process: Replicate data to the Cloud Environment. Move data to the appropriate path for the H:\ and I:\ drives. Assign appropriate permissions in the Virtual Desktop environment. FTPS transfers & considerations Migration with FTPS: If the FTPS server role was enabled during the CWA deployment process, gather FTPS credentials by logging into VDS, navigating to Reports, and running the Master Client Report for your organization. Upload data. Move data to the appropriate path for the H:\ and I:\ drives. Assign appropriate permissions in the Virtual Desktop environment via the Folders module. Enabling Migration Mode is easy – navigate to the organization, then scroll down to the Virtual Desktop Settings section and check the box for Migration Mode, then click Update. To enable that setting, connect to CWMGR1 and navigate to the CwVmAutomationService program, then enable PCI v3 compliance. Sync tools and considerations Enterprise File Sync and Share, often referred to as EFSS or sync tools, can be extremely useful in migrating data, as the tool will capture changes on each side until cutover. Tools like OneDrive, which comes with Office 365, can help you sync fileserver data. It is also useful for VDI User deployments, where there is a 1:1 relationship between the User and the VM, as long as the User doesn't attempt to sync shared content onto their VDI Server when shared data can be deployed once to the Shared (typically I:\) drive for the whole organization to use. Migrating SQL and Similar Data (Open Files) Mailbox (.ost) files, QuickBooks files, Microsoft Access files, and SQL databases are held open and change as a whole: if one single element of the entire file changes (1 new email appears, for example) or of the database (1 new record is entered into an app's system), then the entire file is different and standard sync tools (Dropbox, for example) will think it is an entirely new file and needs to be moved again. There are specialized tools available for purchase from 3rd party providers, if desired. Another common way these migrations are handled is via providing access to a 3rd party VAR, who often have streamlined processes for importing/exporting databases. Shipping drives Many data center providers no longer ship hard drives – either that, or they require you to follow their specific policies and procedures. Microsoft Azure is enabling organizations to use Azure Data Box, which Admins can take advantage of by coordinating with their Microsoft representatives.
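To make the "move data to the appropriate path" step concrete, a server-side copy might look like the following (the customer code ACME, the user name, and the drive letters are hypothetical; adjust to your environment):

REM Copy a staged user's files into their VDS home (H:) path
REM D:\home\ACME\jane.doe stands in for <DRIVE>:\home\CustomerCode\user.name
robocopy C:\staging\users\jane.doe D:\home\ACME\jane.doe /E /COPY:DAT /R:1 /W:1

REM Copy shared data into the Shared (I:) path, <DRIVE>:\data\CustomerCode
robocopy C:\staging\shared D:\data\ACME /E /COPY:DAT /R:1 /W:1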
https://docs.netapp.com/us-en/virtual-desktop-service/Architectual.migrate_data_into_vds.html
2021-05-06T11:05:04
CC-MAIN-2021-21
1620243988753.91
[]
docs.netapp.com
Renaming features Sometimes the business asks to change the name of a feature. Broadly speaking, there are 2 approaches to that task. They trade off immediate effort against future complexity/bug risk: - Complete: rename everything in the repository. - Pros: does not increase code complexity. - Cons: more work to execute, and higher risk of immediate bugs. - Façade: rename as little as possible; only the user-facing content like interfaces, documentation, error messages, etc. - Pros: less work to execute. - Cons: increases code complexity, creating higher risk of future bugs. When to choose the façade approach The more of the following that are true, the more likely you should choose the façade approach: - You are not confident the new name is permanent. - The feature is susceptible to bugs (large, complex, needing refactor, etc.). - The renaming is difficult to review (feature spans many lines, files, or repositories). - The renaming is disruptive in some way (database table renaming). Consider a façade-first approach The façade approach is not necessarily a final step. It can (and possibly should) be treated as the first step, where later iterations accomplish the complete rename.
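To make the façade concrete, a minimal sketch in Ruby (the feature and method names are invented for illustration): only the user-facing name changes, while the code, database, and APIs keep the old one.

# Hypothetical rename: "epics" is shown to users as "workstreams".
class Group
  # New, user-facing name: a thin façade over the legacy implementation.
  def workstreams_enabled?
    epics_enabled? # legacy name survives in code and schema
  end
end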
https://docs.gitlab.com/ee/development/renaming_features.html
2021-05-06T10:05:31
CC-MAIN-2021-21
1620243988753.91
[]
docs.gitlab.com
Managing renewal for OVHcloud services Find out how to manage automatic renewal for your services via the OVHcloud Control Panel Last updated 14th July 2020 You can manage renewals and cancellations for your services via the OVHcloud Control Panel. Find out how to manage automatic renewal for your services via the OVHcloud Control Panel. Depending on your place of residence, local legislation, and the solutions concerned, the details in this guide may vary or not apply to your situation. If you are unsure about any details, please refer to your OVHcloud contracts. You can view these via the OVHcloud Control Panel by going to the My services section, then Contracts. This guide is not applicable for US services. If you are a customer of OVHcloud US, please refer to the guide for your region. When you place orders, your services are set to be automatically renewed on their expiry date. Payments are taken via the default payment method saved in the OVHcloud Control Panel. You can cancel your services whenever you want via the OVHcloud Control Panel, and they will not be renewed after their expiry date has passed. You can also set certain products (domains, hosting plans, VPS, dedicated servers) to manual renewal, if you do not want payments and renewals to be carried out automatically. We recommend reading the following guides, and you can focus on the operations you wish to carry out. View the status of your services. This guide will help you check if your services are renewed automatically. You can also check their renewal and expiry dates. Manage renewal for your services. This guide will help you enable or disable automatic renewal, change the payment frequency for a service, and pay for renewals before their expiry date. Manage your payment methods. You can use this guide to ensure that you have a payment method saved for future renewals. You can also add and delete payment methods, if necessary. Log in to the OVHcloud Control Panel. Click on the name associated with your NIC handle (Customer ID) in the menu bar in the top right-hand corner, then select Products and services. The “My services” page contains a table for managing your OVHcloud services. On this page, you can find the service names, types, service availability (e.g. if it is suspended), their status (renewal type, actions required, etc.), and the date by which you need to take action. You can sort the columns by ascending or descending order, use the search field, or apply a filter to only display a selection of your services that match a chosen set of criteria. Your filter criteria are then displayed above the table. Here is an example of a filter that displays domain names with bills awaiting payment. When you subscribe to a service, it is set to automatic renewal by default. This setting means you can ensure your services are systematically renewed on their expiry date. Also, if you have registered a payment method in the OVHcloud Control Panel, it will be used to pay for your bills automatically. If you have not registered any payment methods, you will be sent a bill via email. You can then pay it online. For services with an automatic renewal frequency higher than 1 month (3 months, 6 months, 12 months), you will also be sent an email reminder the month before, listing the services that will need to be renewed soon. For some OVHcloud products (domains, web hosting plans, VPS, dedicated servers), you can switch to manual renewal.
This renewal mode is useful if you are not sure whether you want to keep the service until its expiry date, or if you do not want payments to be taken automatically via your payment method. If you select this mode, you will receive several reminder emails before the expiry date, each containing a link for renewing your services online before the expiry date. You can also pay via the OVHcloud Control Panel. If you do not pay for a service in manual renewal, it is suspended on its expiry date, then deleted after a few days. To the right of each service, click on the ... in the “Actions” column to set renewal for your services. Depending on whether or not a service is eligible for manual renewal, some actions may not be available. Depending on the service you have chosen, you can set it to manual renewal, or choose a frequency for automatic renewal. If your service is eligible, you can choose the renewal type and frequency. Depending on your choice, you will be given information on future payment dates, the payment method that will be used, and the service’s expiry date. This action will redirect you to an online payment interface. You can renew a service at any time before the expiry date, and choose the renewal duration. In this case, the duration of validity you subscribe to will be added to the current validity duration. You will not lose any remaining validity time. This action is available for services set to automatic renewal. By choosing this action, automatic payments and renewals are disabled for the service you have selected. If you have services set to automatic renewal, but have not registered a payment method for paying your bills, a “Bill to pay” comment will be displayed when a bill is awaiting payment. Then click Pay my bill, which will redirect you to an online payment interface. You can perform group actions by selecting several services in the table, then clicking Actions. The table below details the group actions you can perform.
https://docs.ovh.com/ca/en/billing/how-to-use-automatic-renewal-at-ovh/
2021-05-06T09:32:22
CC-MAIN-2021-21
1620243988753.91
[]
docs.ovh.com
TextOutputStream.Append

From Xojo Documentation

Shared Method

TextOutputStream.Append(f as FolderItem) As TextOutputStream

New in 2009r5. Supported for all project types and targets.

Opens the passed file so that text can be appended to its existing text.

Notes

If no file exists at the specified location, one is created. If the file cannot be created or opened for appending, an IOException is raised. The append is done by calling Write or WriteLine. The Append shared method replaces the deprecated FolderItem.AppendToTextFile.

Example

This example appends the text in TextField1 to the text file that was opened by GetOpenFolderItem:
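A minimal sketch of such an example, written in the classic (pre-API 2.0) syntax that GetOpenFolderItem belongs to; the "text/plain" file-type filter string and the TextField1 control are assumptions, so adjust them to your project:

// Ask the user to pick an existing text file (filter string is a placeholder)
Dim f As FolderItem = GetOpenFolderItem("text/plain")
If f <> Nil Then
  // Open the file for appending and add the TextField's contents as a new line
  Dim output As TextOutputStream = TextOutputStream.Append(f)
  output.WriteLine(TextField1.Text)
  output.Close
End If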
https://docs.xojo.com/TextOutputStream.Append
2021-05-06T10:51:55
CC-MAIN-2021-21
1620243988753.91
[]
docs.xojo.com
We designed a clean and simple SQL query editor ✨

Once you open the Query tab for any data asset, you can start typing ⌨️ your queries in the editor and run ✅ them to explore your data. An intuitive design and color palette, easy shortcuts, and meta information about the query make the editor easy to use 🍃 Let's look into its features and how we can use them to our benefit.

There's a "Help" ℹ️ button at the top of the editor. Click it to see shortcuts for faster SQL writing and editing, like Ctrl/Cmd + Enter to run the query. 📄 Use the "Copy" button to easily copy your SQL query and paste it somewhere else. If you want the full space to write your query, just click on the "Expand" button at the end. This will take you to a full screen 🖥️ mode.

The editor auto-completes SQL functions as you write. 😎 Cool Hack: Press Ctrl/Cmd + Shift + L to auto-format your SQL query. No need to put in extra effort to make your code look clean. Just use this shortcut!

On the Discover page, click on the data table name for which you want to write the SQL query. Click on the second tab (the Query tab) to open the editor. The editor will be in front of you 🎉 to start typing and running queries ▶️

All SQL functions are supported by the editor. You can even join two tables together using JOIN statements. However, this editor is just for querying and not for writing changes back to the source database.

👀 Note: The query that runs on the editor is executed at the source of the data catalog. For example, if a table is stored in the Snowflake warehouse, the query will be executed there itself.

There is a bunch of information displayed on the editor screen to help you write a better query. If you only want to look at the columns of the table, just click the "Columns" option. It will list each column name with its data type.

🌟 Pro Tip: The column count always shows up right above the table. If you want to check the row count as well, click the "Get rows count" option right next to it.

You can check how long a query took to run. The time is displayed right next to the "Run Query" button. If you want to quickly copy the data showing in the table, just click on the option "Copy Data" or "Copy Columns". You can then paste the data into an Excel sheet, PowerPoint, Notepad, etc.

👀 Note: The table will only show a maximum of 10,000 rows, so more than this number of rows cannot be copied.

Whenever you mention a column while writing a query, the editor will display the classification and terms attached to that column name for further context.

Go query to glory 🌼 and experience working with the SQL editor inside Atlan yourself!
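As an illustration of such a join, here is a small query sketch; the table and column names (orders, customers, and so on) are invented for the example, so substitute assets from your own catalog:

SELECT o.order_id, c.customer_name
FROM orders o
JOIN customers c
  ON o.customer_id = c.customer_id
LIMIT 100;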
https://docs.atlan.com/collaborating-on-your-data/sql-editor
2021-05-06T09:43:47
CC-MAIN-2021-21
1620243988753.91
[]
docs.atlan.com
The Salt Windows Software Repository provides a package manager and software repository similar to what is provided by yum and apt on Linux. It permits the installation of software using the installers on remote Windows machines. In many senses, the operation is similar to that of the other package managers salt is aware of:

- the pkg.installed and similar states work on Windows.
- the pkg.install and similar module functions work on Windows.
- each Windows minion must have pkg.refresh_db executed against it to pick up the latest version of the package database.

High level differences to yum and apt are:

- The repository metadata (sls files) is hosted through either salt or git.
- Packages can be downloaded from within the salt repository, a git repository, or from http(s) or ftp URLs.
- No dependencies are managed. Dependencies between packages need to be managed manually.

The install state/module function of the Windows package manager works roughly as follows:

1. Execute pkg.list_pkgs and store the result.
2. Check if any action needs to be taken (i.e. compare the required package and version against the pkg.list_pkgs results).
3. If so, run the installer command.
4. Execute pkg.list_pkgs and compare to the result stored from before installation.
5. Success/failure/changes are reported based on the differences between the original and final pkg.list_pkgs results.

If there are any problems in using the package manager, it is likely to be due to the data in your sls files not matching the difference between the pre and post pkg.list_pkgs results.

By default, the Windows software repository is found at /srv/salt/win/repo. This can be changed in the master config file (default location is /etc/salt/master) by modifying the win_repo variable. Each piece of software should have its own directory which contains the installers and a package definition file. This package definition file is a YAML file named init.sls. The package definition file should look similar to this example for Firefox: /srv/salt/win/repo/firefox/init.sls

Firefox:
  17.0.1:
    installer: 'salt://win/repo/firefox/English/Firefox Setup 17.0.1.exe'
    full_name: Mozilla Firefox 17.0.1 (x86 en-US)
    locale: en_US
    reboot: False
    install_flags: ' -ms'
    uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
    uninstall_flags: ' /S'
  16.0.2:
    installer: 'salt://win/repo/firefox/English/Firefox Setup 16.0.2.exe'
    full_name: Mozilla Firefox 16.0.2 (x86 en-US)
    locale: en_US
    reboot: False
    install_flags: ' -ms'
    uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
    uninstall_flags: ' /S'
  15.0.1:
    installer: 'salt://win/repo/firefox/English/Firefox Setup 15.0.1.exe'
    full_name: Mozilla Firefox 15.0.1 (x86 en-US)
    locale: en_US
    reboot: False
    install_flags: ' -ms'
    uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
    uninstall_flags: ' /S'

The version number and full_name need to match the output from pkg.list_pkgs so that the status can be verified when running highstate. Note: it is still possible to successfully install packages using pkg.install even if they don't match, which can make this hard to troubleshoot.

salt 'test-2008' pkg.list_pkgs
test-2008
    ----------
    7-Zip 9.20 (x64 edition):
    Firefox 17.0.1 (x86 en-US):
        17.0.1
    Mozilla Maintenance Service:
        17.0.1
    NSClient++ (x64):
        0.3.8.76
    Notepad++:
        6.4.2
    Salt Minion 0.16.0:
        0.16.0

If any of these preinstalled packages already exist in winrepo, the full_name will be automatically renamed to their package name during the next update (running highstate or installing another package).

test-2008:
    ----------
    7zip:
    Maintenance Service:
        17.0.1
    Notepad++:
        6.4.2
    Salt Minion 0.16.0:
        0.16.0
    firefox:
        17.0.1
    nsclient:
        0.3.9.328

Add msiexec: True if using an MSI installer requiring the use of msiexec /i to install and msiexec /x to uninstall. The install_flags and uninstall_flags are flags passed to the software installer to cause it to perform a silent install. These can often be found by adding /? or /h when running the installer from the command line. A great resource for finding these silent install flags can be found on the WPKG project's wiki.

7zip:
  9.20.00.0:
    installer: salt://win/repo/7zip/7z920-x64.msi
    full_name: 7-Zip 9.20 (x64 edition)
    reboot: False
    install_flags: ' /q '
    msiexec: True
    uninstaller: salt://win/repo/7zip/7z920-x64.msi
    uninstall_flags: ' /qn'

Add cache_dir: True when the installer requires multiple source files. The directory containing the installer file will be recursively cached on the minion. This only applies to salt: installer URLs.

sqlexpress:
  12.0.2000.8:
    installer: 'salt://win/repo/sqlexpress/setup.exe'
    full_name: Microsoft SQL Server 2014 Setup (English)
    reboot: False
    install_flags: ' /ACTION=install /IACCEPTSQLSERVERLICENSETERMS /Q'
    cache_dir: True

Once the sls file has been created, generate the repository cache file with the winrepo runner:

salt-run winrepo.genrepo

Then update the repository cache file on your minions, exactly how it's done for the Linux package managers:

salt '*' pkg.refresh_db

Now you can query the available version of Firefox using the Salt pkg module.

salt '*' pkg.available_version Firefox

{'Firefox': {'15.0.1': 'Mozilla Firefox 15.0.1 (x86 en-US)',
             '16.0.2': 'Mozilla Firefox 16.0.2 (x86 en-US)',
             '17.0.1': 'Mozilla Firefox 17.0.1 (x86 en-US)'}}

As you can see, there are three versions of Firefox available for installation. You can refer to a software package by its name or its full_name surrounded by single quotes.

salt '*' pkg.install 'Firefox'

The above line will install the latest version of Firefox.

salt '*' pkg.install 'Firefox' version=16.0.2

The above line will install version 16.0.2 of Firefox. If a different version of the package is already installed, it will be replaced with the version in winrepo (only if the package itself supports live updating). You can also specify the full name:

salt '*' pkg.install 'Mozilla Firefox 17.0.1 (x86 en-US)'

Uninstall software using the pkg module:

salt '*' pkg.remove 'Firefox'
salt '*' pkg.purge 'Firefox'

pkg.purge just executes pkg.remove on Windows. At some point in the future, pkg.purge may direct the installer to remove all configs and settings for software packages that support that option.

In order to facilitate managing a Salt Windows software repo with Salt on a Standalone Minion on Windows, a new module named winrepo has been added to Salt. winrepo matches what is available in the salt runner and allows you to manage the Windows software repo contents. Example:

salt '*' winrepo.genrepo

Windows software package definitions can also be hosted in one or more git repositories. The default repo is one hosted on GitHub.com by SaltStack, Inc., which includes package definitions for open source software. This repo points to the HTTP or ftp locations of the installer files. Anyone is welcome to send a pull request to this repo to add new package definitions.

Configure which git repos the master can search for package definitions by modifying or extending the win_gitrepos configuration option list in the master config. After updating win_gitrepos, compile your package repository cache and then refresh each minion's package cache:

salt-run winrepo.update_git_repos
salt-run winrepo.genrepo
salt '*' pkg.refresh_db

If the package seems to install properly but salt reports a failure, then it is likely you have a version or full_name mismatch. Check the exact full_name and version used by the package. Use pkg.list_pkgs to check that the names and version exactly match what is installed. Ensure you have (re)generated the repository cache file and then updated the repository cache on the relevant minions:

salt-run winrepo.genrepo
salt 'MINION' pkg.refresh_db
https://ansible-cn.readthedocs.io/en/latest/topics/windows/windows-package-manager.html
2021-05-06T08:50:59
CC-MAIN-2021-21
1620243988753.91
[]
ansible-cn.readthedocs.io
Managing PostgreSQL extensions

This guide documents how to manage PostgreSQL extensions for installations with an external PostgreSQL database. The following extensions must be loaded into the GitLab database: pg_trgm and btree_gist.

In order to install extensions, PostgreSQL requires the user to have superuser privileges. Typically, the GitLab database user is not a superuser. Therefore, regular database migrations cannot be used to install extensions; instead, extensions have to be installed manually prior to upgrading GitLab to a newer version.

Installing PostgreSQL extensions manually

In order to install a PostgreSQL extension, this procedure should be followed:

1. Connect to the GitLab PostgreSQL database using a superuser, for example:

sudo gitlab-psql -d gitlabhq_production

2. Install the extension (btree_gist in this example) using CREATE EXTENSION:

CREATE EXTENSION IF NOT EXISTS btree_gist

3. Verify the installed extensions:

gitlabhq_production=# \dx
                                     List of installed extensions
    Name    | Version |   Schema   |                            Description
------------+---------+------------+-------------------------------------------------------------------
 btree_gist | 1.5     | public     | support for indexing common datatypes in GiST
 pg_trgm    | 1.4     | public     | text similarity measurement and index searching based on trigrams
 plpgsql    | 1.0     | pg_catalog | PL/pgSQL procedural language
(3 rows)

On some systems you may need to install an additional package (for example, postgresql-contrib) for certain extensions to become available.

A typical migration failure scenario

The following is an example of a situation when the extension hasn't been installed before running migrations. In this scenario, the database migration fails to create the extension btree_gist because of insufficient privileges.

== 20200515152649 EnableBtreeGistExtension: migrating =========================
-- execute("CREATE EXTENSION IF NOT EXISTS btree_gist")

GitLab requires the PostgreSQL extension 'btree_gist' installed in database 'gitlabhq_production', but the database user is not allowed to install the extension.

You can either install the extension manually using a database superuser:

CREATE EXTENSION IF NOT EXISTS btree_gist

Or, you can solve this by logging in to the GitLab database (gitlabhq_production) using a superuser and running:

ALTER USER <username> WITH SUPERUSER

This query will grant the user superuser permissions, ensuring any database extensions can be installed through migrations.

In order to recover from this situation, the extension needs to be installed manually using a superuser, and the database migration (or GitLab upgrade) can be retried afterwards.
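As a quick sanity check (standard PostgreSQL, not specific to GitLab), you can verify that the required extensions are available for installation on your server before creating them:

SELECT name, default_version
FROM pg_available_extensions
WHERE name IN ('btree_gist', 'pg_trgm');

If an extension is missing from this view, install the operating-system package that provides it (such as postgresql-contrib, mentioned above) and re-run the query.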
https://docs.gitlab.com/ee/install/postgresql_extensions.html
2021-05-06T09:47:39
CC-MAIN-2021-21
1620243988753.91
[]
docs.gitlab.com
It is possible to connect to the Nethermind node using web3.py (the Python implementation of the web3 API, analogous to web3.js). You will need to have web3.py installed; see the following guides.

You may use the script below to check your connection. Please make sure you have enabled the JSON RPC module; this can be done by passing the flag --JsonRpc.Enabled true to either Nethermind.Launcher or Nethermind.Runner.

from web3.auto import w3

connected = w3.isConnected()
print(connected)

if connected and w3.clientVersion.startswith('Nethermind'):
    client = w3.clientVersion
    print(client)
else:
    client = None
    print(client)

You should see the following output (depends on the node version):

True
Nethermind/v1.4.8-13-5c66dcdf6-20200120/X64-Linux 5.3.2-050302-generic/Core3.1.1
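Once connected, any standard web3.py call works against the node. A small follow-up sketch; note that attribute names vary slightly between web3.py versions (for example, blockNumber vs. block_number), so adjust to the version you installed:

from web3.auto import w3

if w3.isConnected():
    # Network id reported by the node
    print(w3.net.version)
    # Height of the latest block the node knows about
    print(w3.eth.blockNumber)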
https://docs.nethermind.io/nethermind/guides-and-helpers/web3.py
2021-05-06T08:45:15
CC-MAIN-2021-21
1620243988753.91
[]
docs.nethermind.io
Mailchimp - Working with lists - Sync all of your users - Pass custom fields as merge fields - Retrieving & Updating Lists - Mailchimp API Key - Allow users to manage their subscription status - Mailchimp Addon Installation - My Mailchimp list isn't showing my users - Automatic account details update - Subscribing a user to a Mailchimp list on registration
https://docs.wpusermanager.com/category/78-mailchimp
2021-05-06T09:59:41
CC-MAIN-2021-21
1620243988753.91
[]
docs.wpusermanager.com
Martin is a cultural critic, poet, printmaker, digital ceramicist, installation artist, as well as the first Professor of Digital Creativity at De Montfort University.

Session Title: Layered Leicester

The Virtual Romans was a collaborative project between the Institute of Creative Technologies at De Montfort University, the University of Leicester and Leicester City Council exploring aspects of life in Roman Leicester through the creation of digital models of buildings and artefacts. AI-programmed characters populate the space, and live out their lives in this environment. Gareth Howell and Dave Everitt are currently developing web and mobile-based delivery of aspects of this project, working with Roman artefacts from the Jewry Wall Museum collection to create 3D models, which will be 'placed' virtually in Leicester, to be discovered and interacted with. The project is now expanding to include multiple layers from Roman times to the present day on a locative platform.

The project will use Empedia and the Layar Augmented Reality Browser to develop a locative, experiential way to interact with the objects, creating options for game-play and storytelling. Whilst the project is currently in the early stages of development, it is envisaged that the discovery of and interaction with virtual objects in the city will encourage participation and engagement with the physical artefacts held at the museum. Possible interactions with the objects may include: visualisation of the 3D objects through the Layar Animation API; machinima fly-throughs of key buildings in the virtual environment; links to contextual information about the object and location; the possibility of 'collecting' virtual artefacts and clues, introducing game-play; the use of animated avatars as guides to the city, developing a narrative, story-based experience.
http://i-docs.org/idocs-2012__trashed/speakers-2/martin-reiser/
2021-05-06T10:15:57
CC-MAIN-2021-21
1620243988753.91
[]
i-docs.org
Indentation

CIDER relies on clojure-mode to do the indentation of Clojure code. While clojure-mode will generally indent code "the right way", from time to time you might want to teach it how to indent certain macros. There are two ways to do this - you can either add some indentation configuration in your Emacs config, or you can add it to your Clojure code and let CIDER generate the necessary configuration for clojure-mode automatically. We'll refer to the first approach as "static indentation" and to the second one as "dynamic indentation".

Static Indentation

clojure-mode is smart enough to indent most Clojure code correctly out-of-the-box, but it can't know if something is a macro and its body should be indented differently. clojure-mode is very flexible when it comes to indentation configuration, and here we are going to go over the basics.

Indentation Modes

There are a few common ways to indent Clojure code, and all of them are supported by clojure-mode. The indentation of function forms is configured by the variable clojure-indent-style. It takes three possible values:

always-align (the default)

(some-function
 10
 1
 2)

(some-function 10
               1
               2)

always-indent

(some-function
  10
  1
  2)

(some-function 10
  1
  2)

align-arguments

(some-function
  10
  1
  2)

(some-function 10
               1
               2)

Macro Indentation

As mentioned earlier, clojure-mode can't know if something in your code is a macro that has to be indented differently from a regular function invocation (most likely because the macro takes some forms as parameters). In such a situation, you need to teach clojure-mode how to indent the macro in question. Consider this simple example:

(defmacro with-in-str
  "[DOCSTRING]"
  {:style/indent 1}
  [s & body]
  ...cut for brevity...)

;; Target indentation
(with-in-str str
  (foo)
  (bar)
  (baz))

To get clojure-mode to indent it properly, you'll need to add the following code to your Emacs config:

(put-clojure-indent 'with-in-str 1)
;; or
(define-clojure-indent
  (with-in-str ...))

;; Target indentation
(with-in-str str
  (foo)
  (bar)
  (baz))

And here's a more complex one:

(defmacro letfn
  "[DOCSTRING]"
  {:style/indent [1 [[:defn]] :form]}
  [fnspecs & body]
  ...cut for brevity...)

;; Target indentation
(letfn [(six-times [y]
          (* (twice y) 3))
        (twice [x]
          (* x 2))]
  (println "Twice 15 =" (twice 15))
  (println "Six times 15 =" (six-times 15)))
https://docs.cider.mx/cider/1.1/config/indentation.html
2021-05-06T10:20:35
CC-MAIN-2021-21
1620243988753.91
[]
docs.cider.mx
Does CIDER have a roadmap?

There's no precise roadmap, but there are a few major goals for the (near) future:

- improve session management (make it simpler and more predictable)
- reach parity between the functionality for ClojureScript and Clojure (as it stands today a lot of functionality is Clojure-only)
- integrate the most important refactoring functionality from our sibling project clj-refactor into CIDER

You can find more details in our roadmap document.

Will CIDER eventually support other REPL servers (e.g. the Clojure socket REPL)?

Maybe. Our focus remains making the most out of nREPL, but down the road we might explore investing some time in adding support for additional REPL servers.

Will CIDER eventually support the Clojure 1.10 prepl?

Same answer as above. One thing is certain - prepl is much more convenient for the purposes of CIDER than the plain socket REPL.
https://docs.cider.mx/cider/1.1/faq.html
2021-05-06T08:43:37
CC-MAIN-2021-21
1620243988753.91
[]
docs.cider.mx
About Source Selection Algorithm and Reservations

The heart of Inventory Management tracks every available product virtually and on-hand in your warehouses and stores. The Source Selection Algorithm and Reservations systems run in the background, keeping your salable quantities updated, checkout free of collisions, and shipment options recommended.

Source Selection Algorithm

The Source Selection Algorithm (SSA) analyzes and determines the best match for sources and shipping using the priority order of sources configured in a stock. During order shipment, the algorithm provides a recommended list of sources, available quantities, and amounts to deduct according to the selected algorithm. Inventory Management provides a Priority algorithm and supports extensions for new options.

With multiple source locations, global customers, and carriers with various shipping options and fees, knowing your actual available inventory and finding the best shipment option can be difficult. SSA does the work for you, from tracking inventory salable quantities across all sources to calculating and making recommendations for shipments.

Track Inventory - Using stocks and sources, the SSA checks the sales channel of incoming product requests and determines available inventory:

- Calculates the aggregated virtual salable quantity of all assigned sources per stock: aggregates (Quantity - Out-of-Stock Threshold) per source
- Subtracts the Out-of-Stock Threshold amount from salable quantity to protect against overselling
- Reserves inventory quantities on order submission, deducting from in-stock inventory at order processing and shipment
- Supports backorders with enhanced options for negative thresholds

Manage Shipments - The algorithm helps when you process and ship orders. You can run the algorithm to get recommendations on the best sources for shipping the product, or override the selections to:

- Ship partial shipments, sending only a few products from specific locations and completing the full order at a later date
- Ship the entire order from one source
- Break the shipments across multiple sources in different amounts to keep a balanced stock across all warehouses and stores

SSA is extensible, with third-party support and custom algorithms for recommending cost-effective shipments. SSA functions differently for Virtual and Downloadable products, which may not incur shipping costs. In these cases, the system runs the algorithm implicitly when it creates invoices, and always uses the suggested results. You cannot adjust these results for Virtual and Downloadable products.

Source Priority Algorithm

Custom stocks include an assigned list of sources to sell and ship available product inventory through your storefront. The Source Priority Algorithm uses the order of assigned sources in the stock to recommend product deductions per source when invoicing and shipping the order. When run, the algorithm:

- Works through the configured order of sources at the stock level, starting at the top
- Recommends a quantity to ship and source per product based on the order in the list, available quantity, and quantity ordered
- Continues down the list until the order shipment is filled
- Skips disabled sources if found in the list

To configure, assign and order sources to a custom stock. See Prioritizing Sources for a Stock.

The following example details the mapped sources in order, available quantity, and recommended source and amount to deduct and ship. The top source is a Drop Shipper in the United Kingdom with an available quantity of 240.

Example SSA recommendations for a Mountain Bike

Distance Priority Algorithm

The Distance Priority Algorithm compares the location of the shipping destination address with source locations to determine the closest source to fulfill shipments. The distance may be determined by physical distance or time spent traveling from one location to another, using imported database locations or Google directions (driving, walking, or bicycling).

You have two options for calculating the distance and time to find the closest source for shipment fulfillment:

Google MAP - Uses Google Maps Platform services to calculate the distance and time between the shipping destination address and source locations (address and GPS coordinates). This option uses the source's Latitude and Longitude. You must provide a Google API key with Geocoding API and Distance Matrix API enabled. This option requires a Google billing plan and may incur charges through Google.

Offline Calculation - Calculates the distance using downloaded and imported geocode data to determine the closest source to the shipping destination address. This option uses the country codes of the shipping address and source. To configure this option, you may require developer assistance to initially download and import geocodes using a command line.

To configure, select configurations and complete additional steps such as the Google API key or downloading shipping data. See Configuring Distance Priority Algorithm. SSA also supports custom algorithms.

Reservations

Instead of immediately deducting or adding product inventory quantities, reservations hold inventory amounts until orders ship or cancel. Reservations work entirely in the backend to automatically update your salable quantity at the stock level.

Order reservations

Reservations place holds on inventory quantities, deducted from the salable quantity when submitting an order. The reservations are at the stock level, counting against quantities until the order is invoiced and shipped, canceled, and so on. When shipping the order, you can use the SSA recommendations or manually enter quantity deductions per source. When shipped, the reservations are automatically cleared and the quantity deducted. The salable quantity recalculates for the stock with an updated quantity and any reservation amounts still in the system.

The following diagram helps define the process of reservations during an order and through to shipment:

1. A customer submits an order.
2. Magento checks the current inventory salable quantity.
3. If enough inventory is available at the stock level, a reservation enters, placing a temporary hold for the product SKU (for that stock), and the salable quantity is recalculated.
4. After invoicing the order, you determine the product amounts to deduct and ship from your sources.
5. The shipment is processed and sent from the selected source(s) to the customer. The quantities automatically deduct from the source inventory quantity and reservations clear.

For complete details and examples, see About Order Status and Reservations.

Updating reservations

As changes complete in orders and product amounts, Magento automatically enters reservation compensations. You do not need to enter compensations through the Admin or code to update or clear these holds. Reservations are only affected by entered reservations that put a hold on a quantity or that clear a held amount (compensating the reservations). Here is how they work:

Submitted Order - When an order submits for an amount of products, a reservation enters for that amount. For example, ordering five backpacks from a US website enters a reservation of -5 for that SKU and stock. The salable quantity is reduced by 5.

Canceled Order - When an order is canceled (all or partial), a compensation reservation enters to clear that amount. For example, canceling three backpacks enters a +3 reservation for that SKU and stock, clearing the hold. The salable quantity is increased by 3.

Shipped Order - When an order ships (all or partial), a compensation reservation enters to clear that amount. For example, shipping two backpacks enters a +2 reservation for that SKU and stock, clearing the hold. The product quantity is directly reduced by 2 for the shipment. The calculated salable quantity is also updated for the reduced stock amount, but is no longer affected by the reservation.

All reservations need to be cleared by compensations when orders complete fulfillment, products cancel, credit memos are issued, and so on. If compensations do not clear out reservations, you may have quantities held in stasis, not available for sale and never shipping. If you want to review reservations, a series of command line options are available. You can only review reservations through a command line interface. Using CLI commands may require developer assistance. See the Inventory Management CLI Reference.

If you remove all sources from a product for a stock with pending orders, you may have stuck reservations.
https://docs.magento.com/user-guide/v2.3/catalog/inventory-about-ssa.html
2021-05-06T10:02:25
CC-MAIN-2021-21
1620243988753.91
[array(['/user-guide/v2.3/images/images/inventory/inventory-diagram-ssa-sources.png', None], dtype=object) array(['/user-guide/v2.3/images/images/inventory/inventory-diagram-qty.png', None], dtype=object) array(['/user-guide/v2.3/images/images/inventory/inventory-diagram-reservation.png', None], dtype=object) ]
docs.magento.com
O&O DiskStat O&O DiskStat provides you with an overall view of just how your hard disk is being used. It lets you track down those files and folders that are taking up too much space on your hard disk, and causing your computer to slow down. You can sort by category, file type, view them in Explorer and export them as a table. A further major function to speed up your systems. Understanding the chart The chart displays files or folders and shows their comparative sizes. Individual sizes are shown in the labels. Smaller files and folders are lumped together and displayed as such.
https://docs.oo-software.com/en/oodefrag-20/oo-diskstat-en-20
2021-05-06T09:54:26
CC-MAIN-2021-21
1620243988753.91
[array(['/oocontent/uploads/ood20_oods003.png', 'O&O DiskStat'], dtype=object) ]
docs.oo-software.com
Version 3002.2 is a bugfix release for 3002.

- Change dict check to isinstance instead of type() for key_values in file.keyvalue. (#57758)
- Fail when func_ret is False when using the new module.run syntax. (#57768)
- Fix comparison of certificate values. (#58296)
- When using ssh_pre_flight, if there is a failure, fail on retcode, not stderr. (#58439)
- Fix use of unauthd cached vmware service instance. (#58691)
- Remove use of undefined variable in utils/slack.py. (#58753)
- Restored the ability to specify the amount of extents for a Logical Volume as a percentage. (#58759)
- Ensure that the version check function is run a second time in all the user-related functions, in case the user being managed is the connection user and the password has been updated. (#58773)
- Allow bytes in gpg renderer. (#58794)
- Fix issue where win_wua module fails to load when BITS is set to Manual. (#58848)
- Ensure that elasticsearch.index_exists is available before loading the elasticsearch returner. (#58851)
- Log a different object when debugging if we're using disk cache vs memory cache. The disk cache pillar class has the dict object, but the cache pillar object which is used with the memory cache does not include a _dict object, because it is a dict already. (#58861)
- Do not generate grains for every job run on Windows minions. This makes Windows conform more to the way posix OSes work today. (#58904)
- Fixes salt-ssh authentication when using tty. (#58922)
- Revert LazyLoader finalizer. Removed the weakref.finalizer code. On some occasions, the finalizer would run when trying to load a new module, firing a race condition. (#58947)
https://docs.saltproject.io/en/latest/topics/releases/3002.2.html
2021-05-06T10:49:28
CC-MAIN-2021-21
1620243988753.91
[]
docs.saltproject.io
TrilioVault for RHV provides the capability to take application-consistent backups by utilizing the Qemu-Guest-Agent. The Qemu-Guest-Agent is a component of the qemu hypervisor, which is used by RHV. RHV automatically builds all VMs to be prepared to use the Qemu-Guest-Agent.

The Qemu-Guest-Agent provides many capabilities, including the possibility to freeze and thaw Virtual Machine filesystems. The Qemu-Guest-Agent is not developed or maintained by Trilio. Trilio leverages standard capabilities of the Qemu-Guest-Agent to send freeze and thaw commands to the protected VMs during a backup process.

The Qemu-Guest-Agent needs to be installed inside the VM. It requires a special SCSI interface in the VM definition, which is automatically created by RHV upon spinning up the Virtual Machine. The installation process depends on the Guest Operating System.

yum install qemu-guest-agent
systemctl start qemu-guest-agent

apt-get install qemu-guest-agent
systemctl start qemu-guest-agent

Windows Guests require the installation of the VirtIO drivers and tools. These are provided by Red Hat in a prepared ISO file.

For RHV 4.3 please follow this documentation: RHV 4.3 Windows Guest Agents
For RHV 4.4 please follow this documentation: RHV 4.4 Windows Guest Agents

The Qemu-Guest-Agent calls the fsfreeze-hook script with either the freeze or the thaw argument, depending on the current operation. The fsfreeze-hook script is a normal shell script. It is typically used to do all necessary steps to get an application into a consistent state for the freeze, or to undo all freeze operations upon the thaw. The default path of the fsfreeze-hook script is /etc/qemu/fsfreeze-hook.

The fsfreeze-hook script does not require any special content. It is recommended to branch on the freeze and thaw arguments, which can be achieved, for example, with the following bash code:

#!/bin/bash
case "$1" in
    freeze)
        # Commands for freeze
        ;;
    thaw)
        # Commands for thaw
        ;;
    *)
        echo $"Neither freeze nor thaw provided"
        exit 1
        ;;
esac

This example flushes the MySQL tables to the disks and keeps a read lock to prevent further write access until the thaw has been done. The client path and options at the top ($MYSQL, $MYSQL_OPTS, $FIFO) are assumptions modeled on the stock qemu mysql-flush hook; adjust them to your installation.

#!/bin/bash

# Assumed preamble (modeled on the stock qemu mysql-flush hook);
# adjust the client path and options to match your installation.
MYSQL="/usr/bin/mysql"
MYSQL_OPTS="-uroot"
FIFO=/var/run/mysql-flush.fifo

flush_and_wait() {
    printf "FLUSH TABLES WITH READ LOCK \\G\n"
    trap 'printf "UNLOCK TABLES \\G\n" >&2' HUP INT QUIT ALRM TERM
    read < $FIFO
    printf "UNLOCK TABLES \\G\n"
    rm -f $FIFO
}

case "$1" in
    freeze)
        mkfifo $FIFO || exit 1
        flush_and_wait | "$MYSQL" $MYSQL_OPTS &
        # wait until every block is flushed
        while [ "$(echo 'SHOW STATUS LIKE "Key_blocks_not_flushed"' |\
                 "$MYSQL" $MYSQL_OPTS | tail -1 | cut -f 2)" -gt 0 ]; do
            sleep 1
        done
        # for InnoDB, wait until every log is flushed
        INNODB_STATUS=$(mktemp /tmp/mysql-flush.XXXXXX)
        [ $? -ne 0 ] && exit 2
        trap "rm -f $INNODB_STATUS; exit 1" HUP INT QUIT ALRM TERM
        while :; do
            printf "SHOW ENGINE INNODB STATUS \\G" |\
                "$MYSQL" $MYSQL_OPTS > $INNODB_STATUS
            LOG_CURRENT=$(grep 'Log sequence number' $INNODB_STATUS |\
                tr -s ' ' | cut -d' ' -f4)
            LOG_FLUSHED=$(grep 'Log flushed up to' $INNODB_STATUS |\
                tr -s ' ' | cut -d' ' -f5)
            [ "$LOG_CURRENT" = "$LOG_FLUSHED" ] && break
            sleep 1
        done
        rm -f $INNODB_STATUS
        ;;
    thaw)
        [ ! -p $FIFO ] && exit 1
        echo > $FIFO
        ;;
    *)
        echo $"Neither freeze nor thaw provided"
        exit 1
        ;;
esac
https://docs.trilio.io/rhv/user-guide/preparing-for-application-consistent-backups
2021-05-06T09:35:06
CC-MAIN-2021-21
1620243988753.91
[]
docs.trilio.io
Edit Issue Fields Screen
1.16: Publisher's Age Guidelines
1.17: Notes (issue)
1.18: Keywords
1.19: Comments
2.0: Sequence/Story Screen
2.1: Sequence Number
2.2: Title
2.3: Type
2.4: Feature
2.5: Feature Logo
2.6: Page Count
2.7: Credits
2.7.1: Script
2.7.2: Pencils
2.7.3: Inks
2.7.4: Colors
2.7.5: Letters
2.7.6: Editing
2.8: Genre
    * Official Genres List
2.9: Character Appearances
    * Indexing multiple versions of the same character
2.10: Job Number
2.11: Reprint Notes
2.12: Synopsis
2.13: Notes
2.14: Keywords
2.15: Comments
3.10: ISBN
3.11: Issue Title
3.12: Volume
3.13: Comics Publication
3.14: Publisher's Age Guidelines
3.15: Tracking
3.16: Series Notes
3.17: Keywords
3.18: Imprint - (all are now deleted)
3.19: Comments
4.0: Publisher
4.1: Publisher Name
    * Definition of Publisher (when to create a new publisher)
4.2: Years of Operation
4.3: Country
4.4: URL
4.5: Notes (on Publisher Screen)
4.6: Keywords
4.7: Comments
7.0: Adding/Editing Creator Screen
NOTE: This screen is currently still in beta testing. These instructions and rules are not yet final, and all edits made and data entered at this page will be deleted.
7.1: GCD Official Name
7.2: Creator Names
7.3: Name Type
7.4: Sources Fields
7.5: Relation Type Fields
7.6: Year / Month / Date Fields
7.7: Year / Month / Date Fields
7.8: Who's Who
7.9: Country / Province / City Fields
7.10: Bio
7.11: Notes (Creators)
7.12: School Details (Add Schools button)
7.13: Degree Details (Add Degrees button)
8.0: Adding Creator Influence Screen
NOTE: This screen is currently still in beta testing. These instructions and rules are not yet final, and all edits made and data entered at this page will be deleted. All creator influences must be self-identified by the named creator. Sources for this self-identified influence must be cited.
8.1: Influence Name
8.2: Influence Link
8.3: Notes (Creator Influences)
8.4: Sources Fields
(end of definitions)
Policy Votes Affecting This Topic
http://docs.comics.org/wiki/Formatting_Documentation
2016-07-23T13:04:09
CC-MAIN-2016-30
1469257822598.11
[]
docs.comics.org
This local transform adds a logging ability to your program using Apache Commons logging. Every method call on an unbound variable named log will be mapped to a call to the logger. The annotation lets you pick the logger field name via its value attribute (default "log"), and the logging strategy defaults to CommonsLoggingStrategy.class.
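A short sketch of how the annotation is typically used; the class and method names here are made up for illustration:

import groovy.util.logging.Commons

@Commons
class Greeter {
    void greet() {
        // The transform rewrites this into a guarded call, roughly:
        // if (log.isDebugEnabled()) { log.debug 'Called greet()' }
        log.debug 'Called greet()'
        println 'Hello, world!'
    }
}

The generated guard means the log message is only built when that level is actually enabled, which is the main benefit of using the transform over a hand-rolled logger field.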
http://docs.groovy-lang.org/latest/html/gapi/groovy/util/logging/Commons.html
2016-07-23T13:05:49
CC-MAIN-2016-30
1469257822598.11
[]
docs.groovy-lang.org
Sometimes.
https://docs.kde.org/stable4/en/extragear-office/kmymoney/firsttime.schedules.html
2016-07-23T13:08:15
CC-MAIN-2016-30
1469257822598.11
[array(['/stable4/common/top-kde.jpg', None], dtype=object)]
docs.kde.org
google.cloud.gcp_pubsub_topic module – Creates a GCP Topic

This module is part of the google.cloud collection. To use it in a playbook, specify: google.cloud.gcp_pubsub_topic.

Synopsis

A named resource to which messages are sent by publishers.

Requirements

The below requirements are needed on the host that executes this module.

python >= 2.6
requests >= 2.18.4
google-auth >= 1.3.0

Parameters

Notes

Note: API Reference: Managing Topics.

Examples

- name: create a topic
  google.cloud.gcp_pubsub_topic:
    name: test-topic1
    project: test_project
    auth_kind: serviceaccount
    service_account_file: "/tmp/auth.pem"
    state: present

Return Values

Common return values are documented here; the following are the fields unique to this module:

Collection links

Homepage
Repository (Sources)
https://docs.ansible.com/ansible/latest/collections/google/cloud/gcp_pubsub_topic_module.html
2022-06-25T04:20:34
CC-MAIN-2022-27
1656103034170.1
[]
docs.ansible.com
Start integration with GlobalPay payment platform using Nuvei SDK-PHP. Download Nuvei SDK-PHP available now on GitHub. For quick information about available SDK methods and functionalities, please visit: Nuvei SDK Demo Page. For demo of methods and functionalities available in current version of SDK, please visit: Nuvei SDK Play Page. In order to test a full end-to-end transaction you need to configure your SDK.
https://docs.smart2pay.com/smart2pay-sdk-php/
2022-06-25T05:15:27
CC-MAIN-2022-27
1656103034170.1
[]
docs.smart2pay.com
Scan Your Computer - Part I
Scan Your Computer - Part II
Scan Your Computer - Part III

This tutorial explains how to scan your computer and interpret the results. All scans check for the same types of threats, like viruses and spyware. Use Quick Scan if you have recently run a Full Scan. Although it takes longer, run a Full Scan if you have time.
https://docs.trendmicro.com/en-us/consumer/titanium2014/tutorials/scan_your_computer_1.aspx
2022-06-25T04:54:08
CC-MAIN-2022-27
1656103034170.1
[]
docs.trendmicro.com
The orafce module provides Oracle-compatible functions in Greenplum Database.

Note: Always use the Oracle Compatibility Functions module included with your Greenplum Database version. Before upgrading to a new Greenplum Database version, uninstall the compatibility functions from each of your databases, and then, when the upgrade is complete, reinstall the compatibility functions from the new Greenplum Database release. See the Greenplum Database release notes for upgrade prerequisites and procedures.

The following functions are available by default in Greenplum Database and do not require installing the Oracle Compatibility Functions module: sinh, cosh, tanh, and decode.

The default date and timestamp format in the original orafce module implementation is different than the default format in the Greenplum Database implementation.

Some Oracle Compatibility Functions reside in the oracle schema. To access them, set the search path for the database to include the oracle schema name. For example, this command sets the default search path for a database to include the oracle schema:

ALTER DATABASE <db_name> SET search_path = "$user", public, oracle;

Note the following differences when using the Oracle Compatibility Functions with PostgreSQL vs. using them with Greenplum Database:

- The default date and timestamp formats differ between the original orafce module implementation and the Greenplum Database implementation, as noted above.
- Functions in the dbms_pipe package run only on the Greenplum Database master host.

Refer to the README and Greenplum Database orafce documentation in the Greenplum Database github repository for detailed information about the individual functions and supporting objects provided in this module.
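For example, with the oracle schema on the search path (or with explicit qualification), an Oracle-style function such as sysdate can be called directly; this assumes the compatibility functions are already installed in the database:

-- Explicitly qualified call
SELECT oracle.sysdate();

-- Unqualified call, relying on the search_path set above
SELECT sysdate();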
https://docs.vmware.com/en/VMware-Tanzu-Greenplum/6/greenplum-database/GUID-ref_guide-modules-orafce_ref.html
2022-06-25T04:55:32
CC-MAIN-2022-27
1656103034170.1
[]
docs.vmware.com
Lessons Learnt Templates

Price: $19.00 USD

Every company has times where they need to evaluate their processes and figure out what is working well, and what needs improvement. This process is often called a lessons learnt review. A lessons learnt template can be used by any type of organization in order to make changes for future success.

The Lessons Learnt Templates are a set of templates that will help you to create powerful lessons learnt documentation. These documents are an important tool for all organizations that want to improve their performance and learn from mistakes.

Format: MS Excel and PowerPoint

Benefits of this template:
- The templates will help you to assess your current situation, identify areas that need improvement, develop strategies for change, and then track progress on your goals.
- Helps you document, analyze, and act on the lessons that your organization has learnt since its formation.

Features: The pack contains the following templates:
- Lessons learnt template - Helps you to note down the risks involved, lessons learnt, actions taken, the ownership taken, and the status of actions.
- Project closure - A document that you create to help your team understand the project's progress, what they need to do before ending it, and how they will end the project.
- Project initiation - A document that can be used to set up the goals of your project.
- Project planning - Can be used to plan projects of any size, from small office tasks to large-scale construction projects.
- Project execution - A document used during the project development phase which provides a framework for all team members to follow in order to execute the work.
https://iso-docs.com/products/lessons-learned-template
2022-06-25T05:03:06
CC-MAIN-2022-27
1656103034170.1
[array(['http://cdn.shopify.com/s/files/1/0564/9625/9172/products/LessonsLearnedMeetingAgenda2_1445x.png?v=1643055784', 'lessons learnt, Project closure, lessons learnt template'], dtype=object) array(['http://cdn.shopify.com/s/files/1/0564/9625/9172/products/LessonsLearnedMeetingAgenda_6150fdf8-e475-4a3a-9403-b77785fa1f15_1445x.png?v=1643055784', 'lessons learnt, Project initiation, lessons learnt template'], dtype=object) array(['http://cdn.shopify.com/s/files/1/0564/9625/9172/products/ProjectExecution_7de0d804-1666-4622-9ad5-62c43eb02113_1445x.png?v=1643055784', 'lessons learnt, Project execution, lessons learnt template'], dtype=object) array(['http://cdn.shopify.com/s/files/1/0564/9625/9172/products/projectplanning_c0f72d1c-5aca-46f3-883d-c34858d7d959_1445x.png?v=1643055784', 'lessons learnt, Project planning, lessons learnt template'], dtype=object) array(['http://cdn.shopify.com/s/files/1/0564/9625/9172/products/LessonsLearnedLog_1445x.png?v=1643055784', 'lessons learnt, lessons learnt template'], dtype=object) ]
iso-docs.com
On your Feed will be displayed a "Quick Calendar" showing the current week's schedule of Activities. A few notes about the Quick Calendar: Days with at least one Activity assigned will display a dot under the date. You may click the days to navigate from day to day. If there are no Activities on a day, the number of days until the next Activity will be displayed. When Activities are present on a day, you may click the Activities to go directly to the full Activity view on your Calendar. Don't hesitate to reach out using the support chat below with any questions.
http://docs.sixcycle.com/en/articles/2417701-the-quick-calendar
2022-06-25T03:53:57
CC-MAIN-2022-27
1656103034170.1
[]
docs.sixcycle.com
Getting Started Guide

Getting started with the SDK

To create a script, perform the following steps:

1. Use Citrix Studio to perform the operation that you want to script; for example, to create a catalog for a set of Machine Creation Services machines.
2. Collect the log of SDK operations that Studio made to perform the task.
3. Review the script to understand what each part is doing. This will help you with the customization of your own script.
4. Convert and adapt the Studio script fragment to turn it into a script that is more consumable. To do this:
   - Use variables. Some cmdlets take parameters, such as TaskId. However, it may not be clear where the value used in these parameters comes from, because Studio uses values from the result objects of earlier cmdlets.
   - Remove any commands that are not required.
   - Add some steps into a loop so that these can be easily controlled. For example, add machine creation into a loop so that the number of machines being created can be controlled. (A generic sketch of this pattern appears below.)

Examples

Note: When creating a script, to ensure you always get the latest enhancements and fixes, Citrix recommends you follow the procedure described above rather than copying and pasting the example scripts.
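For illustration only, here is a minimal sketch of that looping pattern. Get-ExampleTask and New-ExampleMachine are hypothetical placeholder cmdlets standing in for whatever your Studio log captured; the point is the shape: capture results in variables, then loop.

# Hypothetical placeholders: substitute the cmdlets recorded in your Studio log.
$task = Get-ExampleTask                  # result object captured into a variable
$machineCount = 5                        # assumption: how many machines to create

for ($i = 1; $i -le $machineCount; $i++) {
    $name = "VM{0:D2}" -f $i
    # Pass the captured TaskId explicitly instead of a hard-coded value
    New-ExampleMachine -TaskId $task.Id -Name $name
}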
https://developer-docs.citrix.com/projects/citrix-virtual-apps-desktops-sdk/en/latest/getting-started/
2022-06-25T04:39:05
CC-MAIN-2022-27
1656103034170.1
[]
developer-docs.citrix.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.

Syntax

Update-IAMAssumeRolePolicy
  -RoleName <String>
  -PolicyDocument <String>
  -Select <String>
  -PassThru <SwitchParameter>
  -Force <SwitchParameter>

PolicyDocument: The regex pattern used to validate this parameter is a string of characters consisting of the following: any printable ASCII character ranging from the space character (\u0020) through the end of the ASCII character range; the printable characters in the Basic Latin and Latin-1 Supplement character set (through \u00FF); and the special characters tab (\u0009), line feed (\u000A), and carriage return (\u000D).

Example

Update-IAMAssumeRolePolicy -RoleName ClientRole -PolicyDocument (Get-Content -raw ClientRolePolicy.json)

This example updates the IAM role named ClientRole with a new trust policy, the contents of which come from the file ClientRolePolicy.json. Note that you must use the -Raw switch parameter to successfully process the contents of the JSON file.

AWS Tools for PowerShell: 2.x.y.z
https://docs.aws.amazon.com/powershell/latest/reference/items/Update-IAMAssumeRolePolicy.html
2022-06-25T05:42:47
CC-MAIN-2022-27
1656103034170.1
[]
docs.aws.amazon.com
Discreet hookups can be a great way to enhance your life. Whether you're looking to spice up your home life, improve your self-esteem, or find a new partner, discreet hookups can be a great solution. To find a discreet hookup, try a website dedicated to the niche. Before you sign up, read the reviews and look through profiles. Ensure that the site is safe and secure to avoid scams and unwanted situations. In addition, you can also request a photo of the person you are interested in.

Discreet hookups are one-time affairs between two people for social or other reasons. Women enjoy discreet hookups because they satisfy their sexual needs and impress friends. Men may also enjoy discreet hookups because they can be an acceptable substitute for long-term relationships. Discreet hookups are a common activity among college students. If you want to be part of a growing trend in college dating, this type of hookup might be the answer.

Although this study does not address why hookups are so uncommon, it does support previous research by investigating the connotations students attach to them. In particular, students in cluster two were more likely to associate meeting up with numerous connotations. In this way, discreet hookups are likely to be the result of mismatched expectations. Therefore, understanding what people mean by discreet hookups can help them make more informed decisions.

Students' definitions of discreet hookups vary significantly. The most common definition is associated with sex. However, students who defined hookups by specific erotic activities were significantly more likely to engage in discreet hookups. Regardless of the context, these findings indicate that hookups are often discreet. So what is a discreet hookup? There are plenty of types of discreet hookups. Find the right one for you.

Although it's important to be aware that hookups differ in appearance, there is no evidence that they are less discreet than we might assume. The fact that they are discreet doesn't mean they're any less real. This study has raised a lot of questions and opened the door to further research on the topic. However, it does enhance our understanding of hookups. We need further research on whether they lead to harmful consequences.

In many instances, discreet hookups are one-time encounters. They aren't relationships and are usually based on sex. Whether you're searching for a sexual relationship, a romantic encounter, or an emotional connection, hookups aren't for everyone. Hookups can be a great option if you're not sure you're ready for a more serious relationship. You can always try dating after a discreet hookup.
https://docs.jagoanhosting.com/subtle-hook-up/
2022-06-25T04:15:05
CC-MAIN-2022-27
1656103034170.1
[]
docs.jagoanhosting.com
I'm trying to set up a rule so that when I get an email from a certain address, sent to certain specific email addresses, it gets moved to another folder. The problem is that there are a lot of these specific addresses. I can set up almost 60 and the rule works fine, but when I try to go higher it doesn't work, and an error appears saying there's insufficient space to store my rules. I need to set up almost 180 but can't because of this. Can anyone help me?
https://docs.microsoft.com/en-us/answers/questions/347957/how-do-i-set-up-a-rule-with-a-lot-of-email-adresse.html
2022-06-25T03:54:42
CC-MAIN-2022-27
1656103034170.1
[]
docs.microsoft.com
By default, the Containerized Data Importer (CDI) reserves space for file system overhead data in persistent volume claims (PVCs) that use the Filesystem volume mode. You can set the percentage that CDI reserves for this purpose globally and for specific storage classes.

When you add a virtual machine disk to a persistent volume claim (PVC) that uses the Filesystem volume mode, you must ensure that there is enough space on the PVC for:

- The virtual machine disk.
- The space that the Containerized Data Importer (CDI) reserves for file system overhead, such as metadata.

By default, CDI reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount. If a different value works better for your use case, you can configure the overhead value by editing the CDI object. You can change the value globally and you can specify values for specific storage classes.

Change the amount of persistent volume claim (PVC) space that the Containerized Data Importer (CDI) reserves for file system overhead by editing the spec.config.filesystemOverhead attribute of the CDI object.

Prerequisites

- Install the OpenShift CLI (oc).

Procedure

1. Open the CDI object for editing by running the following command:

$ oc edit cdi

2. Edit the spec.config.filesystemOverhead fields, populating them with your chosen values:

...
spec:
  config:
    filesystemOverhead:
      global: "<new_global_value>" (1)
      storageClass:
        <storage_class_name>: "<new_value_for_this_storage_class>" (2)

3. Save and exit the editor to update the CDI object.

Verification

View the CDI status and verify your changes by running the following command:

$ oc get cdi -o yaml
https://docs.openshift.com/container-platform/4.10/virt/virtual_machines/virtual_disks/virt-reserving-pvc-space-fs-overhead.html
2022-06-25T05:42:08
CC-MAIN-2022-27
1656103034170.1
[]
docs.openshift.com
iOS Product Setup Setting up your in-app purchases in App Store Connect To set up products for iOS, iPadOS, macOS, tvOS, and watchOS, start by logging into App Store Connect. App Store Connect is Apple's central hub for managing app releases, TestFlight, in-app purchases, and more. This guide assumes basic knowledge of App Store Connect, as well as having an app set up and ready for adding in-app purchases. For more information, visit Apple's documentation and guides for App Store Connect. Make sure Paid Applications Agreement is signed Before you set up your products, make sure you have the latest Paid Applications Agreement signed in in the "Agreements, Tax, and Banking" module in App Store Connect. You will not be able to test in-app purchases until the latest version of this agreement is signed with Apple. Create an In-App Purchase To create an in-app purchase, go to App Store Connect's 'My Apps' page and select your app from the list. In the sidebar, select 'Manage' under In-App Purchases, and click the '+' symbol. You will be presented with a modal where you select the type of in-app purchase you want to add to your app. We're going to show you how to set up an Auto-Renewable Subscription here, but the steps are similar for other types of in-app purchases. If you don't see the Auto-Renewable Subscription option, ensure your developer account has accepted all applicable contracts and have provided tax and banking information in the 'Agreements, Tax, and Banking' section of App Store Connect. Next, you'll be asked to provide a Reference Name and a Product ID. - Reference Name: The reference name will be used on App Store Connect and in Sales and Trends reports from Apple. It won't be displayed to your users on the App Store. We recommend using a human readable description of the purchase you plan to set up. The name can't be longer than 64 characters. - Product ID: The product Id is a unique alphanumeric ID that is used for accessing your product in development and syncing with RevenueCat. After you use a Product ID for one product in App Store Connect, it can’t be used again across any of your apps, even if the product is deleted. It helps to be a little organized here from the beginning - we recommend using a consistent naming scheme across all of your product identifiers such as: <app>_<price>_<duration>_<intro duration><intro price> - app: Some prefix that will be unique to your app, since the same product Id cannot but used in any future apps you create. - price: The price you plan to charge for the product in your default currency. - duration: The duration of the normal subscription period. - intro duration: The duration of the introductory period, if any. - intro price: The price of the introductory period in your default currency, if any. In this case, I want to set up a yearly subscription with a one week trial for $39.99 USD. Using this format I've set my product identifier as: rc_3999_1y_1w0 Pro Tip ☝️ Using a consistent naming scheme across product identifiers in App Store Connect can save you time in the future and make it easier to organize and understand your products with only the identifier. Once you have your product identifiers configured, the last step will be to add them to a Subscription Group. Subscription Groups are ways to organize your products in App Store Connect so users are able to switch between products. You can read more about Subscription Groups in our blog post here. 
If you don't have any Subscription Groups configured yet, you'll be prompted to provide a Reference Name. Similar to the product Reference Name you set earlier, this is not user-facing so we recommend using a string you can understand. Setting Subscription Duration Once your product is created, you'll be able to set the duration of the auto-renewable subscription. Use the duration dropdown to choose an option, and click Save. Setting Subscription Price To set the price of your subscription, click the '+' icon in the Subscription Prices section. You'll be presented with a modal where you can select a Price from a dropdown in your default currency. When you click Next, Apple will automatically set the price in all App Store regions based off the price and currency you selected. You'll have the option to edit these, but we recommend sticking with the defaults. When done, click Create. Last step, don't forget to Save! Adding Introductory Offers and Free Trials To add an introductory offer or free trial to your product, navigate to the Introductory Offers tab on the same page you just configured pricing. Click the '+' icon next to Introductory Offers to set one up. You'll be presented with a modal with a few configuration screens: - Countries or Regions for Introductory Offer: Use this if you want the introductory offer or trial to be region specific. Most of the time the answer here is "no", so go ahead and click Next. - Introductory Offer Start/End Date: Set the start and end dates if you want the introductory offer or trial to be a limited time deal. In most cases, you'll be setting the Start Date to today and No End Date, then click Next. On the last screen, you'll get to choose the Type of Introductory Offer. Free trials are the most common type of introductory offer, and that's what we'll set up here. Select the Free radio button and choose the desired Duration from the dropdown. You can read more about the different Introductory Offer types in our blog post here. Just like with regular prices, don't forget to click Save when you're done. Adding Localization The next piece to set up is localization information for the App Store. This is the name and description of the in-app purchase that the user will see. In the App Store Information section, click the '+' icon next to Localization and choose the language you with to set up. Next, you'll need to provide a Subscription Display Name and a Description. The Subscription Display Name and Description will be visible to the user on the App Store and in their subscription management settings. We recommend a short display name that describes the level of access the purchase unlocks, and we recommend using the same Subscription Display Name for all of your products that unlock the same level of access. Using the same name will result in a cleaner App Store listing and cause less confusion among users as your suite of products grow. Pro Tip ☝️ Use the same Subscription Display Name and Description for all of your products that unlock the same level of access. This results in a much cleaner App Store listing as your suite of products grows. Add Reviewer Information The last part of setting up an in-app purchase in iOS is adding information for the reviewer. This is a Screenshot, and optional Review Notes. Often times developers overlook the screenshot, but you'll be unable to submit your product for review without it. - Screenshot: A required image of your in-app purchase paywall for the reviewer. 
While testing, it's okay to upload a placeholder 640 x 920 image here. Before submitting for review, you should add a picture of your paywall.
- Review Notes: An optional text area to clarify anything about your in-app purchase for the reviewer.

Subscription Groups

If you're configuring products for the first time and just set up a subscription group, you may see a warning in App Store Connect: "Before you can submit your in-app purchase for review, you must add at least one localization to your subscription group. Add localizations." Clicking on the Add localizations link will take you to the Subscription Group configuration. Similar to how you added localizations to the product, you'll need to add localizations to the Subscription Group as well. Next, you'll need to provide a Subscription Group Display Name and an App Name. Like the Subscription Display Name you set up earlier, this will be visible to the user on the App Store and in their subscription management settings.

Subscription Group Display Name: Just like the product localizations, we recommend a short display name that describes the level of access the subscription group unlocks. If you use a multi-subscription-group strategy for things like price testing, we recommend using the same Subscription Group Display Name for all of your subscription groups that unlock the same level of access.

App Name: Apple provides you with a couple of options for the app display name that users will see on their subscription. You can choose your app name from the App Store listing, or a Custom Name. Using a Custom Name is useful if your App Store listing title is slightly different than your app name. For example, if your App Store listing was titled "VSCO - Photo Filters", you may want to use a Custom Name for your subscriptions of just "VSCO".

Pro Tip ☝️ Use the same Subscription Group Display Name if you plan on creating multiple Subscription Groups that unlock the same content. Typically these types of strategies are used for price testing and offering discounts.

Don't forget to click Save before exiting.

Next Steps

Cross Platform: If your app is cross-platform, check out our guides for setting up products for Google Play or Stripe.

Integrate with RevenueCat: If you're ready to integrate your new App Store Connect in-app product with RevenueCat, continue with our product setup guide.
To successfully use the Cisco Umbrella roaming client, the following prerequisites must be met.

Supported Operating Systems
- Windows 10 with .NET 4.5
- Windows 8 (includes 8.1) (64-bit) with .NET 4.5
- Windows 7 SP1 (64-bit/32-bit) with .NET 3.5
- macOS 10.11 or newer

Unsupported Operating Systems
- Windows Server (all versions)
- Windows RT (currently, we do not support ARM processors)
- macOS 10.10 or older

Network Access

Host Names: The Umbrella roaming client uses hostnames for registration. All machines must have a hostname that is unique within your organization.

DNS: The Umbrella roaming client uses standard DNS ports 53/UDP and 53/TCP to communicate with Umbrella. If you explicitly block access to third-party DNS servers on your corporate or home network, you must add the following allow rules in your firewall. The Umbrella roaming client automatically encrypts DNS queries when it senses that 443/UDP is open.
- 443/TCP to 146.112.255.101, 67.215.71.201, 67.215.92.210, and 146.112.255.152/29 (8 IPs), for sync.hydra.opendns.com, crl3.digicert.com, and crl4.digicert.com
- IPv6: 2620:0:cc1:115::210 and 2a04:e4c7:ffff::20/125 (8 IPs)

In the table above, the IP addresses resolve to:
- disthost.umbrella.com
- api.opendns.com
- disthost.opendns.com

The DigiCert domains resolve to various IP addresses based on the CDN and are subject to change. The Umbrella infrastructure is Anycast and may change; these domains resolve to the following IPs:
- 146.112.63.3 to 146.112.63.9
- 146.112.63.11 to 146.112.63.13

Currently, the roaming client only supports connecting to the Umbrella cloud resources using IPv4. This will change as the services that the roaming client requires become available over IPv6.

Windows Operating System

When using the roaming client on Windows, some network locations may observe a yellow triangle NCSI network connectivity indicator badge. This may prevent Outlook or some Office applications from fetching network content. A setting from Microsoft is available to resolve this issue via a GPO setting or registry key. For more information, see A Fix from Microsoft (Windows 10 Fall 2017 Creators Update).

The Umbrella roaming client must be installed on the C:\ drive and does not support secondary or remote drive installations.

IPv6 Support

Currently, the Umbrella roaming client only supports dual-stack IPv4/IPv6 for macOS.
Project Charter Templates

Price: $19.00 USD

A Project Charter is a document that summarizes the key information about a project and announces to the world, aka your organization, that there is a new project on the block. The Charter appoints a project manager for the project and assigns them the authority to proceed. Charters provide a framework for the entire project and help ensure that all parties involved know what to expect. In addition, a well-written Project Charter Template can save you time, money, and headaches!

Template details:

Templates included:
- PMO Charter PPT Template
- Project Charter PPT Template
- Project Charter Word Template
- Team Charter Template
- Team Charter Templates with RACI

Format: MS Excel, Word, PPT
TableLookTypes Enum

Lists values used to specify table style options that influence a table's appearance.

Namespace: DevExpress.XtraRichEdit.API.Native

Assembly: DevExpress.RichEdit.v22.1.Core.dll

Remarks

The TableLookTypes enumeration values are used to set the Table.TableLook property.

Related GitHub Examples

The following code snippets (auto-collected from DevExpress Examples) contain references to the TableLookTypes enum. Note: The algorithm used to collect these code examples remains a work in progress. Accordingly, the links and snippets below may produce inaccurate results.
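To illustrate, here is a minimal sketch of setting Table.TableLook from code. Table.TableLook itself is confirmed by the Remarks above, but the enum member names used below (ApplyFirstRow, ApplyFirstColumn) are assumptions based on typical table-look flags and should be verified against the actual TableLookTypes member list.

```csharp
using DevExpress.XtraRichEdit;
using DevExpress.XtraRichEdit.API.Native;

using (var server = new RichEditDocumentServer())
{
    Document document = server.Document;
    // Create a 3x3 table at the start of the document.
    Table table = document.Tables.Create(document.Range.Start, 3, 3);
    // TableLookTypes values are combined as flags; member names here
    // are assumed, not taken from the reference.
    table.TableLook = TableLookTypes.ApplyFirstRow | TableLookTypes.ApplyFirstColumn;
}
```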
How to tag products for segmenting in Recapture

How can I tag products for proper segmenting in Recapture? Recapture allows you to use tags for segmenting your campaigns. How can you set that up in Shopify?

Tags are different from SKUs. A SKU should be unique to a product, but multiple products can share the same tag, and a single product can have multiple SKUs (for example, depending on variants). Tags allow you to group certain products (kind of like a product category, but more powerful than that). Tags can be set on the product details page in the bottom right, below Collections, as shown here:
Whenever a shop is approved in the Nuvei onboarding system, a notification is sent from Nuvei to the Mirakl Marketplace with the following information, including whether the approval was successful or not.
For the VTC Pay Wallet payment method there isn't any test data available, but you can see how it works with the payment flow given below.

VTC Pay Wallet Payment Flow
1. The customer enters his Email Address (the below page can be skipped by sending the parameter in the payment request).
2. The customer logs in to his VTC Pay Wallet account by using his account number and password.
3. The customer receives a code via text message to his phone number, which he needs to enter to confirm the payment.
4. The customer receives a message that the payment was successfully processed.

Upon completion of the payment flow, the customer is redirected back to your ReturnURL.
Use tags to group roaming computer identities together. When you create a tag and configure multiple roaming computers to use that tag, you can then select this group of roaming computers as if they were one identity when you create a policy. If you expect to have hundreds or thousands of roaming computers in your deployment, we suggest using a tag to help expedite policy creation.

Note: Each roaming computer can be configured with multiple tags.
- Tags are only available for roaming computer identities.
- A tag cannot be applied to a roaming computer at the time of roaming client installation.
- You cannot delete a tag. Instead, remove a tag from a roaming computer.
- Tags can be up to 40 characters long.

Prerequisites
- Full admin access to the Umbrella dashboard. See Manage User Roles.

Procedure
- Navigate to Deployments > Core Identities > Roaming Computers.
- Expand a roaming computer listing.
- Click Add Tag.
- Give your new tag a meaningful name and then click Create New Tag for <tag_name>. When you start typing, the Select Tag pop-up window updates to reflect your new tag.

Your new tag is added to the roaming computer. The top level of the Roaming Computers page lists all tags that the roaming computer is configured for. Now that you have configured your roaming computers to include a tag, they are grouped together under this tag and can be selected just as you would a single identity when you are creating DNS policies.
NSX-T Data Center 3.2.1 provides a variety of new features that offer new functionalities for virtualized networking, security, and migration from NSX Data Center for vSphere. Highlights include new features and enhancements in the following focus areas:
- Federation: This offers more flexibility for security use cases. Maximum latency between RTEPs is still 150 ms round-trip time for network stretch use cases.
- Edge Platform
- Distributed Firewall
- Gateway Firewall
- NSX Data Center for vSphere to NSX-T Data Center Migration
- Install and Upgrade
- N-VDS to VDS migrator tool
- Platform Security

For compatibility and system requirements information, see the VMware Product Interoperability Matrices and the NSX-T Data Center Installation Guide. For instructions about upgrading the NSX-T Data Center components, see the NSX-T Data Center Upgrade Guide. Customers upgrading to this release are recommended to run the NSX Upgrade Evaluation Tool before starting the upgrade process. The tool is designed to ensure success by checking the health and readiness of your NSX Managers prior to upgrading.

See developer.vmware.com to use the NSX-T Data Center APIs or CLIs for automation. The API documentation is available from the API Reference tab. The CLI documentation is available from the Documentation tab.

NSX-T Data Center has been localized into multiple languages: English, German, French, Japanese, Simplified Chinese, Korean, Traditional Chinese, and Spanish. Because NSX-T Data Center localization utilizes the browser language settings, ensure that your settings match the desired language.

Fixed Issue 2881471: Service Deployment status not getting updated when deployment status switches from failure to success. You may see that the Service Deployment status remains in the Down state forever, along with the alarm that was raised.

Fixed Issue 2881281: Concurrently configuring multiple virtual servers might fail for some. Client connections to virtual servers may time out.

Fixed Issue 2878325: In the Inventory Capacity Dashboard view for Manager, the "Groups Based on IP Sets" attribute count doesn't include Groups containing IP Addresses that are created from the Policy UI. In the Inventory Capacity Dashboard view for Manager, the count for "Groups Based on IP Sets" is not correctly represented if there are Policy Groups containing IP Addresses.

Fixed Issue 2878030: Upgrade orchestrator node for Local Manager site change is not showing notification. If the orchestrator node is changed after the UC is upgraded and you continue with the UI workflow by clicking any action button (pre-check, start, etc.), you will not see any progress on the upgrade UI. This is only applicable if the Local Manager Upgrade UI is accessed in the Global Manager UI using the site switcher.

Fixed Issue 2866885: Event log scraper (ELS) requires the NetBIOS name configured in the AD domain to match that in the AD server. User login will not be detected by ELS.

Fixed Issue 2862418: The first packet could be lost in Live Traffic Analysis (LTA) when configuring LTA to trace an exact number of packets. You cannot see the first packet trace.

Fixed Issue 2882070: NSGroup members and criteria are not displayed for global groups in the Manager API listing. No functional impact.

Fixed Issue 2875385: When a new node joins the cluster, if local users (admin, audit, guestuser1, guestuser2) were renamed to some other name, these local user(s) may not be able to log in. The local user is not able to log in.
Fixed Issue 2874236: After upgrade, if you re-deploy only one Public Cloud Gateway (PCG) in an HA pair, the older HA AMI/VHD build is re-used. This happens only post upgrade, in the first redeployment of PCGs.

Fixed Issue 2870645: In the response of the /policy/api/v1/infra/realized-state/realized-entities API, 'publish_status_error_details' shows error details even if 'publish_status' is "SUCCESS", which means that the realization is successful. There is no functional impact.

Fixed Issue 2870529: Runtime information for Identity Firewall (IDFW) is not available if the exact case of the NetBIOS name is not used when the AD domain is added. You cannot easily and readily obtain IDFW current runtime information/status. Current active logins cannot be determined.

Fixed Issue 2868235: On the Quick Start - Networking and Security dialog, the visualization shows a duplicate VDS when there are multiple PNICs attached to the same VDS. It may be difficult to find or scroll to the customize host switch section in case the focus is on the visualization graph.

Fixed Issue 2867243: Effective membership APIs for a Policy Group or NSGroup with no effective members do not return the 'results' and 'result_count' fields in the API response. There is no functional impact.

Fixed Issue 2882769: Tags on NSService and NSServiceGroup objects are not carried over after upgrading to NSX-T 3.2. There is no functional impact on NSX, as Tags on NSService and NSServiceGroup are not being consumed by any workflow. There may be an impact on external scripts that have workflows that rely on Tags on these objects.

Fixed Issue 2888658: Significant performance impact in terms of connections per second and throughput observed on NSX-T Gateway Firewall when the Malware Detection and Sandboxing feature is enabled. Any traffic subject to malware detection experiences significant latencies and possibly connection failures. When malware detection is enabled on the gateway, it will also impact L7 FW traffic, causing latencies and connection failures.

Fixed Issue 2875962: The upgrade workflow for Cloud Native setups is different from NSX-T 3.1 to NSX-T 3.2. Following the usual workflow will erase all CSM data.

Fixed Issue 2889748: Edge Node delete failure if redeployment has failed. Delete leaves stale intent in the system, which is displayed on the UI. Though the Edge VM will be deleted, stale edge intent and internal objects will be retained in the system, and the delete operation will be retried internally. No functionality impact, as Edge VMs are deleted and only the intent has stale entries.

Fixed Issue 2887037: Post Manager to Policy object promotion, NAT rules cannot be updated or deleted. This happens when NAT rules are created by a PI (Principal Identity) user on Manager before promotion is triggered. PI-user-created NAT rules cannot be updated or deleted post Manager to Policy object promotion.

Fixed Issue 2886971: Groups created on Global Manager are not cleaned up post delete. This happens only if that Group is a reference group on a Local Manager site. No functional impact; however, you cannot create another Group with the same policy path as the deleted group.

Fixed Issue 2886210: During restore, if the VC is down, a Backup/Restore dialog will appear telling the user to ensure that the VC is up and running; however, the IP address of the VC is not shown.
Fixed Issue 2885552: If you have created an LDAP Identity Source that uses OpenLDAP, and there is more than one LDAP server defined, only the first server is used. If the first LDAP server becomes unavailable, authentication fails instead of trying the rest of the configured OpenLDAP server(s).

Fixed Issue 2885248: For the InterVtep scenario, if Edge vNICs are connected to NSX Portgroups (irrespective of the VLAN on the Edge VM and ESX host), the north-south traffic between workload VMs on the ESX and the Edge stops working, as ESX drops packets that are destined for the Edge VTEP.

Fixed Issue 2885009: Global Manager has additional default Switching Profiles after upgrade. No functional impact.

Fixed Issue 2884416: Load balancer status cannot be refreshed in a timely manner, resulting in the wrong load balancer status.

Fixed Issue 2884070: If there is a mismatch of the MTU setting between the NSX-T edge uplink and the peering router, OSPF neighborship gets stuck in the Exstart state. During NSX for vSphere to NSX-T migration, the MTU is not automatically migrated, so a mismatch can impact the dataplane during North/South Edge cutover. OSPF adjacency is stuck in the Exstart state.

Fixed Issue 2882822: Temporary IPSets are not added to SecurityGroups used in Edge Firewall rules / LB pool members during NSX for vSphere to NSX-T config migration. During migration, there may be a gap until the VMs/VIFs are discovered on NSX-T and are a part of the SGs to which they are applicable via static/dynamic memberships. This can lead to traffic being dropped or allowed contrary to the Edge Firewall rules in the period between the North/South cutover (N/S traffic going through the NSX-T gateway) and the end of the migration.

Fixed Issue 2881168: LogicalPort GET API output is in the expanded format "fc00:0:0:0:0:0:0:1" as compared to the previous format "fc00::1". LogicalPort GET API output in NSX-T 3.2 is in the expanded format "fc00:0:0:0:0:0:0:1" as compared to the NSX-T 3.0 format "fc00::1".

Fixed Issue 2877628: When attempting to install the Security feature on an ESX host switch VDS version lower than 6.6, an unclear error message is displayed. The error message is shown via the UI and API.

Fixed Issue 2872658: After site registration, the UI displays an error for "Unable to import due to these unsupported features: IDS." There is no functional impact. Config import is not supported in NSX-T 3.2.

Fixed Issue 2866751: The consolidated effective membership API does not list static IPs in the response for a shadow group. No functional or datapath impact. You will not see the static IPs in the GET consolidated effective membership API for a shadow group. This is applicable only for a shadow group (also called a reference group).

Fixed Issue 2772500: Enabling nested overlay on ENS can result in PSOD.

Fixed Issue 2884518: Incorrect count of VMs connected to a segment on the Network Topology UI after upgrading an NSX appliance to NSX-T 3.2. You will see an incorrect count of VMs connected to a Segment on the Network Topology. However, the actual count of VMs associated with the Segment will be shown when you expand the VM's node.

Fixed Issue 2864250: A failure is seen in transport node realization if a Custom NIOC Profile is used when creating a transport node. The transport node is not ready to use.

Fixed Issue 2613113: If onboarding is in progress, and a restore of the Local Manager is done, the status on Global Manager does not change from IN_PROGRESS.
The UI shows IN_PROGRESS in Global Manager for Local Manager onboarding. Configuration of the restored site cannot be imported.

Fixed Issue 2526769: Restore fails on a multi-node cluster. When starting a restore on a multi-node cluster, the restore fails and you will have to redeploy the appliance.

Fixed Issue 2628503: DFW rule remains applied even after forcefully deleting the manager nsgroup. Traffic may still be blocked when forcefully deleting the nsgroup.

Fixed Issue 2882574: Blocked 'Brownfield Config Onboarding' APIs in the NSX-T 3.2.0 release as the feature is not fully supported. You will not be able to use the 'Brownfield Config Onboarding' feature.

Fixed Issue 2791490: Federation: Unable to sync the objects to the standby Global Manager (GM) after changing the standby GM password. Cannot observe the Active GM on the standby GM's location manager, nor any updates from the Active GM.

Fixed Issue 2782010: Policy API allows changing vdr_mac/vdr_mac_nested even when "allow_changing_vdr_mac_in_use" is false. VDR MAC will be updated on the TN even if allow_changing is set to false. An error is not thrown.

Fixed Issue 2687084: After upgrade or restart, the Search API may return a 400 error with error code 60508, "Re-creating indexes, this may take some time." Depending on the scale of the system, the Search API and the UI are unusable until the re-indexing is complete.

Fixed Issue 2636420: The transport node profile is applied on a cluster on which "remove nsx" is called after backup. Hosts are not in a prepared state but the transport node profile is still applied at the cluster.

Fixed Issue 2658092: Onboarding fails when NSX Intelligence is configured on Local Manager. Onboarding fails with a principal identity error. You cannot onboard a system with a principal identity user.

Fixed Issue 2862606: For ESX versions less than 7.0.1, the NIOC profile is not supported. Creating or updating Transport Nodes appears to be successful. However, the actual configuration of the NIOC profile will not be applied to the datapath, so it will not work.

Fixed Issue 2871162: You cannot see the pool member failure reason through the API when the load-balancer pool member is down. The failure reason cannot be shown in the pool member status when the load-balancer pool is configured with one monitor and the pool member status is DOWN.

Fixed Issue 2879667: Post NSX-T 3.2 migration, flows are not being streamed through Pub/Sub. When migrating to NSX-T 3.2, the broker endpoint for Pub/Sub subscriptions does not get updated. The subscription stops receiving flows if the broker IP is incorrect.

Fixed Issue 2881503: Scripts fail to clear PVLAN properties during upgrade if the DVS name contains a blank space. PVLAN properties are not cleared after upgrading, so the conflict with VC still persists.

Fixed Issue 2885820: Missing translations on CCP for a few IPs for IP ranges starting with 0.0.0.0. An NSGroup with an IP range starting with 0.0.0.0, for example "0.0.0.0-255.255.255.0", has translation issues (missing 0.0.0.0/1 subnet). NSGroups with the IP range "1.0.0.0-255.255.255.0" are unaffected.

Fixed Issue 2890348: The default VNI pool needs to be migrated correctly to IM if the default VniPool's name is changed in GC/HL. The default VNI Pool was named "DefaultVniPool" before NSX-T 3.2. The VNI Pool will be migrated incorrectly if it was renamed prior to the release of NSX-T 3.2. The upgrade or migration will not fail, but the pool data will be inconsistent.

Fixed Issue 2893170: SAP - Policy API not able to fetch inventory, networking, or security details.
The UI displays an error message: "Error: Index is currently out of sync, system is trying to recover." In NSX-T 3.x, elastic search has been configured to index IP range data in the format of IP ranges instead of strings. A specific IP address can be searched from the configured IP address range for any rule. Although elastic search works fine with existing formats like IPv4 addresses and ranges, IPv6 addresses and ranges, and CIDRs, it does not support IPv4-mapped IPv6 addresses with CIDR notation and will raise an exception. This will cause the UI to display an "Index out of sync" error, resulting in data loading failure.

Fixed Issue 2914742: Tier0 Logical Routing enters an error state when one or more of its BGP neighbors' "Maximum Routes" route filter is set for the L2VPN_EVPN address family of the neighbor. Routing stops working.

Fixed Issue 2938347: ISO installation on a bare metal edge fails with a black screen after reboot. Installation of NSX-T Edge (bare metal) may fail during the first reboot after installation is complete on a Dell PowerEdge R750 server while in UEFI boot mode.

Fixed Issue 2936347: Edge redeploy must raise an alarm if it cannot successfully find or delete the previous edge VM that is still connected to MP. With power-off and delete operations through VC failing, an Edge redeploy operation may end up with two Edge VMs functioning at the same time, resulting in IP conflicts and other issues.

Fixed Issue 2946102: Firewall Exclude List records from the /internal (mp) entry are missing in upgrade paths from GC/HL to 3.2.0 or 3.2.0.1, which may lead to CCP having problems excluding the members in the firewall exclude list. CCP might have problems configuring the DFW Exclude List if the upgrade path includes the NSX-T 3.2.0 or 3.2.0.1 release. You will not be able to see DFW Firewall Exclude List members from the MP side, and you may find the members in the firewall exclude list not being excluded. One of the entries in the database that the CCP consumes is missing, since the internal records were overwritten by the infra one. This issue does not occur if the customer directly upgrades from the NSX-T 3.0.x or 3.1.x release to the NSX-T 3.2.1 release.

Fixed Issue 2941110: The upgrade coordinator page failed to load in the UI due to slowness in a scale setup. You may not be able to navigate and check the upgrade status after starting a large-scale upgrade, since the Upgrade Coordinator page fails to load in the UI with the error "upgrade status listing failed: Gateway Time-out".

Fixed Issue 2894642: The datapath process on Edge VMs deployed on a host with a SandyBridge, IvyBridge, or Westmere CPU, or with EVC mode set to IvyBridge or earlier, fails to start. A newly deployed Edge has a Configuration State of Failed with an error. As a result, the Edge datapath is non-functional.

Fixed Issue 2909840: The upgrade of an NSX-T Bare Metal Edge with a bond interface configured as the management interface fails during serial upgrade. The PNIC is reported down. Following the upgrade reboot, the dataplane service fails to start. The syslogs indicate an error in a Python script. For example: 2021-12-24T15:19:19.274Z HKEDGE02.am-int01.net datapath-systemd-helper 6976 - - fd = file(path) 2021-12-24T15:19:19.296Z HKEDGE02.am-int01.net datapath-systemd-helper 6976 - - NameError: name 'file' is not defined. On the partially upgraded Edge, the dataplane service is down. There will still be an active Edge in the cluster, but it might be down to a single point of failure.
Fixed Issue 2880406: Realized NSService or NSServiceGroup objects do not have policyPath present in tags if they are retrieved through the Search API. If a Service or ServiceEntry is created on the Policy side and retrieved using the Search API, the returned NSServiceGroup or NSService will not have a policyPath tag that contains the path of the Service or ServiceEntry on the Policy side.

Fixed Issue 2873440: An error is returned for the VIF membership API during VM vMotion. During VM vMotion, the effective VIF membership API (https://{{ip}}/policy/api/v1/infra/domains/:domains/groups/:group/members/vifs) returns an error. The API works fine after the VM vMotion is successfully completed. You can use the effective VIF membership API after VM vMotion is completed.

Fixed Issue 2865827: A VM loses its existing TCP and/or ICMP connectivity after vMotion of the Guest Virtual Machine. When Service Insertion is configured, the VM loses its existing TCP and/or ICMP connectivity after vMotion of the Guest Virtual Machine.

Fixed Issue 2878414: While creating a group in the members' dialog for the group member type, when you click on "View Members", the members of that group are copied into the current group. You may see that the members are copied from the other group while viewing its members. You can always modify and unselect/remove those items.

Fixed Issue 2875563: Deleting an IN_PROGRESS LTA session may cause a PCAP file leak. The PCAP file will leak if an LTA is deleted with the PCAP action when the LTA session state is "IN_PROGRESS". This may cause the /tmp partition of the ESXi host to fill up.

Fixed Issue 2875667: Downloading the LTA PCAP file results in an error when the NSX /tmp partition is full. The LTA PCAP file cannot be downloaded due to the /tmp partition being full.

Fixed Issue 2883505: PSOD on ESXi hosts during NSX for vSphere to NSX-T migration. PSOD on multiple ESXi hosts during migration. This results in a datapath outage.

Fixed Issue 2914934: DFW rules on dvPortGroups are lost after NSX for vSphere to NSX-T migration. After migration, any workload that is still connected to a vSphere dvPortGroup will not have the DFW configuration.

Fixed Issue 2921704: Edge Service CPU may spike due to an nginx process deadlock when using the L7 load balancer with the ip-hash load balancing algorithm. You cannot connect to the backend servers behind the Load Balancer.

Fixed Issue 2933905: Replacing an NSX-T Manager results in transport nodes losing connection to the controller. Adding or removing a node from the Manager cluster can result in some transport nodes losing controller connectivity.

Fixed Issue 2894988: PSOD during normal operation of DFW. A host PSOD occurs in a pollWorld or NetWorld world, with the callstack showing rn_match_int() and pfr_update_stats() as the top 2 functions. An address set object is in transition due to being reprogrammed, but a packet processing thread (pollWorld or NetWorld) is concurrently traversing the address set.

Fixed Issue 2927442: Traffic sometimes hits the default deny DFW rule on VMs across different hosts and clusters since the NSX-T 3.2.0.1 upgrade. The issue is the result of a race condition where two different threads access the same memory address space simultaneously. This sometimes causes incomplete address sets to be forwarded to transport nodes whose control plane shard is on the impacted controller.

Fixed Issue 2938194: Refresh API fails with error code 100 - NullPointerException. Refresh enables the NSX Manager to collect the current config of the edge.
The configuration sync does not work and fails with the error 'Failed to refresh the transport node configuration: undefined'. Any external changes will not raise alarms.

Fixed Issue 2938407: Edge node fails to deploy completely in an NSX-T Federation setup on NSX-T 3.2.0.x. There is no update for that edge node in the UI. The edge node fails to deploy completely with "Registration Timedout".

Fixed Issue 2962901: Edge Datapath Configuration Failure alarm after edge node upgrade. On an NSX-T Federation setup, when T1 gateways are stretched with DHCP static bindings for downlink segments, MP also creates L2 forwarder ports for the DHCP switch. If a single edge node has two DHCP switches and it was restarted, this caused the failure.

Fixed Issue 2645632: During a switchover operation of the local edge, IKE sessions are deleted and re-established by the peer. In some IPsec setups that have a large number (more than 30) of IKE sessions configured, local edges deployed in active-standby mode with HA-Sync enabled, and peers having DPD enabled with default settings, some IKE sessions may be torn down by the peer due to DPD timeout and re-established during the switchover.

Fixed Issue 2937810: The datapath service fails to start and some Edge bridge functions (for example, the Edge bridge port) do not work. If Edge bridging is enabled on Edge nodes, the Central Control Plane (CCP) sends the DFW rules to the Edge nodes, which should only be sent to host nodes. If the DFW rules contain a function which is not supported by the Edge firewall, the Edge nodes cannot handle the unsupported DFW configuration, which causes the datapath to fail to start.

Issue 2663483: The single-node NSX Manager will disconnect from the rest of the NSX Federation environment if you replace the APH-AR certificate on that NSX Manager. This issue is seen only with NSX Federation and with a single-node NSX Manager cluster. Workaround: Single-node NSX Manager cluster deployment is not a supported deployment option, so use a three-node NSX Manager cluster.

Issue 2879979: The IKE service may not initiate a new IPsec route-based session after "dead peer detection" has happened due to the IPsec peer being unreachable. There could be an outage for the specific IPsec route-based session. Workaround: Enabling and disabling the IPsec session can resolve the problem.

Issue 2879734: Configuration fails when the same self-signed certificate is used in two different IPsec local endpoints. The failed IPsec session will not be established until the error is resolved. Workaround: Use a unique self-signed certificate for each local endpoint.

Issue 2879133: The Malware Prevention feature can take up to 15 minutes to start working. When the Malware Prevention feature is configured for the first time, it can take up to 15 minutes for the feature to be initialized. During this initialization, no malware analysis will be done, but there is no indication that the initialization is occurring. Workaround: Wait 15 minutes.

Issue 2868944: UI feedback is not shown when migrating more than 1,000 DFW rules from NSX for vSphere to NSX-T Data Center, but sections are subdivided into sections of 1,000 rules or fewer. UI feedback is not shown. Workaround: Check the logs.
Issue 2865273: The Advanced Load Balancer (Avi) service engine won't connect to the Avi Controller if there is a DFW rule to block ports 22, 443, 8443, and 123 prior to migration from NSX for vSphere to NSX-T Data Center. The Avi service engine is not able to connect to the Avi Controller. Workaround: Add explicit DFW rules to allow ports 22, 443, 8443, and 123 for SE VMs, or exclude SE VMs from DFW rules.

Issue 2864929: Pool member count is higher when migrated from NSX for vSphere to the Avi Load Balancer on NSX-T Data Center. You will see a higher pool member count. The health monitor will mark those pool members down, but traffic won't be sent to unreachable pool members. Workaround: None.

Issue 2719682: Computed fields from the Avi controller are not synced to intent on Policy, resulting in discrepancies in data shown on the Avi UI and the NSX-T UI. Computed fields from the Avi controller are shown as blank on the NSX-T UI. Workaround: Use the app switcher to check the data from the Avi UI.

Issue 2848614: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster, and where the forward or reverse lookup entry is missing in the external DNS server or the DNS entry is missing for the joining node, forward or reverse alarms are not generated for the joining node. Workaround: Configure the external DNS server for all Manager nodes with forward and reverse DNS entries.

Issue 2871585: Removal of a host from a DVS and DVS deletion are allowed for DVS versions less than 7.0.3 after the NSX Security on vSphere DVPortgroups feature is enabled on the clusters using the DVS. You may have to resolve any issues in transport node or cluster configuration that arise from a host being removed from the DVS or from DVS deletion. Workaround: None.

Issue 2870085: Security-policy-level logging to enable/disable logging for all rules is not working. You will not be able to change the logging of all rules by changing "logging_enabled" of the security policy. Workaround: Modify each rule to enable/disable logging.

Issue 2866682: In Microsoft Azure, when accelerated networking is enabled on SUSE Linux Enterprise Server (SLES) 12 SP4 workload VMs and the NSX Agent is installed, the ethernet interface does not obtain an IP address. The VM agent doesn't start and the VM becomes unmanaged. Workaround: Disable accelerated networking.

Issue 2816781: Physical servers cannot be configured with a load-balancing-based teaming policy as they support a single VTEP. You won't be able to configure physical servers with a load-balancing-based teaming policy. Workaround: Change the teaming policy to a failover-based teaming policy or any policy having a single VTEP.

Issue 2884939: The NSX-T Policy API results in the error: Client 'admin' exceeded request rate of 100 per second (Error code: 102). The NSX rate limit of 100 requests per second is reached when a large number of virtual servers are migrated from NSX for vSphere to NSX-T ALB, and all APIs are temporarily blocked. Workaround: Update the client API rate limit to 200 or more requests per second. Note: There is a fix in the Avi 21.1.4 release.

Issue 2792485: The NSX Manager IP is shown instead of the FQDN for a Manager installed in vCenter. The NSX-T UI integrated in vCenter shows the NSX Manager IP instead of the FQDN for the installed Manager. Workaround: None.

Issue 2888207: Unable to reset local user credentials when vIDM is enabled. You are unable to change local user passwords while vIDM is enabled.
Workaround: vIDM configuration must be (temporarily) disabled, the local credentials reset during this time, and then integration re-enabled.

Issue 2885330: Effective members are not shown for an AD group. Effective members of the AD group are not displayed. No datapath impact. Workaround: None.

Issue 2879119: When a virtual router is added, the corresponding kernel network interface does not come up. Routing on the VRF fails. No connectivity is established for VMs connected through the VRF. Workaround: Restart the dataplane service.

Issue 2877776: "get controllers" output may show stale information about controllers that are not the master when compared to the controller-info.xml file. This CLI output is confusing. Workaround: Restart nsx-proxy on that TN.

Issue 2874995: LCore priority may remain high even when not used, rendering the LCores unusable by some VMs. Performance degradation for "Normal Latency" VMs. Workaround: There are two options.

Issue 2854139: Continuous addition/removal of BGP routes into the RIB for a topology where the Tier0 SR on the edge has multiple BGP neighbors and these BGP neighbors are sending ECMP prefixes to the Tier0 SR. Traffic drops for the prefixes that are getting continuously added/deleted. Workaround: Add an inbound routemap that filters the BGP prefix which is in the same subnet as the static route nexthop.

Issue 2853889: When creating the EVPN Tenant Config (with vlan-vni mapping), child segments are created, but the child segment's realization status gets into a failed state for about 5 minutes and recovers automatically. It will take 5 minutes to realize the EVPN tenant configuration. Workaround: None. Wait 5 minutes.

Issue 2690457: When joining an MP to an MP cluster where publish_fqdns is set on the MP cluster and where the external DNS server is not configured properly, the proton service may not restart properly on the joining node. The joining manager will not work and the UI will not be available. Workaround: Configure the external DNS server with forward and reverse DNS entries for all Manager nodes.

Issue 2490064: Attempting to disable VMware Identity Manager with "External LB" toggled on does not work. After enabling VMware Identity Manager integration on NSX with "External LB", if you attempt to then disable integration by switching "External LB" off, after about a minute the initial configuration will reappear and overwrite local changes. Workaround: When attempting to disable vIDM, do not toggle the External LB flag off; only toggle off vIDM Integration. This will cause that config to be saved to the database and synced to the other nodes.

Issue 2668717: Intermittent traffic loss might be observed for E-W routing between the vRA-created networks connected to segments sharing a Tier-1. In cases where vRA creates multiple segments and connects them to a shared ESG, migration from NSX for vSphere to NSX-T will convert such a topology to a shared Tier-1 connected to all segments on the NSX-T side. During the host migration window, intermittent traffic loss might be observed for E-W traffic between workloads connected to the segments sharing the Tier-1. Workaround: None.

Issue 2355113: Workload VMs running RedHat and CentOS on Azure accelerated networking instances are not supported. In Azure, when accelerated networking is enabled on RedHat or CentOS based OSes and the NSX Agent is installed, the ethernet interface does not obtain an IP address. Workaround: Disable accelerated networking for RedHat and CentOS based OSes.
Issues 2283559 and 2684574: If the edge has 6K+ routes for Database and Routes, the Policy API times out. These Policy APIs for the OSPF database and OSPF routes return an error if the edge has 6K+ routes:
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/routes?format=csv
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database
/tier-0s/<tier-0s-id>/locale-services/<locale-service-id>/ospf/database?format=csv
These are read-only APIs and have an impact only if the API/UI is used to download 6K+ routes for OSPF routes and database. Workaround: Use the CLI commands to retrieve the information from the edge.

Issue 2574281: Policy will only allow a maximum of 500 VPN sessions. NSX claims support of 512 VPN sessions per edge in the large form factor; however, due to Policy doing auto-plumbing of security policies, Policy will only allow a maximum of 500 VPN sessions. Upon configuring the 501st VPN session on Tier0, the following error message is shown: {'httpStatus': 'BAD_REQUEST', 'error_code': 500230, 'module_name': 'Policy', 'error_message': 'GatewayPolicy path=[/infra/domains/default/gateway-policies/VPN_SYSTEM_GATEWAY_POLICY] has more than 1,000 allowed rules per Gateway path=[/infra/tier-0s/inc_1_tier_0_1].'} Workaround: Use Management Plane APIs to create additional VPN sessions.

Issue 2839782: Unable to upgrade from NSX-T 2.4.1 to 2.5.1 because the CRL entity is large, and Corfu imposes a size limit in 2.4.1, thereby preventing the CRL entity from being created in Corfu during upgrade. Unable to upgrade. Workaround: Replace the certificate with a certificate signed by a different CA.

Issue 2838613: For ESX versions less than 7.0.3, NSX security functionality is not enabled on a VDS upgraded from version 6.5 to a higher version after security installation on the cluster. NSX security features are not enabled on the VMs connected to a VDS upgraded from 6.5 to a higher version (6.6+) where the NSX Security on vSphere DVPortgroups feature is supported. Workaround: After the VDS is upgraded, reboot the host and power on the VMs to enable security on the VMs.

Issue 2799371: IPSec alarms for L2 VPN are not cleared even though the L2 VPN and IPSec sessions are up. No functional impact except that unnecessary open alarms are seen. Workaround: Resolve alarms manually.

Issue 2584648: Switching the primary for a T0/T1 gateway affects northbound connectivity. Location failover time causes disruption for a few seconds and may affect the location failover or failback test. Workaround: None.

Issue 2561988: All IKE/IPsec sessions are temporarily disrupted. A traffic outage will be seen for some time. Workaround: Modify the local endpoints in phases instead of all at the same time.

Issue 2491800: AR channel SSL certificates are not periodically checked for their validity, which could lead to using an expired/revoked certificate for an existing connection. The connection would be using an expired/revoked SSL certificate. Workaround: Restart the APH on the Manager node to trigger a reconnection.

Issue 2558576: Global Manager and Local Manager versions of a global profile definition can differ and might have unknown behavior on the Local Manager. Global DNS, session, or flood profiles created on Global Manager cannot be applied to a local group from the UI, but can be applied from the API.
Hence, an API user can accidentally create profile binding maps and modify the global entity on the Local Manager. Workaround: Use the UI to configure the system.

Issue 2639424: Remediating a host in a vLCM cluster with host-based deployment will fail after 95% remediation progress is completed. The remediation progress for a host will be stuck at 95% and then fail after the 70-minute timeout is completed.

Issue 2950206: CSM is not accessible after MPs are upgraded and before the CSM upgrade. When the MP is upgraded, the CSM appliance is not accessible from the UI until the CSM appliance is upgraded completely. NSX services on CSM are down at this time. It's a temporary state where CSM is inaccessible during an upgrade. The impact is minimal. Workaround: This is expected behavior. You have to upgrade the CSM appliance to access the CSM UI and ensure all services are running.

Issue 2945515: NSX tools upgrade in Azure can fail on RedHat Linux VMs. By default, NSX tools are installed in the /opt directory. However, during NSX tools installation, the default path can be overridden with the "--chroot-path" option passed to the install script. Insufficient disk space on the partition where NSX tools is installed can cause the NSX tools upgrade to fail. Workaround: Increase the partition size on which NSX tools is installed and then initiate the NSX tools upgrade. Steps for increasing disk space are described in page.

Issue 2882154: Some of the pods are not listed in the output of "kubectl top pods -n nsxi-platform". The output of "kubectl top pods -n nsxi-platform" will not list all pods for debugging. This does not affect deployment or normal operation. For certain issues, debugging may be affected. There is no functional impact. Only debugging might be affected. Workaround: There are two workarounds.

Issue 2871440: Workloads secured with NSX Security on vSphere dvPortGroups lose their security settings when they are vMotioned to a host connected to an NSX Manager that is down. For clusters installed with the NSX Security on vSphere dvPortGroups feature, VMs that are vMotioned to hosts connected to a downed NSX Manager do not have their DFW and security rules enforced. These security settings are re-enforced when connectivity to NSX Manager is re-established. Workaround: Avoid vMotion to affected hosts when NSX Manager is down. If other NSX Manager nodes are functioning, vMotion the VM to another host that is connected to a healthy NSX Manager.

Issue 2898020: The error 'FRR config failed:: ROUTING_CONFIG_ERROR (-1)' is displayed in the status of transport nodes. The edge node rejects a route-map sequence configured with a deny action that has more than one community list attached to its match criteria. If the edge nodes do not have the admin-intended configuration, it results in unexpected behavior. Workaround: None.

Issue 2910529: Edge loses its IPv4 address after DHCP allocation. After the Edge VM is installed and receives an IP from the DHCP server, within a short time it loses the IP address and becomes inaccessible. This is because the DHCP server does not provide a gateway; hence the Edge node loses its IP. Workaround: Ensure that the DHCP server provides the proper gateway address. If not, perform the following steps:

Issue 2942900: The identity firewall does not work for event log scraping when Active Directory queries time out. The identity firewall issues a recursive Active Directory query to obtain the user's group information.
Active Directory queries can time out with a NamingException 'LDAP response read timed out, timeout used: 60000 ms'. As a result, firewall rules are not populated with event log scraper IP addresses.
Workaround: To improve recursive query times, Active Directory admins can organize and index the AD objects.

Issue 2958032: If you are using NSX-T 3.2 or upgrading to an NSX-T 3.2 maintenance release, the file type is not shown properly and is truncated at 12 characters on the Malware Prevention dashboard. On the Malware Prevention dashboard, when you click to see the details of an inspected file, you will see incorrect data because the file type is truncated at 12 characters. For example, for a file with file type WindowsExecutableLLAppBundleTarArchiveFile, you will only see WindowsExecu as the file type in the Malware Prevention UI.
Workaround: Do a fresh NAPP installation with an NSX-T 3.2 maintenance build instead of upgrading from NSX-T 3.2 to an NSX-T 3.2 maintenance release.

Issue 2954520: When a Segment is created from Policy and a bridge is configured from the MP, the option to detach bridging on that Segment is not available in the UI. You will not be able to detach or update bridging from the UI if the Segment was created from Policy and the bridge was configured from the MP. If a Segment is created from the Policy side, you are advised to configure bridging only from the Policy side. Similarly, if a Logical Switch is created from the MP side, you should configure bridging only from the MP side.
Workaround: Use the APIs to remove bridging (see the sketch at the end of this section):
1. Update the concerned LogicalPort and remove the attachment:
PUT https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>
Add this header to the PUT request: X-Allow-Overwrite: true
2. Delete the BridgeEndpoint:
DELETE https://<mgr-ip>/api/v1/bridge-endpoints/<bridge-endpoint-id>
3. Delete the LogicalPort:
DELETE https://<mgr-ip>/api/v1/logical-ports/<logical-port-id>

Issue 2889482: The wrong save confirmation is shown when updating segment profiles for discovered ports. The Policy UI allows editing of discovered ports but does not send the updated binding map in port update requests when segment profiles are updated. A false-positive message is displayed after clicking Save. Segments appear to be updated for discovered ports, but they are not.
Workaround: Use the MP API or UI to update the segment profiles for discovered ports.

Issue 2919218: Selections made for host migration are reset to default values after the MC service restarts. After the restart of the MC service, all selections relevant to host migration that were made earlier, such as enabling or disabling clusters, migration mode, and cluster migration ordering, are reset to default values.
Workaround: Ensure that all selections relevant to host migration are made again after the restart of the MC service.

Issue 2931403: Network interface validation prevents API users from performing updates. An Edge VM network interface can be configured with network resources such as port groups, VLAN logical switches, or segments that are accessible for the specified compute and storage resources. The Compute-Id resgroup moref in the intent is stale and no longer present in VC after a power outage (the moref of the resource pool changed after VC was restored).
Workaround: Redeploy the edge and specify valid moref IDs.
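For operators scripting the Issue 2954520 workaround, the following is a minimal, hypothetical C++/libcurl sketch of the three calls. Only the request paths and the X-Allow-Overwrite header come from the workaround itself; the manager address, object IDs, credentials, TLS handling, and the PUT payload are placeholders you must replace.

#include <curl/curl.h>
#include <iostream>
#include <string>

// Send one request to the NSX Manager; returns true on HTTP success.
static bool call(const std::string& method, const std::string& url,
                 const std::string& body, bool allow_overwrite) {
    CURL* curl = curl_easy_init();
    if (!curl) return false;

    curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");
    if (allow_overwrite)  // required for the PUT in step 1 of the workaround
        headers = curl_slist_append(headers, "X-Allow-Overwrite: true");

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, method.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_USERPWD, "admin:placeholder");  // placeholder credentials
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);            // lab only; verify certificates in production
    if (!body.empty())
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

    const CURLcode rc = curl_easy_perform(curl);
    long status = 0;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
    std::cout << method << ' ' << url << " -> HTTP " << status << '\n';

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK && status < 300;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    const std::string mgr = "https://<mgr-ip>";  // placeholder manager address

    // Step 1: PUT the logical port with its attachment removed. The payload must be
    // the port's current definition minus the attachment; it is elided here.
    call("PUT", mgr + "/api/v1/logical-ports/<logical-port-id>", "{}", true);

    // Step 2: delete the bridge endpoint.
    call("DELETE", mgr + "/api/v1/bridge-endpoints/<bridge-endpoint-id>", "", false);

    // Step 3: delete the logical port.
    call("DELETE", mgr + "/api/v1/logical-ports/<logical-port-id>", "", false);

    curl_global_cleanup();
    return 0;
}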
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2.1/rn/vmware-nsxt-data-center-321-release-notes/index.html
2022-06-25T04:39:54
CC-MAIN-2022-27
1656103034170.1
[]
docs.vmware.com
Primary template for the output-designated only converters.

#include <converter.hpp>

This is the primary template for the sub-family of converters which only designate the output in the template argument. These are designed to incorporate a group of conversions to the specified output type (a convenience class). Definition at line 71 of file converter.hpp. Definition at line 73 of file converter.hpp.

The generalised fallback for converting various input types to a specific output type. This looks inside the output template argument's class for the required converter. Definition at line 106 of file converter.hpp.
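To make the idiom concrete, here is a minimal, self-contained sketch of the pattern this class reference describes: a fully specified Converter<Output, Input> plus an output-designated-only Converter<Output, void> that groups all conversions to one output type. The names and the stream-based fallback are illustrative assumptions, not ecl's actual implementation.

#include <sstream>
#include <string>

// Fully specified converter: performs one Input -> Output conversion.
// The stringstream fallback stands in for real, specialised converters.
template <typename Output, typename Input = void>
class Converter {
public:
    Output operator()(const Input& input) {
        std::stringstream buffer;
        buffer << input;
        Output output = Output();
        buffer >> output;
        return output;
    }
};

// Output-designated-only converter (the convenience class): groups every
// conversion to Output by delegating to the fully specified converters.
template <typename Output>
class Converter<Output, void> {
public:
    template <typename Input>
    Output operator()(const Input& input) {
        return Converter<Output, Input>()(input);  // look up the required converter
    }
};

int main() {
    Converter<int> to_int;  // Input defaults to void, selecting the grouping class
    int from_string = to_int(std::string("42"));  // delegates to Converter<int, std::string>
    int from_double = to_int(3.0);                // delegates to Converter<int, double>
    return (from_string == 42 && from_double == 3) ? 0 : 1;
}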
http://docs.ros.org/en/melodic/api/ecl_converters/html/classecl_1_1Converter_3_01Output_00_01void_01_4.html
2022-06-25T04:33:58
CC-MAIN-2022-27
1656103034170.1
[]
docs.ros.org
For the CIMB Clicks payment method there is no test data available, but you can see how it works with the payment flow given below.

CIMB Clicks Payment Flow

The customer enters his email address, name, and phone number. The customer is shown the details of his payment and proceeds to pay with CIMB Clicks. The customer logs in to his account by entering his User ID and completes the payment. Upon completion of the payment flow, the customer is redirected back to your ReturnURL.
https://docs.smart2pay.com/s2p_testdata_1011/
2022-06-25T05:07:39
CC-MAIN-2022-27
1656103034170.1
[]
docs.smart2pay.com
As a Partner User, you can monitor the status of your Customers along with the Edges connected to the Customers. In the Partner portal, click Monitor Customers. This screen shows the Edges and Links for all customers managed by this Partner. You can control the interval at which the information is updated: in the Refresh Interval, either pause the monitoring or choose the time interval at which to refresh the monitoring status.

The Monitor Customers page displays the following details:

Customers:
- Customers managed by the Partner.
- Number of Customers that are UP, DOWN, and UNACTIVATED. Click a number to view the corresponding Customer details in the bottom panel.
- In the bottom panel, click the link on a Customer name to navigate to the Enterprise portal, where you can view and configure other settings corresponding to the selected customer. For more information, see the VMware SD-WAN Administration Guide.

Edges:
- Edges associated with the Customers.
- Number of Edges that are DOWN, DEGRADED, CONNECTED, and UNACTIVATED. Click a number to view the corresponding details of the Edges in the bottom panel.
- In the bottom panel, place the mouse cursor on the Down Arrow displayed next to the number of Edges to view the details of each Edge. Click the link on an Edge name to navigate to the Enterprise Monitoring portal, where you can view more details corresponding to the selected Edge. For more information, see the VMware SD-WAN Administration Guide.

You can also view the Customers and associated Edges in the new Orchestrator UI. The new Orchestrator UI does not provide an Auto Refresh option; refresh the window manually to view the current data.
https://docs.vmware.com/en/VMware-SD-WAN/5.0/vmware-sd-wan-partner-guide/GUID-2F0BD4CB-1EC8-4D95-944E-1415D9807A72.html
2022-06-25T05:18:35
CC-MAIN-2022-27
1656103034170.1
[array(['images/GUID-990E156A-7595-4F55-B996-D1F14B2A0694-low.png', None], dtype=object) array(['images/GUID-F5BEF294-CF58-4177-A3FF-EB8078F1633F-low.png', None], dtype=object) ]
docs.vmware.com
What Is Information Security Risk Management? ISMS

Information security risk management is a process by which organizations identify and control the risks that arise from using and managing information technologies. It is also called Information Risk Management, or IRM. It has been around for many years, and it is crucial to any organization that deals with sensitive data.

Information Security Risk Management, or ISRM, can be defined as the process by which an organization manages the risks associated with all of its information assets. This includes everything from how they store their data to what measures they take to prevent unauthorized access to it. There are many things to consider regarding IRM, such as data classification, system configuration, unauthorized access prevention, personnel training for data protection awareness, incident response planning, and more.

Why Is Risk Management Important in Information Security?

Risks can come from various sources, both internal and external to the organization. Some common hazards include:
- Technology failures: Devices or systems can malfunction, leading to data loss or system outages.
- Human error: Incorrectly entering data, clicking on malicious links, or simply making a mistake.

Risk Management Methodology: ISMS

Information security risk management is the systematic process of identifying, analyzing, and controlling risks. It can help companies avoid problems that could disrupt their business operations or trigger financial loss. There are three main steps to the risk management methodology: analyze, plan, and implement.

- Analyze: The first step in the ISMS information security risk management process is to analyze the risks. This involves identifying and assessing all potential risks that could affect the company. It is essential to be as comprehensive as possible so that nothing is missed. The goal is a thorough understanding of all the risks that need to be addressed.
- Plan: Planning consists of four steps: identifying your risks, evaluating those risks, developing a plan of action, and implementing the risk management plan.
- Implement: Implementation is the process of putting the risk management plan into practice. The risk management methodology can be used to assess, monitor, control, and communicate risks to stakeholders. It also guides decision-making by establishing boundaries between acceptable and unacceptable levels of risk. Four phases make up an implementation: identification, assessment, evaluation, and selection or elimination.

Stages in ISRM:

- Identify Assets: Identify Assets is the second stage of ISRM. The purpose of this stage is to identify all assets that are available to recover from a disaster or outage. To start this process, you must first know your company's business continuity objectives and find out which events will impact business functions. These two pieces of information help determine how much time it would take for your organization to resume operations at normal capacity following an event. Once these factors have been determined, you can create a list of possible recovery solutions based on the amount of time needed to implement them and their reliability levels.
- Identify Vulnerabilities: The ISRM is a model for risk assessment. It focuses on identifying vulnerabilities within an organization in order to assess risks and prioritize possible countermeasures.
The ISRM has five stages, each with its own set of methods used to identify vulnerabilities:
- Identification: where potential hazards are detected.
- Assessment: in which the likelihood and severity of harm are determined.
- Analysis: in which mitigation options are explored.
- Recommendation: when solutions are proposed based on the results of steps 2-3.
- Implementation: where all or some of these recommendations are put in place.

- Identify Threats: The Identify Threats stage of ISRM is a framework for identifying threats in the information system. This framework has five stages: Preparation, Identification, Containment, Recovery, and Mitigation.
- The first stage is Preparation, where you plan for when an attack might happen.
- Next comes Identification, which determines what type of attack occurred, or whether there was one at all.
- Containment ensures that this particular event does not affect other parts of the organization's infrastructure.
- Recovery brings everything back to its normal state, while Mitigation works toward preventing similar events in the future by putting safeguards and security measures in place to stop them before they start.
- Assessment: The assessment stages for ISRM include:
- Defining the problem: To get started with solving a problem, you need to know what the problem is. This includes assessing how bad it is and whether or not there are any other problems involved.
- Understanding the cause of the problem: Once you know what is going on, it is time to figure out why it is happening so that you can solve it more easily. You might find that some people do not believe it is their problem, and you need to determine how much of the population thinks this way.
- Communication: sharing the assessment findings and risk decisions with stakeholders so they can act on them.
- Rinse and Repeat: Risk management is iterative. Implement an action, analyze the results, make changes if necessary, and repeat the cycle until the residual risk is at an acceptable level.
https://iso-docs.com/blogs/iso-27001-isms/information-security-risk-management-iso-27001
2022-06-25T05:04:42
CC-MAIN-2022-27
1656103034170.1
[array(['https://cdn.shopify.com/s/files/1/0564/9625/9172/files/ISMS_Information_Security_RISK_Management_Excel_Template_1024x1024.png?v=1648735164', 'ISMS Information Security Risk Management Template, ISMS Information Security Risk Management Excel Template, ISMS Information Security Risk Management Template Excel, ISMS Risk Management Templates'], dtype=object) array(['https://cdn.shopify.com/s/files/1/0564/9625/9172/files/Risk-Management-Methodology-1024x286_600x600.jpg?v=1643110726', 'ISMS Risk Management, Risk Management Methodology, Information Security Risk Management Methodology'], dtype=object) array(['https://cdn.shopify.com/s/files/1/0564/9625/9172/files/Stages-in-ISRM_600x600.jpg?v=1643110888', 'Stages in ISRM, Stages of ISMS, Stages of ISMS Information Security Risk Management'], dtype=object) ]
iso-docs.com
Abort a Multipart Upload

You can abort an in-progress multipart upload by calling the AmazonS3.abortMultipartUpload method. This method deletes any parts that were uploaded to Amazon S3 and frees up the resources. You must provide the upload ID, bucket name, and key name. The following Java code sample demonstrates how to abort an in-progress multipart upload.

AmazonS3 s3Client = new AmazonS3Client(new ProfileCredentialsProvider());

InitiateMultipartUploadRequest initRequest =
    new InitiateMultipartUploadRequest(existingBucketName, keyName);
InitiateMultipartUploadResult initResponse =
    s3Client.initiateMultipartUpload(initRequest);

s3Client.abortMultipartUpload(new AbortMultipartUploadRequest(
    existingBucketName, keyName, initResponse.getUploadId()));

Note: Instead of a specific multipart upload, you can abort all your multipart uploads initiated before a specific time that are still in progress. This clean-up operation is useful for aborting old multipart uploads that you initiated but neither completed nor aborted. For more information, see Abort Multipart Uploads.
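As a rough illustration of the clean-up described in the Note — aborting every in-progress upload initiated before a cut-off time — the sketch below uses the AWS SDK for C++ to list the in-progress uploads and abort the old ones. The bucket name and cut-off time are placeholders, pagination handling is omitted, and the SDK method names should be verified against the SDK reference; in the Java SDK, TransferManager offers the same clean-up as a single operation.

// Sketch: abort all multipart uploads in `bucket` initiated before `cutoff`.
#include <aws/core/Aws.h>
#include <aws/core/utils/DateTime.h>
#include <aws/s3/S3Client.h>
#include <aws/s3/model/ListMultipartUploadsRequest.h>
#include <aws/s3/model/AbortMultipartUploadRequest.h>

int main() {
    Aws::SDKOptions options;
    Aws::InitAPI(options);
    {
        Aws::S3::S3Client s3;  // credentials resolved from the default provider chain
        const Aws::String bucket = "existing-bucket-name";               // placeholder
        const Aws::Utils::DateTime cutoff = Aws::Utils::DateTime::Now(); // placeholder cut-off

        Aws::S3::Model::ListMultipartUploadsRequest listRequest;
        listRequest.SetBucket(bucket);
        auto listOutcome = s3.ListMultipartUploads(listRequest);
        if (listOutcome.IsSuccess()) {
            for (const auto& upload : listOutcome.GetResult().GetUploads()) {
                if (upload.GetInitiated() < cutoff) {
                    Aws::S3::Model::AbortMultipartUploadRequest abortRequest;
                    abortRequest.SetBucket(bucket);
                    abortRequest.SetKey(upload.GetKey());
                    abortRequest.SetUploadId(upload.GetUploadId());
                    s3.AbortMultipartUpload(abortRequest);  // frees the stored parts
                }
            }
        }
    }
    Aws::ShutdownAPI(options);
    return 0;
}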
http://docs.aws.amazon.com/AmazonS3/latest/dev/LLAbortMPUJava.html
2017-03-23T06:19:54
CC-MAIN-2017-13
1490218186780.20
[]
docs.aws.amazon.com
Ready to get started? We hope you enjoy Gantry 5 every bit as much as we have enjoyed making it. The first step is to download Gantry 5 and the Helium theme, which you can do by clicking the buttons below, or via GitHub. Once you have the latest packages, installation is simple. We have provided a step-by-step guide in the Installation portion of this documentation.

In Joomla, when you have installed and activated both the Gantry framework and your Gantry-powered theme, you can access the Gantry 5 administrator in several different ways, the easiest being simply navigating to Components → Gantry 5 Themes from the back end of Joomla. Here, you will see a list of any installed Gantry-powered themes. You can Preview a theme from here or select Configure to go directly to the Gantry Administrator, where you can get started modifying your Gantry-powered site.

In WordPress, accessing the Gantry 5 administrator is pretty easy. Once you have both Gantry and your desired Gantry-powered theme installed and activated, you can simply select (Theme Name) Theme from the sidebar in the back end.

Accessing the Gantry 5 administrator in Grav is easy. Simply log in to the Grav Admin and select Gantry 5 in the sidebar. Once here, you can choose the Gantry 5 theme you wish to configure by clicking the Configure button.

The Gantry Administrator has multiple administrative tools you can flip through to configure how your Gantry-powered theme looks and functions. Here is a quick breakdown of each of these tools and what you can do with them.

Outlines: This administrative panel displays the current theme's outlines, giving you quick access to edit, rename, duplicate, and delete them.

Menu Editor: This administrative panel gives you the ability to enhance the platform's menu by altering styling, rearranging links, and creating menu items that sit outside of the CMS's integrated Menu Manager.

About: This page gives you quick, at-a-glance information about the currently accessed theme. This is a one-stop shop for information about the theme, including: name, version number, creator, support links, features, and more.

Extras: This button opens a dropdown that gives you access to Clear Cache and Platform Settings functionality.

Outlines Dropdown: This dropdown displays the various outlines associated with your site. You can use it to quickly switch between them to edit their individual settings.

Styles (Pictured): The Styles administrative panel gives you access to style-related settings for the outline. This includes things like theme colors, fonts, style presets, and more.

Particle Defaults: The Particle Defaults administrative panel offers you the ability to configure the functional settings of the theme. This includes setting defaults for Particles, as well as enabling/disabling individual Particles.

Layout: The Layout administrative panel is where you configure the layout for your theme. Creating and placing module positions, Particles, spacers, and non-rendered scripts such as Google Analytics code is all done in this panel.

So, you've downloaded the Helium theme and you're ready to edit the content that appears on the front page? We have you covered. If you're creating a new website from scratch, you are most likely using a RocketLauncher for Joomla or WordPress, or a Skeleton for Grav. These are pre-configured editions of Helium that come complete with demo content and particles that are ready for you to edit and configure to your liking. Once you have the Skeleton or RocketLauncher installed, you will notice that there is a lot of content already on the front page.
This content is placed within the Home - Particles outline.

In Joomla, you can access this outline by navigating to Components → Gantry 5 Themes in the administration menu in the back end and selecting Home - Particles from the outline drop-down. Once you have done that, simply switch to the Layout tab and you will see and have access to the particles that make up the front page.

In WordPress, you can access this outline by navigating to Helium Theme in the administration menu in the back end and selecting Home - Particles from the outline drop-down. Once you have done that, simply switch to the Layout tab and you will see and have access to the particles that make up the front page.

In Grav, you can access this outline by navigating to Gantry 5 in the administration menu in the back end and selecting Home - Particles from the outline drop-down. Once you have done that, simply switch to the Layout tab and you will see and have access to the particles that make up the front page.

This process works for any other outline, including the Base outline, which is the global default for any page that isn't already assigned to another outline.

A chat room has been set up using Gitter where you can go to talk about the project with developers, contributors, and other members of the community. This is the best place to go to get quick tips and discuss features with others.
http://docs.gantry.org/gantry5/basics/getting-started
2017-03-23T06:21:08
CC-MAIN-2017-13
1490218186780.20
[]
docs.gantry.org