content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
Contents Intro Bootstrap Updates Versions 5.0.x and below require an update to the CSS bootstrap, which can cause incompatibilities with heavily customized templates during the upgrade process. If upgrading from version 5.0.x or below, it is recommended to create a new custom template from the updated bootstrap rather than attempting to update the old bootstrap. This is noted in the WHMCS 5.1 change log. Customising how payment gateways are displayed There may be occasions where it's desirable to customise the way payment gateways are displayed. For example, you may wish to add formatting, display images such as card logos, or include any other code of your own. Due to security considerations, in v5.3 and above it isn't possible to enter HTML code into the display name or payment instruction fields. Instead you can customise the relevant template to display your desired code. For example, if you wanted to display some credit card logos when the PayPal payment method is selected on the printable invoice, code along the lines of the sketch below could be used.
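The following is only a rough sketch of such a template customisation. It assumes the printable invoice template exposes the selected gateway name in a Smarty variable; the variable name $paymentmethod and the image paths are illustrative assumptions rather than values taken from the WHMCS documentation, so check your template's actual variables before using it.

{* Hypothetical example: show card logos only when PayPal is the selected gateway. *}
{* $paymentmethod and the image paths below are assumed names. *}
{if $paymentmethod eq "PayPal"}
    <img src="images/visa.png" alt="Visa" />
    <img src="images/mastercard.png" alt="MasterCard" />
{/if}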
http://docs.whmcs.com/Client_Area_Template_Files
2015-02-27T03:59:27
CC-MAIN-2015-11
1424936460472.17
[]
docs.whmcs.com
In the Median Resource Usage of Searches panels, note that: - Resource usage is aggregated over all searches. - Memory usage represents physical memory. - In this chart, CPU usage is expressed as a percentage of one core, not as system-wide CPU usage. As a result, you are likely to see values >100% here. This is not the case for other examples of CPU usage in the distributed management console. In the Aggregate Search Runtime panel, note that: - For each time bin in the chart, the Monitoring Console adds up the runtime of all searches that were running during that time range. Thus, you might see, for example, 1000 seconds of search in 5 minutes. This means that multiple searches were running over the course of those 5 minutes. - For the modes historical batch and RT indexed, historical batch can be dispatched only by certain facilities within Splunk Enterprise (the scheduler, for example). RT indexed means indexed real-time. In the Top 10 Memory-Consuming Searches panel, SID means search ID. If you are looking for information about a saved search, audit.log matches the name of your saved search (savedsearch_name) with its search ID (search_id), user, and time. With the search_id, you can look up that search elsewhere, like in the Splunk platform search logs (see What Splunk logs about itself). The memory and CPU usage shown in these dashboards are for searches only. See the resource usage dashboards for all Splunk Enterprise resource usage. In the Instances by Median CPU Usage panel, CPU can be greater than 100% because of multiple cores. In the Instances by Median Memory Usage panel, memory is physical. For the modes historical batch and RT indexed: historical batch can be dispatched only by certain facilities within Splunk Enterprise (the scheduler, for example). RT indexed means indexed real-time. What to look for in these panels The historical panels get data from introspection logs. If a panel is blank or missing information from non-indexers, check: - that you are forwarding your introspection logs to your indexers, and - the system requirements for platform instrumentation. In the Search Activity: Instance > Search activity panel, the snapshots are taken every ten seconds by default. So if no searches are currently running, or if the searches you run are very short-lived, the snapshots panel is blank and says "no results!"
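As a rough illustration of the audit.log lookup described above, a search along the following lines correlates a saved search name with its search ID. This is only a sketch: the _audit index and the action, savedsearch_name, and search_id fields follow common Splunk conventions, and "My Saved Search" is a placeholder, so verify the field names against your own deployment.

index=_audit action=search savedsearch_name="My Saved Search"
| table _time user search_id savedsearch_name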
https://docs.splunk.com/Documentation/Splunk/8.1.0/DMC/SearchactivityDeploymentwide
2021-09-16T20:13:07
CC-MAIN-2021-39
1631780053717.37
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
The group, or groups, over which the function operates, depending on the evaluation of the specified condition. If the condition evaluates to TRUE, a new dynamic partition is created inside the specified window partition. If there is no PARTITION BY or RESET WHEN clause, then the entire result set delivered by the FROM clause constitutes a single partition.
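As an illustrative sketch (not taken from this page; the table and column names are hypothetical, and the placement of RESET WHEN within the OVER clause should be checked against your Teradata release), a RESET WHEN condition starts a new dynamic partition each time it evaluates to TRUE:

SELECT account_id,
       tx_date,
       balance,
       ROW_NUMBER() OVER (
           PARTITION BY account_id
           ORDER BY tx_date
           -- a new dynamic partition starts whenever the condition is TRUE
           RESET WHEN balance < 0
       ) AS rows_since_reset
FROM account_balances;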
https://docs.teradata.com/r/756LNiPSFdY~4JcCCcR5Cw/ZxJwOmwt314lEJw9iwit7A
2021-09-16T18:51:26
CC-MAIN-2021-39
1631780053717.37
[]
docs.teradata.com
Managing website content Learn how you can edit content and insert new content into Kentico pages. Learn about the functionality of the built-in text editor, such as styling your text, inserting media into text, or using the spelling checker. Add and remove files from page attachments, the content tree, and media libraries. Learn how best to store files in the system. Apply workflows to pages, publish pages, approve pages, or send them for approval to someone else. Individual applications Familiarize yourself with the functionality provided by individual applications. Configuration Configuring the environment for content editors Set up the applications and other functionality that content editors use.
https://docs.xperience.io/k11/managing-website-content
2021-09-16T19:39:26
CC-MAIN-2021-39
1631780053717.37
[]
docs.xperience.io
Reference - Field editor The field editor allows you to define fields for objects (page types, web parts, on-line forms, system classes, etc.). Users can set the values of the fields when editing the resulting form. The interface of visible fields depends on the selected form control (a drop-down list, check box, etc.). You can perform the following actions using the field editor: Creating new fields Click New field to add new fields. When creating fields for objects that represent database tables (such as on-line forms, page types or custom tables), choose the Field type: - Standard field - allows you to select a form control, which users can see in the resulting form, and saves values into the corresponding database column. - Primary key - creates a field that stores the primary identifier for the table. - Field without database representation - creates a field without an equivalent column in the database. For example, you can create a check box field that does not have a persistent value, but allows users to adjust the visibility of other form fields or affects the processing of the form. - Page field - only available for page type fields. Allows you to choose a general page column from the CMS_Tree or CMS_Document table, and link it to the page field. To see an example of creating new fields, see Customizing product option forms. Note: Not all settings may be available depending on the selected Field type. The field editor may also have other options related to the specific object type for which the form is being defined. Creating new categories Categories allow you to group multiple fields together. The categories are displayed as sub-headings in the resulting form. Each category contains all fields defined below it in the list. Using categories is recommended in large forms to make them easier for users to navigate. Click ... next to the New field button and select New category to create categories. The following properties are available when creating or editing categories: Moving fields or groups - Select the field or category in the list. - Move the item using the Move up or Move down buttons. Moving fields in the list also changes their positions in the resulting form. Deleting fields or categories - Select the field or category in the list. - Click Delete item. The field or group is deleted without the option of undoing the deletion. Setting default form field values through macros When defining form fields, you can use macro expressions.
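For example (an assumption-based sketch rather than content from this page: CurrentUser is a standard object in Kentico's K# macro syntax, but confirm it against the macro reference for your version), a field's default value could be supplied with a macro such as:

{% CurrentUser.UserName %}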
https://docs.xperience.io/k12sp/custom-development/developing-form-controls/reference-field-editor
2021-09-16T19:12:32
CC-MAIN-2021-39
1631780053717.37
[]
docs.xperience.io
Using the advanced monitoring reports These reports provide details on what your advanced monitoring configuration has tracked: - Monitored execution report If you have configured your auditing installation for advanced monitoring, then this Monitored Execution Report provides a detailed record of the sessions where a user ran one of the commands that you’ve configured to monitor. This report shows who ran one of the monitored commands even if that person is not an audited user. Also, this report includes information on commands that are run individually or as part of scripts. - Detailed execution report If you have configured your auditing installation to perform advanced monitoring, then this Detailed Execution report shows all of the commands being executed on the audited machines—including commands that are run as part of scripts or other commands. - File monitor report The File Monitor report shows the sensitive files being modified by users on the audited machines. The File Monitor report includes any activity by any user (except root) in the following protected areas on audited computers: /etc/ /var/centrifydc/ /var/centrifyda/ /var/centrify/
https://docs.centrify.com/Content/audit-admin/AdvancedMonitoringReportsUse.html
2021-09-16T18:47:31
CC-MAIN-2021-39
1631780053717.37
[]
docs.centrify.com
Pick lists on fields supersede and enhance the former "Discrete set of values" feature. A list of values can be bound to a Standard field; those values are then used as a template when editing or searching the field. The typical use case is when one field value is used heavily on multiple records throughout the database. This is exactly what you would expect from a discrete set of values, but there are other advanced options that can be set up on a field. The pre-defined values can be selected from the list when you query on the field, in both the Query Builder and the Form-based query. A value can also be easily selected when editing or inserting a row. The Pick List is configurable in the schema editor for a selected field: under the Edit Pick List... button you will find a dialog with several options. The easiest option lets the Pick List access all the possible values in the field and make them available for selection; you load the available values from the database by clicking the "Load" button. Because it would be too tedious to pick a value from the list while, for example, building a query, the Pick List functionality also supports context searching for the values by simply typing. Another option enables you to set up only a discrete set of values to choose from. For example, when setting up a Pick List for the Acceptors field, which contains about 90 different numerical values, only the values you define will be available in the Pick List. Values loaded by Groovy script Setting up the Pick List with values loaded by Groovy gives us more possibilities. One of the neat features is "aliasing" the values. See the example Groovy below.

loadValues = { field, pickList ->
    return [
        [1, "one"],
        [2, "two"],
        [3, "three"],
        [4, "four"],
        [5]
    ]
}

As you can see in the example, which was used for the Acceptors field in the Demo Database, the code assigns only 5 values to be available (in an analogous way to the Constant List of Values option). The first four will be available in the Pick List by their string alias. The last one will be available as the value itself (in this case 5). The interface also offers the possibility to validate your Groovy script by clicking the "Test script" button. Another Groovy example is a simple loading of values, which basically supplies the functionality of the Constant List of Values option:

// 1. Just a simple list of values - numbers for integer fields OR Strings for text fields
loadValues = { field, pickList ->
    return [ 1, 2, 3, 10..20, 30 ]
    // Or Strings for text fields:
    // return [ "one", "two", "three", "four", "five" ]
}

The next Groovy example also allows you to create obsoleted items. The createItem method is also an alternative way to create "aliased" values, as seen in the first Groovy example.

// 3. Using the create methods - also allows creating obsoleted items
loadValues = { field, pickList ->
    return [
        pickList.createObsoletedItem(0, "zero"), // Only for querying, but not allowed when inserting rows
        pickList.createItem(1, "one"),
        pickList.createItem(2, "two"),
        pickList.createItem(3, "three"),
        pickList.createItem(4, "four"),
        pickList.createItem(5)
    ]
}

The last example shows how to load values from another field in a different entity, or in the same one. This can be useful if you have an entity with a field used only to store the list of values for the pick list. It also lets you share the same pick list definition across various entities simply by loading the values from one centralised place using this script:
// 4. Load values from another field in another (or the same) entity. See comments below.
// a. Fill in the correct names of the entity and field below
// b. In the case of the same entity, just change the 2nd line to:
//    def entity = field.entity
loadValues = { field, pickList ->
    def schema = field.entity.schema
    def entity = schema.entities.items.find { it.name == 'Wombat activities' } // Fill in the correct entity name
    // def entity = field.entity // If the field is from the same entity, just use this line instead
    def field2 = entity.fields.items.find { it.name == 'BIO.SPECIES' } // Fill in the correct field name
    def edp = DFEntityDataProviders.find(entity)
    return edp.retrieveDistinctValuesForField(field2.id)
}

Another highlight of the functionality is that when an "In list" query is used, the Pick List changes to checkboxes, enabling multiple selection.
https://docs.chemaxon.com/display/lts-gallium/pick-list.md
2021-09-16T18:38:37
CC-MAIN-2021-39
1631780053717.37
[]
docs.chemaxon.com
Date: Fri, 02 Mar 2012 22:40:21 +1000 From: Da Rock <[email protected]> To: [email protected] Subject: Re: Brother Printer Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <20120302065220.2b4fa8f9@scorpio> <[email protected]> On 03/02/12 22:15, [email protected] wrote: >. Are you sure it's just a script? Any clue as to what shell it is using? Bash? I do believe there should be some binaries there somewhere as well. Do you know what printer language it is using? This was mostly obscured on the net, but I presume GDI. You also may have some success using another similar driver (trial and error though). >> The fact that it works under Windows is not surprising. Printing under >> Windows "just works". There is a movement towards that goal now in >> progress for non-Windows based systems. See >> <> >> for further details. > It is clear under Windows would be running a cow with no effort. :) Love this response :D >>. You'll be lucky if you get an answer, but it will invariably be "no. Piss off, its not worth our time"....
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=823936+0+archive/2012/freebsd-questions/20120304.freebsd-questions
2021-09-16T18:09:54
CC-MAIN-2021-39
1631780053717.37
[]
docs.freebsd.org
It is a best practice to configure SP/BMC and the e0M management interface on a subnet dedicated to management traffic. Running data traffic over the management network can cause performance degradation and routing problems. The management Ethernet port on most storage controllers (indicated by a wrench icon on the rear of the chassis) is connected to an internal Ethernet switch. The internal switch provides connectivity to SP/BMC and to the e0M management interface, which you can use to access the storage system via TCP/IP protocols like Telnet, SSH, and SNMP. If you plan to use both the remote management device and e0M, you must configure them on the same IP subnet. Since these are low-bandwidth interfaces, the best practice is to configure SP/BMC and e0M on a subnet dedicated to management traffic. If you cannot isolate management traffic, or if your dedicated management network is unusually large, you should try to keep the volume of network traffic as low as possible. Excessive ingress broadcast or multicast traffic may degrade SP/BMC performance.
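To make the "same subnet" recommendation above concrete, the sketch below configures the SP/BMC on the same dedicated management subnet that e0M uses. The command name follows ONTAP 9, but the parameters and addresses shown here are assumptions for illustration only; verify the exact syntax for your release before using it.

# Review the current SP/BMC network configuration.
system service-processor network show

# Hypothetical addresses on a dedicated management subnet (192.0.2.0/24).
system service-processor network modify -node node1 -address-family IPv4 -enable true -ip-address 192.0.2.10 -netmask 255.255.255.0 -gateway 192.0.2.1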
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-960/GUID-19433707-A30F-4FEC-884D-E0AF71CD14F8.html
2021-09-16T18:19:20
CC-MAIN-2021-39
1631780053717.37
[]
docs.netapp.com
Installation - Install the plugin through the Update Center or download it into the SONARQUBE_HOME/extensions/plugins directory - Restart the SonarQube™ server Known Limitations Most of the coding rules (Checkstyle and PMD) are currently not translated, with the exception of the Java rules provided by FindBugs. Change Log
http://docs.codehaus.org/pages/diffpages.action?pageId=229739827&originalId=231082058
2014-04-16T10:45:36
CC-MAIN-2014-15
1397609523265.25
[]
docs.codehaus.org
We usually group depreciable properties into classes. You have to base your CCA claim on a rate assigned to each class of property. How much CCA can you claim? The amount of CCA you can claim depends on the type of rental property you own and the date you acquired it. You group the depreciable property you own into classes. A different rate of CCA generally applies to each class. We explain the most common classes of depreciable rental property and the rates that apply to each class in "Classes of depreciable property." For the most part, use the declining balance method to calculate your CCA. This means that you claim CCA on the capital cost of the property minus the CCA, if any, you claimed in previous years. The remaining balance declines over the years as you claim CCA. Last year, Sue bought a rental building for $60,000. On her income tax return for last year, she claimed CCA of $1,200 on the building. This year, Sue bases her CCA claim on her remaining balance of $58,800 ($60,000 minus the $1,200 already claimed). Each year, the claim reduces the balance of the class by the amount of CCA claimed. As a result, the CCA available for future years will be reduced. In the year you acquire rental property, you can usually claim CCA only on one-half of your net additions to a class. This is the half-year rule (also known as the 50% rule). The available-for-use rules may also affect the amount of CCA you can claim. In the year you dispose of rental property, you may have to add an amount to your income as a recapture of CCA. Conversely, you may be able to deduct an amount from your income as a terminal loss. If you own more than one rental property, you have to calculate your overall net income or loss for the year from all your rental properties before you can claim CCA. Include the net rental income or loss from your T5013 or T5013A slip in the calculation if you are a partner. Combine the rental incomes and losses from all your properties, even if they belong to different classes. This also applies to furniture, fixtures, and appliances that you use in your rental building. You can claim CCA for these properties, the building, or both. You cannot use CCA to create or increase a rental loss. For more information about loss restrictions on rental and leasing properties, see Interpretation Bulletin IT-195, Rental Property - Capital Cost Allowance Restrictions, and Interpretation Bulletin IT-443, Leasing Property - Capital Cost Allowance Restrictions, and its Special Release. For more information, see Classes of depreciable property.
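To make the declining-balance and half-year calculations above concrete, here is a small illustrative sketch. It is not from the CRA guide and is not tax advice; the 4% rate is simply an example figure that happens to reproduce the $1,200 first-year claim in Sue's example under the half-year rule.

def cca_schedule(capital_cost, rate, years, half_year_rule=True):
    """Rough declining-balance CCA illustration (not tax advice)."""
    ucc = capital_cost  # undepreciated balance remaining in the class
    schedule = []
    for year in range(1, years + 1):
        base = ucc / 2 if (half_year_rule and year == 1) else ucc
        cca = base * rate   # the claim for the year
        ucc -= cca          # claiming CCA reduces the balance of the class
        schedule.append((year, round(cca, 2), round(ucc, 2)))
    return schedule

# A $60,000 building at a hypothetical 4% rate: year 1 gives a $1,200 claim.
for year, cca, remaining in cca_schedule(60_000, 0.04, 3):
    print(f"Year {year}: CCA {cca}, remaining balance {remaining}")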
http://docs.quicktaxweb.ca/ty10/english/text/en/common/cra_other/cra_t4036_rent_cca.html
2014-04-16T10:32:52
CC-MAIN-2014-15
1397609523265.25
[]
docs.quicktaxweb.ca
Vertex weight editing. Important This modifier implicitly clamps weight values to the standard range (0.0 to 1.0). All values below 0.0 will be set to 0.0, and all values above 1.0 will be set to 1.0. Note You can view the modified weights in Weight Paint Mode. This also implies that you will have to disable the Vertex Weight Edit modifier if you want to see the original weights of the vertex group you are editing. Options The Vertex Weight Edit modifier panel. - Vertex Group The vertex group that will be affected. - Default Weight The default weight to assign to all vertices not in the given vertex group. - Add to Group Adds vertices with a final weight over Add Threshold to the vertex group. - Remove from Group Removes vertices with a final weight below Remove Threshold from the vertex group. - Normalize Weights Scale the weights in the vertex group to keep the relative weight but the lowest and highest values follow the full 0 - 1 range. Falloff - Falloff Type Type of mapping. - Linear No mapping. - Custom Curve Allows you to manually define the mapping using a curve. - Sharp, Smooth, Root and Sphere These are classical mapping functions, from spikiest to roundest. - Random Uses a random value for each vertex. - Median Step Creates binary weights (0.0 or 1.0), with 0.5 as the cutting value. - Invert <--> Inverts the falloff. Influence These settings are the same for the three Vertex Weight modifiers. - Global Influence The overall influence of the modifier (0.0 will leave the vertex group's weights untouched, 1.0 is standard influence). Important. Example.
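As a brief illustration of the options listed above, the modifier can also be added and configured from Blender's Python API. This is a sketch only: it assumes the active object is a mesh that already has a vertex group named "Group", and the property names should be checked against the API reference for your Blender version.

import bpy

obj = bpy.context.object  # the active mesh object
mod = obj.modifiers.new(name="WeightEdit", type='VERTEX_WEIGHT_EDIT')

mod.vertex_group = "Group"    # the vertex group that will be affected
mod.default_weight = 0.5      # weight assigned to vertices not in the group
mod.use_add = True            # add vertices above the threshold to the group
mod.add_threshold = 0.1
mod.falloff_type = 'SHARP'    # one of the mapping functions described above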
https://docs.blender.org/manual/pt/dev/modeling/modifiers/modify/weight_edit.html
2022-06-25T04:37:11
CC-MAIN-2022-27
1656103034170.1
[array(['../../../_images/modeling_modifiers_modify_weight-edit_panel.png', '../../../_images/modeling_modifiers_modify_weight-edit_panel.png'], dtype=object) array(['../../../_images/modeling_modifiers_modify_weight-edit_exrem-vertices.jpg', '../../../_images/modeling_modifiers_modify_weight-edit_exrem-vertices.jpg'], dtype=object) ]
docs.blender.org
Plan content using Video to Social Recipe Videos are an amazing source for educating your audience in a more interactive manner. Using this automation recipe, you can share interactive and informative videos to your social media profiles, groups, and pages. This is a great way to keep your audience connected with your content on different channels. It also helps you align all your social channels and keep every channel updated. Let's see how to create an automated campaign to schedule videos for your social channels. Sign in to your ContentStudio account, click Publish on the navigation bar, and then select Automation. On the next screen, all recipes will appear. Select Videos to Social Media and click on +New Campaign. - 1 Campaign Name and Channels - 2 Schedule and Finalize - You can start a campaign instantly with the option Run this campaign continuously starting today. - Or you can set the start and end date and time manually. Content Matching Rules and Filters In this step, you can set the rules and filters to specify the type of content you wish to fetch. Social Media Channels
https://docs.contentstudio.io/article/370-plan-content-using-video-to-social-recipe
2022-06-25T04:29:21
CC-MAIN-2022-27
1656103034170.1
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/5c948f870428633d2cf3ece2/file-eocyUbGcGg.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/5c36daaa2c7d3a31944fde6f/file-iPtyrKOPfb.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/5c35a47904286304a71e05b2/file-XSlLGWGTKP.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/5c35aa602c7d3a31944fd2c1/file-QfeOoHZDE0.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/5c35aaba2c7d3a31944fd2c2/file-lVqIHXxkE1.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/5c35ba152c7d3a31944fd330/file-seZowwlKnN.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/5c347e432c7d3a31944fc675/file-ADwTWmkcSL.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/5c347e7504286304a71dfada/file-SD7RKsZ74h.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/576134c4c6979153877cd3cc/images/5c347f1f04286304a71dfadc/file-oMEdhhgXJN.png', None], dtype=object) ]
docs.contentstudio.io
Pinterest Post Failed to Publish Are you wondering why some of your pins have been appearing in the "Failed Pins" section of your dashboard? Don't worry, we are here to help! We will always show you why your post failed to publish so you know what went wrong. We have listed the reasons for the most common errors you may face while publishing content to Pinterest, so you don't have to scratch your head wondering why the content did not appear. The errors include: - Sorry! We blocked this link because it may lead to spam. - Authentication failed - Sorry! This site doesn't allow you to save Pins. - Sorry! We blocked this link because it may lead to inappropriate content. - Sorry we could not fetch the image. - Sorry! Something went wrong on our end. Please try again - Pinterest denied publishing social message because the limit was reached - Unknown Error Occurred To solve these issues, we will address them one by one so that you do not face them anymore. Authentication Failed or Authorization Connection Problem: If you are getting the authentication failed message while publishing content to Pinterest or while connecting your account, here is how to solve it: - 1 - Authentication Failed: You may have changed your account login credentials but not reconnected the Pinterest account to ContentStudio before publishing content to your social channel. In this case, you will get the authentication failed error because we need a new access token to make requests to Pinterest; otherwise, the publication will fail. - 2 - Authorization Problem: If you are trying to connect the Pinterest account but are getting the Authorization error message, this means that your Pinterest account is set to a different language. To solve this, change the account language to English, then try connecting your account again and it should work seamlessly. Link Blocked or Inappropriate Content: By default, Pinterest blocks URL shorteners such as bit.ly, goo.gl, and others. If you are sharing content that contains this type of URL, your content will not be shared on Pinterest and you will receive one of the following messages: - Sorry! We blocked this link because it may lead to spam. - Sorry! This site doesn't allow you to save Pins. - Sorry! We blocked this link because it may lead to inappropriate content. - Sorry! Something went wrong on our end. Please try again To solve this issue, there are two options. - 1 - Links without URL Shortener: Share your content to Pinterest without a URL shortener. This will fix the issue described above. However, if you are still getting the error, it means Pinterest has blocked your website. You may need to send an appeal to Pinterest to whitelist your domain. You can find more information from this link. - 2 - Custom Domains: You can use a third-party service such as Replug for link tracking. It allows you to add your own custom domain for link shortening and has a direct integration with ContentStudio. By doing that, you will have branded links and can manage their reputation through the type of links you create. Other Messages: If you happen to get messages like: - Sorry we could not fetch the image. - Sorry! Something went wrong on our end. Please try again The image that you are sharing could not be downloaded on the Pinterest side, which is why the post failed to publish. Or an internal error occurred on the Pinterest side while creating the post, due to which the post failed to publish.
These are the most common errors that can happen while sharing your content to Pinterest. Hopefully, this guide helps you in sharing content to Pinterest. Still not resolved? Check the following: If you're an avid Pinterest user and have pinned more than 200,000 pins, we will be unable to post on your behalf! You can check out their limits here. Solution: Try deleting some pins before you try to schedule. If you need more information about what is considered spam for Pinterest, please check out Blocked URLs on Pinterest. Limit Reached Error: Pinterest denied publishing social message because limit was reached Why did I receive this error? Pinterest has officially reduced its API request limits: each unique Pinterest profile is allowed to make up to 100 calls per 24-hour period. These calls include selecting boards, creating messages, sending messages, connecting profiles, authenticating profiles, etc. How can I fix this? Please reschedule the message for a minimum of 24 hours in the future. How do I prevent this error from happening in the future? You will need to wait a minimum of 24 hours before sending more calls from that specific Pinterest profile. Unknown Error: Error: Awkwaaaard.{errors=[{code=5000, message=Unknown error occurred, id=74844ecc-e891-4f6a-86db-b3b081bfa6d2, resource={type=socialProfile, id=126057268}}]} In an effort to remove spam from their network, Pinterest does not allow posting of any shortened links. Please read more about it here. Please remove the shortened link from your Pin in order to solve this issue.
https://docs.contentstudio.io/article/497-pinterest-post-failed-to-publish
2022-06-25T04:17:03
CC-MAIN-2022-27
1656103034170.1
[]
docs.contentstudio.io
This description applies to the following properties: Property Type: Static Default Value: The topic How To: Embed Non-standard fonts gives more information on choosing fonts and ensuring non-standard fonts display correctly when viewed on different devices. Value set in Form Designer. Static properties can be made Dynamic by double-clicking the gray radio button.
https://docs.driveworkspro.com/Topic/FontProperty
2022-06-25T04:09:13
CC-MAIN-2022-27
1656103034170.1
[]
docs.driveworkspro.com
Description This command can change the configuration of the device at runtime. It sets the given property to the given value. Parameters Supported Properties Usage This command can be used to set a device configuration property at runtime. Example Set Experitest Url Please make sure you replace <EXPERITEST_URL> with the appropriate URL in the sample code below. - For the public SeeTest cloud, EXPERITEST_URL needs to be - For a Dedicated Experitest Lab, EXPERITEST_URL needs to be your own domain. e.g - SetProperty

DesiredCapabilities dc = new DesiredCapabilities();
dc.setCapability(MobileCapabilityType.UDID, "deviceid");
...
AndroidDriver driver = new AndroidDriver(new URL("<EXPERITEST_URL>"), dc);
SeeTestClient seetest = new SeeTestClient(driver);
...
// this command can be used to set the screen refresh rate.
seetest.setProperty("screen.refresh", "10");
https://docs.experitest.com/display/TE/SetProperty
2022-06-25T04:02:33
CC-MAIN-2022-27
1656103034170.1
[]
docs.experitest.com
Authentication guide - Twitter¶ Note: This guide was written before the renaming. Just replace HackMD with HedgeDoc in your mind 😃 thanks! Go to the Twitter Application management page here Click on the Create New App button to create a new Twitter app: Fill out the create application form, check the developer agreement box, and click Create Your Twitter Application Note: you may have to register your phone number with Twitter to create a Twitter application To do this Click your profile icon --> Settings and privacy --> Mobile --> Select Country/region --> Enter phone number --> Click Continue After you receive confirmation that the Twitter application was created, click Keys and Access Tokens Obtain your Twitter Consumer Key and Consumer Secret Add your Consumer Key and Consumer Secret to your config.json file or pass them as environment variables: config.json: { "production": { "twitter": { "consumerKey": "esTCJFXXXXXXXXXXXXXXXXXXX", "consumerSecret": "zpCs4tU86pRVXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" } } } - environment variables: CMD_TWITTER_CONSUMERKEY=esTCJFXXXXXXXXXXXXXXXXXXX CMD_TWITTER_CONSUMERSECRET=zpCs4tU86pRVXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
https://docs.hedgedoc.org/guides/auth/twitter/
2022-06-25T05:29:19
CC-MAIN-2022-27
1656103034170.1
[]
docs.hedgedoc.org
If you are looking for ways to attract women online, you should know that there are many effective funny headlines for online dating. If you want to appeal to a more imaginative or creative type of girl, write headlines that will be appealing to her. For instance, you can use a humorous question about gaming to catch her interest. This is a great way to strike up a conversation with her and eventually get her to read your profile. Another effective way to attract people is to write a funny headline. People who are humorous tend to be more attractive than people who aren't. Funny headlines show an easy-going personality and put prospective dates at ease. They're also more likely to open up to someone who is fun. Luckily, these types of headlines are becoming more commonplace in online dating, and you can use them to attract women. Your headline is your first impression, so make sure you write an interesting one that captures the attention of others. People like to learn about others who have similar interests. If you enjoy sports, link to an article about the team you like to follow. If you are into home repair, add photos of your work. And if you are an artist, share your most recent artwork. Then link these images to your headline. This way, you'll reach a larger audience.
https://docs.jagoanhosting.com/funny-headlines-meant-for-online-dating/
2022-06-25T04:37:49
CC-MAIN-2022-27
1656103034170.1
[]
docs.jagoanhosting.com
If you are interested in meeting and forming a relationship with a Russian woman, there are a few points you need to follow. First of all, it is recommended to maintain communication. Russian women are looking for a man with a strong sense of home, good manners, and a good appearance. However, it is not a simple feat to find a man with these traits, so to attract a Russian girl you will need to be patient. The beauty of Russian women is well known around the world. These women are strong and elegant, with a graceful physique and strong limbs. They do not like conflict, but they are also friendly and enjoy spending time with other people. Moreover, they don't take up many important issues at once. Russian girls are known to be very clever and delightful, and they make good spouses. Their inner beauty is reflected in the way they behave around men. When dating a Russian girl, you should consider her inner life. You should not choose a Russian woman based on superficial appearance or social behavior. She needs to feel relaxed around you and understand your way of life, as well as your personality. In addition, you should be patient and open to her traditions. Russian girls are sensitive and honest, and they will teach you about their lifestyle and customs. They also have a big heart. If you believe that the relationship is going to work, you should put time and effort into learning about her. The beauty of a Russian woman is apparent in the way she cares for herself. These women are educated and independent, and do not rely on their husbands very much. This quality, combined with a soft personality and kind nature, makes them stand out from their overseas counterparts. A Russian woman will always have a better alternative for you. The beauty of a Russian woman can make your life worth living. So, don't be surprised if a Russian woman turns out to be the one of your dreams. If you are a man searching for a wife or a girlfriend, you could try mail order bride services to meet a Russian woman. Using these services will save you money and get you nice perks at the same time. You can chat with her online, exchange gifts, leave likes and comments. It's easier than ever to do all of this with the help of a Russian dating website. You can also have a helpful site work with you while you look for the perfect girl for your marriage. Russian women are very good at preserving their privacy. They will rarely talk about their careers, because most are not proud of them. Similarly, if you find out about their interests, you'll be surprised to learn that they are always interested in your hobbies, and they are more likely to be loyal to you. So, go ahead and take advantage of this unique opportunity to discover a Russian woman for a lifetime! And don't forget that you will be astonished at just how much you can learn from these women.
https://docs.jagoanhosting.com/how-you-can-find-a-sweetheart-with-russian-wedding-traditions/
2022-06-25T04:53:49
CC-MAIN-2022-27
1656103034170.1
[]
docs.jagoanhosting.com
Office Customization Tool (OCT) 2016 Help: Overview Applies to: Office Professional Plus 2016, Office Standard 2016 You use the Office Customization Tool (OCT) to customize an installation of a volume licensed edition of Office. When you run the OCT, you choose whether to create a new Setup customization file or open an existing one. If you are creating a new file, the OCT displays a list of the products available on the network installation point. You must select a single product that you want to customize. To start the OCT, type setup.exe /admin on the command line. Note The most current version of the OCT is available on the Microsoft Download Center. Setup customization files By using the OCT, you customize Office and save your customizations in a Setup customization file (.msp file), and then place the file in the Updates folder on the network installation point. When you install Office, Setup looks for a Setup customization file in the Updates folder and, if found, applies those customizations. If you put the customization file somewhere other than the Updates folder, you can use the Setup command-line option /adminfile to specify the fully qualified path to the file; for example, setup.exe /adminfile \\server_name\share_name\subfolder\custom.msp. Note If you use a folder other than the Updates folder for your customization files, you can specify its location in the Config.xml file by using the SUpdateLocation attribute of the SetupUpdates element. You also can use a Setup customization file to change an existing installation. Because a Setup customization file is an expanded form of a Windows Installer .msp file, you apply the customization file to the user's computer just as you would a software update, and the user's existing Office installation is updated with your customizations. For example, if you change the installation states of some features to Not Available and then apply the resulting customization file to an existing installation of Office, those features are removed from the user's computer. If you use the OCT to modify an existing .msp customization file, we recommend that you select the .msp file for the same product you are customizing. For example, if you are customizing an existing Office Professional Plus 2016 installation, select an Office Professional Plus 2016 customization .msp file. There are some options in the OCT that are applicable only on a new installation of Office. For example, you can use the INSTALLLOCATION element to specify the folder where Office is installed on the user's computer. If a customization file is applied to an existing installation, however, the INSTALLLOCATION element is ignored. You need to uninstall and reinstall Office to change the installation location. Select Save on the File menu to save the Setup customization file before you exit the OCT. Import Setup customization files The OCT provides support for importing Setup customization files as follows: 32-bit Setup customization files can be imported into the 64-bit version of the OCT and can then be used to customize 64-bit Office products. 64-bit Setup customization files can be imported to the 32-bit version of the OCT and can then be used to customize 32-bit Office products. A 32-bit Setup customization file that is imported into the 64-bit version of the OCT is converted to 64-bit, and a 64-bit customization file that is imported into the 32-bit version of the OCT is converted to 32-bit. To import a customization file, in the OCT, select Import on the File menu. 
In the Open dialog box, select the .msp file that you want to convert, and then choose Open to start the conversion. Note Importing customization .msp files is intended for equivalent cross-architecture products only. You can import a 32-bit Office Professional Plus 2016 customization .msp file into the 64-bit version of the OCT for a 64-bit Office Professional Plus 2016 .msp file. However, you cannot import a 32-bit Word 2016 stand-alone customization .msp file into the 64-bit version of the OCT for a 64-bit Office Professional Plus 2016 .msp file; doing so is prevented and displays an error message. You cannot import Office 2007 Setup customization files into the OCT for Office 2016. The Import feature can also be used in cases where you created an initial Setup customization .msp file for an Office product, such as Office Professional Plus 2016, and later you want to modify the installation to add language packs for that product. In such cases, you first add the language packs to the network installation point that contains the Office product source files. Then you run the OCT from the root of the network installation point, create a new Setup customization file for the same product, and import the original customization .msp file that you created previously for the product (Office Professional Plus 2016 in this example). To import the .msp file, in the OCT, on the File menu, choose Import. In the Open dialog box, select the previously created customization .msp file that you want to update. On the File menu, choose Save As. Specify a unique name for the .msp file, and choose Save. Importing the previously created customization .msp file into the OCT will update the .msp file and include the added languages. Structure of the OCT The OCT consists of the following four major sections, each of which is divided into a number of pages containing the following customizable options:
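The SUpdateLocation attribute mentioned earlier can be illustrated with a minimal Config.xml sketch. This is an assumption-based example rather than content from this article; verify the element names and the Product value against the Config.xml reference for your Office product before using it.

<Configuration Product="ProPlus">
  <!-- Point Setup to a folder other than Updates for customization .msp files -->
  <SetupUpdates SUpdateLocation="\\server_name\share_name\Office\Custom" />
</Configuration>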
https://docs.microsoft.com/en-us/DeployOffice/oct/oct-2016-help-overview?redirectedfrom=MSDN
2022-06-25T04:54:08
CC-MAIN-2022-27
1656103034170.1
[]
docs.microsoft.com
The Umbrella Investigate API follows RESTful principles and provides HTTPS endpoints to interact with Investigate. You can search and list information related to domains, IP addresses, email addresses, Autonomous Systems (AS), and file checksums. The Investigate API requires standard Bearer token authorization for all API requests. For more information, see Umbrella Investigate API. To create an Investigate API access token, log into Umbrella at with your Umbrella Investigate account credentials. Note: To create or delete Investigate API tokens, your user account must include the Full Admin user role. Table of Contents Create Investigate API Key - Navigate to Investigate > API Keys, and click Create New Token. - Enter a title for the API access token. - Click Create. Delete Investigate API Key - Click the trash can icon to delete an Investigate API key.
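As a quick illustration of the Bearer token authorization mentioned above, a request might look like the sketch below. The hostname and endpoint path are assumptions based on the public Investigate API documentation and may differ for your environment, so confirm them against the Umbrella Investigate API reference.

# Hypothetical example: look up the categorization of a domain using an Investigate token.
curl -H "Authorization: Bearer $INVESTIGATE_API_TOKEN" \
     "https://investigate.api.umbrella.com/domains/categorization/example.com"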
https://docs.umbrella.com/investigate/docs/manage-investigate-api-keys
2022-06-25T05:41:12
CC-MAIN-2022-27
1656103034170.1
[array(['https://files.readme.io/e6004a8-inv-api-keys-create-token-2.png', 'inv-api-keys-create-token-2.png'], dtype=object) array(['https://files.readme.io/e6004a8-inv-api-keys-create-token-2.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/6e8318b-inv-api-keys-1.png', 'inv-api-keys-1.png'], dtype=object) array(['https://files.readme.io/6e8318b-inv-api-keys-1.png', 'Click to close...'], dtype=object) ]
docs.umbrella.com
Creating Test Money In order to make testing easier, most Centrapay assets have a “test” variant which can be issued at no cost. In the case of money, issuing the test variant (eg “centrapay.nzd.test”) requires linking a “test” bank account which, instead of going through the banking system, sends transaction notifications to the email address of the initiating user. The test bank account can be used to create a topup request. The 4-digit bank account verification code, which normally appears in your bank account statement, will be included in the emailed transaction notification. The test assets can be created via either the Centrapay API or the Centrapay app. Via API To create test dollars via the Centrapay API: Add email to Centrapay profile: If not already configured, set an email on your Centrapay user profile via the update profile endpoint. Create a test bank account: Create a bank account, using “00-“ as the bank account number prefix, via the create bank account endpoint. Create a test topup: Use the test bank account id to top up via the topup endpoint. The topup must be created with a Centrapay user (ie: authenticated with JWT, not an API key) in order for the transaction email notification to be delivered. Verify the bank account: Post the 4-digit code from the test transaction confirmation email, along with the test bank account id, to the verify bank account endpoint. Via Centrapay App To create test dollars via the Centrapay app: Enable Test Assets: Create a test payment request at . Follow the link to pay the payment request ({id}) and, when prompted, enable test assets. Link Test Bank Account: Visit and link a bank account using “00-“ as the bank account number prefix. Topup and Verify: Topup via by choosing the test bank account. You will receive a test transaction confirmation email with a 4-digit code to “verify” the test bank account. After the bank account is verified, the topup amount will be released into your Centrapay account.
https://docs.centrapay.com/guides/creating-test-money
2022-06-25T04:13:30
CC-MAIN-2022-27
1656103034170.1
[]
docs.centrapay.com
Adding Columns to the Resource Views How can I add more columns to the Resource scheduling view? In Admin Settings / Views you'll see an option for "Resource Columns to Show". Setting that will change everyone's default number of columns. You can also change this on the fly from the drop-down menu on the resources tab: Finally, you can also change the number of days that show up in this view, selecting up to 30 days at a time.
https://docs.dayback.com/article/4-adding-columns-to-the-resource-views
2022-06-25T04:52:05
CC-MAIN-2022-27
1656103034170.1
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568d5975c69791436155c1b3/images/5b01d8f42c7d3a2f9011ad1e/file-Lr3y7kxRlm.png', None], dtype=object) ]
docs.dayback.com
Syntax Here is the syntax for calling FXC.exe, the effect-compiler tool. For an example, see Offline Compiling. Usage fxc SwitchOptions Filenames Arguments Separate each switch option with a space or a colon. SwitchOptions [in] Compile options. There is just one required option, and many more that are optional. Separate each with a space or colon. Required option /T <profile> Shader model (see Profiles). Optional options Print help for FXC.exe. @<command.option.file> File that contains additional compile options. This option can be mixed with other command line compile options. The command.option.file must contain only one option per line. The command.option.file cannot contain any blank lines. Options specified in the file must not contain any leading or trailing spaces. /all_resources_bound Enable aggressive flattening in SM5.1+. New for Direct3D 12. /Cc Output color-coded assembly. /compress Compress DX10 shader bytecode from files. /D <id>=<text> Define macro. /decompress Decompress DX10 shader bytecode from first file. Output files should be listed in the order they were in during compression. /dumpbin Loads a binary file rather than compiling a shader. /E <name> Shader entry point. If no entry point is given, main is considered to be the shader entry name. /enable_unbounded_descriptor_tables Enables unbounded descriptor tables. New for Direct3D 12. /extractrootsignature <file> Extract root signature from shader bytecode. New for Direct3D 12. /Fc <file> Output assembly code listing file. /Fd <file> Extract shader program database (PDB) info and write to the given file.When you compile the shader, use /Fd to generate a PDB file with shader debugging info. /Fe <file> Output warnings and errors to the given file. /Fh <file> Output header file containing object code. /Fl <file Output a library. Requires the D3dcompiler_47.dll or a later version of the DLL. /Fo <file> Output object file. Often given the extension ".fxc", though other extensions are used, such as ".o", ".obj" or ".dxbc". /Fx <file> Output assembly code and hex listing file. /Gch Compile as a child effect for fx_4_x profiles. Note Support for legacy Effects profiles is deprecated. /Gdp Disable effect performance mode. /Gec Enable backward compatibility mode. /Ges Enable strict mode. /getprivate <file> Save private data from the shader blob (compiled shader binary) to the given file. Extracts private data, previously embedded by /setprivate, from the shader blob. You must specify the /dumpbin option with /getprivate. For example: fxc /getprivate ps01.private.data /dumpbin ps01.with.private.obj /Gfa Avoid flow control constructs. /Gfp Prefer flow control constructs. /Gis Force IEEE strictness. /Gpp Force partial precision. /I <include> Additional include path. /Lx Output hexadecimal literals. Requires the D3dcompiler_47.dll or a later version of the DLL. /matchUAVs Match template shader UAV slot allocations in the current shader. For more info, see Remarks. /mergeUAVs Merge UAV slot allocations of template shader and the current shader. For more info, see Remarks. /Ni Output instruction numbers in assembly listings. /No Output instruction byte offset in assembly listings. When you produce assembly, use /No to annotate it with the byte offset for each instruction. /nologo Suppress copyright message. /Od Disable optimizations. /Od implies /Gfp, though output may not be identical to /Od /Gfp. /Op Disable preshaders (deprecated). /O0 /O1, /O2, /O3 Optimization levels. O1 is the default setting. 
- O0 - Disables instruction reordering. This helps reduce register load and enables faster loop simulation. - O1 - Disables instruction reordering for ps_3_0 and up. - O2 - Same as O1. Reserved for future use. - O3 - Same as O1. Reserved for future use. /P <file> Preprocess to file (must be used alone). /Qstrip_debug Strip debug data from shader bytecode for 4_0+ profiles. /Qstrip_priv Strip the private data from 4_0+ shader bytecode. Removes private data (arbitrary sequence of bytes) from the shader blob (compiled shader binary) that you previously embedded with the /setprivate <file> option. You must specify the /dumpbin option with /Qstrip_priv. For example: fxc /Qstrip_priv /dumpbin /Fo ps01.no.private.obj ps01.with.private.obj /Qstrip_reflect Strip reflection data from shader bytecode for 4_0+ profiles. /Qstrip_rootsignature Strip root signature from shader bytecode. New for Direct3D 12. /res_may_alias Assume that UAVs/SRVs may alias for cs_5_0+. Requires the D3dcompiler_47.dll or a later version of the DLL. /setprivate <file> Add private data in the given file to the compiled shader blob. Embeds the given file, which is treated as a raw buffer, to the shader blob. Use /setprivate to add private data when you compile a shader. Or, use the /dumpbin option with /setprivate to load an existing shader object, and then after the object is in memory, to add the private data blob. For example, use either a single command with /setprivate to add private data to a compiled shader blob: fxc /T ps_4_0 /Fo ps01.with.private.obj ps01.fx /setprivate ps01.private.data Or, use two commands where the second command loads a shader object and then adds private data: fxc /T ps_4_0 /Fo ps01.no.private.obj ps01.fx fxc /dumpbin /Fo ps01.with.private.obj ps01.no.private.obj /setprivate ps01.private.data /setrootsignature <file> Attach root signature to shader bytecode. New for Direct3D 12. /shtemplate <file> Use given template shader file for merging (/mergeUAVs) and matching (/matchUAVs) resources. For more info, see Remarks. /Vd Disable validation. /verifyrootsignature <file> Verify shader bytecode against root signature. New for Direct3D 12. /Vi Display details about the include process. /Vn <name> Use name as variable name in header file. /WX Treat warnings as errors. /Zi Enable debugging information. /Zpc Pack matrices in column-major order. /Zpr Pack matrices in row-major order. A.fx /Fo A.o /matchUAVs /shtemplate C.o You don't have to recompile C.fx in the second pass..
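Putting a few of the switches above together, a typical offline compile might look like the sketch below. The file names and entry point are placeholders; the /T, /E, /Fo and /Fc switches are the ones documented above.

fxc /T ps_5_0 /E PSMain /Fo pixel.cso /Fc pixel.lst pixel.hlsl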
https://docs.microsoft.com/en-us/windows/win32/direct3dtools/dx-graphics-tools-fxc-syntax?redirectedfrom=MSDN
2022-06-25T06:08:45
CC-MAIN-2022-27
1656103034170.1
[]
docs.microsoft.com
What is Site Induction? Site inductions are an important part of workplace health and safety. They ensure that workers are aware of the hazards present at a worksite, and how to safely complete their work. In most cases, site inductions must be completed before a worker begins work at a new job. The induction process typically includes an overview of the company’s health and safety policies, as well as specific instructions for the worksite. Workers need to receive a site induction, even if they have worked at the same job before. The hazards present at a worksite can change over time, so workers need to be up to date on any new risks. Site inductions are typically provided by the employer, but workers may also be responsible for completing their induction. In some cases, workers may be required to sign a document confirming that they have received an induction. Common Site Induction Topics Site induction is the process of familiarizing new employees with the organization's policies and procedures, their role in the company, safety measures, and other important information. It can be an overwhelming process for new employees, but it's crucial to their success within the organization. Here are some of the most common topics covered in site induction programs: - Company history. - Corporate culture. - Purpose of the company. - Structure and organization. - Role of the employee within the company. - Values and ethics. - Health and safety. - Procedures and policies. - Compliance. - Reporting procedures. - Onboarding and orientation. Site Induction Plan Site induction plans are put in place to ensure that new employees are given the necessary safety information and training before working on site. This plan typically includes a tour of the worksite, as well as hands-on training for the specific tasks that the employee will be carrying out. The health and safety risks associated with the particular job should also be covered, along with any emergency procedures that need to be followed. By ensuring that all employees receive adequate training and safety information, companies can help to reduce the number of accidents and injuries that occur on-site. What Should Your Safety Induction Cover? At a minimum, a safety induction program should cover key health and safety information that is specific to the workplace. This could include, but is not limited to, information on hazardous substances and how to safely work with them; first aid and emergency procedures; evacuation plans; working at height, in confined spaces, or with dangerous machinery; and fire prevention procedures. The induction program should be tailored to the specific workplace, and employees should be provided with a copy of the program. Employees should also be given regular refresher training on the safety information in the program. When an organization implements a safety induction process, all employees must attend. The purpose of a safety induction is to ensure that all employees are aware of dangerous situations in the workplace and how to avoid them. Every workplace should have a safety induction process in place, but what should it cover? In detail, your safety induction process should include: - A tour of the workplace and where hazards may be present. - How to safely use equipment and tools. - Fire evacuation procedures. - Emergency response procedures. - First Aid/CPR information. - Any other specific safety information relevant to your workplace. - How to identify and report hazards. 
- What personal protective equipment to wear. - First aid procedures. - Emergency evacuation procedure. - How to operate emergency equipment. - Safe work practices. What happens when you don't do a safety induction? Safety induction is a key part of any workplace health and safety plan. It is a mandatory introduction for new workers that covers the key safety points for that workplace. Without it, new workers may not be aware of the risks they face and may not take the necessary precautions to stay safe. This can lead to accidents and injuries, which can have serious consequences for both the worker and the company. Safety induction should include information on the following topics: General safety. First aid: This covers basic first aid procedures, such as how to treat a burn or a cut. It also explains when to seek medical help. Fire safety: This includes information on fire prevention measures, such as how to use fire extinguishers correctly and what to do in the event of a fire. What are the benefits of Site Induction? Site induction is one of the most important processes that need to be carried out before starting work at a construction site. The training provides workers with health and safety information which will help keep them safe while they are working. It also ensures that they are familiar with the site-specific hazards and know how to safely carry out their tasks. There are many benefits of site induction, some of which are listed below: - It helps to protect workers from accidents and injuries. - It teaches workers about the specific hazards associated with the site they are working on. - It ensures that workers are familiar with the safety procedures that need to be followed while working at the site. - It helps to maintain a safe and healthy working environment.
https://iso-docs.com/blogs/iso-9001-qms/qms-site-induction
2022-06-25T04:51:53
CC-MAIN-2022-27
1656103034170.1
[array(['https://cdn.shopify.com/s/files/1/0564/9625/9172/files/Common_Site_Induction_Checklist_1024x1024.png?v=1654172702', 'QMS Common Site Induction Checklist'], dtype=object) ]
iso-docs.com
Cross Site Request Forgery protection¶ The CSRF middleware and template tag provides easy-to-use protection against Cross Site Request Forgeries. This type of attack occurs when a malicious website contains a link, a form button or some JavaScript that is intended to perform some action on your website using the credentials of a logged-in user who visits the malicious site in their browser. The first defense against this class of attack is to ensure that GET requests (and other "safe" methods, as defined by RFC 7231#section-4.2.1) are side effect free; state-changing requests such as POST are then protected by the CSRF token that the middleware checks. If you're using AngularJS 1.1.3 and newer, it's sufficient to configure the $http provider with the cookie and header names: $httpProvider.defaults.xsrfCookieName = 'csrftoken'; $httpProvider.defaults.xsrfHeaderName = 'X-CSRFToken'; For security reasons, the value of the token placed in forms is masked with a salt which is both added to it and used to scramble it. On the view side, the decorators in django.views.decorators.csrf (csrf_protect, csrf_exempt and requires_csrf_token) let you enable, disable or require the CSRF token handling on individual views.
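To make the view-level controls above concrete, here is a minimal sketch of the three decorators in use. The view names and template paths are invented for the example; the imports and decorators themselves are standard Django APIs:

from django.http import HttpResponse
from django.shortcuts import render
from django.views.decorators.csrf import csrf_exempt, csrf_protect, requires_csrf_token


@csrf_protect
def my_form_view(request):
    # Enforces the CSRF check on this view even if CsrfViewMiddleware is not enabled.
    # The rendered template must include {% csrf_token %} inside its <form>.
    return render(request, "my_form.html")


@csrf_exempt
def webhook_view(request):
    # Skips the CSRF check entirely -- only do this for endpoints that are
    # protected by some other means (e.g. a shared-secret signature).
    return HttpResponse("ok")


@requires_csrf_token
def error_view(request):
    # Does not enforce the check, but guarantees that the csrf_token template
    # variable is populated when the template is rendered.
    return render(request, "error.html")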
https://docs.djangoproject.com/en/2.1/ref/csrf/
2022-06-25T05:26:10
CC-MAIN-2022-27
1656103034170.1
[]
docs.djangoproject.com
Deploying the Sock Shop Application In this example, we'll show how easy it is to deploy a real world application using Weave GitOps. The Sock Shop is a well known microservices application that is widely used in demonstration and testing of microservice environments such as Kubernetes. We'll actually see two different ways of deploying the Sock Shop: - as a plain set of Kubernetes manifests - as a helm chart Prerequisites In order to deploy the Sock Shop, you need to first deploy Weave GitOps to a Kubernetes cluster. If you'd like to test this out locally, you can set up a kind cluster by following the instructions at the link. Regardless of which cluster you'd like to use, you can install Weave GitOps by first making sure your default kubeconfig points to the chosen cluster and then running gitops install --app-config-url <configuration repository>. The configuration repository is a Git repository that will hold the resource definitions required to manage your applications via GitOps. Please note that these examples are being run with the GITOPS_TOKEN environment variable set to a valid GitHub Personal Access Token (PAT) possessing repo access. If that were not the case, you would see extra user authentication steps in the output. gitops install --app-config-url ssh://[email protected]/example/external.git ✚ generating manifests ✔ manifests build completed ► installing components in wego-system namespace CustomResourceDefinition/alerts.notification.toolkit.fluxcd.io created CustomResourceDefinition/buckets.source.toolkit.fluxcd.io created CustomResourceDefinition/gitrepositories.source.toolkit.fluxcd.io created CustomResourceDefinition/helmcharts.source.toolkit.fluxcd.io created CustomResourceDefinition/helmreleases.helm.toolkit.fluxcd.io created CustomResourceDefinition/helmrepositories.source.toolkit.fluxcd.io created CustomResourceDefinition/imagepolicies.image.toolkit.fluxcd.io created CustomResourceDefinition/imagerepositories.image.toolkit.fluxcd.io created CustomResourceDefinition/imageupdateautomations.image.toolkit.fluxcd.io created CustomResourceDefinition/kustomizations.kustomize.toolkit.fluxcd.io created CustomResourceDefinition/providers.notification.toolkit.fluxcd.io created CustomResourceDefinition/receivers.notification.toolkit.fluxcd.io created Namespace/wego-system created ServiceAccount/wego-system/helm-controller created ServiceAccount/wego-system/image-automation-controller created ServiceAccount/wego-system/image-reflector-controller created ServiceAccount/wego-system/kustomize-controller created ServiceAccount/wego-system/notification-controller created ServiceAccount/wego-system/source-controller created ClusterRole/crd-controller-wego-system created ClusterRoleBinding/cluster-reconciler-wego-system created ClusterRoleBinding/crd-controller-wego-system created Service/wego-system/notification-controller created Service/wego-system/source-controller created Service/wego-system/webhook-receiver created Deployment/wego-system/helm-controller created Deployment/wego-system/image-automation-controller created Deployment/wego-system/image-reflector-controller created Deployment/wego-system/kustomize-controller created Deployment/wego-system/notification-controller created Deployment/wego-system/source-controller created NetworkPolicy/wego-system/allow-egress created NetworkPolicy/wego-system/allow-scraping created NetworkPolicy/wego-system/allow-webhooks created ◎ verifying installation ✔ helm-controller: deployment ready ✔ image-automation-controller: deployment ready 
✔ image-reflector-controller: deployment ready ✔ kustomize-controller: deployment ready ✔ notification-controller: deployment ready ✔ source-controller: deployment ready ✔ install finished Deploy key generated and uploaded to git provider ► Writing manifests to disk ► Committing and pushing gitops updates for application ► Pushing app changes to repository ► Applying manifests to the cluster arete: /tmp/sock-shop> Once you see ► Applying manifests to the cluster, your cluster is ready to go with Weave GitOps. Deploying with Weave GitOps Once you have a cluster running Weave GitOps, it's simple to deploy an application like Sock Shop. To deploy the Sock Shop, we need to use gitops add app. gitops add app will store the GitOps automation support for your application in the .weave-gitops directory of the configuration repository you specified at install time. The definition of your application can be stored either in a separate repository or in the configuration repository itself (for a simple all-in-one configuration). If you want to store the application resources in the configuration repository, you only need to specify the --url flag which will be used for both application and configuration resources; however, this assumes that the application repository URL was passed to gitops install. If you want the application resources to be stored separately, you need to specify both --url and --app-config-url parameters. The --url parameter should be the URL of the repository containing the application definition and the --app-config-url parameter must be the URL that was used in gitops install. First, let's fork the Sock Shop repository. You can simply go to the repository in GitHub and select Fork. Now, we can add the Sock Shop application to the configuration repository so it can be managed through GitOps: > gitops add app --url ssh://[email protected]/example/microservices-demo.git --path ./deploy/kubernetes/manifests --app-config-url ssh://[email protected]/example/external.git - > Here we see all the pods running: > kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-558bd4d5db-jgcf2 1/1 Running 0 9d kube-system coredns-558bd4d5db-sht4v 1/1 Running 0 9d kube-system etcd-kind-control-plane 1/1 Running 0 9d kube-system kindnet-tdcd2 1/1 Running 0 9d kube-system kube-apiserver-kind-control-plane 1/1 Running 0 9d kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 9d kube-system kube-proxy-mqvbc 1/1 Running 0 9d kube-system kube-scheduler-kind-control-plane 1/1 Running 0 9d local-path-storage local-path-provisioner-547f784dff-mqgjc 1/1 Running 0 9d sock-shop carts-b4d4ffb5c-g82h6 1/1 Running 0 9d sock-shop carts-db-6c6c68b747-xtlgk 1/1 Running 0 9d sock-shop catalogue-759cc6b86-jk4gf 1/1 Running 0 9d sock-shop catalogue-db-96f6f6b4c-865w4 1/1 Running 0 9d sock-shop front-end-5c89db9f57-99vw6 1/1 Running 0 9d sock-shop orders-7664c64d75-qlz9d 1/1 Running 0 9d sock-shop orders-db-659949975f-fggdb 1/1 Running 0 9d sock-shop payment-7bcdbf45c9-fhl8m 1/1 Running 0 9d sock-shop queue-master-5f6d6d4796-cs5f6 1/1 Running 0 9d sock-shop rabbitmq-5bcbb547d7-kfzmn 2/2 Running 0 9d sock-shop session-db-7cf97f8d4f-bms4c 1/1 Running 0 9d sock-shop shipping-7f7999ffb7-llkrw 1/1 Running 0 9d sock-shop user-68df64db9c-7gcg2 1/1 Running 0 9d sock-shop user-db-6df7444fc-7s6wp 1/1 Running 0 9d wego-system helm-controller-6dcbff747f-sfp97 1/1 Running 0 9d wego-system image-automation-controller-75f784cfdc-wxwk9 1/1 Running 0 9d wego-system 
image-reflector-controller-67d6bdcb59-hg2cv 1/1 Running 0 9d wego-system kustomize-controller-5d47cf49fb-b6pmg 1/1 Running 0 9d wego-system notification-controller-7569f7c974-824p9 1/1 Running 0 9d wego-system source-controller-5b976b8dd6-gqrl7 1/1 Running 0 9d > We can expose the sock shop in our browser by: > kubectl port-forward service/front-end -n sock-shop 8080:80 Forwarding from 127.0.0.1:8080 -> 8079 Forwarding from [::1]:8080 -> 8079 and if we visit, we'll see: Pretty simple! Now, let's go back and look at that command in more detail: gitops add app \ # (1) --url ssh://[email protected]/example/microservices-demo.git \ # (2) --path ./deploy/kubernetes/manifests \ # (3) --app-config-url ssh://[email protected]/example/external.git # (4) --auto-merge # (5) - Add an application to a cluster under the control of Weave GitOps - The application is defined in the GitHub repository at the specified URL - Only the manifests at the specified path within the repository are part of the application - Store the management manifests in a separate configuration repository within GitHub; the app-config-url parameter says where to store management manifests. The default location (if no app-config-url is specified) is to place them in the .weave-gitops directory within the application repository itself. An actual URL value causes them to be stored in the repository referenced by the URL - Don't create a pull request for the management manifests; push them directly to the upstream repository Using Helm Charts The application can also be deployed via a helm chart. Applications defined in helm charts can be deployed from either helm repositories or git repositories. In the case of the Sock Shop application, a helm chart is included in the GitHub repository. We only need to make minor changes to the command we used above to switch to a helm chart, but using a helm chart for Sock Shop requires the target namespace to exist before deploying. By default, the chart would be deployed into the wego-system namespace (since we know it exists), but we'd like to put it in the sock-shop namespace. So, before we run gitops add app, we'll run: kubectl create namespace sock-shop namespace/sock-shop created > Then, we can run: > gitops add app --url ssh://[email protected]/example/microservices-demo.git --path ./deploy/kubernetes/helm-chart --app-config-url ssh://[email protected]/example/external.git --deployment-type helm --helm-release-target-namespace sock-shop --auto-merge Adding application: Name: microservices-demo URL: ssh://[email protected]/example/microservices-demo.git Path: ./deploy/kubernetes/helm-chart Branch: master Type: helm ◎ Checking cluster status ✔ GitOps installed ✚ Generating application spec manifest ✚ Generating GitOps automation manifests ► Adding application "microservices-demo" to cluster "kind-kind" and repository ► Committing and pushing gitops updates for application ► Pushing app changes to repository > Examining this command, we see two new arguments: gitops add app \ --name microservices-demo --url ssh://[email protected]/example/microservices-demo.git \ --path ./deploy/kubernetes/helm-chart \ --app-config-url ssh://[email protected]/example/external.git --deployment-type helm \ # (1) --helm-release-target-namespace sock-shop # (2) --auto-merge - Since we're pulling the chart from a git repository, we need to explicitly state that we're using a helm chart.
If we were using a helm repository, we would use --chart <chart name> instead of --path <path to application> and the deployment type would be unambiguous - The application will be deployed in the namespace specified by --helm-release-target-namespace You can check the status of the application by running the gitops get app microservices-demo command. Single Repository Usage As we mentioned above, it's possible to have a single repository hold both the application and the configuration. If you place the application manifests in the configuration repository passed to gitops install, you can leave off the separate --app-config-url parameter. In this case, we would either have had to pass the microservices-demo URL to gitops install or copy the application manifests into the external repository. Let's proceed as if we had initialized the cluster with: gitops install --app-config-url ssh://[email protected]/example/microservices-demo.git. > gitops add app --url ssh://[email protected]/example/microservices-demo.git --path ./deploy/kubernetes/manifests - > So, it's just like the example above except we didn't have to call out the location of the configuration repository. Regardless of whether or not the application manifests are stored in the configuration repository, though, the configuration itself is stored in a special directory (.weave-gitops) at the top level of the configuration repository: > tree .weave-gitops .weave-gitops ├── apps │ └── microservices-demo │ ├── app.yaml │ ├── kustomization.yaml │ ├── microservices-demo-gitops-deploy.yaml │ └── microservices-demo-gitops-source.yaml 6 directories, 11 files In this case, the apps directory contains one app (microservices-demo). The app.yaml file looks like: --- apiVersion: wego.weave.works/v1alpha1 kind: Application metadata: labels: wego.weave.works/app-identifier: wego-85414ad27cd476d497d715818deda0c6 name: microservices-demo namespace: wego-system spec: branch: master config_url: ssh://[email protected]/example/external.git deployment_type: kustomize path: ./deploy/kubernetes/manifests source_type: git url: ssh://[email protected]/example/microservices-demo.git It describes the application and includes a label derived from the URL, path, and branch to prevent multiple applications from referencing the same source within git. The kustomization.yaml file holds a list of application components that will be deployed: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization metadata: name: microservices-demo namespace: wego-system resources: - app.yaml - microservices-demo-gitops-deploy.yaml - microservices-demo-gitops-source.yaml The microservices-demo-gitops-source.yaml file tells flux the location (repository) containing the application.
It has a special ignore section that skips .weave-gitops to support keeping an application in the configuration repository: apiVersion: source.toolkit.fluxcd.io/v1beta1 kind: GitRepository metadata: name: microservices-demo namespace: wego-system spec: ignore: |- .weave-gitops/ .git/ .gitignore .gitmodules .gitattributes *.jpg *.jpeg *.gif *.png *.wmv *.flv *.tar.gz *.zip .github/ .circleci/ .travis.yml .gitlab-ci.yml appveyor.yml .drone.yml cloudbuild.yaml codeship-services.yml codeship-steps.yml **/.goreleaser.yml **/.sops.yaml **/.flux.yaml interval: 30s ref: branch: master url: The microservices-demo-gitops-deploy.yaml file defines the path within the repository and the sync interval for the application: --- apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 kind: Kustomization metadata: name: microservices-demo namespace: wego-system spec: interval: 1m0s path: ./deploy/kubernetes/manifests prune: true sourceRef: kind: GitRepository name: microservices-demo (This will look different in the case of a helm chart; it will hold a HelmRelease rather than a Kustomization) Finally, the clusters directory has a subdirectory for each cluster defining which applications will run there. The user/kustomization.yaml file in a specific cluster directory has a resources section containing a list of applications references: resources: - ../../../apps/microservices-demo Using Pull Requests We've reached the all-singing, all-dancing case now. This is the way most people will actually use Weave GitOps in a real environment. Whether you use the default application repository model or have a separate configuration repository, you can support reviewing and auditing changes to your GitOps resources via Pull Requests. (Also, as a practical matter, many people don't allow direct merges to their repositories without pull requests anyway) In order to use pull requests for your GitOps resources, you simply need to leave off the --auto-merge flag we've been passing so far (in other words, using pull requests is the default). For example, if we run the previous command without --auto-merge, we see different output: > gitops add app --url ssh://[email protected]/example/microservices-demo.git --path ./deploy/kubernetes/manifests Pull Request created: > Note the line: Pull Request created:. If we were to go to that GitHub repository and merge the pull request, the app would then be deployed. Hopefully, this example has given you a good understanding of how to deploy applications with Weave GitOps. Thanks for reading!
https://docs.gitops.weave.works/docs/0.5.0/examples/sock-shop/
2022-06-25T05:16:21
CC-MAIN-2022-27
1656103034170.1
[array(['/assets/images/sock-shop-d6f3139b052fef35a1d86a6712b0e6bd.png', 'sock shop'], dtype=object) ]
docs.gitops.weave.works
Stream Grabber Parameters# This topic describes the parameters related to the stream grabber. General Parameters# Access Mode# The AccessMode parameter indicates the mode of access the current application has to the device: Control: The application has control access to the device. Other applications are still able to monitor the device and can request to take over control or gain exclusive access to the device. Exclusive: The application has exclusive access to the device. No other application can control or monitor the device. Monitor: The application has monitoring, i.e., read-only, access to the device. NotInitialized: Access to the device hasn't been initialized. This parameter is read-only. Auto Packet Size# Use the AutoPacketSize parameter to optimize the size of the data packets transferred via Ethernet. When the parameter is set to true, the camera automatically negotiates the packet size to find the largest possible packet size. To retrieve the current packet size, get the value of the GevSCPSPacketSize parameter. Using large packets reduces the overhead for transferring images. The maximum packet size depends on the network hardware and its configuration. Maximum Buffer Size# Use the MaxBufferSize parameter to specify the maximum size (in bytes) of a buffer used for grabbing images. A grab application must set this parameter before grabbing starts. Maximum Number of Buffers# Use the MaxNumBuffer parameter to specify the maximum number of buffers that can be used simultaneously for grabbing images. Maximum Transfer Size# Use the MaxTransferSize parameter to specify the maximum USB data transfer size in bytes. The default value is appropriate for most applications. Increase the value to lower the CPU load. USB host adapter drivers may require decreasing the value if the application fails to receive the image stream. The maximum value depends on the operating system. Num Max Queued URBs# Use the NumMaxQueuedUrbs parameter to specify the maximum number of USB request blocks (URBs) to be enqueued simultaneously. Increasing this value may improve stability and reduce jitter, but requires more resources on the host computer. Decreasing this value can be helpful if you get error messages related to insufficient system memory, e.g., "Failed to probe and lock buffer=0xe2010130" or "Failed to submit transfer status=0xe2100001". Receive Thread Priority Override# Use the ReceiveThreadPriorityOverride parameter to enable assigning a custom priority to the thread which receives incoming stream packets. Only available if the socket driver is used. To assign the priority, use the ReceiveThreadPriority parameter. Receive Thread Priority# Use the ReceiveThreadPriority parameter to set the thread priority of the receive thread. Only available if the socket driver is used. To assign the priority, the ReceiveThreadPriorityOverride parameter must be set to true. Socket Buffer Size# Use the SocketBufferSize parameter to set the socket buffer size in kilobytes. Only available if the socket driver is used. Status# The Status parameter indicates the current status of the stream grabber: Closed: The stream grabber is closed. Locked: The stream grabber is locked. NotInitialized: The stream grabber is not initialized. Open: The stream grabber is open. This parameter is read-only. Transfer Loop Thread Priority# Use the TransferLoopThreadPriority parameter to specify the priority of the threads that handle USB requests from the stream interface. 
In pylon, there are two threads belonging to the USB transport layer, one for the image URBs (USB request blocks) and one for the event URBs. The transport layer enqueues the URBs to the xHCI driver and polls the bus for delivered URBs. You can control the priority of both threads via the TransferLoopThreadPriority parameter. On Windows, by default, the parameter is set to the following value: - 25 if the host application is run with administrator privileges. - 15 or lower if the host application is run without administrator privileges. On Linux and macOS, the default parameter value and the parameter value range may differ. The transfer loop priority should always be higher than the grab engine thread priority ( InternalGrabEngineThreadPriority parameter) and the grab loop thread priority ( GrabLoopThreadPriority parameter). For more information, see the C++ Programmer's Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite ("Advanced Topics" -> "Application Settings for High Performance". Type of GigE Vision Driver# Use the Type parameter to set the host application's GigE Vision driver type: WindowsFilterDriver: The host application uses the pylon GigE Vision Filter Driver. This is a basic GigE Vision network driver that is compatible with all network adapters. The advantage of the filter driver is its extensive compatibility. This driver is available for Windows only. WindowsPerformanceDriver: The host application uses the pylon GigE Vision Performance Driver. This is a hardware-specific GigE Vision network driver. The performance driver is only compatible with network adapters that use compatible chipsets. The advantage of the performance driver is that it significantly lowers the CPU load needed to service the network traffic between the computer and the camera(s). It also has a more robust packet resend mechanism. This driver is available for Windows only. SocketDriver: The host application uses the socket driver. This is not a real driver. Instead, it uses the socket API of the respective operating system, e.g., Windows, Linux, or macOS, to communicate with cameras instead. The advantage of the socket driver is that it does not need any installation and is compatible with all network adapters. When using the socket driver, Basler recommends adjusting the network adapter settings (e.g., optimize the use of jumbo frames, receive descriptors, and interrupt moderation rate) as described in the Network Configuration topic. NoDriverAvailable: No suitable driver is installed. The driver type can't be set. Type: Socket Driver Available# The TypeIsSocketDriverAvailable parameter indicates whether the socket driver is currently available (1) or not available (0). Type: Windows Filter Driver Available# The TypeIsWindowsFilterDriverAvailable parameter indicates whether the pylon GigE Vision Filter Driver is currently available (1) or not available (0). Type: Windows Intel Performance Driver Available# The TypeIsWindowsIntelPerformanceDriverAvailable parameter indicates whether the pylon GigE Vision Performance Driver is currently available (1) or not available (0). Packet Resend Mechanism Parameters# The packet resend mechanism (GigE Vision only) optimizes the network performance by detecting and resending missing data packets. In GigE Vision data transmission, each packet has a header consisting of an ascending 24-bit packet ID. This allows the receiving end to detect if a packet is missing. 
You have to weigh the disadvantages and advantages for your special application to decide whether to enable or disable the mechanism: - If enabled, the packet resend mechanism can cause delays because the driver waits for missing packets. - If disabled, packets can get lost which results in image data loss. The pylon GigE Vision Filter Driver and the Performance Driver use different packet resend mechanisms. Enable Resends# Use the EnableResend parameter to enable the packet resend mechanism. - If the parameter is set to trueand the Typeparameter is set to WindowsFilterDriver, the packet resend mechanism of the Filter Driver is enabled. - If the parameter is set to trueand the Typeparameter is set to WindowsPerformanceDriver, the packet resend mechanism of the Performance Driver is enabled. - If the parameter is set to false, the packet resend mechanism is disabled. Packet Resend Mechanism (Filter Driver)# The pylon GigE Vision Filter Driver has a simple packet resend mechanism. If the driver detects that packets are missing, it waits for a specified period of time. If the packets don't arrive within the time specified, the driver sends one resend request. Packet Timeout# Use the PacketTimeout parameter to specify how long (in milliseconds) the filter driver waits for the next expected packet before it initiates a resend request. Make sure that the parameter is set to a longer time interval than the inter-packet delay. Frame Retention# Use the FrameRetention parameter to specify the maximum time in milliseconds to receive all packets of a frame. The timer starts when the first packet has been received. If the transmission is not completed within the time specified, the corresponding frame is delivered with the status "Failed". Packet Resend Mechanism (Performance Driver)# The pylon GigE Vision Performance Driver has a more advanced packet resend mechanism. It allows more fine-tuning. Also, the driver can send consecutive resend requests until a maximum number of requests has been reached. Receive Window Size# Use the ReceiveWindowSize parameter to specify the size (in frames) of the "receive window" in which the stream grabber looks for missing packets. Example: Assume the receive window size is set to 15. This means that the stream grabber looks for missing packets within the last 15 acquired frames. The maximum value of the ReceiveWindowSize parameter is 16. If the parameter is set to 0, the packet resend mechanism is disabled. Resend Request Threshold# Use the ResendRequestThreshold parameter to set the threshold after which resend requests are initiated. The parameter value is set in percent of the receive window size. Example: Assume the receive window size is set to 15, and the resend request threshold is set to 33 %. This means that the threshold is set after 15 * 0.3333 = 5 frames. In the example above, frames 99 and 100 are already within the receive window. The stream grabber detects missing packets in these frames. However, the stream grabber does not yet send a resend request. Rather, the grabber waits until frame 99 has passed the threshold: Now, the grabber sends resend requests for missing packets in frames 99 and 100. Resend Request Batching# Use the ResendRequestBatching parameter to specify the amount of resend requests to be batched, i.e., sent together. The parameter value is set in percent of the amount of frames between the resend request threshold and the start of the receive window. 
Example: Assume the receive window size is set to 15, the resend request threshold is set to 33 %, and the resend request batching is set to 80 %. This means that the batching is set to 15 * 0.33 * 0.8 = 4 frames. In the example above, frame 99 has just passed the resend request threshold. The stream grabber looks for missing packets in the frames between the two thresholds and groups them. Now, the stream grabber sends a single resend request for all missing packets in frames 99, 100, 101, and 102. Maximum Number of Resend Requests# Use the MaximumNumberResendRequests parameter to specify the maximum number of resend requests per missing packet. Resend Timeout# Use the ResendTimeout parameter to specify how long (in milliseconds) the stream grabber waits between detecting a missing packet and sending a resend request. Resend Request Response Timeout# Use the ResendRequestResponseTimeout parameter to specify how long (in milliseconds) the stream grabber waits between sending a resend request and considering the request as lost. If a request is considered lost and the maximum number of resend requests hasn't been reached yet, the grabber sends another request. If a request is considered lost and the maximum number of resend requests has been reached, the packet is considered lost. Stream Destination Parameters# The following parameters (GigE Vision only) allow you to configure where the stream grabber should send the grabbed data to. The stream grabber can send the stream data to one specific device or to multiple devices in the network. Transmission Type# Use the TransmissionType parameter to define how stream data is transferred within the network. You can set the parameter to the following values: Unicast(default): The stream data is sent to a single device in the local network, usually the camera's GigE network adapter (see destination address). Other devices can't receive the stream data. LimitedBroadcast: The stream data is sent to all devices in the local network (255.255.255.255), even if they aren't interested in receiving stream data. In large local networks, this uses a large amount of network bandwidth. To use this transmission type, you must set up the controlling and monitoring applications. SubnetDirectedBroadcasting: The stream data is sent to all devices in the same subnet as the camera, even if they aren't interested in receiving stream data. If the subnet is small, this may save network bandwidth. Because devices outside the subnet can't receive the stream data, this transmission type can be useful, e.g., for security purposes. For subnet-directed broadcasting, the stream grabber uses a subnet broadcast address. The subnet broadcast address is obtained by performing a bitwise OR between the camera's IP address and the bit complement of the camera's subnet mask (see destination address). To use this transmission type, you must set up the controlling and monitoring applications. Info - To set the camera's IP address and subnet mask, use the pylon IP Configurator. - For more information about IP addresses, subnet masks, and subnet broadcast addresses, visit the Online IP Subnet Calculator website. Multicast: The stream data is sent to selected devices in the local network. This saves network bandwidth because data is only sent to those devices that are interested in receiving the data. Also, you can specify precisely which devices you want to send the data to. 
To use multicast, the stream destination address must be set to a multicast group address (224.0.0.0 to 239.255.255.255). Also, you must set up the controlling and monitoring applications. Then, the pylon API automatically takes care of creating and managing a multicast group that other devices can join. - UseCameraConfig: The stream transmission configuration is read from the camera. Use this option only if you want to set up a monitoring application. Controlling and Monitoring Applications# When using limited broadcast, subnet-directed broadcast, or multicast, you usually want to send the image data stream from one camera to multiple destinations. To achieve this, you must set up exactly one controlling application and one or more monitoring applications. - The controlling application starts and stops image acquisition. It can also change the camera configuration. - The monitoring applications receive the stream data. Monitoring applications open the camera in read-only mode. This means that they can't start and stop image acquisition or change the camera configuration. For testing purposes, you can use one instance of the pylon Viewer as the controlling application and another instance of the pylon Viewer as the monitoring application. To use different instances of the pylon Viewer as controlling and monitoring applications: - Start the pylon Viewer and open a GigE device. - Start another instance of the pylon Viewer. This will act as the monitoring application: - Windows: Start the pylon Viewer. In the Devices pane of the pylon Viewer, right-click the GigE device opened in step 1 and then click Open Device … > Monitor Mode. - Linux: At the command line, type: /opt/pylon5/bin/PylonViewerApp -m - macOS: At the command line, type: ./Applications/pylon Viewer.app/Contents/MacOS/pylon Viewer -m Info For more information about setting up controlling and monitoring applications, see the C++ Programmer's Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite ("Advanced Topics" -> "GigE Multicast/Broadcast"). Destination Address# The DestinationAddr parameter indicates the IP address to which the stream grabber sends all stream data. The value and the access mode of the parameter depend on the TransmissionType parameter value: Some addresses in this range are reserved. If you are unsure, use an address between 239.255.0.0 and 239.255.255.255. This range is assigned by RFC 2365 as a locally administered address space. Destination Port# The DestinationPort parameter indicates the port where the stream grabber will send all stream data to. If the parameter is set to 0, pylon automatically selects an unused port. For more information, see the C++ Programmer's Guide and Reference Documentation delivered with the Basler pylon Camera Software Suite ("Advanced Topics" -> "Selecting a Destination Port"). Statistics Parameters# The pylon API provides statistics parameters that allow you to check whether your camera is set up correctly, your hardware components are appropriate, and your system performs well. At camera startup, all statistics parameters are set to 0. While continuously grabbing images, the parameters are continuously updated to provide information about, e.g., lost images or buffers that were grabbed incompletely. Buffer Underrun Count# The Statistic_Buffer_Underrun_Count parameter counts the number of frames lost because there were no buffers in the queue. 
The parameter value increases whenever an image is received, but there are no queued, free buffers in the driver input queue and therefore the frame is lost. Failed Buffer Count# The Statistic_Failed_Buffer_Count parameter counts the number of buffers that returned with status "failed", i.e., buffers that were grabbed incompletely. The error code for incompletely grabbed buffers is 0xE1000014 on GigE cameras and 0xE2000212 on USB 3.0 cameras. Failed Packet Count# The Statistic_Failed_Packet_Count parameter counts packets that were successfully received by the stream grabber, but have been reported as "failed" by the camera. The most common reason for packets being reported as "failed" is that a packet resend request couldn't be satisfied by the camera. This occurs, e.g., if the requested data has already been overwritten by new image data inside the camera's memory. The Failed Packet Count does not count packets that are considered lost because all resend requests have failed. In this case, the Failed Buffer Count will be increased, but not the Failed Packet Count. Last Block ID# The Statistic_Last_Block_Id parameter indicates the last grabbed block ID. Last Failed Buffer Status# The Statistic_Last_Failed_Buffer_Status parameter indicates the status code of the last failed buffer. Last Failed Buffer Status Text# The Statistic_Last_Failed_Buffer_Status_Text parameter indicates the last error status of a read or write operation. Missed Frame Count# The Statistic_Missed_Frame_Count parameter counts the number of frames that were acquired but skipped because the camera's internal frame buffer was already full. Many Basler cameras are equipped with a frame buffer that is able to store several complete frames. A high Missed Frame Count indicates that the host controller doesn't support the bandwidth of the camera, i.e., the host controller does not retrieve the acquired images in time. This causes the camera to buffer images in its internal frame buffer. When the internal frame buffer is full, the camera will start skipping newly acquired sensor data. Resend Packet Count# The Statistic_Resend_Packet_Count parameter counts the number of packets requested by resend requests. Info - If you are using the Filter Driver and the driver hasn't received the "leader" of a frame, i.e., the packet indicating the beginning of a frame, it will disregard the complete frame. No resend requests will be sent and no statistics parameters will be increased. This means that if the "leader" packet is lost, the complete frame will be lost without notice. Basler recommends checking the Frame Counter chunk to detect lost frames. - If you are using the Performance Driver, the driver detects missing "leader" packets, sends resend requests, and adjusts the statistics parameters accordingly. Resend Request Count# The Statistic_Resend_Request_Count parameter counts the number of packet resend requests sent. Depending on the driver type and the stream grabber settings, the stream grabber may send multiple requests for one missing packet, or it may send one request for multiple packets. Therefore, the Resend Request Count and the Resend Packet Count will most likely be different. Resynchronization Count# The Statistic_Resynchronization_Count parameter counts the number of stream resynchronizations. If the host gets out of sync within the streaming process, it initiates a resynchronization, and the camera's internal buffer is flushed. 
A host may get out of sync if it requests stream packets with a specific sequence of IDs, but the device delivers packets with a different sequence. This may occur when the connection between the camera and the host is faulty. A host being out of sync results in massive image loss. A host resynchronization is considered the most serious error case in the USB 3.0 and USB3 Vision specification. Total Buffer Count# On GigE cameras, the Statistic_Total_Buffer_Count parameter counts the number of buffers that returned with "success" or "failed" status, i.e., all successfully or incompletely grabbed buffers. On other cameras, e.g. USB cameras, the number of buffers processed is counted. The error code for incompletely grabbed buffers is 0xE1000014 on GigE cameras and 0xE2000212 on USB 3.0 cameras. Total Packet Count# The Statistic_Total_Packet_Count parameter counts all packets received, including packets that have been reported as "failed", i.e., including the Failed Packet Count. Sample Code# // ** General Parameters ** // Access Mode AccessModeEnums accessMode = camera.GetStreamGrabberParams().AccessMode.GetValue(); // Auto Packet Size camera.GetStreamGrabberParams().AutoPacketSize.SetValue(true); // Maximum Buffer Size camera.GetStreamGrabberParams().MaxBufferSize.SetValue(131072); // Maximum Number of Buffers camera.GetStreamGrabberParams().MaxNumBuffer.SetValue(16); // Maximum Transfer Size camera.GetStreamGrabberParams().MaxTransferSize.SetValue(1048568); // Num Max Queued Urbs camera.GetStreamGrabberParams().NumMaxQueuedUrbs.SetValue(64); // Receive Thread Priority Override camera.GetStreamGrabberParams().ReceiveThreadPriorityOverride.SetValue(true); // Receive Thread Priority camera.GetStreamGrabberParams().ReceiveThreadPriority.SetValue(15); // Socket Buffer Size (socket driver only) camera.GetStreamGrabberParams().SocketBufferSize.SetValue(2048); // Status StatusEnums streamGrabberStatus = camera.GetStreamGrabberParams().Status.GetValue(); // Transfer Loop Thread Priority camera.GetStreamGrabberParams().TransferLoopThreadPriority.SetValue(15); // Type of GigE Vision Driver camera.GetStreamGrabberParams().Type.SetValue(Type_WindowsIntelPerformanceDriver); // Type: Windows Intel Performance Driver Available int64_t performanceDriverAvailable = camera.GetStreamGrabberParams().TypeIsWindowsIntelPerformanceDriverAvailable.GetValue(); // Type: Windows Filter Driver Available int64_t filterDriverAvailable = camera.GetStreamGrabberParams().TypeIsWindowsFilterDriverAvailable.GetValue(); // Type: Socket Driver Available int64_t socketDriverAvailable = camera.GetStreamGrabberParams().TypeIsSocketDriverAvailable.GetValue(); // ** Packet Resend Mechanism Parameters ** // Enable Resends camera.GetStreamGrabberParams().EnableResend.SetValue(true); // Packet Timeout (Filter Driver only) camera.GetStreamGrabberParams().PacketTimeout.SetValue(40); // Frame Retention (Filter Driver only) camera.GetStreamGrabberParams().FrameRetention.SetValue(200); // Receive Window Size (Performance Driver only) camera.GetStreamGrabberParams().ReceiveWindowSize.SetValue(16); // Resend Request Threshold (Performance Driver only) camera.GetStreamGrabberParams().ResendRequestThreshold.SetValue(5); // Resend Request Batching (Performance Driver only) camera.GetStreamGrabberParams().ResendRequestBatching.SetValue(10); // Maximum Number of Resend Requests (Performance Driver only) camera.GetStreamGrabberParams().MaximumNumberResendRequests.SetValue(25); // Resend Timeout (Performance Driver only) camera.GetStreamGrabberParams().ResendTimeout.SetValue(2); // Resend Request Response
Timeout (Performance Driver only) camera.GetStreamGrabberParams().ResendRequestResponseTimeout.SetValue(2); // ** Stream Destination Parameters ** // Transmission Type camera.GetStreamGrabberParams().TransmissionType.SetValue(TransmissionType_Unicast); // Destination Address GenICam::gcstring destinationAddr = camera.GetStreamGrabberParams().DestinationAddr.GetValue(); // Destination Port camera.GetStreamGrabberParams().DestinationPort.SetValue(0); // ** Statistics Parameters ** // Buffer Underrun Count int64_t bufferUnderrunCount = camera.GetStreamGrabberParams().Statistic_Buffer_Underrun_Count.GetValue(); // Failed Buffer Count int64_t failedBufferCount = camera.GetStreamGrabberParams().Statistic_Failed_Buffer_Count.GetValue(); // Failed Packet Count int64_t failedPacketCount = camera.GetStreamGrabberParams().Statistic_Failed_Packet_Count.GetValue(); // Last Block ID int64_t lastBlockId = camera.GetStreamGrabberParams().Statistic_Last_Block_Id.GetValue(); // Last Failed Buffer Status int64_t lastFailedBufferStatus = camera.GetStreamGrabberParams().Statistic_Last_Failed_Buffer_Status.GetValue(); // Last Failed Buffer Status Text GenICam::gcstring lastFailedBufferStatusText = camera.GetStreamGrabberParams().Statistic_Last_Failed_Buffer_Status_Text.GetValue(); // Missed Frame Count int64_t missedFrameCount = camera.GetStreamGrabberParams().Statistic_Missed_Frame_Count.GetValue(); // Resend Request Count int64_t resendRequestCount = camera.GetStreamGrabberParams().Statistic_Resend_Request_Count.GetValue(); // Resend Packet Count int64_t resendPacketCount = camera.GetStreamGrabberParams().Statistic_Resend_Packet_Count.GetValue(); // Resynchronization Count int64_t resynchronizationCount = camera.GetStreamGrabberParams().Statistic_Resynchronization_Count.GetValue(); // Total Buffer Count int64_t totalBufferCount = camera.GetStreamGrabberParams().Statistic_Total_Buffer_Count.GetValue(); // Total Packet Count int64_t totalPacketCount = camera.GetStreamGrabberParams().Statistic_Total_Packet_Count.GetValue(); // ** General Parameters ** // Access Mode string accessMode = camera.Parameters[PLStream.AccessMode].GetValue(); // Auto Packet Size camera.Parameters[PLStream.AutoPacketSize].SetValue(true); // Maximum Buffer Size camera.Parameters[PLStream.MaxBufferSize].SetValue(131072); // Maximum Number of Buffers camera.Parameters[PLStream.MaxNumBuffer].SetValue(16); // Maximum Transfer Size camera.Parameters[PLStream.MaxTransferSize].SetValue(1048568); // Num Max Queued Urbs camera.Parameters[PLStream.NumMaxQueuedUrbs].SetValue(64); // Receive Thread Priority Override camera.Parameters[PLStream.ReceiveThreadPriorityOverride].SetValue(true); // Receive Thread Priority camera.Parameters[PLStream.ReceiveThreadPriority].SetValue(15); // Socket Buffer Size (socket driver only) camera.Parameters[PLStream.SocketBufferSize].SetValue(2048); // Status string streamGrabberStatus = camera.Parameters[PLStream.Status].GetValue(); // Transfer Loop Thread Priority camera.Parameters[PLStream.TransferLoopThreadPriority].SetValue(15); // Type of GigE Vision Driver camera.Parameters[PLStream.Type].SetValue(PLStream.Type.WindowsIntelPerformanceDriver); // Type: Windows Intel Performance Driver Available Int64 performanceDriverAvailable = camera.Parameters[PLStream.TypeIsWindowsIntelPerformanceDriverAvailable].GetValue(); // Type: Windows Filter Driver Available Int64 filterDriverAvailable = camera.Parameters[PLStream.TypeIsWindowsFilterDriverAvailable].GetValue(); // Type: Socket Driver Available Int64
socketDriverAvailable = camera.Parameters[PLStream.TypeIsSocketDriverAvailable].GetValue(); // ** Packet Resend Mechanism Parameters ** // Enable Resends camera.Parameters[PLStream.EnableResend].SetValue(true); // Packet Timeout (Filter Driver only) camera.Parameters[PLStream.PacketTimeout].SetValue(40); // Frame Retention (Filter Driver only) camera.Parameters[PLStream.FrameRetention].SetValue(200); // Receive Window Size (Performance Driver only) camera.Parameters[PLStream.ReceiveWindowSize].SetValue(16); // Resend Request Threshold (Performance Driver only) camera.Parameters[PLStream.ResendRequestThreshold].SetValue(5); // Resend Request Batching (Performance Driver only) camera.Parameters[PLStream.ResendRequestBatching].SetValue(10); // Maximum Number of Resend Requests (Performance Driver only) camera.Parameters[PLStream.MaximumNumberResendRequests].SetValue(25); // Resend Timeout (Performance Driver only) camera.Parameters[PLStream.ResendTimeout].SetValue(2); // Resend Request Response Timeout (Performance Driver only) camera.Parameters[PLStream.ResendRequestResponseTimeout].SetValue(2); // ** Stream Destination Parameters ** // Transmission Type camera.Parameters[PLStream.TransmissionType].SetValue(PLStream.TransmissionType.Unicast); // Destination Address string destinationAddr = camera.Parameters[PLStream.DestinationAddr].GetValue(); // Destination Port camera.Parameters[PLStream.DestinationPort].SetValue(0); // ** Statistics Parameters ** // Buffer Underrun Count Int64 bufferUnderrunCount = camera.Parameters[PLStream.Statistic_Buffer_Underrun_Count].GetValue(); // Failed Buffer Count Int64 failedBufferCount = camera.Parameters[PLStream.Statistic_Failed_Buffer_Count].GetValue(); // Failed Packet Count Int64 failedPacketCount = camera.Parameters[PLStream.Statistic_Failed_Packet_Count].GetValue(); // Last Block ID Int64 lastBlockId = camera.Parameters[PLStream.Statistic_Last_Block_Id].GetValue(); // Last Failed Buffer Status Int64 lastFailedBufferStatus = camera.Parameters[PLStream.Statistic_Last_Failed_Buffer_Status].GetValue(); // Last Failed Buffer Status Text string lastFailedBufferStatusText = camera.Parameters[PLStream.Statistic_Last_Failed_Buffer_Status_Text].GetValue(); // Missed Frame Count Int64 missedFrameCount = camera.Parameters[PLStream.Statistic_Missed_Frame_Count].GetValue(); // Resend Packet Count Int64 resendPacketCount = camera.Parameters[PLStream.Statistic_Resend_Packet_Count].GetValue(); // Resend Request Count Int64 resendRequestCount = camera.Parameters[PLStream.Statistic_Resend_Request_Count].GetValue(); // Resynchronization Count Int64 resynchronizationCount = camera.Parameters[PLStream.Statistic_Resynchronization_Count].GetValue(); // Total Buffer Count Int64 totalBufferCount = camera.Parameters[PLStream.Statistic_Total_Buffer_Count].GetValue(); // Total Packet Count Int64 totalPacketCount = camera.Parameters[PLStream.Statistic_Total_Packet_Count].GetValue(); This sample code is only available in C++ and C# languages. You can also use the pylon Viewer to easily set the parameters.
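As a rough, back-of-the-envelope illustration of the resend-window arithmetic and the subnet-directed broadcast address described above, the following Python sketch reproduces the example numbers from this topic. It is not part of the pylon API, and the camera IP address and subnet mask are made-up example values:

import ipaddress

# Resend request threshold and batching (Performance Driver example values).
receive_window_size = 15   # frames the stream grabber looks back over
threshold_percent = 33     # ResendRequestThreshold, in percent of the window
batching_percent = 80      # ResendRequestBatching, in percent

threshold_frames = round(receive_window_size * threshold_percent / 100)
batching_frames = round(receive_window_size * (threshold_percent / 100) * (batching_percent / 100))
print(threshold_frames)  # 5 -> resend requests start 5 frames into the window
print(batching_frames)   # 4 -> missing packets of 4 frames are requested together

# Subnet broadcast address: bitwise OR of the camera's IP address with the
# bit complement of its subnet mask (example address and mask).
camera_ip = ipaddress.IPv4Address("192.168.1.20")
subnet_mask = ipaddress.IPv4Address("255.255.255.0")
broadcast = ipaddress.IPv4Address(int(camera_ip) | (~int(subnet_mask) & 0xFFFFFFFF))
print(broadcast)  # 192.168.1.255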
https://docs.baslerweb.com/stream-grabber-parameters
2022-06-25T05:40:02
CC-MAIN-2022-27
1656103034170.1
[]
docs.baslerweb.com
Shrinkwrap Constraint¶ - Influence Controls the percentage of affect the constraint has on the object. See common constraint properties for more information. Mode¶ This selector allows you to select which method to use to compute the point on the target's surface to which to move the owner's origin. You have these options: Nearest Surface Point¶ The chosen target's surface's point will be the nearest one to the original owner's location. This is the default and most commonly useful option. Projection¶ Nearest Vertex¶ This method is very similar to the Nearest Surface Point one, except that the owner's possible shrink locations are limited to the target's vertices. This method doesn't support the Snap Mode setting described below. Target Normal Projection¶ - Outside¶
https://docs.blender.org/manual/en/2.93/animation/constraints/relationship/shrinkwrap.html
2022-06-25T04:38:17
CC-MAIN-2022-27
1656103034170.1
[]
docs.blender.org
system running Windows 10 with the Hyper-V role enabled to convert VHD images to an Azure-acceptable format. - Only VAs running version 2.4 or above can be deployed in Microsoft Azure. Procedural Overview - Prepare the virtual appliance image on Azure. This is a one time task. - Launch the virtual appliance on Azure. Perform this task for each VA after you have performed the one-time task of preparing the VA image. 1. Prepare the Virtual Appliance Image on Azure This is a one-time task to create an image in Azure that can be used to launch multiple VAs. Note: Generation 2 VMs are not supported for VA deployments. Before you begin, perform the following: - Add internal.cloudapp.net to their internal domains list for VAs. - Unless you are using your own DNS server in Azure, you should configure 168.63.129.16 as the local DNS server in the VA settings. This is the virtual IP used by Azure for recursive and local DNS queries. a. Navigate to Deployments > Configuration > Sites and Active Directory and click Download. b. Click Download for VA for Hyper-V. Umbrella generates and downloads to your computer a .tar file unique to your deployment. This tar file includes: - a .zip file containing the virtual hard disks that need to be deployed on Azure -. On successful signature validation, you should see a message saying “Verified OK." d. Extract the downloaded zip file. You'll find two folders—Virtual Hard Disks and Virtual Machines—and a config file. e. Open Windows PowerShell as Administrator, navigate to the Virtual Hard Disks folder and convert the vhd files in this folder to a fixed type format acceptable by Azure: - Convert-VHD -Path .\forwarder-va.vhd -DestinationPath forwarder-fixed.vhd -VHDType fixed - Resize-VHD .\forwarder-fixed.vhd -SizeBytes 8GB Conversion free disk space requirements Conversion requires at least 9GB of free disk space to create the modified disks. The new forwarder-fixed.vhd will consume approximately 8GB of space. Machines with less than 9GB of space will fail to convert with a red error message. f. Upload the forwarder-fixed.vhd and dynamic.vhd to a blob in your Azure storage account using the Azure portal or the AZ CLI. Note: This is a one-time upload. g. Create an image in Azure from these virtual hard disks using the Azure portal. Use the forwarder-fixed.vhd as the OS disk (OS type: Linux) and the dynamic.vhd as the data disk. Note: Ensure that Host caching for both the OS disk and data disk is set to ‘Read/write’. Generation 2 VMs are not supported for VA deployments, so ensure that the VM Generation is set to ‘Gen 1’. h. Once the VA image is created in Azure, use this image to launch multiple VAs. For more information, see Step 2. Launch the Virtual Appliance on Azure. 2. Launch the Virtual Appliance on Azure Note: Before performing this task, you must complete the one-time task of preparing the VA image on Azure. a. Use the Azure portal to launch Umbrella VAs in Azure using the VA image you created in Step 1. Prepare the Virtual Appliance Image on Azure: - Choose a VM size with at least one VCPU and 1024 MB RAM. Note: VM sizes above eight VCPUs are not supported. - For the Administrator account, set the Authentication type to Password. Note: It is a security risk to specify a public IP address for the VA, and is not recommended except in case of SNAT port exhaustion issues. If you need to configure a public IP for the VA on Azure for these issues, ensure that inbound access from the Internet is not permitted. 
For more information, see Troubleshoot Intermittent DNS Resolution Failures on a VA Deployed in Azure. - Provide the username as vmadmin and enter a password that meets complexity requirements. Note: The admin-password you create here is not actually set on the VA. b. You may also use the Azure Cloud Shell to launch VAs in Azure using the VA images you created in Step 1. Prepare the Virtual Appliance Image on Azure. Note that VM sizes above eight VCPUs are not supported. You may specify the static IP as part of the command. For example: az vm create --resource-group MyResourceGroup --size Standard_B2s --name UmbrellaVA --image VAImage --authentication-type password --admin-username vmadmin --admin-password <password> --vnet-name MyVnet --subnet MySubnet --private-ip-address 10.0.0.1 c. In Umbrella, navigate to Deployments > Configuration > Sites and Active Directory. You should see the VA listed here. d. Use the same image to launch multiple VAs as required. Provide a different name and different static IP for each VA. Note: If you do not specify the private IP address, the VA will automatically pull a DHCP IP and register to Umbrella with this IP address. This IP address will be listed as the VA name on Umbrella's Sites and Active Directory page. Diagnostics Settings It is not recommended to turn on Diagnostic Settings (Guest-level monitoring) or install any extension for a VA on Azure. Enabling diagnostics results in huge log files being generated on the VA, which causes the VA to run out of disk space. If your VA on Azure is reporting disk space issues, navigate to the Settings > Extensions page against your VA on the Azure portal and remove any extensions. Also, navigate to the Monitoring > Diagnostic Settings page against your VA on the Azure portal and verify that Guest-level monitoring is turned off. Deploy VAs in VMware < Deploy VAs in Microsoft Azure > Deploy VAs in Amazon Web Services Updated about a month ago
https://docs.umbrella.com/deployment-umbrella/docs/deploy-vas-on-microsoft-azure
2022-06-25T03:52:27
CC-MAIN-2022-27
1656103034170.1
[]
docs.umbrella.com
Push messaging for Android 1. Using. In your script where you have registered callbacks for OnTokenReceivedand OnMessageReceived, add the following code snippets. using WebEngageBridge; ... Firebase.FirebaseApp.CheckAndFixDependenciesAsync().ContinueWith(task => { var dependencyStatus = task.Result; if (dependencyStatus == Firebase.DependencyStatus.Available) { Firebase.Messaging.FirebaseMessaging.TokenReceived += OnTokenReceived; Firebase.Messaging.FirebaseMessaging.MessageReceived += OnMessageReceived; } else { ... } }); public void OnTokenReceived(object sender, Firebase.Messaging.TokenReceivedEventArgs token) { ... WebEngage.SetPushToken(token.Token); } public void OnMessageReceived(object sender, Firebase.Messaging.MessageReceivedEventArgs e) { Dictionary<string, string> data = new Dictionary<string, string>(e.Message.Data); if (data.ContainsKey("source") && "webengage".Equals(data["source"])) { WebEngage.SendPushData(data); } ... } Note: Push notifications will work as expected when app is in foreground. The drawback of this approach is that push notifications will not be shown when app is in background. However those push notifications are cached and will be shown on next app launch. If you wish to prevent this drawback, then follow the Overriding FCM Unity Plugin approach given below. 2. Overriding. Download and add the webengage-android-fcm.aar file in Assets/Plugins/Android/directory of your Unity project. Add the following service tag in your Assets/Plugins/Android/AndroidManifest.xmlfile as shown below. <?xml version="1.0" encoding="utf-8"?> <manifest ...> <application ...> <service android: <intent-filter> <action android: </intent-filter> </service> ... </application> </manifest> - Update the FCM registration token on app start as shown below. using WebEngageBridge; ... WebEngage.UpdateFcmToken(); Updated over 2 years ago
https://docs.webengage.com/docs/unity-push-messaging-for-android
2022-06-25T05:38:07
CC-MAIN-2022-27
1656103034170.1
[]
docs.webengage.com
Release Cycle This section documents the versioning and branching model of VarFish. Generally, we follow the idea of release cycles as also employed by Ceph. There is a new stable release every year, targeting the month of April. Each stable release receives a name (e.g., “Anthenea”) and a major release number (e.g., 1, as “A” is the first letter of the alphabet). Releases are named after starfish species. Version numbers have three components, x.y.z. x identifies the release cycle (e.g., 1 for Anthenea). y identifies the release type: x.0.z - development versions (the bleeding edge) x.1.z - release candidates (for test users) x.2.z - stable/bugfix releases (for the general public) Stable Releases (x.2.z) There will be a new stable release per year (“x”) with a small number of bug fixes and “trivial feature” releases (“z”). Stable releases will be supported for 14-16 months, so users have some time to upgrade. Release Candidates (x.1.z) We will start feature freezes roughly a month before the next stable release. The release candidates are suitable for testing by test users. Development Versions (x.0.z) These releases are suitable for sites that are involved in the development of VarFish themselves or that want to track the “bleeding edge” very closely. The main developing sites (currently Berlin, Bonn) deploy self-built Docker containers from the current development branch. Release Names Releases History Starting with the 1.0.0 release.
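As a small illustration of the x.y.z scheme above, the following Python sketch maps a version string to its release type; the function name and error handling are illustrative and not part of VarFish itself.

# Classify a VarFish version according to the x.y.z release scheme.
def classify_release(version: str) -> str:
    x, y, z = (int(part) for part in version.split("."))
    kind = {0: "development", 1: "release candidate", 2: "stable/bugfix"}.get(y)
    if kind is None:
        raise ValueError(f"unexpected release type component: {y}")
    return f"cycle {x}, {kind} release, patch {z}"

print(classify_release("1.2.3"))  # cycle 1, stable/bugfix release, patch 3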
https://varfish-server.readthedocs.io/en/latest/release_cycle.html
2022-06-25T04:29:28
CC-MAIN-2022-27
1656103034170.1
[]
varfish-server.readthedocs.io
Reset-AnalyticsServiceGroupMembership¶ Reloads the access permissions and configuration service locations for the Analytics Service. Syntax¶ Reset-AnalyticsServiceGroupMembership [-ConfigServiceInstance] <ServiceInstance[]> [-LoggingId <Guid>] [-BearerToken <String>] [-TraceParent <String>] [-TraceState <String>] [-VirtualSiteId <String>] [-AdminAddress <String>] [<CommonParameters>] Detailed Description¶ Enables you to reload Analytics Service access permissions and configuration service locations. The Reset-AnalyticsServiceGroupMembership command must be run on at least one instance of the service type (Analytics) after installation and registration with the configuration service. Without this operation, the Analytics services will be unable to communicate with other services in the XenDesktop deployment. When the command is run, the services are updated when additional services are added to the deployment, provided that the configuration service is not stopped. Parameters¶ Input Type¶ Citrix.Analytics.Sdk.ServiceInstance[]¶ Service instances containing a ServiceInstance object that refers to the central configuration service inter-service interface can be piped to the Reset-AnalyticsServiceGroupMembership command. Return Values¶ Notes¶ Examples¶ Get-ConfigRegisteredServiceInstance -ServiceType Config | Reset-AnalyticsServiceGroupMembership Description¶ Resets the service group membership for a service in a deployment where the configuration service is configured and running on a machine named 'OtherServer.example.com'.
https://developer-docs.citrix.com/projects/citrix-virtual-apps-desktops-sdk/en/latest/Analytics/Reset-AnalyticsServiceGroupMembership/
2022-06-25T04:03:52
CC-MAIN-2022-27
1656103034170.1
[]
developer-docs.citrix.com
- - Deploying your BlackBerry Dynamics app - Implementing a method to back up app data Implementing a method to back up app data The default method that the AndroidOS provides for automatically backing up app data is partially compatible with BlackBerry Dynamics, but has limitations for what can be saved. Also, with the default backup you will require an unlock key after any data restore. You can modify the automatic backup method to be compatible with BlackBerry Dynamics, or you can turn off the automatic backup and implement a different option for backing up app data. For more information about your options and implementation instructions, see the Android Auto Backup appendix in the BlackBerry Dynamics SDK for Android API Reference.
https://docs.blackberry.com/en/development-tools/blackberry-dynamics-sdk-android/10_0/blackberry-dynamics-sdk-android-devguide/lqi1489679309982/bre1492017635310
2022-06-25T04:55:56
CC-MAIN-2022-27
1656103034170.1
[]
docs.blackberry.com
What is dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics? Azure Synapse Analytics is an analytics service that brings together enterprise data warehousing and Big Data analytics. Dedicated SQL pool (formerly SQL DW) refers to the enterprise data warehousing features that are available in Azure Synapse Analytics. Dedicated SQL pool (formerly SQL DW) represents a collection of analytic resources that are provisioned when using Synapse SQL. The size of a dedicated SQL pool (formerly SQL DW) is determined by Data Warehousing Units (DWU). Once your dedicated SQL pool is created, you can import big data with simple PolyBase T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics. As you integrate and analyze the data, dedicated SQL pool (formerly SQL DW) will become the single version of truth your business can count on for faster and more robust insights. Note Not all features of the dedicated SQL pool in Azure Synapse workspaces apply to dedicated SQL pool (formerly SQL DW), and vice versa. To enable workspace features for an existing dedicated SQL pool (formerly SQL DW), refer to How to enable a workspace for your dedicated SQL pool (formerly SQL DW). Explore the Azure Synapse Analytics documentation and Get Started with Azure Synapse. A key component of a big data solution, dedicated SQL pool uses PolyBase to query big data stores. PolyBase uses standard T-SQL queries to bring the data into dedicated SQL pool (formerly SQL DW) tables. Dedicated SQL pool (formerly SQL DW) stores data in relational tables with columnar storage. This format significantly reduces data storage costs and improves query performance. Once data is stored, you can run analytics at massive scale. Next steps: - Explore Azure Synapse architecture - Quickly create a dedicated SQL pool - Load sample data - Explore Videos Or look at some of these other Azure Synapse resources. - Search Blogs - Submit a Feature request - Create a support ticket - Search Microsoft Q&A question page - Search Stack Overflow forum
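To make the point about relational tables with columnar storage concrete, here is a minimal T-SQL sketch of a hash-distributed table with a clustered columnstore index in a dedicated SQL pool; the table and column names are placeholders, not part of the product documentation.

-- Hash-distributed fact table stored as a clustered columnstore index.
CREATE TABLE dbo.FactSales
(
    SaleId     BIGINT        NOT NULL,
    CustomerId INT           NOT NULL,
    Amount     DECIMAL(18,2) NOT NULL,
    SaleDate   DATE          NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerId),
    CLUSTERED COLUMNSTORE INDEX
);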
https://docs.microsoft.com/en-AU/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is
2022-06-25T05:45:30
CC-MAIN-2022-27
1656103034170.1
[array(['media/sql-data-warehouse-overview-what-is/dedicated-sql-pool.png', 'Dedicated SQL pool (formerly SQL DW) in relation to Azure Synapse'], dtype=object) array(['media/sql-data-warehouse-overview-what-is/data-warehouse-solution.png', 'Data warehouse solution'], dtype=object) ]
docs.microsoft.com
Dears, I have 2 subscriptions under my account in the same directory and I want to consolidate all the resources within one subscription. What is the best way to do it? Using Azure Resource Mover, or just going to the resource group and choosing to move it to a new subscription? Thank you in advance.
https://docs.microsoft.com/en-us/answers/questions/601741/move-azure-resources-between-subscriptions.html
2022-06-25T05:39:46
CC-MAIN-2022-27
1656103034170.1
[]
docs.microsoft.com
We are developing a desktop application that has to use a custom URI protocol. This software will be mostly used from within Office apps, specifically Outlook, on multi user office PCs (no admin rights). Our installer (WiX toolset) adds the custom protocol to the registry like this: [HKCR\<procotolURI>] {default DWORD 'URL: <protocolName>', 'URL Protocol' DWORD ''} [HKCR\<procotolURI>\shell\open\command] {default DWORD '"<protocolHandlerEXE>" "%1"'} Parsing the arguments in our app <protocolHandlerEXE> works perfectly. Unfortunately, Outlook displays a security warning when the custom URI link is clicked, immensly disrupting the workflow of our service. We were able to suppress the warning by setting this registry key: [HKCU\SOFTWARE\Policies\Microsoft\Office\16.0\Common\Security\Trusted Protocols\All Applications\<protocolUri>:] However, there are a few issues arising from this approach: <1> the warning is only supressed for the given office version <2> the warning is only supressed for the installing user: other users and users that don't exist on the local machine yet will still see the warning We are currently creating above mentioned key for every version of Office (down to 14.0) to solve problem <1>. Many different solutions come to mind to solve issue <2>, although none seem to really solve the problem, some just straight up don't work: <2a> check the security policy key on app startup and create an entry if necessary --> not working if user has no admin rights <2b> using a shady Office registry copy mechanism according to this reddit post creating registry keys [HKLM\SOFTWARE\WOW6432Node\Microsoft\Office\16.0\User Settings\<someName>] {'Count' DWORD '00000001'} [HKLM\SOFTWARE\WOW6432Node\Microsoft\Office\16.0\User Settings\<someName>\Create\<subDir>] {} should trigger Office to create given <subDir> registry key under [HKCU] when any Office app is started. --> doesn't seem to work for security policy <2c> using ActiveSetup --> untested; this method seems extremely outdated and could stop working anytime <2d> edit ntuser.DAT of default user --> untested; feels hacky and overengineered <2e> edit group policy --> untested; just a thought, is this even an option? There are not that many resources available on this topic, therefore I decided to write all my findings and thoughts down in this post. Any other direction or general idea is highly appreciated!
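For reference, a protocol registration of the kind described above is often captured as a .reg file like the sketch below. The protocol name, handler path and Office version are placeholders, and in the usual layout the default value and 'URL Protocol' are string values rather than DWORDs; treat this as an illustration only, since it does not by itself solve the per-user and per-Office-version issues raised in the question.

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\myproto]
@="URL: My Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\myproto\shell\open\command]
@="\"C:\\Program Files\\MyApp\\Handler.exe\" \"%1\""

; Per-user, per-version suppression of the Office security prompt
[HKEY_CURRENT_USER\SOFTWARE\Policies\Microsoft\Office\16.0\Common\Security\Trusted Protocols\All Applications\myproto:]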
https://docs.microsoft.com/en-us/answers/questions/835047/how-to-register-custom-protocols-in-the-office-sec.html
2022-06-25T03:59:12
CC-MAIN-2022-27
1656103034170.1
[]
docs.microsoft.com
Configuring Security in System Center 2012 -- Virtual Machine Manager Updated: May 13, 2016 Applies To: System Center 2012 SP1 - Virtual Machine Manager, System Center 2012 R2 Virtual Machine Manager, System Center 2012 - Virtual Machine Manager. For an overview of VMM, see Overview of System Center 2012 - Virtual Machine Manager.
https://docs.microsoft.com/en-us/previous-versions/system-center/system-center-2012-R2/gg675078(v=sc.12)?redirectedfrom=MSDN
2022-06-25T04:26:32
CC-MAIN-2022-27
1656103034170.1
[]
docs.microsoft.com
Term Dates To start this job, select the questions: “How has the frequency of a term changed over time? When was a word used within a particular dataset?” This job results in a graph of the occurrences of the given term within your dataset, plotted by date, as well as those occurrences downloadable as a CSV file. This allows us to answer a wide variety of questions: How has the frequency of use of a term changed over time? (Input: a dataset of interest, plotting for the use of a given term) When was a term first introduced into the literature? (Input: a dataset of interest, looking for the place when the term is first introduced) How has a term moved through the literature? (Input: comparing these graphs for the same term across different journals and time periods) Options The only option for this analysis is to select the term that you would like to search for.
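If you want to work with the downloaded CSV outside Sciveyor, a short Python sketch like the one below can re-plot the occurrences by date; the column names "year" and "count" are assumptions, so check the header of your exported file and adjust them.

# Re-plot the Term Dates CSV export (column names are assumed).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("term_dates.csv").sort_values("year")
plt.plot(df["year"], df["count"])
plt.xlabel("Year")
plt.ylabel("Occurrences of term")
plt.title("Term frequency over time")
plt.show()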
https://docs.sciveyor.com/manual/analyses/term_dates/
2022-06-25T04:29:38
CC-MAIN-2022-27
1656103034170.1
[]
docs.sciveyor.com
You can configure the BGP per segment for a Profile or an Edge. Configuring BGP is available for Underlay Neighbors and Non SD-WAN Neighbors. VMware supports 4-Byte ASN BGP as follows: - As the ASN of SD-WAN Edges. - Peer to a neighbor with 4-Byte ASN. - Accept 4-Byte ASNs in route advertisements. See the following sections for configuring BGP for Underlay Neighbors and Non SD-WAN Neighbors.
https://docs.vmware.com/en/VMware-SD-WAN/5.0/VMware-SD-WAN-Administration-Guide/GUID-B9FC0FD1-C049-4538-A8B8-806CEBA0A5C2.html
2022-06-25T05:36:36
CC-MAIN-2022-27
1656103034170.1
[]
docs.vmware.com
. These tags allow you to pinpoint screens in your app where you can later render in-app messages. Please do keep in mind that screens are only usable for targeting In-app message targeting. using WebEngageBridge; ... // Set screen name WebEngage.ScreenNavigated("Purchase Screen"); Tracking Screen Data Every screen can be associated with some contextual data, which can be part of the targeting rule for in-app messages. Your app can update the data associated with a screen using the below code snippet. using WebEngageBridge; ... // Update current screen data Dictionary<string, object> currentData = new Dictionary<string, object>(); currentData.Add("productId", "~hs7674"); currentData.Add("price", 1200); WebEngage.SetScreenData(currentData); // Set screen name with data Dictionary<string, object> data = new Dictionary<string, object>(); data.Add("productId", "~hs7674"); data.Add("price", 1200); WebEngage.ScreenNavigated("Purchase Screen", data); Please feel free to drop in a few lines at [email protected] in case you have any further queries. We're always just an email away! Updated over 1 year ago
https://docs.webengage.com/docs/unity-ios-in-app-messaging
2022-06-25T04:25:52
CC-MAIN-2022-27
1656103034170.1
[]
docs.webengage.com
Base class for 3D curves. More... Base class for 3D curves. 3D curves are used to represent curves in 3D space. Each non-degenerated edge must refer to a 3D curve. Refer to Curve Types for the list of supported curve types. Type() returns a curve type as enumeration value which can be used to downcast to a respective subclass type, for instance: Curve is defined using parametric definition as \(\mathbf{C}(t)\) where \(\mathbf{C}\) is a 3D radius-vector \((x,y,z)^\top\) and \(t\) is a parameter from a definition range \([a, b]\). UMin() and UMax(), and Domain() return parametric definition range. Parametric range can be bounded (e.g. \([0, 2\pi]\) for a circle) or unbounded (e.g. \((-\infty, +\infty)\) for a line). At any parameter \(t\) within a definition range, the curve can be evaluated as follows: The following example demonstrates computation of a point on a line at parameter t=2: If the curve is periodic (IsPeriodic() returns true) then the curve can be evaluated at any parameter t, otherwise behavior is undefined (e.g. an exception can be thrown or a weird value can be returned). Continuity() returns continuity ( \(C^0\), \(C^1\), \(C^2\), \(C^N\)) of the curve, where \(C^0\) that only the curve itself is continuous, \(C^1\) - that the curve is continuous together with its first derivative, and so on. The curve can be modified using the following operations: Returns the point theValue of parameter theParam. Throws exception only for the ModelData_OffsetCurve if it is not possible to compute the current point. For example when the first derivative on the basis curve and the offset direction are parallel. Returns the point theValue of parameter theParam and the first derivative theD1. Throws exception if the continuity of the curve is not \(C^1\). Returns the point theValue of parameter theParam and second derivatives theD1 and theD2. Throws exception if the continuity of the curve is not \(C^2\). Returns true if calculation completed successfully. In this case theD contains values of the derivatives from 0 up to theDerivativeOrder. Otherwise returns false. May throw exception if the continuity of the curve is less than theDerivativeOrder. Parameters: Returns a curve with reversed orientation. Creates a deep copy of the curve which does not share any definition with this object. Applies transformation matrix to this object. Results depends on the actual curve type. Returns a copy this object after applying transformation. The contents of this object is not modified. Returns a copy this curve translated along the vector. The contents of this object is not modified. Returns a maximum parameter of a definition domain. Returns a minimum parameter of a definition domain.
https://docs.cadexchanger.com/sdk/classcadex_1_1_model_data___curve.html
2022-06-25T05:11:06
CC-MAIN-2022-27
1656103034170.1
[]
docs.cadexchanger.com
Whenever a Part, Assembly, or Drawing is generated, DriveWorks can also save the Model or Drawing in a variety of other formats, for example, eDrawings and DXFs. When DriveWorks generates the Part, Assembly, or Drawing, it will automatically create any of the file formats in the same directory, and with the same name. For example, if a new file called "C:\DriveWorks\HydraulicCylinder\Results\Cylinder.slddrw" was created, and had PDF selected as a file format, then DriveWorks would also create a file called "C:\DriveWorks\HydraulicCylinder\Results\Cylinder.pdf". When DriveWorks generates the Part, Assembly, or Drawing, it will automatically create the captured file formats. The captured status of each available file format can be one of the following: For example, if a new file called "C:\DriveWorks\HydraulicCylinder\Results\Cylinder.sldprt" was created, and had eDrawings Part selected as a file format, then DriveWorks would also create a file called "C:\DriveWorks\HydraulicCylinder\Results\Cylinder.eprt". Making the File Name rule equate to "Delete" will not generate the additional format. The exact file formats that are available depend on whether you are working with a part, assembly, or drawing. File Formats can only be captured from Parts, Assemblies or Drawings that have been captured themselves. To choose one or more file formats: Multiple master drawings can be captured for each model. This is useful when only one sheet is required to be exported as the additional file format. DriveWorks has the ability to place and DXF file formats it creates into a common folder. See SOLIDWORKS Settings for more information. When a multiple Sheet Drawing is required to be saved out as a DWG or DXF file the following options can be set: To set these options: In the Drawing Task Pane within SOLIDWORKS, the sheets can be renamed to match the file extension of a specific format. DriveWorks will then export the sheets in this format. This drawing will have the same name and path as the original drawing. Please note that the chosen format must be supported by DriveWorks, as seen in the following section. Options can be set for the following file formats: To set export options:
https://docs.driveworkspro.com/Topic/DrawingFileFormatsPane
2022-06-25T05:03:30
CC-MAIN-2022-27
1656103034170.1
[]
docs.driveworkspro.com
How to find this tool: Main Console > Privacy > Privacy Scanner Because social networks make sharing information about yourself very easy, you might share more than you would like. The Privacy Scanner checks your Facebook, Twitter™, and Google+ settings for privacy risks, and can help you ensure that your personal information stays private. First, sign into the social network that you want to check. Near the top of the page, find and click the Check My Privacy button. When the Privacy Scanner results appear, follow the instructions to adjust your privacy settings. If the Privacy Scanner cannot scan your profile, make sure you have not signed out from the social network you want to check. If you do not see the Check My Privacy button, make sure that you have enabled the Trend Micro Toolbar in your browser. For more information, see the Privacy Scanner FAQ.
https://docs.trendmicro.com/en-us/consumer/titanium2014/tools/privacy_scanner.aspx
2022-06-25T04:17:40
CC-MAIN-2022-27
1656103034170.1
[]
docs.trendmicro.com
Iteration Burndown Chart Excel Price: $19.00 USD Iteration Burndown Chart Excel is a type of chart that is used to visualize progress in software development. The graph shows the number of remaining work items for each day, in order. It can be useful when trying to decide how much more time should be allocated to a project, or when trying to identify differences between original estimates and current estimations. Template Details: This type of chart is used to manage projects which have various iterations in their lifecycle and are managed using an agile methodology. Like most burndown charts, this one represents the effort required on its Y-axis and the time available on its X-axis. Format: MS Excel This chart's features: - The Y-axis represents the effort required - The X-axis represents the time available
https://iso-docs.com/products/iteration-burndown-chart
2022-06-25T05:12:41
CC-MAIN-2022-27
1656103034170.1
[array(['http://cdn.shopify.com/s/files/1/0564/9625/9172/products/Iterationburndownchart_1445x.png?v=1643055646', 'Iteration burndown chart'], dtype=object) array(['http://cdn.shopify.com/s/files/1/0564/9625/9172/products/Iteration_Burndown_Chart_4a774e47-5318-4362-bcf0-80e1e759eee8_1445x.png?v=1643055646', 'Iteration Burndown Chart Excel'], dtype=object) array(['http://cdn.shopify.com/s/files/1/0564/9625/9172/products/Iteration_2BBurndown_2BChart_2BTemplate_2BExcel_1445x.png?v=1643055646', 'Iteration Burndown Chart Template Excel'], dtype=object) ]
iso-docs.com
ITIL 4 Managing Professional Transition Module Training Are you ITIL Expert or an ITIL V3 candidate looking to transition to ITIL 4? The ITIL 4® Managing Professional Transition module is designed to transition ITIL experts and ITIL v3 candidates across to ITil4 efficiently. This course consists of certified training that enables ITIL v3-certified professionals to transition into the new framework. Skills Covered : - Concepts of service management - ITIL guiding principles - Activities of the service value chain ITIL 4 Managing Professional Transition Module Training is a comprehensive, interactive course that provides an understanding of the structure and content of ITIL’s fourth edition. This training will provide you with all the necessary knowledge to plan for and implement a professional transition process within your organization. The training includes videos, examples, exercises, and activities Why ITIL 4 Managing Professional Transition Module Training is Important? This training module is designed to provide the knowledge and skills necessary for the successful implementation of ITIL 4. This course will cover all aspects of managing the professional transition, including how to balance service continuity with staff reduction strategies. How ITIL 4 Managing Professional Transition Module Training will help you? ITIL 4 is the latest version of the IT Infrastructure Library, a set of best practices for managing IT services. It was created to help organizations improve their service quality and increase customer satisfaction. The Managing Professional Transition Module Training course will teach you how to use this new version effectively.
https://iso-docs.com/products/itil-4-managing-professional-transition-module-training
2022-06-25T05:14:06
CC-MAIN-2022-27
1656103034170.1
[array(['http://cdn.shopify.com/s/files/1/0564/9625/9172/products/ITIL4ManagingProfessionalTransitionModuleTraining_1445x.png?v=1643054147', 'ITIL 4 Managing Professional Transition Module Training ,ITIL4'], dtype=object) ]
iso-docs.com
The perfect gifting app! Create a special gifting experience through your store. Allow your customers to send your products as gifts, directly to friends and family. Giftify is perfect for any gifting occasion, with an entirely customizable message from the gift sender to their recipient, sent immediately upon purchase to notify the recipient of the gift on the way! Whether a last minute gifting option, or a well thought out gift, Giftify provides the opportunity for your customers to instantly send gifts with immediate gifting notifications and optional shipping and tracking details along the way. Your customers already love your products, now they can become a great ambassador for your brand by introducing your products to their audience through gifting. Allowing your products to be sent as gifts will drive new users to your store through the gifting experience allowing you to incorporate the recipient into your marketing funnels. Features - Add a robust gifting option to your products. - Customizable look and feel to match your store's branding. - Immediate Gift notification sent to customer's gift recipient. - Track gifted orders. - Pro plan available for deeper integration and further customizations. Key Benefits - Allow customers to introduce your products through gifting. - Increase sales volume. - Create engagement surrounding your store. - Immediate email notifications sent for last minute gift givers.
https://docs.minionmade.com/giftify-app/gifting-app-for-shopify
2022-06-25T01:26:49
CC-MAIN-2022-27
1656103033925.2
[]
docs.minionmade.com
Link Configuration Pritunl server Link configuration Links allow creating site-to-site links with IPsec using Pritunl Link clients. The Pritunl Link client communicates with the Pritunl cluster using HTTPS. This allows linking sites without needing to provide database access. Multiple link clients can be deployed to each site for automatic failover. Automatic routing table support is available on AWS, Google Cloud and Ubiquiti Unifi. The diagram below shows the infrastructure design of Pritunl IPsec links. Creating a Link Create the link and locations for each network, the example below has two VPCs. Add a route for each subnet on the network. Create a host for each Pritunl link client, two link clients can not use the same host. The hosts will be associated with a Pritunl Link client using the URI. The link hosts are defined in the Pritunl web server first then associated with a pritunl-link client using the URI. Click Add Host to add a new host then associate that host to a pritunl-link client configured using the other tutorials for each cloud provider. The pritunl-link client cannot run on a Pritunl server and the Pritunl server will not function as a link host. All hosts must be defined in the link configuration. Once the links are configured the state will change to Available when the host is connected to the Pritunl cluster and Active when the host is being used for an IPsec connection. Recommended IPsec Ciphers If all sites are using the latest version of pritunl-link it's recommended to use the aes128-sha256-x25519 IKE cipher and aes128gcm128-x25519 ESP cipher. The Force Preferred Cipher can also be used to require IPsec links to use this cipher to prevent automatic cipher selection from selecting a slower cipher. Below is a list of common ciphers, a full list is available in the strongSwan Wiki. AES 128 GCM is the recommended cipher for the best performance # AES 128 GCM IKE=aes128-sha256-x25519 ESP=aes128gcm128-x25519 # AES 256 GCM IKE=aes256-sha256-x25519 ESP=aes256gcm128-x25519 # ChaChaPoly IKE=chacha20poly1305-prfsha256-x25519 ESP=chacha20poly1305-x25519 These newer GCM ciphers provide high link speeds with minimal CPU usage. High end network adapters such as the Mellanox ConnectX-6 DX also support offloading this cipher. Below is a speed test on an Oracle Cloud BM.Optimized3.36 bare metal server with Mellanox ConnectX-6 DX adapters. The single connection test measured 12.8 gigabit/sec and the 10 connection test measured a total speed of 13.2 gigabit/sec Host Checking Host checking uses an additional network check between all hosts in a link. These checks are used to detect network partitions and discover the best link to activate in a high availability configuration. All link clients must be updated to v1.2 to support host checking. The client must have access to TCP port 9790 on all other hosts. Backoff Timeout The Backoff Timeout option in the host settings allows setting a duration in seconds that the host will be ineligible for becoming an active host. This timer will begin after a host goes offline, once the configured number of seconds has past since the host went offline the host can be selected as a primary. Route Removal By default the link client will not attempt to remove any routes from routing tables. This is done to prevent potentially removing a route that was manually modified by an administrator. For larger more complex configurations automatic removal of unused routes can be important. 
To enable this run sudo pritunl-link remove-routes-on on all link clients. If a route is removed in the Pritunl link configuration the link clients will remove that route from the routing tables. Automatic Firewall The link automatic firewall will configure iptables to only allow other link hosts to access ports UDP/500, UDP/4500 and TCP/9790. The link will automatically adjust the allowed IP addresses when hosts are added or removed or when a host IP address changes. This allows configuring the instance/external firewall to allow all IP addresses to access these ports without reducing the security of the system. This option is useful for configurations where a host IP address can change frequently. Any application that interferes with iptables such as firewalld cannot be used with this option. This option is not intended to replace an instance/external firewall, it will only control access to the ports used by pritunl-link. Run the command sudo pritunl-link firewall-on on the link host to enable the firewall. When enabling this option all external firewalls such as the instance or VCN firewall should allow ports UDP/500, UDP/4500 and TCP/9790 from 0.0.0.0/0. The pritunl-link client will handle further restricting the access to these ports from the Linux system firewall. Host Priority Each host has a priority that defaults to 1. The host with the highest priority that is available will always be used. This allows using a more powerful server as the primary and a less powerful server for failover to reduce costs. It is also necessary for some use cases such as the one shown below. In the example below the unifi-office location has a primary and failover client in both us-east-office and us-west-office. This configuration is useful if the unifi-office location needs access to both aws-us-east and aws-us-west but aws-us-east can not access aws-us-west. Port forwarding is needed for the clients in the unifi-office location because the clients are behind a Unifi Security Gateway with only one public IP address. Without setting a priority it would be possible for both link0 and link1 to become active at the same time. If this were to happen both clients would continue to update the port forwarding preventing the other client from connecting. To solve this link0 in unifi-office should be given a higher priority then link1 in both us-east-office and us-west-office. This will ensure the same link client is active between us-east-office and us-west-office while still having fast and automatic failover to link1 if link0 were to fail. Host Timeout Each host has a optional timeout, this will override the default timeout of 30 seconds. The time to failover is around 3 seconds + timeout. A longer timeout will increase the time to failover when a link host goes offline. Depending on how the link host is lost it may send a signal to the Pritunl server to indicate it is offline which will allow the link to failover in 3 seconds. Setting a lower timeout can shorten the time to failover but can also lead to an unnecessary downtime and failover in the event of a short latency spike or connectivity issue. IPsec Routers When running pritunl-link on a network with an IPsec router only one host can be used. Automatic failure for that network will not be available but other link locations can use automatic failover. After configuring all link locations use the list of routes in /var/lib/pritunl_link/routes to then create static routes. 
Add a static route on the local network router for each of the subnets with the next-hop set to the local IP of the server running pritunl-link. If new link locations or routes are added this file will update and the static routes will also need to be updated. Extensive testing with different routers and cloud provider IPsec offerings has shown that these router IPsec clients will significantly underperform an instance or server running IPsec. This includes cloud IPsec offerings that are managed by the cloud provider. Running IPsec on a router should only be done when it is not possible to configure a pritunl-link client with port forwarding. Additionally many failover features will be unsupported when not using pritunl-link clients for IPsec. Connectivity Issues Connectivity issues are often caused by incorrectly configured firewalls. In the example below client1 is making a SSH connection to server1. For this connection server1 must allow ssh traffic from the IP address of client1. This connection will also require that pritunl-link accept ssh traffic from client1. The ssh connection going to server1 will still be detected as ingress traffic to pritunl-link requiring that both pritunl-link and server1 allow the traffic from client1. To avoid this typically pritunl-link should accept all traffic from the VPC network with client1. (client1 ssh client) -> (pritunl-link) -> (server1 ssh server) Updated 4 months ago
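The host-side commands mentioned in the sections above can be collected into the short shell sketch below; all three come directly from this page, and the routes file path is the one documented for pritunl-link.

# Remove routes from the routing tables when they are removed from the link configuration.
sudo pritunl-link remove-routes-on

# Restrict UDP/500, UDP/4500 and TCP/9790 to the other link hosts
# (do not combine with firewalld or other iptables managers).
sudo pritunl-link firewall-on

# On a host fronting an on-premises IPsec router: list the subnets that
# need static routes pointing at the pritunl-link server.
cat /var/lib/pritunl_link/routes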
https://docs.pritunl.com/docs/link-configuration
2022-06-25T02:48:00
CC-MAIN-2022-27
1656103033925.2
[array(['https://files.readme.io/3816f10-link_infrastructure.png', 'link_infrastructure.png'], dtype=object) array(['https://files.readme.io/3816f10-link_infrastructure.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/dfb081e-link0.png', 'link0.png'], dtype=object) array(['https://files.readme.io/dfb081e-link0.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/3685114-preferred_cipher.png', 'preferred_cipher.png'], dtype=object) array(['https://files.readme.io/3685114-preferred_cipher.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/d24669f-ipsec_speedtest2.png', 'ipsec_speedtest2.png'], dtype=object) array(['https://files.readme.io/d24669f-ipsec_speedtest2.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/0f4ffa4-ipsec_speedtest.png', 'ipsec_speedtest.png'], dtype=object) array(['https://files.readme.io/0f4ffa4-ipsec_speedtest.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/cb85a27-link11.png', 'link11.png'], dtype=object) array(['https://files.readme.io/cb85a27-link11.png', 'Click to close...'], dtype=object) ]
docs.pritunl.com
Virtual Desktop Support is a native Apex One feature but is licensed separately. After you install the Apex One server, this feature is available but is not functional. Installing this feature means downloading a file from the ActiveUpdate server (or a custom update source, if one has been set up). When the file has been incorporated into the Apex One server, you can activate Virtual Desktop Support to enable its full functionality. Installation and activation are performed from Plug-in Manager. Virtual Desktop Support is not fully supported in pure IPv6 environments. For details, see Pure IPv6 Server Limitations.
https://docs.trendmicro.com/en-us/enterprise/trend-micro-apex-one-2019-server-online-help/managing-the-product/managing-the-trend_c/company_name-virtual/virtual-desktop-supp.aspx
2022-06-25T02:16:38
CC-MAIN-2022-27
1656103033925.2
[]
docs.trendmicro.com
Welcome to Cold Call Central! This functionality is designed to help you refine your cold calling craft by better understanding which calls connect, which don’t and why. For more resources on Cold Calling, click here. You may have noticed your main Chorus page, Recordings, looks a little bit different. Your new default Recordings page displays: - A few purple columns to easily show you which calls Connected, got stuck in a Phone Tree, went to a Gatekeeper, and which went to voicemail. These purple columns display the results of Chorus Smart Call Disposition AI which automatically determine which disposition(s) each call is. - By default, this view shows you only your connected calls over the last 7 days. You can change this by filter and sorting your calls by disposition. - We’ve also added a column that imports the disposition recorded in your Dialer. - As always, there is a column for automatically tracking next steps and scheduling. Adding these purple columns to filter and sort your calls to see specific results can be done by clicking on “Add Filters”. On the left side panel, you’ll see a list of filters. Scroll down that list to the Cold Call Central filters, and select the disposition(s) you’re interested in. Click “Apply Filters” to save the changes. If you find there is a set of filters you often use, you can have these filters on by default by clicking “Save View” and entering a name for this filtered view. Not sure what to make of the dispositions? Just want to make sure you’re on par? Here’s a chart to lay it all out for you: You can leverage this information to help you onboard faster if you’re new to the role, simplify your AE handoffs by sharing your connected call with them, request feedback from a manager when you ran into a tough gatekeeper, and find out what works best to get your calls connected by listening to your team member’s (and your own!) successful connected calls. Happy cold calling - you got this! Additional resources: Chorus.ai + SDR Teams: Intro to Cold Call Central Setting up Cold Call Central for Admins Cold Call Central for SDR Managers Please sign in to leave a comment.
https://docs.chorus.ai/hc/en-us/articles/360039475613-Cold-Call-Central-for-SDRs
2020-02-17T02:30:11
CC-MAIN-2020-10
1581875141460.64
[array(['/hc/article_attachments/360051103394/Image_2019-11-21_at_5.10.50_PM.png', 'Image_2019-11-21_at_5.10.50_PM.png'], dtype=object) array(['/hc/article_attachments/360051103414/Image_2019-11-21_at_5.15.40_PM.png', 'Image_2019-11-21_at_5.15.40_PM.png'], dtype=object) array(['/hc/article_attachments/360051103434/cold_call_fileter.png', 'cold_call_fileter.png'], dtype=object) array(['/hc/article_attachments/360051103454/Image_2019-11-22_at_9.15.57_AM.png', 'Image_2019-11-22_at_9.15.57_AM.png'], dtype=object) array(['/hc/article_attachments/360051103474/Image_2019-11-22_at_9.16.47_AM.png', 'Image_2019-11-22_at_9.16.47_AM.png'], dtype=object) array(['/hc/article_attachments/360052008793/Image_2019-11-22_at_9.44.18_AM.png', 'Image_2019-11-22_at_9.44.18_AM.png'], dtype=object) ]
docs.chorus.ai
EnGenius Cloud provides management views that collect information about connected clients in your organization/hierarchy view/network. Click Manage -> Clients to access this screen and double-click the organization/hierarchy view/network on the tree to change the scope. The list of clients can be customized based on time intervals, and the chart can be customized based on time intervals and SSIDs. To change these parameters, use the appropriate dropdown menu at the top of the screen. You can search for a client in the current client list by using the search. You can search by any parameter included in the search options, and it will attempt to match your query across all fields. You can also specify multiple parameters by clicking on the icon in the search box, as seen below:
https://docs.engenius.ai/engenius-cloud/managing-devices/managing-clients
2020-02-17T00:11:26
CC-MAIN-2020-10
1581875141460.64
[]
docs.engenius.ai
Uri. Authority Property Definition Gets the Domain Name System (DNS) host name or IP address and the port number for a server. public: property System::String ^ Authority { System::String ^ get(); }; public string Authority { get; } member this.Authority : string Public ReadOnly Property Authority As String Property Value Exceptions This instance represents a relative URI, and this property is valid only for absolute URIs. Examples The following example writes the host name () and port number (8080) of the server to the console. Uri^ baseUri = gcnew Uri( "" ); Uri^ myUri = gcnew Uri( baseUri,"shownew.htm?date=today" ); Console::WriteLine( myUri->Authority ); Uri baseUri = new Uri(""); Uri myUri = new Uri(baseUri,"shownew.htm?date=today"); Console.WriteLine(myUri.Authority); Dim baseUri As New Uri("") Dim myUri As New Uri(baseUri,"shownew.htm?date=today") Console.WriteLine(myUri.Authority) Remarks The Authority property is typically a server DNS host name or IP address. This property might include the service port number if it differs from the default port for the URI. If the Authority component contains reserved characters, these are escaped in the string value returned by this property.
https://docs.microsoft.com/en-gb/dotnet/api/system.uri.authority?view=netframework-4.7.2
2020-02-17T01:32:40
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Create accessPackageAssignmentRequest Important APIs under the /beta version in Microsoft Graph are subject to change. Use of these APIs in production applications is not supported. In Azure AD entitlement management, create a new accessPackageAssignmentRequest object. This operation is used to assign a user to an access package, or to remove an access package assignment. Permissions One of the following permissions is required to call this API. To learn more, including how to choose permissions, see Permissions. HTTP request POST /identityGovernance/entitlementManagement/accessPackageAssignmentRequests Request headers Request body In the request body, supply a JSON representation of accessPackageAssignmentRequest object. To create an assignment for a user, the value of the requestType property is AdminAdd, and the accessPackageAssignment property contains the targetId of the user being assigned, the assignmentPolicyId property identifying the accessPackageAssignmentPolicy, and the accessPackageId property identifying the accessPackage. To remove an assignment, the value of the requestType property is AdminRemove, and the accessPackageAssignment property contains the id property identifying the accessPackageAssignment being removed. Response If successful, this method returns a 200-series response code and a new accessPackageAssignmentRequest object in the response body. Examples Request The following is an example of the request for a direct assignment. The value of the targetID is the object ID of a user being assigned, the value of the accessPackageId is the desired access package, and the value of assignmentPolicyId is a direct assignment policy in that access package. POST Content-type: application/json { "requestType": "AdminAdd", "accessPackageAssignment":{ "targetId":"46184453-e63b-4f20-86c2-c557ed5d5df9", "assignmentPolicyId":"2264bf65-76ba-417b-a27d-54d291f0cbc8", "accessPackageId":"a914b616-e04e-476b-aa37-91038f0b165b" } } Response The following is an example of the response. Note: The response object shown here might be shortened for readability. All the properties will be returned from an actual call. HTTP/1.1 201 Created Content-type: application/json { "id": "7e382d02-4454-436b-b700-59c7dd77f466", "requestType": "AdminAdd", "requestState": "Submitted", "requestStatus": "Accepted", "isValidationOnly": false } Feedback
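For the removal case described above, the request has the same shape but uses AdminRemove together with the id of the existing accessPackageAssignment; the sketch below mirrors the AdminAdd example, and the GUID is a placeholder.

POST /identityGovernance/entitlementManagement/accessPackageAssignmentRequests
Content-type: application/json

{
  "requestType": "AdminRemove",
  "accessPackageAssignment": {
    "id": "a61f7889-ce06-41b5-b0a1-58a6a12b967b"
  }
}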
https://docs.microsoft.com/en-us/graph/api/accesspackageassignmentrequest-post?cid=kerryherger&view=graph-rest-beta
2020-02-17T02:35:04
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Test Case Overview This. Getting Started First of all you have to add this dependency to your Cargo.toml: [dev-dependencies] test-case = "0.3.3" Additionally you have to import the procedural macro with use statement: use test_case::test_case; The crate depends on proc_macro feature that has been stabilized on rustc 1.29+. Example usage: #![cfg(test)] extern crate test_case; use test_case::test_case; #[test_case( 2, 4 ; "when both operands are possitive")] #[test_case( 4, 2 ; "when operands are swapped")] #[test_case(-2, -4 ; "when both operands are negative")] fn multiplication_tests(x: i8, y: i8) { let actual = (x * y).abs(); assert_eq!(8, actual) } Output from cargo test for this example: $ cargo test running 3 tests test multiplication_tests::when_both_operands_are_possitive ... ok test multiplication_tests::when_both_operands_are_negative ... ok test multiplication_tests::when_operands_are_swapped ... ok test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out Examples); } } Inconclusive (ignored) test cases (since 0.2.0) If test case name (passed using ; syntax described above) contains word "inconclusive", generated test will be marked with #[ignore]. #[test_case("42")] #[test_case("XX" ; "inconclusive - parsing letters temporarily doesn't work but it's ok")] fn parses_input(input: &str) { // ... } Generated code: mod parses_input { // ... #[test] pub fn _42() { // ... } #[test] #[ignore] pub fn inconclusive_parsing_letters_temporarily_doesn_t_work_but_it_s_ok() { // ... } Note: word inconclusive is only reserved in test name given after ;. License Licensed under of MIT license (LICENSE-MIT or) Contribution
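As one more illustration of the attribute syntax shown above, including the reserved word inconclusive, here is a small additional sketch; the function and case names are made up for the example and the crate usage matches the snippets in this README.

use test_case::test_case;

#[test_case(0 ; "when input is zero")]
#[test_case(7 ; "when input is positive")]
#[test_case(9 ; "inconclusive - negative inputs not decided yet")]
fn accepts_non_negative(n: i8) {
    assert!(n >= 0);
}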
https://docs.rs/crate/test-case/0.3.3
2020-02-17T01:28:59
CC-MAIN-2020-10
1581875141460.64
[]
docs.rs
Option passed to a control to specify a maximum height. Maximum Height allowed for the window. using UnityEngine; public class ExampleScript : MonoBehaviour { // Draws a window you can resize between 80px and 200px height // Just click the box inside the window and move your mouse Rect windowRect = new Rect(10, 10, 100, 100); bool scaling = false; void OnGUI() { windowRect = GUILayout.Window(0, windowRect, ScalingWindow, "resizeable", GUILayout.MinHeight(80), GUILayout.MaxHeight(200)); } void ScalingWindow(int windowID) { GUILayout.Box("", GUILayout.Width(20), GUILayout.Height(20)); if (Event.current.type == EventType.MouseUp) { scaling = false; } else if (Event.current.type == EventType.MouseDown && GUILayoutUtility.GetLastRect().Contains(Event.current.mousePosition)) { scaling = true; } if (scaling) { windowRect = new Rect(windowRect.x, windowRect.y, windowRect.width + Event.current.delta.x, windowRect.height + Event.current.delta.y); } } }
https://docs.unity3d.com/ScriptReference/GUILayout.MaxHeight.html
2020-02-17T02:03:55
CC-MAIN-2020-10
1581875141460.64
[]
docs.unity3d.com
To configure the email settings: - Click “Dashboard” in the upper right corner of your concrete5 toolbar - Click the “System & Settings -> SMTP Method” menu item. Set the values for the SMTP account as below, assuming a Gmail account: - Send Mail Method: External SMTP Server - Mail Server: smtp.gmail.com - Username: [email protected] - Password: PASSWORD - Encryption: TLS - Port: 587 Replace the above values with settings specific to your SMTP server. If using Gmail, replace the USERNAME and PASSWORD placeholders with correct values for your Gmail account.
https://docs.bitnami.com/google/apps/concrete5/configuration/configure-smtp/
2020-02-17T01:14:36
CC-MAIN-2020-10
1581875141460.64
[]
docs.bitnami.com
You can change the look and feel of the colorpicker. To do this, take the following steps: <style> .my-first-class { /*some styles*/ } .my-second-class { /*some styles*/ } </style> var colorpicker = new dhx.Colorpicker({ css:"my-first-class my-second-class" }); Related sample: Custom css - DHTMLX Colorpicker
https://docs.dhtmlx.com/suite/colorpicker__customization.html
2020-02-17T02:09:35
CC-MAIN-2020-10
1581875141460.64
[]
docs.dhtmlx.com
ASP.NET SQL Server Registration Tool (Aspnet_regsql.exe) The. Note For information about how to find the correct version of Aspnet_regsql.exe, see Finding the Correct Version of Aspnet_regsql.exe later in this topic. Aspnet_regsql.exe <options> SQL Connection Options Application Services Options Note db_securityadmin roles for the SQL Server database. Session State Options Remarks You can set several types of options with the ASP.NET SQL Server Registration tool. You can specify a SQL connection, specify which ASP.NET application services use SQL Server to manage information, Express. For more information about using SQL Server Express to store session state, see Session-State Modes. Examples set up session state, you must use the command-line tool; the wizard will not set up session state. To run the wizard, run Aspnet_regsql.exe without any command-line arguments, as shown in the following example. aspnet_regsql.exe. aspnet_regsql.exe -E -S localhost -A mr Finding the Correct Version of Aspnet_regsql.exe Aspnet_regsql.exe is installed in the Microsoft.NET Framework directory. If the computer is running multiple .NET Framework versions side-by-side, multiple versions of the tool might be installed. The following table lists the locations where the tool is installed for different versions of the .NET Framework. See Also Concepts Implementing a Membership Provider Implementing a Profile Provider Implementing a Role Provider
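Because session state can only be configured from the command line, a typical invocation looks like the sketch below; it adds SQL Server session state in persisted mode (-sstype p) using Windows authentication against the local default instance, and the server name is a placeholder for your own.

aspnet_regsql.exe -S localhost -E -ssadd -sstype p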
https://docs.microsoft.com/en-us/previous-versions/ms229862(v=vs.100)?redirectedfrom=MSDN
2020-02-17T01:52:54
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
This is a C++ library for an efficient continuous and discrete time optimal control implementation. It includes methods for solving optimal control for continuous time problem with exogenous signal for switching in between predefine modes. The toolbox is capable of solving constrained problems. Our proposed method is based on a bi-level optimal control approach which synthesizes an optimal feedback control policy for continuous inputs in the bottom-level and optimizes the switching times in between two consecutive system modes in the top-level. Our Optimal Control for Switched Systems algorithm (OCS2 algorithm) consists of two main steps: a method which synthesizes the continuous input controller and a method which calculates the parametrized cost function derivatives with respect to the switching times. For synthesizing the continuous input controller, OCS2 uses the SLQ algorithm; a dynamic programming approach, which uses the Bellman equation of optimality to locally estimate the value function and consequently the optimal control law. In the second step, OCS2 uses a Riccati-based approach to compute the derivative of the total cost with respect to the switching times. Moreover, the library provides tools for implementing the SLQ algorithm in MPC fashion. It also includes a ROS interface for receiving and sending the MPC policy. This library also uses CppADCodeGen an automatic-differentiation toolbox to calculate the derivatives of the system dynamics, constraint, and cost function. This library consists of the following main modules: Core Module provides the followings features: SLQ Module provides the followings features: OCS2 Module provides the followings features: MPC Module provides the followings features: ROS Interfaces Module provides the followings features: Frank-Wolfe Module provides the followings features: Robotic Examples provides the followings tools and examples: The source code is available at Bitbucket To get started with the control toolbox, please see Getting Started. For any questions, issues or other troubleshooting please either The OCS2 Toolbox is released under the BSD Licence, Version 3.0. Please note the licence and notice files in the source directory. This toolbox has been used in the following publications:
https://docs.leggedrobotics.com/ocs2_doc/
2020-02-17T02:10:35
CC-MAIN-2020-10
1581875141460.64
[]
docs.leggedrobotics.com
Publish a NuGet package from the command line Azure DevOps Services | TFS 2018 | TFS 2017 Publish NuGet packages to a feed in Azure Artifacts to share them with your team and your organization. First, get the tools and your feed URL: Go to your feed (or create a feed if you haven't). Select Connect to feed: Select NuGet.exe under the NuGet header Select Get the tools in the top right corner Follow steps 1 and 2 to download the latest NuGet version and the credential provider. Follow the instructions in the Project setup, Restore packages, and Publish packages sections to publish. Go to your feed (or create a feed if you haven't). Select Connect to feed: Follow steps 1 and 2 to get the tools, add the feed to your local NuGet configuration, and push the package. You can also manually construct a push command as follows: nuget.exe push -Source {NuGet package source URL} -ApiKey key {your_package}.nupkg Note - The NuGet client's push command requires an API key. You can use any non-empty string you want. In this example, we used key. - If you're prompted for credentials on the command line, ensure that you set up the Azure Artifacts Credential Provider. For more help in using credential providers with NuGet, see NuGet Cross Platform Plugins. For Azure DevOps, use a personal access token when prompted for credentials, see Authenticate access with personal access tokens. Get or create a sample package to push Get If you don't have a package but want to try this out, Microsoft provides a sample package in the public NuGet gallery. Run these two commands: nuget.exe install HelloWorld -ExcludeVersion nuget.exe push -Source {NuGet package source URL} -ApiKey key HelloWorld\HelloWorld.nupkg Create If you want to create your own NuGet package to push, follow the steps in Creating NuGet packages Run the following command: nuget.exe push -Source {NuGet package source URL} -ApiKey key {your_package}.nupkg Publishing with upstream sources There are some important things to consider when publishing packages that involve upstream sources. Check out the documentation on overriding a package from an upstream source for more information. Feedback
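If you prefer to register the feed once and then push by source name, the NuGet CLI supports that as sketched below; the feed name is arbitrary and {NuGet package source URL} is the same placeholder used in the commands above.

nuget.exe sources add -Name "MyFeed" -Source "{NuGet package source URL}"
nuget.exe push -Source "MyFeed" -ApiKey key {your_package}.nupkg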
https://docs.microsoft.com/en-us/azure/devops/artifacts/nuget/publish?view=azure-devops
2020-02-17T01:56:33
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Add/Modify Mailboxes Use this dialog box to discover, select, and deselect mailboxes on the Exchange Server. Specifies the mailboxes that you want to add. Display Name Provides the Display Name of the mailbox. Alias Name Displays the Alias Name of the mailbox. SMTP Address Displays the SMTP Address of the mailbox. Subclient Displays the subclient associated with the Exchange Message Journal mailbox. Change all selected mailboxes to: Lists subclients that can be associated with mailboxes. To change the subclient association for one or more selected mailboxes, click a subclient in the list. If no mailbox is selected, this field is disabled. Discover Click to discover any new mailboxes, or if the display window is blank and does not display any items.
http://docs.snapprotect.com/netapp/v11/article?p=en-us/universl/agent/addmbtarget_exti.htm
2020-02-17T02:06:35
CC-MAIN-2020-10
1581875141460.64
[]
docs.snapprotect.com
Collection Evaluation - One tool, One Report Hello, ConfigNinja here writing you about Collection Evaluation and information on how to review the current collection performance. Collection performance is often one of those questions I get when working with some of my customers, and recently I got a question about the collection evaluation and the changes made to the collection. I decided to show you guys a few tips and tricks on how to get information about collection evaluation and different ways to understand what is going on with your collections. The first method I use is a very simple method, this is using the System Center 2012 R2 Toolkit, Collection Evaluation Viewer(CEViewer). You can get this tool from the following link: Once you have downloaded this tool and installed on your desktop, connect to the Primary Site Server Name. There are going to be several sections on the collection evaluation viewer, the one we are going to use today is the Full Evaluation, which show us the different refresh rates and the member changes performed on your collections. First will display us the collection name, then it collection ID . This is a critical information so we can do some more reviews of the collection we are looking to evaluate and identify what is going on with this collection in specific. Second is the Runtime, this will give us the details on how long it took for this collection to evaluate. In our example, this collection took 4.1 seconds to evaluate a 17% more than the other collections. The third information I look for this tool is the Last Evaluation and the Next Evaluation time, this will let me know how often this collection is being evaluated on the system. The last information I look at the tool is the Member Changes and the Last Member change time, this is a very good information because it can let me know those collections changes in the environment. The tool provides a good level of information about the collections, the challenge I face is to extract this data and saved somewhere. This data will override once the collections get evaluated again, so I have very limited time to save the data to another place so I can keep track of the performance of my collections. To solve this problem, I have decided to write a report that will give me the same information and allow me to export it into another version so I can keep track of the performance periodically. You have the Start Auto Refresh to get the live version of this information, but since I’m a little old school I like to get this data and share with some of my other peers for them to review it and also provide feedback on it. Here is the sample of the report I Created that will provide the almost the same view of Full Evaluation in CE Viewer. As you can see on the report I have a very similar view to the CE Viewer. Now I can export this report and share it with some of my peers. I also created a second report to know about those machines that were added to the collection. I click on the Member Changes field and this will open the second report that will give me a list of those machines. Now this solves some of my current need to get a report about the current collection performance, using the Collection Evaluation Viewer and the report. If you can benefit from this report I have made it available on the following gallery: Santos Martinez – SR PREMIER FIELD ENGINEER – Author Steven Hernandez –
https://docs.microsoft.com/en-us/archive/blogs/smartinez/collection-evaluation-one-tool-one-report
2020-02-17T02:25:13
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Introduction to Quality Assurance¶ We are users and developers systematically working to increase the quality of Krita and of its development process. We help sustain the self-auditing culture of Krita's community. We methodically assess functionality, usability, and security; hunt for bugs and care for bugs already captured; aid in quality management; and create tools to make developers' lives easier. How To Help?¶ The quality assurance field is really broad and diverse, and we are always looking for people of all skills and talents. Below you will find a list of opportunities to help, so you can dive right in. Also, don't forget to visit us on IRC; we will be happy to meet you. Bug Triaging¶ There are more incoming bug reports than the core team can handle. We are looking for volunteers who will go through the bug tracker and handle the reported bugs. This includes: determining whether a bug is really a bug or a new feature request; confirming bugs by reproducing them; guiding reporters to provide all the information needed to fix the bug; and (optionally) providing logs, backtraces, and core dumps. Reporting Bugs provides general information about bug reports and guidance for their creation, and there is a Krita-specific guide to Triaging Bugs. See also Guide to Developing Features. Hints for user support also apply here: Introduction to User Support. Docs for gathering logs, backtraces, etc. Beta Testing¶ To validate that an upcoming stable version will work as expected, there is the beta version. You can help by downloading the beta, trying it out, and sharing your feedback. Every beta comes with a survey, which will ask for some basic information about your setup (all anonymized, of course) and guide you through testing the latest features and bug fixes. You can find a link to the survey on Krita's welcome page. To know when there is a new beta, watch for the news on the welcome page, or in the News section of the Krita website. For more information about the process, refer to the Testing Strategy. Test Engineering¶ The test suite is the safety net that enables the community to fearlessly move forward. We have a comprehensive testing strategy to help us find bugs early in the process and deliver the best user experience possible. But without people, the strategy is just a bunch of words. There are many ways you can help, for both technical and less technical people. If you like to experiment and try new things, consider exploratory testing; no coding skill is required. Hone your analytical skills by designing end-to-end tests. Try your hand at unit testing. Design and implement the low-level tests for both backend and UI code. Enhancement Projects¶ There are plenty of projects from small to big; some involve writing and organizing, some require coding. We currently have a number of such projects registered. Does something catch your eye? Do you have something else in mind?¶ This list is not definitive. We are always open to new ideas and approaches. Please join us on IRC (The Krita Community) to discuss the possibilities.
https://docs.krita.org/en/untranslatable_pages/quality_assurance.html
2020-02-17T01:31:40
CC-MAIN-2020-10
1581875141460.64
[]
docs.krita.org
Baby born with cell phone in hand! This seems strange, but I suppose it is inevitable in our ever-more interconnected, ‘wired’, world: “Half a million kids in the UK under the age of ten will have a mobile phone by the end of next year, according to research published by market intelligence outfit mobileYouth….”
https://docs.microsoft.com/en-us/archive/blogs/bwill/baby-born-with-cell-phone-in-hand
2020-02-17T02:29:33
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
Understand lock types You can use the lock command to temporarily prevent changes to a particular file or folder in the source control server. This can be helpful if you want to change an item in your workspace and then check it in before anyone else modifies it. Check-in lock A check-in lock is less restrictive than a check-out lock. When you apply a check-in lock, users can continue to make local changes to the item in other workspaces. But those changes cannot be checked in until you explicitly remove the check-in lock from the item or implicitly remove it by checking in your changes to the file. Check-out lock In Visual Studio Team Foundation Server 2012, check-out locks are generally not effective because of local workspaces (see Decide between using a local or a server workspace). Specifically, check-out locks are: Not enforceable because other users might be using local workspaces. Not available if you are using a local workspace. Disabled if a member of the Administrators security group of your team project collection has enabled asynchronous checkout for your team's server workspaces. A check-out lock prevents users who are using server workspaces from checking out and making changes to the locked item in their workspaces. You cannot apply a check-out lock to an item for which any pending changes exist, in any workspace other than your own. How Locking Works If a file is checked out when you lock it, its check-out record is modified to contain the new lock type. If the files are not checked out, a "lock" change is added to the set of pending workspace changes. Unlike the check-out command, the lock command does not automatically make a file editable. Team Foundation unlocks an item automatically when you check in pending changes in the workspace where it is locked. Locks are also released if the pending changes for a file are undone by using the undo command. Locks on folders are implicitly recursive. If you lock a folder, you do not have to lock the files that it contains. You can determine which items are locked in the version control server, and by whom they were locked, by using the Status command. A lock may be placed either as its own operation or as part of several other operations; for example, when you rename an item by using the rename command, both the old and new server paths are locked. Unlocking an Item You can unlock an item explicitly by using the unlock command or implicitly when you check in. When you check in pending changes to a locked item, Team Foundation removes any locks. Note By default, the UnlockOther permission is granted to administrators only. If you have the UnlockOther permission, you can remove a lock from an item in the workspace of another user by using the Lock Command. See Also Concepts Create and work with workspaces Other Resources Work with version control locks Resolve Team Foundation Version Control conflicts
https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2013/ms181419%28v%3Dvs.120%29
2020-02-17T02:30:21
CC-MAIN-2020-10
1581875141460.64
[]
docs.microsoft.com
SQL Add the saved table to a SQL schema. In addition to saving your data as a 1010data table, you can save the table to an existing SQL schema to which you have access. If you do not have access to any SQL schemas, you must ask your administrator for permissions. You can also create a schema with the SQL Metadata Tool. See SQL Metadata Tool for more information. - Add the saved table to a SQL schema - Selecting this option displays the SQL schema options. - Select the SQL schema - Select the SQL schema for this table from the drop-down list. - Choose SQL table name - Enter the desired name of the SQL table, or leave the field blank to use the 1010data table name. - Use 1010data column labels (not names) as SQL column names - Selecting this option uses 1010data column labels for SQL column names, while deselecting the option uses 1010data column names for SQL column names.
https://docs.1010data.com/1010dataUsersGuideV10/SaveData/SaveDataTableSQLTab.html
2022-08-07T19:07:09
CC-MAIN-2022-33
1659882570692.22
[array(['Screens/SaveDataAsTableSQLTab.png', None], dtype=object)]
docs.1010data.com
Payment Pending For payments using cards with 3D Secure authentication or local payment methods, the add-on places the order with a PAYMENT_PENDING status. This status is removed as soon as the authorisation is finished. In cases where the payment is abandoned or the authorisation is not received, the order status is updated to PROCESSING_ERROR after a certain period. By default this period is 60 minutes. You can configure this period in the dynamic order process definition file: - In the extension you are using (adyenv6fulfilmentprocess or adyenv6ordermanagement), find the order-process.xml file. Update the timeout delay value and save the file.
<wait id="waitForAdyenPendingPayment" then="checkPendingOrder">
  <event>AdyenPaymentResult</event>
  <timeout delay="PT60M" then="checkPendingOrder"/>
</wait>
Capture If you have configured an automatic capture using the Hybris back office, we capture your payment automatically. However, if you prefer a manual capture flow, you need to send a capture request before the authorisation expires. Some payment methods do not support manual capture. Your capture flow settings in Hybris should match the capture settings for your merchant account in your Customer Area. The add-on provides an implementation of the de.hybris.platform.payment.commands.CaptureCommand for capturing payments. Using adyenCheckCaptureAction, you can check if the capture of a payment has been completed. You can expect the following states: - OK – Payment was captured - NOK – Capture failed - WAIT – Waiting for capture completion (listening to the AdyenCaptured event) Cancellations/Refunds The add-on provides an implementation of the de.hybris.platform.payment.commands.VoidCommand to cancel the payment, if required. This command uses our cancelOrRefund API call and refunds the payment. Refunds This extension provides an implementation of the de.hybris.platform.payment.commands.FollowOnRefundCommand to refund a payment. You can integrate AdyenCancelOrRefundAction to check if a refund is complete. You can expect the following states: - OK – Payment was refunded - NOK – Refund failed - WAIT – Waiting for refund completion (listening to the AdyenRefunded event)
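For illustration only (this is not Adyen or Hybris code), the sketch below mimics the timeout behaviour described above: an order stays PAYMENT_PENDING until authorisation arrives, and falls to PROCESSING_ERROR once the ISO-8601 delay (PT60M by default) has elapsed.

```python
# Illustrative sketch of the PAYMENT_PENDING timeout described above (not Adyen code).
# The delay string mirrors the ISO-8601 duration configured in order-process.xml.
from datetime import datetime, timedelta, timezone
import re

def parse_iso8601_minutes(delay: str) -> timedelta:
    """Parse a simple 'PT<n>M' duration such as 'PT60M'."""
    match = re.fullmatch(r"PT(\d+)M", delay)
    if not match:
        raise ValueError(f"unsupported duration: {delay}")
    return timedelta(minutes=int(match.group(1)))

def next_status(placed_at: datetime, authorised: bool, delay: str = "PT60M") -> str:
    if authorised:
        return "AUTHORISED"            # pending flag removed once authorisation arrives
    if datetime.now(timezone.utc) - placed_at > parse_iso8601_minutes(delay):
        return "PROCESSING_ERROR"      # authorisation never received within the period
    return "PAYMENT_PENDING"

# An order placed two hours ago with no authorisation times out:
print(next_status(datetime.now(timezone.utc) - timedelta(hours=2), authorised=False))
```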
https://docs.adyen.com/pt/plugins/hybris/order-management
2022-08-07T18:50:19
CC-MAIN-2022-33
1659882570692.22
[]
docs.adyen.com
I have a process that is processing a list of coordinates. It does this in batches. Let's say it's processing 100 coordinates in batches of 10. So I have a foreach loop over my 100 coordinates, and on every 10th iteration I do a SaveChanges(). Within the loop I'm checking to see if the coordinate is a duplicate. If so, I mark it as deleted by setting a flag. If it's not a duplicate, I'm searching for the previous coordinate (in time) that is not deleted. Now the unexpected part of my case: often the coordinate I get back from this method has its deleted flag set to true! If I do a SaveChanges() right after setting the deleted flag, I get a different result. Who can explain this difference in behaviour? Is this a bug or by design? I'm using EF 5.0.0 and .NET Framework 4.5
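Not an authoritative answer, but a language-agnostic sketch of one plausible explanation: the "not deleted" filter is evaluated against what is already saved in the database, while the object handed back is the tracked in-memory instance that already carries the unsaved flag. The toy model below (plain Python, not Entity Framework) reproduces that difference before and after a save.

```python
# Language-agnostic sketch of one likely cause of the behaviour above (not EF code):
# the filter runs against persisted state, but the returned object is the tracked
# in-memory instance, unsaved changes included.
database_rows = {1: {"id": 1, "deleted": False}}    # persisted state (no SaveChanges yet)
tracked_entities = {1: {"id": 1, "deleted": True}}  # in-memory change: flag set, not saved

def query_not_deleted():
    # Filtering happens on persisted values...
    matching_ids = [rid for rid, row in database_rows.items() if not row["deleted"]]
    # ...but "materialization" hands back the tracked instance, unsaved changes included.
    return [tracked_entities.get(rid, database_rows[rid]) for rid in matching_ids]

print(query_not_deleted())  # [{'id': 1, 'deleted': True}] -- a "not deleted" query returns a flagged object

# After the save, the persisted row is updated, the filter excludes it,
# and the same query returns nothing -- hence the different result.
database_rows[1]["deleted"] = True
print(query_not_deleted())  # []
```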
https://docs.microsoft.com/en-us/answers/questions/190452/unexpected-result.html
2022-08-07T19:54:38
CC-MAIN-2022-33
1659882570692.22
[]
docs.microsoft.com
Testing & deployment Test your Shopify integration before deploying and going live. Overview This page includes details on ensuring that Affirm is correctly configured as a payment option within your Shopify settings and ready for your customers to use as a payment option. 1. Test your integration To test the integration, please add any product to your cart and navigate through your checkout process as normal, then select Affirm as a payment option. As long as you are able to get redirected to the Affirm checkout flow after selecting Affirm as a payment method, Affirm will work as expected. You will know if the Affirm payment option is working correctly if you can see the screen below after clicking ‘Complete Order’ If you receive any errors at and are not able to get to the screenshot above, please reach out to [email protected] 2. Deploy to production Please note: while test mode is active, real loans are not created. Your customers will not be able to use Affirm as a payment method until test mode is deactivated by following the steps below To deactivate test mode: - Sign in to your Shopify account and click the Settings (gear) icon at the bottom-left of the screen. - Click Payments - In the Alternative Payments section, find Using: Affirm and click Edit - Uncheck the Use test mode checkbox and save settings. You will know test mode has successfully been disabled once the red 'You are currently using the Affirm Developer Sandbox' banner is removed from the Affirm checkout flow: If you see the image above, Affirm is successfully connected to your live environment and ready to be used by your customers! If you're experiencing any errors that prevent you from seeing the image above, please reach out to us at [email protected] Updated over 1 year ago Congrats on successfully testing and deploying you Affirm integration in Shopify! You may also want to learn more about processing transactions in Shopify, and some additional features for Shopify Plus users.
https://docs.affirm.com/developers/docs/shopify-testing-deployment
2022-08-07T19:48:21
CC-MAIN-2022-33
1659882570692.22
[array(['https://files.readme.io/6f48dff-Screen_Shot_2021-01-25_at_9.14.59_AM.png', 'Screen Shot 2021-01-25 at 9.14.59 AM.png 1722'], dtype=object) array(['https://files.readme.io/6f48dff-Screen_Shot_2021-01-25_at_9.14.59_AM.png', 'Click to close... 1722'], dtype=object) array(['https://files.readme.io/35c2035-Screen_Shot_2021-01-25_at_9.15.14_AM.png', 'Screen Shot 2021-01-25 at 9.15.14 AM.png 1718'], dtype=object) array(['https://files.readme.io/35c2035-Screen_Shot_2021-01-25_at_9.15.14_AM.png', 'Click to close... 1718'], dtype=object) ]
docs.affirm.com
Introduction At Amberdata.io we provide a rich set of high quality, real-time, raw & augmented blockchain data. Web3api enables you to easily gain access to this data. Usage Every request (with the exception of Websockets) requires this header: x-api-key The request will be refused if this header is missing or an invalid API Key is provided (see Authentication for more details on obtaining an API Key). Example Request: curl -X GET \ -H "accept: application/json" \ -H "x-api-key: <api_key>" \ "" You can explore the API in different ways: - REST - for all your historical data needs - RPC - to send transactions to the blockchain - Websockets - typically for real-time, low-latency applications - FIX - to retrieve Market Data via the Financial Information eXchange protocol The OpenAPI specification for the REST API is also available here. Now we're ready for the next step, retrieving data!
https://docs.amberdata.io/reference/reference-getting-started
2022-08-07T18:28:04
CC-MAIN-2022-33
1659882570692.22
[]
docs.amberdata.io
User dashboard# In dicehub, the user interface was designed to be as simple as possible while still allowing a very fast workflow. The user dashboard in dicehub has the following elements: - Global navigation: a consistently available user interface element that is used for global navigation and provides access to system-wide functions. - Side navigation: a vertical list of navigational links. In this navigation you can find: - Projects: a list of projects you have access to. - Groups: a list of groups you have access to. - Recently opened: a list of recently opened applications. - Activities: a list of specific events, for example creating an application. - User settings: here you can set up your account and profile and manage billing, security, and limits. - Main content: the part of the page that is unique to that page and can contain different content, such as lists, forms, images, and more. - Logo: a visual representation of dicehub to promote public identification. The logo is also a link to your dashboard. - Global navigation menu: the main navigation in dicehub, which includes links to the following pages: - Explore: a page with all publicly available projects and groups. - Templates: publicly available pre-defined templates can be found here. - Community: here dicehub users can prepare their applications and publish them for the community. - Admin: this section is only available for administrators on self-managed servers. - User menu: a menu where you can quickly find a link to your user settings, your public profile, and a link to sign out. Last update: January 22, 2022
https://docs.dicehub.com/latest/guide/ui/user_dashboard_ui/
2022-08-07T18:38:25
CC-MAIN-2022-33
1659882570692.22
[array(['../images/user_dashboard_ui.png', 'User dashboard'], dtype=object)]
docs.dicehub.com
Export Products To export the product list, click the [Export to] button in the "Export Data" tab and choose the Excel or CSV file format. After that, click the [Dropdown] icon and choose the category or subcategory of the required products. To export all products, choose "All Categories", or choose "Uncategorised" to export the uncategorised products. Click [Export] to confirm or [Cancel] to abort the export process. If the export request is successful, you get a notification. Afterward, click the [Notification] icon. Click the link in the notification to download the report to your computer. OR: Click the [Background Tasks] icon. Then click the link in the notification to download the report to your computer. Afterward, select the folder where you want to save the file and click [Save] or [Cancel].
https://docs.sellerskills.com/import-export/export/export-products
2022-08-07T18:43:00
CC-MAIN-2022-33
1659882570692.22
[]
docs.sellerskills.com
In fall of 2021 we introduced a new way to process payments through SingleOps with SingleOps Payments powered by ProPay. This article addresses frequently asked questions regarding the migration from OpenEdge to SingleOps Payments, and the functionality of SingleOps Payments as a whole. If you don't find an answer to the question you have, please feel free to reach out directly to our support team so we can get you an answer directly and add it to this article so others can learn as well. What are the benefits of SingleOps Payments powered by ProPay? With SingleOps Payments powered by ProPay you will have access to one-step payment processing, e-invoicing with click to pay, secure card storage, full-featured card-present support, and automatic card updates. Additionally, by processing payments through SingleOps you are able to get paid 3X faster, spend 93% less time on receivables, and have happier customers who spend more. What does the signup process entail? The sign-up process should take no more than 10 minutes. Please refer to this article for step-by-step guides on how to sign up. How to Set Up SingleOps Payments powered by ProPay How long does it take to get approved? Approval is instant on SingleOps Payments. Once you complete the sign-up form, all you have to do is turn the Payment Gateway on and begin processing payments in SingleOps immediately. Note: it is possible that ProPay will reach out for some additional information after the instant approval. This will not impact your ability to process payments in SingleOps. What are the credit card and ACH processing fees? Credit Card: 2.9% + 20c per transaction. ACH: 1% with a $10 max fee. Can I pass along processing fees to my customers? Yes! When you activate your payment gateway you will have the ability to set/customize your credit card processing fee, your bank transfer processing fee, and your bank transfer fee cap. See the Setting Up a Payment Gateway article for more details. Who is the payment processing partner? SingleOps has partnered with ProPay, a division of Global Payments, to provide merchant services as a part of our online payments solution. SingleOps customers are automatically approved after completing a short form in the app, and are ready to accept payments immediately. Are there any additional costs I need to be aware of? There are no costs to get started with SingleOps Payments. We offer a competitive flat rate of 2.9% + 20 cents per credit card transaction and 1% per ACH transaction (fee capped at $10), so that you are never surprised by a bill and can forecast expenses accurately. In SingleOps, you can even choose to pass these additional costs on to your customers through your Account Settings. Many processors try to quote lower rates and tack on extra fees. With SingleOps Payments, you only incur expenses after you get paid. Will SingleOps Payments/ProPay ever delay transfers to my bank account? Transfers will only be delayed if you exceed 300% of your pre-approved transaction volume limit. Once you exceed the limit itself, you may be asked to complete additional underwriting steps in order to have your limit increased, so that transfers are never delayed. Limits are all soft limits and will never prevent you from accepting payments. What happens to my saved cards? All customers using our previous payment processing partner (OpenEdge) will have their saved cards migrated on the back end. 
The SingleOps team is coordinating with both OpenEdge and ProPay to execute this process so that you will have saved cards available as soon as you switch to using SingleOps Payments. How will this work within SingleOps? This is a fully integrated solution. Your clients will be able to make payments directly through the online portal (matching how our processing works/worked with OpenEdge). How does the money end up in my bank account? Transactions will be deposited individually into your account as they are received. Note that it may take a few days for a transaction to hit your bank account, but ProPay will make deposits daily. Will the billing for payment processing change? No, SingleOps Payments is set up on gross billing (similar to OE), so fees will be taken out monthly. What needs to be done to cancel OpenEdge? Nothing on your end. Once you switch your payment gateway to SingleOps Payments, we will automatically reach out to OpenEdge on your behalf and cancel your account.
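To make the quoted fee structure concrete, here is a small illustrative calculator (not an official SingleOps or ProPay tool); it uses only the rates stated in this FAQ, and rounding to the nearest cent is an assumption.

```python
# Illustrative only: computes fees from the rates quoted in this FAQ
# (credit card: 2.9% + $0.20 per transaction; ACH: 1%, capped at $10).
# Rounding to the nearest cent is an assumption, not documented behaviour.

def credit_card_fee(amount: float) -> float:
    return round(amount * 0.029 + 0.20, 2)

def ach_fee(amount: float) -> float:
    return round(min(amount * 0.01, 10.00), 2)

for amount in (100.00, 1500.00):
    print(f"${amount:.2f}: card fee ${credit_card_fee(amount):.2f}, ACH fee ${ach_fee(amount):.2f}")
# $100.00: card fee $3.10, ACH fee $1.00
# $1500.00: card fee $43.70, ACH fee $10.00
```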
https://docs.singleops.com/hc/en-us/articles/4409542503831-FAQ-SingleOps-Payments-powered-by-ProPay
2022-08-07T19:50:04
CC-MAIN-2022-33
1659882570692.22
[]
docs.singleops.com
Creating a HTCondor compute service {#guide-101-htcondor} Overview # {#guide-htcondor-overview} HTCondor is a workload management framework that supervises task executions on various sets of machines. In WRENCH, an HTCondor pool is simulated by a wrench::HTCondorComputeService instance, which is instantiated with a hostname plus: - A std::set of 'child' wrench::ComputeService instances available to the HTCondor pool; and - A std::map of properties ( wrench::HTCondorComputeServiceProperty) and message payloads ( wrench::HTCondorComputeServiceMessagePayload). The set of compute services may include compute service instances that are either wrench::BareMetalComputeService or wrench::BatchComputeService instances. The example below creates an instance of an HTCondor service with a pool of resources containing a Bare-metal server:
// Simulation
wrench::Simulation simulation;
simulation.init(&argc, argv);

// Create a bare-metal service
auto baremetal_service = simulation.add(
    new wrench::BareMetalComputeService(
        "execution_hostname",
        {std::make_pair(
            "execution_hostname",
            std::make_tuple(wrench::Simulation::getHostNumCores("execution_hostname"),
                            wrench::Simulation::getHostMemoryCapacity("execution_hostname")))},
        "/scratch/"));

std::set<std::shared_ptr<wrench::ComputeService>> compute_services;
compute_services.insert(baremetal_service);

auto htcondor_compute_service = simulation.add(
    new wrench::HTCondorComputeService(hostname, std::move(compute_services),
        {{wrench::HTCondorComputeServiceProperty::SUPPORTS_PILOT_JOBS, "false"}}));
Jobs submitted to the wrench::HTCondorComputeService instance will be dispatched automatically to one of the 'child' compute services available to that instance (only one in the above example).
https://docs.wrench-project.org/en/v1.10/wrench_101/htcondor/
2022-08-07T18:45:04
CC-MAIN-2022-33
1659882570692.22
[]
docs.wrench-project.org
Features Primary and secondary zones A DNS zone is a distinct portion or administrative space in the DNS domain name space that is hosted by a DNS server. DNS zones allow the DNS name space to be divided up for administration and for redundancy. The DNS server can be authoritative for multiple DNS zones. All of the information for a zone is stored in a DNS zone file, which contains the DNS database records for all of the names within that zone. These records contain the mapping between an IP address and a DNS name. DNS zone files must always start with a Start of Authority (SOA) record, which contains important administrative information about that zone and about other DNS records. You can implement Edge DNS as your primary or secondary DNS, either replacing or augmenting your existing DNS infrastructure as desired. Whether primary or secondary, Edge DNS can provide your organization with a scalable and secure DNS network that helps ensure the best possible experience for your users. The available zone modes are: - Primary mode. In primary mode, customers manage zones using either Akamai Control Center or the Edge DNS Zone Management API. The Edge DNS zone transfer agent pushes out your zone data to the Edge DNS name servers and provides you a list of name servers that you can register with your domain registrar. - Secondary mode. In secondary mode, customers enable DNS zone transfers from their primary name servers to Akamai. Edge DNS name servers use authoritative transfer (AXFR) as the DNS zone transfer method for secondary zones. However, if you configured your own master names servers to support incremental zone transfers (IXFR), the Edge DNS zone transfer agents (ZTAs) will automatically do incremental zone transfer for secondary zones. In secondary mode, you maintain zone information on your primary (master) name server, and Edge DNS zone ZTAs perform zone transfers from the primary name servers and upload these zones to Akamai name servers. ZTAs conform to the standard protocols described in RFCs 1034 and 1035 and work with most common primary name servers in use, including Internet Systems Consortium's BIND (version 9 and later), and also Microsoft Windows Server and Microsoft DNS operating systems. Refresh and retry intervals in the start of authority (SOA) determine the interval between zone transfers. In addition, you can configure the system to accept NOTIFY requests from your primaries to allow almost immediate updates. ZTAs are deployed in a redundant configuration across multiple physical and network locations throughout the Akamai network. All ZTAs will attempt to perform zone transfers from your master name servers, but only one (usually the first one that receives an update using one transfer) will send any given zone update to the name servers. This process uses a proprietary fault-tolerant data transfer infrastructure, thus providing a fault-tolerant system at every level. Cross-account subzone delegation Cross-account subzone delegation provides a mechanism for a parent zone owner to securely grant another Edge DNS account the capability to delegate subzones on the owner's existing zones. The zone owner participating in subzone grant requests needs to have their Edge DNS contract authorized for subzone grants. Contact your service representative for authorization. After a service representative adds a zone owner's contract to the allow list, the zone owner can enable subzone delegation on a specified zone. 
Enabling subzone creation permits any Akamai customer to submit a subzone request on this zone. Without an approved subzone grant request, every subzone starts in a PENDING_APPROVAL state. The subzone owner can begin building up records and uploading zone files while they wait for approval of their request from the zone owner. The zone owner is notified of any pending subzone requests and either approves or rejects them during the review process. Zone apex mapping Zone apex mapping uses the mapping data available on the Akamai platform to reduce DNS lookup times for your websites. With zone apex mapping, name servers resolve DNS lookup requests with the IP address of an optimal edge server. Resolving lookups in this way helps you: - Eliminate the CNAME chains inherent with CDN solutions - Reduce DNS lookup times for any website on the Akamai platform - Deploy Akamai acceleration solutions for records at the zone apex for which a CNAME cannot otherwise be used You use the AKAMAICDN private resource record type to configure zone apex mapping. DNSSEC for Edge DNS The DNS security extensions (DNSSEC), described in RFCs 4033, 4034, and 4035, allow zone administrators to digitally sign zone data using public key cryptography, proving their authenticity. The primary idea behind DNSSEC is to prevent DNS cache poisoning and DNS hijacking. These record types are used for DNSSEC: - DNSKEY (DNS public key). Stores the public key used for resource record set signatures. - RRSIG (resource record signature). Stores the signature for a resource record set (RRset). - DS (delegation signer). Parent zone pointer to a child zone's DNSKEY. - NSEC3 (next secure v3). Used for authenticated NXDOMAIN. The Security Option contract item of Edge DNS supports these features: - DNSSEC "sign and serve". Akamai manages signing the zone, key rotation, and serving the zone. - DNSSEC "serve". You manage signing the zone and key rotation, while Akamai serves the zone. DNSSEC sign and serve The DNSSEC sign and serve feature provides the ability to offload the DNSSEC support entirely to Akamai's existing key management infrastructure (KMI) for the zone signing key (ZSK) and key signing key (KSK) rotation. The ZSK is rotated weekly and the KSK is rotated annually. For zone key rotation, Akamai uses RFC 4641's prepublish key rollover method, modified for constant rotation. That is, two ZSKs are present in the zone apex DNSKEY record. One key actively signs the rest of the zone, while the other key is present so it has time to propagate before becoming active. This method: - Introduces a new, as of yet unused, DNSKEY record into the apex DNSKEY RRset. - Waits for the data to propagate (propagation time plus keyset TTL). - Switches to signing the zone's RRSIGs with the new key, but leaving the previous key available in the apex DNSKEY RRset. - Waits for propagation time plus maximum TTL in the zone. - Removes the old key from the apex DNSKEY RRset, which will then restart the key rotation process. Signature duration is three days. To be sure signatures don't reach expiration, even if records are not being modified, the zone is re-signed at least once per day. An added benefit of DNSSEC sign and serve is the ability to support top-level redirection. The current recommended algorithm is ECDSA-P256-SHA256, or RSA SHA-256 if you want to avoid the use of ECDSA. DNSSEC can be used with both ZAM and top-level redirection. 
DNSSEC serve The DNSSEC serve feature provides the ability to support DNSSEC for secondary zones, but the zone administrator is responsible for implementing their own key management infrastructure (KMI) solution and properly rotating their zone signing key (ZSK) and key signing key (KSK). DNSSEC requires transaction signature (TSIG). For zone access control, you need to enable TSIG with the supported algorithms. In addition to your responsibility for all of the key signing, you must ensure that all the necessary new records are in the zone transfer to Akamai. Subset RRsets of self-signed zones not served If you have a self-signed zone, Edge DNS won't serve subset RRsets. It will serve the full RRset as defined in your zone. If the RRset is too large for the standard DNS packet size, your end users' caching name servers will need to negotiate a larger packet size with extension mechanisms for DNS (EDNS0), or else use TCP. If you're concerned about end users' name servers not having this functionality, configure smaller RRsets in your zone. Alias zones An alias zone is a zone type that does not have its own zone file. Instead, it bases itself on another Edge DNS (base) zone's (either primary or secondary) resource records. In other words, the zone data is a copy of another Edge DNS zone. You can modify data in alias zones by changing the "parent" or "alias-of" zone. An organization may have several hundred vanity or brand domains that need to be registered and for which DNS services are required, but for which DNS is configured identically to a base zone. In these cases you can configure one base zone, point many aliases to the base zone, and easily manage any DNS changes by updating only the base zone file. Default limit for contracts is 2,000 By default, Edge DNS contracts are limited to 2,000 configured zones, including aliases. Contact technical support if you need to exceed this limit. Logging Alias zones generate their own log lines independent of their base zone. To receive logs with alias zone traffic, you need to enable log delivery for each alias zone individually. Compatibility Alias zones are compatible with: - Zone apex mapping - Top-level CNAME (Can only be configured by technical support.) - Vanity name servers (These name server records should not be from a base zone with aliases.) When using these features, pay attention before adding alias zones. Check the property receiving the traffic in Akamai Property Manager, ensuring that the appropriate hostnames are configured and that the correct redirects are set up. If HTTPS support is required, make sure that the certificates are configured correctly. With a certificate that supports subject alternative names (SAN), this typically means adding the domain apex to the certificate. Alias zones are not compatible with DNSSEC. Each DNSSEC zone requires its own resource record signatures. Base zones with DNSSEC enabled can't have any aliases. Alias zone example Consider a base zone, example-base-zone.com, with a set of resource records (displayed in standard BIND zone format), and a zone example-alias-zone.com configured as an alias of example-base-zone.com. Any time a DNS request asks for resolution of a resource record in the alias zone, Edge DNS will answer with the resource records specified in the base zone, as if the alias zone had the same resource records as the base zone. Any time a change is made to the base zone's resource records, all the aliases of that zone will reflect the same change. 
Billing When configured on Edge DNS, alias zones receive traffic like any other zone and are counted as regular zones from a billing standpoint. For example, if a Edge DNS contract allows for 50 zones, and 20 regular zones and 30 alias zones are configured, then these 50 zones will be within the contract entitlement. If 5 additional alias zones are configured, then the total of 55 zones will incur a 5 zone overage based on the per-zone overage rate. Supported resource record types Edge DNS supports the Internet (IN) class and the following record types. - A. IPv4 address. - AAAA. IPv6 address. - AFSDB. AFS database. - AKAMAICDN. Akamai private resource record for zone apex mapping. - AKAMAITLC. Akamai private resource record for top-level CNAME. Can be configured only by technical support. Akamai recommends using AKAMAICDN instead. - CAA. Certification authority authorization. - CERT. Certificate record that stores public key certificates. - CDNSKEY. Child copy of the DNSKEY record, for transfer to parent. To add record sets of this type, use the Edge DNS Zone Management API. - CDS. Child copy of the DS record, for transfer to parent. To add record sets of this type, use the Edge DNS Zone Management API. - CNAME. Canonical name. - DNSKEY. DNS key. Stores the public key used for RRset signatures. Required for DNS security extensions (DNSSEC). - DS. Delegation signer. Parent zone pointer to a child zone's DNSKEY. Required for DNSSEC. - HINFO. System information. - HTTPS. Hypertext Transfer Protocol Secure. - LOC. Location. - MX. Mail exchange. - NAPTR. Naming authority pointer. - NS. Name server. - NSEC. Next secure. Available for self-signed secondary zones only. NSEC3 is a better choice. - NSEC3. Next secure, version 3. Used for authenticated NXDOMAIN. Required for DNSSEC. - NSEC3PARAM. NSEC3 parameters. - PTR. Pointer. - RP. Responsible person. - RRSIG. Resource record set (RRset) signature. Stores the digital signature used to authenticate data that is in the signed RRset. Required for DNSSEC. - SSHFP. Secure shell fingerprint record. Identifies SSH keys that are associated with a host name. - SOA. Start of authority record. Stores administrative information about a zone, including data to control zone transfers. To add record sets of this type, use the Edge DNS Zone Management API. - SPF. Sender policy framework. - SRV. Service locator. - SVCB. Service bind. - TLSA. Transport Layer Security Authentication certificate association. Used to associate a TLS server certificate or public key with the domain name where the record is found. - TXT. Text. - ZONEMD. Message digests for DNS zones. To add record sets of this type, use the Edge DNS Zone Management API. Updated 10 months ago
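As a resolver-side illustration of some of the record types discussed above (SOA, NS, DNSKEY, DS), here is a small sketch using the third-party dnspython package; it is not an Akamai tool, and the zone name is a placeholder.

```python
# Resolver-side sketch (not an Akamai tool): inspect a zone's basic and DNSSEC-related
# records with the third-party dnspython package. Replace example.com with a zone you manage.
import dns.resolver

zone = "example.com"

for rtype in ("SOA", "NS", "DNSKEY", "DS"):
    try:
        answer = dns.resolver.resolve(zone, rtype)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN) as exc:
        print(f"{rtype:7s} -> no data ({exc.__class__.__name__})")
        continue
    for rr in answer:
        print(f"{rtype:7s} -> {rr.to_text()}")
```

The presence of DNSKEY records at the zone apex and a matching DS record in the parent zone is what the "sign and serve" and "serve" modes described above ultimately produce.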
https://techdocs.akamai.com/edge-dns/docs/features#zone-apex-mapping
2022-08-07T18:41:21
CC-MAIN-2022-33
1659882570692.22
[]
techdocs.akamai.com
Create a Web Site by Tali Smith - Start IIS Manager. For information about starting IIS Manager, see Open IIS Manager (IIS 7). For information about navigating to locations in the UI, see Navigation in IIS Manager (IIS 7). - In the Connections pane, right-click the Sites node in the tree view, and then click Add Web Site. - In the Add Web Site dialog box, type a friendly name for your Web site in the Web site name box. - In the Physical path box, type the physical path of the Web site's folder, or browse the file system to find the folder. - If the physical path that you entered in the previous step is to a remote share, click Connect as to specify credentials that have permission to access the path. If you do not use specific credentials, select the Application user (pass-through authentication) option.
https://docs.microsoft.com/en-us/iis/get-started/getting-started-with-iis/create-a-web-site
2022-08-07T18:32:31
CC-MAIN-2022-33
1659882570692.22
[]
docs.microsoft.com
Use the mobile base properly Basic movements We recommend that you get a feel for the inertia of the robot by holding on to the metal pole and pushing and pulling it. Block a wheel with your foot and try to gently tilt the robot. Then use the controller to move the robot around as explained in Moving the mobile base. Common risks and advice Even though the mobile base is programmed to move relatively slowly, it is important to try to avoid any potential collisions. The mobile base is designed to be used indoors on a flat surface. 💡 The arms and any grasped objects should ideally stay within the vertical projection of the mobile base. The idea here is that the robot should always be able to rotate in place safely. Keeping the arms tucked in also reduces the risk of tipping. A low-level collision avoidance safety always runs in the background, but it can only prevent collisions seen by the LIDAR. Read more about it in Anti-collision safety. A non-exhaustive list of cases where the safety can't work: - Stairwells. There are no cliff sensors, so the robot has no practical way of knowing it's near downward stairs. - A table. The LIDAR can see the legs but not the table top. - A large clean bay window might be invisible to the LIDAR. - Small obstacles that fit below the LIDAR won't be seen. Another risk is the robot tipping. Avoid rapid variations in acceleration. Also, we don't recommend using the robot on a slope above 10°. Battery management We chose a high-end battery for the mobile base. The LiFePO4 technology and the overall grade of the equipment (BMS, charger, monitoring system, certified UN 38.3, 5-year warranty) should make this one of the safest choices available on the market. However, the battery can hold a large amount of energy (832 Wh) and should always be treated carefully. ⚠️ Only use the dedicated charger to charge the battery. 💡 When storing the battery for long periods, aim for at least 60% charge. 💡 Use the monitoring system (small screen on the mobile base) to recharge the battery before reaching 0%, rather than relying on the BMS to shut down the battery. Dynamic capabilities The wheel motors are very powerful. In their default configuration, they are used at 20% of their maximum capabilities. You can, at your own risk, modify this limit in the configuration of the HAL.
https://docs.pollen-robotics.com/advanced/safety/mobile-base/
2022-08-07T19:51:58
CC-MAIN-2022-33
1659882570692.22
[]
docs.pollen-robotics.com
WRENCH 101 {#wrench-101} This page provides high-level and detailed information about what WRENCH simulators can simulate and how they do it. Full API details are provided in the User API Reference. See the relevant pages for instructions on how to install WRENCH and how to setup a simulator project. - WRENCH 101 {#wrench-101} - 10,000-ft view of a WRENCH simulator # {#wrench-101-simulator-10000ft} - 1,000-ft view of a WRENCH simulator # {#wrench-101-simulator-1000ft} - Step 0: Include wrench.h # {#wrench-101-simulator-1000ft-step-0} - Step 1: Create and initialize a simulation # {#wrench-101-simulator-1000ft-step-1} - Step 2: Instantiate a simulated platform # {#wrench-101-simulator-1000ft-step-2} - Step 3: Instantiate services on the platform # {#wrench-101-simulator-1000ft-step-3} - Step 4: Create at least one workflow # {#wrench-101-simulator-1000ft-step-4} - Step 5: Instantiate at least one WMS per workflow # {#wrench-101-simulator-1000ft-step-5} - Step 6: Launch the simulation # {#wrench-101-simulator-1000ft-step-6} - Step 7: Process simulation output # {#wrench-101-simulator-1000ft-step-7} - Available services # {#wrench-101-simulator-services} - Customizing services # {#wrench-101-customizing-services} - Customizing logging # {#wrench-101-logging} 10,000-ft view of a WRENCH simulator # {#wrench-101-simulator-10000ft} workflow to be executed, which consists of a set of compute tasks each with input and output files, with control and/or data dependencies between tasks. A special service is then created, called a Workflow Management System (WMS), that will be in charge of executing the workflow on the platform using available hardware resources and software services. The WMS is implemented using the WRENCH Developer API, as discussed in the WRENCH 102 page. The simulation is then launched via a single call ( wrench::Simulation::launch()), and returns only once the WMS has terminated (after completing or failing to complete the execution of the workflow). Simulation output can be analyzed programmatically and/or dumped to a JSON file. This JSON file can be loaded into the WRENCH dashboard tool (just run the wrench-dashboard executable, which should have been installed on your system). 1,000-ft view of a WRENCH simulator # {#wrench-101-simulator-1000ft} In this section, we dive deeper into what it takes to implement a WRENCH simulator. To provide context, we refer to the example simulator in the examples/basic-examples/bare-metal-chain directory of the WRENCH distribution. This simulator simulates the execution of a chain workflow on a two-host platform that runs one compute service and one storage service. Although other examples are available (see examples/README.md), this simple example is sufficient to showcase most of what a WRENCH simulator does, which consists in going through the steps below. Note that the simulator's code contains extensive comments as well. Step 0: Include wrench.h # {#wrench-101-simulator-1000ft-step-0} For ease of use, all WRENCH abstractions in the WRENCH User API are available through a single header file: #include <wrench.h> Step 1: Create and initialize a simulation # {#wrench-101-simulator-1000ft-step-1} The state of a WRENCH simulation is defined by the wrench::Simulation class. A simulator must create an instance of this class and initialize it with the wrench::Simulation::init() member function. 
The bare-metal-chain simulator does this as follows: wrench::Simulation simulation; --help-wrench, which displays a WRENCH help message, and --help-simgrid, which displays an extensive SimGrid help message. Step 2: Instantiate a simulated platform # {#wrench-101-simulator-1000ft-step-2} This is done with the wrench::Simulation::instantiatePlatform() member function which takes as argument a SimGrid virtual platform description file. Any SimGrid simulation, and thus any WRENCH simulation, must be provided with the description of the simulated hardware platform (compute hosts, clusters of hosts, storage resources, network links, routers, routes between hosts, etc.). The bare-metal-chain simulator comes with a platform description file, examples/basic-examples/bare-metal-chain/two_hosts.xml, which we include here: <?xml version='1.0'?> <!DOCTYPE platform SYSTEM ""> <platform version="4.1"> <zone id="AS0" routing="Full"> <!-- The host on which the WMS will run --> <host id="WMSHost" speed="10Gf" core="1"> <disk id="hard_drive" read_bw="100MBps" write_bw="100MBps"> <prop id="size" value="5000GiB"/> <prop id="mount" value="/"/> </disk> </host> <!-- The host on which the BareMetalComputeService will run --> <host id="ComputeHost" speed="1Gf" core="10"> <prop id="ram" value="16GB" /> </host> <!-- A network link that connects both hosts --> <link id="network_link" bandwidth="50MBps" latency="20us"/> <!-- WMSHost's local "loopback" link --> <link id="loopback_WMSHost" bandwidth="1000EBps" latency="0us"/> <!--ComputeHost's local "loopback" link --> <link id="loopback_ComputeHost" bandwidth="1000EBps" latency="0us"/> <!-- Network routes --> <route src="WMSHost" dst="ComputeHost"> <link_ctn id="network_link"/> </route> <!-- Each loopback link connects each host to itself --> <route src="WMSHost" dst="WMSHost"> <link_ctn id="loopback_WMSHost"/> </route> <route src="ComputeHost" dst="ComputeHost"> <link_ctn id="loopback_ComputeHost"/> </route> </zone> </platform> This file defines a platform with two hosts, WMSHost and ComputeHost. The former is a 1-core host with compute speed 10 Gflop/sec, with a 5000-GiB disk with 100 MB/sec read and write bandwidth, which is mounted at /. The latter is a 10-core host where each core computes at speed 1Gflop/sec and with a total RAM capacity of 16 GB. The platform also declares three network links. The first one, called network_link is an actual network link to interconnect the two hosts, with 50 MB/sec bandwidth and 20 microsecond latency. The other two links ( loopback_WMSHost and loopback_ComputeHost) are used to model inter-process communication (IPC) performance within each host. Last, network routes are declared. The route from host WMSHost and ComputeHost is through network_link. Then, there is a route from each host to itself using each loopback link. Note that these loopback routes are optional. By default SimGrid includes a loopback route for each host, with bandwidths and latencies based on measurements obtained on actual computers. The above XML file does not use these defaults, and instead declare loop routes through much faster loopback links (zero latency and extremely high bandwidth). This is because, for this simulation, we want to model a platform in which IPC on a host is essentially free. We refer the reader to platform description files in other examples in the examples directory and to the SimGrid documentation for more information on how to create platform description files. 
The bare-metal-chain simulator takes the path to the platform description as its 2nd command-line argument and thus instantiates the simulated platform as: simulation.instantiatePlatform(argv[2]); Step 3: Instantiate services on the platform # {#wrench-101-simulator-1000ft-step-3} three services. The first one is a compute service: auto bare_metal_service = simulation.add(new wrench::BareMetalComputeService("ComputeHost", {"ComputeHost"}, "", {}, {}));. It has access to the compute resources of that same host (2nd argument). The third argument corresponds to the path of some scratch storage, i.e., storage in which data can be stored temporarily while a job runs. In this case, the scratch storage specification is empty as host ComputeHost has no disk attached to it. (See the examples/basic-examples/bare-metal-chain-scratch example simulator, in which scratch storage is used). The last two arguments are std::map objects (in this case both empty), that are used to configure properties of the compute service (see details in this section below). The second service is a storage service: auto storage_service = simulation.add(new wrench::SimpleStorageService( "WMSHost", {"/"}, { WMSHost. It uses storage mounted at /. The last two arguments, as for the compute service, are used to configure particular properties of the service. In this case, the service is configured to use a 50-MB buffer size to pipeline network and disk accesses (see details in this section below). The third service is a file registry service: auto file_registry_service = new wrench::FileRegistryService("WMSHost"); simulation.add(file_registry_service); The wrench::FileRegistryService class implements a simulation of a key-values pair service that stores for each file (the key) the locations where the file is available for read/write access (the values). This service can be used by a WMS to find out where workflow files are located (and is often required - see Step #4 hereafter). Step 4: Create at least one workflow # {#wrench-101-simulator-1000ft-step-4} Every WRENCH simulator simulates the execution of a workflow, and thus must create an instance of the wrench::Workflow class. This class has member functions to manually create tasks and files and add them to the workflow. For instance, the bare-metal-chain simulator does this as follows: wrench::Workflow workflow; /* Add workflow tasks */ for (int i=0; i < num_tasks; i++) { /* Create a task: 10GFlop, 1 to 10 cores, 10MB memory footprint */ auto task = workflow.addTask("task_" + std::to_string(i), 10000000000.0, 1, 10, 10000000); } /* Add workflow files */ for (int i=0; i < num_tasks+1; i++) { /* Create a 100MB file */ workflow.addFile("file_" + std::to_string(i), 100000000); } /* Set input/output files for each task */ for (int i=0; i < num_tasks; i++) { auto task = workflow.getTaskByID("task_" + std::to_string(i)); task->addInputFile(workflow.getFileByID("file_" + std::to_string(i))); task->addOutputFile(workflow.getFileByID("file_" + std::to_string(i + 1))); } The above creates a "chain" workflow (hence the name of the simulator), in which the output from one task is input to the next task. The number of tasks is obtained from a command-line argument. In the above code, each task has 100% parallel efficiency (e.g., will run 10 times faster when running on 10 cores than when running on 1 core). 
It is possible to customize the parallel efficiency behavior of a task, as demonstrated in examples/basic-examples/bare-metal-multicore-tasks for an example simulator in which tasks with different parallel efficiency models are created and executed. The wrench::Workflow class also provides member functions to import workflows from workflow description files in standard JSON format and DAX format. The input files to the workflow must be available (at some storage service) before the simulated workflow execution begins. These are the files that are input to some tasks, but not output from any task. They must be "staged" on some storage service, and the bare-metal-chain simulator does it as: for (auto const &f : workflow.getInputFiles()) { simulation.stageFile(f, storage_service); } Note that in this particular case there is a single input file. But the code above is more general, as it iterates over all workflow input files. The above code will throw an exception if no wrench::FileRegistryService instance has been added to the simulation. Step 5: Instantiate at least one WMS per workflow # {#wrench-101-simulator-1000ft-step-5} One special service that must be started is a Workflow Management System (WMS) service, i.e., software that is in charge of executing the workflow given available software and hardware resources. The bare-metal-chain simulator does this as: auto wms = simulation.add(new wrench::OneTaskAtATimeWMS({baremetal_service}, {storage_service}, "WMSHost")); Class wrench::OneTaskAtATimeWMS, which is part of this example simulator, is implemented using the WRENCH Developer API. See the WRENCH 102 page for information on how to implement a WMS with WRENCH. The code above passes the list of compute services (1st argument) and the list of storage services (2nd argument) to the WMS constructor. The 3rd argument specifies that the WMS should run on host WMSHost. The previously created workflow is then associated to the WMS: wms->addWorkflow(&workflow); Step 6: Launch the simulation # {#wrench-101-simulator-1000ft-step-6} This is the easiest step, and is done by simply calling wrench::Simulation::launch(): simulation.launch(); This call checks the simulation setup, and blocks until the WMS terminates. Step 7: Process simulation output # {#wrench-101-simulator-1000ft-step-7} Once wrench::Simulation::launch() has returned, simulation output can be processed programmatically. The wrench::Simulation::getOutput() member function returns an instance of class wrench::SimulationOutput. Note that there are member functions to configure the type and amount of output generated (see the wrench::SimulationOutput::enable*Timestamps() member functions). The bare-metal-chain simulator does minimal output processing as: auto trace = simulation.getOutput().getTrace<wrench::SimulationTimestampTaskCompletion>(); for (auto const &item : trace) { std::cerr << "Task " << item->getContent()->getTask()->getID() << " completed at time " << item->getDate() << std::endl; Specifically, class wrench::SimulationOutput has a templated wrench::SimulationOutput::getTrace() member function to retrieve traces for various information types. The first line of code above returns a std::vector of time-stamped task completion events. The second line of code iterates through this vector and prints task names and task completion dates (in seconds). The classes that implement time-stamped events are all classes named wrench::SimulationTimestampSomething, where "Something" is self-explanatory (e.g., TaskCompletion, TaskFailure). 
(Simulating energy consumption additionally requires that the --activate-energy command-line option be passed to the simulator.) For a visual view of simulation output, see for instance the example simulators in examples/basic-examples/bare-metal-bag-of-task and examples/basic-examples/cloud-bag-of-task. These simulators produce a JSON file in /tmp/wrench.json. Simply run the command wrench-dashboard, which pops up a Web browser window in which you simply upload the /tmp/wrench.json file. Available services # {#wrench-101-simulator-services} Below is the list of services available to date in WRENCH. Click on the corresponding links for more information on what these services are and on how to create them. Compute Services: These are services that know how to compute workflow tasks: - Bare-metal Servers - Cloud Platforms - Virtualized Cluster Platforms - Batch-scheduled Clusters Other service types include Storage Services, File Registry Services, and Network Proximity Services. EnergyMeter Services: These services are used to periodically measure host energy consumption and include these measurements in the simulation output (see this section). Workflow Management Systems (WMSs) (derive from wrench::WMS): A WMS provides the mechanisms for executing workflow applications, including decision-making for optimizing various objectives (often attempting to minimize workflow execution time). At least one WMS should be provided for running a simulation. By default, WRENCH does not provide a WMS implementation as part of its core components. Each example simulator in the examples/ directory implements its own WMS. Additional WMS implementations may also be found on the WRENCH project website. See WRENCH 102 for information on how to implement a WMS. Customizing services # {#wrench-101-customizing-services} Each service can be customized by passing a property list and a message payload list to its constructor, which control the service's behavior and the size of the simulated control messages it exchanges with other services (for example, when a WMS service sends a "do some work" message to a compute service). Customizing logging # {#wrench-101-logging} Consider the example simulator in the examples/basic-examples/bare-metal-chain directory, executed as: ./wrench-example-bare-metal-chain 10 ./two_hosts.xml The above generates no output whatsoever. It is possible to enable some logging to the terminal. It turns out the WMS class in that example ( OneTaskAtATimeWMS.cpp) defines a logging category named custom_wms (see one of the first lines of examples/basic-examples/bare-metal-chain/OneTaskAtATimeWMS.cpp), which can be enabled as: ./wrench-example-bare-metal-chain 10 ./two_hosts.xml --log=custom_wms.threshold=info You will now see some (green) logging output that is generated by the WMS implementation. It is typical to want to see these messages, as the WMS is the brain of the workflow execution, and they can be enabled while other messages remain disabled. One can disable the coloring of the logging output with the --wrench-no-color argument: ./wrench-example-bare-metal-chain 10 ./two_hosts.xml --log=custom_wms.threshold=info --wrench-no-color Disabling color can be useful when redirecting the logging output to a file. Enabling all logging is done with the argument --wrench-full-log: ./wrench-example-bare-metal-chain 10 ./two_hosts.xml --wrench-full-log
https://docs.wrench-project.org/en/v1.8/wrench_101/
2022-08-07T18:17:59
CC-MAIN-2022-33
1659882570692.22
[]
docs.wrench-project.org
File system Windows file system settings are stored in the File system key. HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\FileSystem File System key Values: NTFS Disable Short (8.3) Filename Creation value NTFS Disable Last Access Time value TODO the explanation of the values differs between versions of Windows The meaning of the value 0 for Windows 2000 according to NtfsDisableLastAccessUpdate: When listing directories, NTFS updates the last-access timestamp on each directory it detects, and it records each time change in the NTFS log. In contrast to the meaning of the value 0 for Windows 2003 according to NtfsDisableLastAccessUpdate: NTFS updates the last-accessed timestamp of a file whenever that file is opened. TODO value does not exist by default until Windows XP SP3/Vista
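For illustration, here is a small Python sketch (standard winreg module only, so it runs on Windows) that reads these values from a live registry; it is not part of winreg-kb itself. The value name NtfsDisableLastAccessUpdate comes from the text above, while NtfsDisable8dot3NameCreation is assumed to be the conventional name of the short (8.3) filename creation value; either value may be absent on a given system.

# Sketch: read the File System values described above (Windows only).
import winreg

KEY_PATH = r"System\CurrentControlSet\Control\FileSystem"
VALUE_NAMES = (
    "NtfsDisable8dot3NameCreation",   # assumed name of the short (8.3) filename setting
    "NtfsDisableLastAccessUpdate",    # last-access timestamp setting discussed above
)

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    for name in VALUE_NAMES:
        try:
            data, value_type = winreg.QueryValueEx(key, name)
            print(f"{name}: {data} (registry type {value_type})")
        except FileNotFoundError:
            # The value may not exist by default, as noted above.
            print(f"{name}: not present")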
https://winreg-kb.readthedocs.io/en/latest/sources/system-keys/File-system.html
2022-08-07T18:36:25
CC-MAIN-2022-33
1659882570692.22
[]
winreg-kb.readthedocs.io
Object. Finalize Method Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. !Object () ~Object (); abstract member Finalize : unit -> unit override this.Finalize : unit -> unit Finalize () Examples open System.Diagnostics type ExampleClass() = let sw = Stopwatch.StartNew() do printfn "Instantiated object" member this.ShowDuration() = printfn $"This instance of {this} has been in existence for {sw.Elapsed}" override this.Finalize() = printfn "Finalizing object" sw.Stop() printfn $"This instance of {this} has been in existence for {sw.Elapsed}" let ex = '. Remarks.).(disposing:(); } } } open Microsoft.Win32.SafeHandles open System open System.ComponentModel open System.IO open System.Runtime.InteropServices // Windows API constants. let HKEY_CLASSES_ROOT = 0x80000000 let ERROR_SUCCESS = 0 let KEY_QUERY_VALUE = 1 let KEY_SET_VALUE = 0x2 let REG_SZ = 1u let MAX_PATH = 260 // Windows API calls. [<DllImport("advapi32.dll", CharSet= CharSet.Auto, SetLastError=true)>] extern int RegOpenKeyEx(nativeint hKey, string lpSubKey, int ulOptions, int samDesired, nativeint& phkResult) [<DllImport("advapi32.dll", CharSet= CharSet.Unicode, EntryPoint = "RegQueryValueExW", SetLastError=true)>] extern int RegQueryValueEx(nativeint hKey, string lpValueName, int lpReserved, uint& lpType, string lpData, uint& lpcbData) [<DllImport("advapi32.dll", SetLastError = true)>] extern int RegSetValueEx(nativeint)>] extern int RegCloseKey(unativeint hKey) type FileAssociationInfo(fileExtension: string) = // Private values. let ext = if fileExtension.StartsWith "." |> not then "." + fileExtension else fileExtension let mutable args = "" let mutable hAppIdHandle = Unchecked.defaultof<SafeRegistryHandle> let mutable hExtHandle = Unchecked.defaultof<SafeRegistryHandle> let openCmd = let mutable lpType = 0u let mutable hExtension = 0n // Get the file extension value. let retVal = RegOpenKeyEx(nativeint HKEY_CLASSES_ROOT, fileExtension, 0, KEY_QUERY_VALUE, &hExtension) if retVal <> ERROR_SUCCESS then raise (Win32Exception retVal) // Instantiate the first SafeRegistryHandle. hExtHandle <- new SafeRegistryHandle(hExtension, true) let appId = String(' ', MAX_PATH) let mutable appIdLength = uint appId.Length let retVal = RegQueryValueEx(hExtHandle.DangerousGetHandle(), String.Empty, 0, &lpType, appId, &appIdLength) if retVal <> ERROR_SUCCESS then raise (Win32Exception retVal) // We no longer need the hExtension handle. hExtHandle.Dispose() // Determine the number of characters without the terminating null. let appId = appId.Substring(0, int appIdLength / 2 - 1) + @"\shell\open\Command" // Open the application identifier key. let exeName = String(' ', MAX_PATH) let exeNameLength = uint exeName.Length let mutable hAppId = 0n let retVal = RegOpenKeyEx(nativeint HKEY_CLASSES_ROOT, appId, 0, KEY_QUERY_VALUE ||| KEY_SET_VALUE, &hAppId) if retVal <> ERROR_SUCCESS then raise (Win32Exception retVal) // Instantiate the second SafeRegistryHandle. hAppIdHandle <- new SafeRegistryHandle(hAppId, true) // Get the executable name for this file type. 
let exePath = String(' ', MAX_PATH) let mutable exePathLength = uint exePath.Length let retVal = RegQueryValueEx(hAppIdHandle.DangerousGetHandle(), String.Empty, 0, &lpType, exePath, &exePathLength) if retVal <> ERROR_SUCCESS then raise (Win32Exception retVal) // Determine the number of characters without the terminating null. let exePath = exePath.Substring(0, int exePathLength / 2 - 1) // Remove any environment strings. |> Environment.ExpandEnvironmentVariables let position = exePath.IndexOf '%' if position >= 0 then args <- exePath.Substring position // Remove command line parameters ('%0', etc.). exePath.Substring(0, position).Trim() else exePath member _.Extension = ext member _.Open with get () = openCmd and set (value) = if hAppIdHandle.IsInvalid || hAppIdHandle.IsClosed then raise (InvalidOperationException "Cannot write to registry key.") if not (File.Exists value) then raise (FileNotFoundException $"'{value}' does not exist") let cmd = value + " %1" let retVal = RegSetValueEx(hAppIdHandle.DangerousGetHandle(), String.Empty, 0, REG_SZ, value, value.Length + 1) if retVal <> ERROR_SUCCESS then raise (Win32Exception retVal) member this.Dispose() = this.Dispose true GC.SuppressFinalize this member _.Dispose(disposing) = // Ordinarily, we release unmanaged resources here // but all are wrapped by safe handles. // Release disposable objects. if disposing then if hExtHandle <> null then hExtHandle.Dispose() if hAppIdHandle <> null then hAppIdHandle.Dispose() interface IDisposable with member this.Dispose() = this.Dispose() Imports Microsoft.Win32.SafeHandles(disposing:
https://docs.microsoft.com/en-us/dotnet/api/system.object.finalize?view=net-6.0
2022-08-07T18:48:53
CC-MAIN-2022-33
1659882570692.22
[]
docs.microsoft.com
Help & Feedback In the "Help & Feedback" section, you can turn the documentation links in your application on or off by clicking the [Toggle] icon. They are enabled by default, but you can change the setting whenever you need to. These links help you get the most out of SellerSkills: click the [Reference Info] icon on the page you want to learn more about, and you will be redirected to the relevant chapter of the SellerSkills documentation. Note! To turn off the documentation links in the application, point the cursor at the [Reference Info] icon and click the [On/Off] icon. Choose [Disable] to confirm deactivation, or click [Keep It Working] to keep using help. You can contact us and send requests either from the SellerSkills application or from our official site. To reach the "Help & Feedback" section in the application, click the avatar image in the top right corner and choose [Help & Feedback] from the list. Specify the topic of your request by clicking the [Dropdown] icon in the "Topic" (1) field and picking the appropriate topic. In the "Message" (2) text box, describe your request. You can also attach .jpg, .png, or .gif files (3) to your message by dragging and dropping them or browsing from your device; the maximum size of each file is 5 MB. Afterward, click [Send] to submit the request or [Cancel] to discard it. SellerSkills is an open and dynamic company, and we are glad to receive your requests and feedback. Every message is processed, and our support team replies with the information you need. You can also contact our support department via this link on our official site. There, specify the topic (1) of the request, your name (2), and the email address (3) where we should send the response. In the "Message" (4) field, describe the details of your request. You can also attach .jpg, .png, .jpeg, .jp2, .jpm, .jpx, or .jxr files (5) by dragging and dropping them or browsing from your device. To send the request, click [Submit Form] (6). Whichever method you choose (the in-application "Help & Feedback" section or the "Contact Us" form on the SellerSkills site), you will receive an answer from our support team. We use all your feedback and suggestions to keep improving. Thank you for choosing us!

https://docs.sellerskills.com/help-and-feedback
2022-08-07T19:28:44
CC-MAIN-2022-33
1659882570692.22
[]
docs.sellerskills.com
interfaces bridge <brx> ipv6 address Assigns an IPv6 address to a bridge interface. - brx - Bridge group ID. - autoconf - Generates an IPv6 address using the Stateless Address Autoconfiguration (SLAAC) protocol. Set this value if the interface is performing a “host” function rather than a “router” function. This value can be specified in addition to specifying static IPv6, static IPv4, or IPv4 DHCP addresses on the interface. - ipv6prefix - The 64-bit IPv6 address prefix used to configure an IPv6 address, in EUI-64 format. The system concatenates this prefix with a 64-bit EUI-64 value derived from the 48-bit MAC address of the interface. Configuration mode interfaces bridge brx { ipv6 { address { autoconf eui64 ipv6prefix } } } Use this command to assign an IPv6 address to an interface. You can use the autoconf keyword to direct the system to autoconfigure the address, using the SLAAC protocol defined in RFC 4862. Alternatively, you can provide an EUI-64 IPv6 address prefix so that the system constructs the IPv6 address. If you want the system to use SLAAC to acquire addresses on this interface, then in addition to setting this parameter, you must also disable IPv6 forwarding, either globally (using the system ipv6 disable-forwarding command) or specifically on this interface (using the interfaces bridge brx ipv6 disable-forwarding command). Use the set form of this command to specify an IPv6 address for the interface. Use the delete form of this command to delete an IPv6 address from the interface. Use the show form of this command to view IPv6 address configuration settings.
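As a hypothetical illustration of the set, show, and delete forms described above (the bridge group br0 and the prefix 2001:db8:1::/64 are placeholders):

# Static EUI-64 address from a placeholder prefix.
set interfaces bridge br0 ipv6 address eui64 2001:db8:1::/64
commit
show interfaces bridge br0 ipv6 address

# Alternatively, autoconfigure via SLAAC (host role); also disable IPv6
# forwarding on the interface, as noted above.
set interfaces bridge br0 ipv6 address autoconf
set interfaces bridge br0 ipv6 disable-forwarding
commit

# Remove the static address again.
delete interfaces bridge br0 ipv6 address eui64 2001:db8:1::/64
commit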
https://docs.vyatta.com/en/supported-platforms/vrouter/configuration-vrouter/system-and-services/bridging/bridge-group-commands/interfaces-bridge-brx-ipv6-address
2022-08-07T18:47:13
CC-MAIN-2022-33
1659882570692.22
[]
docs.vyatta.com
Specify the retention period with one of the following duration units:
- nanoseconds (ns)
- microseconds (us or µs)
- milliseconds (ms)
- seconds (s)
- minutes (m)
- hours (h)
- days (d)
- weeks (w)
The minimum retention period is one hour.
# Syntax
influx bucket update -i <bucket-id> -r <retention period with units>
# Example
influx bucket update -i 034ad714fdd6f000 -r 1209600000000000ns
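If you do not know the bucket ID, one way to look it up first is with influx bucket list; the bucket name and ID below are placeholders, and depending on your setup you may also need to pass --org or an authentication token.

# Find the bucket ID by name, then apply a 30-day retention period.
influx bucket list --name example-bucket
influx bucket update -i 034ad714fdd6f000 -r 30d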
https://test2.docs.influxdata.com/influxdb/v2.3/organizations/buckets/update-bucket/
2022-08-07T20:02:50
CC-MAIN-2022-33
1659882570692.22
[]
test2.docs.influxdata.com
Zone Statistics Bar
The Zone Statistics Bar template displays a bar chart. The bars show the current, minimum, maximum, and average content for the specified Zone, with a different color for each statistic. If you choose a local Zone from an instanced Process Flow, there will be one set of bars per instance.
Properties Panels
The Zone Statistics Bar template uses the following properties panels:
https://docs.flexsim.com/en/22.2/Reference/Dashboard/ChartTemplates/ZoneTemplates/ZoneStatisticsBar/ZoneStatisticsBar.html
2022-08-07T20:14:05
CC-MAIN-2022-33
1659882570692.22
[array(['Images/Demo.png', None], dtype=object)]
docs.flexsim.com
Click the Expediting task on the dashboard, or select Expediting from the Tasks list.
Select an agreement.
Select the Item Shipment Details tab. The item shipments of the selected agreement are displayed.
To quickly navigate to a particular column in the grid, click Go to Column on the right and select the column.
Click Expand All to expand all item shipments and display the detail tags of split tagged items. To hide the detail tags, click Collapse All. When filtering the grid, the item shipments must be expanded for the filter to also apply to the details.
Click Requisition and select a requisition from the list to display only item shipments that refer to this requisition.
You can use the mini toolbar buttons and the tabs at the bottom to do one of the following:
- Assign and edit item shipment properties and CIPs
- Add comments to item shipments
- View the ident description and tag details
- View and edit position details
- View and edit quantities, weights, volumes, and dimensions
- View and edit delivery information
- View routing method details
- View and edit expediting dates
- View and edit fabrication information
- View and edit inspection details
- View requisition line items
- Attach vendor data requirements (VDRs)
https://docs.hexagonppm.com/r/en-US/Intergraph-Smart-Materials-Help-10.1/Version-10.1/460211
2022-08-07T19:07:33
CC-MAIN-2022-33
1659882570692.22
[]
docs.hexagonppm.com
Making connections to a Scylla cluster that uses SSL can be a tricky process, but it doesn’t diminish the importance of properly securing your client connections with SSL. This is especially needed when you are connecting to your cluster via the Internet or an untrusted network. Install the Java Cryptography Extensions. You can download the extensions from Oracle. The extension must match your installed Java version. Once downloaded, extract the contents of the archive to the lib/security subdirectory of your JRE installation directory /usr/lib/jvm/java-8-oracle/jre/lib/security/14. Create a new cqlsh configuration file at ~/.cassandra/cqlshrc, using the template below. [authentication] username = myusername password = mypassword [cql] version = 3.3.1 [connection] hostname = 127.0.0.1 port = 9042 factory = cqlshlib.ssl.ssl_transport_factory [ssl] certfile = path/to/rootca.crt validate = true userkey = client_key.key usercert = client_cert.crt_signed The [ssl] section of the above template applies to a CA signed certificate. If you are using a self-signed certificate, the [ssl] section will resemble the following: [ssl] certfile = /etc/scylla/db.crt validate = true userkey = /etc/scylla/db.key usercert = /etc/scylla/db.crt Note If validate = true, the certificate name must match the machine’s hostname. If using client authentication ( require_client_auth = true in cassandra.yaml), you also need to point to your userkey and usercert. SSL client authentication is only supported via cqlsh on C* 2.1 and later. Change the following parameters: Save your changes. Connect to the node using cqlsh --ssl. If the configuration settings were saved correctly, you will be able to connect. Run Cassandra Stress to generate required files and to connect to the SSL cluster. Supply the URL of the SSH node, and the path to your certificates. In addition supply the credentials associated with the certificate. The truststore file is the Java keystore containing the cluster’s SSL certificates. For example: $> cassandra-stress write -node 127.0.0.1 -transport truststore=/path/to/cluster/truststore.jks truststore-password=mytruststorepassword -mode native cql3 user=username password=mypassword Cassandra stress will generate some files, you will need these to configure client - node encryption in-transit. Encryption: Data in Transit Client to Node
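As one possible way to produce the truststore.jks that cassandra-stress expects, you can import the cluster certificate with the JDK's keytool; the paths, alias, and password below are placeholders and assume the self-signed certificate used in the cqlshrc example above.

# Import the cluster certificate into a new JKS truststore (placeholders shown).
keytool -importcert \
  -alias scylla-cluster \
  -file /etc/scylla/db.crt \
  -keystore /path/to/cluster/truststore.jks \
  -storepass mytruststorepassword \
  -noprompt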
https://docs.scylladb.com/stable/operating-scylla/security/gen-cqlsh-file.html
2022-08-07T20:14:11
CC-MAIN-2022-33
1659882570692.22
[]
docs.scylladb.com
Forms have never been this crispy¶ dj. User Guide¶ Get the most out of django-crispy-forms - Installation - crispy filter - {% crispy %} tag with forms - FormHelper - Layouts - How to create your own template packs - {% crispy %} tag with formsets - Updating layouts on the go See who’s contributed to the project at crispy-forms contributors You can find a detailed history of the project in Github’s CHANGELOG API documentation¶ If you are looking for information on a specific function, class or method, this part of the documentation is for you. Developer Guide¶ Think this is awesome and want to make it better? Read our contribution page, make it better, and you will be added to the contributors list!
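As a brief illustration of the pieces listed in the guide, here is a minimal sketch of a Django form using FormHelper; the form class and field names are made up. In a template you would then load crispy_forms_tags and render the form with the {% crispy %} tag or the |crispy filter.

# forms.py -- hypothetical form using crispy-forms' FormHelper.
from django import forms
from crispy_forms.helper import FormHelper
from crispy_forms.layout import Submit

class ContactForm(forms.Form):
    name = forms.CharField()
    message = forms.CharField(widget=forms.Textarea)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.helper = FormHelper()
        self.helper.add_input(Submit("submit", "Send"))

# In a template:
#   {% load crispy_forms_tags %}
#   {% crispy form %}        <!-- tag, driven by the FormHelper -->
#   {{ form|crispy }}        <!-- or the simpler filter -->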
https://django-crispy-forms.readthedocs.io/en/latest/index.html
2022-08-07T19:02:39
CC-MAIN-2022-33
1659882570692.22
[]
django-crispy-forms.readthedocs.io
Make a diagnosis request: DiagnosisResponse
Component: Type, Required, Description
SaleToPOIResponse: Defined datastructure
MessageHeader: Defined datastructure
ProtocolVersion: String, Conditional. Value: 3.0
SaleID: String, Required. Repeated from request.
MessageClass: Enumeration, Required. Value: Service
MessageCategory: Enumeration, Required. Value: Diagnosis
ServiceID: String, Conditional. Repeated from request.
POIID: String, Required. Repeated from request.
MessageType: Enumeration, Required. Value: Response
DiagnosisResponse: Defined datastructure
POIStatus: Defined datastructure, Conditional. State of a POI Terminal. Rule: if Response.Result is Success.
CommunicationOKFlag: Boolean, Conditional. Indicates if the communication infrastructure is working and usable. Rule: if communication infrastructure present.
GlobalStatus: Enumeration, Required. Global status of a POI Server or POI Terminal.
Response: Defined datastructure, Required. Result of a message request processing.
Result: Enumeration, Required. Result of the processing of the message.
HostStatus: Defined datastructure, Required. Array. State of a Host.
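For orientation, here is a hypothetical JSON payload assembled only from the fields in the table above. All values are placeholders, and the nesting plus the IsReachableFlag field inside HostStatus are assumptions based on the nexo message structure rather than details stated in the table.

{
  "SaleToPOIResponse": {
    "MessageHeader": {
      "ProtocolVersion": "3.0",
      "MessageClass": "Service",
      "MessageCategory": "Diagnosis",
      "MessageType": "Response",
      "SaleID": "POSSystem-1",
      "ServiceID": "0207",
      "POIID": "V400m-123456789"
    },
    "DiagnosisResponse": {
      "Response": {
        "Result": "Success"
      },
      "POIStatus": {
        "GlobalStatus": "OK",
        "CommunicationOKFlag": true
      },
      "HostStatus": [
        {
          "IsReachableFlag": true
        }
      ]
    }
  }
}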
https://docs.adyen.com/point-of-sale/diagnostics/request-diagnosis/diagnosisresponsenexo
2022-08-07T19:50:59
CC-MAIN-2022-33
1659882570692.22
[]
docs.adyen.com
Choosing visualization types This section describes the available visualization types. To understand your devices, processes, and equipment, you should choose the right type of visualization for each asset property that you add to a dashboard. Each visualization type is covered in detail in this section. Changing the visualization type doesn't change your data, so you can try different visualizations to discover which type helps you and your project's viewers gain insights from the data. Line A line graph is a good way to visualize time series data that fluctuates over time. When you drag a time series property to the dashboard, the values for that property are shown as a line graph by default. If that property has an alarm, the line chart shows that alarm's threshold. The following line chart shows four asset properties. To display a line graph, choose the line graph icon from the visualization type menu. Scatter You can use a scatter chart to visualize time series data with distinct data points. A scatter chart looks like a line graph without lines between data points. If you add a property that has an alarm, the scatter chart shows that alarm's threshold. The following scatter chart shows one asset property. To display a scatter chart, choose the scatter icon from the visualization type menu. Bar A bar chart is another way to visualize time series data. You might use a bar chart when your data values change infrequently, such as daily readings. If you add a property that has an alarm, the bar chart shows that alarm's threshold. The following bar chart shows four asset properties. To display a bar graph, choose the bar graph icon from the visualization type menu. Status A status widget is a good way to visualize data that has a small number of well-defined states, such as an alarm. For example, if you have a pressure indicator that can be high, medium, or low, you could display each state in a different color with a status grid. You can configure a status widget to show current status as a grid or historical status as a timeline. Status grid The following status grid shows the status of four asset properties. To display a status grid widget, choose the status grid icon from the visualization type menu. Status timeline The following status timeline shows the status over time for four asset properties. To display a status timeline widget, choose the status timeline icon from the visualization type menu. Configuring status widgets To set status colors, configure thresholds with the color and rule for each status. For more information, see Configuring thresholds. You can also configure what information the widget displays about asset properties. To toggle property units and values Choose the Configuration icon for the status widget to change. Select or clear the Show labels option. When this option is enabled, the widget displays the unit and value of each asset property. After you finish editing the dashboard, choose Save dashboard to save your changes. The dashboard editor closes. If you try to close a dashboard that has unsaved changes, you're prompted to save them. KPI The KPI visualization provides a compact representation when you need an overview of your asset properties. This overview gives you the most critical insights into the overall performance of your devices, equipment, or processes. You can change the title of each property within the visualization. The following is a key performance indicator (KPI) visualization that shows four asset properties. 
The KPI visualization shows the following information: The latest value for an asset property or the latest state of an alarm for the selected time range. The trend for that value compared to a previous value, which is the first data point before the selected time range. To display a KPI, choose the KPI icon from the visualization type menu. Table The table widget provides a compact representation of multiple asset properties or alarms. You can use the overview to see detailed information about the performance of multiple devices, equipment, or processes. You can display either properties or alarms in a table. You can't display properties and alarms in the same table. The following is a table widget that shows four asset properties. To display a table widget, choose the table icon from the visualization type menu.
https://docs.aws.amazon.com/iot-sitewise/latest/appguide/choose-visualization-types.html
2022-08-07T19:23:45
CC-MAIN-2022-33
1659882570692.22
[array(['images/dashboard-line-graph-console.png', 'A sample line chart showing four properties.'], dtype=object) array(['images/dashboard-line-visualization-type-console.png', 'The line graph visualization type icon.'], dtype=object) array(['images/dashboard-scatter-chart-console.png', 'A sample scatter chart showing four properties.'], dtype=object) array(['images/dashboard-scatter-chart-visualization-type-console.png', 'The scatter chart visualization type icon.'], dtype=object) array(['images/dashboard-bar-graph-console.png', 'A sample bar chart showing four properties as a time series.'], dtype=object) array(['images/dashboard-bar-visualization-type-console.png', 'The bar graph visualization type icon.'], dtype=object) array(['images/dashboard-status-chart-console.png', 'A sample status grid widget.'], dtype=object) array(['images/dashboard-status-visualization-type-console.png', 'The status grid visualization type icon.'], dtype=object) array(['images/dashboard-status-timeline-chart-console.png', 'A sample status timeline widget.'], dtype=object) array(['images/dashboard-status-timeline-visualization-type-console.png', 'The status timeline visualization type icon.'], dtype=object) array(['images/dashboard-configure-status-thresholds-console.png', 'A sample status widget threshold configuration.'], dtype=object) array(['images/dashboard-kpi-chart-console.png', 'A sample KPI visualization.'], dtype=object) array(['images/dashboard-kpi-visualization-type-console.png', 'The KPI visualization type icon.'], dtype=object) array(['images/dashboard-table-widget-console.png', 'A sample table widget.'], dtype=object) array(['images/dashboard-table-visualization-type-console.png', 'The table widget type icon.'], dtype=object) ]
docs.aws.amazon.com
Migrating Consumer Groups Between Clusters Learn how to migrate consumers between clusters. - Make sure that the clusters that you are migrating consumers between are set up with bidirectional replication. - Verify that all mission critical consumer groups and topics, including the ones on the secondary cluster are whitelisted. - Export the translated consumer group offsets of the source cluster: srm-control offsets --source [SOURCE_CLUSTER] --target [TARGET_CLUSTER] --group [GROUP1] --export > out.csv - Reset consumer offsets on the target cluster: kafka-consumer-groups --bootstrap-server [TARGET_BROKER:PORT] --reset-offsets --group [GROUP1] --execute --from-file out.csv - Start consumers on the target cluster. Consumers automatically resume processing messages on the target cluster where they left off on the source cluster.
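As an optional sanity check that is not part of the documented steps, you can describe the group on the target cluster after the reset to confirm its offsets and lag; the broker address and group name are placeholders.

kafka-consumer-groups --bootstrap-server [TARGET_BROKER:PORT] \
  --describe --group [GROUP1]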
https://docs.cloudera.com/csp/2.0.1/srm-using/topics/srm-migrating-consumer-groups.html
2022-08-07T19:06:26
CC-MAIN-2022-33
1659882570692.22
[]
docs.cloudera.com
This topic describes how to configure a static IP address on a second network interface.
Procedure
Perform the following steps to configure a second network interface.
- Launch the Delphix Engine Setup interface using the sysadmin credentials.
- Navigate to the Network widget, click Modify to view the available network interfaces, and select the new interface to be configured.
- Click the Settings button next to the network interface that you want to configure.
- The Network Interface Settings screen appears. Select the Enabled checkbox to enable the network interface.
- Select one of the following: DHCP or Static, and enter the IP address and subnet mask in the respective fields.
- MTU: Enter a value for the MTU field. This is the maximum size, in bytes, of a packet that can be transmitted on this interface.
- Click Save to save the settings.
https://docs.delphix.com/docs602/configuration/network-and-dns-management/configuring-a-second-network-interface
2022-08-07T20:04:15
CC-MAIN-2022-33
1659882570692.22
[]
docs.delphix.com
Install CDH
Install or upgrade to the version of CDH you want to run. Supported CDH versions are CDH 5.13 and later, and CDH 6.3.0 and later.
About this task: When you install CDH, you will install Apache Kafka in one of two ways:
- If you install CDH 6.3.0 or later, Apache Kafka is included in your CDH installation.
- If you are using CDH 5.13.0 or later, you must install Apache Kafka using the CDK 4.1 parcel.
https://docs.cloudera.com/csp/2.0.1/deployment/topics/csp-install-cdh.html
2022-08-07T19:49:36
CC-MAIN-2022-33
1659882570692.22
[]
docs.cloudera.com
Upscale the cluster Use the resize option to upscale a cluster. If you would like to resize a cluster: - From cluster details select Actions > Resize: - Select the host group that you would like to upscale and then click on + to increment the number of nodes. For example, you can resize the worker host group from 4 to 5 nodes: - Your cluster status will change from Running to Update in Progress. You will see messages similar to the following written to the Event History: Cluster scaled up 8/20/2019, 6:23:24 PM Scaling up the cluster 8/20/2019, 6:14:05 PM Stack successfully upscaled 8/20/2019, 6:14:04 PM Mounting disks on new nodes 8/20/2019, 6:14:04 PM Bootstrapping new nodes 8/20/2019, 6:13:48 PM Infrastructure metadata extension finished 8/20/2019, 6:13:48 PM Billing changed due to upscaling of cluster infrastructure 8/20/2019, 6:13:47 PM Adding 1 new instances to the infrastructure 8/20/2019, 6:12:34 PM - Once the upscale is completed, your cluster status will change to Running and the cluster will again be accessible.
https://docs.cloudera.com/data-hub/cloud/getting-started-tutorial/topics/dh-tutorial-resize.html
2022-08-07T18:35:03
CC-MAIN-2022-33
1659882570692.22
[]
docs.cloudera.com
Options Terms
Underlying Assets¶ Underlying assets are the assets upon which an option is based; they are the assets from which the option derives its value. In the example above, the apples are the option's underlying asset. In the traditional financial market, the underlying assets are normally stocks, indexes, foreign exchange, or even commodity futures. In the crypto market, underlying assets usually refer to a certain type of cryptocurrency, e.g. BTC or ETH.
Option Type: Call or Put¶ As discussed above, options are contracts that give the buyer/holder the right, but not the obligation, to buy or sell an underlying asset. They are called call or put options, respectively. Call options allow the holder to buy an underlying asset at a certain price. Put options allow the holder to sell an underlying asset at a certain price. In the example, the fruit store wants to "buy" apples in August; hence, it is buying a call option contract. If we change the story a bit: the orchard owner worries that the apple price may drop in August and wants to lock in his minimum profit. Therefore, he reaches a contract with the fruit store under which he may sell the apples at the price of $4/kg in August, but he does not have to if the market price is higher. This contract is a typical put option contract. From these examples, we can see how call and put options work. The holder of a call option expects or is concerned that the price may go higher, making the asset too expensive to buy in the future; a call option therefore gives him/her cost protection. The holder of a put option already holds the underlying asset and expects or is concerned that the price may go lower, affecting potential future profits; a put option therefore gives him/her insurance on the profit.
Sellers and Buyers¶ A contract normally has two parties, a seller and a buyer. An option seller is selling (shorting) an option contract to a buyer and receiving the premium (the price of the option). An option buyer is buying (longing) an option contract from a seller and paying the premium. Note that the option can be either a call or a put.
Strike Price¶ The strike price (or exercise price) of an option is the price at which a put or call option can be exercised. For a call option, the strike price is the price at which the option holder can buy the underlying asset. For a put option, the strike price is the price at which the option holder can sell the underlying asset. It is predetermined in the options contract. The strike price can have a big influence on the value of the option. For a call option, the lower the strike price is, meaning the holder can buy the asset at lower cost, the higher the value of the option, and vice versa.
Expiration Date¶ The expiration date of an option contract is the last date on which the contract is valid and the holder has the right to exercise the option according to its terms. Owners of American-style options may exercise at any time before the option expires, while owners of European-style options may exercise only at expiration. The expiration date also has a significant influence on the value of the option: in general, the longer an option contract has until expiration, the higher its value.
Price Volatility¶ Price volatility, in relation to the options market, refers to the degree of fluctuation in the market price of the underlying asset.
Price volatility data sometimes is not easily acquired and often calculated as a prediction of the degree to which underlying asset price moves in the future. Obviously, the price volatility has a direct influence on the option value. The more volatile the price is, the more difficult it is to make predictions in the future, which gives the option sellers more risk exposure, hence, the value of the option will be higher. Settlement¶ Settlement is the process for the terms of an options contract to be resolved between the holder and seller when it’s exercised. An option contract can be physically settled or cash-settled. Physically settled options require the actual delivery of the underlying assets. When exercising, the holder of physically settled call options would, therefore, buy the underlying assets, whereas the holder of physically settled put options would sell the underlying assets. Cash-settled options do not require the actual delivery of the underlying assets. Instead, the market value, at the exercise date, of the underlier is compared to the strike price, and the difference (if in a favorable direction) is paid by the option seller to the owner. With the apple example above, if the apple price is higher than $4/kg in August, if the options were physically settled options, it would mean that the fruit store purchases the apples at the previously agreed upon price and the orchard owner makes the delivery. If the options were cash settled options, it would mean that the orchard owner pays the difference in cash between the market price and the previously agreed upon price, times the total kilograms covered by the contract, to the fruit store. Option Moneyness¶ In-the-money, at-the-money and out-of-the-money are commonly used terms that refer to an option's moneyness, an insight into the intrinsic value of these derivatives contract. At-the-money (ATM) options have a strike price exactly equal to the current price of the underlying asset. Out-of-the-money (OTM) options have no intrinsic value, only "time value", and occur when a call's strike is higher than the current market, or a put's strike is lower than the market. In-the-money (ITM) options have intrinsic value, meaning you can exercise the option immediately for a profit opportunity - i.e. if a call's strike is below the current market price or a put's strike is higher. Exercise¶ Exercise means to put into effect the right to buy or sell the underlying assets specified in an options contract.The holder of an option has the right, but not the obligation, to buy or sell the option's underlying asset at a specified price on or before a specified date in the future. If the owners of an option decide to buy or sell the underlying asset—instead of allowing the contract to expire, worthless or closing out the position—they will be "exercising the option," or making use of the right, or privilege that is available in the contract. Exercising a put option allows you to sell the underlying asset at a stated price within a specific timeframe. Exercising a call option allows you to buy the underlying asset at a stated price within a specific timeframe. IV¶ IV stands for Implied Volatility. It is a metric that captures the market's view of the likelihood of changes in a given asset's price. Implied volatility is often used to price options contracts: High implied volatility results in options with higher premiums and vice versa. Implied volatility does not predict the direction in which the price change will proceed. 
Low volatility means that the price likely won't make broad, unpredictable changes. Implied volatility can be determined by using an option pricing model. It is the only factor in the model that isn't directly observable in the market. Instead, the mathematical option pricing model uses other factors to determine implied volatility and the option's premium.
BS Pricing Model¶ The Black-Scholes model, also known as the Black-Scholes-Merton (BSM) model, is a mathematical model for pricing an options contract. It is the best known model for pricing options, and it won the Nobel prize in economics. For the mathematical expression, please check the details here.
Put-Call Parity¶ Put-call parity is a principle that defines the relationship between the price of European put options and European call options of the same class, that is, with the same underlying asset, strike price, and expiration date. It is commonly written as C + PV(K) = P + S, where C is the price of the European call option, PV(K) is the present value of the strike price K (discounted from the expiration date at the risk-free rate), P is the price of the European put option, and S is the spot price, or the current market value of the underlying asset.
Greeks¶ "Greeks" is a term used in the options market to describe the different dimensions of risk involved in taking an options position. These variables are called Greeks because they are typically associated with Greek symbols. Each risk variable is a result of an imperfect assumption or relationship of the option with another underlying variable. Delta (Δ) represents the rate of change between the option's price and a $1 change in the underlying asset's price. Theta (Θ) represents the rate of change between the option price and time, or time sensitivity (sometimes known as an option's time decay). Gamma (Γ) represents the rate of change between an option's delta and the underlying asset's price. Vega (ν) represents the rate of change between an option's value and the underlying asset's implied volatility. Rho (ρ) represents the rate of change between an option's value and a 1% change in the interest rate.
VIX¶ The Cboe Volatility Index, or VIX, is a real-time market index representing the market's expectations for volatility over the coming 30 days. Investors use the VIX to measure the level of risk, fear, or stress in the market when making investment decisions. Derived from the price inputs of the S&P 500 index options, it provides a measure of market risk and investors' sentiments. It is also known by other names like "Fear Gauge" or "Fear Index."
Volatility Smile¶ Volatility smiles are implied volatility patterns that arise in pricing financial options. If you plot the implied volatility against the strike prices, you may get a U-shaped curve resembling a smile; this particular volatility skew pattern is therefore known as the volatility smile.
[Figure: implied volatility plotted against strike price, forming a "smile" shape.]
Volatility Surface¶ The volatility surface is a three-dimensional plot of the implied volatility of options, where the x-axis is the time to maturity, the z-axis is the strike price, and the y-axis is the implied volatility. An example of a BTC volatility surface is shown below.
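To make the pricing and parity relationships above concrete, here is a small, self-contained Python sketch with made-up parameters; it is illustrative only and not part of the protocol's documentation.

# Illustrative parameters only; standard library only.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes(S, K, T, r, sigma):
    # European call and put prices under Black-Scholes (no dividends).
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    put = K * exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put

S, K, T, r, sigma = 100.0, 95.0, 0.5, 0.02, 0.8  # spot, strike, years, rate, volatility
call, put = black_scholes(S, K, T, r, sigma)
print(f"call = {call:.4f}, put = {put:.4f}")
# Put-call parity check: C + PV(K) should equal P + S (up to rounding).
print(f"C + K*exp(-rT) = {call + K * exp(-r * T):.4f}")
print(f"P + S          = {put + S:.4f}")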
https://docs.phx.finance/terminology/options/
2022-08-07T19:59:50
CC-MAIN-2022-33
1659882570692.22
[array(['https://miro.medium.com/max/875/1*OX0XE8mtoYP3kfTbf2f9Lw.png', None], dtype=object) array(['https://www.investopedia.com/thmb/hCk5O800WeWofg9Fp7Xnkc-NbPA=/6251x0/filters:no_upscale(', None], dtype=object) array(['https://pbs.twimg.com/media/EbG0-PhWoAcO3WW.jpg', None], dtype=object) ]
docs.phx.finance
Search… Introduction Goldfinch Overview Protocol Mechanics Governance Tokenomics Token Launch FAQ Unique Identity (UID) Guides General FAQ 🔗 Important links Governance Portal Data Dashboard November 2021 Audit Github Discord Twitter Telegram Medium Youtube Immunefi Bug Bounty Borrower Launch GitBook Tokenomics Total Token Supply The initial token supply is capped at 114,285,714 GFI tokens . There is currently no inflation, but it is expected that it will be beneficial for the protocol to incorporate modest inflation after 3 years in order to reward future active participants. Ultimately, this will be up to the community to discuss and decide. Token Allocations The initial allocation of the total supply of GFI are as follows: Liquidity Providers (16.2%) 4.2% — Early Liquidity Provider Program: These tokens are allocated to the early Liquidity Provider program, which incentivized the very first participants to supply capital to the protocol. This program closed in July 2021. These allocations unlock over 6 months beginning on January 11, 2022, with a 12-month transfer restriction for U.S. participants. 4.0% — Retroactive Liquidity Provider Distribution: These tokens are allocated to all 5,157 liquidity providers as of a Dec 14 snapshot, excluding the Early Liquidity Provider program above. These distributions are only to non-U.S. persons and unlock over a range of immediate to 12 months, depending on the contribution amount and earliest contribution date. 8.0% — Senior Pool Liquidity Mining: These tokens are allocated to ongoing liquidity mining, beginning immediately. Senior Pool liquidity mining is described in the Liquidity Mining section. Backers (8.0%) 3.0% — Flight Academy: These tokens are allocated to the 10,182 non-U.S. participants in Flight Academy. 2.85% are distributed according to the tiers described in this post , with an unlock schedule ranging from immediate to 24 months. The remaining 0.15% are allocated to future participants. 2.0% — Backer Pool Liquidity Mining: The Backer Pool liquidity mining system grants tokens to Backers as interest payments are made into Borrower pools, and the system is now in place, following a governance proposal . There have also been retroactive distributions for existing Backers who have already supplied to Borrower pools as well – this proposal has more details. 3.0% — Backer Staking: These tokens are allocated for Backers who stake GFI on other backers, as described in the whitepaper . This is not yet live, but the community is expected to introduce and vote on a proposal for this in the coming months. Auditors (3.0%) 3.0% of tokens are set aside for auditors, for any future auditor system launched by the protocol through decentralized community governance. An auditor system is not yet live, but we expect the community to introduce and vote on a proposal for one in the coming months. Borrowers (3.0%) 3.0% of tokens are set aside for Borrowers, for when and if the community decides to implement a future distribution system for Borrowers. Contributors (0.65%) 0.65% is allocated to contributors who have already significantly contributed to the community and protocol, either through a management role in the community Discord, by creating great art or memes, or through contracting agreements with the Foundation. Contributors who participated in Flight Academy will receive Flight Academy rewards as part of this category. These distributions generally follow the same unlock schedule as the Flight Academy distributions. 
Community Treasury (14.8%) 14.8% is allocated to the community’s treasury, which the community can decide to use for purposes such as grants to developers and contributors, adjustments to protocol distribution mechanics, and coverage for potential loan defaults. Early and Future Team (28.4%) 28.4% is allocated to the early Goldfinch team of 25+ employees, advisors, and contractors. Full-time contributors are subject to 4 or 6 year unlock schedules, and part-time contributors are subject to 3-year unlock schedules, all with initial 6-month lock-ups and 12-month transfer restrictions. Warbler Labs (4.4%) 4.4% is allocated to Warbler Labs, a separate organization spun out from the early Goldfinch team that will contribute to the Goldfinch community and broader DeFi ecosystem. The tokens are subject to a 3-year unlock schedule with an initial 6-month lock-up and 12-month transfer restriction. Early Supporters (21.6%) 21.6% is allocated to a group of 60+ early supporters who invested $37M to help build the protocol. These supporters are all long-term oriented and have 3-year unlock schedules, as well as an initial 6-month lock-up and 12-month transfer restriction.
https://docs.goldfinch.finance/goldfinch/tokenomics
2022-06-25T07:40:30
CC-MAIN-2022-27
1656103034877.9
[]
docs.goldfinch.finance
22.3.5 Release Notes InsightCloudSec Software Release Notice - 22.3.5 Minor Release (06/01/2022) Our latest Minor Release 22.3.5 is available for hosted customers on Wednesday, June 1, 2022. Availability for self-hosted customers is Thursday, June 2, 2022. If you’re interested in learning more about becoming a hosted customer, reach out through our Customer Support Portal. Release Highlights (22.3.5) InsightCloudSec is pleased to announce Minor Release 22.3.5. This Minor Release includes EDH support for AWS CloudFront, added visibility into Azure role assignment usage, and harvesting of the certificate type for GCP SSL certificates. In this release we have updated Bot actions around AWS EC2 instances to support hibernating. We have also revised Insights related to Database Instance Flags for GCP CIS (view the full list below). 22.3.5 also includes three updated Query Filters, five new Query Filters, and 14 bug fixes. Contact us through the new unified Customer Support Portal with any questions. Features & Enhancements (22.3.5) MULTI-CLOUD/GENERAL - Improved the confirmation dialog when updating custom Insights to include any linked Bots which will also be updated as a part of the Insight change. [ENG-16678] - We now prepend the organization name to resource export downloads when there is more than one organization. [ENG-15438] - Container Vulnerability Assessment customers can now watch their container definition in multiple tabs to improve their investigation of the workload vulnerabilities. [ENG-15594] Resources (22.3.5) AWS - Updated EDH Support for AWS CloudFront to include the following events: CreateDistribution, DeleteDistribution, and UpdateDistribution. [ENG-16652] - Added harvesting of the creation date of AWS EC2 SSH Keypair resources to help customers identify keypairs which are in need of rotation based on age. [ENG-16356] AZURE - Added visibility into Azure role assignment usage within the Cloud Limit resource type. The new limit is named RoleAssignmentLimit. [ENG-15108] GCP - Added harvesting of the type of certificate for GCP SSL certificates (managed vs self-managed) and added GCP support to the filter SSL Certificate Type. [ENG-16645] Insights (22.3.5) GCP - Updated our GCP CIS Insights that relate to Database Instance Flags to include filters by engine type. Without the engine type filter, some Insights incorrectly flagged database instances as not having a proper Database Instance Flag setting when the setting does not apply to their engine type. 
The updated Insights are [ENG-16649]: Database Instance Flag '3625 (trace flag)' Disabled Database Instance Flag 'contained database authentication' Enabled Database Instance Flag 'cross db ownership chaining' Enabled Database Instance Flag 'external scripts enabled' Enabled Database Instance Flag 'local_infile' Enabled Database Instance Flag 'log_checkpoints' Disabled Database Instance Flag 'log_connections' Disabled Database Instance Flag 'log_disconnections' Disabled Database Instance Flag 'log_duration' Disabled Database Instance Flag 'log_error_verbosity' Not Default Database Instance Flag 'log_executor_stats' Enabled Database Instance Flag 'log_hostname' Enabled Database Instance Flag 'log_lock_waits' Disabled Database Instance Flag 'log_min_duration_statement' Enabled 'log_temp_files' Disabled Database Instance Flag 'remote access' Enabled Database Instance Flag 'skip_show_database' Disabled Query Filters (22.3.5) AWS Access List Exposes Port (Network ACL) (AWS)- New Query Filter identifies AWS Network ACLs that have specific ports open to the world. [ENG-15832] Autoscaling Launch Configuration IP Address Type- New Query Filter identifies autoscaling launch configurations that are launching EC2 instances with Public IPs, either explicitly or when defaulting to subnet auto-assign IP settings. This Query Filter will be used to create an Insight to support AWS FSBP pack. [ENG-16728] Cloud Account Without Public Access Bucket Controls- New Query Filter identifies accounts that are missing any of the four public access bucket controls or any combination of them. Alternatively, it allows you to identify accounts that have any of the four public access bucket controls or any combination of them. [ENG-16700] Cloud Region With/Without API Accounting- New Query Filter assesses whether a region has CloudTrail enabled. This new Query Filter will consider regions covered by organization-wide trails, single-account multi-region trails, and single-account single-region trails. [ENG-14586] Site-to-Site VPN Tunnel Status- New Query Filter identifies site-to-site VPNs that have one or both tunnels down. [ENG-16738] AZURE Cloud User With/Without Owner Access- Updated–and renamed–the Query Filter Cloud User With Owner Accessto include the inverse, i.e., ‘without’. [ENG-16772] GCP Resource Associated With Default Role- Expanded support for this Query Filter to work with GCP Dataproc Clusters. [ENG-16552] SSL Certificate Type- Enhanced this Query Filter to add support for GCP. [ENG-16645] Bot Actions (22.3.5) AWS - Added a "hibernate" action for AWS EC2s that meets certain requirements. We have added the action to BotFactory (under Schedule Hibernateand Periodic Hibernate) and updated the ondemand action as well. [ENG-16612] Bug Fixes (22.3.5) - [ENG-16814] For IaC fixes TF converter for AWS encryption keys to capture if the key is public due to a permissive policy. - [ENG-16751] Resolved an issue to make sure findings will be saved correctly and not reappear once processed. - [ENG-16744] Fixed IaC scan policy evaluating incorrectly on non-string booleans and does not appropriately set the SQS or s3 resource’s data model. - [ENG-16720] Revised JSON to ensure the email body for bulk email includes the correct formatting. - [ENG-16698] Fixed a regression that was introduced in 22.3.3 where the calculation of a particular filter for orphaned resources would cause excessive memory and computational load that may have impacted Insight and Scorecard reports. 
- [ENG-16651] Fixed an issue when harvesting GCP Cloud SQL to correctly identify SQL Server engines as sqlserver. - [ENG-16564] Fixed issue for API Gateway Key resource not able to scan. - [ENG-16558] Resolved an edge case issue where Bots were failing to delete child resources due to missing parents. - [ENG-16521] Hardened the process of adding jobs from plugins to avoid re-loading. - [ENG-16513] Updated the configuration to restrict the Query Filter "Load Balancer Invalid Diagnostic Logging Configuration" to Application load balancers as expected. - [ENG-16341] Addressed an issue sorting by name and namespace for Container Cron Jobs. - [ENG-15491] Fixed an edge case that prevented certain browsers from loading fonts and other assets. - [ENG-15055] Clarified failure context for harvesting jobs that failed due to IAM or service control policy. - [ENG-12660] Resolved Compliance Scorecard issue where apply button was perpetually in an error state.
https://docs.divvycloud.com/changelog/2235-release-notes
2022-06-25T08:32:33
CC-MAIN-2022-27
1656103034877.9
[]
docs.divvycloud.com
Create informative WooCommerce product listings on your website with ease. Using Product List # Step 1 # Make sure you have the Product List widget enabled from your Dashboard > Droit Addons > Elements. Also, make sure to have the WooCommerce plugin installed and activated. Step 2 # Search for the Product List widget from the left-hand side menu, and drag the widget to your preferred location. Step 3 # Before you get started with the Product List # When you’re done adding your products to WooCommerce. Make sure to press “Publish” to make it live. Step 6 # After you’re done configuring/adding products via the WooCommerce plugin, your products will now be visible via the Product List widget on your website. From the Query Settings drop-down menu, you will be able to add – per page item view, sort by order, order type, and posts per row. Step 7 # To customize your product list block, click on the Style tab and start by customizing the Content Box. From here you can add alignment, background type, margin, padding, and border radius to your product list section. Step 8 # To customize the thumbnails of your product list, click on the Thumbnail drop-down menu. From here you can customize the image settings with height, weight, image fit, border type, margin, padding, and image shadow. Step 9 # You can add customization options to your product titles. Click on the Title drop-down menu. From here you can add typography options, color, and padding to your titles. Step 10 # You can add customization options to your price tags. Click on the Price Style drop-down menu. From here you can add typography options, color, and spacing to your price tags. Step 11 # To add customization options to your cart buttons, click on the Cart Button drop-down menu. From here you can add typography options, color, hover color, margin, and padding to your cart buttons. And that’s it. When you’re done, press “Publish” to save your changes.
https://docs.droitthemes.com/droit-elementor-addons/docs/product-list/
2022-06-25T07:11:04
CC-MAIN-2022-27
1656103034877.9
[]
docs.droitthemes.com
Check the Last Sync Time property for a storage account When you configure a storage account, you can specify that your data is copied to a secondary region that is hundreds of miles from the primary region. Geo-replication offers durability for your data in the event of a significant outage in the primary region, such as a natural disaster. If you additionally enable read access to the secondary region, your data remains available for read operations if the primary region becomes unavailable. You can design your application to switch seamlessly to reading from the secondary region if the primary region is unresponsive. Geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS) both replicate your data asynchronously to a secondary region. For read access to the secondary region, enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more information about the various options for redundancy offered by Azure Storage, see Azure Storage redundancy. This article describes how to check the Last Sync Time property for your storage account so that you can evaluate any discrepancy between the primary and secondary regions. About the Last Sync Time property Because geo-replication is asynchronous, it is possible that data written to the primary region has not yet been written to the secondary region at the time an outage occurs. The Last Sync Time property indicates the last time that data from the primary region was written successfully to the secondary region. All writes made to the primary region before the last sync time are available to be read from the secondary location. Writes made to the primary region after the last sync time property may or may not be available for reads yet. The Last Sync Time property is a GMT date/time value. Get the Last Sync Time property You can use PowerShell or Azure CLI to retrieve the value of the Last Sync Time property. To get the last sync time for the storage account with PowerShell, install version 1.11.0 or later of the Az.Storage module, then retrieve the storage account together with its geo-replication statistics and read the Last Sync Time value from them; a sketch of this is shown below.
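A possible PowerShell sketch for that step follows. The resource group and account names are placeholders; it assumes Az.Storage 1.11.0 or later and an authenticated session (Connect-AzAccount), and the -IncludeGeoReplicationStats parameter name should be verified against your module version.

# Placeholder names; requires the Az.Storage module and Connect-AzAccount.
$account = Get-AzStorageAccount `
    -ResourceGroupName "my-resource-group" `
    -Name "mystorageaccount" `
    -IncludeGeoReplicationStats

# The Last Sync Time property, as a GMT date/time value.
$account.GeoReplicationStats.LastSyncTime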
https://docs.microsoft.com/en-us/azure/storage/common/last-sync-time-get
2022-06-25T09:29:31
CC-MAIN-2022-27
1656103034877.9
[]
docs.microsoft.com
Huge foliage library for mountain and forest environment, landscape. Assets are optimized, trees contain cross lod’s. All assets 100% photoscanned Showcase YouTube Video Link This pack contains: - 448 meshes and blueprints for: trees, grass, plants, rocks, stones, branches, mushrooms, roots, cones, roots, etc - Huge library of 100% scanned assets, carefully optimized, atlased, LOD’ed; - Group of materials which will bring better quality and simplify your workflow, - All demo files from the video are included in the asset - Splines for the road (with ground snap) and rivers. With splines you can build automatic river, with automatic waterfalls, small cascades and slow water. Just setup few points. - Rocks, slopes, roots with leaves, grass, moss, sand cover blueprints - In pine trees, you will find plants, small trees, small forest trees, medium, big standalone, forest and dead. - Complex and playable, full of detail map: 800x800m size Landscape: Rocks, roots, slopes : Trees: Foliage (grass and plants) Details : Collision: Yes automatically generated, trees use capsule, Number of Meshes:
https://docs.unrealengine.com/marketplace/ko/product/mountain-environment-set?lang=ko
2022-06-25T08:40:49
CC-MAIN-2022-27
1656103034877.9
[]
docs.unrealengine.com
In order to test this tutorial design in hardware, a few additional steps are required. These steps include: - Building the Vitis application project to run in MicroBlaze - Creating the full design bitstream with this application present - Generating all partial bitstreams and the PROM image - Loading the PROM image in hardware and running hardware tests Because this design is the same one used for the DFX Controller tutorial in Lab 7, the same process is used to generate the software application and operate the design in hardware. Rather than reiterate these details here, the following steps will reference the appropriate steps in Lab 7. However, given that the Abstract Shell solution was used to generate some of the partial bitstreams, the bitstream generation scripts have been modified. - If the main project_dfxc_vcu118 project has been closed, reopen it within Vivado. - Turn to Lab 7, Step 2, Instruction 8 and follow this lab through Instruction 23. Bitstream generation can be done in two ways when using Abstract Shell. The first is the standard way, where a full design is open in Vivado and both full and partial bitstreams are generated. Alternatively, partial bitstreams only can be generated directly from the Abstract Shell implementation for any RM.The following two sections describe the Vivado Tcl commands used to create partial bitstreams using each of these methodologies. The set of commands are embedded in the Tcl script noted at the beginning of each section. Choose one approach and call the script for that approach before moving on to hardware validation.
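For orientation only, here is a hedged Tcl sketch of the two approaches; the checkpoint paths and the reconfigurable cell name are placeholders, and the tutorial's own bitstream generation scripts remain the reference.

# Standard flow: open the full routed configuration, write full and partial bitstreams.
open_checkpoint ./Implement/Config_1/top_routed.dcp
write_bitstream -force ./Bitstreams/config_1_full.bit
write_bitstream -force -cell <rp_cell_instance> ./Bitstreams/config_1_partial.bit
close_design

# Abstract Shell flow: open the routed Abstract Shell result for one RM and
# write only that RM's partial bitstream.
open_checkpoint ./Implement/abs_shell_rm2/abs_shell_rm2_routed.dcp
write_bitstream -force -cell <rp_cell_instance> ./Bitstreams/rm2_partial.bit
close_design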
https://docs.xilinx.com/r/2020.2-English/ug947-vivado-partial-reconfiguration-tutorial/Step-5-Validate-the-Design-in-Hardware
2022-06-25T08:14:37
CC-MAIN-2022-27
1656103034877.9
[]
docs.xilinx.com