Dataset columns: content (string, 0-557k chars), url (string, 16-1.78k chars), timestamp (timestamp[ms]), dump (string, 9-15 chars), segment (string, 13-17 chars), image_urls (string, 2-55.5k chars), netloc (string, 7-77 chars)
Activity Log Retention Should Not Be Set To Less Than 365 Days A Log Profile controls how your Activity Log is exported and retained. Since the average time to detect a breach is over 200 days, it is recommended to retain your activity log for 365 days or more in order to have time to respond to any incidents. Policy Details Build Rules Activity Log Retention should not be set to less than 365 days. JSON Query: $.resource.*.azurerm_monitor_log_profile size greater than 0 and ( $.resource.*.azurerm_monitor_log_profile[*].*[*].retention_policy size equals 0 or $.resource.*.azurerm_monitor_log_profile[*].*[*].retention_policy[*].enabled anyFalse or $.resource.*.azurerm_monitor_log_profile[*].*[*].retention_policy[?(@.days<365)] size greater than 0 ) Recommendation: The recommended solution is to set Activity Log Retention to 365 days or greater; it should not be less than 365 days. Please make sure your template has "days" under "retention_policy" set to 365 or greater. For example: "azurerm_monitor_log_profile": [ { "<monitor_log_profile_name>": [ { "name": "default", "retention_policy": [ { "days": 367, "enabled": true } ] } ] } ] Run Rule Recommendation If no activity log profile exists, follow these steps: - Log in to the Azure Portal. - Navigate to the Monitor dashboard. - Click on Activity log. - Click on 'Diagnostics Settings'. - Click on 'Looking for the legacy experience? Click here' to launch the 'Export activity log' blade. - Set 'Retention (days)' to '365' and the other parameters as per your requirements. - Click on 'Save'. If a log profile already exists, the retention days cannot be updated through the console; use the following CLI command to update the log profile: az monitor log-profiles update --name ${resourceName} --set retentionPolicy.days=365 retentionPolicy.enabled=true location=global Remediation CLI Command: az monitor log-profiles update --name ${resourceName} --set retentionPolicy.days=365 retentionPolicy.enabled=true location=global CLI Command Description: This CLI command requires 'Microsoft.Insights/LogProfiles/[Read, Write, Delete]' permission. Successful execution will update the Azure monitor log profile retention policy to 365 days. Compliance There are 11 standards that are applicable to this policy: - HIPAA - NIST CSF - CSA CCM v3.0.1 - CIS v1.1 (Azure) - ISO 27001:2013 - PIPEDA - NIST 800-53 Rev4 - SOC 2 - HITRUST CSF v9.3 - PCI DSS v3.2 - CCPA 2018
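To make the build rule above concrete, here is a minimal Python sketch (not part of the Prisma Cloud documentation) that applies the same check to a Terraform configuration exported as JSON, assuming the layout shown in the example above; the file name main.tf.json is a placeholder.

import json

def log_profile_retention_ok(tf_json, min_days=365):
    # Mirrors the JSON query: every azurerm_monitor_log_profile must have a
    # retention_policy that is enabled and keeps logs for at least min_days days.
    for resource in tf_json.get("resource", []):
        for profile_entry in resource.get("azurerm_monitor_log_profile", []):
            for _profile_name, blocks in profile_entry.items():
                for block in blocks:
                    policies = block.get("retention_policy", [])
                    if not policies:
                        return False  # retention_policy missing entirely
                    for policy in policies:
                        if not policy.get("enabled", False):
                            return False  # retention disabled
                        if policy.get("days", 0) < min_days:
                            return False  # retained for fewer than 365 days
    return True

with open("main.tf.json") as fh:  # placeholder file name
    print("compliant" if log_profile_retention_ok(json.load(fh)) else "non-compliant")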
https://docs.paloaltonetworks.com/prisma/prisma-cloud/prisma-cloud-policy-reference/configuration-policies/configuration-policies-build-phase/microsoft-azure-configuration-policies/policy_a9937384-1ee3-430c-acda-fb97e357bfcd.html
2020-09-18T19:45:42
CC-MAIN-2020-40
1600400188841.7
[]
docs.paloaltonetworks.com
Content Management meta tags Meta tags are special tags in web pages that contain information about the page but are not rendered with the page. You can define custom meta tags for content pages. Meta tags: Structurally, a meta tag consists of a tag and a name/content pair and looks similar to the following code. <meta name="generator" content="MediaWiki 1.16wmf4" /> The Content Management System is defined on a site and included on every page within that site. Configure DIV-based layouts: After you create your site, you can change the site layout with DIV tags. Content meta tag hierarchy: Page and site level meta tags are included in a content meta tag hierarchy.
https://docs.servicenow.com/bundle/newyork-servicenow-platform/page/administer/content-management/concept/c_ContentManagementMetaTags.html
2020-09-18T20:52:12
CC-MAIN-2020-40
1600400188841.7
[]
docs.servicenow.com
Bunifu User Control is a unique control in that it allows developers to build their own User Controls on top of it, using built-in features such as curving the borders or styling the control's surface. You can also easily implement your own custom designs using the .NET Graphics API, which allows you to draw more shapes and effects. Here's a sample UI inspiration combining various aspects of the control, including the circular shapes: And here's a preview of Bunifu User Control in action: Properties Here's a list of the built-in properties: (1) BackColor: Applies the control's background color. (2) BorderColor: Applies the control's border color. (3) BorderRadius: Sets the control's border or corner radius. (4) BorderThickness: Sets the control's border thickness. (5) ShowBorders: Allows showing/hiding of the control's borders. (6) Style: Applies either a round or a flat style to the control's surface. Events The built-in events are: (1) BackColorChanged: Raised whenever the control's background color has changed. (2) BorderColorChanged: Raised whenever the control's border color has changed. (3) BorderRadiusChanged: Raised whenever the control's border radius has changed. (4) BorderThicknessChanged: Raised whenever the control's border thickness has changed. (5) ShowBordersChanged: Raised whenever the control's ShowBorders property has changed. (6) StyleChanged: Raised whenever the control's Style property has changed.
https://docs.bunifuframework.com/en/articles/2762917-bunifu-user-control
2020-09-18T20:21:37
CC-MAIN-2020-40
1600400188841.7
[array(['https://downloads.intercomcdn.com/i/o/108840880/80c6ae51fdc670b70f8e2894/bunifu-user-control-sample-two.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/105436759/5ede6fa45b2c991fbf90784c/bunifu-user-control-preview-01.gif', None], dtype=object) ]
docs.bunifuframework.com
Submit Intrusion Report¶ On the left side-bar menu, click "Dashboard". The "Intrusion Reports" box is in the bottom left hand corner. Click on the name of the report you want to submit. A page like the one below will appear. Scroll down to the bottom of the page for submission options. Submit your report as text by copying and pasting it into the small box next to the submission text. You may also submit a file by uploading your document from your computer by clicking on "Choose File". Click the blue "Submit" button.
https://docs.iseage.org/iscore/user/v1.7/blue/intrusion_report.html
2020-09-18T19:25:27
CC-MAIN-2020-40
1600400188841.7
[]
docs.iseage.org
Fixture Groups allow you to group various Fixtures together. Each Fixture can be assigned to multiple Groups. Groups are used in multiple places. You can apply a Block Stack onto a whole group, reference them in spatial Blocks and more. Assigning and removing Fixture Groups is done through the Schematic view. To add a Fixture to a Group, open the property inspector and press the circle with Add. The text may be hidden when you're zoomed out too much but it will work the same. Then either select one of the existing groups or write a new name and press enter. To remove a Fixture from a Group, middle click the circle representing the Group.
https://docs.scenic.tools/guide/groups
2020-09-18T19:24:03
CC-MAIN-2020-40
1600400188841.7
[]
docs.scenic.tools
Disable or re-enable a saved Wi-Fi network If you want to stop your BlackBerry device from automatically connecting to a saved Wi-Fi network, but you don't want to delete the saved network, you can disable the network instead. - On the home screen, swipe down from the top of the screen. - Tap Wi-Fi. - Check that the Wi-Fi switch is set to On. - Tap . - Tap a network. - To disable the network, set the Enable Connections switch to Off. - To re-enable the network, set the Enable Connections switch to On. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/47561/mes1334606683736.jsp
2014-10-20T12:12:37
CC-MAIN-2014-42
1413507442497.30
[]
docs.blackberry.com
Hi GPars enthusiasts, on our path towards the next GPars release we've reached an important milestone - the beta-1 has just been made available. To get a feel of what's coming, experiment with the new dataflow constructs, try agent validators, test GPars from pure Java applications using the new Java API or just get your hair blown back by our lightning fast actors, grab it now at or use the usual integration options described at As is our good tradition, an updated User Guide is ready at You might also like to check out what's new - Have groovy times and let us know your opinion. The GPars team
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=186712082
2014-10-20T11:48:05
CC-MAIN-2014-42
1413507442497.30
[]
docs.codehaus.org
To insert the SIM card, slide it into place as shown. To remove the SIM card: Before you start using your BlackBerry device, it is recommended that you charge the battery. The battery in the box that your device came in isn't fully charged. A media card is optional. If a media card is included, it might already be inserted.
http://docs.blackberry.com/en/smartphone_users/deliverables/57817/Chunk1934839227.html
2014-10-20T11:35:04
CC-MAIN-2014-42
1413507442497.30
[]
docs.blackberry.com
Using South with django CMS¶ South is an incredible piece of software that lets you handle database migrations. This document is by no means meant to replace the excellent documentation available online, but rather to give a quick primer on why you should use South and how to get started quickly. Installation¶ As always using Django and Python is a joy. Installing South is as simple as typing: pip install South Then, simply add south to the list of INSTALLED_APPS in your settings.py file. Basic usage¶ For a very short crash course: - Instead of the initial manage.py syncdb command, simply run manage.py schemamigration --initial <app name>. This will create a new migrations package, along with a new migration file (in the form of a python script). - Run the migration using manage.py migrate. Your tables will be created in the database and Django will work as usual. - Whenever you make changes to your models.py file, run manage.py schemamigration --auto <app name> to create a new migration file. Next run manage.py migrate to apply the newly created migration. More information about South¶ Obviously, South is a very powerful tool and this simple crash course is only the very tip of the iceberg. Readers are highly encouraged to have a quick glance at the excellent official South documentation.
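For a quick recap of the steps above, here is a minimal sketch of what this looks like in a Django project; the app name myapp is a placeholder, and the management commands are shown as comments.

# settings.py -- add South to INSTALLED_APPS
INSTALLED_APPS = (
    "django.contrib.contenttypes",
    "django.contrib.auth",
    "south",   # enables schema migrations
    "myapp",   # your application (placeholder name)
)

# Then, instead of the initial syncdb:
#   python manage.py schemamigration --initial myapp
#   python manage.py migrate
# And after every change to models.py:
#   python manage.py schemamigration --auto myapp
#   python manage.py migrate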
http://docs.django-cms.org/en/develop/basic_reference/using_south.html
2014-10-20T11:21:09
CC-MAIN-2014-42
1413507442497.30
[]
docs.django-cms.org
Create a link for a PIN You can create a link for a PIN in a message, calendar entry, task, or memo. If you click the link, you can send a PIN message. When you are typing text, type pin: and the PIN. Was this information helpful? Send us your comments.
http://docs.blackberry.com/en/smartphone_users/deliverables/41695/1052915.jsp
2014-10-20T11:43:05
CC-MAIN-2014-42
1413507442497.30
[]
docs.blackberry.com
How to automatically log in the user after registration By default, BuddyForms does not offer an option to auto-login the user during registration. This is a security risk and we highly recommend against it. By default, the user must click an activation link in the activation mail to verify the correctness of the registration. By clicking the activation link, they get logged in automatically. The activation link can redirect to any page to create a new password, write a post, or add additional profile information, so you can keep the activation mail without losing the workflow and create powerful funnels with auto-login after the activation is clicked. If you want to log in users on registration without using the activation mail, you can use the following hooks. First, make sure you stop sending the activation mail. Next, auto-login the user after submission.
https://docs.buddyforms.com/article/521-how-to-automatically-login-the-user-after-registration
2021-01-16T00:19:47
CC-MAIN-2021-04
1610703497681.4
[]
docs.buddyforms.com
What is a Baseline Definition? A Baseline Definition is a set of inclusion or filter criteria based on TeamForge components such as Tracker Artifacts, Documents, Source Code Repositories (only Git and Subversion are supported), File Releases, and Binaries (only Nexus binaries are supported) of a given TeamForge project. What is a Project Baseline Definition? A Project Baseline Definition is a set of inclusion or filter criteria for a given TeamForge project based on components such as Tracker Artifacts, Documents, Source Code Repositories (only Git and Subversion are supported), File Releases, and Binaries (only Nexus binaries are supported). A TeamForge project can have only one Project Baseline Definition, which can be modified when required. What is a Project Baseline? A Project Baseline is a baseline created on a project at a given point in time. Once you have Project Baselines created, you can kick-start new projects from Project Baselines and proceed from when and where the Project Baselines were created in the past. Project Baselines are typically created using Project Baseline Definitions. You can create as many Project Baselines as required. What is an External Baseline? What is a Baseline Package? A Baseline Package is a downloadable package of physical project artifacts such as Tracker Artifacts, Documents, and so on generated from an approved Baseline or a Project Baseline. Once generated, you can download and share the package with your stakeholders. Is there a separate license for the Baseline tool in TeamForge? Yes, Baseline has its own license in TeamForge. You must have both ALM and Baseline licenses to create and work with the Baseline tool in TeamForge. For more information, see TeamForge License. What is a Configuration Item? A Configuration Item is a project artifact that can be uniquely identified. Typically, a Baseline in TeamForge can include the following configuration items: - Tracker Artifacts - Documents - Source Code Repositories (from Git/Subversion repositories, identified by Tags) - File Releases - Binaries (only Nexus Maven2 and Raw formatted Proxy, Hosted and Group types of repositories are supported) What are the permissions associated with the Baseline tool? Here's a list of Baseline-specific permissions. You can set up site-wide, project-level or global roles in TeamForge with the following permissions. Compare baselines: you can compare two baselines created at distinct points in time to view the difference between them. For more information, see Compare Baselines. Create baseline packages: you can create one or more baseline packages from approved baselines. As baseline package creation takes a long time, it runs as a backend process. The baseline database server takes care of the package creation. For more information, see Generate and Download Baseline Packages. How would I install Baseline? Baseline services can be installed when you install TeamForge. For more information on Baseline hardware requirements, see Baseline Hardware Requirements. It's highly recommended that you install the TeamForge Baseline services on a separate server as the baseline process can consume considerable CPU and database resources. For more information, see Install TeamForge in a Distributed Setup. How to enable Baseline for projects? The Baseline tool is enabled by default for any new project created after you install Baseline on your site. However, you must enable Baseline for old projects that were created before Baseline installation.
To add the Baseline tool to an existing TeamForge project: - Log on to TeamForge and select a project from the My Workspace menu. - Select Project Admin > Tools from the Project Home menu. - Select the Baselines check box and click Save. (Figure: Enable Baselines for projects created before Baseline installation.) A new tool, Baselines, is added to the Project Home menu. (Figure: Baselines tool added to the Project Home menu.) To use TeamForge Baselines, a TeamForge user must have the Baseline license and the required baseline permissions to perform various functions. For example, a user with the VIEW ONLY permission can view baselines and baseline definitions, search for baselines and baseline definitions, and compare baselines. For more information about baseline permissions, see What are the permissions associated with the Baseline tool?
https://docs.collab.net/teamforge200/baseline-overview.html
2021-01-16T00:21:56
CC-MAIN-2021-04
1610703497681.4
[array(['images/status-success-small.png', None], dtype=object) array(['images/status-success-small.png', None], dtype=object) array(['images/status-success-small.png', None], dtype=object) array(['images/status-success-small.png', None], dtype=object) array(['images/baseline-workflow-2.png', None], dtype=object)]
docs.collab.net
TeamForge site administration involves managing several aspects of TeamForge, including setting up users and role-based access control, managing users, managing SCM and other integrated applications, setting up SSL, managing the database and datamart, managing projects, and so on. Here's a list of site administration tasks. - Manage site-wide roles - Manage SCM tools - Manage global project roles - Manage projects and project groups - Manage users - Manage email settings - Monitor the site - Integrate and link external applications - Set up SSL - Manage the database and datamart
https://docs.collab.net/teamforge200/siteadminoverview.html
2021-01-16T00:24:10
CC-MAIN-2021-04
1610703497681.4
[]
docs.collab.net
Model Actions¶ Model actions allow you to trigger a sequence of Action Framework actions that run automatically when model-level events occur. This means the actions don't need to be tied to a button or intentionally activated by the end-user, but are activated "behind the scenes" when specific things happen on the page. Model actions are useful for creating simplified, automated workflows, even for complex processes. Create a Model Action¶ - In the App Elements pane, click the Models tab. - Click the model name. - Below the model name, click Actions. - Click Add to create a new model action. - Set initiating events (and properties of those events) in the Initiating events tab. - Add specific Action Framework sequences using the Actions tab. - Click Save. Using Model Actions¶ The following are a couple of examples where model actions create a streamlined user experience. Update Roll-up Summaries¶ The situation¶ A Table component is connected to an object that collects sales opportunities. The table displays: - the opportunity name - the account name - the amount of the opportunity - the close date for the opportunity Each opportunity has a drawer with a table component where the end user can add products to an "ordered product" list, with fields for the price book ID, quantity of the product, sales price, and total price. The total price is a UI-only field: {{UnitPrice}}*{{Quantity}} This field updates in real time when changes are made to the quantity or the sales price. The challenge¶ The end user would like to see the amount on the opportunity table automatically update as products are added to the product list, without having to refresh the page. Push Data from UI-Only Fields to Database Fields¶ The situation¶ A Table component is connected to an object that collects sales opportunities. The table displays: - the opportunity name - the account name - the amount of the opportunity - the close date for the opportunity Each opportunity has a drawer with a table component connected to a products object. Here, the end user can add products to an "ordered product" list. This list includes the following fields: - price book ID - quantity of the product - sales price - discount (a UI-only field: {{DiscountUI}}) - total price (a UI-only field: {{UnitPrice}}*{{Quantity}}*((100-{{DiscountUi}})/100)) - current price The challenge¶ Often builders employ UI-only fields so that end users can see (in real time) how their changes will affect data, for example, how changes in the UI-only discount field shift the (also UI-only) total price. But if the discount is confirmed, then that UI-only total price needs to be pushed back to the actual discount field in the database (not the UI-only discount field used for the onscreen calculation) when the user clicks Save. This ensures that the agreed-upon discount will be saved to the external system. Properties¶ Initiating events tab¶ Initiating events are changes to Skuid models that begin a sequence of Action Framework actions. Click Add to create a new model action, then select the initiating event for it: Initiating events Model saved: When the model is saved, whether as the result of user input or an Action Framework action. Model requeried: When the model is queried, which can occur on page load or by an Action Framework action. Note This event is published on page load, regardless of the model's Query on Page Load property value, because Skuid must retrieve the model's metadata.
Model cancelled: When changes to the model (as the result of user input or an Action Framework action) are cancelled. New row created in model: When a new data row is added to the model, as the result of user input or an Action Framework action. Row in model updated: When one or more fields are updated as the result of user input or an Action Framework trigger. You may select whether this initiating event applies to specific fields or to all fields in the model. Row in model marked for deletion: When a row in the model is marked for deletion as the result of user input or an Action Framework trigger. Row in model un-marked for deletion: When a row previously marked for deletion is unmarked, as the result of user input or an Action Framework trigger. Changes to model conditions can also be selected as initiating events. Model condition changed: If the value of a model's condition changes (even in the background, unactivated). Model condition activated: When a model's condition is activated, whether or not the model affected by it has been requeried. Model condition deactivated: When a model's condition is deactivated, whether or not the model affected by it has been requeried. Model condition reset: When the condition's value is reset. This can only be done through Skuid's JavaScript API. - When which conditions are changed? For each of the previous condition-related events, select which conditions will trigger the sequence. Options include: - Any filterable condition - One or more named conditions. Actions tab¶ Select the model actions: - Actions: - Add action: Click to add actions and set them. - Convert to action sequence: Click to establish an action sequence. - Display Name: Give the sequence a memorable name. - Description: Describe the goal or purpose of the sequence. - Action type: Use the Action Framework to create actions.
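As a quick numeric check of the UI-only total price formula in the example above, here is a small Python sketch; the argument names mirror the merge fields and the values are made up.

def total_price(unit_price, quantity, discount_ui):
    # UI-only total price: {{UnitPrice}} * {{Quantity}} * ((100 - {{DiscountUi}}) / 100)
    return unit_price * quantity * ((100 - discount_ui) / 100)

# A $50 product, quantity 4, with a 10% discount:
print(total_price(50, 4, 10))  # -> 180.0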
https://docs.skuid.com/latest/v2/en/skuid/models/model-actions.html
2021-01-16T00:47:03
CC-MAIN-2021-04
1610703497681.4
[]
docs.skuid.com
Tutorial: Configuring a Lambda function to access Amazon ElastiCache in an Amazon VPC In this tutorial, you do the following: Create an Amazon ElastiCache cluster in your default Amazon Virtual Private Cloud, create a Lambda function that accesses the cluster, then invoke the function and verify that it accessed the ElastiCache cluster in your VPC. For details on using Lambda with Amazon VPC, see Configuring a Lambda function to access resources in a VPC. Create an execution role with the following properties: Trusted entity – Lambda. Permissions – AWSLambdaVPCAccessExecutionRole. Role name – lambda-vpc-role. The AWSLambdaVPCAccessExecutionRole has the permissions that the function needs to manage network connections to a VPC. Create an ElastiCache cluster Create an ElastiCache cluster in your default VPC. Run the following AWS CLI command to create a Memcached cluster. $ aws elasticache create-cache-cluster --cache-cluster-id ClusterForLambdaTest --cache-node-type cache.t3.medium --engine memcached --num-cache-nodes 1 --security-group-ids sg-0123a1b123456c1de You can look up the default VPC security group in the VPC console under Security Groups. Your example Lambda function will add and retrieve an item from this cluster. Write down the configuration endpoint for the cache cluster that you launched. You can get this from the Amazon ElastiCache console. You will specify this value in your Lambda function code in the next section. Create a deployment package The following example Python code reads and writes an item to your ElastiCache cluster. Example app.py (the function code is omitted from this excerpt; a rough sketch appears after this tutorial's cleanup steps). Dependencies pymemcache – The Lambda function code uses this library to create a HashClient object to set and get items from memcache. elasticache-auto-discovery – The Lambda function uses this library to get the nodes in your Amazon ElastiCache cluster. Install the dependencies with pip and create a deployment package. For instructions, see Deploy Python Lambda functions with .zip file archives. Create the Lambda function Create the Lambda function with the create-function command. $ aws lambda create-function --function-name AccessMemCache --timeout 30 --memory-size 1024 \ --zip-file fileb://function.zip --handler app.handler --runtime python3.8 \ --role arn:aws:iam::123456789012:role/lambda-vpc-role \ --vpc-config SubnetIds=subnet-0532bb6758ce7c71f,subnet-d6b7fda068036e11f,SecurityGroupIds=sg-0897d5f549934c2fb You can find the subnet IDs and the default security group ID of your VPC from the VPC console. Test the Lambda function In this step, you invoke the Lambda function manually using the invoke command. When the Lambda function runs, it generates a UUID and writes it to the ElastiCache cluster that you specified in your Lambda code. The Lambda function then retrieves the item from the cache. Invoke the Lambda function with the invoke command. $ aws lambda invoke --function-name AccessMemCache output.txt Verify that the Lambda function executed successfully as follows: Review the output.txt file. Review the results in the AWS Lambda console. Verify the results in CloudWatch Logs. Now that you have created a Lambda function that accesses an ElastiCache cluster in your VPC, you can have the function invoked in response to events. For information about configuring event sources and examples, see Using AWS Lambda with other services. Clean up your resources You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you are no longer using, you prevent unnecessary charges to your AWS account. To delete the Lambda function Open the Functions page of the Lambda console.
Select the function that you created. Choose Actions, Delete. Choose Delete. To delete the execution role Open the Roles page of the IAM console. Select the execution role that you created. Choose Delete role. Choose Yes, delete. To delete the ElastiCache cluster Open the Memcached page of the ElastiCache console. Select the cluster you created. Choose Actions, Delete. Choose Delete.
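The app.py code itself is missing from this excerpt. Below is a rough, unofficial sketch of what such a handler could look like, based only on the description above (a pymemcache HashClient plus elasticache-auto-discovery); the configuration endpoint is a placeholder and the actual AWS sample may differ in its details.

import uuid

import elasticache_auto_discovery
from pymemcache.client.hash import HashClient

# Placeholder: replace with the configuration endpoint you noted earlier.
ELASTICACHE_CONFIG_ENDPOINT = "<cluster-config-endpoint>:11211"

# Discover the cluster nodes once, at cold start.
nodes = elasticache_auto_discovery.discover(ELASTICACHE_CONFIG_ENDPOINT)
nodes = [(ip, int(port)) for _name, ip, port in nodes]
memcache_client = HashClient(nodes)

def handler(event, context):
    # Write a random UUID to the cache, read it back, and report the result.
    value = str(uuid.uuid4())
    memcache_client.set("lambda-test-key", value)
    stored = memcache_client.get("lambda-test-key")
    if stored is None or stored.decode("utf-8") != value:
        raise RuntimeError("value written to ElastiCache was not read back")
    return "Success: wrote and read back " + value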
https://docs.aws.amazon.com/lambda/latest/dg/services-elasticache-tutorial.html
2021-01-16T00:14:27
CC-MAIN-2021-04
1610703497681.4
[]
docs.aws.amazon.com
Use Inline JavaScript to Create a Slider Field Renderer¶ This tutorial shows you how to make a number field render as a movable slider. There is some JavaScript involved, but you can copy and paste it all. In this example, we're going to make the "Number of Employees" field on the Account Detail Page show up as an adjustable slider. Step 3: For the new resource, choose In-Line (Snippet) as the Resource Location, and name your snippet.¶ - Click on the new resource. - For Resource Location, select In-Line (Snippet). - Name your snippet. We'll call this one inlineSnippet. - Click on the script icon to add the Snippet Body. Step 4: Copy and paste the following code into the snippet body editor.¶ This snippet will make a field render as a slider that slides from 1-10 with a step size of 1. You can change the code to suit your number field (e.g. make max:100 and step:10, etc.). Once you've pasted in the code, you can click to close the editor. Step 6: Select the appropriate number field and drag it into your page.¶ In this example, we're using the Number of Employees field from the Account object, but you can use another number field if you want. - Click Models > AccountData > Fields. - Select the field you want to include and drag it into a component on your page, such as a field editor or table. Step 7: Choose Custom as the Field Renderer.¶ - Click on your number field. - Choose Custom as the Field Renderer. - For Render Snippet, enter the exact name of the inline Snippet you created in Step 3 (e.g. inlineSnippet). Success! Your Slider will be displayed. Just drag to the desired location and then click Save.¶ Note If you have a slider field in a component that's in a tabset, it won't break, but it won't look as nice. Want to see the numbers on this slider? Check out this discussion in the Skuid community.
https://docs.skuid.com/v11.1.3/en/tutorials/javascript/field-renderer-slider.html
2021-01-16T00:55:14
CC-MAIN-2021-04
1610703497681.4
[]
docs.skuid.com
ESP Local Control¶ Overview¶ The ESP Local Control (esp_local_ctrl) component in ESP-IDF provides the capability to control an ESP device over Wi-Fi + HTTPS or BLE. It provides access to application defined properties that are available for reading / writing via a set of configurable handlers. Initialization of the esp_local_ctrl service over BLE transport is performed as follows: esp_local_ctrl_config_t config = { .transport = ESP_LOCAL_CTRL_TRANSPORT_BLE, .transport_config = { .ble = & (protocomm_ble_config_t) { .device_name = SERVICE_NAME, .service_uuid = { /* LSB <--------------------------------------- * ---------------------------------------> MSB */ 0x21, 0xd5, 0x3b, 0x8d, 0xbd, 0x75, 0x68, 0x8a, 0xb4, 0x42, 0xeb, 0x31, 0x4a, 0x1e, 0x98, 0x3d } } }, )); Similarly for HTTPS transport: /* Set the configuration */ httpd_ssl_config_t https_conf = HTTPD_SSL_CONFIG_DEFAULT(); /* Load server certificate */ extern const unsigned char cacert_pem_start[] asm("_binary_cacert_pem_start"); extern const unsigned char cacert_pem_end[] asm("_binary_cacert_pem_end"); https_conf.cacert_pem = cacert_pem_start; https_conf.cacert_len = cacert_pem_end - cacert_pem_start; /* Load server private key */ extern const unsigned char prvtkey_pem_start[] asm("_binary_prvtkey_pem_start"); extern const unsigned char prvtkey_pem_end[] asm("_binary_prvtkey_pem_end"); https_conf.prvtkey_pem = prvtkey_pem_start; https_conf.prvtkey_len = prvtkey_pem_end - prvtkey_pem_start; esp_local_ctrl_config_t config = { .transport = ESP_LOCAL_CTRL_TRANSPORT_HTTPD, .transport_config = { .httpd = &https_conf }, )); Creating a property¶ Now that we know how to start the esp_local_ctrl service, let's add a property to it. Each property must have a unique name (string), a type (e.g. enum), flags (bit fields) and size. The size is to be kept 0 if we want our property value to be of variable length (e.g. if it is a string or bytestream). For fixed length property value data-types, like int, float, etc., setting the size field to the right value helps esp_local_ctrl to perform internal checks on arguments received with write requests. The interpretation of the type and flags fields is entirely up to the application, hence they may be used as enumerations, bitfields, or even simple integers. One way is to use type values to classify properties, while using flags to specify characteristics of a property. Here is an example property which is to function as a timestamp. It is assumed that the application defines TYPE_TIMESTAMP and READONLY, which are used for setting the type and flags fields here. /* Create a timestamp property */ esp_local_ctrl_prop_t timestamp = { .name = "timestamp", .type = TYPE_TIMESTAMP, .size = sizeof(int32_t), .flags = READONLY, .ctx = func_get_time, .ctx_free_fn = NULL }; /* Now register the property */ esp_local_ctrl_add_property(&timestamp); Also notice that there is a ctx field, which is set to point to some custom func_get_time(). This can be used inside the property get / set handlers to retrieve the timestamp. Here is an example of a get_prop_values() handler, which is used for retrieving the timestamp. static esp_err_t get_property_values(size_t props_count, const esp_local_ctrl_prop_t *props, esp_local_ctrl_prop_val_t *prop_values, void *usr_ctx) { for (uint32_t i = 0; i < props_count; i++) { ESP_LOGI(TAG, "Reading %s", props[i].name); if (props[i].type == TYPE_TIMESTAMP) { /* Obtain the timer function from ctx */ int32_t (*func_get_time)(void) = props[i].ctx; /* Use static variable for saving the value.
* This is essential because the value has to be * valid even after this function returns. * Alternative is to use dynamic allocation * and set the free_fn field */ static int32_t ts = func_get_time(); prop_values[i].data = &ts; } } return ESP_OK; } Here is an example of set_prop_values() handler. Notice how we restrict from writing to read-only properties. static esp_err_t set_property_values(size_t props_count, const esp_local_ctrl_prop_t *props, const esp_local_ctrl_prop_val_t *prop_values, void *usr_ctx) { for (uint32_t i = 0; i < props_count; i++) { if (props[i].flags & READONLY) { ESP_LOGE(TAG, "Cannot write to read-only property %s", props[i].name); return ESP_ERR_INVALID_ARG; } else { ESP_LOGI(TAG, "Setting %s", props[i].name); /* For keeping it simple, lets only log the incoming data */ ESP_LOG_BUFFER_HEX_LEVEL(TAG, prop_values[i].data, prop_values[i].size, ESP_LOG_INFO); } } return ESP_OK; } For complete example see protocols/esp_local_ctrl Client Side Implementation¶ The client side implementation will have establish a protocomm session with the device first, over the supported mode of transport, and then send and receive protobuf messages understood by the esp_local_ctrl service. The service will translate these messages into requests and then call the appropriate handlers (set / get). Then, the generated response for each handler is again packed into a protobuf message and transmitted back to the client. See below the various protobuf messages understood by the esp_local_ctrl service: get_prop_count : This should simply return the total number of properties supported by the service get_prop_values : This accepts an array of indices and should return the information (name, type, flags) and values of the properties corresponding to those indices set_prop_values : This accepts an array of indices and an array of new values, which are used for setting the values of the properties corresponding to the indices Note that indices may or may not be the same for a property, across multiple sessions. Therefore, the client must only use the names of the properties to uniquely identify them. So, every time a new session is established, the client should first call get_prop_count and then get_prop_values, hence form an index to name mapping for all properties. Now when calling set_prop_values for a set of properties, it must first convert the names to indexes, using the created mapping. As emphasized earlier, the client must refresh the index to name mapping every time a new session is established with the same device. The various protocomm endpoints provided by esp_local_ctrl are listed below: API Reference¶ Header File¶ Functions¶ - const esp_local_ctrl_transport_t * esp_local_ctrl_get_transport_ble(void)¶ Function for obtaining BLE transport mode. - const esp_local_ctrl_transport_t * esp_local_ctrl_get_transport_httpd(void)¶ Function for obtaining HTTPD transport mode. - esp_err_t esp_local_ctrl_start(const esp_local_ctrl_config_t *config)¶ Start local control service. - Return ESP_OK : Success ESP_FAIL : Failure - Parameters [in] config: Pointer to configuration structure - esp_err_t esp_local_ctrl_add_property(const esp_local_ctrl_prop_t *prop)¶ Add a new property. This adds a new property and allocates internal resources for it. 
The total number of properties that could be added is limited by configuration option max_properties - Return ESP_OK : Success ESP_FAIL : Failure - Parameters [in] prop: Property description structure - esp_err_t esp_local_ctrl_remove_property(const char *name)¶ Remove a property. This finds a property by name, and releases the internal resources which are associated with it. - Return ESP_OK : Success ESP_ERR_NOT_FOUND : Failure - Parameters [in] name: Name of the property to remove - const esp_local_ctrl_prop_t * esp_local_ctrl_get_property(const char *name)¶ Get property description structure by name. This API may be used to get a property’s context structure esp_local_ctrl_prop_twhen its name is known - Return Pointer to property NULL if not found - Parameters [in] name: Name of the property to find - esp_err_t esp_local_ctrl_set_handler(const char *ep_name, protocomm_req_handler_t handler, void *user_ctx)¶ Register protocomm handler for a custom endpoint. This API can be called by the application to register a protocomm handler for an endpoint after the local control service has started. - Note In case of BLE transport the names and uuids of all custom endpoints must be provided beforehand as a part of the protocomm_ble_config_tstructure set in esp_local_ctrl_config_t, and passed to esp_local_ctrl_start(). - Return ESP_OK : Success ESP_FAIL : Failure - Parameters [in] ep_name: Name of the endpoint [in] handler: Endpoint handler function [in] user_ctx: User data Unions¶ - union esp_local_ctrl_transport_config_t¶ - #include <esp_local_ctrl.h> Transport mode (BLE / HTTPD) configuration. Public Members - esp_local_ctrl_transport_config_ble_t * ble¶ This is same as protocomm_ble_config_t. See protocomm_ble.hfor available configuration parameters. - esp_local_ctrl_transport_config_httpd_t * httpd¶ This is same as httpd_ssl_config_t. See esp_https_server.hfor available configuration parameters. Structures¶ - struct esp_local_ctrl_prop¶ Property description data structure, which is to be populated and passed to the esp_local_ctrl_add_property()function. Once a property is added, its structure is available for read-only access inside get_prop_values()and set_prop_values()handlers. Public Members - size_t size¶ Size of the property value, which: if zero, the property can have values of variable size if non-zero, the property can have values of fixed size only, therefore, checks are performed internally by esp_local_ctrl when setting the value of such a property - uint32_t flags¶ Flags set for this property. This could be a bit field. A flag may indicate property behavior, e.g. read-only / constant - void * ctx¶ Pointer to some context data relevant for this property. This will be available for use inside the get_prop_valuesand set_prop_valueshandlers as a part of this property structure. When set, this is valid throughout the lifetime of a property, till either the property is removed or the esp_local_ctrl service is stopped. - struct esp_local_ctrl_prop_val¶ Property value data structure. This gets passed to the get_prop_values()and set_prop_values()handlers for the purpose of retrieving or setting the present value of a property. Public Members - struct esp_local_ctrl_handlers¶ Handlers for receiving and responding to local control commands for getting and setting properties. 
Public Members - esp_err_t (* get_prop_values)(size_t props_count, const esp_local_ctrl_prop_t props[], esp_local_ctrl_prop_val_t prop_values[], void *usr_ctx)¶ Handler function to be implemented for retrieving current values of properties. - Note If any of the properties have fixed sizes, the size field of corresponding element in prop_valuesneed to be set - current values for which have been requested by the client [out] prop_values: Array of empty property values, the elements of which need to be populated with the current values of those properties specified by props argument [in] usr_ctx: This provides value of the usr_ctxfield of esp_local_ctrl_handlers_tstructure - esp_err_t (* set_prop_values)(size_t props_count, const esp_local_ctrl_prop_t props[], const esp_local_ctrl_prop_val_t prop_values[], void *usr_ctx)¶ Handler function to be implemented for changing values of properties. - Note If any of the properties have variable sizes, the size field of the corresponding element in prop_valuesmust be checked explicitly before making any assumptions on the size. - values for which the client requests to change [in] prop_values: Array of property values, the elements of which need to be used for updating those properties specified by props argument [in] usr_ctx: This provides value of the usr_ctxfield of esp_local_ctrl_handlers_tstructure - void * usr_ctx¶ Context pointer to be passed to above handler functions upon invocation. This is different from the property level context, as this is valid throughout the lifetime of the esp_local_ctrlservice, and freed only when the service is stopped. - struct esp_local_ctrl_config¶ Configuration structure to pass to esp_local_ctrl_start() Public Members - const esp_local_ctrl_transport_t * transport¶ Transport layer over which service will be provided - esp_local_ctrl_transport_config_t transport_config¶ Transport layer over which service will be provided - esp_local_ctrl_handlers_t handlers¶ Register handlers for responding to get/set requests on properties Type Definitions¶ - typedef struct esp_local_ctrl_prop esp_local_ctrl_prop_t¶ Property description data structure, which is to be populated and passed to the esp_local_ctrl_add_property()function. Once a property is added, its structure is available for read-only access inside get_prop_values()and set_prop_values()handlers. - typedef struct esp_local_ctrl_prop_val esp_local_ctrl_prop_val_t¶ Property value data structure. This gets passed to the get_prop_values()and set_prop_values()handlers for the purpose of retrieving or setting the present value of a property. - typedef struct esp_local_ctrl_handlers esp_local_ctrl_handlers_t¶ Handlers for receiving and responding to local control commands for getting and setting properties. - typedef struct esp_local_ctrl_transport esp_local_ctrl_transport_t¶ Transport mode (BLE / HTTPD) over which the service will be provided. This is forward declaration of a private structure, implemented internally by esp_local_ctrl. - typedef struct protocomm_ble_config esp_local_ctrl_transport_config_ble_t¶ Configuration for transport mode BLE. This is a forward declaration for protocomm_ble_config_t. To use this, application must set CONFIG_BT_BLUEDROID_ENABLED and include protocomm_ble.h. - typedef struct httpd_ssl_config esp_local_ctrl_transport_config_httpd_t¶ Configuration for transport mode HTTPD. This is a forward declaration for httpd_ssl_config_t. 
To use this, application must set CONFIG_ESP_HTTPS_SERVER_ENABLE and include esp_https_server.h - typedef struct esp_local_ctrl_config esp_local_ctrl_config_t¶ Configuration structure to pass to esp_local_ctrl_start()
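The per-session index handling described in the Client Side Implementation section above can be sketched roughly in Python. This is not an official client; send_get_prop_count, send_get_prop_values and send_set_prop_values are hypothetical stand-ins for whatever protocomm/protobuf transport the real client uses.

class LocalCtrlClient:
    """Rough sketch of the name-to-index mapping that the text above describes."""

    def __init__(self, transport):
        self.transport = transport   # hypothetical protocomm session wrapper
        self.name_to_index = {}

    def refresh_mapping(self):
        # Indices are not guaranteed stable across sessions, so rebuild the
        # mapping every time a new session is established.
        count = self.transport.send_get_prop_count()
        props = self.transport.send_get_prop_values(list(range(count)))
        self.name_to_index = {prop["name"]: i for i, prop in enumerate(props)}

    def get(self, *names):
        # Address properties by name; translate to the current session's indices.
        return self.transport.send_get_prop_values(
            [self.name_to_index[name] for name in names])

    def set(self, values_by_name):
        indices = [self.name_to_index[name] for name in values_by_name]
        self.transport.send_set_prop_values(indices, list(values_by_name.values()))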
https://docs.espressif.com/projects/esp-idf/zh_CN/latest/esp32/api-reference/protocols/esp_local_ctrl.html
2021-01-16T00:00:01
CC-MAIN-2021-04
1610703497681.4
[]
docs.espressif.com
Introduction to DHCP Policies Applies To: Windows Server 2012 The DHCP Server role in Windows Server® 2012 supports policy based assignment (PBA), which lets you assign IP addresses and options to DHCP clients based on rules that you define. Why DHCP PBA? Consider the following scenarios: A subnet has a mix of different types of clients: desktop computers, printers, IP phones, and other devices. You want different types of clients to get IP addresses from different IP address ranges within the subnet. This is possible using DHCP policies if the device types can be distinguished, for example by vendor class or MAC address prefix. For example: Printers can get IP addresses from 10.10.10.1 to 10.10.10.9. IP phones can get IP addresses from 10.10.10.10 to 10.10.10.49. Desktop computers can be assigned IP addresses from 10.10.10.50 to 10.10.10.239. Additional devices can be assigned IP addresses of 10.10.10.240 to 10.10.10.254. By specifying a different IP address range for different device types, you can more easily identify and manage devices on the network. In a subnet which has a mix of wired and mobile computers, you might want to assign a shorter, 4 hour lease duration to mobile computers and a longer, 4 day lease duration to wired computers. You want to control who gets access to the network by providing a DHCP lease to only a known set of clients based on MAC address. Employees bring in their own devices such as smartphones and tablets to work and you want to manage network traffic or control network access based on device type. You want to provide a different set of scope options to different types of devices. For example, IP phones can get a different Boot Server Host Name (TFTP server) and Bootfile Name option. DHCP policies provide a very useful tool to achieve these goals. See the following example. In this example: Subnet A contains DHCP client devices of several different types including workstations, printers, and IP phones. A DHCP server on another subnet is configured to provide leases to these devices from scope A. Policies are configured at the scope level to control the IP address range and at the server level to specify lease duration. DHCP client requests are processed as follows: A client on subnet A submits a DHCPREQUEST that is sent to the DHCP server via DHCP relay. The client's vendor class and MAC prefix are included in the DHCPREQUEST packet along with the gateway IP address (GIADDR). The DHCP server uses the GIADDR to determine that the client requires a lease from scope A, and begins processing policies in that scope. Since scope B does not apply, these policies are ignored. Based on the vendor class and MAC prefix values provided, the client request matches the conditions of policy A3. After all scope policies are processed, server level policies are processed and the client also matches the conditions of policy 1. After all policies are processed, the DHCP server returns an IP address configuration to the client using the settings specified in policies A3 and 1. Based on the client's MAC address it is determined that the device is a printer (it matches policy A3). It is assigned the first available IP address in the IP address range 10.10.10.1 to 10.10.10.9, with a lease duration of 14 days. In Windows Server 2008 R2 and previous operating systems, if you want to specify the IP address range for a specific set of clients or devices, or assign different option values based on device type, the only way to achieve this is to configure a scope with individual reservations. This method can require high effort, and is difficult to manage on an ongoing basis.
DHCP policies in Windows Server 2012 provide much more flexibility to assign unique IP addresses and options to specific DHCP clients in a single subnet, or in multiple subnets. Note See Policy processing to understand how settings are applied when they are configured in multiple policies, in reservations, at the scope level, or at the server level. How DHCP PBA works DHCP policies are rules that you can define for DHCP clients. You can define a single policy, or several. Characteristics of DHCP policies include: Policy level: Policies can apply at the server level or the scope level. Server level policies are processed for all DHCP client requests received by the server. Scope level policies are processed only for DHCP client requests that apply to a specific scope. Processing order: Each policy has an associated processing order that is unique within a server or scope. Policies with a lower numbered processing order are evaluated before higher numbered policies. If both scope and server level policies apply to a client, the scope level policies are always processed before any server level policies. Conditions: The conditions specified in a policy enable you to evaluate clients based on fields that are present in the DHCP client request. If a client request matches the conditions in the policy, the settings associated with the policy will be applied to the client by the DHCP server when it responds to the DHCP request. Settings: Settings are network configuration parameters (e.g., IP address, options, lease duration) that are provided to DHCP clients in the DHCP server response. Settings enable you to group clients by applying the same set of network parameters to them. Enabled/Disabled: Policies at the scope or server level can also be enabled or disabled. A policy that is disabled is skipped when processing incoming DHCP client requests. To create a policy at the server level using the Windows interface, open the DHCP console, navigate to IPv4, right-click Policies and then click New Policy. If other server level policies exist, they are displayed in the details pane and can be modified by right-clicking the policy and then clicking Move Up, Move Down, Disable, Enable, Delete, or Properties. To create a policy at the scope level using the Windows interface, open the DHCP console, navigate to an IPv4 scope, right-click Policies and then click New Policy. If other scope level policies exist, they are displayed along with any server level policies that exist. You can modify existing scope level policies by right-clicking them. You cannot modify a server level policy at the scope level. You must provide a unique policy name when creating a new policy. A policy description is optional. A policy must have at least one condition. Policy settings are optional, but DNS settings are included by default so it is not possible to have a policy with no settings. To view DNS settings for a policy, right-click the policy, click Properties, and then click the DNS tab. DHCP policy conditions and settings The following conditions and settings are available when creating a policy: Conditions: Vendor Class, User Class, MAC Address, Client Identifier, Relay Agent Information. Settings: IP Address Range, Standard DHCP Options, Vendor Specific DHCP Options. Conditions In Windows Server 2012, you can specify five conditional criteria to evaluate and group DHCP clients: MAC Address: The media access control (MAC) address or link-layer address of the client.
Vendor Class: Vendor managed DHCP option assignments. User Class: Non-standard DHCP option assignments. Client Identifier: The client identifier (ClientID) is typically a MAC address. In the case of PXE clients, it can be the GUID of the network interface card (NIC). Relay Agent Information, including sub-options: Agent Circuit ID, Agent Remote ID, and Subscriber ID: Information inserted into DHCP client requests by a DHCP relay using option 82. The operators that can be used with these conditions are equals and not equals. You can also use a trailing wildcard with MAC Address, Vendor Class, User Class and Client Identifier conditions to perform a partial match. By combining equals or not equals with a wildcard in the condition, you can effectively achieve a starts with or does not start with condition. You can either have a single condition in a policy or a set of conditions which can be OR'ed or AND'ed. Important Using multiple criterion values: When you list multiple values for a single criterion, such as "User Class Equals (valueA, valueB, valueC)" or "MAC Address Not Equals (value1, value2, value3)", these values are interpreted as being OR'd if the EQ (equals) operator is used, but they are AND'd if the NEQ (not equals) operator is used. An incoming client request for an IP address and options from the DHCP server matches a policy if the client satisfies the cumulative set of conditions in the policy. A client that does not match the conditions of any policy is granted an IP address lease from the rest of the IP address range of the scope, exclusive of all the policy IP address ranges, and is assigned the default option values configured in the scope. Settings In Windows Server 2012, three types of policy settings are available that can be applied to DHCP clients: IP Address Range: A specified sub-range of IP addresses within the scope range. The IP address range setting cannot be specified in a server-level policy. Standard DHCP Options: Standard DHCP options like default gateway (003 Router) and preferred DNS servers list (006 DNS Servers). Vendor Specific DHCP Options: Vendor managed DHCP option assignments. In addition, you can also specify the following settings in policy properties: DNS settings: DNS registration and Name Protection settings can be specified on the DNS tab. Lease duration: The lease duration can be specified on the General tab. See the following example. When a client matches the conditions of a policy, the DHCP server responds to the client and includes the settings in that policy, provided these settings are not already applied in a higher priority policy or using a reservation. See Policy processing for more information. A policy can specify an IP address range with no options, or it can specify options with no IP address range, or it can specify both, or it can specify neither. A policy can also specify multiple standard options, vendor-specific options, or both. Policy processing Since you can configure multiple policies at both the scope level and server level, each policy is assigned a processing order. The processing order can also be modified, assuming more than a single policy exists. The following rules apply: When processing DHCP client requests, the DHCP server evaluates each client request against the conditions in all applicable policies, based on their processing order. Scope level policies are processed first by the DHCP server, followed by server wide policies.
Theoretically, a client can match the conditions of several scope policies and also several server policies. If a client satisfies the conditions of more than one policy, it will get the combined settings from all policies that it matched. If the same option setting is provided in multiple policies, the client will use the setting from the first policy that is processed. For example, assume that policy-1 has an option value for 003 Router and policy-2 has an option value for 006 DNS Servers, and a client request matches both policies. The DHCP server will assign a default gateway value (003 Router) using policy-1 and a DNS server value using policy-2. However, if policy-1 has the higher processing priority (a value of "1") and also has an option value for DNS server, the client will get both the router and DNS server option values from policy-1. The DNS server option value in policy-2 is ignored because policy-2 has a lower processing priority (a value of "2"). A policy does not need to be configured with all option values that you have already configured at the scope or server level. If a policy client has requested an option which is not present in the policy but has been configured in scope level or server level options, these options are applied to the client in the server response. However, if you wish to specify options for certain clients, you can include these option settings in policies and they will have a higher priority than scope or server level options. The only type of option setting that has a higher priority than those configured in policies is an option that you configure for a reservation. The priority for option settings is reservation > scope policy > server policy > scope-level > server-level. See the following figure. If a DHCP client obtains option settings because it matched a reservation, it will ignore the same option settings if they are present in any scope or server policies, or configured globally at the scope or server level. Deploying DHCP policies A common reason to deploy DHCP policies is to provide unique settings to different types of devices on the network. Two common methods used to identify device type include: Vendor class: A text string sent in option 60 by most DHCP clients that identifies the vendor and therefore the type of the device. MAC address prefix: The first three bytes of a MAC address are called the organizationally unique identifier (OUI), and can be used to identify the vendor or manufacturer of a device. For example, you might decide to group DHCP clients on the network by device type. After assigning IP address ranges to devices, you can configure your router to handle network traffic from each IP address range differently. In effect, you can configure network access control for a class of devices using DHCP policies. You might also manage network traffic by configuring route options such as default gateway (option 003) and classless static routes (option 121) based on device type. It is often desirable to configure a short lease duration for wireless devices, and grant a longer lease to wired devices. Since wireless access points are typically capable of behaving as a DHCP relay agent, or are connected to a DHCP relay, they can provide DHCP option 82 (DHCP relay agent). Presence of a specific value in the relay agent option can therefore indicate that the DHCP client is a wireless device.
With DHCP policies, you can configure a policy with a condition based on the relay agent information option value that identifies wireless clients and provides a shorter lease duration. Other DHCP clients in the scope will continue to be provided with the longer lease duration configured at the scope level. These scenarios and others are discussed in detail in this guide. See also Scenario: Manage the network configuration of virtual machines Scenario: Secure a subnet to a specific set of clients Scenario: Customize lease duration based on device type
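To make the precedence and processing-order rules above easier to follow, here is a minimal Python sketch of the idea. It is an illustrative model only, not Microsoft's DHCP server code; the Policy class, the option codes used, and all values are invented for the example.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Policy:
    name: str
    order: int                                               # processing order within its level
    options: Dict[int, str] = field(default_factory=dict)    # option code -> value

def effective_options(reservation: Dict[int, str],
                      scope_policies: List[Policy],
                      server_policies: List[Policy],
                      scope_options: Dict[int, str],
                      server_options: Dict[int, str]) -> Dict[int, str]:
    # Sources are consulted in priority order: reservation > scope policies (by
    # processing order) > server policies > scope-level options > server-level options.
    sources = [reservation]
    sources += [p.options for p in sorted(scope_policies, key=lambda p: p.order)]
    sources += [p.options for p in sorted(server_policies, key=lambda p: p.order)]
    sources += [scope_options, server_options]
    result: Dict[int, str] = {}
    for src in sources:
        for code, value in src.items():
            result.setdefault(code, value)      # first (highest-priority) source wins
    return result

# policy-1 and policy-2 both match the client; policy-1 has the higher processing order.
policy1 = Policy("policy-1", order=1, options={3: "10.0.0.1"})                  # 003 Router
policy2 = Policy("policy-2", order=2, options={3: "10.0.0.2", 6: "10.0.0.53"})  # 006 DNS Servers
print(effective_options({}, [policy1, policy2], [], {15: "corp.example"}, {}))
# {3: '10.0.0.1', 6: '10.0.0.53', 15: 'corp.example'}

In this sketch the router value comes from policy-1 and the DNS value from policy-2, matching the worked example in the text above.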
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn425039(v=ws.11)
2021-01-16T00:19:49
CC-MAIN-2021-04
1610703497681.4
[array(['images/dn425039.d7977e94-6d98-44e1-a352-dcea7dc48f25(ws.11', None], dtype=object) array(['images/dn425039.70057bb1-acf4-4a15-b7d6-418cf2546b7c(ws.11', None], dtype=object) array(['images/dn425039.8c4a5d91-6b37-446a-8fae-c7e2a0eb505c(ws.11', None], dtype=object) array(['images/dn425039.9d06d11a-7aef-4514-ac16-f71545b85aaf(ws.11', None], dtype=object) array(['images/dn425039.7af59e17-8e9e-44ed-8eeb-b1894bdb745c(ws.11', None], dtype=object) array(['images/dn425039.2963ff30-69cb-4221-81e3-00a0afde4249(ws.11', None], dtype=object) array(['images/dn425039.eb566e7f-fc22-4671-90da-ef4d34e2bab5(ws.11', None], dtype=object) ]
docs.microsoft.com
A temporary channel is a channel that is created when a user reacts to a certain message, and is deleted when they remove the reaction. These can be easily set up with ChannelBot. Before you setup the reaction, make sure you have created a category for channels to be created in. I called mine "Temp" for this example, but you can choose whatever you'd like. Run ch!tempchannels create. This will start the setup process. If you are prompted to upvote, click the link and upvote, then run the command again. The bot will ask you for the reaction channel, as shown above. This is the channel that the reaction to create a channel will be in. Send the #channel mention. Next, the bot will ask you for the message. This is the message that the reaction to create a channel will be on. Mobile Users: Make sure you have dev tools enabled by following this guide. This will allow you to copy the message's ID. Hold down on the message, and click "Copy ID" and send it to the channel. Everyone Else: Click the three dots next to the message and click "Copy Link" and send it to the channel. Next, it will ask you for the emoji. This is the emoji users will click on to create their channel. Send your choice of emoji to continue. Finally, it will ask you for the category. This is the category you created in the prep step earlier. Send its name to continue, and if you get a success message, you're done! Click on the reaction you created, and see if the channel is created! Let us know in our Support Server if you need help! To delete a temporary channel reaction, run ch!tempchannels delete. It will send a list of temporary channel reactions, and ask for the ID of the one to delete. Send the number to the left of the one you want to delete to delete it! Let us know in our Support Server if you need any help!
https://docs.channelbot.xyz/guides/temporary-channels
2021-01-16T00:18:00
CC-MAIN-2021-04
1610703497681.4
[]
docs.channelbot.xyz
Decision-making process This page describes how the Fedora D&I team makes decisions. Process Members of the Fedora Diversity and Inclusion Team FAS group are eligible to vote. Lazy Approval This means general consent is assumed unless valid objections are raised within a specific period of time (5 days), along with a minimum of two positive votes (+1) and no -1’s. Votes can be cast by any approved member of the diversity team. This process is used for decisions with short-term consequences and which can be easily reversed. Any team member can ask for the deadline to be extended or the decision escalated to require full consensus. Full Consensus More significant decisions are made through a process of full consensus. In order to pass, these decisions need three positive votes (+3) and no negative votes (-1) within a specific period of time (14 days). The votes can be cast by any of the approved members in the FAS group for the Diversity and Inclusion Team. A negative vote immediately halts the process and requires discussion. Therefore, in order to remain valid, negative votes must be supported with a specific concern about the proposal, and suggestions for what could be changed in order to make the proposal acceptable. Choosing a process Lazy approval is followed for tasks with a defined process in the D&I team, when all prior steps for a task are completed and just the ticket-based approval remains. For example, translating an article or proposing a Flock session fits this model. Full consensus is required to approve new processes, make changes to existing team policies, and to approve tickets requiring D&I budget. For example, proposing new event guidelines, changing the decision-making process, and voting on Outreachy budget allocation require full consensus.
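Purely as an illustration of the two thresholds described above (and not an official Fedora D&I tool), the rules can be written as a small Python check; the vote-list representation is an assumption made for the example.

def lazy_approval_passes(votes, days_open):
    # Lazy approval: the 5-day objection window has passed, with at least two +1 and no -1.
    return days_open >= 5 and votes.count(+1) >= 2 and -1 not in votes

def full_consensus_passes(votes, days_open):
    # Full consensus: at least three +1 and no -1, gathered within the 14-day window.
    return days_open <= 14 and votes.count(+1) >= 3 and -1 not in votes

print(lazy_approval_passes([+1, +1], days_open=5))            # True
print(full_consensus_passes([+1, +1, +1, -1], days_open=10))  # False: a -1 halts the process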
https://docs.fedoraproject.org/pt_PT/diversity-inclusion/policy/decision-process/
2021-01-16T00:58:31
CC-MAIN-2021-04
1610703497681.4
[]
docs.fedoraproject.org
, etc. Accomplishing this typically involves a lot of custom coding and development work, but with Skuid, you can craft a custom clone page for any object type using the Skuid drag-and-drop page builder! Let’s get started. The first thing to know is that any Skuid detail page, such as an Account detail page, is a custom clone page waiting to happen — the fastest way to create a custom clone page is to just clone an existing detail page, such as an “AccountDetail” page you may have already created, and then tweak it to remove components that are not needed during the clone process, such as system information fields. Step 1: Create a new Skuid page, optionally by cloning an existing detail page. To save time, we can clone an existing detail page. Step 7: Remove all fields that you don’t want users to modify. Remember, all fields and objects you’ve included in your models will be cloned, but you can choose what you allow the user to modify. A. Remove tabs and components you don’t want to appear in the clone page. We’re streamlining this page, so there’s not too much to distract this user from the clone process.
https://docs.skuid.com/v11.1.3/en/tutorials/pages/clone-account-page.html
2021-01-16T00:18:28
CC-MAIN-2021-04
1610703497681.4
[]
docs.skuid.com
UI internationalization Internationalize the Splunk Web user interface. - Translate text generated by Python code, JavaScript code, views, menus and Mako templates. - Set language/locale specific alternatives for static resources such as images, CSS, other media. - Create new languages or locales. - Format times, dates and other numerical strings. Splunk software translation Splunk software uses the language settings for the browser where you are accessing Splunk Web. You can change the browser language settings to see the user interface in another language. Locale strings indicate the language and location that Splunk software uses to translate the user interface. Typically, a locale string consists of two lowercase letters and two uppercase letters linked by an underscore. For example, en_US means American English while en_GB means British English. Splunk software first tries to find an exact match for the full locale string but falls back to the language specifier if settings are not available for the full locale. For example, translations for fr answer to requests for fr_CA and fr_FR (French, Canada and France respectively). In addition to language, translation also addresses the formatting of dates, times, numbers, and other localized settings. Configuration Splunk software uses the gettext internationalization and localization (i18n) system. Steps - Create a directory for the locale. For example, to create the fictional locale mz, create the following directory. $SPLUNK_HOME/lib/python2.7/site-packages/splunk/appserver/mrsparkle/locale/mz_MZ/LC_MESSAGES/ - Load the following messages.pot file into your PO editor. $SPLUNK_HOME/lib/python2.7/site-packages/splunk/appserver/mrsparkle/locale/messages.pot - Use the PO editor to translate any strings that you want to localize. Save the file as messages.po in the directory you created in the previous step. The PO editor also saves a messages.mo file, which is the machine readable version of the PO file. - Restart the Splunk instance. No other configuration file edits are required. Splunk software detects the new language files when it restarts. Localization files The Splunk platform stores localization information at the following location. $SPLUNK_HOME/lib/python<version>/site-packages/splunk/appserver/mrsparkle/locale This directory contains the following items. messages.pot: Holds the strings to translate. You can use a PO editor to edit these files. <locale_string>/LC_MESSAGES/messages.mo: Machine readable version of messages.po. Splunk software uses this file to find translated strings. The PO editor creates the file for you when it creates the messages.po file. Localize dates and numbers You can format numbers and dates to the standards of a locale without translating any text. Create a directory for the locale whose numbers and dates you want to format. Copy the contents of the en_US directory to the target locale directory. Example Enable localization of numbers and dates for the de_CH locale (German – Switzerland). Create the following target directory for the de_CH locale. $SPLUNK_HOME/lib/python2.7/site-packages/splunk/appserver/mrsparkle/locale/de_CH Copy the contents of the following directory. $SPLUNK_HOME/lib/python2.7/site-packages/splunk/appserver/mrsparkle/locale/en_US Copy the contents from the en_US directory into the de_CH directory. Translate Apps You can use gettext to translate apps. Most apps must be translated in their own locale subdirectory.
Apps that ship with the Splunk platform are automatically extracted and their text is included in the core messages.pot file. You do not need to handle them separately. To extract the strings from an installed app and make the strings ready for translation in a PO editor, run the following extraction command on the command line. > splunk extract i18n -app <app_name> This creates a locale/ subdirectory in the app root directory and populates it with a messages.pot file. Follow the steps above to translate the strings within the app. When using views from a different app, the new messages.pot file contains the strings for these views. Locale-specific resources The Splunk platform stores static resources such as images, CSS files, and other media as subdirectories at the following location. $SPLUNK_HOME/share/splunk/search_mrsparkle/exposed/ When serving these resources, Splunk software checks to see whether a localized version of the resource is available before falling back to the default resource. For example, if your locale is set to fr_FR, Splunk software searches for the logo image file in the following order. exposed/img/skins/default/logo-mrsparkle-fr_FR.gif exposed/img/skins/default/logo-mrsparkle-fr.gif exposed/img/skins/default/logo-mrsparkle.gif
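For reference, the gettext lookup behaviour described above (try the full locale first, then fall back to the language code) can be reproduced with Python's standard gettext module. This is a generic sketch, not a Splunk command; the locale directory and domain name are assumptions chosen to mirror the messages.mo layout described in this topic.

import gettext

# Looks for <localedir>/<locale>/LC_MESSAGES/messages.mo, trying fr_FR first, then fr.
translation = gettext.translation(
    domain="messages",
    localedir="locale",           # a directory laid out like the locale/ tree above
    languages=["fr_FR", "fr"],    # full locale first, language-only fallback second
    fallback=True,                # fall back to untranslated strings if no .mo is found
)
_ = translation.gettext
print(_("Hello, world"))          # prints the translated string if one exists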
https://docs.splunk.com/Documentation/Splunk/7.1.6/AdvancedDev/TranslateSplunk
2021-01-16T00:25:44
CC-MAIN-2021-04
1610703497681.4
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Organizing projects by categories can help users find what they need on a site quickly and easily. Project categories express the relationships among projects. If your project is in a category, it is visible to users browsing projects from the main Project Categories page. Your project can be in one or more project categories. Click PROJECT ADMIN from the Project Home menu. On the Project Admin Menu, click the PROJECT CATEGORIZATION tab. A list of all the categories that the project belongs to is displayed. Click Add. The Select Category window shows all the available project categories. If you find a project category that your project should be in, select that category and click Add. Your project is now a member of the selected project category. You can repeat this process to add your project to any number of project categories.
https://docs.collab.net/teamforge182/projectadmin-categorizingaproject.html
2021-01-15T23:04:42
CC-MAIN-2021-04
1610703497681.4
[]
docs.collab.net
Arduino Nano ATmega328 (New Bootloader)¶ nanoatmega328new ; Nano ATmega328 (New Bootloader) has on-board debug probe and IS READY for debugging. You don’t need to use/buy external debug probe.
https://docs.platformio.org/en/latest/boards/atmelavr/nanoatmega328new.html?highlight=nanoatmega328new
2022-08-07T23:00:13
CC-MAIN-2022-33
1659882570730.59
[]
docs.platformio.org
frevvo Latest - This documentation is for frevvo v10.3. v10.3 is a Cloud Only release. Earlier documentation is available too. Access the Guided Designer: Settings editing mode simply by clicking Settings at the top of your page while editing any form or workflow. Settings mode displays the Form/Workflow Properties wizard, which is intuitive and organizes logical groupings of related properties in a single UI with multiple tabs. Hover over any field on any tab for a helpful hint about the property. Links to the frevvo documentation are also provided. Refer to Edit Settings to see how it works. Check out this 7-minute video for an overview of Form Settings!
https://docs.frevvo.com/d/display/frevvo103/Form+and+Workflow+Settings
2022-08-07T23:10:57
CC-MAIN-2022-33
1659882570730.59
[]
docs.frevvo.com
Introduction PeakRDL-regblock is a free and open-source control & status register (CSR) compiler. This code generator translates your SystemRDL register description into a synthesizable SystemVerilog RTL module that can be easily instantiated into your hardware design.
Generates fully synthesizable SystemVerilog RTL (IEEE 1800-2012)
Options for many popular CPU interface protocols (AMBA APB, AXI4-Lite, and more)
Configurable pipelining options for designs with fast clock rates.
Broad support for SystemRDL 2.0 features
Fully synthesizable SystemVerilog. Tested on Xilinx/AMD’s Vivado & Intel Quartus
Warning The PeakRDL-regblock SV generator is still in pre-production (v0.x version numbers). During this time, I may decide to refactor things which could break compatibility. Installing Install from PyPi using pip
python3 -m pip install peakrdl-regblock
Quick Start Below is a simple example that demonstrates how to generate a SystemVerilog implementation from SystemRDL source.
import sys

from systemrdl import RDLCompiler, RDLCompileError
from peakrdl_regblock import RegblockExporter
from peakrdl_regblock.cpuif.apb3 import APB3_Cpuif

input_files = [
    "PATH/TO/my_register_block.rdl"
]

# Create an instance of the compiler
rdlc = RDLCompiler()

try:
    # Compile your RDL files
    for input_file in input_files:
        rdlc.compile_file(input_file)

    # Elaborate the design
    root = rdlc.elaborate()
except RDLCompileError:
    # A compilation error occurred. Exit with error code
    sys.exit(1)

# Export a SystemVerilog implementation
exporter = RegblockExporter()
exporter.export(
    root,
    "path/to/output_dir",
    cpuif_cls=APB3_Cpuif
)
https://peakrdl-regblock.readthedocs.io/en/latest/
2022-08-07T21:15:32
CC-MAIN-2022-33
1659882570730.59
[]
peakrdl-regblock.readthedocs.io
TaskSet Information about a set of Amazon ECS tasks in either an Amazon CodeDeploy or an EXTERNAL deployment. An Amazon ECS task set includes details such as the desired number of tasks, how many tasks are running, and whether the task set serves production traffic. Contents - capacityProviderStrategy The capacity provider strategy that are associated with the task set. Type: Array of CapacityProviderStrategyItem objects Required: No - clusterArn The Amazon Resource Name (ARN) of the cluster that the service that hosts the task set exists in. Type: String Required: No - computedDesiredCount The computed desired count for the task set. This is calculated by multiplying the service's desiredCountby the task set's scalepercentage. The result is always rounded up. For example, if the computed desired count is 1.2, it rounds up to 2 tasks. Type: Integer Required: No - createdAt The Unix timestamp for the time when the task set was created. Type: Timestamp Required: No - externalId The external ID associated with the task set. If an Amazon CodeDeploy deployment created a task set, the externalIdparameter contains the Amazon CodeDeploy deployment ID. If a task set is created for an external deployment and is associated with a service discovery registry, the externalIdparameter contains the ECS_TASK_SET_EXTERNAL_IDAmazon Cloud Map attribute. Type: String Required: No - id The ID of the task set. Type: String Required: No - launchType The launch type the tasks in the task set are using. For more information, see Amazon ECS launch types in the Amazon Elastic Container Service Developer Guide. Type: String Valid Values: EC2 | FARGATE | EXTERNAL Required: No - loadBalancers Details on a load balancer that are used with a task set. Type: Array of LoadBalancer objects Required: No - networkConfiguration The network configuration for the task set. Type: NetworkConfiguration object Required: No - pendingCount The number of tasks in the task set that are in the PENDINGstatus during a deployment. A task in the PENDINGstate is preparing to enter the RUNNINGstate. A task set enters the PENDINGstatus when it launches for the first time or when it's restarted after being in the STOPPEDstate. Type: Integer Required: No - platformFamily The operating system that your tasks in the set are running on. A platform family is specified only for tasks that use the Fargate launch type. All tasks in the set must have the same value. Type: String Required: No - platformVersion The Amazon Fargate platform version where the tasks in the task set are running. A platform version is only specified for tasks run on Amazon Fargate. For more information, see Amazon Fargate platform versions in the Amazon Elastic Container Service Developer Guide. Type: String Required: No - runningCount The number of tasks in the task set that are in the RUNNINGstatus during a deployment. A task in the RUNNINGstate is running and ready for use. Type: Integer Required: No - scale A floating-point percentage of your desired number of tasks to place and keep running in the task set. Required: No - serviceArn The Amazon Resource Name (ARN) of the service the task set exists in. Type: String Required: No - serviceRegistries The details for the service discovery registries to assign to this task set. For more information, see Service discovery. Type: Array of ServiceRegistry objects Required: No - stabilityStatus The stability status. This indicates whether the task set has reached a steady state. 
If the following conditions are met, the task set is in STEADY_STATE: The task runningCount is equal to the computedDesiredCount. The pendingCount is 0. There are no tasks that are running on container instances in the DRAINING status. All tasks are reporting a healthy status from the load balancers, service discovery, and container health checks. If any of those conditions aren't met, the stability status returns STABILIZING. Type: String Valid Values: STEADY_STATE | STABILIZING Required: No - stabilityStatusAt The Unix timestamp for the time when the task set stability status was retrieved. Type: Timestamp Required: No - startedBy The tag specified when a task set is started. If an Amazon CodeDeploy deployment created the task set, the startedBy parameter is CODE_DEPLOY. If an external deployment created the task set, the startedBy field isn't used. Type: String Required: No - status The status of the task set. The following describes each state. - PRIMARY The task set is serving production traffic. - ACTIVE The task set isn't serving production traffic. - DRAINING The tasks in the task set are being stopped, and their corresponding targets are being deregistered from their target group. Type: String Required: No - tags The metadata that you apply to the task set to help you categorize and organize it. Type: Array of Tag objects Required: No - taskDefinition The task definition that the task set is using. Type: String Required: No - taskSetArn The Amazon Resource Name (ARN) of the task set. Type: String Required: No - updatedAt The Unix timestamp for the time when the task set was last updated. Type: Timestamp Required: No See Also For more information about using this API in one of the language-specific Amazon SDKs, see the following:
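As a usage sketch (not part of the API reference itself), the fields above can be read with the AWS SDK for Python. The cluster and service names, the region, and the lack of error handling are all assumptions made for the example; the DescribeTaskSets action and the field names come from this reference.

import boto3

ecs = boto3.client("ecs", region_name="cn-north-1")   # example region

response = ecs.describe_task_sets(
    cluster="my-cluster",      # placeholder cluster name
    service="my-service",      # placeholder service name
)
for task_set in response["taskSets"]:
    # computedDesiredCount = desiredCount * scale, rounded up (see above)
    print(task_set["id"],
          task_set["status"],
          task_set["stabilityStatus"],
          f'{task_set["runningCount"]}/{task_set["computedDesiredCount"]} running',
          f'{task_set["pendingCount"]} pending')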
https://docs.amazonaws.cn/AmazonECS/latest/APIReference/API_TaskSet.html
2022-08-07T21:43:15
CC-MAIN-2022-33
1659882570730.59
[]
docs.amazonaws.cn
Windows Server Update Services 3.0 SP2 Dynamic Installer for Server Manager This article describes the Windows Server Update Services 3.0 SP2 Dynamic Installer for Server Manager. Applies to: Windows Server 2008 R2 Service Pack 1 Original KB number: 972493 Introduction: 940518 An update is available that integrates Windows Server Update Services (WSUS) 3.0 into Server Manager in Windows Server 2008 that is being updated by a WSUS 3.0 server, approve the Windows Server Update Services 3.0 SP2 Dynamic Install for Windows Server 2008, and then install WSUS role by using Server Manager. How to determine whether the service pack is installed Look for "Windows Server Update Services 3.0 SP2" in Add or Remove Programs or Program Files and Features in Control Panel. If "Windows Server Update Services 3.0 SP2" doesn't appear, the service pack isn't installed. Removal information You can't to page, click Next. - On the Confirm Installation Selections page, click Remove. - On the Remove Windows Server Update Services 3.0 SP2 page, select any additional items to be removed, and then click Next. - Click Finish to exit the wizard when the WSUS 3.0 SP2 Removal Wizard is finished.
https://docs.microsoft.com/en-US/troubleshoot/windows-server/deployment/windows-server-update-services-3-sp2
2022-08-07T22:30:58
CC-MAIN-2022-33
1659882570730.59
[]
docs.microsoft.com
TPC Intrinsics Guide The Habana® TPC CLANG compiler is based on the widely used LLVM open-source compiler infrastructure project. TPC CLANG compiles the TPC-C/C++ language, of which the intrinsic functions are a part. This gives access to the full power of the Habana® ISA, while allowing the compiler to optimize register allocation and instruction scheduling for faster execution. Most of these functions are associated with a single VLIW instruction, although some may generate multiple instructions or different instructions depending on how they are used. The intrinsic function naming convention consists of the instruction name, instruction data type, return data type width, scalar/vector properties of its arguments, and predicate values. For more details, refer to Intrinsics in the TPC User Guide. TPC Intrinsics Header For more information on TPC Intrinsics, refer to the headers listed below:
https://docs.habana.ai/en/latest/TPC/TPC_Intrinsics_Guide/index.html
2022-08-07T22:59:51
CC-MAIN-2022-33
1659882570730.59
[]
docs.habana.ai
- total memory is 1000Mb (disabled all memory reservations for GPU) - drivers for LVDS LCD display modules are added. TS module: ft5x_ts, added configuration for 7″ – other sizes. (legacy kernel / vanilla) - legacy kernel: BCM53125 switch configured as follows – looking at front of ports: |2|1|0|4|(LAN=manual) |3|(WAN=dhcp and bridged to enabled wireless adapter in (theoretical) high throughput mode with SSID lamobo and password 12345678 - mainline kernel: please check the “Known issues” tab.
https://docs.armbian.com/boards/lamobo-r1/
2018-01-16T14:55:58
CC-MAIN-2018-05
1516084886437.0
[]
docs.armbian.com
Evergreen is currently the primary showcase for the use of OpenSRF as an application architecture. Evergreen 1.6.0 includes the following set of OpenSRF services: open-ils.actor: Supports common tasks for working with user accounts and libraries. open-ils.auth: Supports authentication of Evergreen users. open-ils.ingest: Supports tasks for importing bibliographic and authority records. open-ils.pcrud: Supports permission-based access to Evergreen fieldmapper objects. open-ils.penalty: Supports the calculation of penalties for users, such as being blocked from further borrowing, for conditions such as having too many items checked out or too many unpaid fines. open-ils.reporter: Supports the creation and scheduling of reports. open-ils.reporter-store: Supports access to Evergreen fieldmapper objects for the reporting service. This is a private service. open-ils.search: Supports searching across bibliographic records, authority records, serial records, Z39.50 sources, and ZIP codes. open-ils.vandelay: Supports the import and export of batches of bibliographic and authority records.
http://docs.evergreen-ils.org/2.4/_evergreen_specific_opensrf_services.html
2018-01-16T15:42:59
CC-MAIN-2018-05
1516084886437.0
[]
docs.evergreen-ils.org
Set up vendor assessments Have an assessment administrator configure vendor assessments so you can evaluate vendors using questionnaires and scripted database queries. Examples of vendor assessment setup tasks include setting an assessment generation schedule for recurring assessments, associating users to categories or vendors they are knowledgeable about, and creating decision matrixes.
https://docs.servicenow.com/bundle/istanbul-it-business-management/page/product/vendor-performance/concept/c_SetUpVendorAssessments.html
2018-01-16T15:49:39
CC-MAIN-2018-05
1516084886437.0
[]
docs.servicenow.com
activities. Figure 1. View all cases.
- Propose a KB article as a customer service case solution: Propose a knowledge base article as a solution and attach the article to a case.
- Create a work order for a customer service case: Create a work order as part of the case resolution process.
- Close a customer service case: Close a case at any time, except when it is in the Resolved state.
- Customer service case form: The Case form displays detailed information about a customer issue or problem.
https://docs.servicenow.com/bundle/kingston-customer-service-management/page/product/customer-service-management/concept/c_CustomerServiceCaseOverview.html
2018-01-16T15:49:44
CC-MAIN-2018-05
1516084886437.0
[]
docs.servicenow.com
The System.Data.Design namespace contains classes that can be used to generate a custom typed-dataset. This class is used to generate a database query method signature, as it will be created by the typed dataset generator. Sets the type of parameters that are generated in a typed System.Data.DataSet class. Generates a strongly typed System.Data.DataSet class. The exception that is thrown when a name conflict occurs while a strongly typed System.Data.DataSet is being generated. Generates internal mappings to .NET Framework types for XML schema element declarations, including literal XSD message parts in a WSDL document.
http://docs.go-mono.com/monodoc.ashx?link=N%3ASystem.Data.Design
2018-01-16T15:14:51
CC-MAIN-2018-05
1516084886437.0
[]
docs.go-mono.com
$ oadm create-bootstrap-project-template -o yaml > template.yaml In OpenShift Origin,adm. Removing the self-provisioners cluster. To create an individual project with a node selector, use the --node-selector option when creating a project. For example, if you have an OpenShift Origin topology with multiple regions, you can use a node selector to restrict specific OpenShift Origin Origin Origin for the changes to take effect. # systemctl restart origin-master
https://docs.openshift.org/3.6/admin_guide/managing_projects.html
2018-01-16T15:37:51
CC-MAIN-2018-05
1516084886437.0
[]
docs.openshift.org
This page lists the highlights of the Ficus release. The Open edX Ficus release includes the following updates. In keeping with edX’s commitment to creating accessible content for everyone, everywhere, the Open edX Ficus release contains numerous accessibility enhancements and improvements to readability and navigability. The edX Release Notes contain a summary of changes that are deployed to edx.org. Those changes are part of the master branch of the edX Platform in GitHub. You can also find release announcements on the open.edx.org website. Changes listed for 10 January 2017 and before are included in the Ficus release of Open edX. Changes after that point will be in future Open edX releases.
http://edx.readthedocs.io/projects/open-edx-release-notes/en/latest/ficus.html
2018-01-16T15:00:22
CC-MAIN-2018-05
1516084886437.0
[]
edx.readthedocs.io
Overview When setting up your UPS carrier in ShipperHQ, you’ll need certain credentials that will allow ShipperHQ to connect to the UPS servers and obtain a rate quote. Follow these directions to obtain/set-up the necessary credentials. Create a new UPS Carrier The first step is to create a new UPS carrier in ShipperHQ. - Click on Carriers in the left-hand navbar - Click the Add New button to add a new carrier - Enter the name you want to use for this carrier in the Carrier Name field (e.g. “UPS”) - Select “Small Package” from the Carrier Type select - Select “UPS” from the Carrier select - Click Create Carrier Fetch a UPS Invoice In order to connect ShipperHQ to your UPS account, you’ll need your most recent invoice. This can usually be downloaded at UPS.com. If not, you’ll need to find the physical copy of the most recent invoice received from UPS. If you have not received an invoice from UPS in the preceding 90 days, no invoice is required to register and you can skip this section. - Go to UPS.com and click the “Log-in” link at the top of the page. Log in with your UPS.com username and password. - Once logged in, select “Billing” in the “My UPS” drop-down in the navbar - Click the “Access” button to access the UPS Billing Center - In the “Billing Center Quick Links” section, click “View Invoice” - You should see a list of recent invoices under the “Invoice Information” header. Click the Invoice Number for the most recent invoice (it doesn’t matter if you’ve already paid or not). - Click the “View/Download Invoice Data” button at the bottom of the page and select “PDF” and click “Submit” to view the invoice PDF - You’ll need information from this Invoice to connect ShipperHQ to your UPS account Example Invoice Your invoice may vary somewhat from this example invoice but this helps identify where to find the required invoice information. Register ShipperHQ with UPS - Back in ShipperHQ and looking at your new UPS carrier, open the Account Settings Panel and click the Register Credentials Now button. - If you do not already have a UPS Shipping Account, click the Create UPS Account on the left to get that set up and then return to ShipperHQ to complete this process once you’ve set up an account with UPS - If you already have an account with UPS and have access to your most recent invoice (or have not received a UPS invoice in the preceding 90 days), click the Register Credentials button - Read and scroll all the way to the bottom of the License Agreement and click the I Agree button. If you need to print the agreement out for review, a print button is provided. - On the Customer Info page, fill in your company details and click Next. Important: The address entered here must be the address where UPS picks up shipments. This may differ from your account or billing address shown on your Invoice. Enter your UPS Account Number - If you have not been invoiced in the preceding 90 days, click the No, not invoiced in past 90 days button. - If you have been invoiced in the preceding 90 days, click the Yes, invoiced in past 90 days button. - Select your country and enter the Invoice Number and Charges this Period that are shown on your invoice. - Click the Invoice Date field and choose the date of the invoice from the calendar - Enter the Control ID if present on your invoice. Some invoices may not have a Control ID present. If it is present, it will generally be listed in the summary at the top right corner of the first page of your invoice.
- Click the Next button Troubleshooting If for some reason your invoice is not accepted on your first attempt, double check all of your information. Most common causes of being unable to register are: - Not using the most recent invoice UPS has generated. See the “Fetch a UPS Invoice” section above for how to download your most recent invoice. - Entering the Account or Billing address instead of the Pickup address on the Customer Info screen. The address entered must be the address of the primary location where UPS picks up packages - Entering the wrong invoice date or charges this period. These must match exactly what was shown on your UPS invoice Attempting to register unsuccessfully three times will disable your account for 24 hours. You will still be able to use your UPS account, but will not be able to attempt to register for another 24 hours.
http://docs.shipperhq.com/how-to-set-up-ups-with-credentials/
2018-01-16T15:38:46
CC-MAIN-2018-05
1516084886437.0
[array(['http://docs.shipperhq.com/wp-content/uploads/2015/10/ups_invoice_example-1024x944.png', 'Example UPS Invoice'], dtype=object) array(['http://docs.shipperhq.com/wp-content/uploads/2015/10/Screen-Shot-2015-10-14-at-11.54.11-AM-300x210.png', 'Screen Shot 2015-10-14 at 11.54.11 AM'], dtype=object) ]
docs.shipperhq.com
Log in to your MailChimp account. Choose Account from the dropdown at the top right of the screen. Choose API Keys from the Extras dropdown. Find the section called Your API Keys. Use the default key or create a new key for use by Listery. Copy to the clipboard the key you want to use as you will need it when you add the MailChimp account to Listery.
http://docs.listery.io/attaching-email-service-provider-accounts/mailchimp-finding-your-api-key
2018-01-16T15:01:09
CC-MAIN-2018-05
1516084886437.0
[]
docs.listery.io
$ oc get svc/docker-registry -o yaml | grep clusterIP: OpenShift Origin. To re-use the IP address, you must save the IP address of the old docker-registry service prior to deleting it, and arrange to replace the newly assigned IP address with the saved one in the new docker-registry service. Make a note of the clusterIP foradm registry <options> -o yaml > registry.yaml Edit registry.yaml, find the Service there, and change its clusterIP to the address noted in step 1. Create the registry using the modified registry.yaml: $ oc create -f registry.yaml If you are unable to re-use the IP address, any operation that uses a pull specification that includes the old IP address will fail. To minimize cluster disruption, you must reboot the masters: # systemctl restart origin-master This ensures that the old registry URL, which includes the old IP address, is cleared from the cache. You can specify a whitelist of docker registries, allowing you to curate a set of images and templates that are available for download by OpenShift Origin users. This curated set can be placed in one or more docker registries, and then added to the whitelist. When using a whitelist, only the specified registries are accessible within OpenShift Origin, and all other registries are denied access by default. To configure a whitelist: Edit the /etc/sysconfig/docker file to block all registries: BLOCK_REGISTRY='--block-registry=all' You may need to uncomment the BLOCK_REGISTRY line. In the same file, add registries to which you want to allow access: ADD_REGISTRY='--add-registry=<registry1> --add-registry=<registry2>'. You can override the integrated registry’s default configuration, found by default at /config.yml in a running registry’s container, with your own custom configuration. To enable management of the registry configuration file directly and deploy an updated configuration using a ConfigMap: Edit the registry configuration file locally as needed. The initial YAML file deployed on the registry is provided below. Review supported options. holding: Edit the local registry configuration file, config.yml. Delete the registry-config secret: $ oc delete secret registry-config Recreate the secret to reference the updated configuration file: $ oc secrets new registry-config config.yml=</path/to/custom/registry/config.yml> Redeploy the registry to read the updated configuration: $ oc driver: storage: delete: enabled: true (1) redirect: disable: false cache: blobdescriptor: inmemory maintenance: uploadpurging: enabled: true age: 168h interval: 24h dryrun: false readonly: enabled: false Auth options should not be altered. The openshift extension is the only supported option. auth: openshift: realm: openshift The repository middleware extension allows to configure OpenShift Origin middleware responsible for interaction with OpenShift Origin (1). This section reviews the configuration of global settings for features specific to OpenShift Origin. In a future release, openshift-related settings in the Middleware section will be obsoleted. Currently, this section allows you to configure registry metrics collection: openshift: version: 1.0 (1) metrics: enabled: false (2) secret: <secret> (3) See Accessing Registry Metrics for usage information. Upstream options are supported. Learn how to alter these settings via environment variables. Only the tls section should be altered. For example: http: addr: :5000 tls: certificate: /etc/secrets/registry.crt key: /etc/secrets/registry.key Upstream options are supported. 
The REST API Reference provides more comprehensive integration options. Example: notifications: endpoints: - name: registry disabled: false url: headers: Accept: - text/plain timeout: 500 threshold: 5 backoff: 1000 Upstream options are supported. The registry deployment configuration provides an integrated health check at /healthz. Proxy configuration should not be enabled. This functionality is provided by the OpenShift Origin repository middleware extension, pullthrough: true.
https://docs.openshift.org/3.6/install_config/registry/extended_registry_configuration.html
2018-01-16T15:39:42
CC-MAIN-2018-05
1516084886437.0
[]
docs.openshift.org
Batch Job - Options The Batch Job Options panel lets you define how a Batch Job functions. Using the Batch Job Options panel, you can - Identify a Batch Job - Specify whether the Batch Job should stop if it encounters errors in one of the individual jobs included in the batch - Specify whether you want the individual jobs in the batch to execute sequentially, in parallel, or by server - Specify whether you want to enable a rolling execution of the Batch Job divided across multiple job runs, so that each time the Batch Job runs, it executes against a certain (user-defined) maximum number of servers The Batch Job Options panel also lets you specify target servers. Each of the individual jobs included in the batch can run on the target servers specified in the definition of that job, or all of the individual jobs can run on the same set of target servers that you specify for the entire Batch Job. When first defining and saving a Batch Job, you do not have to specify target servers. You can specify target servers at a later time. If you define a Batch Job that includes other nested Batch Jobs, the settings you define for the current Batch Job only apply to the jobs it contains. A Batch Job's settings do not apply to any jobs contained within nested Batch Jobs unless you specify target servers on this panel. In that case, the specified target servers apply to all jobs nested within the Batch Job. Field definitions
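The rolling-execution option described above (dividing one Batch Job across multiple runs, each limited to a user-defined maximum number of servers) is easiest to picture as simple batching. The Python sketch below is only a conceptual model of that scheduling idea, not BMC Server Automation code; all names are invented.

def rolling_runs(target_servers, max_servers_per_run):
    # Yield the subset of target servers that each successive job run would execute against.
    for start in range(0, len(target_servers), max_servers_per_run):
        yield target_servers[start:start + max_servers_per_run]

servers = ["srv01", "srv02", "srv03", "srv04", "srv05", "srv06", "srv07"]
for run_number, batch in enumerate(rolling_runs(servers, max_servers_per_run=3), start=1):
    print(f"run {run_number}: {batch}")
# run 1: ['srv01', 'srv02', 'srv03']
# run 2: ['srv04', 'srv05', 'srv06']
# run 3: ['srv07']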
https://docs.bmc.com/docs/ServerAutomation/82/using/creating-and-modifying-bmc-server-automation-jobs/creating-and-modifying-batch-jobs/creating-a-batch-job/batch-job-options
2020-07-02T21:05:14
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
Use the people tab or survey queries to view survey responses. Viewing individual survey responses Once you’ve received completed surveys, each survey response is stored in an individual person’s file (under the Survey tab). You can view completed survey results for individual respondents in this way. Querying the survey system If you need to compile data across all responses, you can access and create custom survey queries for any answer and response combination that you need to see. For example, you could create queries to see: - Who answered “Spanish” in the question “Spoken Languages” - Who answered “Left handed” in the “Sport Preference” question - Who did not answer the question “Do you have any time constraints?” - Who did not provide information when prompted to “Please provide contact information” - etc.
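The kinds of queries listed above amount to simple filters over the response data. The Python sketch below models that idea for illustration only; the data layout is an assumption, not Grenadine's actual schema or API.

responses = [   # hypothetical survey data: one record per respondent
    {"name": "Ana",  "Spoken Languages": "Spanish", "Sport Preference": "Left handed"},
    {"name": "Ben",  "Spoken Languages": "French",  "Sport Preference": None},
    {"name": "Chen", "Spoken Languages": "Spanish", "Sport Preference": "Right handed"},
]

def who_answered(question, answer):
    # Respondents whose answer to `question` equals `answer`.
    return [r["name"] for r in responses if r.get(question) == answer]

def who_skipped(question):
    # Respondents who did not answer `question`.
    return [r["name"] for r in responses if not r.get(question)]

print(who_answered("Spoken Languages", "Spanish"))   # ['Ana', 'Chen']
print(who_skipped("Sport Preference"))               # ['Ben']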
https://docs.grenadine.co/accessing-survey-data.html
2020-07-02T23:03:37
CC-MAIN-2020-29
1593655880243.25
[]
docs.grenadine.co
Assembly Record of Committee Proceedings Committee on Family Law Assembly Bill 97 Relating to: the involvement and cooperation of both parents in a physical placement schedule. By Joint Legislative Council. March 22, 2019 Referred to Committee on Family Law June 04, 2019 Public Hearing Held Present: (8) Representative Rodriguez; Representatives James, Tauchen, Plumer, Pronschinske, Doyle, Brostoff and Emerson. Absent: (0) None. Excused: (1) Representative Duchow. Appearances For · Representative Rob Brooks - 60th Assembly District · Tony Bickel - WFCF · Steve Blake - Dads of Wisconsin Appearances Against · Chase Tarrier - End Domestic Abuse WI Appearances for Information Only · None. Registrations For · None. Registrations Against · None. Registrations for Information Only · None. September 10, 2019 Executive Session Held Present: (9) Representative Rodriguez; Representatives James, Duchow, Tauchen, Plumer, Pronschinske, Doyle, Brostoff and Emerson. Absent: (0) None. Excused: (0) None. Moved by Representative Pronschinske, seconded by Representative Plumer that Assembly Bill 97 be recommended for passage. Ayes: (7) Representative Rodriguez; Representatives James, Duchow, Tauchen, Plumer, Pronschinske and Doyle. Noes: (2) Representatives Brostoff and Emerson. PASSAGE RECOMMENDED, Ayes 7, Noes 2 ______________________________ Nick Bentz Committee Clerk
https://docs.legis.wisconsin.gov/2019/related/records/assembly/family_law/1515163
2020-07-02T21:41:43
CC-MAIN-2020-29
1593655880243.25
[]
docs.legis.wisconsin.gov
staticLookup A Workflow Engine function that searches for a key in a static lookup table, retrieves the corresponding value, and applies that value to a field in the object. lookupName references a .lookup file in JSON format in the following folder: $MOOGSOFT_HOME/config/lookups/. For example, Locations refers to $MOOGSOFT_HOME/config/lookups/Locations.lookup. On first use, the lookup loads into constants. You do not need to edit the Workflow Engine Moobot to load the lookup. The default lifespan for the lookup is 3600 seconds, after which the Workflow Engine reloads the file. This function is available for event, alert, enrichment, and Situation workflows. Back to Workflow Engine Functions Reference. Arguments Workflow Engine function staticLookup takes the following arguments:
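As a rough model of the behaviour described above (a JSON .lookup file resolved by key, cached, and re-read once its lifespan expires), here is a short Python sketch. It is not the Workflow Engine implementation; in particular, the assumption that a .lookup file is a flat JSON object of key/value pairs is made only for this example.

import json
import os
import time

LOOKUP_DIR = os.path.join(os.environ.get("MOOGSOFT_HOME", "."), "config", "lookups")
LIFESPAN_SECONDS = 3600            # default lifespan before the file is reloaded
_cache = {}                        # lookup name -> (loaded_at, table)

def static_lookup(lookup_name, key, default=None):
    # Resolve `key` in $MOOGSOFT_HOME/config/lookups/<lookup_name>.lookup, reloading
    # the JSON file once the cached copy is older than the lifespan.
    loaded_at, table = _cache.get(lookup_name, (0.0, None))
    if table is None or time.time() - loaded_at > LIFESPAN_SECONDS:
        path = os.path.join(LOOKUP_DIR, lookup_name + ".lookup")
        with open(path) as handle:
            table = json.load(handle)
        _cache[lookup_name] = (time.time(), table)
    return table.get(key, default)

# e.g. static_lookup("Locations", "host01") returns the value mapped to "host01", if any.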
https://docs.moogsoft.com/Enterprise.8.0.0/staticlookup.html
2020-07-02T22:16:31
CC-MAIN-2020-29
1593655880243.25
[]
docs.moogsoft.com
Contents The TIBCO StreamBase® Syslog Input Adapter allows a StreamBase application to act as a Syslog collector for receiving syslog messages. The adapter has the following capabilities: Listens for syslog messages sent to the given port number. When such a message is received, outputs a tuple describing the values contained in the message. In this section, the Property column shows each property name as found in the one or more adapter properties tabs of the Properties view for this adapter. Connect: Start listening for syslog messages on the configured UDP port. Required fields: command: Should contain the value Connect. Case is unimportant. Disconnect: Stop listening for syslog messages. Required fields: command: Should contain the value Disconnect. Case is unimportant. The adapter has at least one output port, and optionally a second one: The first port issues a tuple for each syslog message received, and describes its content. The second (optional) port routes status messages, such as connection and disconnection events. The schema for each output port is described in the following sections: The first output port routes incoming syslog messages to the StreamBase application. The schema for this port is pre-set by the adapter, and is described in the following table. Note Only the rawMessage and receiptTime fields are guaranteed to be non-null in all cases. The validity of the other fields in the schema will be dependent on whether the adapter is configured to attempt further parsing of the message, and the extent to which this parsing was successful. The parsing rules (and the fields they affect) are described in Message Parsing: Syslog messages do not have to conform to a hard and fast format. The only real requirement is that only one message be present in a given UDP packet. However over the years some conventions have emerged and some specifications have even been published, with varying degrees of adoption. When the adapter receives a syslog message it will at a minimum set the rawMessage field of the outgoing tuple to contain the entirety of this message, and the receiptTime field will contain a time stamp indicating the exact time at which the message was received by the adapter (this is not to be confused with the timestamps often contained within the message itself, which represent the time at which the message was sent). These two fields are the only ones that are guaranteed to be non-null and meaningful in all cases, regardless of whether further interpretation has been attempted and was successful. When the adapter is configured to attempt further parsing of the messages (see Adapter Properties Tab), incoming syslog messages are examined in order to extract meaningful information (such as the hostname of the machine originating the message, RFC 5424 "structured data elements", etc). If identified in the parsing process, these discrete elements are placed in dedicated fields in the output schema to aid the application in interpreting and characterizing the message. Several parsing attempts are made, according to the following rules: Parse using the specification outlined in RFC 5424, "The Syslog Protocol" (March 2009). If the message's format does not conform to the above, parse it using the specification outlined in RFC 3164, "The BSD Syslog Protocol" (August 2001). If the message still cannot be parsed successfully, attempt to at least extract the message's priority by checking if it begins with the pattern <NNN> where NNN is an integer number.
If successful, the priority is extracted and the rest of the message is treated as a single string. Failing all these attempts (or if Parse Incoming Messages is unchecked), the entirety of the message is placed in the rawMessage field and all other fields are left null. The complete list of fields in the output schema describing a syslog message is given in Syslog Message Output Port Schema. The table below describes the circumstances in which a given field's value may be set. Note of course that even when a message conforms to a given format some fields may still be null if they had no value in the message (for example, an RFC 5424 message may leave its 'appname' component unset while still providing meaningful values for other fields).
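The last-resort rule above (pull a leading <NNN> priority off the message and treat the remainder as a plain string) is straightforward to express in code. The following Python sketch is a generic illustration of that rule and of how a syslog PRI value splits into facility and severity; it is not the adapter's implementation.

import re

PRI_PATTERN = re.compile(r"^<(\d{1,3})>")

def parse_priority(raw_message):
    # Return (facility, severity, remainder) if the message starts with <NNN>, else None.
    match = PRI_PATTERN.match(raw_message)
    if not match:
        return None
    pri = int(match.group(1))
    if pri > 191:                        # valid PRI values are 0..191
        return None
    facility, severity = divmod(pri, 8)  # PRI = facility * 8 + severity
    return facility, severity, raw_message[match.end():]

print(parse_priority("<34>Oct 11 22:14:15 host su: 'su root' failed"))
# (4, 2, "Oct 11 22:14:15 host su: 'su root' failed")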
https://docs.streambase.com/latest/topic/com.streambase.sfds.ide.help/data/html/adaptersguide/embeddedInputSyslog.html
2020-07-02T23:04:13
CC-MAIN-2020-29
1593655880243.25
[]
docs.streambase.com
Contents The Map to sub-fields of tuple fields option appears in the Data File Options dialog invoked from the Feed Simulation Editor, when specifying options for a CSV data file used as input for a feed simulation. This option only appears in this dialog if the schema of the input stream for this feed simulation includes at least one field of type tuple. Use the Map to sub-fields option to specify that the fields of a flat CSV file are to be mapped to the sub-fields of tuple fields, not to the tuple fields themselves. This feature lets Studio read flat CSV files generated manually or generated by non-StreamBase applications such as Microsoft Excel, and apply them to schemas that have fields of type tuple. Do not enable this option for reading any CSV file whose fields have sub-fields, designated by quotes within quotes according to the CSV standard. Do not enable this option for reading hierarchical CSV files generated by StreamBase Studio, or by a StreamBase adapter such as the CSV File Writer Output adapter. For example, let's say StreamBase generates a CSV file to capture data emitted from an output stream, whose schema includes tuple fields. In this case, the generated CSV file is already in the correct format to reflect the nested tuple field structure, and does not need further processing to be recognized as such. Do enable this option for CSV files generated by third-party applications, including Microsoft Excel. These CSV files generally have a flat structure, with each field following the next, each field separated by a comma, tab, space, or other delimiter. Despite the flat structure, if the fields of a CSV file are ordered correctly, you can use the Map to sub-fields option to feed or validate a stream with nested tuple fields. The examples in this section will clarify this feature. Let's say we have an input stream that has a two-field schema: T tuple(i1 int, i2 int, i3 int), W tuple(x1 string, x2 string) This is illustrated in the figure below. There are two ways to create a CSV file that contains fields that correctly map to this schema: Create a CSV file that contains the expected hierarchy, separated and using quotes within quotes according to CSV standards. Create a flat CSV file that contains the correct number of fields in the right order, then tell StreamBase to interpret this file by mapping to the sub-fields of the two tuples. The following example shows a hierarchical CSV file that can be used with the schema shown above. In this file, each line maps to two fields, and each field contains sub-fields. To use a CSV file like this example as a feed simulation data file or a unit test validation file in Studio, do not enable the Map to sub-fields option. "100,200,300","alpha,beta" "655,788,499","gamma,delta" "987,765,432","epsilon,tau" The following example shows a flat CSV file that can also be used with the schema shown above. In this case, you must enable the Map to sub-fields option. 100,200,300,alpha,beta 655,788,499,gamma,delta 987,765,432,epsilon,tau The following image shows the Column mapping grid of a Data File Options dialog that is reading this flat CSV file. The Map data file columns to sub-fields check box is visible and selected. When specifying a CSV file to use with a feed simulation, the Data File Options dialog shows you graphically how the CSV file will be interpreted. However, StreamBase cannot fully validate the CSV file against the input port's schema until the application is run. 
When using the Map to sub-fields option for a feed simulation, use the following steps to make sure your CSV file is validated as expected: Run the application. Start the feed simulation that uses the CSV file. Examine the resulting tuples in the Input Streams or Output Streams views. For the example CSV files above, the following Output Streams view shows that the tuples fed to the input stream were interpreted as expected, and all sub-fields were filled with data: The same results are obtained in these two cases: Using the hierarchical CSV file with the Map to sub-fields option disabled. Using the flat CSV file with the Map to sub-fields option enabled. If you see several fields interpreted as null, this indicates that the Map to sub-fields option is enabled for an already-hierarchical CSV file, or that the fields in a flat CSV file do not line up field-for-field with the schema of the input port you are feeding. The following shows an example of an incorrect result. In this case, only the first sub-field of each tuple received input.
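For illustration only (this is not StreamBase code), the sub-field mapping applied to a flat CSV row can be mimicked in a few lines of Python, using the two-field schema and the flat CSV data from the example above.

import csv
import io

# Schema from the example: T tuple(i1, i2, i3), W tuple(x1, x2)
SCHEMA = [("T", ["i1", "i2", "i3"]), ("W", ["x1", "x2"])]

def map_to_subfields(row, schema):
    # Consume the flat CSV columns left to right, filling each tuple field's sub-fields.
    columns = iter(row)
    return {name: {sub: next(columns) for sub in subfields} for name, subfields in schema}

flat_csv = "100,200,300,alpha,beta\n655,788,499,gamma,delta\n"
for row in csv.reader(io.StringIO(flat_csv)):
    print(map_to_subfields(row, SCHEMA))
# {'T': {'i1': '100', 'i2': '200', 'i3': '300'}, 'W': {'x1': 'alpha', 'x2': 'beta'}}
# {'T': {'i1': '655', 'i2': '788', 'i3': '499'}, 'W': {'x1': 'gamma', 'x2': 'delta'}}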
https://docs.streambase.com/latest/topic/com.streambase.sfds.ide.help/data/html/testdebug/maptoleaffields.html
2020-07-02T22:08:23
CC-MAIN-2020-29
1593655880243.25
[array(['../images/maptoleaf_sampleschema.png', None], dtype=object)]
docs.streambase.com
Information Security Guide Confidence in the meaning, disposition, and provenance of information is at the heart of scientific research and discovery. Our confidence originates from our trust in the processes used to conduct experiments, analyze results, and create knowledge. The processes we define to support reproducible discovery are the foundation of information security in the research domain. Modern science is increasingly an expression of ideas in the virtual spaces of the computational platforms that surround us. The computer is our most versatile scientific instrument. The computer allows us to explore any abstraction we can envision and to build pathways to our discoveries. They help us by supporting development of a reproducible process. Good process lets us explore our virtual worlds with confidence. It underlies our trust in the experiments we conducted and the results we obtained from our virtual worlds. At UAB, we are building a Research Computing System (RCS) that supplies researchers with HPC, storage, web, and virtual infrastructure to facilitate investigation, develop research applications, and enable collaboration. This system is being designed to promote and support processes that ensure confidence in the experiments conducted and results obtained using this system. In other words, we are building a scientific instrument to support the virtual expressions of modern science. The information security guide will document the function of the Research Computing System to engender trust in the information recorded on and derived from the conduct of science on this platform. Background To facilitate dialogs about the Research Computing System and its development across a wide variety of groups and interests, this document will leverage definitions and standards for information security being developed by NIST. According to NIST, information security is the protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability. This term is defined in (FIPS-199), the primary standards document that all participants in this dialog should be familiar with. FIPS-199 identifies "information types" and "information systems" as the two primary classes used to document information security requirements. Additionally, it defines three areas of information security "confidentiality, integrity, and availability" that are used to guide the implementation of appropriate process. FIPS-199 is a short document, and the heart of the matter is covered in the first 6 pages. The remaining content is an appendix defining the referenced terms. RCS is an Information System A basic statement of the operating principles for the Research Computing system could be written as follows: - The Research Computing System provides controlled access to data and applications maintained on the system. Every data and application resource has an access control list which specifies allowed interactions with the resource. All requests to access data and applications are verified against the resource access control list to assure all allowed interactions are permitted. - A person is granted access to the Research Computing System according to their affiliations with the University. Individuals are assigned a unique identity to account for their use of the system and the resources which they maintain on the system.
Valid credentials must be presented to modify resources maintained on the system. Individuals may be associated with groups which reflect their affiliations with the University or with other individuals using the system. Group membership can be used to expand or constrain access to data and application resources maintained on the system. - It is important to note that this statement only describes how the system operates. It does not dictate any restriction to the access of information. For example, this wiki, visible to the world, is in full harmony with that operation. The "access control list" for the wiki includes "world readable".
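The operating principles above describe a conventional access-control-list model: every resource carries an ACL, every request is checked against it, and group membership can widen or narrow access. The Python sketch below is a toy model of that idea, included only to make the mechanism concrete; it is not the Research Computing System's implementation, and all identifiers are invented.

# Toy ACL model: each ACL maps principals (users, groups, or "world") to allowed actions.
acls = {
    "/project/data.csv": {"alice": {"read", "write"}, "grp:lab42": {"read"}},
    "/wiki/infosec-guide": {"world": {"read"}},
}
group_members = {"grp:lab42": {"bob", "carol"}}

def allowed(user, resource, action):
    # A request is permitted if the resource's ACL grants the action to the user,
    # to one of the user's groups, or to "world".
    acl = acls.get(resource, {})
    principals = {user, "world"} | {g for g, members in group_members.items() if user in members}
    return any(action in acl.get(p, set()) for p in principals)

print(allowed("bob", "/project/data.csv", "read"))     # True, via grp:lab42
print(allowed("bob", "/project/data.csv", "write"))    # False
print(allowed("dana", "/wiki/infosec-guide", "read"))  # True: the resource is world readable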
https://docs.uabgrid.uab.edu/w/index.php?title=Information_Security_Guide&oldid=4505&diff=prev
2020-07-02T23:07:55
CC-MAIN-2020-29
1593655880243.25
[]
docs.uabgrid.uab.edu
Declare and initialize the variables in your cash register application, and pass them as arguments with the COM extension methods. The COM extension assigns values to these variables. The cash register can use these variables to access, store, and process the data returned from the library through the COM. The additional data construct provides this data in the responseHeader. We provide a helper method to aid processing this in your cash register application. This mechanism applies to progress events, Additional data callback, Handle the Dynamic Currency Conversion (DCC) callback, Print Receipt callback, and the Final State callback. Code example
[Some Callback Event](objPED, objTender, objHeader)
    Dim additionalDataKey, additionalDataValue
    For i=1 To objHeader.GetAdditionalData().Count
        intCallResult = objHeader.GetAdditionalData().GetData(i, additionalDataKey, additionalDataValue)
        'Add results to internal array for later processing
        '...
    Next
https://docs.adyen.com/point-of-sale/classic-library-integrations/com-extension-for-windows-integration/key-steps-com-extension/process-a-basic-transaction-com-extension/handle-and-extract-data-from-callbacks-com-extension
2020-07-02T23:14:54
CC-MAIN-2020-29
1593655880243.25
[]
docs.adyen.com
Enabling the BMC Atrium Orchestrator integration

Before you can create Workflow Jobs, you must enable the integration between BMC Server Automation and BMC Atrium Orchestrator. To fully enable the integration, you must set up the connection with BMC Atrium Orchestrator through the BMC Server Automation Console. If you want secure data communication between the products, you must also enable an HTTPS connection on both products. For more information, see Enabling Change Automation for BMC Server Automation jobs.
https://docs.bmc.com/docs/ServerAutomation/89/enabling-the-bmc-atrium-orchestrator-integration-653397634.html
2020-07-02T23:19:00
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
Find out how to add participants to your event. Introduction After you have created your event and its sessions, you can begin to add participants (those who are speaking, presenting, performing, etc. at your event) to your overall event, as well as individual sessions. When you add participants, Grenadine lets you assign them to specific sessions, view their invitation status, and more. Navigation Path Menu Items - Name: The full name of the participant. - Put a different name (or a “stage” name) on public profile: If the person goes by a stage name, or a name other than their given name select this box. - Published Name: If a person goes by a name other than their given name ( such as a stage or pen name) type it here - Primary Email: The main email address for this participant. - Label: Make note of the type of e-mail address this is for example., is it the participants’ main e-mail, their assistant’s e-mail, a personal e-mail, etc. - Organization Information: The organization for which this participant works and their job title. - Categories: Is this participant is a speaker, a performer, a vendor, etc. - Invitation Category: When inviting this person to your event, you have the option of labeling them in an invitation category (for example, VIP, keynote, etc.) - Participant Status: You can invite someone to speak at your event or you can invite someone to participate in your event in some other way, for example as a volunteer. If you invite someone through a Grenadine Event Manager survey, you will see the invitation status automatically populate in this section. If you’ve invited a person through other means (by phone, through a 3rd-party system, etc.), you can set the invitation status manually. Note: The invitation is different than registration, which has to do with an attendee purchasing a ticket to attend your event. - Acceptance Status: In some cases, participants will be invited to an event. Here, you can monitor whether or not they have accepted. - Registered: This checkbox will be checked automatically by the system if the person registers him or herself. You can check the box to register or unregister the person manually. - Birthdate: The birthdate of this person. - Dietary Preferences: If applicable, you can input a dietary status here. - Allergies: Indicate any food or medical allergies this person has. - Comments: For internal use only. Comments will not appear publicly.
https://docs.grenadine.co/creating-participants.html
2020-07-02T22:12:21
CC-MAIN-2020-29
1593655880243.25
[array(['images/add_person.jpg', None], dtype=object)]
docs.grenadine.co
6.0 Release Notes (5.0 to 6.0) 18 October 2018 We are happy to introduce OnApp 6.0 (6.0.0-55). OnApp 6.0 is a new long-term support (LTS) version. This document lists features, improvements, and fixes implemented for all available OnApp components within the 5.1-6.0 versions. Before You Upgrade You can upgrade to OnApp 6.0 from the 5.0 version. Before you upgrade, see the Upgrade Notes for important information about this release. Then you can follow the upgrade instructions to get your Control Panel up and running on OnApp 6.0. Highlights Here you can find the key features that we deliver as a part of OnApp 6.0. You can also check other features and improvements and see the list of issues that were resolved in OnApp 6.0. Buckets Buckets are introduced to merge user and company billing plans into one logical unit. Buckets consist of Access Control where you can give users access to cloud resources and Rate Card where you can set prices for the usage of these resources. The Access Control and Rate Card are arranged according to the types of resources available in your cloud, such as virtual, smart, baremetal, and others. The Federation billing plans are also adapted to the Buckets functionality. Disable Billing You can disable billing to hide all pricing and billing information from users in your Control Panel. When you disable billing, the virtual server and user billing statistics is not calculated. You can disable billing if you don't use Federation compute zones in your Control Panel. Backup Plugin System Backup Plugin System enables you to integrate OnApp with a third-party backup service. The plugin allows to back up and restore your virtual servers by means of a service that you use for backup management. OnApp provides the plugins for R1Soft and Veeam that you can install to your Control Panel. You can also create your own plugin to integrate OnApp with a backup service of your choice. Service Insertion Framework Service Insertion Framework allows you to bring other services to OnApp. When you create the service insertion groups and pages, they become available on the main menu on your Control Panel. Software Defined Networking Software Defined Networking is designed for you to manage networks faster, using the VXLAN technology across OnApp cloud compute resources. SDN enables you to build level-two network infrastructure with OnApp on top of the existing level-three IP network. Network IP Nets OnApp introduces network IP nets that allow you to offer a broad range of networking services and create more flexibility. Now a network can include several IP nets that are IP ranges with a default gateway. vCenter 2.0 OnApp provides a set of updates for the vCenter integration: removed Vyatta, customer networks, customer VLANs, and IP address pools. Added Manual IP Nets that are created when an IP address of a vCenter virtual server that is being imported is not a part of an IP range in any of the vCenter networks. The updated virtual server wizard allows to select a cluster and data center to import networks and data stores from. Accelerator 2.0 Accelerator Dashboard is designed to enable acceleration for all types of networks to speed up the traffic flow for virtual servers. Accelerator enables to load your websites in several seconds regardless of the end users current location. Migration from VMware Solutions to KVM OnApp enables you to migrate virtual servers from vCloud Director and vCenter to KVM. 
The migration workflow includes several steps that you can perform from a Control Panel to get your virtual servers up and running on KVM. vCloud Director Multiple Organizations OnApp provides support for vCloud Director multiple organizations within one user group. When several organizations are associated with one user group, users from each organization are created in the other organizations in the group. In such a case, users in the user group have access to multiple vCloud Director instances. OVA You can use OVAs to import virtual servers created on other cloud platforms to OnApp. The virtual server is created from an OVA file that you can upload to your Control Panel. OnApp enables you to create virtual servers with multiple disks and network interfaces from OVA templates. You can also convert OVA templates more than once into KVM and vCenter virtualization formats and use a simplified procedure to configure CloudBoot backup servers to support the OVA functionality. Hot and Cold Migration The hot and cold migration methods allow you to perform an online or offline migration of virtual servers and virtual server disks. You can apply the following hot and cold migration scenarios: virtual server can be migrated to another compute resource within the same compute zone, virtual server with disks - to another compute resource within the same compute zone and to another data store from the destination compute zone and compute resource respectively, and only disks - between compute resources that share common data stores or data store zones. New SAML Attributes New SAML attributes are added to import users into OnApp with a wider set of properties. It is implemented to enable importing users into OnApp cloud with a number of preset properties (user role, time zone, group, etc). These attributes can be imported into or synchronized with the Server Provider (OnApp), making it possible to configure SP users on the Identity Provider system. Service Add-ons Service Add-ons allow you to offer your users additional services on top of the current virtual server appliances. You can offer features such as Managed Services, Software Installations, and other components that are not integrated in OnApp. CPU Quota You can set a CPU quota to limit the maximum virtual server CPU load on a KVM compute resource. You can set the CPU quota for all virtual servers on a compute resource and customize the value for particular virtual servers. Before you enable the CPU quota, the default value is set to unlimited for all virtual servers on a compute resource. Zone Types OnApp reinforces the role of zone types in the cloud. All compute, data store, network, and backup server zones can belong to one of the following types: Virtual, Baremetal, Smart, or VPC. The VPC zone type serves vCloud Director related resources. All individual resources such as compute resources, data stores, networks, and backup servers should be assigned to zones. You cannot use unassigned resources to create virtual servers. Notifications Transaction Approvals Transaction Approvals enable you to create users (approvers) who can approve or decline actions performed by other users (requesters). The approver can allow or prohibit any transaction that requires an approval in the cloud. Bulk Power Actions You can simultaneously power on and power off multiple virtual servers that reside on the same Xen or KVM compute resource. 
Operating System Types You can select an operating system type for a compute resource that can be Any OS, Windows-Only, or Non-Windows. By selecting an operating system type, you can consolidate your Windows-based virtual servers to control the Microsoft licensing costs. Downloading Reports You can download a CSV file with cloud and billing reports from the Usage Statistics, User Group Report, Virtual Server Billing Statistics, and User Billing Statistics pages on your Control Panel. CDN Reporting You can conduct an in-depth analysis of your CDN resources, using CDN Reporting on top files and top referrers. You can apply different filters for every report and export it to a CSV file. Maintenance Mode for CloudBoot You can use the maintenance mode to temporarily take a CloudBoot compute resource out of service in order to perform fixes or upgrade procedures. Integrated Storage Auto Healing OnApp introduces auto healing that is an auto-scheduling option to repair degraded virtual disks from Integrated Storage. Auto healing allows to repairs disks one by one for each data store. You can use auto healing only in case there are no serious issues with Integrated Storage. Isolated License The new licensing model is designed for use in an isolated environment. The Isolated License is applicable to a Control Panel that is run in a secure environment that allows no external access from the public Internet. Hardware Info The Hardware Info enables you to overview the hardware that is used by Control Panel to run compute resources and backup servers available in your cloud. The Hardware Info provides data on CPU, RAM, hard disk drives, networks, and other hardware components. You can also create custom fields to provide additional hardware information that you find necessary. Container Servers OnApp introduces container servers that are regular virtual servers based on the default CoreOS template. Container servers allow you to customize a virtual server to implement integration with Docker or other container services. Internationalization OnApp introduces an updated internationalization interface that is based on a standard Rails i18n system. You can add any custom language to your Control Panel and translate all the interface labels, error messages, and other texts from English into the custom language.
https://docs.onapp.com/rn/6-0-release-notes/6-0-release-notes-5-0-to-6-0
2020-07-02T23:28:48
CC-MAIN-2020-29
1593655880243.25
[]
docs.onapp.com
The Input Streams view allows you to see what tuples have been sent to the running application via the Manual Input View or Feed Simulations View. The Input Streams view works much like the Output Streams View. By default, the Input Streams view is located in the lower left corner of the SB Test/Debug perspective. The figure below shows an Input Streams view that has logged several tuples sent through the Best Bids and Asks sample application. When enabled (default), thebutton sends the selected tuples back to the input stream named in the second column of the Input Streams grid. The Input Streams view is cleared by default whenever you restart an application. You can also clear the view for the current session by clicking the Clear box. The Input Streams and Output Streams views each have an associated preference, Tuple Buffer Size, which specifies how many tuples are shown in each view before scrolling out of the buffer. The default buffer size is 400 tuples. You can change the default by selecting > and opening the > panel. You can sort the tuples in the Input Streams view by clicking the heading that corresponds to the way you want to sort (Time, Input Stream, or Fields), or by using the drop-down menu on the view's toolbar to select one of these options. The Show Details Pane can help you read output data that is too long to easily see in the Input Streams grid. Click the Show Details Pane button to display or hide the Show Details Pane (displayed by default) below the table. Select any row in the Input Streams grid to display its field values in the Show Details Pane. The Scroll Lock button is a toggle that allows you to control how tuples scroll in the Input Streams grid: When Scroll Lock is off: as tuple rows are added to the top of the table, the bottom of the table (containing the first tuples) remains visible; you must scroll up to see newer tuples. When Scroll Lock is on: as rows are added to the top of the table, the top of the table (containing the most recent tuples) remains visible; you must scroll down to see older tuples. You can right-click any selection of rows in the view to show the context menu, which has the same features as the Output Streams view: Show Time column, Copy, and various Copy as options. See Show or Suppress the Time Column and Copy Tuples to Clipboard.
https://docs.streambase.com/latest/topic/com.streambase.sfds.ide.help/data/html/studioref/applicationinput.html
2020-07-02T22:59:45
CC-MAIN-2020-29
1593655880243.25
[]
docs.streambase.com
Select the first-level label either by clicking the desired label, or by searching for the label and then clicking it. Define how the value relates to the label, and define the value. The scope editor restricts the scope of the selection for subsequent filters by rendering values that are specific to the selected labels. For example, in the image below, the kubernetes.namespace.name label is set as a variable. Once saved, the dashboard has multiple values that can be displayed. Optional: Add additional label/value combinations to further refine the scope. Click the Save button to save the new scope, or click the Cancel button to revert the changes. To reset the dashboard scope to the entire infrastructure, or to update an existing dashboard's scope to the entire infrastructure, open the first scope drop-down menu and select everywhere.

Configure Panel Scope

To configure the scope of an existing dashboard panel:

From the Dashboard module, select the relevant dashboard from the dashboard list.
Hover the cursor over the desired panel, and select the Edit (pencil) icon.
Click the Override Dashboard Scope link to enable a custom panel scope. To return an individual panel scope to the default dashboard scope, click the Default to Dashboard Scope link, and save the changes.
Open the Scope drop-down menu. Either select the new scope, or search for the desired scope, and then select it.
Click the Save button to confirm the changes.

Panels that have a custom scope (a different scope to the overall dashboard) are marked with a shaded corner.
https://docs.sysdig.com/en/dashboard-scope.html
2020-07-02T23:02:55
CC-MAIN-2020-29
1593655880243.25
[]
docs.sysdig.com
End-to-end integration steps in HA environment This topic provides a high-level overview for installing and configuring BMC Remedy AR System, BMC Remedy Mid Tier in a High Availability (HA) load-balancing, server-group environment with BMC Atrium Single Sign-On. The following topics are provided: ITSM integration architecture in an HA load-balancing environment (when integrated with BMC Atrium SSO) Installing and configuring BMC Remedy AR System Follow the steps for installing and configuring BMC Remedy Action Request (AR) System and BMC Remedy Mid Tier on multiple servers in an HA environment. Before you begin - Create a list of all load balancers, AR System servers, mid-tier servers, and BMC Atrium Single Sign-On servers and their IP addresses, along with the accepted fully qualified domain names (FQDNs), in a text file. You can refer to the ITSM architecture provided in this topic. Create a comprehensive list of all of the following items in a text file: Load balancers AR System servers BMC Remedy Mid Tier servers BMC Atrium Single Sign-On servers and their IP addresses along with their accepted FQDNs You can refer to the ITSM architecture provided in this topic. For more information, see Planning where software is installed in server groups. Setting up your load balancer A load balancer is a valuable component in building a scalable, highly available BMC Remedy AR System infrastructure. Scalability is achieved through the ability to add BMC Remedy AR System servers as demand and workload increase. For more information about setting up a load balancer, see Configuring a hardware load balancer with BMC Remedy AR System. To set up your load balancer Configure the AR System server load balancer with all of the servers in the server group. For more information, see Load balancer configuration examples. - Configure the BMC Remedy Mid Tier load balancer. - Configure the SSO server load balancer. For more information, see Installing BMC Atrium Single Sign-On as a High Availability cluster. Note Ensure that your load balancers include all of the computers on which you have installed servers. For example, your BMC Remedy Mid Tier load balancer must include all of the computers on which you have installed the mid tier. Installing the server group The following list describes the list of tasks required to install a server group for BMC Remedy AR System, BMC Atrium Core, and other BMC products. For more information, see Installing a server group. - Installing the first AR System server. - Installing the first mid tier server. - Obtaining BMC Remedy license keys. - (Optional) Testing the mid tier in your server group. This step is used for testing the installation of the first AR System server. - Configuring the first server to be a server group member. - Testing the first server to confirm that it is working properly. - Installing the next AR System server in the server group. - Configuring the next server for the server group. - Configuring the mid tier to include all AR System servers that you have installed. - Testing the current server to confirm that it is working properly. Use the AR System Server Group Operation Ranking form to distribute the load between the AR System servers and the load balancer. - Configuring the mid tier to use the AR System server load balancer. Remove the first AR System from the mid tier and add the name of the virtual host of the AR System server load balancer (for example, remedyssoservergroup). 
- Logging on to the BMC Remedy Mid Tier Make sure that the mid tier resolves to the AR System server load balancer. For example, you should be able to access the BMC Remedy AR System Administration Console. - Installing the remaining BMC Remedy Mid Tier servers for your environment. - Configuring the Mid Tier load balancer with all of the Mid Tiers in the server group. When you log on to the Mid Tier load balancer, it should resolve to the AR System server load balancer URL. Installing and configuring BMC Atrium Single Sign-On BMC Atrium Single Sign-On is the first authentication step for BMC Remedy AR System. Installing BMC Atrium Single Sign-On as a High Availability cluster. Note If you are not using BMC Atrium SSO as an HA cluster, see Installing BMC Atrium Single Sign-On as a standalone. - Running the BMC Atrium Single Sign-On Installer on AR System server — Perform the BMC Atrium Single Sign-On integration with the AR System servers for each server in a cluster. Reviewing AR System server external authentication settings and configuring group mapping — Before you configure BMC Atrium Single Sign-On, you must configure group mapping for external authentication in each BMC Remedy AR System server in a cluster. - Running the BMC Atrium Single Sign-On Installer on BMC Remedy Mid Tier — After you integrate BMC Atrium Single Sign-On with the computers on which the AR System server is installed, you must integrate BMC Atrium Single Sign-On with each server where mid tier is installed. Configure the SSO AREA plug-in with a Java plug-in entry, along with other external authentication parameters. - (Optional) Enabling multi-tenancy support — Set the value of the allow.tenant.adminand allow.multiple.realmsparameters to true in the web.xml file. - (Optional) Adding a tenant realm on the Realms panel — Add a new realm in BMC Atrium SSO so that single sign-on is seamlessly available for the new tenant. To add realms to a BMC Atrium SSO server, use the Realms panel in the BMC Atrium SSO Admin Console. - Editing the realm — Set the Realm Authentication, Federation, or User Stores configuration values by using the tabs on the Realm Editor form. The new realm does not contain these values, which are necessary for authentication. Mapping realm URLs to an agent for multiple realms — You must map the realm URLs on the Agent Editor console for successful authentication with BMC Atrium SSO. This mapping helps the BMC Atrium SSO server to identify requests coming from different realms and tenants. Tip If you have enabled multi-tenancy, you can also enable automatic mapping of realm URLs to an agent. For more information, see Mapping realm URLs to an agent for multi-tenancy. If you not enable multi-tenancy and still use multiple realms, you can still map realm URLs to an agent. For more information, see Mapping realm URLs to an agent for multiple realms. - Configuring BMC Atrium Single Sign-On settings for AR System — Perform this step in conjunction with the AR Data Store to retrieve group information and other user attributes from the AR System server for each realm. - (Optional) Running a health check on the BMC Atrium Single Sign-On installation — After you complete all of the steps, run a health check on your integration of BMC Atrium Single Sign-On with each mid tier.
https://docs.bmc.com/docs/sso90/end-to-end-integration-steps-in-ha-environment-474057060.html
2020-07-02T21:22:54
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
Main Menu The Main Menu is a very advanced menu building module. It supports simple multi-level categories dropdowns as well as robust and feature rich mega menu dropdowns with page builder support. Main Menu WorkflowMain Menu Workflow See below explanations: Create one or more Main Menu modules in Journal > Header > Main Menu. - Menu Item Name seen in the admin. Use the vertical dots icon on the left to drag menus and change sort order. - Style Override. By default the main menu module uses the Menu Item Style applied in the header module, but you may style each menu differently than the others here. - Menu Label. Each main menu item can have a custom label applied from here. The label uses the Menu Label style from Journal > Styles > Menu Label so make sure to create some styles first. - Menu item status. You may create conditional menu items that only appear on specific devices or users (guests, customers, customer groups, etc.) - Menu title seen in the storefront. - Optional custom icon. - Hides menu text and displays the icon only. - The link where the menu item should navigate. - The menu dropdown type - Dropdown - displays normal dropdown or multi-level subcategories. - Flyout. Create the Flyout module separately in Journal > Modules > Flyout Menu. - Mega Menu. The mega menu uses the layout page builder. For category links the best module is the Catalog module found in Journal > Modules > Catalog, but any other compatible module may be used. - None. Choose this option for simple menu items that don't require any dropdown. - Dropdown menus or modules, based on the dropdown type. Assign the menu module to the active header in Journal > Header > Active Header > Main Menu > Menu Module. You can see which header is active in Journal > Skins > Header > Desktop Header. Apply Menu Style. Styles for the menus are created in Journal > Styles > Menu Item. You may create unlimited styles and activate each one in different headers. Additionally, each menu item can be styled differently directly from the Main Menu module. Activate various other options available for the main menu in each header. note The Classic header can display two different menu modules. This is useful when you want to have some menu items on the left and others on the right.
https://docs.journal-theme.com/docs/header/main-menu
2020-07-02T20:56:16
CC-MAIN-2020-29
1593655880243.25
[array(['/docs/headers/menu.png', None], dtype=object) array(['/docs/headers/menus.png', None], dtype=object)]
docs.journal-theme.com
Analyze Your ServiceNow Data

Before you can configure Moogsoft Enterprise to add data to alerts from ServiceNow, gather the required information about your ServiceNow system. This topic covers analysis, the first step in the ServiceNow enrichment example Enrich Alerts with ServiceNow Data. The following diagram illustrates the process to enrich alert data from ServiceNow:

Identify the following about your ServiceNow system:

The URL for your ServiceNow Incident Management instance. If Moogsoft Enterprise must connect to ServiceNow through a proxy, collect the proxy information.

Credentials with API access to ServiceNow. The user you choose can be a non-interactive user with "Web service access only", but it must have the cmdb_read role.

The ServiceNow data you want to use in your alerts. The ServiceNow Enrichment Integration includes default data from the cmdb_ci table.

The fields in your alert data that relate to the database tables. Make note if different types of alerts store the relationship in different fields. By default the integration maps the cmdb_ci.name field from ServiceNow to the source field in alerts.

After you collect information about your ServiceNow data, you are ready to Configure the ServiceNow Enrichment Integration.

Step 1 example: Analyze ServiceNow data

The following sections outline the database details you need during the ServiceNow Enrichment Integration configuration step:

ServiceNow connection information

The example scenario uses the following ServiceNow sample connection information:

URL:
user: enricher
password: password123

Table information

Location and support group data are stored in the cmdb_ci table. The field cmdb_ci.name relates to the source field in your alerts. For example, given the following alert data:

{ ... "source":"lnux100", ...}

ServiceNow provides the following enrichment data:

location: 1265 Battery St., San Francisco, CA
support group: SF NOC

Learn More

To continue with the ServiceNow Enrichment example, go to step 2: Configure the ServiceNow Enrichment Integration. For more information about tables in ServiceNow, see the ServiceNow docs.
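Before wiring this into Moogsoft, it can be useful to confirm that the chosen credentials can actually read the cmdb_ci table and that the lookup key matches the source values in your alerts. The sketch below is an optional sanity check using ServiceNow's standard Table API; it is not part of the enrichment integration itself, the instance URL is a placeholder, and the example credentials from above are reused.

import requests

INSTANCE = "https://your-instance.service-now.com"  # placeholder; substitute your instance URL

resp = requests.get(
    f"{INSTANCE}/api/now/table/cmdb_ci",
    auth=("enricher", "password123"),                  # example credentials from above
    params={
        "sysparm_query": "name=lnux100",               # the field mapped to the alert 'source'
        "sysparm_fields": "name,location,support_group",
        "sysparm_display_value": "true",               # resolve reference fields to display names
        "sysparm_limit": 1,
    },
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
for record in resp.json().get("result", []):
    print(record)  # e.g. name, location, and support_group for lnux100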
https://docs.moogsoft.com/Enterprise.8.0.0/analyze-your-servicenow-data.html
2020-07-02T21:07:02
CC-MAIN-2020-29
1593655880243.25
[array(['image/uuid-0f92ec85-c366-c6ad-f765-906ca96ba974-en.png', 'analyze_SNOW_data.png'], dtype=object) ]
docs.moogsoft.com
Configure a Webhook Channel Sysdig Monitor and Sysdig Secure support sending an alert notification to a destination (a website, custom application, etc.) for which Sysdig does not have a native integration. Do this using a custom webhook channel. Prerequisites Webhooks via HTTPS only work if a signed/valid certificate is in use. Have your desired destination URL on hand. Enable Feature in the UI Complete steps 1-3 in Set Up a Notification Channel and choose Webhook. Enter the webhook channel configuration options: URL: The destination URL to which notifications will be sent Channel Name: Add a meaningful name, such as "Ansible," "Webhook.Site," etc. Enabled: Toggle on/off Notification options: Toggle for notifications when alerts are resolved and/or acknowledged. Test notification: Toggle to be notified that the configured URL is working. From Shared With: Choose whether to apply this channel globally (All Teams) or to a specific team from the drop-down. Click Save. When the channel is created, you can use it on any alerts you create. Then, when the alert fires, the notification will be sent as a POST in JSON format to your webhook endpoint. (See Alert Output, below.) For testing purposes, you can use a third-party site to create a temporary endpoint to see exactly what a Sysdig alert will send in any specific notification. Option: Configure Custom Headers or Data By default, alert notifications follow a standard format (see Description of POST Data, below). However, some integrations require additional headers and/or data, which you can append to the alert format using a custom header or custom data entry. For example, Ansible uses token-based authentication, which requires an entry for the bearer token. This entry is not included in the default alert template built into Sysdig, but you can add it using a custom header. You must do this from the command line, as described below. Note additionalHeadersis usually used for authentication customDatais used to add values to the alert Sample Use Case This example adds two custom headers and defines additional custom data, as well as the format for that data. Use the curl command to retrieve all configured notification channels: curl -X GET -H 'Authorization: Bearer API-KEY' Add the custom headers and execute the request: curl -X PUT -H 'Authorization: Bearer API-KEY' -H 'Content-Type: application/json' -d '{ "notificationChannel": { "id": 1, "version": 1, "type": "WEBHOOK", "enabled": true, "name": "Test-Sysdig", "options": { "notifyOnOk": true, "url": "", "notifyOnResolve": true, "customData": { "String-key": "String-value", "Double-key": 2.3, "Int-key": 23, "Null-key": null, "Boolean-key": true }, "additionalHeaders": { "Header-1": "Header-Value-1", "Header-2": "Header-Value-2" } } } }' Standard Alert Output Alerts that use a custom webhook for notification send a JSON-format with the following data. 
Description of POST Data: "timestamp": Unix timestamp of when notification fired "timespan": alert duration in seconds "alert": info on the alert that generated the event triggering the notification "severity": 0 - 7 int value "editUrl": URL to edit the alert "scope": scope as defined in the alert "name": alert name "description": alert description "id": alert id "event": info on the event that triggered the notification "id": event id "url": URL to view the event "state": ACTIVE (alert condition is met) or OK (alert condition no longer met) "resolved": false (alert has not been manually resolved) or true (it has) "entities": array of nodes within the alert scope that triggered the notification "entity": metadata to identify the node "metricValues": array of metrics that triggered the notification "metric": metric name "aggregation": time aggregation method used to calculate the metric "groupAggregation": group aggregation method used to calculate the metric "value": metric value "additionalInfo": array of additional metadata about the entity "metric": metadata key "value": metadata value "condition": alert condition Example of POST Data: { "timestamp": 1471457820000000, "timespan": 60000000, "alert": { "severity": 4, "editUrl": "", "scope": "host.mac = \"00:0c:29:04:07:c1\"", "name": "alertName", "description": "alertDescription", "id": 1 }, "event": { "id": 1, "url": "" }, "state": "ACTIVE", "resolved": false, "entities": [{ "entity": "host.mac = '00:0c:29:04:07:c1'", "metricValues": [{ "metric": "cpu.used.percent", "aggregation": "timeAvg", "groupAggregation": "none", "value": 100.0 }], "additionalInfo": [{ "metric": "host.hostName", "value": "sergio-virtual-machine" }] }], "condition": "timeAvg(cpu.used.percent) > 10" } Example of Failure $ curl -X GET -H 'authorization: Bearer dc1a42cc-2a5a-4661-b4d9-4ba835fxxxxx’' {"timestamp":1543419336542,"status":401,"error":"Unauthorized","message":"Bad credentials","path":"/api/notificationChannels"} Example of Success $ curl -X GET -H 'Authorization: Bearer dc1a42cc-2a5a-4661-b4d9-4ba835fxxxxx' {"notificationChannels":[{"id":18968,"version":2,"createdOn":1543418691000,"modifiedOn":1543419020000,"type":"WEBHOOK","enabled":true,"sendTestNotification":false,"name":"robin-webhook-test","options":{"notifyOnOk":true,"url":"","notifyOnResolve":true}}]} $ The webhook feature is used to integrate the following channels:
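To see exactly what your endpoint will receive, you can point a test channel at a throwaway HTTP listener. The sketch below is a minimal receiver, not production code: it accepts the POST described above and prints the alert name, state, and condition. The port is arbitrary, and any headers configured via additionalHeaders arrive as ordinary request headers.

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and parse the JSON body sent on alert notification.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        alert = payload.get("alert", {})
        print(f"{payload.get('state')}: {alert.get('name')} ({payload.get('condition')})")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()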
https://docs.sysdig.com/en/configure-a-webhook-channel.html
2020-07-02T21:13:57
CC-MAIN-2020-29
1593655880243.25
[]
docs.sysdig.com
Sending Announcements You can send Alexa announcements to one or more rooms in your Alexa for Business organization. When you do this, Alexa wakes and speaks the announcement that you enter, for the rooms that you select. You can create an announcement from the Alexa for Business console, or with the SendAnnouncement API. For more information, see the Alexa for Business API Reference. The API allows developers to trigger a text or audio announcement on Alexa for Business-managed endpoints from any app. For example, when a threshold is reached on an IoT sensor, send an alert to the shared devices in an operations team area. Or, you can turn your Alexa for Business deployment into a PA system. Systems using the API need IAM permissions. Use the following steps to create an announcement from the console. Also use these steps with the API, to test how the announcement sounds, or to make sure it reaches the correct rooms. To send or test an announcement from the console Open the Alexa for Business console at . Choose Announcements, Create announcement. On the Write message page, next to Message text, enter a message for Alexa to announce. Choose Next. Note There is a maximum of 250 characters. On the Select rooms page, choose one of the following options from the Room selection drop-down menu: Manual selection - Select one room from a list of all your rooms. You can filter by Room name and Profile. This option is good for testing an announcement. You can send it to one room while sitting in that room, to hear how it sounds. Room ARN - Enter the ARN of the room or rooms, separated by commas or line breaks. You can call an API to retrieve room ARNs. Room profile - Select the name of the room profile and review the list of rooms. Room name filter - Enter an exact room name, or the prefix of multiple rooms. For example, enter Roomto see Room1 and Room2. All rooms - Select all the rooms in your organization. Choose Send announcement. Note Alexa doesn’t proactively listen for requests after making the announcement. After hearing an announcement, users must say the wake word to make Alexa requests.
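If you trigger announcements programmatically rather than from the console, the call goes through the SendAnnouncement API using credentials with the appropriate IAM permissions. The sketch below uses the AWS SDK for Python (boto3); the room-name filter value and message text are examples only, and parameter details should be confirmed against the Alexa for Business API Reference.

import uuid
import boto3

a4b = boto3.client("alexaforbusiness")

response = a4b.send_announcement(
    RoomFilters=[
        # Example filter: target rooms whose name matches "Room1".
        {"Key": "RoomName", "Values": ["Room1"]}
    ],
    Content={
        "TextList": [
            {"Locale": "en-US", "Value": "The fire drill starts in five minutes."}
        ]
    },
    TimeToLiveInSeconds=300,               # drop the announcement if not delivered in time
    ClientRequestToken=str(uuid.uuid4()),  # idempotency token
)
print(response["AnnouncementArn"])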
https://docs.aws.amazon.com/a4b/latest/ag/announcements.html
2020-07-02T23:09:37
CC-MAIN-2020-29
1593655880243.25
[]
docs.aws.amazon.com
Step 1: Set up prerequisites Before you begin setting up an Amazon Redshift cluster, make sure that you complete the following prerequisites in this section: If you don't already have an AWS account, you must sign up for one. If you already have an account, you can skip this prerequisite and use your existing account. Open . Follow the online instructions. Part of the sign-up procedure involves receiving a phone call and entering a verification code on the phone keypad. Determine firewall rules As part of this tutorial, you specify a port when you launch your Amazon Redshift cluster. You also create an inbound ingress rule in a security group to allow access through the port to your cluster. If your client computer is behind a firewall, you need to know an open port that you can use. This open port enables you to connect to the cluster from a SQL client tool and run queries. If you do not know this, you should work with someone who understands your network firewall rules to determine an open port in your firewall. Though Amazon Redshift uses port 5439 by default, the connection doesn't work if that port is not open in your firewall. You can't change the port number for your Amazon Redshift cluster after it is created. Thus, make sure that you specify an open port that works in your environment during the launch process.
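Once the cluster exists, a quick way to verify that the chosen port is actually reachable from your client machine is a plain TCP connection test. The sketch below uses a placeholder endpoint; substitute your cluster's endpoint and the port you selected at launch.

import socket

HOST = "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com"  # placeholder endpoint
PORT = 5439  # the port you chose when launching the cluster

try:
    # A successful TCP connection means the firewall allows the port through.
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"Port {PORT} on {HOST} is reachable.")
except OSError as err:
    print(f"Cannot reach {HOST}:{PORT} - check your firewall rules ({err}).")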
https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-prereq.html
2020-07-02T23:23:52
CC-MAIN-2020-29
1593655880243.25
[]
docs.aws.amazon.com
In this video, I’ll show you how you can specify at which venue (or at which venues) your event will take place. I’ll also show you how you can list one or more rooms at each venue, as well as how you can set capacities and default setups for each room.
https://docs.grenadine.co/specifying-a-venue-for-your-event.html
2020-07-02T22:27:48
CC-MAIN-2020-29
1593655880243.25
[]
docs.grenadine.co
ObjectBox comes with full Kotlin support including data classes. And yes, it supports RxJava and reactive queries without RxJava.

Yes. ObjectBox comes with strong relation support and offers features like “eager loading” for optimal performance.

The ObjectBox plugin only looks for entities in the current module; it does not search library modules. However, you can have a separate database (MyObjectBox file) for each module. Just make sure to pass different database names when building your BoxStore.

It depends. Internally and in the C API, ObjectBox does zero-copy reads. Java objects require a single copy only. However, copying data is only a minor factor in overall performance. In ObjectBox, objects are POJOs (plain objects), and all properties will be properly initialized. Thus, there is no run time penalty for accessing properties, and values do not change in unexpected ways when the database updates.

No. The objects you get from ObjectBox are POJOs (plain objects). You are safe to pass them around in threads.

ObjectBox supports Android 4.0.3 (API level / minimum SDK 15) and above and works on most devices (armeabi-v7a, arm64-v8a, x86 and x86_64). It works with Java and Kotlin projects. ObjectBox also runs on Linux (64 bit), Windows (64 bit), macOS and iOS with support for Kotlin, Java, Go, C, Swift and Python.

Yes, you can use ObjectBox on the desktop/server side. Contact us for details if you are interested in running ObjectBox in client/server mode or containerized!

Generally speaking: Yes. You can run the ObjectBox database on any IoT device that runs Linux. We also offer Go and C APIs.

If you only do a rename on the language level, ObjectBox will by default remove the old and add a new entity/property. To do a rename, you must specify the UID.

The Google Play download size increases by around 2.0 MB (checked for ObjectBox 2.5.0) as a native library for each supported architecture is packaged. If you build multiple APKs split by ABI or use Android App Bundle it only increases around 0.5 MB. Tip: Open your APK or AAB in Android Studio and have a look at the lib folder to see the raw file size and download size added.

The raw file (APK or AAB) size increases around 5.3 MB. This is because ObjectBox adds extractNativeLibs="false" to your AndroidManifest.xml as recommended by Google. This turns off compression. However, this allows Google Play to optimally compress the APK before downloading it to each device (see download size above) and reduces the size of your app updates (on Android 6.0 or newer). Read this Android developers post for details. It also avoids issues that might occur when extracting the libraries. If you would rather have a smaller APK instead of smaller app downloads and updates (e.g. when distributing in other stores) you can override the flag in your AndroidManifest.xml (not recommended, as it increases app update size):

<application
    ...
    android:extractNativeLibs="true"
    tools:replace="android:extractNativeLibs"
    ...
</application>

More importantly, ObjectBox adds little to the APK method count since it’s mostly written in native code.

Yes. ObjectBox stores all data in a single database file. Thus, you just need to prepare a database file and copy it to the correct location on the first start of your app (before you touch ObjectBox’s API). There is an experimental initialDbFile() method when building BoxStore. Let us know if this is useful!

The database file is called data.mdb and is typically located in a subdirectory called objectbox (or any name you passed to BoxStoreBuilder).
On Android, the DB file is located inside the app’s files directory inside objectbox/objectbox/. Or objectbox/<yourname> if you assigned the custom name <yourname> using BoxStoreBuilder.

To reclaim disk space, close() the BoxStore and delete the database files using BoxStore.deleteAllFiles(objectBoxDirectory). To avoid having to close BoxStore, delete files before building it, e.g. during app start-up.

// If BoxStore is in use, close it first.
store.close();
BoxStore.deleteAllFiles(new File(BoxStoreBuilder.DEFAULT_NAME));
// TODO Build a new BoxStore instance.

BoxStore.removeAllObjects() does not reclaim disk space. It keeps the allocated disk space so that it returns fast and to avoid the performance hit of having to allocate the same disk space when data is put again.

Questions not related to Java, Kotlin or Android are answered in the general ObjectBox FAQ. If you believe you have found a bug or missing feature, please create an issue. If you have a usage question regarding ObjectBox, please post on Stack Overflow.
https://docs.objectbox.io/faq
2020-07-02T20:56:36
CC-MAIN-2020-29
1593655880243.25
[]
docs.objectbox.io
Getting Started with Sysdig Secure Install the Agent Installing the agent on your infrastructure allows Sysdig to collect data for monitoring and security purposes.. Secure Your Pipeline. Set up a Repository Scanning Alert By integrating scan results with any of the notification channels provided by Sysdig, users can swiftly receive actionable updates reporting on the output of the image analysis process. Repository alerts can then be customized using different trigger conditions depending on the registry/repo scope. Secure Your Runtime Environment Set up a Runtime Scanning Alert One of the most actionable alerts a user can set up is to detect if an existing runtime image is impacted by newly discovered vulnerabilities. These alerts can be scoped using container and Kubernetes metadata so the right teams are notified as soon as the image falls out of compliance.. Basic Onboarding This section describes onboarding tips for Sysdig Secure (on-premises). Access the Sysdig Secure Interface To access the Sysdig Secure interface, the Sysdig agent must be installed, and a core admin user must be created during the Welcome Wizard. For installation instructions, refer to the Agent Installation documentation. Note Subsequent users must also have user credentials defined, either through Sysdig Secure, or through an integrated authentication tool. For more information on user creation, refer to the User and Team Administration documentation. Explore the Sysdig Secure Interface The Sysdig Secure UI is comprised of the following modules: There are a couple of potential starting points, depending on preferred workflow, and whether the Sysdig Secure implementation or the user is new: For new Sysdig Secure environments, navigate to the Policies module to start configuring the policies and rules required for the environment. For new Sysdig Secure users, navigate to the Policy Events module to review the current state of the environment.
https://docs.sysdig.com/en/getting-started-with-sysdig-secure.html
2020-07-02T22:31:04
CC-MAIN-2020-29
1593655880243.25
[array(['image/uuid-ff282df0-1680-caba-86e9-4d2ec4097936-en.png', 'Onboarding_Screenshot.png'], dtype=object) array(['image/uuid-90923be9-3370-92d4-6b83-7580ec880b37-en.png', 'Secure_landing.png'], dtype=object) ]
docs.sysdig.com
Zoom: press the key > Zoom Mode.
- Do one of the following:
  - To zoom in, on the trackpad, slide your finger up.
  - To zoom to a point on a map, press the key > Zoom to Point.
  - To zoom out, on the trackpad, slide your finger down.
http://docs.blackberry.com/en/smartphone_users/deliverables/38106/1486471.jsp
2015-08-28T00:22:50
CC-MAIN-2015-35
1440644060103.8
[]
docs.blackberry.com
Pages that link to "User:Wgviana"

The following pages link to User:Wgviana:
- User talk:Dextercowley
- User talk:Chris Davenport
- Template:Extension DPL
- User:Tom Hutchison/DPL Advanced/Feeds
- JDOC:Editors
https://docs.joomla.org/index.php?title=Special:WhatLinksHere&target=User%3AWgviana
2015-08-28T01:19:04
CC-MAIN-2015-35
1440644060103.8
[]
docs.joomla.org
Crate git_warp_time

Modules
Structs
Functions
- Return a repository discovered from the current working directory or $GIT_DIR settings.
- Iterate over either the explicit file list or the working directory files, filter out any that have local modifications, are ignored by Git, or are in submodules, and reset the file metadata mtime to the commit date of the last commit that affected the file in question.
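For readers unfamiliar with the idea, the effect the crate describes — resetting a file's mtime to the date of the last commit that touched it — can be approximated with plain git commands. The sketch below is not the crate's implementation (which works through the Git library directly and also skips modified, ignored, and submodule files); it only illustrates the core operation.

import os
import subprocess

def warp_mtime(path):
    """Set the file's mtime to the timestamp of the last commit that changed it."""
    out = subprocess.check_output(
        ["git", "log", "-1", "--format=%ct", "--", path], text=True
    ).strip()
    if out:  # empty output means the file has no commit history
        commit_time = int(out)
        os.utime(path, (commit_time, commit_time))

# Example: warp every file tracked by git in the current repository.
tracked = subprocess.check_output(["git", "ls-files"], text=True).splitlines()
for f in tracked:
    warp_mtime(f)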
https://docs.rs/git-warp-time/latest/git_warp_time/
2022-05-16T12:34:44
CC-MAIN-2022-21
1652662510117.12
[]
docs.rs
You are viewing documentation for Kubernetes version: v1.20 Kubernetes v1.20 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version. ShareThis: Kubernetes In Production.
https://v1-20.docs.kubernetes.io/blog/2016/02/sharethis-kubernetes-in-production/
2022-05-16T12:38:00
CC-MAIN-2022-21
1652662510117.12
[]
v1-20.docs.kubernetes.io
Changelog for package flexbe_widget_widget] Robustify action server when spammed with failing behaviors Merge remote-tracking branch 'origin/master' into develop Contributors: Philipp Schillinger 1.1.1 (2018-12-18) Merge remote-tracking branch 'origin/master' into develop Contributors: Philipp Schillinger 1.1.0 (2018-12-01) Merge branch 'develop' Merge branch 'feature/flexbe_app' into develop [flexbe_widget] Fix: Remove launch install rule Update maintainer information [flexbe_widget] Remove deprecated Chrome app files State logger is optional and off by default [flexbe_widget] Update create_repo script to rename behaviors package Merge remote-tracking branch 'origin/develop' Merge remote-tracking branch 'origin/develop' into feature/flexbe_app [flexbe_widget] be_launcher ignores standard roslaunch args Merge remote-tracking branch 'origin/develop'/tudarmstadt' into develop Conflicts: flexbe_widget/src/flexbe_widget/behavior_action_server.py Merge remote-tracking branch 'origin/develop' Conflicts: flexbe_onboard/src/flexbe_onboard/flexbe_onboard.py [flexbe_widget] Launcher accepts behavior params via command line [flexbe_widget] Use behavior lib for action server behavior action server: fixed race condition between execute_cb and status_cb - sorted member variable initialization before subscriber and action server startup - moved preempt check to allow preempting behavior even if behavior did not start for some reason behavior action server: allow clean exit on ros shutdown [flexbe_widget] Updated minimum ui version to flexbe_app version [flexbe_widget] Marked chrome launcher as deprecated [flexbe_onboard] [flexbe_widget] Removed old launch files [flexbe_widget] Updated create_repo to initialize new layout Find behaviors by export tag and execute via checksum [flexbe_widget] revert action server autonomy level [flexbe_widget] Reverted App ID in flexbe_app script Merge branch 'automatic_reload' into develop behavior action server: remove "special" autonomy level "255" so behaviors will enable ros control by default [flexbe_widget] Removed debugging launchfile Merge pull request #26 from jgdo/automatic_reload Automatic reload automatic reload of imported behaviors upon sm creation fixed timing issue on behavior engine start by waiting for engine status updated flexbe_app start script to allow for locally set app-id Merge remote-tracking branch 'origin/develop' [flexbe_widget] Catch missing behavior package and give helpful error message Merge remote-tracking branch 'origin/master' into develop Merge remote-tracking branch 'origin/master' Merge remote-tracking branch 'origin/develop' [flexbe_widget] Set correct behavior outcome in action result Merge branch 'develop' [flexbe_widget] Print warning if new repo is not on pkg path (address #13 ) Merge remote-tracking branch 'origin/master' into develop Merge pull request #10 from team-vigir/cnurobotics Fix #11 Merge pull request #9 from icemanx/master Added behavior stopping feature for behavior action server (resolve #8 ) Added behavior stopping feature for behavior action server. 
Merge branch 'master' into cnurobotics Merge remote-tracking branch 'origin/develop' [flexbe_widget] Only require sudo in create_repo if pkg needs to be installed (resolve #4 ) Merge branch 'master' into cnurobotics Merge remote-tracking branch 'origin/develop' [flexbe_widget] Use behavior prefix in clear_cache script modify to read and allow parameterizing default behaviors_package in launch files [flexbe_widget] Fix #3 : consider correct ros distro in create_repo Merge remote-tracking branch 'origin/develop' [flexbe_widget] Fix #2 Provide option to set userdata input on behavior action calls Merge remote-tracking branch 'origin/develop' into feature/pause_repeat [flexbe_widget] Fixed handling of YAML parameters [flexbe_widget] Check UI version against a minimum required one [flexbe_widget] Accept rosbridge port as launch arg [flexbe_widget] Notify GUI when behavior to launch is not found Merge remote-tracking branch 'origin/feature/multirobot' [FlexBE] Updated App to 0.21.4 * Added support for namespace via param Merge remote-tracking branch 'origin/master' into feature/multirobot Conflicts: flexbe_core/src/flexbe_core/core/monitoring_state.py flexbe_core/src/flexbe_core/core/operatable_state.py [flexbe_widget] Correctly resolve file params of embedded behaviors [flexbe_widget] Behavior action server now correctly detects errors on behavior start [flexbe_onboard] [flexbe_widget] Improved support for yaml files Changed absolute topic references to relative [flexbe_widget] Added a simple action server for executing a behavior [flexbe_widget] Added references to the example states in create_repo script [flexbe_widget] Added a script to create a new project repo [flexbe_widget] Use environment variable for behaviors package in behavior launcher as well Removed some old and unused project files [flexbe_widget] Added input package to ocs launch file Initial commit of software Contributors: Bolkar Altuntas, David Conner, Dorian Scholz, DorianScholz, Mark Prediger, Philipp, Philipp Schillinger
http://docs.ros.org/en/lunar/changelogs/flexbe_widget/changelog.html
2022-05-16T12:53:32
CC-MAIN-2022-21
1652662510117.12
[]
docs.ros.org
Create a Federated Query - 2 minutes to read On this wizard page, you can create federated queries based on data from other data sources. Note that initial data sources can contain data at the root level (e.g., an Excel data source) or have one or more queries (e.g., a SQL data source). Include Data into Separate Queries Enable check boxes for data fields, queries and/or entire data sources. The selected items are included in data federation as separate queries based on initial data source queries. The wizard specifies query names as follows: - If the initial data source contains one or more queries (SQL data sources), the federated query name consists of the data source name and query name separated by an underscore. - If the initial data source contains data at the root level (Excel data sources), the federated query name is equivalent to the data source name. Right-click the federated data source and choose Manage Relations. This invokes the Master-Detail Relation Editor you can use to specify a master-detail relationship between the separate queries. Combine Data into a Single Query To combine data from multiple data sources into a single query, click Add Query. This invokes the Query Builder designed to federated data sources. You can use the Join, Union, Union All, and Transform query types to combine data. Refer to the following articles for more information on these query types. - Bind a Report to a Join-Based Federated Data Source - Bind a Report to a Union-Based Federated Data Source - Bind a Report to a Transformation-Based Data Source Specify Master-Detail Relationships Click Manage Relations to define master-detail relationships between two or more queries. In the invoked editor, drag and drop the key field from the master query to the detail query. Once the wizard is complete, you can see the master-detail hierarchy in the Field List. For more information, refer to the following guide: Bind a Report to a Federated Master-Detail Data Source.
https://docs.devexpress.com/XtraReports/400938/visual-studio-report-designer/data-source-wizard/connect-to-a-federated-data-source/create-a-federated-query
2022-05-16T12:36:02
CC-MAIN-2022-21
1652662510117.12
[]
docs.devexpress.com
RPC version mismatch error while accessing inSync Share file on web browser This article applies to: - OS: Windows, Mac - Product edition: inSync Cloud Problem description While accessing a shared folder through inSync Web, an RPC version mismatch error is displayed on the browser. Cause The above error is displayed on the browser when the shared folder being accessed through inSync Web contains more than 5000 files. Resolution To resolve this error: - Ensure the parent folder shared through inSync Web contains fewer than 5000 files. - If the shared folder contains more than 5000 files, distribute them across multiple folders before sharing them through inSync Web. A quick way to count the files in a folder before sharing is sketched below.
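The snippet below is a generic illustration, not a Druva tool, for checking whether a folder tree exceeds the 5000-file limit mentioned above before you share it through inSync Web.

import os
import sys

LIMIT = 5000  # maximum file count per shared folder, per the article above

def count_files(root):
    """Count all files beneath root, recursing into subfolders."""
    return sum(len(files) for _, _, files in os.walk(root))

folder = sys.argv[1] if len(sys.argv) > 1 else "."
total = count_files(folder)
status = "OK to share" if total < LIMIT else "split into multiple folders first"
print(f"{folder}: {total} files - {status}")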
https://docs.druva.com/Knowledge_Base/inSync/Troubleshooting/RPC_version_mismatch_error_while_accessing_inSync_Share_file_on_web_browser
2022-05-16T11:31:25
CC-MAIN-2022-21
1652662510117.12
[]
docs.druva.com
Prepared Template When executing a query operation on the space, there’s an overhead incurred by translating the query to an internal representation (in object templates the properties values are extracted using reflection, in SQLQuery the expression string is parsed to an expression tree). If the same query is executed over and over again without modification, that overhead can be removed by using prepared templates. The ISpaceProxy interface provides a method called Snapshot which receives a template or query , translates it to an internal XAP query structure and returns a reference to that structure as IPreparedTemplate<T>. That reference can then be used with any of the proxy’s query operations to execute queries on the space in a more efficient manner, since there’s no need to translate or parse the query. In previous versions the Snapshot() method was also used as a workaround for using SQLQuery with blocking operations. Starting 8.0 SQLQuery supports blocking operations out of the box so that workaround is no longer required. Example Use ISpaceProxy.Snapshot to create a prepared template from an object template or a SqlQuery. Creating a prepared template from an object Person template= new Person(); template.Age = 21; IPreparedTemplate<Person> preparedTemplate = proxy.Snapshot(template); Creating a prepared template from SqlQuery SqlQuery<Person> query = new SqlQuery<Person>(personTemplate, "Age >= ?"); query.SetParameter(1, 21); IPreparedTemplate<Person> preparedTemplate = proxy.Snapshot(query); Using the ISpaceProxy.Snapshot method with complex SQL queries is not supported. For more information see simple SQL queries. After creating the prepared template, it can be passed as a template to the Read, Take, ReadMultiple, TakeMultiple, Count and Clear operations, as well as a template when registering for notification. Taking an object from the space using the prepared template Person person = proxy.Take(preparedTemplate);
https://docs.gigaspaces.com/xap/12.0/dev-dotnet/query-prepared-template.html
2022-05-16T12:55:58
CC-MAIN-2022-21
1652662510117.12
[]
docs.gigaspaces.com
KnightsDefiDocuments Search… KnightsDefiDocuments Knights DeFi Overview Tokenomics Farms & Pools Battlefield King's Chance (Lottery) The Queen's Gallery (NFTs) SHILLING Token Litepaper Roadmap Socials Links GitBook SHILLING Token Litepaper Shilling Token Overview $SHILLING is a Secure Moonrat Token fork with bug fixes and additional anti-bot / anti-whale features added. Earn claimable BNB by holding! Earn SHILLING by participating in Knights DeFi's unique Battlefield feature. Use Shilling to play games and win tokens! Hold for Claimable BNB and Reflect, use to play games, buy NFTs. Join Knights DeFi and see what we're doing. This token is a part of our evolving ecosystem. Tokenomics Uses the Pancake V2 Router to lock liquidity and for buying/selling. 2 Trillion total supply 50% burn (1 Trillion) Of the remaining 50% (1 Trillion): 25% Battlefield rewards (250 Billion) Emitted at a rate of 100000 SHILLING per block shared amongst Battlefield Participants. (2.8 Billion per day, roughly 90 days of supply not taking reflect into account) 2.5% Marketing (25 Billion) 2.5% Treasury (25 Billion) 70% Liquidity (700 Billion with 5 BNB for Fair Launch) - 95% LP Tokens burned forever, 5% LP Tokens Held for Treasury 10% Total Tax (10%+ Slippage a must) 2% Reflect Tax 8% Liquidity Tax (4% to Liquidity, 4% to BNB pool) Max Transaction at launch: 1 Billion Tokens (0.05% of total supply, .1% of circulating supply) (Will be increased to 5 Billion (0.5% of circulating supply) 30 minutes after launch) - Intended to keep bots at bay during fair launch. Min Liquidity Lock Amount: 500 Million Tokens Each purchase increases BNB Claim Timer based on purchase token amount relative to existing balance, up to 1 full claim cycle (1 day for 2 weeks, 3 days thereafter) if more than doubling holdings. BNB is claimable daily for the first two weeks, then every 3 days afterwards. This will be exposed through the front page UI @ . The pool is replenished via sales, and will also be replenished when people use the tokens for games as a % of the total. Battlefield will be replenished by Reflect rewards. Anti-Bot / Anti-Whale / Safety Measures .1% of circulating supply available for purchase per txn at launch (limit will be increased to .5% of circulating supply after 30 minutes) 30 second delays between buys (but not sells) for the first 30 minutes of launch (limit will be completely lifted and cannot be reintroduced shortly after launch) Contract Activation done with a single function. Early buyers will get a better price on a max txn, but will not be able to buy multiples, thus most people will be able to buy up to 1 Billion tokens at launch. Launch Limits will be lifted and contract will be time-locked for 5 days immediately following launch, all fees are unchangeable once the contract is activated to increase overall contract safety. LP Tokens generated via Locking Liquidity will be sent to address(0) while in time-lock. I will only remove the time-lock to make changes very rarely, with transparency, and only when adding new features. Selling or transferring ALL SHILLING tokens will cause the BNB claim timer to be moved out 50 years. Purchases and sales can still be done, but the user will be unable to claim BNB again. If you want to claim BNB later, use a different wallet. This is an anti-whale panic sale feature. Suggestion: Don't sell your entire stack or try to swing trade. 
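To make the tax mechanics above concrete, here is a small worked example in Python. It only illustrates the percentages quoted in this litepaper (10% total tax: 2% reflect, 4% liquidity, 4% BNB pool) and is not contract code.

def shilling_fee_breakdown(amount: int) -> dict:
    """Split a transfer according to the 10% total tax: 2% reflect, 4% liquidity, 4% BNB pool."""
    reflect = amount * 2 // 100
    liquidity = amount * 4 // 100
    bnb_pool = amount * 4 // 100
    received = amount - reflect - liquidity - bnb_pool
    return {"reflect": reflect, "liquidity": liquidity, "bnb_pool": bnb_pool, "received": received}

# Example: transferring 1,000,000 SHILLING
print(shilling_fee_breakdown(1_000_000))
# {'reflect': 20000, 'liquidity': 40000, 'bnb_pool': 40000, 'received': 900000}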
https://docs.knightsdefi.com/shilling
2022-05-16T12:45:09
CC-MAIN-2022-21
1652662510117.12
[]
docs.knightsdefi.com
Hi, I have noticed a little problem with the proximity join of the Logitech MTR. In fact, the Teams Room's proximity join is not accessible from the two places furthest from the screen, which are more than 4.5 meters away from the MTR NUC. The computers at these two places cannot detect the signal and therefore cannot connect via proximity join. Do you have a solution to increase the range of the proximity join without moving the MTR NUC? Thank you in advance. Best regards, Essia
https://docs.microsoft.com/en-us/answers/questions/29494/how-can-i-increase-the-range-of-proximity-join-on.html
2022-05-16T13:26:50
CC-MAIN-2022-21
1652662510117.12
[]
docs.microsoft.com
Installing MLReef on an offline server The best way to run MLReef on your own on-premises infrastructure is the MLReef Nautilus package. Nautilus is a single docker image containing everything necessary to create machine learning projects and run ML workloads. Nautilus contains: - MLReef Management Service - Postgres - Gitlab for hosting Git repositories - Gitlab Runners for running Machine Learning workloads - API Gateway Installation Two steps need to be done in order to run MLReef Nautilus locally on a server which has no internet access. - In the first step, run bin/build-export-nautilus-offline to pull and tar all required images in one place. Then copy the tar files to the offline server at a location of your choice. - In the second step, run bin/build-run-nautilus-offline with the tar files' location as an argument. This will start up a local instance of MLReef with persistent docker volumes named mlreef-opt, mlreef-etc, and mlreefdb-opt containing all user data on the offline server. The installation on an online server: git clone [email protected]:mlreef/mlreef.git bin/build-export-nautilus-offline Copy the tar files from mlreef-images-tar to the offline server. Copy the bin/build-run-nautilus-offline script to the offline server. On the offline host: bin/build-run-nautilus-offline -d $THE_PATH_OF_TAR_FILES -s $PIP_SERVER (optional) Example: bin/build-run-nautilus-offline -d mlreef-images-tar bin/build-run-nautilus-offline -d mlreef-images-tar -s localhost:10010/simple bin/build-run-nautilus-offline -d mlreef-images-tar -s This will start up a local instance of MLReef with persistent docker volumes named mlreef-opt, mlreef-etc, and mlreefdb-opt containing all user data. The container comes up with a default runner running on the same docker network on localhost. In order to run MLReef Nautilus locally with local volume binding, you will have to replace the docker volumes in the docker run command of the bin/build-run-nautilus-offline script with persistent data volumes. Example: --volume /root/mlreef-gitlab-opt:/var/opt/gitlab \ --volume /root/mlreef-gitlab-etc:/etc/gitlab \ --volume /root/mlreefdb-opt:/var/opt/mlreef \ Notes for the pip server in offline mode: If the pip server is running on the same offline host as MLReef, localhost needs to be used for the pip server URL, e.g. localhost:10010/simple. If the pip server is running on another host on the internal network, a DNS host entry needs to be configured for docker. Example for Ubuntu: - Get the DNS server IP: $ nmcli dev show | grep 'IP4.DNS' IP4.DNS[1]: 192.168.0.1 - Edit 'dns' in /etc/docker/daemon.json (create this file if it does not already exist). Multiple DNS server IPs can be added, separated by commas. { "dns": ["192.168.0.1"] } - Restart docker: $ sudo service docker restart Now, the pip server host should be accessible from the MLReef service as well. Installing a pypi server Install pypiserver with this command: pip install pypiserver mkdir ~/packages # put offline python packages into this directory. Copy some packages into your ~/packages folder and then get your pypiserver up and running: python3 -m pip download -d ~/packages -r <requirements file> Start the server with this command (you can choose a different port number): pypi-server -p 10010 ~/packages & # Will listen to all IPs. From the client computer, type this to test if the pypi server is working: pip install --extra-index-url <package-name>
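Instead of passing --extra-index-url on every install, client machines on the offline network can also point pip at the local index permanently through pip's configuration file. The snippet below is a generic example of a standard pip.conf; the host name is a placeholder and must match wherever your pypi server actually runs, and trusted-host is only needed because the local index is served over plain HTTP.

# /etc/pip.conf (system-wide) or ~/.config/pip/pip.conf (per user)
[global]
extra-index-url = http://<pip-server-host>:10010/simple
trusted-host = <pip-server-host>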
https://docs.mlreef.com/80-on-prem/1-MLReef_offline.md/
2022-05-16T12:37:30
CC-MAIN-2022-21
1652662510117.12
[]
docs.mlreef.com
airshipctl document pull¶ Airshipctl command to pull manifests from remote git repositories Synopsis¶ The remote manifests repositories as well as the target path where the repositories will be cloned are defined in the airship config file. By default the airship config file is initialized with the repository “ as a source of manifests and with the manifests target path “$HOME/.airship/default”. airshipctl document pull [flags] Examples¶ Pull manifests from remote repos # airshipctl document pull For the below sample airship config file, it will pull from remote repository where URL mentioned to the target location /home/airship with manifests->treasuremap->repositories->airshipctl->checkout options branch, commitHash & tag mentioned in manifest section. In the URL section, instead of a remote repository location we can also mention already checkout directory, in this case we need not use document pull otherwise, any temporary changes will be overwritten. >>>>>>Sample Config File<<<<<<<<< cat ~/.airship/config apiVersion: airshipit.org/v1alpha1 contexts: ephemeral-cluster: managementConfiguration: treasuremap_config manifest: treasuremap target-cluster: managementConfiguration: treasuremap_config manifest: treasuremap currentContext: ephemeral-cluster kind: Config managementConfiguration: treasuremap_config: insecure: true systemActionRetries: 30 systemRebootDelay: 30 type: redfish manifests: treasuremap: inventoryRepositoryName: primary metadataPath: manifests/site/eric-test-site/metadata.yaml phaseRepositoryName: primary repositories: airshipctl: checkout: branch: "" commitHash: f4cb1c44e0283c38a8bc1be5b8d71020b5d30dfb force: false localBranch: false tag: "" url: primary: checkout: branch: "" commitHash: 5556edbd386191de6c1ba90757d640c1c63c6339 force: false localBranch: false tag: "" url: targetPath: /home/airship permissions: DirectoryPermission: 488 FilePermission: 416 >>>>>>>>Sample output of document pull for above configuration<<<<<<<<< pkg/document/pull/pull.go:36: Reading current context manifest information from /home/airship/.airship/config (currentContext:) pkg/document/pull/pull.go:51: Downloading airshipctl repository airshipctl from airshipctl.git into /home/airship (url: & targetPath:) pkg/document/repo/repo.go:141: Attempting to download the repository airshipctl pkg/document/repo/repo.go:126: Attempting to clone the repository airshipctl from airshipctl.git pkg/document/repo/repo.go:120: Attempting to open repository airshipctl pkg/document/repo/repo.go:110: Attempting to checkout the repository airshipctl from commit hash ##### pkg/document/pull/pull.go:51: Downloading primary repository treasuremap from treasuremap.git into /home/airship (repository name taken from url path last content) pkg/document/repo/repo.go:141: Attempting to download the repository treasuremap pkg/document/repo/repo.go:126: Attempting to clone the repository treasuremap from /home/airship/treasuremap pkg/document/repo/repo.go:120: Attempting to open repository treasuremap pkg/document/repo/repo.go:110: Attempting to checkout the repository treasuremap from commit hash ##### Options¶ -h, --help help for pull -n, --no-checkout no checkout is performed after the clone is complete. Options inherited from parent commands¶ --airshipconf string path to the airshipctl configuration file. Defaults to "$HOME/.airship/config" --debug enable verbose output SEE ALSO¶ airshipctl document - Airshipctl command to manage site manifest documents
https://docs.airshipit.org/airshipctl/cli/document/airshipctl_document_pull.html
2022-05-16T12:07:52
CC-MAIN-2022-21
1652662510117.12
[]
docs.airshipit.org
- About Bamboo product Release Numbers - About Permissions in TTM - A license error occurs after successful license activation - About Time Tracking Modes - About the Installation Files - How to activate a Bamboo Site Collection Feature - Activating Your Bamboo Product License Offline - Activating Your Bamboo Product License - Add a view to a TTM report - Create Timesheet tasks for TTM - Adding tasks using a PM Central project tasks list - Approving and Rejecting Timesheets - Assigning Cost Codes to Resources -? - Change the skin of a TTM site - Choosing which farm servers to license - Complementary Products for Time Tracking and Management - Add or remove columns from TTM's Timesheet Entry display - Configure Cost Tracking for PMC - Configure Working Hours in TTM - Enable and configure Cost Tracking Options in TTM - Configuring TTM Report Center Permissions - Create a tasks rollup for TTM - Create Cost Codes for TTM - Create and Manage Timesheet Reporting Periods - Create a new TTM site - CSS Overview - Add Administrative Time Categories in TTM - Deleting Reporting Periods - Deploying a Bamboo solution to a new Web Application - Determine the Labor Rates that will be used in TTM - Enter Time - Error Log Files - ERRMSG: Failed to find the XML file at location 15 or 14 or ''12TemplateFeaturesBamboo.UI.Licensingfeature.xml'' - Highlights of Time Tracking and Management - Enter Time in TTM - Submit Timesheets for Approval - - Localize Bamboo Applications or Custom Columns - Location of Installation - Managing alerts in TTM - - Modify an existing view of TTM Report - Modify TTM column display names - Overview of moving a Bamboo Product License - My product was licensed yesterday and today there is an error; what happened? - Overview of the new Bamboo product Logging - Overview of Bamboo Product Trials - Overview of Licensing and Product Activation - Overview of Suite or Pack Licensing - Overview of the Installation/Setup Program - Overview of the Updated Installation Process for Bamboo components - Overview of Time Tracking and Management - Overview of TTM's Timesheet Entry Web Part - Best Practices for a successful install - Release Notes for Time Tracking & Management - Remove a view from a report - How to modify a web.config file using PowerShell - Required Installation Permissions - What Site Collection Features are associated with my Bamboo product? - System Requirements - The Bamboo Web License Manager does not show up in Central Administration. How do I fix that? - TTM Administrative Time data-source options - Use a Bamboo List Rollup as the Tasks data source in TTM - Configure TTM's Timesheet Entry Web Part's resource data source - Use a SharePoint list as the Tasks data source in TTM - Change the appearance of the time entry grid display in TTM - Timesheet data is missing after changing the Timesheet Tasks Data Source - Add items to TTM's Timesheet Resources list - Troubleshoot Problems with Deploying Farm Solutions - Configure Time Tracking and Management's Approval Center - TTM Archive - Using the Auto Approved Timesheet group in TTM - Using the Bulk Approvers group in TTM - TTM Configuration Decision Map - TTM Configuration Home Page - Overview of the TTM Control Panel - TTM Cost Tracking Checklist - TTM Cost Tracking Columns - Use the TTM Export Timesheets options - TTM Report Center - Configure the TTM Send Message Web Part - TTM Time Tracking Checklist - What alerts come with TTM? 
- How to know what TTM version you are using - Using the TTM Configuration Lists - Using the Time Tracking and Management Upgrade Manager - Configuring TTM's User Profile Import Web Part - Uninstalling from SharePoint Central Administration - Uninstalling using the updated process - Upgrade to TTM 2.5 - Upgrading using the updated process - Upgrading your Bamboo Web Part - Use the provided .lic file to extend your product's trial period - Using the Setup program to Uninstall - View actual cost in PM Central - View reports in TTM Report Center - What is left behind after uninstalling? - What version of Telerik Components are deployed by Bamboo Products?
https://docs.bamboosolutions.com/document/topics-for-time-tracking-management/
2022-05-16T12:00:28
CC-MAIN-2022-21
1652662510117.12
[]
docs.bamboosolutions.com
Align Objects There are multiple ways Max can help you align objects. Auto Align Auto Align makes an educated guess as to how to align the selected objects vertically or horizontally. If all objects in the selection are below the top edge of the leftmost object, but not completely to the right of the right edge of the leftmost object, the objects are aligned vertically. Otherwise, they are aligned horizontally. - Select the objects you wish to align. - Choose Auto Align (command-Y) from the Arrange menu. Align You can explicity tell Max how you want your selected objects to be aligned. - Select the objects you wish to align. - Choose Top from the Align submenu under the Arrange menu.
https://docs.cycling74.com/max8/vignettes/aligning_objects
2022-05-16T11:57:52
CC-MAIN-2022-21
1652662510117.12
[]
docs.cycling74.com
DRaaS support matrix, prerequisites, and limitations Support Matrix (Phoenix AWS proxy 4.8.0 and later) If you have deployed backup proxy 4.8.0 and later, Phoenix DraaS supports failback on the following operating systems: Notes: - Windows: - Phoenix DraaS does not support failback on the FAT and FAT 32. - Phoenix DraaS does not support failback on the Extended partition on Windows. - Phoenix DraaS supports failback on the Dynamic disk. During failback, Phoenix DraaS converts the dynamic disk to basic disk. - Linux: - Linux RAID configuration is not supported. Supported AWS regions To know the AWS regions that Druva supports for disaster recovery, see the Downloads page. In addition, AWS provides other regions that Druva can support for disaster recovery. For more information on regions that AWS provides, see Global Cloud Infrastructure. To know more about the regions that Druva can support for disaster recovery, but are not listed in the previous list, contact Support. Supported AWS instance types The following instance types are supported for failover. The instance types are displayed based on the region you have selected and the instances supported by Druva. Phoenix DraaS prerequisites if you are using Phoenix AWS proxy version 4.8.0 or later Recommended instance types: While registering a Phoenix AWS proxy, it is recommended that you select an instance type of size/family with the following minimum configuration: 8 CPUs, 16 GB memory, 3500 Mbps bandwidth, 10,000 IOPS. For example: General purpose: m4.xlarge | m4.2xlarge | m4.4xlarge | m4.8xlarge | m4.10xlarge | m5.xlarge | m5.2xlarge | m5.4xlarge | m5.12xlarge | m5.24xlarge Compute optimized: c5.2xlarge | c5.4xlarge | c5.9xlarge | c5.18xlarge | c5n.large You have set up the respective AWS Cloud/GovCloud account. AWS account must have required permissions to create IAM Policy to delegate access to the AWS resources. AWS account must have permissions to create multiple S3 buckets for each region. Subnet entered in failover settings for each virtual machine should be able to reach AWS services like SQS and S3. Security group should be selected appropriately if SSH/RDP is required. All instances launched in public subnet must have a public IP address and the instances launched in private subnet must not have a public IP address. Elastic public IPs should be selected based on available Elastic IPs in the AWS account. Static private IP should be selected appropriately based on the subnet’s Classless Inter-Domain Routing (CIDR) block. IAM role should have the same policies as that available on Management Console. VM should have a minimum of 1 GB free space on the boot partition. If the VM has an outdated Kernel, it will be updated on failover. Subnet prerequisite for Phoenix AWS proxy deployment The Druva services should be available in the availability zone for the subnet that you intend to select during the Phoenix AWS proxy deployment. Perform the following tasks to determine if the chosen subnet can be selected for the Phoenix AWS proxy deployment or not. - Copy the Druva backup service name that corresponds to the region where you intend to deploythe Phoenix AWS proxy from the following table: - Log in to the AWS Management Console. Ensure you are logged into the region where you want to deploy the Phoenix AWS proxy From the search bar at the top, search for and navigate to the VPC service. In the navigation pane on the left, under VIRTUAL PRIVATE CLOUD, click Endpoints. - On the Endpoints page, click Create Endpoint. 
On the Create Endpoint page, under the Service category click Find service by name. - In the Service Name field, paste the service name that you copied in step 1. Click Verify. Note: After clicking Verify, you will see the service name not found error. This is because the Druva backup service hasn’t been created yet. It will be created as part of the Phoenix AWS proxy deployment. Ignore the message. In the VPC dropdown, select the VPC that you want to use for the Phoenix AWS proxy deployment. Ensure that the Druva service is available in the availability zone for the subnet that you intend to use. If the service is available in the availability zone, proceed with the Phoenix AWS proxy deployment. Else repeat the verification for an alternate VPC and a different subnet where the Druva service is available in the Availability Zone. - In the Create Endpoint page, click Cancel. Virtual machine prerequisites if you are using Phoenix AWS proxy version 4.8.0 or later Before you set up Phoenix DraaS, go through the following: In case of Linux virtual machines, if there are any devices in fstab mounted at the time of booting up, those devices will not be available in the EC2 instance created after failover. Virtual machine must not have multi-boot partitions. Virtual machine must not boot in the recovery mode. Disks should be online and formatted for Windows. Disks should be formatted and mounted for Linux. "/" and "/boot" should be on the same disk for Linux. Limitations (for Phoenix AWS proxy version 4.8.0 or later) AWS Limitations VMDK disk size should not be greater than 16TB - EBS volume supports up to 16TB. Phoenix AWS proxy Limitations - The update DR copy job does not support restore of VM having the number of VMDKs more than 21. To be able to successfully restore a VM with 21 VMDKs ensure that only one update DR copy job is running on Phoenix AWS proxy. Failover Limitations - DR failover does not support multiple NICs. Failover Instance will have only one NIC with Public-IP and Private-IP as configured in failover settings of the DR plan. - Failover for Windows 2008 non R2 will work only with t2 instance type. Failback Limitations - Phoenix DraaS does not allow you to resume failback on backup proxy with version earlier than 4.8.8_80128. - DR failbacks will fail if a VM had an NVMe controller or disk and was configured for DR. Windows Limitations - Windows Dynamic Disk as a boot partition is not supported. - Windows extended partitions are not supported. - Clustered drives are not supported: if your Windows Servers are a part of a cluster, exclude the clustered drive and perform the migration using the system drive only. Linux Limitations - Linux LVM with extended partitions is not supported. - RAID configurations are not supported.
https://docs.druva.com/Phoenix/040_Disaster_Recovery_as_a_Service_(DRaaS)/010_Introduction_to_DRaaS/050_DRaaS_Support_Matrix
2022-05-16T11:48:09
CC-MAIN-2022-21
1652662510117.12
[]
docs.druva.com
Upgrade Introduction This section walks through upgrading different components of the Magma topology. This specific page provides a high-level overview of the upgrade process. Orc8r-gateway compatibility Orc8r and gateways both follow a SemVer-like versioning of MAJOR.MINOR.PATCH, with the following constraints: - Gateway version must be <= Orc8r version - Gateway can diverge at most 1 minor version from Orc8r For example: - ✅ Gateway 1.4.0, Orc8r 1.4.0 - ✅ Gateway 1.3.0, Orc8r 1.4.0 - ✅ Gateway 1.3.3, Orc8r 1.4.0 - ✅ Gateway 1.3.3, Orc8r 1.4.10 - 🚨 Gateway 1.2.0, Orc8r 1.4.0 (more than 1 minor) - 🚨 Gateway 1.4.1, Orc8r 1.4.0 (gateway > Orc8r) Orc8r-gateway upgrade flow Based on these compatibilities, the following upgrade flow is prescribed: - Upgrade all gateways to Orc8r's current version - Upgrade Orc8r: 1 minor version and/or any number of patch versions - (Optional) Upgrade all gateways to Orc8r's current version
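The compatibility rules above are mechanical enough to check in a script. The following Python sketch is purely illustrative (it is not part of Magma) and treats differing major versions as incompatible, which is a simplifying assumption:

def is_compatible(gateway: str, orc8r: str) -> bool:
    """Check the two constraints above: gateway <= Orc8r, and at most 1 minor version apart."""
    g = tuple(int(x) for x in gateway.split("."))
    o = tuple(int(x) for x in orc8r.split("."))
    if g > o:
        return False  # gateway must not be newer than Orc8r
    if g[0] != o[0]:
        return False  # simplifying assumption: different major versions are treated as incompatible
    return o[1] - g[1] <= 1  # diverge at most 1 minor version

# The examples from the list above:
assert is_compatible("1.4.0", "1.4.0")
assert is_compatible("1.3.3", "1.4.10")
assert not is_compatible("1.2.0", "1.4.0")  # more than 1 minor
assert not is_compatible("1.4.1", "1.4.0")  # gateway > Orc8r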
https://docs.magmacore.org/docs/general/upgrade_intro
2022-05-16T12:50:50
CC-MAIN-2022-21
1652662510117.12
[]
docs.magmacore.org
Time series (Pro) This modifier plots the time evolution of one or more global attributes as a function of time. It can be used to study a quantity, which may be dynamically computed by OVITO's data pipeline on each animation frame, over the entire simulation trajectory. The modifier outputs the generated time series as a data table, with one row per frame of the loaded trajectory. The modifier lets you select one or more existing input attributes from the current pipeline output. For each of the selected input attributes, a separate time series will be generated to plot its evolution as a function of time. Furthermore, you can select a custom source attribute for the time axis. Its dynamic value will serve as the time axis for the plot, for instance if you would like to plot the time series as a function of simulation timestep number or physical simulation time instead of the default animation timestep. Note that the modifier steps through all frames of the simulation trajectory to compute the input attribute's current value at each frame. This can be a lengthy process depending on the extent of the trajectory and the dataset size. However, the sampling will happen in the background, and you can continue working with the program while the modifier is performing the computation. Once the time series is complete, you can press the button Show in data inspector to reveal the generated function plot in the data inspector of OVITO. See also ovito.modifiers.TimeSeriesModifier (Python API)
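For scripted workflows, the same functionality is available through the Python class referenced above (ovito.modifiers.TimeSeriesModifier). The sketch below is only a rough illustration: the input file name is a placeholder, and the modifier's constructor parameters (e.g. which attributes to sample) as well as the exact key of the generated data table are not shown here and should be taken from the TimeSeriesModifier Python reference.

from ovito.io import import_file
from ovito.modifiers import TimeSeriesModifier

# Load a simulation trajectory (placeholder file name).
pipeline = import_file("trajectory.dump")

# Append the modifier; configure which global attributes to sample via its
# parameters (see the TimeSeriesModifier reference for the exact names).
pipeline.modifiers.append(TimeSeriesModifier())

# Evaluating the pipeline triggers the sampling over all trajectory frames,
# which can take a while for long trajectories, as noted above.
data = pipeline.compute()

# The generated time series is exposed as a data table in the output collection.
print(list(data.tables.keys()))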
https://docs.ovito.org/reference/pipelines/modifiers/time_series.html
2022-05-16T11:49:04
CC-MAIN-2022-21
1652662510117.12
[]
docs.ovito.org
Installation with conda Install Installation via a conda environment circumvents compatibility issues when installing certain libraries. This guide assumes you have a working installation of conda. First, update conda: conda update -n base -c defaults conda Next, create a conda environment (we name this autolens to signify it is for the PyAutoLens install). The command below creates this environment with some of the bigger package requirements; the rest will be installed with PyAutoFit via pip: conda create -n autolens numba astropy scikit-image scikit-learn scipy Activate the conda environment (you will have to do this every time you want to run PyAutoLens): conda activate autolens
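Once the environment is active and the remaining packages have been installed via pip, a quick sanity check is to import the package from within the autolens environment. This is a generic sketch (the al alias is just a common convention, and if the package does not expose __version__ you can use pip show autolens instead):

# Run inside the activated conda environment.
import autolens as al

print(al.__version__)  # prints the installed PyAutoLens version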
https://pyautolens.readthedocs.io/en/latest/installation/conda.html
2022-05-16T11:20:51
CC-MAIN-2022-21
1652662510117.12
[]
pyautolens.readthedocs.io
Adding single-token nodes to a cluster Steps for adding nodes in single-token architecture clusters, not vnodes. Procedure - Calculate the tokens for the nodes based on your expansion strategy using the Token Generating Tool. - Install Cassandra and configure Cassandra on each new node. - If Cassandra starts automatically (Debian), stop the node and clear the data. - Configure cassandra.yaml on each new node: - auto_bootstrap: If false, set it to true. This option is not listed in the default cassandra.yaml configuration file and defaults to true. - cluster_name - listen_address/broadcast_address: Usually leave blank. Otherwise, use the IP address or host name that other Cassandra nodes use to connect to the new node. - endpoint_snitch - initial_token: Set according to your token calculations. CAUTION: If this property has no value, Cassandra assigns the node a random token range and results in a badly unbalanced ring. - seed_provider: Make sure that the new node lists at least one seed node in the existing cluster. - Start Cassandra on each new node in two-minute intervals with consistent.rangemovement turned on: - Package installations: To each bootstrapped node, add the following option to the /usr/share/cassandra/cassandra-env.sh file and then start Cassandra: JVM_OPTS="$JVM_OPTS -Dcassandra.consistent.rangemovement=true" - Tarball installations: $ bin/cassandra -Dcassandra.consistent.rangemovement=true (The locations of the cassandra.yaml, cassandra-rackdc.properties, and cassandra-topology.properties files depend on the type of installation.)
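If the Token Generating Tool is not at hand, evenly spaced initial tokens can also be computed with the well-known formula for the Murmur3Partitioner (token range -2^63 to 2^63-1). The Python sketch below produces a balanced ring for a given node count; treat it as an illustration only, since tokens for an expansion of an existing cluster depend on your expansion strategy, and double-check the values before setting initial_token.

def murmur3_initial_tokens(node_count: int):
    """Evenly spaced initial_token values for a ring using the Murmur3Partitioner."""
    return [((2**64 // node_count) * i) - 2**63 for i in range(node_count)]

# Example: a 4-node ring
for i, token in enumerate(murmur3_initial_tokens(4)):
    print(f"node {i}: initial_token = {token}")
# node 0: initial_token = -9223372036854775808
# node 1: initial_token = -4611686018427387904
# node 2: initial_token = 0
# node 3: initial_token = 4611686018427387904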
https://docs.datastax.com/en/cassandra-oss/3.x/cassandra/operations/opsAddRplSingleTokenNodes.html
2022-05-16T10:55:29
CC-MAIN-2022-21
1652662510117.12
[]
docs.datastax.com
fiskaltrust.Middleware 1.3.6 (Germany) September 14, 2020 This release adds the functionality to locally generate our preview version of the DSFinV-K export, and the new production-ready Diebold Nixdorf SCU. Additionally, we fixed several issues and added some smaller improvements, mostly related to special receipt responses. New feature: Local DSFinV-K export (Preview) Starting with this version, Middleware users can download the DSFinV-K export not only from the Portal, but also directly from the Middleware's Journal endpoint. This enables users that use the free Middleware only (without any add-ons like the revision-safe cloud storage) to be fully compliant to the tax authorities' regulations. The ftJournalType for getting a DSFinV-K export is 0x4445000000000002, the returned file is a .zip stream. More details about how to access this endpoint can be found in our docs. Please note that this is a preview version of the DSFinV-K export, and still contains a few issues and minor limitations: - Some VAT rates are currently not calculated correctly, e.g. in lines_vat.csv - Depending on how vouchers are booked, they may not yet be properly written into the export. - The KASSE_UST_ZUORDNUNG field is not yet filled. We decided for releasing this preview versions despite the known limitations to both give POS Creators the opportunity to already implement everything, and also gather as much direct feedback as possible. We will focus on resolving the open points as soon as possible and will release a new version with a feature-complete export soon. New SCU: Diebold Nixdorf After testing our Diebold Nixdorf SCU thoroughly on the sandbox and obtaining some customer feedback (also via Github - thanks to everyone who reached out), we released the Diebold Nixdorf SCU to production. It therefore can be used and configured via the Portal the exact same way as our other SCUs. Combined with the possibility to order these TSEs via our Portal's shop, we hope to be able to support all customer demands who plan to work with this hardware now. New feature: More detailed responses for closing and out-of-operation receipts We got some customer feedback about missing information in daily-closing and out-of-operation receipt responses, and therefore included some additional data to them: - Responses to daily-closing receipts now contain the upcounting DailyClosingNumberproperty in the JSON body of the ftStateDatafield. This makes it possible to reference this number e.g. in later receipts without the requirement to count the daily closings in the POS software itself. - Equivalent to initial-operation receipts, out-of-operation receipts now properly create notifications in both the SignatureItems of the response and the Action Journal. While the German tax authorities still don't require this (this regulation was deferred), we're now prepared to properly handle these notifications as soon as they become mandatory. New feature: Strongly signed interface package As we received some requests from our customers for a fiskaltrust.interface NuGet package with a strong name, we decided to provide an additional package with these capabilities. The fiskaltrust.interface.StrongName package can be downloaded from NuGet.org. Stability improvement: Exception handling We fixed two issues related to exception handling in this release. The first one was a bug that we accidentally introduced in version 1.3.5 and made the Middleware freeze in certain cases when an exception occurred in the Queue, e.g. 
when the sent ftCashBoxId was not matching. The second issue was less critical, but annoying as it hid other errors. In some cases, customers who use gRPC did not receive real exception messages, but an exception stating that the gRPC trailing metadata exceeded the allowed size. The reason is that we append error details to this trailing metadata - an example of how to read this can be found in our gRPC samples. This issue is now resolved as well. How to update Existing configurations with versions greater than 1.3.1 continue to work, but we of course recommend updating, especially if you're affected by one of the mentioned bugs or depend on the local DSFinV-K generation. If you are affected by the gRPC issue, please also update your Launcher by re-downloading it. Updated packages: - fiskaltrust.Middleware.Queue.SQLite v1.3.6 - fiskaltrust.Middleware.Queue.MySQL v1.3.6-rc1 (sandbox only) - fiskaltrust.Launcher v1.3.6 - fiskaltrust.interface.StrongName v1.3.1.
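As a small illustration of the new daily-closing response data: the DailyClosingNumber is delivered inside the JSON carried by the response's ftStateData field, so a POS integration can read it roughly as sketched below. This is a hedged example only; the exact shape of the response object depends on your client library, so verify the field access against the Middleware documentation.

import json

def get_daily_closing_number(ft_state_data: str):
    """Read the DailyClosingNumber from the ftStateData JSON of a daily-closing receipt response."""
    if not ft_state_data:
        return None
    state = json.loads(ft_state_data)
    return state.get("DailyClosingNumber")  # property name as documented above; None if absent

# Example with a minimal, made-up ftStateData payload:
print(get_daily_closing_number('{"DailyClosingNumber": 42}'))  # -> 42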
https://docs.fiskaltrust.cloud/docs/release-notes/middleware/1.3.6
2022-05-16T11:06:07
CC-MAIN-2022-21
1652662510117.12
[]
docs.fiskaltrust.cloud
IP capacity = public cloud default capacity - sum(current IP assignments) As a cluster administrator, you can configure the OVN-Kubernetes Container Network Interface (CNI) cluster network provider to assign one or more egress IP addresses to a namespace, or to specific pods in a namespace.. To assign one or more egress IPs to a namespace or specific pods in a namespace, the following conditions must be satisfied: At least one node in your cluster must have the k8s.ovn.org/egress-assignable: "" label. An EgressIP object exists that defines one or more egress IP addresses to use as the source IP address for traffic leaving the cluster from pods in a namespace. When creating an EgressIP object, the following conditions apply to nodes that are labeled with the k8s.ovn.org/egress-assignable: "" label: An egress IP address is never assigned to more than one node at a time. An egress IP address is equally balanced between available nodes that can host the egress IP address. If the spec.EgressIPs array in an EgressIP object specifies more than one IP address, the following conditions apply: No node will ever host more than one of the specified IP addresses. Traffic is balanced roughly equally between the specified IP addresses for a given namespace. If a node becomes unavailable, any egress IP addresses assigned to it are automatically reassigned, subject to the previously described conditions. When a pod matches the selector for multiple EgressIP objects, there is no guarantee which of the egress IP addresses that are specified in the EgressIP objects is assigned as the egress IP address for the pod. Additionally, if an EgressIP object specifies multiple egress IP addresses, there is no guarantee which of the egress IP addresses might be assigned to a pod. For example, if a pod matches a selector for an EgressIP object with two egress IP addresses, 10.10.20.1 and 10.10.20.2, either might be assigned to the pod. The following diagram depicts an egress IP address configuration. The diagram describes four pods in two different namespaces running on three nodes in a cluster. The nodes are assigned IP addresses from the 192.168.126.0/18 CIDR block on the host network. Both Node 1 and Node 3 are labeled with k8s.ovn.org/egress-assignable: "" and thus available for the assignment of egress IP addresses. The dashed lines in the diagram depict the traffic flow from pod1, pod2, and pod3 traveling through the pod network to egress the cluster from Node 1 and Node 3. When an external service receives traffic from any of the pods selected by the example EgressIP object, the source IP address is either 192.168.126.10 or 192.168.126.102. The traffic is balanced roughly equally between these two nodes. The following resources from the diagram are illustrated in detail: Namespaceobjects The namespaces are defined in the following manifest: apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod EgressIPobject The following EgressIP object describes a configuration that selects all pods in any namespace with the env label set to prod. The egress IP addresses for the selected pods are 192.168.126.10 and 192.168.126.102. 
EgressIPobject apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: assignments: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102 For the configuration in the previous example, OKD assigns both egress IP addresses to the available nodes. The status field reflects whether and where the egress IP addresses are assigned. The following YAML describes the API for the EgressIP object. The scope of the object is cluster-wide; it is not created in a namespace. apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> (1) spec: egressIPs: (2) - <ip_address> namespaceSelector: (3) ... podSelector: (4) ... The following YAML describes the stanza for the namespace selector: namespaceSelector: (1) matchLabels: <label_name>: <label_value> The following YAML describes the optional stanza for the pod selector: podSelector: (1) matchLabels: <label_name>: <label_value> In the following example, the EgressIP object associates the 192.168.126.11 and 192.168.126.102 egress IP addresses with pods that have the app label set to web and are in the namespaces that have the env label set to prod: EgressIPobject apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod In the following example, the EgressIP object associates the 192.168.127.30 and 192.168.127.40 egress IP addresses with any pods that do not have the environment label set to development: EgressIPobject apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development You can apply the k8s.ovn.org/egress-assignable="" label to a node in your cluster so that OKD can assign one or more egress IP addresses to the node. Install the OpenShift CLI ( oc). Log in to the cluster as a cluster administrator. To label a node so that it can host one or more egress IP addresses, enter the following command: $ oc label nodes <node_name> k8s.ovn.org/egress-assignable="" (1)
https://docs.okd.io/4.10/networking/ovn_kubernetes_network_provider/configuring-egress-ips-ovn.html
2022-05-16T11:49:48
CC-MAIN-2022-21
1652662510117.12
[]
docs.okd.io
https://docs.pega.com/pega-customer-decision-hub-user-guide/85/avoiding-overexposure-actions-volume-constraints
2022-05-16T11:14:07
CC-MAIN-2022-21
1652662510117.12
[]
docs.pega.com
Access Paths An accessor can operate on specific elements of collections (lists, maps, strings, and ranges) using an access path. An access path is made up of one or more access path components, of which there are several types: indices, keys, dynamic keys, and slices. These are separated by the path separator token /. Indexing Accessing elements in ordered collections (lists, strings, and ranges) is known as indexing. All Rant collections use "zero-based indexing"; in other words, the first element starts at index 0, the second at index 1, and so on. <%numbers = (: 1; 3; 5; 7; 9)> # Set the first number to 1 <numbers/0 = 100> # Get the first number <numbers/0> # -> 100 Negative indices are relative to the end of the collection. This means -1 represents the last element, -2 the penultimate element, and so on. <% Last char: <msg/-1>\n # same as <msg/4> or <msg/([len: <msg>] - 1)> # Last char: o Keying Accessing elements in maps is known as keying. Simply specify the desired map key after the /to access the map element with that key: <%citizen = (:: name = "Steve"; age = 50; mood = "angry"; )> # Set <citizen/age> to 150 <citizen/age = 150> <citizen/name>\n # -> Steve <citizen/age>\n # -> 150 <citizen/mood>\n # -> angry Keys follow the same naming rules as variables, unless specified as a dynamic key (see below). Dynamic keys Where an index or key must be calculated at runtime, a dynamic key may be used. Simply use an expression enclosed in ()in place of the index or key: <%fruits = (apple; orange; banana; tomato)> <fruits/( [len: <fruits> |> rand: 0; [] - 1])> # returns a random fruit The value of a dynamic key must be compatible with the type of collection being accessed: for example, a string cannot be used to index a list, but an int can be used to key a map because its conversion to a string is infallible (in other words, all int values have a valid string conversion). Dynamic key type compatibility Slicing Accessing a contiguous range of elements in an ordered collection is known as slicing. Slices can be fully-bounded (having start + end points), half-bounded (having a start or end point but not both), or unbounded (all elements). When you get a slice of a collection, the accessor returns a new copy of the collection containing the slice contents. Slice notation takes the following forms: # fully-bounded <my-list/2..5> # get all elements between index 2 (inclusive) and index 5 (exclusive) # start-bounded <my-list/2..> # get all emements starting at index 2 (inclusive) # end-bounded <my-list/..5> # get all elements until index 5 (exclusive) # unbounded <my-list/..> # get all elements (equivalent to a shallow-clone) Splicing You can also set a slice on mutable collection types, an operation also known as splicing: <$my-list = (: 1; 2; 3)> <my-list/1..2 = (: a; b)> # the splice value doesn't have to be the same size! <my-list/3..4 = (c; d)> # the splice value can also be a tuple! <my-list> # -> (: 1; a; b; 3) Dynamic slices Slices also support dynamic bounds; just replace any slice bound with a dynamic key: <% [rep: [len: <message>]] { # Use the current block iteration number to slice the message <message/..([step])>\n } This produces the following output: f fa fan fant fanta fantas fantast fantasti fantastic Nested access paths Access paths can be nested. This means if you have an array in a map and you want to access an element of that array, you most certainly can; just add another component to the path. 
<%arrays = (:: odd-numbers = (: 1; 3; 5; 7; 9); even-numbers = (: 0; 2; 4; 6; 8); )> # Get the last element of <arrays/odd-numbers> <arrays/odd-numbers/-1>
https://docs.rant-lang.org/language/accessors/access-paths.html
2022-05-16T13:07:05
CC-MAIN-2022-21
1652662510117.12
[]
docs.rant-lang.org
Sharing tables and columns As an administrator, you can share view or edit access to any table. By sharing only specific columns of a table, you apply column-level security (CLS). You can share multiple tables from the Data tab, or share a single table from within the table that you want to share. Share from the Data tab To share a table, worksheet, or view from the Data tab, follow these steps. You can also share multiple objects at a time from the Data tab.
https://docs.thoughtspot.com/software/6.2/share-source-tables
2022-05-16T12:10:37
CC-MAIN-2022-21
1652662510117.12
[]
docs.thoughtspot.com
Health Check API response if RabbitMQ fails to start Check the status of each IQ Bot service using the Health Check API if RabbitMQ fails to start. The Health Check response for RabbitMQ startup failure is different in case of FileManager, Project, Validator, VisionBot as described in the following table.
https://docs.automationanywhere.com/bundle/enterprise-v2019/page/enterprise-cloud/topics/iq-bot/install/iqb-validation-healthcheck-rabbitmq-failure.html
2022-05-16T12:23:51
CC-MAIN-2022-21
1652662510117.12
[]
docs.automationanywhere.com
fiskaltrust.Middleware 1.3.7 (Germany) September 21, 2020 Version 1.3.7 contains an important fix for Entity Framework (EF) queues, which should resolve a recurring SQL concurrency exception that multiple customers experienced. While this only occurred for a small percentage of receipts, it was nevertheless a critical issue for affected customers - hence we decided to release version 1.3.7 earlier than expected. The feature and stability updates we announced in the last release notes will be included in version 1.3.8. Stability improvement: Fixed concurrency error in EF queue We fixed an error multiple customers were experiencing when using the EF queue (SQLite was not affected). In some cases, the component that backs up receipt data to the fiskaltrust.Cloud (HelipadHelper) was blocking sign requests from being processed. With unlucky timing, this could lead to failed requests, and exceptions similar to this one: System.Data.SqlClient.SqlException: New transaction is not allowed because there are other threads running in the session. This issue is now resolved, and concurrent access is now possible (sign requests are still processed sequentially, of course). How to update Existing configurations with versions greater than 1.3.1 continue to work, but we strongly recommend users of the EF queue to update to prevent this issue.
https://docs.fiskaltrust.cloud/docs/release-notes/middleware/1.3.7
2022-05-16T12:03:19
CC-MAIN-2022-21
1652662510117.12
[]
docs.fiskaltrust.cloud
Transactions View The Transactions view allows you to view all of the transactions that are currently taking place in the selected space (active transactions). This view can provide helpful information, for example, when a transaction is stuck. The Transactions view allows you to: View transaction details: - The transaction ID. - The transaction type – local or distributed. - The transaction status. - The date and time the transaction was started. - The transaction lease in milliseconds – relevant only for local transactions. - The number of objects locked under the transaction. View the specific objects locked under the transaction – clicking on the transaction displays a table of the locked objects at the bottom of the screen which shows each object’s UID, class name, operation type, and lock type. Drill into objects locked under the transaction by double-clicking them, using the Object Inspector. Refresh rate – every 1, 3, or 5 seconds (see below). For details on transactions, refer to the Transaction Management section. Refresh Options You can choose to refresh the transactions displayed periodically. Select the refresh rate desired from the Refresh Rate drop-down menu. To stop automatically refreshing the transactions, click the Stop button. When auto-refresh is running, a green blinking dot is displayed on the right side of the screen. The Transactions view is dynamic: - The transaction’s ID displayed at the top of the screen changes constantly. Every time the ID number changes, this means a new transaction is running. This continues until all of the transactions defined finish, then the table becomes empty. You won’t be able to see each running transaction – depending on the size of the transaction, it usually takes less than a second to finish. - The number of objects locked under a transaction displayed also changes constantly. Therefore, when you click on a transaction, the number of locked objects displayed below isn’t 100% accurate. For example, if the refresh rate selected is one second, the locked objects displayed in the few seconds it took to click on the transaction and view its details, are not shown below.
https://docs.gigaspaces.com/xap/11.0/admin/gigaspaces-browser-transaction-view.html
2022-05-16T12:32:34
CC-MAIN-2022-21
1652662510117.12
[]
docs.gigaspaces.com
TSC Processes This document provides a collection of norms for how the Magma TSC operates. Positions TSC Positions are elected via majority vote from the codeowners. They last until the earliest of: - 1-year term (pending TSC-wide phase-adjustment) - Resignation - TSC-majority vote to end term early TSC chair The position is elected via majority vote from the TSC. It lasts until the earliest of: - 1-year term (pending TSC-wide phase-adjustment) - Resignation - TSC-majority vote to end term early Voting considerations - Majority vote means >50% of the vote -- a tie vote means the motion fails - There is no consideration for quorum for the following votes -- motions require yea votes from >50% of the full voting body - Ending a TSC member's term early - Voting in the TSC chair Additional norms - In the event of disagreements within the TSC, codeowners, or community on whether bylaws and norms were followed, or whether the set of bylaws and norms are reasonable, escalation routes first through the Governance Board, then finally to LinuxFoundation when necessary - TSC chair election to align with the calendar year - Phase-adjustment: TSC 1-year terms to be phased such that, at a uniform turnover rate, full turnover would not occur in less than 6 months. E.g., if the TSC had 6 members, phase the term end-dates out by 1 month each - TSC votes are held in the [email protected] mailing list - Codeowner votes are held in #governance-codeowners-private via the Accord bot
https://docs.magmacore.org/docs/contributing/contribute_tsc_norms
2022-05-16T11:03:44
CC-MAIN-2022-21
1652662510117.12
[]
docs.magmacore.org
Filter your own visits from NS8 reports How to prevent your own visits to your website from impacting your Visitors and NS8 reports. To filter or exclude your own visits to your websites from session reporting, use Customize Columns on the page you are pulling data from. Identify your own IP address If you do not know your own IP address, you will need to look it up using a service like What's My IP. Bear in mind that unless you have a static IP address, which usually involves a surcharge from your internet provider, your IP address may periodically change. You might want to keep track of the addresses you've filtered if you need to repeat the results in the future. Once you have determined your IP address, you can build a filter. To do this, go to Create an on-screen filter.
https://docs.ns8.com/docs/filtering-your-own-visits-from-ns8-reports
2022-05-16T12:53:45
CC-MAIN-2022-21
1652662510117.12
[]
docs.ns8.com
Open MPI v5.0.x. To-Do Items - 2. Quick start - 3. Getting help - 4. Release notes - 5. Building and installing Open MPI - 6. Open MPI-specific features - 7. Validating your installation - 8. Version numbers and binary compatibility - 9. Building MPI applications - 10. Running MPI applications - 11. Networking system support - 12. Frequently Asked Questions (FAQ) - 12.1. Supported systems - 12.2. System administrator-level technical information - 12.3. Building Open MPI - 12.4. Running MPI applications - 12.5. Fault Tolerance - 12.6. Troubleshooting - 12.7. Parallel debugging - 12.8. Large Clusters - 12.9. Open MPI IO (“OMPIO”) - 12.10. MacOS - 12.11. Run-Time Tuning - 12.12. General Tuning - 13. Developer’s guide - 13.1. Prerequisites - 13.2. Obtaining a Git clone - 13.3. Compiler Pickyness by Default - 13.4. Running autogen.pl - 13.5. Building Open MPI - 13.6. Open MPI terminology - 13.7. Source code tree layout - 13.8. Internal frameworks - 13.9. Manually installing the GNU Autootools - 13.10. Installing Sphinx - 13.11. ReStructured Text for those who know Markdown - 14. Contributing to Open MPI - 15. License - 16. History of Open MPI - 17. News - 18. Open MPI manual pages - 19. OpenSHMEM manual pages
https://docs.open-mpi.org/en/v5.0.x/
2022-05-16T12:34:25
CC-MAIN-2022-21
1652662510117.12
[]
docs.open-mpi.org
ChipWhisperer Logging¶ As of ChipWhisperer 5.5.1, logging has been reworked in ChipWhisperer to take advantage of having multiple specialized loggers. Instead of using the default logger for everything, we now use 6 primary loggers for different parts of ChipWhisperer software (from chipwhisperer.logging): other_logger = logging.getLogger("ChipWhisperer Other") target_logger = logging.getLogger("ChipWhisperer Target") scope_logger = logging.getLogger("ChipWhisperer Scope") naeusb_logger = logging.getLogger("ChipWhisperer NAEUSB") tracewhisperer_logger = logging.getLogger("ChipWhisperer TraceWhisperer") glitch_logger = logging.getLogger("ChipWhisperer Glitch") These loggers are all in the top level ChipWhisperer __init__.py, so you can do: import chipwhisperer as cw cw.scope_logger.warning("Test warning") This allows you to turn different parts of the software to different logging levels. For example, if you’re having issues communicating with the target, you might set the target_logger to debug: import chipwhisperer as cw import logging cw.target_logger.setLevel(logging.DEBUG) Or if you’re doing glitching and find the warnings about double glitches and width/offset of 0 annoying: cw.glitch_logger.setLevel(logging.ERROR) There’s also a convenience function for setting the logging level of all the ChipWhisperer levels: cw.set_all_log_levels(logging.WARNING)
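Because these six loggers are ordinary Python logging.Logger objects, everything from the standard logging module applies to them as well. For example, a file handler can be attached to capture one subsystem's output without changing what is printed to the console (a generic sketch using only standard-library calls; the file name is arbitrary):

import logging
import chipwhisperer as cw

# Write all target-communication messages (DEBUG and up) to a file.
file_handler = logging.FileHandler("cw_target.log")
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s"))

cw.target_logger.setLevel(logging.DEBUG)
cw.target_logger.addHandler(file_handler)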
https://chipwhisperer.readthedocs.io/en/latest/logging.html
2022-05-16T12:02:44
CC-MAIN-2022-21
1652662510117.12
[]
chipwhisperer.readthedocs.io
User authentication By default, only e-mail and password authentication is required for logins to Devo domains. However, to implement stricter policies for user authentication, you can enable one of the following methods for your domain in the Authentication tab of the Preferences → Domain Preferences area. - Password - This is the standard way of logging in using your e-mail and password, and is selected by default. Additionally, you can activate multi-factor authentication (MFA) to add an extra security layer to the e-mail/password credentials. After entering your e-mail and password, you will be prompted to enter a security code generated by an authentication app. - SAML2 - SAML is an open standard that allows users to log in to the application through an identity provider (IdP). - OpenID - Same as SAML, OpenID allows users to access an external IdP and authenticate to access the Devo Platform. OpenID is a lighter-weight protocol and requires explicit user consent to access as part of its communication flows. At least one authentication method must always be selected. If you deactivate the default e-mail/password method, you must enable at least one other method. You can activate several authentication methods in a domain, and users may access using the required one. After logging in, if you switch to a domain in which the authentication method you used to log in to your current domain is not activated, you will be prompted to select one of the authentication methods activated in that domain. Related articles
https://docs.devo.com/confluence/ndt/v7.1.0/domain-administration/user-authentication
2022-05-16T11:33:24
CC-MAIN-2022-21
1652662510117.12
[]
docs.devo.com
Appendix: AT - RKSV

This appendix expands on the information provided in the General Part section by adding details specific to the Austrian market. This additional information is provided only where applicable; the remaining chapters, for which no extra information is required, have been omitted.

The links to regulations and further information can be found at:

Further literature can be found at: Ritz/Koran/Kutschera, SWK-Spezial Registrierkassen- und Belegerteilungspflicht, 1. Auflage 2016, Linde Verlag Wien. ISBN: 9783707333763

Please note that this information is only complete when combined with the General Part. To implement the Middleware, users should first familiarize themselves with the general information and then refer to the country-specific details listed here.
https://docs.fiskaltrust.cloud/docs/poscreators/middleware-doc/austria
2022-05-16T12:28:00
CC-MAIN-2022-21
1652662510117.12
[]
docs.fiskaltrust.cloud
fiskaltrust.Middleware 1.3.8 (Germany)
September 30, 2020

This release of the Middleware includes the stabilized version of the DSFinV-K export and automatically archives the TSE's .tar files on daily-closing receipts.

Finalized feature: Local DSFinV-K export
After publishing a preview version of our local DSFinV-K export in version 1.3.6, we're happy to announce that this feature now leaves the preview state and is fully available as a stabilized, final functionality. We'd like to thank everyone who sent us their highly valued feedback so far!
This version is required for POS Operators that use the free Middleware only (without any add-ons like the revision-safe cloud storage) to be compliant with the tax authorities' regulations. The cloud version of our DSFinV-K export (which can be queried via the Portal and is included in our POS Archive) is of course up-to-date as well.
The ftJournalType for getting a DSFinV-K export is 0x4445000000000002; the returned file is a .zip stream. More details about how to access this endpoint can be found in our docs.

New feature: Automated TSE .tar file export & archiving
Starting with this version, we automatically export the TSE .tar files - which include transaction, audit and system logs - and store them in the Queue's database. The functionality to export .tar files is required by the BSI's Secure Element API (TR-03151), and thus standardized. Auditors may require these .tar files, hence we want to make them available as easily as possible.
Exporting and storing this data in the Queue's database has several advantages:
- Depending on the TSE, .tar file exports may take a lot of time, especially when the storage gets full. Exporting and deleting this data from the TSE regularly therefore greatly enhances performance in case it needs to be accessed quickly.
- Similarly, many TSEs tend to get slow over time due to their low memory size. Exporting and deleting the data from the device obviously also resolves this issue.
- Customers who use our POS Archive profit from our regular revision-safe storage mechanisms and can download an aggregated .tar export directly via the Portal. Therefore, the data is still available to them, even if the TSE e.g. is damaged or lost.
We ensure that no TSE data is deleted unless it was properly exported by combining the internal security mechanisms of the TSEs with additional, software-based checks. Only data that was exported to the Queue is deleted from the device.
There are two options to trigger a .tar file export (a sketch showing how these flags are combined with a receipt case is included below):
- Via a daily-closing receipt (automatically). To prevent this, please add the receipt case flag 0x0000000004000000.
- Via a zero receipt (optionally). To execute this, please add the receipt case flag 0x0000000002000000.

New feature: Configurable SCU timeouts
To solve some special cases of customers with unusually slow TSEs, we made two SCU communication parameters configurable:
- scu-timeout-ms: Determines the timeout value of the Queue-to-SCU communication, in ms. Default is 70 seconds.
- scu-max-retries: The maximum number of retries in case an SCU operation fails with an unexpected exception. Default is 2.
Both of these values can be set via the Queue configuration page in the Portal - just add the key-value pairs there. A rebuild of the affected Cashbox is required to propagate these changes to the Middleware.
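The receipt case flags mentioned above are bit flags that are OR-ed into the ftReceiptCase value of a receipt request. The sketch below only illustrates that combination; the base receipt case shown is a placeholder, not a value taken from this release note - use the receipt case defined for your receipt type in the fiskaltrust documentation.

# Illustration only - the base value below is a placeholder, not an official receipt case.
base_receipt_case = 0x4445000000000000  # placeholder for a German (DE) receipt case

FLAG_TRIGGER_TAR_EXPORT = 0x0000000002000000   # zero receipt: trigger .tar export
FLAG_SUPPRESS_TAR_EXPORT = 0x0000000004000000  # daily closing: prevent .tar export

# Combine the flag with the base case using a bitwise OR before sending the request.
ft_receipt_case = base_receipt_case | FLAG_TRIGGER_TAR_EXPORT
print(hex(ft_receipt_case))  # 0x4445000002000000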
Bug fix: Properly return TSE details of Diebold Nixdorf and CryptoVision TSEs
We fixed two small but important issues in the Diebold Nixdorf and the CryptoVision SCUs:
- CryptoVision: Instead of the correct logTimeFormat, this SCU returned noInputData.
- Diebold Nixdorf: The serial number was returned as a Base64-encoded string instead of the required Octet string.
Both issues are now resolved.

Stability improvement: Improve upload of fast-growing queues
Some customers with rapidly growing queues (mostly in test scenarios) reported that the Queue data was sometimes not properly uploaded to our POS Archive. The log often showed timeout errors in this case. This issue was due to a wrongly set default upload limit for requests and is now resolved. As designed, large queues are now uploaded in chunks.

How to update
Existing configurations with versions greater than 1.3.1 continue to work, but we strongly recommend users to update to be able to use this new, required functionality.
- … v1.3.8
- fiskaltrust.Middleware.Queue.SQLite v1.3.8
- fiskaltrust.Middleware.Queue.MySQL v1.3.8-rc2
- fiskaltrust.Middleware.SCU.DE.CryptoVision v1.3.8
- fiskaltrust.Middleware.SCU.DE.DieboldNixdorf v1.3.8
https://docs.fiskaltrust.cloud/docs/release-notes/middleware/1.3.8
2022-05-16T12:57:23
CC-MAIN-2022-21
1652662510117.12
[]
docs.fiskaltrust.cloud
Pipelined

Overview

Pipelined is the control application that programs rules in the Open vSwitch (OVS). In implementation, Pipelined is a set of network services that are chained together. These services can be chained and enabled/disabled through the REST API in the orchestrator.

Open vSwitch & OpenFlow

Open vSwitch (OVS) is a virtual switch that implements the OpenFlow protocol. Pipelined services program rules in OVS to implement basic PCEF functionality for user plane traffic. The OpenFlow pipeline of OVS contains 255 flow tables. Pipelined splits the tables into two categories:
- Main table (Table 1 - 20)
- Scratch table (Table 21 - 254)

Source: OpenFlow Specification

Each service is associated with a main table, which is used to forward traffic between different services. Services can optionally claim scratch tables, which are used for complex flow matching and processing within the same service. See Services for a detailed breakdown of each Pipelined service.

Each flow table is programmed by a single service through OpenFlow and can contain multiple flow entries. When a packet is forwarded to a table, it is matched against the flow entries installed in the table and the highest-priority matching flow entry is selected. The actions defined in the selected flow entry are applied to the packet. (A minimal sketch of how a controller programs its table is shown below, after the Service types section.)

Ryu

Ryu is a Python library that provides an API wrapper for programming OVS. Pipelined services are implemented as Ryu applications (controllers) under the hood. Ryu apps are single-threaded entities that communicate using an event model. Generally, each controller is assigned a table and manages its flows.

Services

Static Services

Static services include mandatory services (such as OAI and inout) which are always enabled, and services with a set table number. Static services can be configured in the YAML config.

 GTP port            Local Port
  Uplink              Downlink
     |                    |
     |                    |
     V                    V
-------------------------------
| Table 0 (SPECIAL)           |
| GTP APP (OAI)               |
|- sets IMSI metadata         |
|- sets tunnel id on downlink |
|- sets eth src/dst on uplink |
-------------------------------
              |
              V
-------------------------------
| Table 1 (PHYSICAL)          |
| inout                       |
|- sets direction bit         |
-------------------------------
              |
              V
-------------------------------
| Table 2 (PHYSICAL)          |
| ARP                         |
|- Forwards non-ARP traffic   |
|- Responds to ARP requests w/| ---> Arp traffic - LOCAL
|  ovs bridge MAC             |
-------------------------------
              |
              V
-------------------------------
| Table 3 (PHYSICAL)          |
| access control              |
|- Forwards normal traffic    |
|- Drops traffic with ip      |
|  address that matches the   |
|  ip blocklist               |
-------------------------------
              |
              V
 Configurable PHYSICAL apps
    managed by cloud          <--->  Scratch tables
     (Tables 4-9)                    (Tables 21 - 254)
              |
              V
-------------------------------
| Table 10 (SPECIAL)          |
| inout                       |
|- Forwards uplink traffic to |
|  LOCAL port                 |
|- Forwards downlink traffic  |
|  to GTP port                |
-------------------------------
              |
              V
 Configurable LOGICAL apps
    managed by cloud          <--->  Scratch tables
     (Tables 11-19)                  (Tables 21 - 254)
              |
              V
-------------------------------
| Table 20 (SPECIAL)          |
| inout                       |
|- Forwards uplink traffic to |
|  LOCAL port                 |
|- Forwards downlink traffic  |
|  to GTP port                |
-------------------------------
     |                    |
     |                    |
     V                    V
 GTP port            Local Port
 downlink              uplink

Service types

Services (controllers) are split into two groups: Physical and Logical.
Physical controllers: arpd, access_control.
Logical controllers: dpi, enforcement.
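The following is a minimal, illustrative Ryu sketch of how a table-owning controller could install a flow entry and forward unmatched traffic to the next main table. It is not actual Pipelined source code; the class name, table numbers and priority are placeholders.

# Minimal sketch of a table-owning Ryu controller (illustration only, not Magma source).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class ExampleTableController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    TABLE_ID = 5        # placeholder: main table assigned to this service
    NEXT_TABLE_ID = 10  # placeholder: next service's table in the chain

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def _install_default_flows(self, ev):
        datapath = ev.msg.datapath
        parser = datapath.ofproto_parser

        # Low-priority catch-all: anything this service does not handle is
        # forwarded to the next main table in the pipeline.
        match = parser.OFPMatch()
        instructions = [parser.OFPInstructionGotoTable(self.NEXT_TABLE_ID)]
        flow_mod = parser.OFPFlowMod(datapath=datapath,
                                     table_id=self.TABLE_ID,
                                     priority=0,
                                     match=match,
                                     instructions=instructions)
        datapath.send_msg(flow_mod)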
Configurable Services

These services can be enabled and ordered from the orchestrator cloud. mconfig is used to stream the list of enabled services to the gateway. Table numbers are dynamically assigned to these services and depend on the order.

-------------------------------
| Table X                     |
| DPI                         |
|- Assigns App ID to each new |
|  IP tuple encountered       |
|- Optional, requires separate|
|  DPI engine                 |
-------------------------------

-------------------------------        -------------------------------
| Table X                     |        | Scratch Table 1             |
| enforcement                 |  --->  | redirect                    |
|- Activates/deactivates rules|        |- Drop all non-HTTP traffic  |
|  for a subscriber           |        |  for redirected subscribers |
|                             |  <---  |                             |
|                             |        |                             |
-------------------------------        -------------------------------
       |
       |                               -------------------------------
       ----------------------------->  | Scratch Table 2             |
                                       | enforcement stats           |
                                       |- Keeps track of flow stats  |
                                       |  and sends to sessiond      |
                                       |                             |
                                       |                             |
                                       -------------------------------

Reserved registers

Nicira extension for OpenFlow provides additional registers (0 - 15) that can be set and matched. The table below lists the registers used in Pipelined.

Resilience

Pipelined service is restart resilient and can seamlessly recover from service restarts. This is achieved by:
- Querying all flows on controller startup. This is done through a separate startup flow controller that handles querying all initial stats.
- Comparing the flows received in step 1 with the flows obtained from the sessiond setup() call
- Activating new flows that are not present
- Deactivating flows that are active but not included in the sessiond call

This works because OVS secure fail mode doesn't remove flows when the controller disconnects. (A rough sketch of this reconciliation is included at the end of this page.)

Note: Currently we reinsert some flows instead of doing the diff logic on them (e.g. enforcement redirection flows, as they need async DHCP request resolution, and other tables that don't hold any session data (inout, ue_mac, etc.)), but this will be added later.

Testing

Scripts

Some scripts in /lte/gateway/python/scripts may come in handy for testing. These scripts should be run in the virtualenv, so magtivate needs to be run first to enter the virtualenv.

pipelined_cli.py can be used to make calls to the rpc API
- Some commands require sudo privileges. To run the script as sudo in the virtualenv, use venvsudo pipelined_cli.py
- Example:
./pipelined_cli.py enforcement activate_dynamic_rule --imsi IMSI12345 --rule_id rule1 --priority 110 --hard_timeout 60
venvsudo ./pipelined_cli.py enforcement display_flows

fake_user.py can be used to debug Pipelined without an eNodeB. It creates a fake_user OVS port and an interface with the same name and IP (10.10.10.10). Any traffic sent through the interface will traverse the pipeline, as if it were sent from a user IP (192.168.128.200 by default).
- Example:
./fake_user.py create --imsi IMSI12345
sudo curl --interface fake_user -vvv --ipv4 > /dev/null

Unit Tests

See the Unit Test README for more details.

Integration Tests

Traffic integration tests cover the end to end flow of Pipelined. See the Integration Test README for more details.
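As a rough illustration of the startup reconciliation described in the Resilience section above, the sketch below shows the diff idea only; the function and data structures are hypothetical and do not correspond to the actual Pipelined implementation.

# Hypothetical sketch of the startup flow reconciliation (not Magma source code).
def reconcile_flows(flows_on_switch, flows_from_sessiond, activate, deactivate):
    """Diff the flows queried from OVS at startup against the rules reported by
    sessiond's setup() call, then converge the switch to the desired state."""
    current = set(flows_on_switch)
    desired = set(flows_from_sessiond)

    for flow in desired - current:
        activate(flow)      # known to sessiond but missing from OVS

    for flow in current - desired:
        deactivate(flow)    # still installed in OVS but no longer tracked

# Example usage with placeholder rule identifiers:
reconcile_flows(
    flows_on_switch={"rule1", "rule2"},
    flows_from_sessiond={"rule2", "rule3"},
    activate=lambda rule: print("activate", rule),
    deactivate=lambda rule: print("deactivate", rule),
)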
https://docs.magmacore.org/docs/lte/pipelined
2022-05-16T12:24:50
CC-MAIN-2022-21
1652662510117.12
[]
docs.magmacore.org
Release Notes: 2013-03-20

Our team has been hard at work over the last month to bring you improvements across the platform! A few of the bigger updates can be found below:

Browserbot changed to use Chrome
We've updated our agent codebase to use Chrome for the Browserbot, instead of Firefox. This is a good thing: it's faster, more stable, and less buggy; Chrome will alleviate some of the concerns people have raised, mostly in relation to our Enterprise Agents. A list of key features affected by this change is below.
- Page Loads now show the waterfall with mixed-case URLs
- NTLM authentication works, in addition to basic authentication
- Browser session cleared after each page load / transaction test completes
- Better interaction with Selenium (used in web transactions)
This will affect customers with Enterprise Agents deployed in their infrastructure. We continuously update our agent codebase and push these changes to all agents, seamlessly upgrading agents during periods of inactivity. The te-chrome package will be installed on all Enterprise Agents, and te-firefox will be removed. Users monitoring their logs and wanting to exclude the BrowserBot can continue to do so by filtering on Chrome version 99, rather than Firefox.
We do not anticipate any deployment hiccups during this process, and as always, agents will continue collecting data during any period of downtime or upgrade. We'll monitor our agent deployment carefully, so as to avoid any unforeseen downtime.

Minor bugs squashed
We've also optimized code and resolved some minor UI errors reported by our customers. This includes the following:
- Using the views menu to open BGP View would sometimes result in a server error.
- The location selector in DNSSEC view would sometimes not update when a different location was selected from either the world view or the detailed metrics table.
- Certain links in the DNS+ views were not functioning correctly.

Documentation available!
Since we released our updated Customer Success Center last release, we've been hard at work developing some basic documentation for our knowledge base. Articles are now available for each of the major areas and views in the platform. This includes items like the Getting Started Guide, as well as sections on each major view of the product. Have a look, ask questions as needed, suggest updates and/or needed documentation... we're happy to answer and here to make you successful.
We've also launched a contact number for customers experiencing critical issues: +1 (415) 237-EYES.
https://docs.thousandeyes.com/archived-release-notes/2013/2013-03-20-release-notes
2022-05-16T11:39:39
CC-MAIN-2022-21
1652662510117.12
[]
docs.thousandeyes.com
Configuring CUE

To configure CUE:

1. If necessary, switch user to root.

2. Open /etc/escenic/cue-web/config.yml for editing. This is a new file, so it will be empty.

3. Enter the following:

endpoints:
  escenic: "
  newsgate: "

where escenic-host is the IP address or host name of the Content Engine CUE is to provide access to, and newsgate-host is the IP address or host name of the CCI Newsgate system CUE is to provide access to. If no CCI Newsgate system is present, then you should set newsgate to an empty string (or remove the line completely):

endpoints:
  escenic: "
  newsgate: ""

4. Save the file.

5. Enter:

This reconfigures CUE with the Content Engine web service URL you specified in step 3.

6. Open /etc/nginx/sites-available/default for editing, and replace the entire contents of the file with the following:

server {
  listen 81 default;
  include /etc/nginx/default-site/*.conf;
}

7. Create a new folder to contain your site definitions:

8. Add two files to the new /etc/nginx/default-site/ folder, called cue-web.conf and webservice.conf:

9. Open /etc/nginx/default-site/cue-web.conf for editing and add the following contents:

location /cue-web/ {
  alias /var/www/cue-web/;
  expires modified +310s;
}

10. Open /etc/nginx/default-site/webservice.conf for editing and add the contents described in Web Service CORS Configuration.

You should now be able to access CUE by opening a browser and going to http://your-host:81/cue-web.
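As an optional extra check, you can verify that the endpoints in config.yml were written as intended with a small script like the one below. This is a convenience sketch, not part of the CUE tooling; it assumes PyYAML is available on the host.

# Optional sanity check for /etc/escenic/cue-web/config.yml (not part of CUE itself).
import yaml

with open("/etc/escenic/cue-web/config.yml") as f:
    config = yaml.safe_load(f)

endpoints = config.get("endpoints", {}) if config else {}
for key in ("escenic", "newsgate"):
    value = endpoints.get(key)
    if value is None:
        print(f"missing endpoint entry: {key}")
    elif value == "":
        print(f"{key}: empty (allowed for newsgate when no Newsgate system is present)")
    else:
        print(f"{key}: {value}")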
http://docs.escenic.com/cue-user-guide/1.0/configuring_cue.html
2022-05-16T12:23:08
CC-MAIN-2022-21
1652662510117.12
[]
docs.escenic.com
fiskaltrust.Middleware 1.3.9 (Germany)
October 6, 2020

In this version, we published an important fix for the TSE .tar file export, both in the Queue and the SCU packages. This resolves a critical issue where customers with pre-existing Queues were not able to properly execute daily-closing receipts anymore. We highly recommend updating to this version on POS Systems that use version 1.3.8 to resolve this issue.

Bug fix: Resolve .tar file export issues during daily-closing receipt
Since the previous version (1.3.8), the Middleware automatically exports the TSE's .tar files during the daily-closing receipt. Unfortunately, due to a communication issue between Queue and SCU, this export was executed far more slowly than expected - which in some cases completely prevented the creation of a daily-closing receipt when TSEs already contained too many transactions. In other cases, the daily-closing receipt took an intolerable amount of time. With this version, we drastically increased the communication speed when querying the .tar export, which resolves this issue in our test scenarios.
We also improved our internal device locking to prevent some race conditions some customers were experiencing, which could lead to issues both while signing receipts and especially while creating .tar exports.
An email was sent out to POS Creators and POS Dealers to inform them about this issue. We'd like to use this opportunity to again apologize for any inconvenience this may have caused our customers, and we have introduced internal measures to avoid issues like this propagating to our production systems in the future.

New SCU: ATrust (Sandbox)
After implementing the ATrust Cloud SCU, we are making it available on the sandbox for early adopters. Since the ATrust TSE is not yet certified, it is not available in production.

How to update
Existing configurations with versions greater than 1.3.1 continue to work, but we strongly recommend users to update to resolve this critical issue. Version 1.3.8 should not be used anymore.
- … v1.3.9
- fiskaltrust.Middleware.Queue.SQLite v1.3.9
- fiskaltrust.Middleware.Queue.MySQL v1.3.9-rc1
- fiskaltrust.Middleware.SCU.DE.CryptoVision v1.3.9
- fiskaltrust.Middleware.SCU.DE.Swissbit v1.3.9
- fiskaltrust.Middleware.SCU.DE.DieboldNixdorf v1.3.9
- fiskaltrust.Middleware.SCU.DE.ATrust v1.3.9
https://docs.fiskaltrust.cloud/docs/release-notes/middleware/1.3.9
2022-05-16T11:39:46
CC-MAIN-2022-21
1652662510117.12
[]
docs.fiskaltrust.cloud
Methods

Method Descriptions

JavaScriptObject create_callback ( Object object, String method )
Creates a reference to a script function that can be used as a callback by JavaScript. The reference must be kept until the callback happens, or it won't be called at all. The callback must accept a single Array argument, which will contain the JavaScript arguments. See JavaScriptObject for usage.

JavaScriptObject create_object ( String object, ... )
Creates a new JavaScript object using the new constructor. The object must be a valid property of the JavaScript window. See JavaScriptObject for usage.

void download_buffer ( PoolByteArray buffer, String name, String mime="application/octet-stream" )
Prompts the user to download a file containing the specified buffer. The file will have the given name and MIME type.
Note: The browser may override the MIME type provided based on the file name's extension.
Note: Browsers might block the download if download_buffer is not being called from a user interaction (e.g. button click).
Note: Browsers might ask the user for permission or block the download if multiple download requests are made in quick succession.

JavaScriptObject get_interface ( String interface )
Returns an interface to a JavaScript object that can be used by scripts. The interface must be a valid property of the JavaScript window. See JavaScriptObject for usage.
https://docs.godotengine.org/ja/latest/classes/class_javascript.html
2022-05-16T12:12:08
CC-MAIN-2022-21
1652662510117.12
[]
docs.godotengine.org