pullapprove_conditions allow you to determine which PRs need to be reviewed and which don't. Often, people don't need to review "work in progress" PRs or PRs that are being merged into specific branches (for example, branches used for further development or testing). This is a top-level setting which completely enables or disables PullApprove (including notifications).
By default there are no
pullapprove_conditions and every PR is
up for review (depending on the conditions of each group). If not specified (see
below), PullApprove will return a "success" status to GitHub if any conditions
are not met so that PRs are still mergeable if PullApprove is a required status
check. To change the status or the explanation that goes with it, you can use
the more complex syntax for defining conditions.
version: 3
pullapprove_conditions:
- "'WIP' not in title"  # only review if not marked as work in progress
- "base.ref == 'master'"  # only review things being merged into master
- "'*travis*' in statuses.successful"  # only review if tests have already passed
- "'hotfix' not in labels"  # let hotfix PRs go through without review

# review groups are only evaluated if all of the pullapprove_conditions are true
groups:
  ...
For more details on what kinds of conditions you can write, look here.
You can further tweak the behavior of
pullapprove_conditions by
specifying the status and explanation that are returned to GitHub when one of
the conditions is not met. The conditions are evaluated in order, and the first
one to fail will be the one to set the status and explanation in GitHub.
version: 3
pullapprove_conditions:
- condition: "base.ref == 'master'"
  unmet_status: success
  explanation: "Review not required unless merging to master"
- "'hotfix' not in labels"  # when using the string type, the default status is "success"
- condition: "'WIP' not in title"
  unmet_status: pending
  explanation: "Work in progress"
- condition: "'*travis*' in statuses.successful"
  unmet_status: failure
  explanation: "Tests must pass before review starts"

# review groups are only evaluated if all of the pullapprove_conditions are true
groups:
  ...
Viewing Redis Slow Log
On the Database > Slow Log page, you can view Slow Log details for Redis Enterprise Software (RS) databases.
Redis Slow Log is one of the best tools for debugging and tracing your Redis database, especially if you experience high latency and high CPU usage with Redis operations. Because Redis is based on a single threaded architecture, Redis Slow Log can be much more useful than slow log mechanisms of multi-threaded database systems such as MySQL Slow Query Log.
Unlike tools that introduce lock overhead (which makes the debugging process very complex), Redis Slow Log is highly effective at showing the actual processing time of each command.
Redis Enterprise Software includes enhancements to the standard Redis Slow Log capabilities that allow you to analyze the execution time complexity of each command. This enhancement can help you better analyze Redis operations, allowing you to compare the differences between execution times of the same command, observe spikes in CPU usage, and more.
This is especially useful with complex commands such as ZUNIONSTORE, ZINTERSTORE and ZRANGEBYSCORE.
The enhanced RS Slow Log adds the Complexity Info field to the output data.
View the Complexity Info data by its respective Command in the table below:
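(The per-command Complexity Info table from the original page is not reproduced here.) As general background, the slow log in open source Redis is configured and read with the standard commands below; the threshold and entry counts are illustrative values, not recommendations from this page:

CONFIG SET slowlog-log-slower-than 10000   # log commands slower than 10,000 microseconds
CONFIG SET slowlog-max-len 128             # keep the last 128 slow entries
SLOWLOG GET 10                             # fetch the 10 most recent slow log entries
SLOWLOG RESET                              # clear the slow log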
Design time
This article demonstrates how to populate RadDomainUpDown with data at design time. The RadListDataItem Collection Editor allows you to do that. You can access it through the Smart tag >> Edit Items option:
Figure 1: RadListDataItem Collection Editor
Another possibility to open the editor is via the Items collection in the Properties Visual Studio section:
Figure 2: Visual Studio Properties window
Either way, you can add a RadListDataItem, which represents a logical data item that can display specific text and an image.
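For reference, the same items can also be added at runtime in code. The following C# sketch assumes a RadDomainUpDown control named radDomainUpDown1 on the form; the item texts are placeholders:

// Add items to the control programmatically (equivalent to using the collection editor).
this.radDomainUpDown1.Items.Add(new RadListDataItem("Item 1"));
this.radDomainUpDown1.Items.Add(new RadListDataItem("Item 2"));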
Welcome to The Badjr & Unity Documentation
Software defined networking allows Network Systems Administrators to design their networks in software, including describing
- Local IP Ranges
- Wireless Configurations
- Multiple Internet Link Configurations
- Failover Requirements
- Servers Monitoring Requirements
- Site to Site VPNs
The Badjrs are then deployed to site after having received their configuration through an adoption process. These devices are then fully modified, managed and monitored via the Unity Control Panel.
To see what's new, click What's New or Getting Started.
This product can be discovered by the Enterprise version of BMC Discovery, but you can still download our free Community Edition to discover other products!
CA APM Introscope (formerly known as CA Wily Introscope) is a Web application management product that allows you to proactively detect, triage and diagnose performance problems in your complex, enterprise and SOA environments.
Load balancing for High Availability SIP/VoIP services
Load balancing is a technique to spread work between two or more computers or other resources to achieve optimal resource utilization, maximize throughput, and minimize response time. The Brekeke SIP Server can be used to balance loads between multiple SIP servers.
Load Balancing
As an example of a Load Balancing structure, this example environment has 3 server machines.
With this structure, the front-end Brekeke SIP Server divides the load and forwards half of it to each of the back-end SIP servers. In this scenario, the front-end Brekeke SIP Server works as a load balancer which divides the load. The two back-end Brekeke SIP Servers work as servers which handle the divided load.
The front-end Brekeke SIP Server handles all of the load, but it just forwards traffic and does not handle RTP packets. You need to install the Brekeke SIP Server on the front-end machine and add some load-balancing DialPlan rules there. There is no general DialPlan for load balancing; rather, the DialPlan rules for load balancing depend on each project's requirements. For larger operations, this structure can be expanded to handle more back-end SIP servers.
Toon Boom Harmony 17 Premium Documentation
Release Notes
The list of new features and enhancements in Harmony 17.0.1.
Installation
How to install Harmony and set up Harmony servers, batch processing and WebCC.
Getting Started Guide
A beginner-friendly introductory guide to the main functionalities of Harmony Premium.
User Guide
How to use each of the functionalities available in Harmony Premium.
Reference
A reference guide for all the dialogs, views, toolbars and menus in Harmony Premium.
Preferences
A reference guide on the user preferences available in Harmony Premium.
Other Applications
Documentation for the other Harmony applications.
PinPoint is a tool that helps you draw and modify elements relative to known positions in a drawing. You can place a target point and then the software dynamically displays the horizontal and vertical distance between the pointer and the target point. You can use PinPoint with all element drawing commands. You can run PinPoint from the Tools menu or the Main toolbar.
How PinPoint Works
PinPoint allows you to provide coordinate input to commands as you draw. The x and y coordinates are relative to a target point that you can position anywhere in the window. You can change the location of the target point at any time by clicking Reposition Target on the ribbon and then clicking a new position in the window.
As you move the pointer around, PinPoint dynamically displays the horizontal and vertical distance between the pointer position and the target point. Help lines show the PinPoint X- and Y-axis and the PinPoint orientation.
Locking and Freeing Values
You can lock the x coordinate or the y coordinate using the X and Y boxes on the ribbon. When one coordinate value is locked, you can position the other coordinate by clicking a position in the window. Or you can set both values using the ribbon boxes. If you want to free the dynamics for a locked value, you can clear the value box by double-clicking in the box and pressing Backspace or Delete.
PinPoint Orientation
In its default orientation, PinPoint's x-axis is horizontal. You can re-orient the x-axis to any angle by setting the angle on the PinPoint ribbon. The figure shows the PinPoint angle set to 20 degrees.
Tutorial: Migrate your data to Cassandra API account in Azure Cosmos DB
As a developer, you might have existing Cassandra workloads that are running on-premises or in the cloud, and you might want to migrate them to Azure. You can migrate such workloads to a Cassandra API account in Azure Cosmos DB. This tutorial provides instructions on different options available to migrate Apache Cassandra data into the Cassandra API account in Azure Cosmos DB.
This tutorial covers the following tasks:
- Plan for migration
- Prerequisites for migration
- Migrate data using cqlsh COPY command
- Migrate data using Spark
If you don’t have an Azure subscription, create a free account before you begin.
Prerequisites for migration
Estimate your throughput needs: Before migrating data to the Cassandra API account in Azure Cosmos DB, you should estimate the throughput needs of your workload. In general, it's recommended to start with the average throughput required by the CRUD operations and then include the additional throughput required for the Extract Transform Load (ETL) or spiky operations. You need the following details to plan for migration:
Existing data size or estimated data size: Defines the minimum database size and throughput requirement. If you are estimating the data size for a new application, you can assume that the data is uniformly distributed across the rows and estimate the total by multiplying the number of rows by the average row size.
Required throughput: Approximate read (query/get) and write (update/delete/insert) throughput rate. This value is required to compute the required request units along with steady state data size.
The schema: Connect to your existing Cassandra cluster through cqlsh and export the schema from Cassandra:
cqlsh [IP] "-e DESC SCHEMA" > orig_schema.cql
After you identify the requirements of your existing workload, you should create an Azure Cosmos account, database, and containers according to the gathered throughput requirements.
Determine the RU charge for an operation: You can determine the RUs by using any of the SDKs supported by the Cassandra API. This example shows the .NET version of getting RU charges.
var tableInsertStatement = table.Insert(sampleEntity);
var insertResult = await tableInsertStatement.ExecuteAsync();

foreach (string key in insertResult.Info.IncomingPayload.Keys)
{
    // Each custom payload value is returned as a UTF-8 encoded byte array.
    byte[] valueInBytes = insertResult.Info.IncomingPayload[key];
    string value = Encoding.UTF8.GetString(valueInBytes);
    Console.WriteLine($"CustomPayload: {key}: {value}");
}
Allocate the required throughput: Azure Cosmos DB can automatically scale storage and throughput as your requirements grow. You can estimate your throughput needs by using the Azure Cosmos DB request unit calculator.
Create tables in the Cassandra API account: Before you start migrating data, pre-create all your tables from the Azure portal or from cqlsh. If you are migrating to an Azure Cosmos account that has database level throughput, make sure to provide a partition key when creating the Azure Cosmos containers.
Increase throughput: The duration of your data migration depends on the amount of throughput you provisioned for the tables in Azure Cosmos DB. Increase the throughput for the duration of migration. With the higher throughput, you can avoid rate limiting and migrate in less time. After you've completed the migration, decrease the throughput to save costs. It’s also recommended to have the Azure Cosmos account in the same region as your source database.
Enable SSL: Azure Cosmos DB has strict security requirements and standards. Be sure to enable SSL when you interact with your account. When you use cqlsh, you have an option to provide SSL information.
Options to migrate data
You can move data from existing Cassandra workloads to Azure Cosmos DB by using the following options:
Migrate data using cqlsh COPY command
The CQL COPY command is used to copy local data to the Cassandra API account in Azure Cosmos DB. Use the following steps to copy data:
Get your Cassandra API account’s connection string information:
Sign in to the Azure portal, and navigate to your Azure Cosmos account.
Open the Connection String pane that contains all the information that you need to connect to your Cassandra API account from cqlsh.
Sign in to cqlsh using the connection information from the portal.
Use the CQL COPY command to copy local data to the Cassandra API account.
COPY exampleks.tablename FROM 'filefolderx/*.csv'
Migrate data using Spark
Use the following steps to migrate data to the Cassandra API account with Spark:
Provision an Azure Databricks cluster or an HDInsight cluster
Move data to the destination Cassandra API endpoint by using the table copy operation
Migrating data by using Spark jobs is a recommended option if you have data residing in an existing cluster in Azure virtual machines or any other cloud. This option requires Spark to be set up as an intermediary for one time or regular ingestion. You can accelerate this migration by using Azure ExpressRoute connectivity between on-premises and Azure.
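As an illustration only, a table copy with the open source Spark Cassandra connector typically looks like the sketch below; the keyspace and table names are placeholders, and the spark.cassandra.connection.* settings (host, port, SSL, credentials) are assumed to be switched from the source cluster to the Cassandra API endpoint between the read and the write:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()

// Read the source table from the existing Cassandra cluster.
val sourceDf = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "exampleks", "table" -> "tablename"))
  .load()

// Write the rows to the pre-created table in the Cassandra API account.
sourceDf.write
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "exampleks", "table" -> "tablename"))
  .mode("append")
  .save()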
Clean up resources
When they're no longer needed, you can delete the resource group, the Azure Cosmos account, and all the related resources. To do so, select the resource group for the virtual machine, select Delete, and then confirm the name of the resource group to delete.
Next steps
In this tutorial, you've learned how to migrate your data to Cassandra API account in Azure Cosmos DB. You can now proceed to the following article to learn about other Azure Cosmos DB concepts:
For example, the initial connection from a client might be to cluster node 1, but the client derives from the information it receives that it needs to go directly to another node in the cluster to interact directly with the data for the query. This process can dramatically reduce access times and latency, but also offers near-linear scalability.
When combined with the other RS high availability features, this solution provides high performance and low latency, all while giving applications the ability to cope with topology changes, including add node, remove node, and node failover.
For more about working with the OSS Cluster API, see Using the OSS Cluster API.
Surf is Alfresco's scriptable, Spring-based web framework for building web applications and sites.
Features
- Scripts and templates: Everything in Surf consists of scripts, templates, or configuration. This means no server restarts or compilation.
- Reusability: Surf’s presentation objects, templates, and scripts emphasize reusability. Scoped regions and component bindings allow you to describe presentation with less code.
- Spring Web MVC: Surf plugs in as a view resolver for Spring Web MVC, enabling you to use Surf for all or part of a site's view resolution. Surf renders views on top of annotated controllers and is plug-compatible with Spring Web Flow, Spring Security, Spring Roo, and Spring tag libraries.
- RESTful scripts and templates: All page elements and remote interfaces are delivered through a RESTful API. The full feature set of web scripts is available to Surf applications. Write new remote interfaces or new portlets with a script, a template, and a configuration file.
- Content management: A set of client libraries and out-of-the-box components streamline interoperability with CMIS content management systems, letting you easily access and present Enterprise content using Surf components and templates.
- Two-tier architecture: Surf works in a decoupled architecture where the presentation tier is separate from the content services tier.
- Production, development, and staging/preview: Configure Surf to work in a number of deployment scenarios including development, preview, or production environments.
- Development tools: Tools that plug into the SpringSource suite of development tools include Eclipse add-ons for SpringSource Tool Suite, as well as Spring Roo plug-ins to enable scaffolding and script-driven site generation.
DeleteBackupPlan
Deletes a backup plan. A backup plan can only be deleted after all associated selections of resources have been deleted. Deleting a backup plan deletes the current version of a backup plan. Previous versions, if any, will still exist.
Request Syntax
DELETE /backup/plans/backupPlanId HTTP/1.1
URI Request Parameters
The request requires the following URI parameters.
- backupPlanId
Uniquely identifies a backup plan.
Request Body
The request does not have a request body.
Response Syntax
HTTP/1.1 200
Content-type: application/json

{
   "BackupPlanArn": "string",
   "BackupPlanId": "string",
   "DeletionDate": number,
   "VersionId": "string"
}
- DeletionDate
The date and time a backup plan is deleted.
Type: Timestamp
- VersionId
Unique, randomly generated, Unicode, UTF-8 encoded string that is at most 1,024 bytes long. Version Ids cannot be edited.
Type: String
Errors
For information about the errors that are common to all actions, see Common Errors.
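For reference, the equivalent AWS CLI call is shown below; the backup plan ID is a placeholder value:

aws backup delete-backup-plan --backup-plan-id 1234abcd-12ab-34cd-56ef-1234567890ab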
Deletes a resolver rule. Before you can delete a resolver rule, you must disassociate it from all the VPCs that you associated the resolver rule with. For more information, see DisassociateResolverRule .
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
delete-resolver-rule --resolver-rule-id <value> [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--resolver-rule-id (string)
The ID of the resolver rule that you want to delete.
The following delete-resolver-rule example deletes the specified rule.
Note If a rule is associated with any VPCs, you must first disassociate the rule from the VPCs before you can delete it.
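Before running the delete shown below, any remaining VPC association can be removed with the disassociate-resolver-rule command; the VPC ID here is a placeholder:

aws route53resolver disassociate-resolver-rule \
    --vpc-id vpc-0example1234567890 \
    --resolver-rule-id rslvr-rr-5b3809426bexample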
aws route53resolver delete-resolver-rule \ --resolver-rule-id rslvr-rr-5b3809426bexample
Output:
{ "ResolverRule": { "Id": "rslvr-rr-5b3809426bexample", "CreatorRequestId": "2020-01-03-18:47", "Arn": "arn:aws:route53resolver:us-west-2:111122223333:resolver-rule/rslvr-rr-5b3809426bexample", "DomainName": "zenith.example.com.", "Status": "DELETING", "StatusMessage": "[Trace id: 1-5dc5e05b-602e67b052cb74f05example] Deleting Resolver Rule.", "RuleType": "FORWARD", "Name": "my-resolver-rule", "TargetIps": [ { "Ip": "192.0.2.50", "Port": 53 } ], "ResolverEndpointId": "rslvr-out-d5e5920e3example", "OwnerId": "111122223333", "ShareStatus": "NOT_SHARED" } }
ResolverRule -> (structure)
Information about the DeleteResolverRule request, including the status of the request.
Understanding the benefit of Rapid Tools for Dynamics GP
Migrating to a new ERP system can present a number of complex challenges. For many organizations, migration is slow and cumbersome. Re-entering the detailed histories of customers, vendors, and products into a
new system is tedious and error-prone. In addition, the need to use experienced workers for implementation results in reduced productivity.
To help organizations meet these challenges, Microsoft is delivering a new generation of Rapid Tools for streamlined implementation of Microsoft Dynamics GP. These solutions help organizations save time, improve accuracy, and increase productivity.
Please take a look at the DemoMate and use it as a part of your presentations to your customers. This tool can be a real game-changer for your ability to sell a customer on a rapid configuration and rapid migration of their data to Dynamics GP. And let me know if you have any questions!
Jay Manley
Sets the Rigidbody2D to have kinematic behaviour. When full kinematic contacts are enabled, a kinematic Rigidbody2D will collide with all other Rigidbody2D body types.
When an attached Collider2D is set to trigger, it will always produce a trigger for any Collider2D attached to all other Rigidbody2D body types.
See Also: Rigidbody2D.bodyType, Rigidbody2D.useFullKinematicContacts.
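A minimal usage sketch in C#; the component name is illustrative:

using UnityEngine;

public class MakeKinematic : MonoBehaviour
{
    void Start()
    {
        // Switch the attached Rigidbody2D to kinematic behaviour at runtime.
        var body = GetComponent<Rigidbody2D>();
        body.bodyType = RigidbodyType2D.Kinematic;

        // Optionally generate contacts against all other body types.
        body.useFullKinematicContacts = true;
    }
}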
The following macro variables can be added in your postback URL:
{EVENT_ID} The event id; this value is unique.
{ADVERTISER_ID} The advertiser id.
{COMMISSION_ID} The ID of the commission; this value can occur multiple times if a commission changes its value or gets cancelled.
{COMMISSION} The commission amount. It is positive for a new commission and negative if the commission was cancelled.
{SALES_DATE} The date when the sale happened.
{MODIFIED_DATE} The date when the event happened.
{SUB_ID} The sub id which you might have specified via yk_tag.
NEW:
{EVENT_TYPE} Can be "NEW" or "UPDATE". Indicates whether it is a new commission in our system or if a commission is updated.
{STATE} the commission status, can be “OPEN”,”CONFIRMED”,”REJECTED”,”DELAYED”
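As an illustration, a postback URL template that uses these macros might look like the following; the endpoint is a placeholder for your own tracking server, not a Yieldkit URL:

https://tracking.example.com/postback?event_id={EVENT_ID}&advertiser={ADVERTISER_ID}&commission={COMMISSION}&state={STATE}&sub_id={SUB_ID}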
MtoA 1.4.1
24 November 2016
This version uses the Arnold 4.2.16.0 core.
MtoA 1.4.1 addresses important issues regarding the Texture Workflow. A few more IPR fixes have been done, as well as some improvements in the Arnold RenderView.
Windows users will no longer have to run the installer twice to update MtoA.
When selecting a mesh and doing "Arnold->Light->Mesh light", it now creates a new type of light in Maya that references the selected shape. This allows you to see the light in Maya UIs like the Light Editor, Light-linking Editor, etc., and makes it consistent with other types of Arnold lights. The previous system (changing the mesh parameter "translator" to "mesh_light") is still supported but is now considered deprecated and will be removed in the long-term future.
With this export option, all shading groups are exported (or only the selected ones during export selected), even if they're not assigned to any geometry in the scene. This prevents you from assigning the shaders to dummy geometries for export (as could be done in the past).
min_pixel_width for curves and points is now significantly faster in certain situations.
A new portal mode option can be set to off, interior_only, or interior_exterior, to respectively turn off portals, block any light outside portals for interior-only scenes, and let light outside portals through for mixed interior and exterior scenes.
An option "Legacy Temperature" in the Render Settings (tab "Lights") allows you to restore the previous behavior.
Extension¶
An extension part acts as a placeholder for any interactive element added by an extension, or custom code in the question, which awards marks to the student.
To use an extension part, your code must implement the following methods: (links go to the relevant pages in the Numbas JavaScript API documentation)
If you can create a JME value representing the student’s answer to the part, you should also implement studentAnswerAsJME so that it can be used in adaptive marking by later parts.
See the GeoGebra extension for an example of how to use the extension part.
Localization
The Resco Mobile CRM application has been translated into many languages (see list). Users of the mobile app can select their preferred language in the Setup (unless that component has been removed from the app by the administrator).
Administrators can use Woodford to manage the available languages in the mobile app.
- To modify language options for the entire organization and all projects, select Localizations from the Administration menu.
- To modify language options for a single project, select Localization from the Project menu.
Project-level localization contains user interface texts and entity texts, whereas organization-level localization contains entity texts only.
Contents
Adding a language
- On the toolbar, click New. A new window opens.
- As Language, select the language that you want to create.
- As Template, select the language that will be used as a source.
- Click Save. New language is added to the list. After you save and publish your projects, your mobile users will now see an additional language available.
Editing a translation
This option gives you an ability to add a language mutation of the Mobile CRM application – this means that you can change various display names in the app, name of entities, tabs in forms and other UI elements.
This can be particularly useful if a translation does not fit on the small button of the mobile app. Updating these selected strings, perhaps using abbreviations, can greatly improve the usability of the app.
To modify strings:
- Select a language and click Edit. Alternatively, double-click the language.
- Use the Navigation pane to find the string you want to modify.
- Optionally, use Search in the top right corner of the Labels pane to narrow down your search.
- To modify a label, select it and click Edit (or just double-click it).
- Click Save to save your changes.
If you're not happy with a particular change, use the Restore button.
Use Add or Add Multiple to add custom labels. Use Delete to remove custom labels. You cannot delete the default labels.
Use Export to download the labels in text format. Use Import to upload labels back.
Additional options
- Properties
- Allows you to modify the Template language of a translation.
- Delete
- Remove a language from the list of available languages within the app. Any custom translations are lost.
- Activate
- Make an inactive language active again.
- Deactivate
- Remove a language from the list of available languages within the app. Any custom translations are kept intact.
Tip: use localization to display restricted characters
When designing the user interface in Woodford, you may encounter a restriction that prevents you from using certain characters in the names and labels of UI elements. For example, when naming a view, you can only use alphanumeric characters, space, dash, or underscore. Similar (often less strict) restrictions exist in forms, for example, for the name of a detail tab or an associated list tab.
If you need to display a particular special character, you can use localization to change the label of these elements.
See also
- Localization examples
- Several examples for localizing various elements of the user interface, such as home items, commands, or views.
- Renaming and custom icons on home items and views
- Please check this blog post for examples of localization. It contains examples of creating custom icons, renaming Home items, and changing view names.
- Making a lookup field from text field
- In some cases, users need to enter the same text into some field repeatedly. To help them, you can use localization to predefine some often-used options so that the text field acts as a combination of a lookup field and text field. That way, users can choose from the most frequent options and also have an opportunity to enter a different entry if desired. To see how it works and how to set it up, please check the following webinar.
Step 3: Connect to the Master Node in the Stack
Connect to the master node in the cluster of EC2 instances launched by the AWS CloudFormation template.
For step-by-step instructions, see Connect to Your Linux Instance in the Amazon EC2 User Guide for Linux Instances. As you follow the steps, enable SSH agent forwarding when you connect to the master node. The master node uses SSH agent forwarding to securely connect with worker nodes to run distributed deep learning applications.
On Windows clients, in PuTTY, choose the Allow agent forwarding authentication option.
On Linux and macOS clients, use the ssh command to connect to the instance. Add the -A parameter when using this command. For example:

$ ssh-add -K /path/my-key-pair.pem
$ ssh -A -i /path/my-key-pair.pem ubuntu@public-DNS
Next Step
Step 4: Run Sample Apache MXNet Code
This operation lists the in-progress multipart uploads for the specified vault.
ListMultipartUploadsResponse ListMultipartUploads( ListMultipartUploadsRequest listMultipartUploadsRequest )
- listMultipartUploadsRequest (ListMultipartUploadsRequest)
- Container for the necessary parameters to execute the ListMultipartUploads service method on AmazonGlacier.
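A minimal C# sketch of calling this method is shown below; the concrete client class, the VaultName property, and the vault name itself are assumptions for illustration and should be checked against the SDK version in use:

// Sketch only: create a Glacier client and list in-progress multipart uploads for a vault.
var client = new Amazon.Glacier.AmazonGlacierClient();
var request = new Amazon.Glacier.Model.ListMultipartUploadsRequest { VaultName = "examplevault" };
ListMultipartUploadsResponse response = client.ListMultipartUploads(request);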
Hi,
It is nice to know the Tories aren’t all braying asses:
Leaving these fools behind as they find cloned androgynous
nebbishes to believe their mantras, and think they did well,
when only 18% of the electorate is enough to get them elected
and they come to Parliament to do nothing of value without
any meaningful mandate, rubber stamping law and providing
camouflage as a pretence of democracy, to comply with EU
diktat centrally controlled and undemocratically imposed!
Just as will the next minority support Government.
On the other hand Daniel Hannan & Douglas Carswell,
with a slew of published political material under their belts,
have come up with an idea for our time.
This is where policy will increasingly be made as the people
regain control of their own governance as the servant which
has so grossly and self interestedly betrayed them.
This idea will gather its own momentum as you pass it on.
Here is YOUR chance to seize YOUR destiny.
Join in your own future, whether by making choices,
formulating policies, drafting bills, brain storming and
presenting or by promoting and supporting this brave new idea:
You can down load another copy of this to circulate from:
PLEASE DISTRIBUTE THIS WIDELY
Your Destiny Lies In Your Hands.
Using a MySQL Server¶
This guide is currently for Windows only. Preliminary Linux (Debian) instructions are below (VII). Preliminary Docker (modern Linux OS with Docker & git installed) instructions are below (VIII).
** Updated for commit 775982e **
I. Prerequisites¶
- Have already ran/operated the PokemonGo-Map using the default database setup.
- Have the "develop" build of PokemonGo-Map.

CREATE DATABASE pokemongomapdb;
CREATE USER 'pogomapuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON pokemongomapdb . * TO 'pogomapuser'@'localhost';
exit
You can change pokemongomapdb to a different database name if you want. If the CREATE DATABASE pokemongomapdb; statement didn't take effect, simply retry it, and don't forget your trailing ;.
- Error: “ERROR 1007 (HY000): Can’t create database ‘pokemongomapdb’; database exists” The pokemongomapdb database already exists. If you’re trying to start a fresh database you’ll need to execute
DROP DATABASE pokemongomapdb, and then run
CREATE DATABASE pokemongomapdb. If you want to keep the pokemongomapdb but start a new one, change the name.
Congratulations, your database is now setup and ready to be used.
IV. Setting up the Config.ini file & Editing utils.py¶
Config.ini¶
Open file explorer to where you’ve extracted your develop branch of PokemonGo-Map
Navigate to the “config” folder.
Right-click and open config.ini in your text editor of choice. I used Notepad++.
You’re looking to fill in all the values in this file. If you’ve already ran and used the PokemonGo-Map PokemonGo-Map “pokemongomapdb”
- Change “db-user:” to “pogomap PokemonGo-Map.
Filled out Search_Settings PokemonGoDev discord server, and go to the help channel. People there are great, and gladly assist people with troubleshooting issues.
VI. Final Notes & Credits¶
Final Notes¶
As just some quick closing notes, if you’ve encountered any problems or issues with this guide or find it needs to be updated please don’t hesitate to let me know. I am normally always in the PokemonGoDev discord channels, or you can contact me by other means. I really hope this guide goes a long way in helping others, because I know I was confused when I tried to get the mysql servers setup and without the help I received I would have never got this setup, or this guide written.
Credits¶
I’d just like to credit the PokemonGoDev channel on discord and the many people who have helped me in the past few days. I’ve learned a lot, and while I used to hobby program I just haven’t been able to dig deep into this project. So without the help of the guys in Discord this guide wouldn’t have been possible. So shout out to all of them, because well frankly tons of people helped me at various points along my way.
I’d also like to specifically credit Znuff2471 on discord for their great assistance, definitely one of the main contributors to helping me set this all up.
VII. Linux Instructions¶
- Visit and download mariaDB
- Login to your MySQL DB
- mysql -p
- Enter your password if you set one
- Create the DB
- CREATE DATABASE pokemongomapdb;
- CREATE USER ‘pogomapuser’@’localhost’ IDENTIFIED BY ‘password’;
- GRANT ALL PRIVILEGES ON pokemongomapdb . * TO ‘pogomapuser’@’localhost’;
Quit the MySQL command line tool
quit
Edit the config/config.ini file:

# Database settings
db-type: mysql          # sqlite (default) or mysql
db-host: 127.0.0.1      # required for mysql
db-name: pokemongomapdb # required for mysql
db-user: pogomapuser    # required for mysql
db-pass: YourPW         # required for mysql
VIII. Docker Instructions¶
Core Committers
The core committers team is reviewed approximately annually; new members are added based on quality contributions to SilverStripe code and outstanding community participation.
Core committer team
- Aaron Carlino
- Chris Joe
- Damian Mooyman
- Daniel Hensby
- Hamish Friedlander
- Ingo Schommer
- Jono Menz
- Loz Calver
- Sam Minnée
- Sean Harvey
- Stevie Mayhew
- Stig Lindqvist
- Will Rossiter
House rules for the core committer team
The "core committers" consist of:
The Form class provides a way to create interactive forms in your web application with very little effort. SilverStripe handles generating the correct semantic HTML markup for the form and each of the fields, as well as the framework for dealing with submissions and validation.
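As a quick orientation, a minimal SilverStripe 4 form typically looks like the following PHP sketch; the controller context, field names, and action name are illustrative assumptions rather than code from this page:

use SilverStripe\Forms\Form;
use SilverStripe\Forms\FieldList;
use SilverStripe\Forms\TextField;
use SilverStripe\Forms\FormAction;

public function ContactForm()
{
    // Fields shown to the user and actions (buttons) that submit the form.
    $fields = FieldList::create(
        TextField::create('Name', 'Your name')
    );
    $actions = FieldList::create(
        FormAction::create('doSubmit', 'Submit')
    );

    // The Form is bound to this controller and rendered in a template via $ContactForm.
    return Form::create($this, 'ContactForm', $fields, $actions);
}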
Introduction to Forms
An introduction to creating a Form instance and handling submissions.
Form Validation
Validate form data through the server side validation API.
Form Templates
Customize the generated HTML for a FormField or an entire Form.
Form Security
Ensure Forms are secure against Cross-Site Request Forgery attacks, bots and other malicious intent.
Form Transformations
Provide read-only and disabled views of your Form data.
Tabbed Forms
Find out how CMS interfaces use jQuery UI tabs to provide nested FormFields.
Field types
FormField Documentation
How to's
WPF Platform Setup
Xamarin.Forms now has preview support for the Windows Presentation Foundation (WPF). This article demonstrates how to add a WPF project to a Xamarin.Forms solution.
Before you start, create a new Xamarin.Forms solution in Visual Studio 2017, or use an existing Xamarin.Forms solution, for example, BoxViewClock. You can only add WPF apps to a Xamarin.Forms solution in Windows.
Add a WPF project to a Xamarin.Forms app with Xamarin.University
Xamarin.Forms 3.0 WPF Support, by Xamarin University
Adding a WPF App
Follow these instructions to add a WPF app that will run on the Windows 7, 8, and 10 desktops:
In Visual Studio 2017, right-click on the solution name in the Solution Explorer and choose Add > New Project....
In the New Project window, at the left select Visual C# and Windows Classic Desktop. In the list of project types, choose WPF App (.NET Framework).
Type a name for the project with a WPF extension, for example, BoxViewClock.WPF. Click the Browse button, select the BoxViewClock folder, and press Select Folder. This will put the WPF project in the same directory as the other projects in the solution.
Press OK to create the project.
In the Solution Explorer, right click the new BoxViewClock.WPF project and select Manage NuGet Packages. Select the Browse tab, click the Include prerelease checkbox, and search for Xamarin.Forms.
Select that package and click the Install button.
Now search for Xamarin.Forms.Platform.WPF package and install that one as well. Make sure the package is from Microsoft!
Right click the solution name in the Solution Explorer and select Manage NuGet Packages for Solution. Select the Update tab and the Xamarin.Forms package. Select all the projects and update them to the same Xamarin.Forms version:
In the WPF project, right-click on References. In the Reference Manager dialog, select Projects at the left, and check the checkbox adjacent to the BoxViewClock project:
Edit the MainWindow.xaml file of the WPF project. In the Window tag, add an XML namespace declaration for the Xamarin.Forms.Platform.WPF assembly and namespace:

xmlns:wpf="clr-namespace:Xamarin.Forms.Platform.WPF;assembly=Xamarin.Forms.Platform.WPF"
Now change the Window tag to wpf:FormsApplicationPage. Change the Title setting to the name of your application, for example, BoxViewClock. The completed XAML file should look like this:
<wpf:FormsApplicationPage x:Class="BoxViewClock.WPF.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:wpf="clr-namespace:Xamarin.Forms.Platform.WPF;assembly=Xamarin.Forms.Platform.WPF"
        Title="BoxViewClock" Height="450" Width="800">
    <Grid>
    </Grid>
</wpf:FormsApplicationPage>
Edit the MainWindow.xaml.cs file of the WPF project. Add two new using directives:

using Xamarin.Forms;
using Xamarin.Forms.Platform.WPF;
Change the base class of MainWindow from Window to FormsApplicationPage. Following the InitializeComponent call, add the following two statements:

Forms.Init();
LoadApplication(new BoxViewClock.App());
Except for comments and unused using directives, the complete MainWindow.xaml.cs file should look like this:

using Xamarin.Forms;
using Xamarin.Forms.Platform.WPF;

namespace BoxViewClock.WPF
{
    public partial class MainWindow : FormsApplicationPage
    {
        public MainWindow()
        {
            InitializeComponent();

            Forms.Init();
            LoadApplication(new BoxViewClock.App());
        }
    }
}
Right-click the WPF project in the Solution Explorer and select Set as Startup Project. Press F5 to run the program with the Visual Studio debugger on the Windows desktop:
Next Steps
Platform Specifics
You can determine what platform your Xamarin.Forms application is running on from either code or XAML. This allows you to change program characteristics when it's running on WPF. In code, compare the value of Device.RuntimePlatform with the Device.WPF constant (which equals the string "WPF"). If there's a match, the application is running on WPF.
In XAML, you can use the OnPlatform tag to select a property value specific to the platform:
<Button.TextColor>
    <OnPlatform x:TypeArguments="Color">
        <On Platform="iOS" Value="White" />
        <On Platform="macOS" Value="White" />
        <On Platform="Android" Value="Black" />
        <On Platform="WPF" Value="Blue" />
    </OnPlatform>
</Button.TextColor>
Window Size
You can adjust the initial size of the window in the WPF MainWindow.xaml file:
Title="BoxViewClock" Height="450" Width="800"
Issues
This is a Preview, so you should expect that not everything is production ready. Not all NuGet packages for Xamarin.Forms are ready for WPF, and some features might not be fully working.
Visual Structure
This section defines terms and concepts used in the scope of RadCalendar that you should become familiar with before continuing to read this help. They can also be helpful when contacting our support service, in order to describe your issue better.
Navigation Header - the Header of RadCalendar. Includes the left and right navigation buttons and represents the current month.
Week Days - the distribution of specific days in a month.
Week Numbers - week numbers throughout the year.
Current Date - indicates today's date.
Selected Date - currently selected date or range of dates.
Highlighted Date - currently highlighted date.
Do you think text documentation is boring and you think you can absorb more information by video? Need a data solution ASAP and don't have time to see the documentation? We got you covered. We have a collection of videos to quickstart your SlicingDice use.
Control Panel
The Control Panel is one of the main things on SlicingDice. In the Control Panel, you can do almost anything! Check the video below.
Database
Creating a Database
In order to store everything, you'll need a database. Watch the video below to see an overview of a database creation in practice.
You can also read the text documentation
Database whitelist
You can restrict the access to your database based on IP Address or domain, and you can do that while creating your database in the Control Panel. See the video below to know more about it or read the text documentation
Database Keys
You have two types of keys that you can connect to your database via API: database keys and custom database keys. See more about them in the video below or see the documentation
Editing and Deleting a Database
You can also modify and delete your database. See the video below or read our text documentation
Dimension
Creating a Dimension
Dimension is a collection of columns in a database. Watch the video below to know how to make a Dimension in a database or read about it
Dimensions explained
Still confused about what a Dimension really is? A Dimension is a concept used in SlicingDice to describe a way to group columns in a database. Each database can contain multiple dimensions and each dimension can contain multiple columns. On SlicingDice, every database has at least one dimension (created by default), which is similar to the concept of a table in relational databases. Dimensions are normally used to insert the same kind of data you inserted on another dimension, but associated with a different type of entity. It's also possible to use a dimension like a table in a relational database, where you normally create tables and build relations between them. Check the video below. And as always, you can read the text documentation
Columns
A column is a place where data is stored, associated with its type. Check more in the following video or read about it
Workbench
Workbench is the SlicingDice built-in feature that allows you to make your operations in the Database from Control Panel. See the video below.
SQL Based API
You can connect with your database by using our API. Check the following video for an example of how to use the API.
One of SlicingDice's features is loading data from third-party sources, and it's built into the Control Panel. See more about that below. You can see more details in the documentation
Dashboard and Statistics
Now that you have data loaded and running, you can see the statistics of your databases and queries in the Dashboard. See the next two videos about it.
Data visualization
SlicingDice has a built-in data visualization tool that allows you to see your data in charts. Check the video below and see the documentation about Data visualization
Charts
The main data visualization module is to insert your data and convert it to a chart. Check the video below to see how to create a chart and see the documentation for more details
Types of charts
Charts come in several flavors. See them in the next video and see the details of each chart type in the documentation
Measures, Dimensions and Calculations
You can use some options in our Data Visualization Module to refine your chart. See it in the video below.
Dashboards
A Dashboard in the Data Visualization module is where you can add multiple charts in a single document. See the video below.
Practical Data Visualization example
Do you want a practical example of Data Visualization use? Check the following video.
Users
You can add users and give them specific permissions of what they can do in the Control Panel. See more in the video below and in the documentation
Roles and Policy
If some of your users need to have their permissions restricted, you need to create a new group of Roles and Policy for them with their specific permissions. See more in the following video.
Full video examples
Now that you know everything about the core concepts of SlicingDice, you can see the next two videos to see how everything connects.
string_set_byte_at(str, pos, byte);
Returns: string
This function sets a byte directly in a string (based on the
UTF8 format) and returns a copy of the string with the changes.
NOTE: This function is incredibly slow so consider carefully whether it is necessary and where you use it.
str = string_set_byte_at("hello", 2, 97);
The above code would change the byte value of the second letter in the string, and so set the variable "str" to hold "hallo".
Classtime¶
Classtime is an HTTP API for course data and schedule generation at UAlberta.
Purpose: “Build a university schedule that fits your life in less than five minutes”
It can be used for the following:
- browse terms
- browse courses
- get details on any course
- generate schedules, with support for core courses, electives, and preferences
Classtime currently only supports the University of Alberta.
Ticket Delivery Report allows you to see what was delivered (or what should have been delivered) to your member code on a selected date. This is helpful in determining which tickets you missed and need to resend.
How To Use
- Click on the “Ticket Delivery Report” link under the “Tickets” column
- If you have access to more than one member code, select the correct member code from the drop down list or check the box titled “All authorized members” to display all your member code deliveries in one report
- Enter the date for which you are checking the deliveries
- Click the “Submit” button
Once the report is generated you will see the member code (titled "Code"), the Ticket, Rev (the ticket revision), Seq#, Dest (how it was delivered), Server (either A or B) and Delivered (the date and time of delivery).
All content with label contributor_project+data_grid+infinispan_user_guide.
Related Labels:
tx, user_guide, gui_demo, documentation, remoting, eventing, datagrid, student_project, notification, tutorial, client_server, infinispan, userguide, replication, transactionmanager, hotrod, streaming, docs, consistent_hash,
interface, clustering, deadlock, jta, large_object, jsr-107, async, lucene, xaresource, guide, listener, events, cache, memcached, grid, demo, hash_function, jcache, api, client, non-blocking
Build a Node.js and MongoDB app in Azure App Service on Linux
Note
This article deploys an app to App Service on Linux. To deploy to App Service on Windows, see Build a Node.js and MongoDB app in Azure.
App Service on Linux provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a Node.js app, connect it locally to a MongoDB database, then deploy as Azure Cosmos DB for MongoDB API database. When you're done, you'll have a MEAN application (MongoDB, Express, AngularJS, and Node.js) running in App Service on Linux. For simplicity, the sample application uses the MEAN.js web framework.
What you learn how to:
- Create a database using Azure Cosmos DB for MongoDB API
- v6.0 or above and NPM
-
Ignore the config.domain warning..:
Create production MongoDB
In this step, you create a Cosmos database configured with MongoDB API, in Azure. When your app is deployed to Azure, it uses this cloud Cosmos DB configured with MongoDB API
In a local terminal window, run the following command to minify and bundle scripts for the production environment. This process generates the files needed by the production environment.
gulp prod
In a local terminal window, run the following command to use the connection string you configured in config/env/local-production.js. Ignore the certificate error and the config.domain warning.
Deploy the Node.js application to Azure App Service
Configure local git deployment
In the Cloud Shell, configure deployment credentials with the
az webapp deployment user set command. This deployment user is required for FTP and local Git deployment to a web app. The user name and password are account level. They are different from your Azure subscription credentials.
In the following example, replace <username> and <password> (including brackets) with a new user name and password. The user name must be unique within Azure. The password must be at least eight characters long, with two of the following three elements: letters, numbers, symbols.
az webapp deployment user set --user-name <username> --password <password>
You should get a JSON output, with the password shown as
null. If you get a
'Conflict'. Details: 409 error, change the username. If you get a
'Bad Request'. Details: 400 error, use a stronger password.
You need to configure this deployment user only once; you can use it for all your Azure deployments.
Note
Record the user name and password. You need them to deploy the web app later.
Create an", "location": "West Europe", "maximumNumberOfWorkers": 1, "name": "myAppServicePlan", < JSON data removed for brevity. > "targetWorkerSizeId": 0, "type": "Microsoft.Web/serverfarms", "workerTierName": null }
NODE|6.9. To see all supported runtimes, run
az webapp list-runtimes --linux.
# Bash az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app_name> --runtime "NODE|6.9" --deployment-local-git # PowerShell az --% webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app_name> --runtime "NODE|6.9" - web app, prompted for credentials by Git Credential Manager, make sure that app at any point, App Service doesn't rerun these automation tasks.
Browse to the Azure app
Browse to the deployed Azure Cosmos DB for MongoDB API.
Select Admin > Manage Articles to add some articles.
Congratulations! You're running a data-driven Node.js app in Azure App Service on Linux.
Update data model and redeploy
In this step, you change the
article data model and publish your change to Azure.
Update the data model
In your local MEAN.js repository,.
gulp prod
Commit your changes in Git, then push the code changes to Azure.
git commit -am "added article comment" git push azure master
Once the
git push is complete, navigate to your Azure app and try out the new functionality.
If you added any articles earlier, you still can see them. Existing data in your Cosmos DB is not lost. Also, your updates to the data schema and leaves your existing data intact. database using Azure Cosmos DB for MongoDB API
- Connect a Node.js app to a. | https://docs.microsoft.com/en-us/azure/app-service/containers/tutorial-nodejs-mongodb-app | 2019-01-16T01:40:05 | CC-MAIN-2019-04 | 1547583656577.40 | [array(['media/tutorial-nodejs-mongodb-app/meanjs-in-azure.png',
'MEAN.js app running in Azure App Service'], dtype=object)
array(['media/tutorial-nodejs-mongodb-app/mongodb-connect-success.png',
'MEAN.js connects successfully to MongoDB'], dtype=object)
array(['media/tutorial-nodejs-mongodb-app/meanjs-in-azure.png',
'MEAN.js app running in Azure App Service'], dtype=object)
array(['media/tutorial-nodejs-mongodb-app/added-comment-field.png',
'Added comment field to Articles'], dtype=object)
array(['media/tutorial-nodejs-mongodb-app/added-comment-field-published.png',
'Model and database changes published to Azure'], dtype=object)
array(['media/tutorial-nodejs-mongodb-app/access-portal.png',
'Portal navigation to Azure app'], dtype=object)
array(['media/tutorial-nodejs-mongodb-app/web-app-blade.png',
'App Service page in Azure portal'], dtype=object) ] | docs.microsoft.com |
Rate Limiting Policy
Scenario
Users can set the rate limiting policy in the provider's configuration. By setting the request frequency from a particular micro service, provider can limit the max number of requests per second.
Cautions
- There may be a small difference between the rate limit and actual traffic.
- The provider's rate limit control is for service rather than security. To prevent distributed denial of service(DDos) attacks, you need to take other measures.
- Traffic control is scoped to microservice rather than instance. Consume a consumer microservice has 3 instances, and calls a provider service. After configuring the rate limit policy, the provider won't distinguish which consumer instance makes the request, but take all requests together as the 'consume request' for rate limiting.
Configuration
Rate limiting policies are configured in the microservice.yaml file. The table below shows all the configuration items. To enable the provider's rate limit policy, you also need to configure the rate limiting handler in the server's handler chain and add dependencies in the pom.xml file.
- An example of rate limit configuration in microservice.yaml:
servicecomb: handler: chain: Provider: default: qps-flowcontrol-provider
- Add the handler-flowcontrol-qps dependency in the pom.xml file:
<dependency> <groupId>org.apache.servicecomb</groupId> <artifactId>handler-flowcontrol-qps</artifactId> <version>1.0.0-m1</version> </dependency>
QPS rate limit configuration items
Notes:
The
ServiceNamein provider's rate limit config is the name of the consumer that calls the provider. While
schemaand
operationis the provider's own config item. That is, the rate limit policy controls the consumer requests that call the provider's schema or operation. | https://docs.servicecomb.io/java-chassis/en_US/build-provider/configuration/ratelimite-strategy.html | 2019-01-16T02:05:41 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.servicecomb.io |
UpdateProject.,
"").
Request Syntax
PUT /projects/
projectNameHTTP/1.1 Content-type: application/json { "description": "
string", "placementTemplate": { "defaultAttributes": { "
string" : "
string" }, "deviceTemplates": { "
string" : { "callbackOverrides": { "
string" : "
string" }, "deviceType": "
string" } } } }
URI Request Parameters
The request requires the following URI parameters.
- projectName
The name of the project to be updated.
Length Constraints: Minimum length of 1. Maximum length of 128.
Pattern:
^[0-9A-Za-z_-]+$
Request Body
The request accepts the following data in JSON format.
- description
An optional user-defined description for the project.
Type: String
Length Constraints: Minimum length of 0. Maximum length of 500.
Required: No
- placementTemplate
An object defining the project update. Once a project has been created, you cannot add device template names to the project. However, for a given
placementTemplate, you can update the associated
callbackOverridesfor the device definition using this API.
Type: PlacementTemplate object
Required: No
Response Syntax
HTTP/1.1 200
- ResourceNotFoundException
HTTP Status Code: 404
- TooManyRequestsException
HTTP Status Code: 429
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/iot-1-click/latest/projects-apireference/API_UpdateProject.html | 2019-01-16T01:59:59 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.aws.amazon.com |
Windows Forms Configuration Section
Windows Forms configuration settings allow a Windows Forms app to store and retrieve information about customized application settings such as multi-monitor support, high DPI support, and other predefined configuration settings.
Windows Forms application configuration settings are stored in an application configuration file's
System.Windows.Forms.ApplicationConfigurationSection element.
Syntax
<configuration> <System.Windows.Forms.ApplicationConfigurationSection> ... </System.Windows.Forms.ApplicationConfigurationSection> </configuration>
Attributes and elements
The following sections describe attributes, child elements, and parent elements.
Attributes
None.
Child elements
Parent elements
Remarks
Starting with the .NET Framework 4.7, the
<System.Windows.Forms.ApplicationConfigurationSection> element allows you to configure Windows Forms applications to take advantage of features added in recent releases of the .NET Framework.
The
<System.Windows.Forms.ApplicationConfigurationSection> element can include one or more child
<add> elements, each of which defines a specific configuration setting.
See also
Configuration File Schema
High DPI Support in Windows Forms | https://docs.microsoft.com/en-us/dotnet/framework/configure-apps/file-schema/winforms/index | 2019-01-16T02:12:03 | CC-MAIN-2019-04 | 1547583656577.40 | [] | docs.microsoft.com |
Warming up Dedicated IP Addresses
When determining whether to accept or reject a message, email service providers consider the reputation of the IP address that sent it. One of the factors that contributes to the reputation of an IP address is whether the address has a history of sending high-quality email. Email providers are less likely to accept mail from new IP addresses that have little or no history. Email sent from IP addresses with little or no history may end up in recipients' junk mail folders, or may be blocked altogether.
When you start sending email from a new IP address, you should gradually increase the amount of email you send from that address before using it to its full capacity. This process is called warming up the IP address.
The amount of time required to warm up an IP address varies between email providers. For some email providers, you can establish a positive reputation in around two weeks, while for others it may take up to six weeks. When warming up a new IP address, you should send emails to your most active users to ensure that your complaint rate remains low. You should also carefully examine your bounce messages and send less email if you receive a high number of blocking or throttling notifications. For information about monitoring your bounces, see Monitoring Your Amazon SES Sending Activity.
Automatically Warm up Dedicated IP Addresses
When you request dedicated IP addresses, Amazon SES automatically warms them up to improve the delivery of emails you send. The automatic IP address warm-up feature is enabled by default.
The steps that happen during the automatic warm-up process depend on whether or not you already have dedicated IP addresses:
When you request dedicated IP addresses for the first time, Amazon SES distributes your email sending between your dedicated IP addresses and a set of addresses that are shared with other Amazon SES customers. Amazon SES gradually increases the number of messages sent from your dedicated IP addresses over time.
If you already have dedicated IP addresses, Amazon SES distributes your email sending between your existing dedicated IPs (which are already warmed up) and your new dedicated IPs (which are not warmed up). Amazon SES gradually increases the number of messages sent from your new dedicated IP addresses over time.
After you warm up a dedicated IP address, you should send around 1,000 emails every day to each email provider that you want to maintain a positive reputation with. You should perform this task on each dedicated IP address that you use with Amazon SES.
You should avoid sending large volumes of email immediately after the warm-up process is complete. Instead, slowly increase the number of emails you send until you reach your target volume. If an email provider sees a large, sudden increase in the number of emails being sent from an IP address, they may block or throttle the delivery of messages from that address.
Disable the Automatic Warm-up Process
When you purchase new dedicated IP addresses, Amazon SES automatically warms them up for you. If you prefer to warm up dedicated IP addresses yourself, you can disable the automatic warm-up feature.
Important
If you disable the automatic warm up feature, you are responsible for warming up your dedicated IP addresses yourself. If you send email from addresses that haven't been warmed up, you may experience poor delivery rates.
To disable the automatic warm-up feature
Sign in to the AWS Management Console and open the Amazon SES console at.
In the navigation bar on the left, choose Dedicated IPs.
Clear the box next to Automatic IP warm-up.
Restart the Automatic Warm-up Process
You can restart the automatic IP warm-up process for a set of IP addresses that belong to a dedicated IP pool.
To restart the automatic warm-up process
Sign in to the AWS Management Console and open the Amazon SES console at.
In the navigation bar on the left, choose Dedicated IPs.
In the dedicated IP pool for which you want to restart the warm-up process, choose Actions, and then choose Restart IP warm up.
The status of the automatic warm-up process is in the Warm Up Status column; when the warm-up process is finished, this column will say
Complete. | https://docs.aws.amazon.com/ses/latest/DeveloperGuide/dedicated-ip-warming.html | 2018-03-17T10:54:57 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.aws.amazon.com |
Online Subsystem Steam
Information contained here pertains specifically to the Steam implementation. Any additional setup steps, tips & tricks, and workarounds can be found here.
The Steam module implements most of the interfaces exposed through the online subsystem and supports most of what Valve offers through the Steamworks SDK.
Steam Interfaces
Matchmaking (Lobbies / GameServer APIs)
Leaderboards
Achievements
Voice
UserCloud
SharedCloud
External UI
All games must have a valid Steam App ID. During development, this App ID is exposed to the SDK via a file called steam_appid.txt which must reside in the same directory as the executable. The file is generated by Unreal Engine at launch and deleted during graceful shutdown of the engine. This makes it unnecessary to launch the game via the Steam client (although it must be running). The file should not be included in any Steam images.
In shipping builds, the engine will also check to make sure the logged in user is properly subscribed to the game and will shutdown the engine if not. This is but one way to help secure the game. Using Steam DRM (see the Steamworks SDK) should further protect the game from tampering.
Basic Setup
The Steam subsystem requires some additional setup through Valve. Contact Valve and use their documentation to make sure you are setup on their end before attempting to use Steam in the Unreal Engine.
Installing the Steamworks SDK
For legal reasons, using Steam requires downloading the latest SDK from Valve. Currently this is v1.29a, but any future update should be a straightforward change to the path name. See Steamworks.build.cs in the ThirdParty/Steamworks directory
Using Steam against the precompiled version of the engine should only require copying some of the DLLs from Valve's SDK into the appropriate places. If you intend to recompile the engine against the source, putting the SDK in the right place is required as well.
The SDK needs to be unzipped/copied to this path
/YourUnrealEnginePath/Engine/Source/ThirdParty/Steamworks/Steamv130/sdk
Find the following binaries from the /redistributable_bin/ directory of the SDK and copy them to their noted locations.
Note: Some of the 64bit dlls can be found in your normal Steam client directory. Valve for some reason does not include all of them in the SDK.
/YourUnrealEnginePath/Engine/Binaries/ThirdParty/Steamworks/Steamv130/Win64
steam_api64.dll
steamclient64.dll
tier0_s64.dll
vstdlib_s64.dll
/YourUnrealEnginePath/Engine/Binaries/ThirdParty/Steamworks/Steamv130/Win32
steam_api.dll
steamclient.dll
tier0_s.dll
vstdlib_s.dll
/YourUnrealEnginePath/EngineOrGameFolder/Binaries/Mac/YourGame.app/Contents/MacOS
libsteam_api.dylib (from /redistributable_bin/osx32 - single dylib has both 32 and 64 bit support)
If you are compiling the entire engine, you will want to modify the following line in OnlineSubsystemSteam\Private\OnlineSubsystemSteamPrivatePCH.h
define STEAM_SDK_VER TEXT("Steamv130")
to make sure it references the new SDK directory location / version
INI Configuration
Turn on some settings in the game's
DefaultEngine.ini.
The SteamDevAppId of 480 is Valve's test app id, shared by everyone. You will need your own app id eventually, but most features of Steam should work before then.
["
Module Setup
Make sure to include the Unreal Engine Steam module as part of your project. (see 虚幻引擎编译系统的目标文件 for additional help)
Adding the following should be enough to make sure that the Steam module is built along with your game. It goes inside the constructor of mygame.build.cs
DynamicallyLoadedModuleNames.Add("OnlineSubsystemSteam");
Steam Overlay on Mac
Contrary to Windows, Steam Overlay on Mac requires game to be launched using Steam client. For this you first need to add the game to your library using "Add a Non-Steam Game to My Library" option from Steam's Games menu. | http://docs.manew.com/ue4/811.html | 2018-03-17T10:41:36 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.manew.com |
Accessing the User Buffers for an I/O Operation
The FLT_PARAMETERS structure for an I/O operation contains the operation-specific parameters for the operation, including buffer addresses and memory descriptor lists (MDL) for any buffers that are used in the operation.
For IRP-based I/O operations, the buffers for the operation can be specified by using:
MDL only (typically for paging I/O)
Buffer address only
Buffer address and MDL
For fast I/O operations, only the user-space buffer address is specified. Fast I/O operations that have buffers always use neither buffered nor direct I/O and thus never have MDL parameters.
The following topics provide guidelines for handling buffer addresses and MDLs for IRP-based and fast I/O operations in minifilter driver preoperation callback routines and postoperation callback routines:
Accessing User Buffers in a Preoperation Callback Routine
Accessing User Buffers in a Postoperation Callback Routine | https://docs.microsoft.com/en-us/windows-hardware/drivers/ifs/accessing-the-user-buffers-for-an-i-o-operation | 2018-03-17T10:50:30 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.microsoft.com |
Goal Management
Guide to configure goal with WordPress Google Analytics WD > Goal Management page. Note, that Goals configured with Google Analytics WD \btest\b,.
v.1.1.8 | http://docs.10web.io/docs/wd-google-analytics/goal-management.html | 2018-03-17T10:39:07 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.10web.io |
change_history Highrise Add-On
Installing the Highrise Add-On
If you are familiar with installing Gravity Forms add-ons or other WordPress plugins, installing the Highrise plugin is the exact same process.
Setting Up the Highrise Add-On
Before you can begin setting up a feed for the Highrise add-on, you will first need to complete a few initial setup steps.
Creating a Feed for the Highrise Add-On
Before you may begin using using the Highrise add-on to send your form submissions from Gravity Forms to Highrise, you will first need to configure a feed that tells the add-on how to interact with your form. | https://docs.gravityforms.com/category/add-ons-gravity-forms/highrise-add-on/ | 2018-03-17T10:41:42 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.gravityforms.com |
Silva, and other Infrae products should follow the coding style defined by pep8, with the following enforcement.
Here is an example:
# Copyright (c) 2010 Infrae. All rights reserved. # See also LICENSE.txt # $Id$ from silva.foo.bar.interfaces import IBar from zope import component, interface CONST_VALUE = 42 def do_something(self): """Method to do something. """ return None class BarImplementation(object): """Implement a bar following the Netherlands specification. """ some_product = [] def retrieve_data(self): """Retrieve Bar data. """ return self.some_product
The following tools can help you: | http://docs.infrae.com/code/coding_style.html | 2018-03-17T10:13:45 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.infrae.com |
partition management
Updated: April 17, 2012
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2008 :
create nc dc=AppPartition,dc=contoso,dc=com ConDc1.contoso.com
Run the
listcommand again to refresh the list of partitions.
Additional references
group membership evaluation
security account management
semantic database analysis | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc730970(v=ws.10) | 2018-03-17T11:06:40 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.microsoft.com |
NFC Data Exchange Format¶
NDEF (NFC Data Exchange Format) is a binary message format to exchange application-defined payloads between NFC Forum Devices or to store payloads on an NFC Forum Tag. A payload is described by a type, a length and an optional identifer encoded in an NDEF record structure. An NDEF message is a sequence of NDEF records with a begin marker in the first and an end marker in the last record.
NDEF decoding and encoding is provided by the
nfc.ndef module.
>>> import nfc.ndef
Parsing NDEF¶
An
nfc.ndef.Message class can be initialized with an NDEF
message octet string to parse that data into the sequence of NDEF
records framed by the begin and end marker of the first and last
record. Each NDEF record is represented by an
nfc.ndef.Record
object accessible through indexing or iteration over the
nfc.ndef.Message object.
>>> import nfc.ndef >>> message = nfc.ndef.Message(b'\xD1\x01\x0ET\x02enHello World') >>> message nfc.ndef.Message([nfc.ndef.Record('urn:nfc:wkt:T', '', '\x02enHello World')]) >>> len(message) 1 >>> message[0] nfc.ndef.Record('urn:nfc:wkt:T', '', '\x02enHello World') >>> for record in message: >>> record.type, record.name, record.data >>> ('urn:nfc:wkt:T', '', '\x02enHello World')
An NDEF record carries three parameters for describing its payload:
the payload length, the payload type, and an optional payload
identifier. The
nfc.ndef.Record.data attribute provides access
to the payload and the payload length is obtained by
len(). The
nfc.ndef.Record.name attribute holds the payload identifier
and is an empty string if no identifer was present in the NDEF
date. The
nfc.ndef.Record.type identifies the type of the
payload as a combination of the NDEF Type Name Format (TNF) field and
the type name itself.
Empty (TNF 0)
An Empty record type (expressed as a zero-length string) indicates that there is no type or payload associated with this record. Encoding a record of this type will exclude the name (payload identifier) and data (payload) contents. This type can be used whenever an empty record is needed; for example, to terminate an NDEF message in cases where there is no payload defined by the user application.
NFC Forum Well Known Type (TNF 1)
An NFC Forum Well Known Type is a URN as defined by RFC 2141, with the namespace identifier (NID) “nfc”. The Namespace Specific String (NSS) of the NFC Well Known Type URN is prefixed with “wkt:”. When encoded in an NDEF message, the Well Known Type is written as a relative-URI construct (cf. RFC 3986), omitting the NID and the “wkt:” -prefix. For example, the type “urn:nfc:wkt:T” will be encoded as TNF 1, TYPE “T”.
Media-type as defined in RFC 2046 (TNF 2)
A media-type follows the media-type BNF construct defined by RFC 2046. Records that carry a payload with an existing, registered media type should use this record type. Note that the record type indicates the type of the payload; it does not refer to a MIME message that contains an entity of the given type. For example, the media type ‘image/jpeg’ indicates that the payload is an image in JPEG format using JFIF encoding as defined by RFC 2046.
Absolute URI as defined in RFC 3986 (TNF 3)
An absolute-URI follows the absolute-URI BNF construct defined by RFC 3986. This type can be used for message types that are defined by URIs. For example, records that carry a payload with an XML-based message type may use the XML namespace identifier of the root element as the record type, like a SOAP/1.1 message may be represented by the URI ‘’.
NFC Forum External Type (TNF 4)
An NFC Forum External Type is a URN as defined by RFC 2141, with the namespace identifier (NID) “nfc”. The Namespace Specific String (NSS) of the NFC Well Known Type URN is prefixed with “ext:”. When encoded in an NDEF message, the External Type is written as a relative-URI construct (cf. RFC 3986), omitting the NID and the “ext:” -prefix. For example, the type “urn:nfc:ext:nfcpy.org:T” will be encoded as TNF 4, TYPE “nfcpy.org:T”.
Unknown (TNF 5)
An Unknown record type (expressed by the string “unknown”) indicates that the type of the payload is unknown, similar to the “application/octet-stream” media type.
Unchanged (TNF 6)
An Unchanged record type (expressed by the string “unchanged”) is used in middle record chunks and the terminating record chunk used in chunked payloads. This type is not allowed in any other record.
>>> import nfc.ndef >>> message = nfc.ndef.Message('\xD0\x00\x00') >>> nfc.ndef.Message('\xD0\x00\x00')[0].type '' >>> nfc.ndef.Message('\xD1\x01\x00T')[0].type 'urn:nfc:wkt:T' >>> nfc.ndef.Message('\xD2\x0A\x00text/plain')[0].type 'text/plain' >>> nfc.ndef.Message('\xD3\x16\x00')[0].type '' >>> nfc.ndef.Message('\xD4\x10\x00example.org:Text')[0].type 'urn:nfc:ext:example.org:Text' >>> nfc.ndef.Message('\xD5\x00\x00')[0].type 'unknown' >>> nfc.ndef.Message('\xD6\x00\x00')[0].type 'unchanged'
The type and name of the first record, by convention, provide the
processing context and identification not only for the first record
but for the whole NDEF message. The
nfc.ndef.Message.type and
nfc.ndef.Message.name attributes map to the type and name
attributes of the first record in the message.
>>> message = nfc.ndef.Message(b'\xD1\x01\x0ET\x02enHello World') >>> message.type, message.name ('urn:nfc:wkt:T', '')
If invalid or insufficient data is provided to the NDEF message parser, an
nfc.ndef.FormatError or
nfc.ndef.LengthError is raised.
>>> try: nfc.ndef.Message('\xD0\x01\x00') ... except nfc.ndef.LengthError as e: print e ... insufficient data to parse >>> try: nfc.ndef.Message('\xD0\x01\x00T') ... except nfc.ndef.FormatError as e: print e ... ndef type name format 0 doesn't allow a type string
Creating NDEF¶
An
nfc.ndef.Record class can be initialized with an NDEF
To build NDEF messages use the
nfc.ndef.Record class to
create records and instantiate an
nfc.ndef.Message object
with the records as arguments.
>>> import nfc.ndef >>> record1 = nfc.ndef.Record("urn:nfc:wkt:T", "id1", "\x02enHello World!") >>> record2 = nfc.ndef.Record("urn:nfc:wkt:T", "id2", "\x02deHallo Welt!") >>> message = nfc.ndef.Message(record1, record2)
The
nfc.ndef.Message class also accepts a list of records as a single argument and it is possible to
nfc.ndef.Message.append() records or
nfc.ndef.Message.extend() a message with a list of records.
>>> message = nfc.ndef.Message() >>> message.append(record1) >>> message.extend([record2, record3])
The serialized form of an
nfc.ndef.Message object is produced with
str().
>>> message = nfc.ndef.Message(record1, record2) >>> str(message) '\x99\x01\x0f\x03Tid1\x02enHello World!Y\x01\x0e\x03Tid2\x02deHallo Welt!'
Specific Records¶
Text Record¶
>>> import nfc.ndef >>> record = nfc.ndef.TextRecord("Hello World!") >>> print record.pretty() text = Hello World! language = en encoding = UTF-8
Uri Record¶
>>> import nfc.ndef >>> record = nfc.ndef.UriRecord("") >>> print record.pretty() uri =
Smart Poster Record¶
>>> import nfc.ndef >>>>> record = nfc.ndef.SmartPosterRecord(uri) >>> record.>> record.title['de'] = "Python Modul für Nahfeldkommunikation" >>> print record.pretty() resource = title[de] = Python Modul für Nahfeldkommunikation title[en] = Python module for near field communication action = default | https://nfcpy.readthedocs.io/en/latest/topics/ndef.html | 2018-03-17T10:40:42 | CC-MAIN-2018-13 | 1521257644877.27 | [array(['../_images/ndefmsg.png', '../_images/ndefmsg.png'], dtype=object)] | nfcpy.readthedocs.io |
Clothesline Installation Eastern Suburbs Melbourne?
Here at Lifestyle Clotheslines, we supply residents of the Eastern Suburbs, Melbourne, with supreme quality clotheslines, such as the Hills Hoist model, along with efficient installation services for houses, townhouses and apartments. We provide Clothesline Installation Eastern Suburbs Melbourne.
Our vast range of clothesline products and accessories cater for a wide range of individual needs and preferences, and may be installed within your home by our clothesline professionals. Our product selection includes:
- Hills, Austral, City Living and Versaline models
- All types of lines - Rotary, Retractable, Foldown, Portable, Indoor, Ceiling
- Accessories and spare parts
Map of the Eastern Suburbs Melbourne area covered for clothesline installations
Suburbs in the Eastern Suburbs Melbourne area we service are:
Our Top 7 Clothesline Recommendations for Eastern Suburbs Melbourne residents
- Hills Hoist Heritage 4 - Traditional design dating back to the 1960s
- Traditional Ceiling Mounted Airer - Built strong and tough to last a lifetime
- Hills Everyday Rotary 47 - Massive 47m of line space
- Hills Portable 120 - Handy extra line for those larger wash loads
- Hills Rotary 7 - Holds multiple loads of washing
- Austral Indoor Outdoor - Perfect for the bathroom, laundry and garage
- Austral Super 5 - Top permanent fixed rotary clothesline
If you need any further assistance in choosing the right clothesline or washing line for your needs or for Clothesline Installation Yarra Ranges | https://docs.lifestyleclotheslines.com.au/article/658-clothesline-installation-eastern-suburbs-melbourne | 2018-03-17T10:12:07 | CC-MAIN-2018-13 | 1521257644877.27 | [] | docs.lifestyleclotheslines.com.au |
For flashcard sessions you can select if the cards should be flipped automatically, and if they should be counted as correct or incorrect. Enter a time delay in seconds for the automatic flipping.
For multiple choice sessions you can select if your choice should be checked immediately or if you have to select → first.
For question & answer sessions you can select if questions where you use → should be counted as incorrect.
For all sessions you can select if the scores should be displayed as percent instead of absolute numbers.
Select to restore all settings to predefined defaults. Select to make your changes without closing the dialog. Select to make your changes and close the dialog. Select to close the dialog without making any changes. | https://docs.kde.org/trunk5/en/kdeedu/kwordquiz/dlg-config-quiz.html | 2016-10-21T13:08:41 | CC-MAIN-2016-44 | 1476988718278.43 | [array(['/trunk5/en/kdoctools5-common/top-kde.jpg', None], dtype=object)
array(['kwq-dlg-configure-quiz.png', 'Quiz Settings'], dtype=object)] | docs.kde.org |
Description / Features
Allows The Branding plugin allows to add your own company logo to the Sonar UI. Two location locations are supported. Here is the default TOP location:
Image Removed
Image Added
And here is the MENU location (can be changed in plugin settings):
Image Removed
The last feature is the possibility
Image Added
It is also possible to add a 'Project Logo' widget on dashboards a dashboard to display a logo for project, which can be configured as following :
Image Removed
Usage and installation
...
a specific project.
Installation
- Install the Branding plugin through the Update Center or download it into the SONAR_HOME/extensions/plugins directory
- Restart the Sonar Web server
- Configure (Global Settings -> Branding)
- Restart Sonar Web server
- Admire your custom logo
Changelog
Usage
- Configure the branding settings:
- At instance level, go to Settings > Configuration > General Settings > Branding
- At project level, go to Configuration > Links
Image Added
- Restart the Sonar server (you have to restart the Sonar server each time you update the branding settings)
- Admire your logo
Change Log
...
...
... | http://docs.codehaus.org/pages/diffpages.action?pageId=194314474&originalId=229743022 | 2013-05-18T11:44:27 | CC-MAIN-2013-20 | 1368696382360 | [] | docs.codehaus.org |
- native JSON builder / parser
Releases
- Groovy 1.8-beta-1: July 2010
- Groovy 1.8-beta-2: September 2010
- Groovy 1.8-beta-3: December 2010
- Groovy 1.8-beta-4: Early February 2011
- Groovy 1.8-RC-1: Mid-February 2011
- Groovy 1.8-RC-2: End-February 2011
- Groovy 1.8 GA: Early March 2011
Groovy 1.9)
- compiler related:
- investigate the integration of the Eclipse joint compiler to replace the Groovy stub-based joint compiler
- investigate making the groovyc compiler multithreaded
- | http://docs.codehaus.org/pages/viewpage.action?pageId=195952649 | 2013-05-18T11:34:12 | CC-MAIN-2013-20 | 1368696382360 | [array(['/s/fr_FR/3278/15/_/images/icons/emoticons/warning.png', None],
dtype=object) ] | docs.codehaus.org |
Tk/Tcl has long been an integral part of Python. It provides a robust and platform independent windowing toolkit, that is available to Python programmers using the tkinter package, and its extension, the tkinter.tix and the tkinter.ttk modules.. | http://docs.python.org/3.3/library/tk.html | 2013-05-18T11:32:05 | CC-MAIN-2013-20 | 1368696382360 | [] | docs.python.org |
SQLAlchemy 0.6 Documentation
- Prev: Examples
- Next: SQLAlchemy Core
- Table of Contents | Index | view source
ORM Exceptions
Table of Contents
Previous Topic
Next Topic
Project Versions
Quick Search
ORM Exceptions¶
SQLAlchemy ORM exceptions.
- sqlalchemy.orm.exc.ConcurrentModificationError¶
alias of StaleDataError
- exception sqlalchemy.orm.exc.DetachedInstanceError¶
Bases: sqlalchemy.exc.SQLAlchemyError
An attempt to access unloaded attributes on a mapped instance that is detached.
- exception sqlalchemy.orm.exc.FlushError¶
Bases: sqlalchemy.exc.SQLAlchemyError
A invalid condition was detected during flush().
- exception sqlalchemy.orm.exc.MultipleResultsFound¶
Bases: sqlalchemy.exc.InvalidRequestError
A single database result was required but more than one were found.
- sqlalchemy.orm.exc.NO_STATE = (<type 'exceptions.AttributeError'>, <type 'exceptions.KeyError'>)¶
Exception types that may be raised by instrumentation implementations.
- exception sqlalchemy.orm.exc.NoResultFound¶
Bases: sqlalchemy.exc.InvalidRequestError
A database result was required but none was found.
- exception sqlalchemy.orm.exc.ObjectDeletedError¶.
- exception sqlalchemy.orm.exc.StaleDataError¶
Bases: sqlalchemy.exc.SQLAlchemyError
An operation encountered database state that is unaccounted for.
Two conditions cause this to happen:
- A flush may have attempted to update or delete rows and an unexpected number of rows were matched during the UPDATE or DELETE statement. Note that when version_id_col is used, rows in UPDATE or DELETE statements are also matched against the current known version identifier.
- A mapped object with version_id_col was refreshed, and the version number coming back from the database does not match that of the object itself.
- exception sqlalchemy.orm.exc.UnmappedClassError(cls, msg=None)¶
Bases: sqlalchemy.orm.exc.UnmappedError
An mapping operation was requested for an unknown class.
- exception sqlalchemy.orm.exc.UnmappedColumnError¶
Bases: sqlalchemy.exc.InvalidRequestError
Mapping operation was requested on an unknown column.
- exception sqlalchemy.orm.exc.UnmappedError¶
Bases: sqlalchemy.exc.InvalidRequestError
Base for exceptions that involve expected mappings not present.
- exception sqlalchemy.orm.exc.UnmappedInstanceError(obj, msg=None)¶
Bases: sqlalchemy.orm.exc.UnmappedError
An mapping operation was requested for an unknown instance. | http://docs.sqlalchemy.org/en/rel_0_6/orm/exceptions.html | 2013-05-18T11:42:19 | CC-MAIN-2013-20 | 1368696382360 | [] | docs.sqlalchemy.org |
When you add an entry to BlackBerry Remember, you can choose between a note or task. Tasks include a completion checkbox and the option to add a due date or reminder.
If you add an entry to a folder that's synced with one of your accounts, you might not be able to choose whether the entry is a note or task.
When you add a due date to an entry in BlackBerry Remember, your BlackBerry device adds the entry to the Calendar app. To receive a reminder, you must add a specific reminder time to your entry.
Tags help you to categorize your entries. For example, you can add the tag "recipe" to any entries containing recipes, and then filter your entries by that tag.
Depending on the account that your entry is associated wtih, you might be able to apply formatting to italicize, bold, or underline text, create lists, or change the text size and color. | http://docs.blackberry.com/en/smartphone_users/deliverables/62002/mwa1337889718814.html | 2014-09-16T01:00:06 | CC-MAIN-2014-41 | 1410657110730.89 | [] | docs.blackberry.com |
otT; mostly focused on style and rendering enhancements right now.
Planning
If your organization is making use of GeoTools please talk to us about project goals, time line and upcoming releases. GeoTools uses an open development process and your contributions can make the difference..
For more information please visit our About page.
Thanks
Please click here for a complete list of organizations supporting GeoTools.
The following organizations ask that their logos be included on our front page.
News and Events
Blog stream
Create a blog post to share news and announcements with your team and company. | http://docs.codehaus.org/pages/viewpage.action?pageId=119537840 | 2014-09-16T01:46:58 | CC-MAIN-2014-41 | 1410657110730.89 | [] | docs.codehaus.org |
Interpolate over a 2-D grid.
x, y and z are arrays of values used to approximate some function f: z = f(x, y). This class returns a function whose call method uses spline interpolation to find the value of new points.
If x and y represent a regular grid, consider using RectBivariateSpline.
See also
Notes
The minimum number of data points required along the interpolation axis is (k+1)**2, with k=1 for linear, k=3 for cubic and k=5 for quintic interpolation.
The interpolator is constructed by bisplrep, with a smoothing factor of 0. If more control over smoothing is needed, bisplrep should be used directly.
Examples
Construct a 2-D grid and interpolate on it:
>>> from scipy import interpolate >>> x = np.arange(-5.01, 5.01, 0.25) >>> y = np.arange(-5.01, 5.01, 0.25) >>> xx, yy = np.meshgrid(x, y) >>> z = np.sin(xx**2+yy**2) >>> f = interpolate.interp2d(x, y, z, kind='cubic')
Now use the obtained interpolation function and plot the result:
>>> xnew = np.arange(-5.01, 5.01, 1e-2) >>> ynew = np.arange(-5.01, 5.01, 1e-2) >>> znew = f(xnew, ynew) >>> plt.plot(x, z[:, 0], 'ro-', xnew, znew[:, 0], 'b-') >>> plt.show()
Methods | http://docs.scipy.org/doc/scipy-0.12.0/reference/generated/scipy.interpolate.interp2d.html | 2014-09-16T00:55:08 | CC-MAIN-2014-41 | 1410657110730.89 | [] | docs.scipy.org |
, there is another configuration
file,
user.json, which contains runtime parameters for the gateway
application, including:
max_lease_time- the maximum duration, in seconds, of an acquired lease
fe_tcp_port- the TCP port on which the gateway application listens, 4929 by default
- the
sizeentry in the
receiver_configmap determines the number of
cvmfs_receiver
worker processes that are spawned (default value is 1, should not be increased beyond the number of available CPU cores)
To access the gateway service API, the specified
fe_tcp.
Publisher configuration¶
This section describes how to set up a publisher for a specific CVMFS repository. The precondition is a working gateway machine where the repository has been created as a Stratum 0.
Example:¶
-
then make changes to the repository, and publish:
$ cvmfs_server publish
Displaying and clearing leases on the gateway machine¶
The
cvmfs-gateway package includes two scripts intended to help gateway administrators debug or unblock the gateway in case of problems.
The first one displays the list of currently active leases:
$ /usr/libexec/cvmfs-gateway/scripts/get_leases.sh
The second one will clear all the currently active leases:
$ /usr/libexec/cvmfs-gateway/scripts/clear_leases.sh} | https://cvmfs.readthedocs.io/en/2.6/cpt-repository-gateway.html | 2021-06-12T19:55:00 | CC-MAIN-2021-25 | 1623487586390.4 | [] | cvmfs.readthedocs.io |
The Export As… command allows you to store your image
in a format other than XCF.
Please refer to 1절. “파일” for information
about exporting in different file formats.
You can access this command from the image menubar through
File → Export As…,
or by using the keyboard shortcut
Shift+Ctrl+E. | https://docs.gimp.org/ko/gimp-file-export-as.html | 2021-06-12T21:01:39 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.gimp.org |
Summary Name Protocol Description eduGAIN SAML Federation of research and educational providers supported by Geant eduTEAMS OIDC Group management service integrated with research and educational providers provided by Geant FreeIPA REST API Support for synchronisation of Waldur identities with open-source Identity Management server Keycloak OIDC Open-source identity management server LDAP LDAP/S Support of identity servers over LDAP protocol TARA OIDC Estonian State Autentication service Last update: 2021-05-03 | https://docs.waldur.com/admin-guide/identities/summary/ | 2021-06-12T19:36:13 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.waldur.com |
You are viewing documentation for Kubernetes version: v1.19
Kubernetes v1.19 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
A Very Happy Birthday Kubernetes
Last year at OSCON, I got to reconnect with a bunch of friends and see what they have been working on. That turned out to be the Kubernetes 1.0 launch event. Even that day, it was clear the project was supported by a broad community -- a group that showed an ambitious vision for distributed computing.
Today, on the first anniversary of the Kubernetes 1.0 launch, it’s amazing to see what a community of dedicated individuals can do. Kubernauts have collectively put in 237 person years of coding effort since launch to bring forward our most recent release 1.3. However the community is much more than simply coding effort. It is made up of people -- individuals that have given their expertise and energy to make this project flourish. With more than 830 diverse contributors, from independents to the largest companies in the world, it’s their work that makes Kubernetes stand out. Here are stories from a couple early contributors reflecting back on the project:
- Sam Ghods, services architect and co-founder at Box
- Justin Santa Barbara, independent Kubernetes contributor
- Clayton Coleman, contributor and architect on Kubernetes on OpenShift at Red Hat
The community is also more than online GitHub and Slack conversation; year one saw the launch of KubeCon, the Kubernetes user conference, which started as a grassroot effort that brought together 1,000 individuals between two events in San Francisco and London. The advocacy continues with users globally. There are more than 130 Meetup groups that mention Kubernetes, many of which are helping celebrate Kubernetes’ birthday. To join the celebration, participate at one of the 20 #k8sbday parties worldwide: Austin, Bangalore, Beijing, Boston, Cape Town, Charlotte, Cologne, Geneva, Karlsruhe, Kisumu, Montreal, Portland, Raleigh, Research Triangle, San Francisco, Seattle, Singapore, SF Bay Area, or Washington DC.
The Kubernetes community continues to work to make our project more welcoming and open to our kollaborators. This spring, Kubernetes and KubeCon moved to the Cloud Native Compute Foundation (CNCF), a Linux Foundation Project, to accelerate the collaborative vision outlined only a year ago at OSCON …. lifting a glass to another great year.
-- Sarah Novotny, Kubernetes Community Wonk | https://v1-19.docs.kubernetes.io/blog/2016/07/happy-k8sbday-1/ | 2021-06-12T20:56:32 | CC-MAIN-2021-25 | 1623487586390.4 | [] | v1-19.docs.kubernetes.io |
<<
health.conf
The following are the spec and example files for
health.conf.
health.conf.spec
# Version 7. 08 November, 2018
This documentation applies to the following versions of Splunk® Enterprise: 7.2.1
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/7.2.1/Admin/Healthconf | 2021-06-12T21:22:05 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
>> completes quickly.
See Rebalance indexer cluster primary bucket copies..
See Rebalance indexer cluster simply, see cluster/master/control/control/rebalance_primaries in the Rest API Reference Manual. cluster/master/peers in the Rest API Reference Manual.
Summary of indexer cluster primary rebalancing
Primary rebalancing is the rebalancing of the primary assignments across existing searchable copies in the cluster.
Primary rebalancing the following:
-. on the master node.. progress.! | https://docs.splunk.com/Documentation/Splunk/8.0.4/Indexer/Rebalancethecluster | 2021-06-12T21:46:37 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Access Groups
What are Access Groups?
Access Groups allow you to limit which members of your team can access the sites you have configured. This allows agencies to provide access to clients, but restrict them to seeing their own site or sites. Groups are also useful if you have multiple product teams that only need access to the site(s) they are responsible for.
Access Groups is a feature only available to teams on the Agency Plan.
Basic Concepts
- A list of all of your groups is available at /dashboard/groups. From here you can add a new group, or manage an existing one.
- A group consists of sites and members. Any site that is associated with a group can only be seen by members of that group.
- From any site dashboard, click the Access Groups menu item to see the groups associated with the site and the users who currently have access.
- For advanced use cases, sites can belong to any number of groups. This provides an extremely flexible way to control access.
It’s important to note, that any site that is not in a group can be accessed by all users on your team.
Common Use Case 1 - Agency with Client Users
- Create a group. We recommend calling it something like “Internal” or “Agency Team”.
- Add all current sites to the group.
- Add all of your internal users to the group. This would typically be all the users on your team who are not clients.
- Create a group for each client.
- Add the client’s sites as well as the client’s users to this group.
- When adding a new site, be sure it is added to your “Internal” group. Likewise if you add team members to your internal team.
Your internal team will be able to access all sites. Your clients will only be able to access sites associated with the client group.
Common Use Case 2 - Product Teams
- Create a group for each product your team works with. For example: “Marketing Team” and “App Team”.
- Add the appropriate site(s) and user(s) to each product team.
- When adding a new site, be sure it is added to the appropriate team’s group. Likewise if you add a new team member.
Each product team will only be able to access the sites they are responsible for. | https://docs.tidydom.com/teams/access-groups/ | 2021-06-12T20:40:30 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.tidydom.com |
Manage IP addresses¶.
Disassociate a floating IP address from an instance in the project.
Delete a floating IP from the project which automatically deletes that IP’s associations.
Use the openstack commands to manage floating IP addresses.
List floating IP address information¶.
Associate floating IP addresses¶ | +---------------------+------+---------+------------+-------------+------------------+------------+
Note the server ID to use.
List ports associated with the selected server.
$ openstack port list --device-id SERVER_ID +--------------------------------------+------+-------------------+--------------------------------------------------------------+--------+ | ID | Name | MAC Address | Fixed IP Addresses | Status | +--------------------------------------+------+-------------------+--------------------------------------------------------------+--------+ | 40e9dea9-f457-458f-bc46-6f4ebea3c268 | | fa:16:3e:00:57:3e | ip_address='10.0.0.4', subnet_id='23ee9de7-362e- | ACTIVE | | | | | 49e2-a3b0-0de1c14930cb' | | | | | | ip_address='fd22:4c4c:81c2:0:f816:3eff:fe00:573e', subnet_id | | | | | | ='a2b3acbe-fbeb-40d3-b21f-121268c21b55' | | +--------------------------------------+------+-------------------+--------------------------------------------------------------+--------+
Note the port ID to use.
Associate an IP address with an instance in the project, as follows:
$ openstack floating ip set --port PORT_ID FLOATING_IP_ADDRESS
For example:
$ openstack floating ip set --port 40e9dea9-f457-458f-bc46-6f4ebea3c268.
Disassociate floating IP addresses¶
To disassociate a floating IP address from an instance:
$ openstack floating ip unset --port. | https://docs.openstack.org/nova/wallaby/user/manage-ip-addresses.html | 2021-06-12T20:18:55 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.openstack.org |
Importing a bank statement
Note: In order to be able to import a bank statement, it needs to be a .csv (comma separated value) file.
To import a bank statement, you need a .csv (comma separated value) statement from your bank, this can be retrieved through online banking or via email.
Once you have gotten the .csv file, click on the Import button at the bottom of the Navigation Bar.
You will be brought to a window where you can choose the .csv file and choose the bank that the bank statement pertains to.
Once completed, click on the Upload CSV button. You will then be redirected to a window where you can save and categorize expenses, or delete them.
Once the expenses have been saved, they will automatically be added onto the list of existing expenses. | https://docs.clica.co/importing-bank-statements | 2021-06-12T19:48:15 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['/img/article-img/11MdxBawwD.png', None], dtype=object)
array(['/img/article-img/INrV9SVmvm.png', None], dtype=object)] | docs.clica.co |
What is HumanFirst
HumanFirst provides infrastructure, APIs and workflows to transform unstructured conversational and utterance data into accurate and scalable NLU training data & models
#❇️ Visual Data Studio
AI-assisted human workflows that provide superpowers for:
- Exploring data
- Labeling data
- Fixing and improving model accuracy
- Scaling model
👉 Keep discovering Studio here
#❇️ APIs
State-of-the-art APIs trained from your data, powering Studio and your own applications:
- Prediction
- Query
- Recommend
👉 Explore detailed API documentation
#❇️ Integrations
Export your data to any NLU or conversational AI platform with Studio or our Command Line Tool (CLI).
👉 View integration options here
#❇️ Data Pipeline
Scalable indexing and querying of uploaded unlabeled and labeled data that powers our APIs and Studio
👉 Technical overview here
#Built for long tail scale
- Index and query millions of unlabeled data in real-time
- Manage training datasets with thousands of intents, hundreds of thousands of training phrases
#Centralized hub
- Built for product, data science, labeling and dev teams
- Increased transparency and collaboration between stakeholders around data and use-cases
- Collaborative workspaces and workflows
- Share training datasets and build taxonomies to easily re-use across projects | https://docs.humanfirst.ai/docs/ | 2021-06-12T21:01:37 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.humanfirst.ai |
This article describes how to scale your resource by a custom metric in Azure portal.
Azure Monitor autoscale applies only to Virtual Machine Scale Sets, Cloud Services, App Service - Web Apps, Azure Data Explorer Cluster , Integration Service Environment and API Management services.
Lets get started
This article assumes that you have a web app with application insights configured. If you don't have one already, you can set up Application Insights for your ASP.NET website
- Open Azure portal
- Click on Azure Monitor icon in the left navigation pane.
- Click on Autoscale setting to view all the resources for which auto scale is applicable, along with its current autoscale status
- Open 'Autoscale' blade in Azure Monitor and select a resource you want to scale
Note: The steps below use an app service plan associated with a web app that has app insights configured.
- In the scale setting blade for the resource, notice that the current instance count is 1. Click on 'Enable autoscale'.
- Provide a name for the scale setting, and the click on "Add a rule". Notice the scale rule options that opens as a context pane in the right hand side. By default, it sets the option to scale your instance count by 1 if the CPU percentage of the resource exceeds 70%. Change the metric source at the top to "Application Insights", select the app insights resource in the 'Resource' dropdown and then select the custom metric based on which you want to scale.
- Similar to the step above, add a scale rule that will scale in and decrease the scale count by 1 if the custom metric is below a threshold.
- Set the instance limits. For example, if you want to scale between 2-5 instances depending on the custom metric fluctuations, set 'minimum' to '2', 'maximum' to '5' and 'default' to '2'
Note: In case there is a problem reading the resource metrics and the current capacity is below the default capacity, then to ensure the availability of the resource, Autoscale will scale out to the default value. If the current capacity is already higher than default capacity, Autoscale will not scale in.
- Click on 'Save'
Congratulations. You now successfully created your scale setting to auto scale your web app based on a custom metric.
Note: The same steps are applicable to get started with a VMSS or cloud service role. | https://docs.microsoft.com/en-us/azure/azure-monitor/autoscale/autoscale-custom-metric | 2021-06-12T21:05:57 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.microsoft.com |
Send knowledge base content to your contact
Sometimes you want to share content from your knowledge base. Web1on1 has made that really convenient for you.
This is how it works:
- Use tab to move your cursor from the messaging zone to the search bar
- Enter your search term
- Select your topic (works with up, down arrow keys)
- Click copy
See? There is nothing to it. Write your content with sharing in mind and you are all set. Sharing is easier than copy pasting. | https://docs.web1on1.chat/article/577jcznrdf-send-knowledge-base-content-to-your-contact | 2021-06-12T19:42:52 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['https://files.helpdocs.io/7gr3uufy93/articles/2y6iu711lb/1601835065867/copy-kb-content-to-message-input.gif',
None], dtype=object) ] | docs.web1on1.chat |
You are viewing documentation for Kubernetes version: v1.20
Kubernetes v1.20 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.
Scheduler Performance Tuning
Kubernetes v1.14 [beta]
kube-scheduler is the Kubernetes default scheduler. It is responsible for placement of Pods on Nodes in a cluster.
Nodes in a cluster that meet the scheduling requirements of a Pod are called feasible Nodes for the Pod. The scheduler finds feasible Nodes for a Pod and then runs a set of functions to score the feasible Nodes, picking a Node with the highest score among the feasible ones to run the Pod. The scheduler then notifies the API server about this decision in a process called Binding.
This page explains performance tuning optimizations that are relevant for large Kubernetes clusters.
In large clusters, you can tune the scheduler's behaviour balancing scheduling outcomes between latency (new Pods are placed quickly) and accuracy (the scheduler rarely makes poor placement decisions).
You configure this tuning setting via kube-scheduler setting
percentageOfNodesToScore. This KubeSchedulerConfiguration setting determines
a threshold for scheduling nodes in your cluster.
Setting the threshold
The
percentageOfNodesToScore option accepts whole numeric values between 0
and 100. The value 0 is a special number which indicates that the kube-scheduler
should use its compiled-in default.
If you set
percentageOfNodesToScore above 100, kube-scheduler acts as if you
had set a value of 100.
To change the value, edit the
kube-scheduler configuration file
and then restart the scheduler.
In many cases, the configuration file can be found at
/etc/kubernetes/config/kube-scheduler.yaml.
After you have made this change, you can run
kubectl get pods -n kube-system | grep kube-scheduler
to verify that the kube-scheduler component is healthy.
Node scoring threshold
To improve scheduling performance, the kube-scheduler can stop looking for feasible nodes once it has found enough of them. In large clusters, this saves time compared to a naive approach that would consider every node.
You specify a threshold for how many nodes are enough, as a whole number percentage of all the nodes in your cluster. The kube-scheduler converts this into an integer number of nodes. During scheduling, if the kube-scheduler has identified enough feasible nodes to exceed the configured percentage, the kube-scheduler stops searching for more feasible nodes and moves on to the scoring phase.
How the scheduler iterates over Nodes describes the process in detail.
Default threshold
If you don't specify a threshold, Kubernetes calculates a figure using a linear formula that yields 50% for a 100-node cluster and yields 10% for a 5000-node cluster. The lower bound for the automatic value is 5%.
This means that, the kube-scheduler always scores at least 5% of your cluster no
matter how large the cluster is, unless you have explicitly set
percentageOfNodesToScore to be smaller than 5.
If you want the scheduler to score all nodes in your cluster, set
percentageOfNodesToScore to 100.
Example
Below is an example configuration that sets
percentageOfNodesToScore to 50%.
apiVersion: kubescheduler.config.k8s.io/v1alpha1 kind: KubeSchedulerConfiguration algorithmSource: provider: DefaultProvider ... percentageOfNodesToScore: 50
Tuning percentageOfNodesToScore
percentageOfNodesToScore must be a value between 1 and 100 with the default
value being calculated based on the cluster size. There is also a hardcoded
minimum value of 50 nodes.
Note:
In clusters with less than 50 feasible nodes, the scheduler still checks all the nodes because there are not enough feasible nodes to stop the scheduler's search early.
In a small cluster, if you set a low value for
percentageOfNodesToScore, your change will have no or little effect, for a similar reason.
If your cluster has several hundred Nodes or fewer, leave this configuration option at its default value. Making changes is unlikely to improve the scheduler's performance significantly.
An important detail to consider when setting this value is that when a smaller number of nodes in a cluster are checked for feasibility, some nodes are not sent to be scored for a given Pod. As a result, a Node which could possibly score a higher value for running the given Pod might not even be passed to the scoring phase. This would result in a less than ideal placement of the Pod.
You should avoid setting
percentageOfNodesToScore very low so that kube-scheduler
does not make frequent, poor Pod placement decisions. Avoid setting the
percentage to anything below 10%, unless the scheduler's throughput is critical
for your application and the score of nodes is not important. In other words, you
prefer to run the Pod on any Node as long as it is feasible.
How the scheduler iterates over Nodes
This section is intended for those who want to understand the internal details of this feature.
In order to give all the Nodes in a cluster a fair chance of being considered
for running Pods, the scheduler iterates over the nodes in a round robin
fashion. You can imagine that Nodes are in an array. The scheduler starts from
the start of the array and checks feasibility of the nodes until it finds enough
Nodes as specified by
percentageOfNodesToScore. For the next Pod, the
scheduler continues from the point in the Node array that it stopped at when
checking feasibility of Nodes for the previous Pod.
If Nodes are in multiple zones, the scheduler iterates over Nodes in various zones to ensure that Nodes from different zones are considered in the feasibility checks. As an example, consider six nodes in two zones:
Zone 1: Node 1, Node 2, Node 3, Node 4 Zone 2: Node 5, Node 6
The Scheduler evaluates feasibility of the nodes in this order:
Node 1, Node 5, Node 2, Node 6, Node 3, Node 4
After going over all the Nodes, it goes back to Node 1. | https://v1-20.docs.kubernetes.io/docs/concepts/scheduling-eviction/scheduler-perf-tuning/ | 2021-06-12T21:03:45 | CC-MAIN-2021-25 | 1623487586390.4 | [] | v1-20.docs.kubernetes.io |
Concepts¶
The Airflow platform is a tool for describing, executing, and monitoring workflows.
Core Ideas¶
DAGs¶
Operators¶])
Relationship Helper¶.
Task Lifecycle¶
A task goes through various stages from start to completion. In the Airflow UI (graph and tree views), these stages are displayed by a color representing each stage:
The happy flow consists of the following stages:
no status (scheduler created empty task instance)
queued (scheduler placed a task dags trigger.
Workflows¶.
To combine Pools with SubDAGs see the SubDAGs section.
Connections¶_2<<_3<<
We can combine all of the parallel
task-* operators into a single SubDAG,
so that the resulting DAG resembles the following:
:
>>IMAGE also clears the state of the tasks within
marking success on a SubDagOperator does email parameter, the sla_miss_callback specifies an additional Callable object to be invoked when the SLA is not met.
You can configure the email that is being sent in your
airflow.cfg
by setting a
subject_template_6<<.
.
Zombies & Undeads¶)
Documentation & Notes¶. | https://airflow-apache.readthedocs.io/en/latest/concepts.html | 2021-06-12T20:58:26 | CC-MAIN-2021-25 | 1623487586390.4 | [array(['_images/task_lifecycle.png', '_images/task_lifecycle.png'],
dtype=object)
array(['_images/task_manual_vs_scheduled.png',
'_images/task_manual_vs_scheduled.png'], dtype=object)
array(['_images/branch_note.png', '_images/branch_note.png'], dtype=object)
array(['_images/subdag_before.png', '_images/subdag_before.png'],
dtype=object)
array(['_images/subdag_after.png', '_images/subdag_after.png'],
dtype=object)
array(['_images/subdag_zoom.png', '_images/subdag_zoom.png'], dtype=object)
array(['_images/branch_without_trigger.png',
'_images/branch_without_trigger.png'], dtype=object)
array(['_images/branch_with_trigger.png',
'_images/branch_with_trigger.png'], dtype=object)
array(['_images/latest_only_with_trigger.png',
'_images/latest_only_with_trigger.png'], dtype=object)] | airflow-apache.readthedocs.io |
PivotGridField.MinHeight Property
Gets or sets the minimum allowed height of rows that correspond to the current field. This is a dependency property.
Namespace: DevExpress.Xpf.PivotGrid
Assembly: DevExpress.Xpf.PivotGrid.v21.1.dll
Declaration
Property Value
Remarks
The PivotGridField.Height property cannot be set to a value less than the minimum allowed one, specified by the MinHeight property. End-users also cannot set a row height to a value less than MinHeight.
Note that specifying a row height has different effects for column, row and data fields. To learn more, see PivotGridField.Height.
To denote a default value of the MinHeight property, use the PivotGridField.DefaultMinHeight field.
See Also
Feedback | https://docs.devexpress.com/WPF/DevExpress.Xpf.PivotGrid.PivotGridField.MinHeight | 2021-06-12T21:08:38 | CC-MAIN-2021-25 | 1623487586390.4 | [] | docs.devexpress.com |
Indicates whether the underlying database table uses record numbers to indicate the order of records.
function IsSequenced: Boolean; virtual;
virtual __fastcall Boolean IsSequenced();
Use IsSequenced to determine whether the underlying database table supports sequence numbers, or whether these are computed by the dataset component. When IsSequenced returns true, applications can safely use the RecNo property to navigate to records in the dataset.
As implemented in TDataSet, IsSequenced always returns true. Descendants of TDataSet reimplement this method to return a value that depends on the underlying table type. | http://docs.embarcadero.com/products/rad_studio/delphiAndcpp2009/HelpUpdate2/EN/html/delphivclwin32/DB_TDataSet_IsSequenced.html | 2012-05-26T00:00:58 | crawl-003 | crawl-003-011 | [] | docs.embarcadero.com |
This help file, help file.
Ipswitch Collaboration Suite (ICS ), the Ipswitch Collaboration Suite (ICS) logo, IMail, the IMail logo, WhatsUp, the WhatsUp logo, WS_FTP, the WS_FTP logos, Ipswitch Instant Messaging (IM), the Ipswitch Instant Messaging (IM) logo, Ipswitch, and the Ipswitch logo are trademarks of Ipswitch, Inc. Other products and their brands or company names are or may be trademarks or registered trademarks, and are the property of their respective companies.
Code derived from MIME .NET/MAIL.NET. Copyright© 2003- 2006 Hunny Software, Inc. All Rights Reserved.
Code derived from CuteEditor for .NET version 6. Copyright© 2006 by CuteSoft Components, Inc. All rights reserved.
IMail Web Messaging v10, February 2008
IMail Web Messaging 2006.22, October 2007
IMail Web Messaging 2006.21, July 2007
IMail Web Messaging 2006.2, February 2007
IMail Web Messaging 2006.1 Help, July 2006
IMail Web Messaging 2006.04 Help, April 2006
IMail Web Messaging 2006.03 Help, March 2006
IMail Web Messaging 2006.02 Help, January 2006
IMail Web Messaging 2006.01 Help, December 2005
IMail Web Messaging 2006 Help, November 2005 | http://docs.ipswitch.com/_Messaging/IMailServer/v10/Help/Client/about.htm | 2012-05-25T15:36:49 | crawl-003 | crawl-003-011 | [] | docs.ipswitch.com |
# Creating and Publishing Content
# Introduction
Welcome to the first tutorial in the Modyo training series. In this tutorial, you'll create and publish content using Modyo Content, the Modyo tool for managing dynamic, cross-platform sites.
# Dynamic Bank
Dynamic Bank is our fictional brand that we built to use in all our demos and tutorials. With Dynamic Bank you can live the experience of building digital products with Modyo.
Once you complete this tutorial series, your project should look like this:
# Prerequisites
You only need to have a Modyo account and have access to the platform. Don't have an account? You can request one with the platform administrator at your company, or request a trial here (opens new window).
# Step 1: Create a Space
Once you log in to Modyo with your account, we'll go to the Modyo Content module to create our first Space. A Space is where you group content types and entries from your sites.
To create your Space, follow these steps:
- In the main menu, select Content and click on Spaces.
- Click + New Space.
- In the New Space window fill in the following fields:
- Name: Bank
- Identifier: bank
- Default language: **Spanish (Spain) **
- Realm of Space: None
- Click Create.
# Step 2: Create "Hero" Type
Create your first content type by following these steps:
- In the Spaces window, click on the space Bank.
- From the main menu, click Types.
- Click + New Type and fill in the following fields:
- Name: Hero
- Identifier: hero
- Cardinality: Multiple
- In the content type window, drag the items in the following order.
# Step 3: Create and Post "Hero" Type entry
To create your first entry of type “Hero”, follow these steps:
- In the main menu, click Entries.
- Click + New Entry.
- Select the content type Hero and fill in the following values:
- Name: "Wherever you are, Dynamic Bank is with you"
- Identifier: dynamicbank_hero
- For the rest of the fields, use the following values:
When finished, select Publish Now and click Publish.
Very good! You have created your first Type and Entry successfully
Now follow the steps below to create the Types and Tickets you'll need for future tutorials.
# Step 4: Create "News" Type
From the main menu return to the Types section. As with the type “Hero”, create the type “News” with the following fields:
# Step 5: Create and publish News
Go to Entries and create the following entries for the “News” type:
# First Entry
# Second Entry
# Third Entry
At the end of each entry, select Publish Now and click Publish.
# Step 6: Create “Benefits” Type
Following the same steps, create the type for “Benefits” with the fields:
# Step 7: Create Categories
The categories are used to sort your entries. To filter your entries of type “Benefits” create the following categories.
- From the main menu, click Categories.
- Click + New Category and create the following categories:
- Gourmet
- Health
- Activities
- Shopping
- Travel
- Click Save.
Your category window should like the following image.
# Step 8: Create and Publish Benefits
In the main menu, click Entries. Create the Benefits entries with the following fields:
# Benefit One
# Benefit Two
# Benefit Three
# Benefit Four
# Benefit Five
# Benefit Six
# Benefit Seven
At the end of each entry, select Publish Now and click Publish.
# Step 9: Create "Testimonial" Type
Create the last type for testimonials, for this type you will need the following fields:
# Step 10: Create and Publish Testimonials
Create two entries with the following fields:
# Testimonial One
# Testimonial Two
Remember to publish the posts you've created.
# Conclusion
Congratulations! You used Modyo Content to its full potential using Spaces, Types, Entries, and Categories to generate all the content you need to build the Home page for Dynamic Bank.
We already have all our entries to be able to develop the Front-end and the Home page for Dynamic Bank can be generated from Modyo Channels while the content is changed from Modyo Content.
What comes next? Managing this content from a Web site created in Modyo Channels. | https://develop.docs.modyo.com/en/platform/tutorials/how-to-create-content.html | 2022-06-25T11:45:31 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['/assets/img/tutorials/how-to-create-dynamicbank-content/home.png',
None], dtype=object)
array(['/assets/img/tutorials/how-to-create-dynamicbank-content/new-space.png',
None], dtype=object)
array(['/assets/img/tutorials/how-to-create-dynamicbank-content/create-space.png',
None], dtype=object)
array(['/assets/img/tutorials/how-to-create-dynamicbank-content/hero.png',
'Type Hero'], dtype=object)
array(['/assets/img/tutorials/how-to-create-dynamicbank-content/publish.png',
None], dtype=object)
array(['/assets/img/tutorials/how-to-create-dynamicbank-content/categories.png',
'Type'], dtype=object) ] | develop.docs.modyo.com |
Logging
Payara Server captures information about events that occur and records this information using its logging mechanism into log files.
In a Payara Server domain, the information is logged into the following files by default:
Payara Server uses Java Logging (JUL) to format and output log records. The default configuration file is located at
${DOMAIN_DIR}/config/logging.properties.
Logging settings can be configured by either using the Asadmin CLI or in the Web Admin Console, like in the following screenshot:
Fast Logging
Whenever logging occurs on an application, if the LogRecord in question contains set parameters, they will undergo a forced transformation by having a
toString() method call. In most cases this is the desirable outcome, but will not provide the best performance. You can enable the Fast Logging setting to skip this forced parameter transformation at runtime.
A common use case of this feature would be to prevent database access done by JPA entities, as it is a common occurrence for entity data to be logged out for auditing purposes.
Log to File
The
Log to File option allow you to enable and disable the action of logging to a file. When disabled this should help to minimize disk usage.
Enable Log to File using the Admin Console
To configure the
Log to File option using theAdmin Console:
Log To Console
The
Log to Console option controls if the server writes the logging entries directly to the console. The property only has an effect when the server or instance is started in verbose mode.
To enable the log to console option you simply have to start the domain or instance using corresponding server process.
Once the server is run in verbose mode, you can use either the Admin Console or the Asadmin CLI to modify this setting.
Configure Logging To Console using the Admin Console
To configure the
Log to Console option using the Admin Console:
Multiline Mode Support
When the Multiline mode is enabled, each log entry’s message body will be printed on a new line after the message header for each log record. This will improve the overall readability of the server’s log.
Here’s a quick example of how multiline formatted log entries look like:
Environment Variable Replacement
The logging properties file supports environment variables like
.level=${ENV=logLevel}
Whenever the server starts up or the logging properties are changed, the value for the
.level property will be taken from the environment variable
logLevel.
Access Logging Max File Size
Payara Server provides different ways to rotate HTTP access log files. This section will detail the use of the max size of the log file to trigger a rotation.
The Max File Size option provides a way to change the file size at which the PayaraServer rotates the access log file. This option accepts an integer value specifying the maximum size of the log file, after which a file rotation will occur.
Notification Logging Service
The Notification Logging Service captures information about events which come from other services, such as the JMX Monitoring Service, the HealthCheck service or the Request Tracing service and stores these entries it into a log file.
All the generated entries are stored in server.log by default. It is possible to configure the Log Notifier to store its output in a separate log file. More information on the Log Notifier can be found on the the Log Notifier section of the Notification Service overview.
The Notification Logging Service uses its own collection of logging properties which are separate from the standard logging facilities of Payara Server. However, they are stored in the same configuration file.
Configuring the Notification Logging Service
Enabling or Disabling Logging to a File
The.
Rotation on Date Change
The Rotation On Date Change option provides a way to set the log rotation when the system date changes (at midnight, by default).
Rotation on File Size
The File Rotation Limit option provides a way to change the file size at which the server triggers the log file rotation. This option accepts an integer value specifying the maximum size of the log file, after which a file rotation will occur. The minimum size it can be set to is
500KB (
500.000 bytes).
Rotation on Time Limit
The File Rotation Time Limit option provides a way to trigger the log file rotation based on a fixed time limit. The value of this setting is an integer that defines the time limit in minutes until the log rotation gets triggered.
Change the Logging Format
The Log File Logging Format option can be used to change the log entries' format. There are 3 logging formats available:
ULF,
ODL and
JSON, each one represented by an specific formatter class present in the Payara Platform API.
Set the Maximum Number of Historic Files
The.
Change the Name and Location of the Log File
The
Log File option provides a way to change the default name and location of the server log files.
Enable File Compression on Rotation
The
Compress on Rotation option provides a way to enable the automatic compression of log files on rotation.
Log Rotation
File rotation keeps log files manageable, as older log files are automatically deleted after a certain amount of time, and its proper configuration is recommended to keep a healthy disk space management.
Enabling file rotation
By default a size rotation of
2MB is used for server logs, meaning no log files will be deleted until the size limit is reached and a new file is created at midnight.
Payara Server has different rotation conditions which can be fine-tuned based on your needs:
- Time
Daily, weekly, monthly or even hourly log rotation.
- Size
Logs are rotated when they exceed a certain limit.
- Number
Maximum number of entries kept in a log file.
These settings can be configured in the Admin Console:
Which allows you to change how the logs are rotated to your needs and can be combined with the default "daily" log rotation.
ANSI Coloured Logging
Payara Server supports the use of ANSI coloured log entries when running in verbose mode.
To enable ANSI colours run the following command using the Asadmin CLI:
asadmin> set-log-attributes com.sun.enterprise.server.logging.UniformLogFormatter.ansiColor=true
Log File Compression on Rotation
Payara Server can be configured to automatically compress rotated log files in an automatic manner to save disk space.
Using the Web Admin Console
When log rotation is enabled, you can turn on automatic compression in the
Logger Settings section of the Admin Console, by ticking the
Compress on Rotation checkbox:
Using the Asadmin CLI
Use the following command to enable or disabled the automatic compression of log files on rotation:
asadmin> set-log-attributes com.sun.enterprise.server.logging.GFFileHandler.compressOnRotation='true'
JSON Log Formatter
Besides the standard Uniform Log Format (ULF) and Oracle Diagnostics Logging (ODL) formats (inherited from Payara Server’s source: GlassFish Server Open Source Edition), Payara Server provides a JSON format. With this format, every entry is formatted as a JSON object string. These entries can be easily processed by any JSON parser for further data processing.
Once the JSON formatter is enabled, the server’s log file may look similar to this sample:
Enable the JSON formatter using the Web Admin Console
To enable the JSON formatter using the Admin Console, just select
JSON from the list of Logging Formats, either for
Console or
Log File:
Enable the JSON formatter using the Asadmin CLI
The following command will enable the JSON formatter:
asadmin> set-log-attributes com.sun.enterprise.server.logging.GFFileHandler.formatter='fish.payara.enterprise.server.logging.JSONLogFormatter'
Configure Prefixed field names
In some situations, the JSON representation of a log entry may use field names that clash with existing standard field names that logging gathering tools may use them for specific purposes. To solve this problem, Payara Server can be configured to automatically prefix all field names in the JSON object representation with an underscore (
_) character. See the following sample to get an idea of how such a JSON payload would look like:
The following command will enable this configuration setting:
asadmin> set-log-attributes fish.payara.deprecated.jsonlogformatter.underscoreprefix=true
Support for Additional Fields
The JSON Log Formatter also supports the customization of additional fields through the
setParameters method of the LogRecord class that is part of the standard
java.util.logging package. This action is done when logging a new entry at runtime, so it is limited to an application business logic context.
Here’s a quick example of how to pass additional fields to the resulting JSON object by using a map with a single entry:
LogRecord lr = new LogRecord(Level.INFO, "Sample message"); lr.setParameters(new Object[]{Collections.singletonMap("key", "value")}); logger.log(lr);
Exclude Fields
All the three log formatters
ODLLogFormatter,
UniformLogFormatter, and
JSONLogFormatter support excluding log entry fields when being recorded. This makes the log file more compact and removes unnecessary information in the case you do not need it or want to use it.
You can change the
com.sun.enterprise.server.logging.GFFileHandler.excludeFields within the
<PAYARA_HOME>/glassfish/domains/<domain-name>/config/logging.properties file or use the Admin Console the Asadmin CLI.
Using the Admin Console
To configure the excluded fields in the log entries, select them on the Logger settings screen:
Using the Asadmin CLI
Use the following command to change the excluded fields:
asadmin> set-log-attributes com.sun.enterprise.server.logging.GFFileHandler.excludeFields=tid,version | https://docs.payara.fish/enterprise/docs/Technical%20Documentation/Payara%20Server%20Documentation/Logging%20and%20Monitoring/Logging.html | 2022-06-25T11:06:53 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['../../../_images/logging/logging_setup.png', 'Logger Settings'],
dtype=object)
array(['../../../_images/logging/log_to_file.png', 'Log to File enabled'],
dtype=object)
array(['../../../_images/logging/log_to_file.png',
'Log to Console disabled'], dtype=object)
array(['../../../_images/logging/multiline_example.png',
'Multiline mode in the Web Console'], dtype=object)
array(['../../../_images/logging/daily-log-rotation.png',
'File rotation settings'], dtype=object)
array(['../../../_images/logging/log_rotation_settings.png',
'Log rotation settings'], dtype=object)
array(['../../../_images/logging/compress_on_rotation.png',
'Compress on rotation enabled'], dtype=object)
array(['../../../_images/logging/json_example.png',
'Example log file with JSON format'], dtype=object)
array(['../../../_images/logging/json_config.png',
'JSON format configuration in Web Console'], dtype=object)
array(['../../../_images/logging/json_underscore_prefix_example.png',
'Example log file with underscore prefix in JSON fields'],
dtype=object)
array(['../../../_images/logging/exclude-fields.png', 'Exclude Fields'],
dtype=object) ] | docs.payara.fish |
Single-row ACID transactions
YugabyteDB offers ACID semantics for mutations involving a single row or rows that fall within the same shard (partition, tablet). These mutations incur only one network roundtrip between the distributed consensus peers.
Even read-modify-write operations within a single row or single shard, such as the following incur only one round trip in YugabyteDB.
UPDATE table SET x = x + 1 WHERE ... INSERT ... IF NOT EXISTS UPDATE ... IF EXISTS
Note that this is unlike Apache Cassandra, which uses a concept called lightweight transactions to achieve correctness for these read-modify-write operations and incurs 4-network round trip latency.
Reading the latest data from a recently elected leader
In a steady state, when the leader is appending and replicating log entries, the latest majority-replicated entry is exactly the committed one. However, it is a bit more complicated right after a leader change. When a new leader is elected in a tablet, it appends a no-op entry to the tablet's Raft log and replicates it, as described in the Raft protocol. Before this no-op entry is replicated, we consider the tablet unavailable for reading up-to-date values and accepting read-modify-write operations. This is because the new tablet leader needs to be able to guarantee that all previous Raft-committed entries are applied to RocksDB and other persistent and in-memory data structures, and it is only possible after we know that all entries in the new leader's log are committed.
Leader leases: reading the latest data in case of a network partition
Leader leases are a mechanism for a tablet leader to establish its authority for a certain short time period in order to avoid the following inconsistency:
- The leader is network-partitioned away from its followers
- A new leader is elected
- The client writes a new value and the new leader replicates it
- The client reads a stale value from the old leader.
The leader lease mechanism in YugabyteDB prevents this inconsistency. It works as follows:
With every leader-to-follower message (AppendEntries in Raft's terminology), whether replicating new entries or even an empty heartbeat message, the leader sends a "leader lease" request as a time interval, e.g. could be "I want a 2-second lease". The lease duration is usually a system-wide parameter. For each peer, the leader also keeps track of the lease expiration time corresponding to each pending request (i.e. time when the request was sent + lease duration), which is stored in terms of local monotonic time (CLOCK_MONOTONIC in Linux). The leader considers itself as a special case of a "peer" for this purpose. Then, as it receives responses from followers, it maintains the majority-replicated watermark of these expiration times as stored at request sending time. The leader adopts this majority-replicated watermark as its lease expiration time, and uses it when deciding whether it can serve consistent read requests or accept writes.
When a follower receives the above Raft RPC, it reads the value of its current monotonic clock, adds the provided lease interval to that, and remembers this lease expiration time, also in terms of its local monotonic time. If this follower becomes the new leader, it is not allowed to serve consistent reads or accept writes until any potential old leader's lease expires.
To guarantee that any new leader is aware of any old leader's lease expiration, another bit of logic is necessary. Each Raft group member records the latest expiration time of an old leader that it knows about (in terms of this server's local monotonic time). Whenever a server responds to a RequestVote RPC, it includes the largest remaining amount of time of any known old leader's lease in its response. This is handled similarly to the lease duration in a leader's AppendEntries request on the receiving server: at least this amount of time has to pass since the receipt of this request before the recipient can service up-to-date requests in case it becomes a leader. This part of the algorithm is needed so that we can prove that a new leader will always know about any old leader's majority-replicated lease. This is analogous to Raft's correctness proof: there is always a server ("the voter") that received a lease request from the old leader and voted for the new leader, because the two majorities must overlap.
Note that we are not relying on any kind of clock synchronization for this leader lease implementation, as we're only sending time intervals over the network, and.
The leader lease mechanism guarantees that at any point in time there is at most one server in any tablet's Raft group that considers itself to be an up-to-date leader that is allowed to service consistent reads or accept write requests.
Safe timestamp assignment for a read request
Every read request is assigned a particular MVCC timestamp / hybrid time (let's call it ht_read), which allows write operations to the same set of keys to happen in parallel with reads. It is crucial, however, that the view of the database as of this timestamp is not updated by concurrently happening writes. In other words, once we've picked ht_read for a read request, no further writes to the same set of keys can be assigned timestamps lower than or equal to ht_read. As we mentioned above, we assign strictly increasing hybrid times to Raft log entries of any given tablet. Therefore, one way to assign ht_read safely would be to use the hybrid time of the last committed record. As committed Raft log records are never overwritten by future leaders, and each new leader reads the last log entry and updates its hybrid time, all future records will have strictly higher hybrid times.
However, with this conservative timestamp assignment approach, ht_read can stay the same if there is no write workload on this particular tablet. This will result in a client-observed anomaly if TTL(time-to-live) is being used: no expired values will disappear, as far as the client is concerned, until a new record is written to the tablet. Then, a lot of old expired values could suddenly disappear. To prevent this anomaly, we need to assign the read timestamp to be close to the current hybrid time (which is in its turn close to the physical time) to preserve natural TTL semantics. We should therefore try to choose ht_read to be the highest possible timestamp for which we can guarantee that all future write operations in the tablet will have a strictly higher hybrid time than that, even across leader changes.
For this, we need to introduce a concept of "hybrid time leader leases", similar to absolute-time leader leases discussed in the previous section. With every Raft AppendEntries request to a follower, whether it is a regular request or an empty / heartbeat request, a tablet leader computes a "hybrid time lease expiration time", or ht_lease_exp for short, and sends that to the follower. ht_lease_exp is usually computed as current hybrid time plus a fixed configured duration (e.g. 2 seconds). By replying, followers acknowledge the old leader's exclusive authority over assigning any hybrid times up to and including ht_lease_exp. Similarly to regular leases, these hybrid time leases are propagated on votes. The leader maintains a majority-replicated watermark, and considers itself to have replicated a particular value of a hybrid time leader lease expiration if it sent that or a higher ht_lease_exp value to a majority of Raft group members. For this purpose, the leader is always considered to have replicated an infinite leader lease to itself.
Definition of safe time
Now, suppose the current majority-replicated hybrid time leader lease expiration is replicated_ht_lease_exp. Then the safe timestamp for a read request can be computed as the maximum of:
- Last committed Raft entry's hybrid time
- One of:
- If there are uncommitted entries in the Raft log: the minimum ofthe first uncommitted entry's hybrid time - ε (where ε is the smallest possible difference in hybrid time) and replicated_ht_lease_exp.
- If there are no uncommitted entries in the Raft log: the minimum of the current hybrid time and replicated_ht_lease_exp.
In other words, the last committed entry's hybrid time is always safe to read at, but for higher hybrid times, the majority-replicated hybrid time leader lease is an upper bound. That is because we can only guarantee that no future leader will commit an entry with hybrid time less than ht if ht < replicated_ht_lease_exp.
Note that when reading from a single tablet, we never have to wait for the chosen ht_read to become safe to read at because it is chosen as such already. However, if we decide to read a consistent view of data across multiple tablets, ht_read could be chosen on one of them, and we'll have to wait for that timestamp to become safe to read at on the second tablet. This will typically happen very quickly, as the hybrid time on the second tablet's leader will be instantly updated with the propagated hybrid time from the first tablet's leader, and in the common case we will just have to wait for pending Raft log entries with hybrid times less than ht_read to be committed.
Propagating safe time from leader to followers for follower-side reads
YugabyteDB supports reads from followers to satisfy use cases that require an extremely low read latency that can only be achieved by serving read requests in the data center closest to the client. This feature comes at the expense of potentially slightly stale results, and this is a trade-off that application developers have to make. Similarly to strongly-consistent leader-side reads, follower-side read operations also have to pick a safe read timestamp.
As before, "safe time to read at" means that no future writes are supposed to change the view of the data as of the read timestamp. However, only the leader is able to compute the safe read time using the algorithm described in the previous section. Therefore, we propagate the latest safe time from leaders to followers on AppendEntries RPCs. This means, for example, that follower-side reads handled by a partitioned-away follower will see a "frozen" snapshot of the data, including values with TTL specified not timing out. When the partition is healed, the follower will start getting updates from the leader and will be able to return read results that would be very close to up-to-date. | https://docs.yugabyte.com/preview/architecture/transactions/single-row-transactions/ | 2022-06-25T11:15:12 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.yugabyte.com |
Iso14CE.ActivateCardAPDU
This command starts the reader's passive mode, emulating an ISO14443-4 compatible card, and returns the first APDU request received.
The emulated PICC answers all ISO14443-3 request, anticollision and select commands. It utilizes the 4-Byte serial number specified in the Snr parameter. The Byte order corresponds to ISO14443-3: The first Byte (uid0) will be transmitted first to the reader. It has to be 0x08 as this indicates a random serial number.
To identify potential errors, the PCD starts a timer within which the PICC has to respond, after sending out each frame. This time duration is the so-called frame waiting time (FWT). FWT is determined by the PICC during protocol activation and is valid for all further communication. This parameter should be chosen as large as required, but as small as possible. The communication speed between card emulator and host and the processing speed of the host should be taken into consideration. It is possible to increase the FWT temporarily for a single frame by calling the Iso14CE.ExtendWaitingTime command. Since the ISO14443 protocol specification only allows discrete FWT values, the firmware calculates the lowest value that meets the specified waiting time according to the equation
FWI = (256 * 16 / fc) * 2 ^ FWT,
where fc is the RFID carrier frequency. The highest possible FWT value is 4949 ms.
2 timeout parameters triggering a timer have to be specified for this command:
- The first timer, associated with the TimeoutPCD parameter, is used for the card activation sequence. Card activation is complete once the emulated PICC has received the RATS command. If the emulated PICC doesn't receive the required protocol activation sequence within TimeoutPCD, this command will return an Iso14CE.ErrIso144State status code. For TimeoutPCD, we recommend a value of 1000 ms - this provides the best results for the protocol activation sequence.
- The second timer is optional (default value: 100 ms) and associated with the TimeoutApdu parameter. It stops as soon as the emulated PICC has received an optional PPS command frame or an APDU Exchange command after the RATS command as defined in ISO14443-4. If the emulated PICC doesn't receive anything within TimeoutApdu, this command will return an Iso14CE.ErrIso144State status code. Otherwise, the first APDU request is returned in the command's response.
The ATS (historical bytes) the card emulation shall use may be specified by the ATS parameter if required. This parameter may also be left out, in which case no historical bytes are sent.
As already mentioned, ISO14443-4 specifies that a card has to send a response within FWT ms. The command I4CE.ExtendWaitingTime can be called to extend this time temporarily if the host cannot prepare the APDU within the defined FWT time. A more convenient way to perform this action is to use the automatic WTX mode: If the parameter AutoWTX is set to 1, the card emulation will automatically transmit WTX requests periodically every 0.9 * FWT ms after the successful execution of the Iso14CE.StartEmu command and of all subsequent Iso14CE.ExchangeCardAPDU commands. In practice, this allows to ignore the FWT limits, since the card emulation itself keeps the communication with the PCD alive.
Properties
- Command code: 0x4A01
- Command timeout: 2000 ms
- Possible status codes: General status codes, Iso14CE.ErrIso144State, Iso14CE.ErrCom, Iso14CE.ErrTransmission, Iso14CE.ErrTimeout, Iso14CE.ErrOverflow, Iso14CE.ErrInternal, Iso14CE.ErrDeselect | https://docs.baltech.de/refman/cmds/iso14ce/activatecardapdu.html | 2022-06-25T11:40:06 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.baltech.de |
Table 1 Supported Layer 2 and Layer 3 Protocols
Category
Features
System Management and Administration
Support for clock/date setting and NTP (Network Time Protocol)
Support for inbound IP access via any routed interface
Support for DHCP (Dynamic Host Configuration Protocol); DHCP client, DHCP relay, DHCP Option82, and DHCP snooping
Support for multiple local user accounts
Support for SSHv2 (Secure Shell) protocol
Ability to enable debugging for a specific module
Support for Read Only and Read Write access SNMP (Simple Network Management Protocol)
Support for IPFIX (IP Flow Information Export), monitors data flow in specified server
Device Configuration, Software, and File Management
Support the ability to save the configuration to flash on the device
Support for configuration versioning and rollback; compares the two configurations, identifying differences
Ability to import/export configuration files, device software, and logs from a file on a remote server (tftp/scp as options)
Ping and Trace route tool from CLI (command line interface)
SSH tool from CLI
Ability to view and configure MAC/ARP (Address Resolution Protocol) table information
Layer 2 Forwarding and Protocol
Support for LLDP (Link Layer Discovery) protocols for detecting devices on a link
Support for LACP (Link Aggregation Control) protocol and hashing of traffic using src/Dst (Source/Destination) MAC address, Src/Dst IP address, and Layer 4 port information and flag
Support for 802.1q trunked interfaces, for both single and LAG (Link Aggregation Group) interfaces
Support for 802.1q tagged/untagged interfaces and native tags
Support for Q-in-Q
Support for Jumbo Frame
Support for 802.1d STP (Spanning Tree Protocol)
Support for 802.1w RSTP (Rapid STP) and PVST (Per-VLAN STP)
Support for 802.1s MSTP (Multiple Spanning Tree protocol)
Support for functionality of BPDU (Bridge Protocol Data Unit) Guard / Filter/UDLD (Unidirectional Link Detection)
Support for storm-control for unicast, multicast, broadcast
Support for ingress/egress port mirroring
Support for 802.1p in Layer 2 forwarding
Support for Flow control per-interface
Support for IGMP (Internet Group Management Protocol) snooping enable per-VLAN
Support for IGMP snooping query per-VLAN
Layer 3 Forwarding and Routing Protocol
Full support for dual stacked IPv4 and IPv6 addressing.
Support for 6 members in a Layer 3 LAG (Link Aggregation Group) interface
Support for IPv4 and IPv6 static route configuration
Support for OSPFv2 (Open Shortest Path First) IPv4 only
Support for stub, normal, and NSSA (Not-So-Stubby Area) OSPF area types
Support for up to 32 equal-cost routes in OSPF
Support for RIP routing protocol
Support for BGP (Border Gate Protocol) routing
Support for 128 equal-cost routes in the device's routing/forwarding tables
Support for ECMP (Equal-Cost Multi-path) routing with hashing of traffic using Src/Dst IP and Port
Support the ToS and DSCP (Differentiated Services Code Point) in Layer 3 forwarding
Support for IGMP v1/v2
Support for PIM-SM (Protocol Independent Multicast Routing-Sparse Mode)
Support for VRRP (Virtual Router Redundancy Protocol) | https://docs.pica8.com/plugins/viewsource/viewpagesrc.action?pageId=41716800 | 2022-06-25T11:04:44 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.pica8.com |
Dealing with Google Analytics
How to interact between our CMP and Google Analytics ( GA4 )
Choose the best way to control GA: 3 options
Option 1: Using Transparency and Consent Framework ( IAB TCF)
Google is registed as a vendor (755) and Google Analytics product is covered.
In order to ask GA to honor TCF signals, just add this line of code BEFORE any call to gtag().
window['gtag_enable_tcf_support'] =.
For example, you can enable it while you do the init of our CMP.
<script type="text/javascript">
__tcfapi('init', 2, function(){} , {
appKey: 'YOUR_API_KEY',
})
window['gtag_enable_tcf_support'] = true;
</script>
Depending of the user choice, GA will honor user choices according to the following table
Purposes
A "Purpose" in the TCF context is a defined intent for processing data. Google Analytics tags implemented via Google Tag Manager or gtag.js with TCF support enabled will handle requests that contain the consent string in the following ways:
For more detailled informations, here is the documention from Google:
As the date of now, this behavior is not compatible with the FRENCH cnil guidelines. Using this
gtag_enable_tcf_support = true
Google will stop to drop all advertising cookies but will still drop analytics cookies ( _ga , _gid ) , using an exemption right which is not possible anymore beginning the 1st of April 2021.
We strongly encourage customers to contact legal advisors and DPO to take the right decision.
Option 2: Using GCM (Google Consent Mode)
Google Consent Mode is a new possibilities to control Google Analytics (but also other Google Advertising product)
Here is how to setup GCM according the the choices made in our CMP.
Step 1 - Adding control flag in your GA tracking code.
Here is a real tracking code from GA
<script async</script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-XXXXXXXXX-1');
</script>
Let's say that we want that the default behavior must be :
No Analytics Cookies
No advertising Cookies.
According to the official documentation, we need to add this:
gtag('consent', 'default', {
'ad_storage': 'denied',
'analytics_storage': 'denied'
});
Here is the final GA tag you have to replace in your pages [ You can add it through your Tag Manager tool ]
Copy it to insert it as the first element in the HEAD section of each of the web pages.
<script async</script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('consent', 'default', {
'ad_storage': 'denied',
'analytics_storage': 'denied'
});
gtag('js', new Date());
gtag('config', 'UA-XXXXXXXXX-1');
</script>
This way, when the user arrives on you website, it will not be tracked until he make a choice.
Step 2 - Change the behavior according to the CMP Choice.
In this implementation, we are outside TCF Purposes matching, so we are going to link the ability to track users to the Purpose 1 - Store and/or access information on a device
Add this function to your page
<script type="text/javascript">
__tcfapi('addEventListener', 2, function(tcData, success) {
if (success && tcData.gdprApplies && (tcData.eventStatus === 'tcloaded' || tcData.eventStatus === 'useractioncomplete') ) {
if (tcData.purpose.consents[1]) {
gtag('consent', 'update', {
'ad_storage': 'granted',
'analytics_storage': 'granted'
})
}
}
})
</script>
With this code, your GA tracking code :
Will be granted only if the purpose 1 is set to
True.
It will not drop cookies before the consent of the users.
Will drop cookies only after and if the user granted Purpose 1 in the CMP.
To go further, here is the complete Google documentation:
Option 3: Using Google Tag Manager Event
As a third solution, you can also use your tag manager to trigger GA tags according to purpose_events sent to the dataLayer. See this page. | https://docs.sfbx.io/configuration/notice-implementation/web-cmp-google-gtm/deal-with-google-analytics/index.html | 2022-06-25T11:38:46 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.sfbx.io |
,: MySQL, MariaDB, Oracle, SQL Server, PostgreSQL, DB2. patch) | https://docs.appian.com/suite/help/20.3/Managing_Import_Customization_Files.html | 2022-06-25T10:01:45 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.appian.com |
Live ISO and PXE image reference
Passing the PXE rootfs to a machine
The Fedora CoreOS PXE image includes three components: a
kernel, an
initramfs, and a
rootfs. All three are mandatory and the live PXE environment will not boot without them.
There are multiple ways to pass the
rootfs to a machine:
Specify only the
initramfsfile as the initrd in your PXE configuration, and pass an HTTP(S) or TFTP URL for the
rootfsusing the
coreos.live.rootfs_url=kernel argument. This method requires 2 GiB of RAM, and is the recommended option unless you have special requirements.
Specify both
initramfsand
rootfsfiles as initrds in your PXE configuration. This can be done via multiple
initrddirectives, or using additional
initrd=parameters as kernel arguments. This method is slower than the first method and requires 4 GiB of RAM.
Concatenate the
initramfsand
rootfsfiles together, and specify the combined file as the initrd. This method is slower and requires 4 GiB of RAM.
Passing an Ignition config to a live PXE system
When booting Fedora CoreOS via live PXE, the kernel command line must include the arguments
ignition.firstboot ignition.platform.id=metal to run Ignition. If running in a virtual machine, replace
metal with the platform ID for your platform, such as
qemu or
vmware.
There are several ways to pass an Ignition config when booting Fedora CoreOS via PXE:
Add
ignition.config.url=<config-url>to the kernel command line. Supported URL schemes include
http,
https,
tftp,
s3, and
gs.
If running virtualized, pass the Ignition config via the hypervisor, exactly as you would when booting from a disk image. Ensure the
ignition.platform.idkernel argument is set to the platform ID for your platform.
Generate a customized version of the
initramfscontaining your Ignition config using
coreos-installer pxe customize. For example, run:
coreos-installer pxe customize --live-ignition config.ign -o custom-initramfs.img \ fedora-coreos-36.20220605.3.0-live-initramfs.x86_64.img
If you prefer to keep the Ignition config separate from the Fedora CoreOS
initramfsimage, generate a separate initrd with the low-level
coreos-installer pxe ignition wrapcommand and pass it as an additional initrd. For example, run:
coreos-installer pxe ignition wrap -i config.ign -o ignition.img
and then use a PXELINUX
APPENDline similar to:
APPEND initrd=fedora-coreos-36.20220605.3.0-live-initramfs.x86_64.img,fedora-coreos-36.20220605.3.0-live-rootfs.x86_64.img,ignition.img ignition.firstboot ignition.platform.id=metal
Passing network configuration to a live ISO or PXE system
On Fedora CoreOS, networking is typically configured via NetworkManager keyfiles. If your network requires special configuration such as static IP addresses, and your Ignition config fetches resources from the network, you cannot simply include those keyfiles in your Ignition config, since that would create a circular dependency.
Instead, you can use
coreos-installer iso customize or
coreos-installer pxe customize with the
--network-keyfile option to create a customized ISO image or PXE
initramfs image which applies your network settings before running Ignition. For example:
coreos-installer iso customize --network-keyfile custom.nmconnection -o custom.iso \ fedora-coreos-36.20220605.3.0-live.x86_64.iso
If you’re PXE booting and want to keep your network settings separate from the Fedora CoreOS
initramfs image, you can also use the lower-level
coreos-installer pxe network wrap command to create a separate initrd image, and pass that as an additional initrd. For example, run:
coreos-installer pxe network wrap -k custom.nmconnection -o network.img
and then use a PXELINUX
APPEND line similar to:
APPEND initrd=fedora-coreos-36.20220605.3.0-live-initramfs.x86_64.img,fedora-coreos-36.20220605.3.0-live-rootfs.x86_64.img,network.img ignition.firstboot ignition.platform.id=metal
Passing kernel arguments to a live ISO system
If you want to modify the default kernel arguments of a live ISO system, you can use the
--live-karg-{append,replace,delete} options to
coreos-installer iso customize. For example, if you want to enable simultaneous multithreading (SMT) even on CPUs where that is insecure, you can run:
coreos-installer iso customize --live-karg-delete mitigations=auto,nosmt -o custom.iso \ fedora-coreos-36.20220605.3.0-live.x86_64.iso
Extracting PXE artifacts from a live ISO image
If you want the Fedora CoreOS PXE artifacts and already have an ISO image, you can extract the PXE artifacts from it:
podman run --security-opt label=disable --pull=always --rm -v .:/data -w /data \ quay.io/coreos/coreos-installer:release iso extract pxe \ fedora-coreos-36.20220605.3.0-live.x86_64.iso
The command will print the paths to the artifacts it extracted.
Using the minimal ISO image
In some cases, you may want to boot the Fedora CoreOS ISO image on a machine equipped with Lights-Out Management (LOM) hardware. You can upload the ISO to the LOM controller as a virtual CD image, but the ISO may be larger than the LOM controller supports.
To avoid this problem, you can convert the ISO image to a smaller minimal ISO image without the
rootfs. Similar to the PXE image, the minimal ISO must fetch the
rootfs from the network during boot.
Suppose you plan to host the
rootfs image at. This command will extract a minimal ISO image and a
rootfs from an ISO image, embedding a
coreos.live.rootfs_url kernel argument with the correct URL:
podman run --security-opt label=disable --pull=always --rm -v .:/data -w /data \ quay.io/coreos/coreos-installer:release iso extract minimal-iso \ --output-rootfs fedora-coreos-36.20220605.3.0-live-rootfs.x86_64.img \ --rootfs-url \ fedora-coreos-36.20220605.3.0-live.x86_64.iso \ fedora-coreos-36.20220605.3.0-live-minimal.x86_64.iso | https://docs.fedoraproject.org/hr/fedora-coreos/live-reference/ | 2022-06-25T11:25:05 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.fedoraproject.org |
- .
Installation command line options
The table below contains all the possible charts configurations that can be supplied
to the
helm install command using the
--set flags:
Chart configuration examples
resources
resources allows you to configure the minimum and maximum amount of resources (memory and CPU) a Sidekiq
pod can consume.
Sidekiq pod workloads vary greatly between deployments. Generally speaking, it is understood that each Sidekiq
process consumes approximately 1 vCPU and 2 GB of memory. Vertical scaling should generally align to this
1:2
ratio of
vCPU:Memory.
Below is an example use of
resources:
resources: limits: memory: 5G requests: memory: 2G cpu: 900m
extraEnv
extraEnv allows you to expose additional environment variables in the dependencies container.
Below is an example use of
extraEnv:
extraEnv: SOME_KEY: some_value SOME_OTHER_KEY: some_other_value
When the container is started, you can confirm that the environment variables are exposed:
env | grep SOME SOME_KEY=some_value SOME_OTHER_KEY=some_other_value
You can also set
extraEnv for a specific pod:
extraEnv: SOME_KEY: some_value SOME_OTHER_KEY: some_other_value pods: - name: mailers queues: mailers extraEnv: SOME_POD_KEY: some_pod_value - name: catchall negateQueues: mailers
This will set
SOME_POD_KEY only for application containers in the
mailers
pod. Pod-level
extraEnv settings are not added to init containers. mountPath: /etc/example.sidekiq Sidekiq pods.
Below is an example use of
annotations:
annotations: kubernetes.io/example-annotation: annotation-value-sidekiq-ce.
External Services
This chart should be attached to the same Redis, PostgreSQL, and Gitaly instances
as the Webservice chart. The values of external services will be populated into a
ConfigMap
that is shared across all Sidekiq pods.
Redis
redis: host: rank-racoon-redis port: 6379 sentinels: - host: sentinel1.example.com port: 26379 password: secret: gitlab-redis key: redis-password
redis.install=false. The Secret containing the Redis password needs to be manually created before deploying the GitLab chart.
PostgreSQL
psql: host: is a string containing a comma-separated list of queues to be
processed. By default, it is not set, meaning that all queues will be processed.
The string should not contain spaces:
merge,post_receive,process_commit will
work, but
merge, post_receive, process_commit will not.
Any queue to which jobs are added but are not represented as a part of at least one pod item will not be processed. For a complete list of all queues, see these files in the GitLab source:
negateQueues
negateQueues is in the same format as
queues, but it represents
queues to be ignored rather than processed.
The string should not contain spaces:
merge,post_receive,process_commit will
work, but
merge, post_receive, process_commit will not.
This is useful if you have a pod processing important queues, and another pod
processing other queues: they can use the same list of queues, with one being in
queues and the other being in
negateQueues.
negateQueuesshould not be provided alongside
queues, as it will have no effect.
Example
pod entry
pods: - name: immediate concurrency: 10 minReplicas: 2 # defaults to inherited value maxReplicas: 10 # defaults to inherited value maxUnavailable: 5 # defaults to inherited value queues: merge,post_receive,process_commit extraVolumeMounts: | - name: example-volume-mount mountPath: /etc/example extraVolumes: | - name: example-volume persistentVolumeClaim: claimName: example-pvc resources: limits: cpu: 800m memory: 2Gi hpa: targetAverageValue: 350m: - 10.0.0.0/8 | https://docs.gitlab.com/14.10/charts/charts/gitlab/sidekiq/ | 2022-06-25T11:43:07 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.gitlab.com |
Features in Configuration Manager technical preview version 2002
Applies to: Configuration Manager (technical preview branch)
This article introduces the features that are available in the technical preview for Configuration Manager, version 2002.

When a deployment includes multiple software updates, any applicable servicing stack update (SSU) is installed first. The device doesn't need to restart between installs, and you don't need to create an additional maintenance window. SSUs are installed first only for non-user initiated installs. For instance, if a user initiates an installation for multiple updates from Software Center, the SSU might not be installed first.
Microsoft 365 updates for disconnected software update points
You can use a new tool to import Microsoft 365 updates from an internet connected WSUS server into a disconnected Configuration Manager environment. Previously, when you exported and imported metadata for software updates in disconnected environments, you were unable to deploy Microsoft 365 updates. Microsoft 365 updates require additional metadata downloaded from an Office API and the Office CDN, which isn't possible for disconnected environments.
Prerequisites
- An internet connected WSUS server running a minimum of Windows Server 2012.
- The WSUS server needs connectivity to these two internet endpoints (a quick connectivity check is sketched after this list):
officecdn.microsoft.com
config.office.com
- Copy the OfflineUpdateExporter tool and its dependencies to the internet connected WSUS server.
- The tool and its dependencies are in the <ConfigMgrInstallDir>/tools/OfflineUpdateExporter directory.
- The user running the tool must be part of the WSUS Administrators group.
- The directory created to store the Office update metadata and content should have appropriate access control lists (ACLs) to secure the files.
- This directory must also be empty.
- Data being moved from the online WSUS server to the disconnected environment should be moved securely.
- Review the Known Issues.
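To confirm the connectivity prerequisite before you start, you can probe both endpoints from the internet connected WSUS server. This is only a quick sketch: it assumes the endpoints are reached over standard HTTPS (port 443) and that the Test-NetConnection cmdlet is available in your server's version of PowerShell.

Test-NetConnection -ComputerName officecdn.microsoft.com -Port 443
Test-NetConnection -ComputerName config.office.com -Port 443

If TcpTestSucceeded is False for either name, review your firewall or proxy settings before running the tool.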
Synchronize then decline unneeded Microsoft 365 updates
- On your internet connected WSUS, open the WSUS console.
- Select Options then Products and Classifications.
- In the Products tab, select Office 365 Client and select Updates in the Classifications tab.
- Go to Synchronizations and select Synchronize Now to get the Microsoft 365 updates into WSUS.
- When the synchronization completes, decline any Microsoft 365 updates that you don't want to deploy with Configuration Manager (a PowerShell sketch for declining updates in bulk follows this list). You don't need to approve Microsoft 365 updates in order for them to be downloaded.
- Declining unwanted Microsoft 365 updates in WSUS doesn't stop them from being exported during a WsusUtil.exe export, but it does stop the OfflineUpdateExporter tool from downloading the content for them.
- The OfflineUpdateExporter tool does the download of Microsoft 365 updates for you. Other products will still need to be approved for download if you're exporting updates for them.
- Create a new update view in WSUS to easily see and decline unneeded Microsoft 365 updates in WSUS.
- If you're approving other product updates for download and export, wait for the content download to complete before running WsusUtil.exe export and copying the contents of the WSUSContent folder. For more information, see Synchronize software updates from a disconnected software update point
Exporting the Microsoft 365 updates
Copy the OfflineUpdateExporter folder from Configuration Manager to the internet connected WSUS server.
- The tool and its dependencies are in the <ConfigMgrInstallDir>/tools/OfflineUpdateExporter directory.
From a command prompt on the internet connected WSUS server, run the tool with the following usage: OfflineUpdateExporter.exe -O -D <destination path>
- The OfflineUpdateExporter tool does the following:
- Connects to WSUS
- Reads the Microsoft 365 update metadata in WSUS
- Downloads the content and any additional metadata needed by the Microsoft 365 updates to the destination folder
At the command prompt on the internet connected WSUS server, navigate to the folder that contains WsusUtil.exe. By default, the tool is located in %ProgramFiles%\Update Services\Tools. For example, if the tool is located in the default location, type cd %ProgramFiles%\Update Services\Tools.
Type the following to export the software updates metadata to a GZIP file:
WsusUtil.exe export packagename logfile
For example:
WsusUtil.exe export export.xml.gz export.log
Copy the export.xml.gz file to the top-level WSUS server on the disconnected network.
If you approved updates for other products, copy the contents of the WSUSContent folder to the top-level disconnected WSUS server's WSUSContent folder.
Copy the destination folder used for the OfflineUpdateExporter to the top-level Configuration Manager site server on the disconnected network.
Import the Microsoft 365 updates
On the disconnected top-level WSUS server, import the update metadata from the export.xml.gz you generated on the internet connected WSUS server.
For example:
WsusUtil.exe import export.xml.gz import.log
By default, the WsusUtil.exe tool is located in %ProgramFiles%\Update Services\Tools.
Once the import is complete, you'll need to configure a site control property on the disconnected top-level Configuration Manager site server. This configuration change points Configuration Manager to the content for Microsoft 365. To change the property's configuration:
- Copy the O365OflBaseUrlConfigured PowerShell script to the top-level disconnected Configuration Manager site server.
- Change
"D:\Office365updates\content"to the full path of the copied directory containing the Office content and metadata generated by OfflineUpdateExporter.
- Save the script as
O365OflBaseUrlConfigured.ps1
- From an elevated PowerShell window on the disconnected top-level Configuration Manager site server, run
.\O365OflBaseUrlConfigured.ps1.
- Restart the SMS_Executive service on the site server.
In the Configuration Manager console, navigate to Administration > Site Configuration > Sites.
Right-click on your top-level site, then select Configure Site Components > Software Update Point.
In the Classifications tab, select Updates. In the Products tab, select Office 365 Client.
Synchronize software updates for Configuration Manager
When the synchronization completes, use your normal process to deploy Microsoft 365 updates.
Known issues
- Proxy configuration isn't natively built into the tool. If proxy is set in the Internet Options on the server where the tool is running, in theory it will be used and should function properly.
- From a command prompt, run
netsh winhttp show proxyto see the configured proxy.
- Only local paths work for the O365OflBaseUrlConfigured property.
- Currently, content will be downloaded for all Microsoft 365 languages. Each update can have approximately 10 GB of content.
Modify O365OflBaseUrlConfigured property
# Name: O365OflBaseUrlConfigured.ps1 # # Description: This sample sets the O365OflBaseUrlConfigured property for the SMS_WSUS_CONFIGURATION_MANAGER component on the top-level site. # This script must be run on the disconnected top-level Configuration Manager site server # # Replace "D:\Office365updates\content" with the full path to the copied directory containing all the Office metadata and content generated by the OfflineUpdateExporter tool. $PropertyValue = "D:\Office365updates\content" # Don't change any of the lines below $PropertyName = "O365OflBaseUrlConfigured" # Get provider instance $providerMachine = Get-WmiObject -namespace "root\sms" -class "SMS_ProviderLocation" if($providerMachine -is [system.array]) { $providerMachine=$providerMachine[0] } $SiteCode = $providerMachine.SiteCode $component = gwmi -ComputerName $providerMachine.Machine -namespace root\sms\site_$SiteCode -query 'select comp.* from sms_sci_component comp join SMS_SCI_SiteDefinition sdef on sdef.SiteCode=comp.SiteCode where sdef.ParentSiteCode="" and comp.componentname="SMS_WSUS_CONFIGURATION_MANAGER"' $properties = $component.props Write-host "Updating $PropertyName property for site " $SiteCode foreach ($property in $properties) { if ($property.propertyname -eq $PropertyName) { Write-host "Current value for $PropertyName is $($property.value2)" $property.value2 = $PropertyValue Write-host "Updating value for $PropertyName to $($property.value2)" break } } $component.props = $properties $component.put()
Improvements to Orchestration Groups
Create orchestration groups to better control the deployment of software updates to devices. An orchestration group gives you the flexibility to update devices based on a percentage, a specific number, or an explicit order. You can also run a PowerShell script before and after the devices run the update deployment.:
-.
- Updates requiring restarts now work with orchestration.
Try it out!
Try to complete the tasks. Then send Feedback with your thoughts on the feature.
Prerequisites
To see all of the orchestration groups and updates for those groups, your account needs to be a Full Administrator.
Enable the Orchestration Groups feature. For more information, see Enable optional features.
Note
When you enable Orchestration Groups, the site disables the Server Groups feature. This behavior avoids any conflicts between the two features.
Create an orchestration group
In the Configuration Manager console, go to the Assets and Compliance workspace, and select the Orchestration Group node.
In the ribbon, select Create Orchestration Group to open the Create Orchestration Group Wizard.
On the General page, give your orchestration group a Name and optionally a Description. Specify your values for the following items:
- The Orchestration Group timeout (in minutes): Time limit for all group members to complete update installation.
- Orchestration Group member timeout (in minutes): Time limit for a single device in the group to complete the update installation.
On the Member Selection page, first specify the Site code. Then select Add to add device resources as members of this orchestration group. Search for devices by name, and then Add them. You can also filter your search to a single collection by using Search in Collection. Select OK when you finish adding devices to the selected resources list.
- When selecting resources for the group, only valid clients are shown. Checks are made to verify the site code, that the client is installed, and that resources aren't duplicated.
On the Rule Selection page, select one of the following options:
Allow a percentage of the machines to be updated at the same time, then select or enter a number for this percentage. Use this setting to allow for future flexibility of the size of the orchestration group. For example, your orchestration group contains 50 devices, and you set this value to 10. During a software update deployment, Configuration Manager allows five devices to simultaneously run the deployment. If you later increase the size of the orchestration group to 100 devices, then 10 devices update at once.
Allow a number of the machines to be updated at the same time, then select or enter a number for this specific count. Use this setting to always limit to a specific number of devices, whatever the overall size of the orchestration group.
Specify the maintenance sequence, then sort the selected resources in the proper order. Use this setting to explicitly define the order in which devices run the software update deployment.
On the PreScript page, enter a PowerShell script to run on each device before the deployment runs. The script should return a value of
0for success, or
3010for success with restart. Specify a Script timeout (in seconds) value, which fails the script if it doesn't complete in the specified time.
On the PostScript page, enter a PowerShell script to run on each device after the deployment runs and a Script timeout (in seconds) value. The behavior is otherwise the same as the PreScript.
Complete the wizard.
You can change the settings of an existing Orchestration Group using Properties for the group.
To delete the orchestration group, select it then select Delete in the ribbon.
View orchestration groups and members
From the Assets and Compliance workspace, select the Orchestration Group node. To view members, select an orchestration group and select Show Members in the ribbon. For more information about the available columns for the nodes, see Monitor orchestration groups and members.
Start Orchestration
- Deploy software updates to a collection that contains the members of the orchestration group.
- Orchestration starts when any client in the group tries to install any software update at deadline or during a maintenance window. It starts for the entire group, and makes sure that the devices update by following the orchestration group rules.
- You can manually start orchestration by selecting it from the Orchestration Group node, then choosing Start Orchestration from the ribbon or right-click menu.
Tip
- Orchestration groups only apply to software update deployments. They don't apply to other deployments.
- You can right-click on an Orchestration Group member and select Reset Orchestration Group Member. This allows you to rerun orchestration.
Monitoring
Monitor your orchestration groups and members through the Configuration Manager console and the log files.
Monitor orchestration groups
From the Assets and Compliance workspace, select the Orchestration Group node. Add any of the following columns to get information about the groups:
Orchestration Name: The name of your orchestration group.
Site Code: Site code for the group.
Orchestration Type: is one of the following types:
- Number
- Percentage
- Sequence
Orchestration Value: How many members or the percentage of members that can get a lock simultaneously. Orchestration Value is only populated when Orchestration Type is either Number or Percentage.
Orchestration State: In progress during orchestration. Idle when not in progress.
Orchestration Start Time: Date and time that the orchestration started.
Current Sequence Number: Indicates for which member of the group orchestration is active. This number corresponds with the Sequence Number for the member.
Orchestration Timeout (in minutes): Value of The Orchestration Group timeout (in minutes) set on the General page when creating the group, or the General tab when editing the group.
Orchestration Group Member Timeout (in minutes): Value of Orchestration Group member timeout (in minutes) set on the General page when creating the group, or the General tab when editing the group.
Orchestration Group ID: ID of the group, The ID is used in logs and the database.
Orchestration Group Unique ID: Unique ID of the group, The Unique ID is used in logs and the database.
Monitor orchestration group members
In the Orchestration Group node, select an orchestration group. In the ribbon, select Show Members. You can see the members of the group, and their orchestration status. Add any of the following columns to get information about the members:
- Name: Device name of the orchestration group member
- Current State: Gives you the state of the member device.
- In progress during orchestration.
- Waiting: Indicates the client is waiting on the lock for its turn to install updates.
- Idle when orchestration is complete or not running.
- State Code: You can right-click on the Orchestration Group member and select Reset Orchestration Group Member. This reset allows you to rerun orchestration. States include:
- Idle
- Waiting, the device is waiting its turn
- In progress, installing an update
- Failed
- Reboot pending
- Lock Acquired Time: Locks are requested by the client based on its policy. Once the client acquires a lock, orchestration is triggered on it.
-Last State Reported Time: Time the member last reported a state.
- Sequence Number: The client's location in the queue for installing updates.
- Site Code: The site code for the member.
- Client Activity: Tells you if the client is active or inactive.
- Primary User(s): Which users are primary for the device.
- Client Type: What type of device the client is.
- Currently Logged on User: Which user is currently logged on to the device.
- OG ID: ID of the orchestration group the member belongs to.
- OG Unique ID: Unique ID of the orchestration group the member belongs to.
- Resource ID: Resource ID of the device.
Log files
Use the following log files on the site server to help monitor and troubleshoot:
Site server
- Policypv.log: shows that the site targets the orchestration group to the clients.
- SMS_OrchestrationGroup.log: shows the behaviors of the orchestration group.
Client
- MaintenanceCoordinator.log: Shows the lock acquisition, update installation, pre and post scripts, and lock release process.
- UpdateDeployment.log: Shows the update installation process.
- PolicyAgent.log: Checks if the client is in an orchestration group.
Orchestration group known issues
- Don't add a machine to more than one orchestration group..
Proxy support for Azure Active Directory discovery and group sync
The site system's proxy settings, including authentication, are now used by:
- Azure Active Directory (Azure AD) user discovery
- Azure AD user group discovery
- Synchronizing collection membership results to Azure Active Directory groups
Log files
- SMS_AZUREAD_DISCOVERY_AGENT.log
Improvements to BitLocker management
The BitLocker management policy now includes additional settings, including policies for fixed and removable drives:
Global policy settings on the Setup page:
- Prevent memory overwrite on restart
- Validate smart card certificate usage rule compliance
- Organization unique identifiers
OS drive settings:
- Allow enhanced PINS for startup
- Operating system drive password policy
- Reset platform validation data after BitLocker recovery
- Pre-boot recovery message and URL
- Encryption policy enforcement settings
Fixed drive settings:
- Fixed data drive encryption
- Deny write access to fixed drives not protected by BitLocker
- Allow access to BitLocker fixed data drives from earlier versions of Windows
- Fixed data drive password policy
- Encryption policy enforcement settings
Removable drive settings:
- Removable drive data encryption
- Deny write access to removable drives not protected by BitLocker
- Allow access to BitLocker protected removable drives not protected by BitLocker
- Removable drive password policy
Client management settings:
- User exemption policy
- Customer experience improvement program
For more information on these settings, see the MBAM documentation.
BitLocker management known issues
The following new settings don't work in this technical preview version:
- Fixed drive settings: Deny write access to fixed drives not protected by BitLocker
- Removable drive settings: Deny write access to removable drives not protected by BitLocker
- Client management policy: Customer experience improvement program
BitLocker reports don't work in this release
Additional improvements to task sequence progress
Based on continued feedback from the community, this release includes further improvements to task sequence progress. Now the count of total steps doesn't include the following items in the task sequence:
Groups. This item is a container for other steps, not a step itself.
Instances of the Run task sequence step. This step is a container for other steps, so are no longer counted.
Steps that you explicitly disable. A disabled step doesn't run during the task sequence, so is no longer counted.
Note
Enabled steps in a disabled group are still included in the total count.
For more information, see the following articles:
- 2001 features - Improvements to task sequence progress
- 2001.2 features - Additional improvement to task sequence progress
Improvements to the ConfigMgr PXE Responder
The ConfigMgr PXE Responder now sends status messages to the site server. This change makes troubleshooting operating system deployments easier.
Token-based authentication for cloud management gateway
This feature appears in the What's New workspace of the Configuration Manager console for the technical preview branch version 2002, but it released with version 2001.2. For more information, see 2001.2 features.
General known issues
Can't delete collections
In this version of the technical preview branch, you can't delete collections.
To work around this issue, use the following Configuration Manager PowerShell cmdlet to delete collections:
Next steps
For more information about installing or updating the technical preview branch, see Technical preview.
For more information about the different branches of Configuration Manager, see Which branch of Configuration Manager should I use?.
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/mem/configmgr/core/get-started/2020/technical-preview-2002 | 2022-06-25T10:44:45 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.microsoft.com |
You can monitor the status of Edges and view the details of each Edge like the WAN links, top applications used by the Edges, usage data through the network sources and traffic destinations, business priority of network traffic, system information, details of Gateways connected to the Edge, and so on.
To monitor the Edge details:
-. The page displays the details of the Edges like the status, links, Gateways, and other information.
You can use the Search option to view specific Edges. Click the Filter Icon in the Search option to define a criteria and view the Edge details filtered by Edge Name, Status, Created Date, Serial Number, Custom Info, and so on.
You can click the link to View option in the Gateways column to view the details of Gateways connected to the corresponding Edge.
Click the link to an Edge to view the details pertaining to the selected Edge. Click the relevant tabs to view the corresponding information. Each tab displays a drop-down list at the top which allows you to select a specific time period. The tab displays the details for the selected duration.
Some of the tabs provide drop-down list of metrics parameters. You can choose the metrics from the list to view the corresponding data. The following table lists the available metrics: | https://docs.vmware.com/en/VMware-SD-WAN/4.0/VMware-SD-WAN-by-VeloCloud-Administration-Guide/GUID-8A2C9CF1-0981-4740-84CA-E203C8FF686D.html | 2022-06-25T12:04:15 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.vmware.com |
fn:current-time() as xs:time
Returns
xs:time(fn:current-dateTime()). This is an
xs:time (with timezone) that is current at some time
during the evaluation of a query or transformation in which
fn:current-time() is executed. This function is
*stable*. The precise instant during the query or transformation
represented by the value of
fn:current-time() is
*implementation dependent*.
fn:current-time()returns an
xs:timecorresponding to the current date and time. For example, an invocation of
fn:current-time()might return
23:17:00.000-05:00.
fn:current-time() => 18:24:06-07:00
Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question. | https://docs.marklogic.com/fn:current-time | 2022-06-25T10:21:49 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.marklogic.com |
Sys.CfgLoadBlock
This command can be used to load a configuration from a .bec file into the reader. Therefore it has to be called for every "block" tag in the .bec file.
Before loading a .bec file a Sys.CfgLoadPrepare has to be done. For legacy reasons this command can be omitted (which is not recommended on newer firmware). In this case it must be guaranteed, that this is the first CfgLoadBlock since Powerup.
After all "block" tags are transferred, the Sys.CfgLoadFinish has to be done. For legacy reasons this command can be omitted if Sys.CfgLoadPrepare was omitted. In this case a manual reset has to be done.
Properties
- Command code: 0x0016
- Command timeout: 10000 ms
- Possible status codes: General status codes, Sys.ErrCfgAccess , Sys.ErrCfgFull, Sys.ErrInvalidCfgBlock
Parameters (request frame)
Returned values (response frame)
None | https://docs.baltech.de/refman/cmds/sys/cfgloadblock.html | 2022-06-25T10:40:22 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.baltech.de |
6.10 Problem Modification and Reoptimization¶
Often one might want to solve not just a single optimization problem, but a sequence of problems, each differing only slightly from the previous one. This section demonstrates how to modify and re-optimize an existing problem.
The example we study is a simple production planning model.
Problem modifications regarding variables, cones, objective function and constraints can be grouped in categories:
add/remove,
coefficient modifications,
bounds modifications.
Especially removing variables and constraints can be costly. Special care must be taken with respect to constraints and variable indexes that may be invalidated.
Depending on the type of modification, MOSEK may be able to optimize the modified problem more efficiently exploiting the information and internal state from the previous execution. After optimization, the solution is always stored internally, and is available before next optimization. The former optimal solution may be still feasible, but no longer optimal; or it may remain optimal if the modification of the objective function was small. This special case is discussed in Sec. 14.3 (Sensitivity Analysis).
In general, MOSEK exploits dual information and availability of an optimal basis from the previous execution. The simplex optimizer is well suited for exploiting an existing primal or dual feasible solution. Restarting capabilities for interior-point methods are still not as reliable and effective as those for the simplex algorithm. More information can be found in Chapter 10 of the book [Chvatal83].
6.10.1 Example: Production Planning¶
A company manufactures three types of products. Suppose the stages of manufacturing can be split into three parts:. We want to know how many items of each product the company should produce each year in order to maximize profit?
Denoting the number of items of each type by \(x_0,x_1\) and \(x_2\), this problem can be formulated as a linear optimization problem:
and
Code in Listing 6.19 loads and solves this problem.
% Specify the c vector. prob.c = [1.5 2.5 3.0]'; % Specify a in sparse format. subi = [1 1 1 2 2 2 3 3 3]; subj = [1 2 3 1 2 3 1 2 3]; valij = [2 4 3 3 2 3 2 3 2]; prob.a = sparse(subi,subj,valij); % Specify lower bounds of the constraints. prob.blc = [-inf -inf -inf]'; % Specify upper bounds of the constraints. prob.buc = [100000 50000 60000]'; % Specify lower bounds of the variables. prob.blx = zeros(3,1); % Specify upper bounds of the variables. prob.bux = [inf inf inf]'; % Perform the optimization. param.MSK_IPAR_OPTIMIZER = 'MSK_OPTIMIZER_FREE_SIMPLEX'; [r,res] = mosekopt('maximize',prob,param); % Show the optimal x solution. res.20
prob.c = [prob.c;1.0]; prob.a = [prob.a,sparse([4.0 0.0 1.0]')]; prob.blx = [prob.blx; 0.0]; prob.bux = [prob.bux; inf];
After this operation the new problem is:
and
6.10.4 Appending Constraints¶
Now suppose we want to add a new stage to the production process called Quality control for which \(30000\) minutes are available. The time requirement for this stage is shown below:
This corresponds to adding the constraint
to the problem. This is done as follows.
prob.a = [prob.a;sparse([1.0 2.0 1.0 1.0])]; prob.blc = [prob.blc; -inf]; prob.buc = [prob.buc;.buc = [80000 40000 50000 22000]'; prob.sol = res.sol; [r,res] = mosekopt('maximize',prob,param); res,y = []; prob.sol.bas = res.sol.bas; [r,res] = mosekopt('maximize',prob,param); res = []; prob.sol.bas = res.sol.bas; prob.sol.bas.xx = [prob.sol.bas.xx; 0.0]; prob.sol.bas.slx = [prob.sol.bas.slx; 0.0]; prob.sol.bas.sux = [prob.sol.bas.sux; 0.0]; prob.sol.bas.skx = [prob.sol.bas.skx; 'UN']; [r,res] = mosekopt('maximize',prob,param); res.sol.bas.xx
If the optimizer used the data from the previous run to hot-start the optimizer for reoptimization, this will be indicated in the log:
Optimizer - hotstart : yes.
A more advanced discussion of hot-start is presented in Sec. 9.2 (Advanced hot-start). | https://docs.mosek.com/latest/toolbox/tutorial-reoptimization.html | 2022-06-25T10:28:49 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.mosek.com |
New Features
- Async mode: Added support for the
metricNameand
transactionNamingPrioritycustom instrumentation options when async mode is enabled.
- Async mode: Added support for Castle MonoRail 2.x when async mode is enabled. Note that MonoRail transactions previously named
WebTransaction/DotNetController/{controller}/{action}will be renamed to
WebTransaction/MonoRail/{controller}/{action}when async mode is enabled.
- Async mode: Added support for legacy ScriptHandlerFactory instrumentation when async mode is enabled.
- Removes instrumentation for deprecated methods in Umbraco. The default instrumentation for ASP .NET MVC or ASP .NET Web API will now be used by the agent in its place.
Fixes
- Async mode: Fixed an issue where the agent would not clean up SQL connections after explain plan execution when async mode is enabled.
- Async mode: Fixed a bug which would cause a SerializationException to occur when async mode is enabled which could result in an app crash.
Upgrading
- For upgrade instructions, see Upgrade the .NET agent.
- If you are upgrading from a particularly old agent, see Upgrading legacy .NET agents for a list of major changes to the .NET agent. | https://docs.newrelic.com/docs/release-notes/agent-release-notes/net-release-notes/net-agent-519470/?q= | 2022-06-25T11:49:41 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.newrelic.com |
Neighborhood Criminal Justice Center.
The unprecedented growth of Doc’s Detecting Supply, one of the premiere Minelab Dealers in the United States, called for a larger facility more suited to the large amount of merchandise Doc carries and sells.
In March of 2017 Doc secured a new 4500 sq.ft. warehouse located at 1180 Wigwam Parkway, Suite 110 Henderson, NV 89054. This makes Doc’s location the largest Minelab dealer warehouse west of the Mississippi. Because we ship an enormous amount of merchandise each day, from orders placed through our website, Ebay and Amazon, we do not have a showroom. You are always welcome to call for an appointment, if you know what you need. Because of the demands of filling a large amount of orders each day, we must limit appointments to a 20 minute duration. Insurance regulations prohibit customers from going into the warehouse and shipping area.
Always call for an appointment before dropping in. Our local Las Vegas number is 702-866-9068.
Hi,
can i get a link to your vendor name or identity in ebay or amazon? i would like to buy a detector only from you guys
docs_detecting_supply on Ebay. I don’t have detectors listed on Amazon. | https://docsdetecting.com/2019/02/02/docs-4500-sq-ft-warehouse-opens-in-henderson-nv-march-2017/ | 2022-06-25T10:40:32 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docsdetecting.com |
6.
The complete source for the example is listed in Listing 6.15.
Please note that compared to a linear optimization problem with no integer-constrained variables:
The
prob.ints.subfield is used to specify the indexes of the variables that are integer-constrained.
The optimal integer solution is returned in the
res.sol.intMATLAB structure.
MOSEK also provides a wrapper for the
intlinprog function found in the MATLAB optimization toolbox. This function solves linear problems wth integer variables; see the reference section for details.
6.9.2 Specifying an initial solution¶
It is a common strategy to provide a starting feasible point (if one is known in advance) to the mixed-integer solver. This can in many cases reduce solution time.
It is not necessary to specify the whole solution. MOSEK will attempt to use it to speed up the computation. MOSEK will first try to construct a feasible solution by fixing integer variables to the values provided by the user (rounding if necessary) and optimizing over the continuous variables. The outcome of this process can be inspected via information items
"MSK_IINF_MIO_CONSTRUCT_SOLUTION" and
"MSK_DINF_MIO_CONSTRUCT_SOLUTION_OBJ", and via the
Construct solution objective entry in the log. We concentrate on a simple example below.
Solution values can be set using the appropriate fields in the problem structure.
hereto download.¶
% Specify start guess for the integer variables. prob.sol.int.xx = [1 1 0 nan]';
The log output from the optimizer will in this case indicate that the inputted values were used to construct an initial feasible solution:
Construct solution objective : 1.950000000000e+01
The same information can be obtained from the API:
6.23) suitable for Optimization Toolbox for MATLAB is
[rcode, res] = mosekopt('symbcon echo(0)'); symbcon = res.symbcon; clear prob % The full variable is [t; x; y] prob.c = [1 0 0]; prob.a = sparse(0,3); % No constraints % Conic part of the problem prob.f = sparse([ eye(3); 0 1 0; 0 0 0; 0 0 1 ]); prob.g = [0 0 0 -3.8 1 0]'; prob.cones = [symbcon.MSK_CT_QUAD 3 symbcon.MSK_CT_PEXP 3]; % Specify indexes of variables that are integers prob.ints.sub = [2 3]; % It is as always possible (but not required) to input an initial solution % to start the mixed-integer solver. prob.sol.int.xx = [0, 9, -1]; % Optimize the problem. [r,res] = mosekopt('minimize',prob); % The integer solution (x,y) res.sol.int.xx(2:3)
Note that the conic constraints are described using the format \(Fx+g\in\K\), that is as affine conic constraints. See Sec. 6.7 (Affine conic constraints (new)) for details.
Error and solution status handling were omitted for readability. | https://docs.mosek.com/latest/toolbox/tutorial-mio-shared.html | 2022-06-25T11:52:33 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.mosek.com |
SceneGraphAnalyzerMeter
- class SceneGraphAnalyzerMeter
This is a special
TextNodethat automatically updates itself with output from a
SceneGraphAnalyzerinstance. It can be placed anywhere in the world where you’d like to see the output from
SceneGraphAnalyzer.
It also has a special mode in which it may be attached directly to a channel or window. If this is done, it creates a
DisplayRegionfor itself and renders itself in the upper-right-hand corner.
Inheritance diagram
- SceneGraphAnalyzerMeter(SceneGraphAnalyzerMeter const&) = default
- void clear_window(void)
Undoes the effect of a previous call to
setup_window().
- static TypeHandle get_class_type(void)
- DisplayRegion *get_display_region(void) const
Returns the
DisplayRegionthat the meter has created to render itself into the window to
setup_window(), or NULL if
setup_window()has not been called.
- double get_update_interval(void) const
Returns the number of seconds that will elapse between updates to the frame rate indication.
- GraphicsOutput *get_window(void) const
Returns the
GraphicsOutputthat was passed to
setup_window(), or NULL if
setup_window()has not been called.
- void set_update_interval(double update_interval)
Specifies the number of seconds that should elapse between updates to the meter. This should be reasonably slow (e.g. 0.5 to 2.0) so that the calculation of the scene graph analysis does not itself dominate the frame rate.
- void setup_window(GraphicsOutput *window)
Sets up the frame rate meter to create a
DisplayRegionto render itself into the indicated window.
- void update(void)
You can call this to explicitly force the
SceneGraphAnalyzerMeterto update itself with the latest scene graph analysis information. Normally, it is not necessary to call this explicitly. | https://docs.panda3d.org/1.10/cpp/reference/panda3d.core.SceneGraphAnalyzerMeter | 2022-06-25T11:34:20 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.panda3d.org |
NDEF-MFRC522 (community library)
Summary
An Arduino/Particle library for NFC Data Exchange Format (NDEF). Read and write NDEF messages to NFC tags and peers. Supports Mifare Ultralight with MFRC522 RFID board.
Example Build Testing
Device OS Version:
This table is generated from an automated build. Success only indicates that the code compiled successfully.
Library Read Me
This content is provided by the library maintainer and has not been validated or approved.
NDEF Library for Arduino/Particle
Read and Write NDEF messages on Mifare Ultralight NFC Tags with Arduino connected to MFRC522 RFID card.
NFC Data Exchange Format (NDEF) is a common data format that operates across all NFC devices, regardless of the underlying tag or device technology.
Originally forked from NDEF library that exclusively worked with NFC Shield, but adapted to work with the MFRC522 Arduino and MFRC522 Particle and limited to Mifare Ultralight NFC tags.
Supports
- Reading from Mifare Ultralight tags.
- Writing to Mifare Ultralight tags.
- Works on Arduino and Particle (Gen 3 xenon/argon/boron)
Requires
Hello Github
This will write this Github URL to your tag which will allow your NFC-Enabled phone to read the URL and open a browser to this page. See WriteTag.ino
#include <SPI.h> #include <MFRC522.h> #include "MifareUltralight.h" #define SS_PIN 10 #define RST_PIN 6 using namespace ndef_mfrc522; } void loop() { // Look for new cards if (!mfrc522.PICC_IsNewCardPresent()) return; // Select one of the cards if (!mfrc522.PICC_ReadCardSerial()) return; NdefMessage message = NdefMessage(); String url = String(""); message.addUriRecord(url); MifareUltralight writer = MifareUltralight(mfrc522); bool success = writer.write(message); }
Now read the tag using similar main code as above, but with this read excerpt from ReadTag example.
MifareUltralight reader = MifareUltralight(mfrc522); NfcTag tag = reader.read(); tag.print();
... expect something similar to...
NFC Tag - NFC Forum Type 2 UID 04 0D 89 32 F1 4A 80 NDEF Message 1 record, 44 bytes NDEF Record TNF 0x1 Well Known Type Length 0x1 1 Payload Length 0x28 40 Type 55 U Payload 00 68 74 74 70 73 3A 2F 2F 67 69 74 68 75 62 2E 63 6F 6D 2F 61 72 6F 6C 6C 65 72 2F 4E 44 45 46 2D 4D 46 52 43 35 32 32 . Record is 44 bytes
- Type 55 U -> Indicates URL
- Decode the payload from ASCII and it will spell out your URL
- See the url written
MifareUltralight
The user interacts with the MifareUltralight to read and write NFC tags using the MFRC522.
Read a message from a tag
MifareUltralight reader = MifareUltralight(mfrc522); NfcTag tag = reader.read();
Write a message to a tag
NdefMessage message = NdefMessage(); MifareUltralight writer = MifareUltralight(mfrc522); bool success = writer.write(message);
Clean a tag. Cleaning resets a tag back to a factory-like state. For Mifare Ultralight, the tag is zeroed and left empty.
MifareUltralight writer = MifareUltralight(mfrc522); bool success = writer.clean();
NfcTag
Reading a tag with the shield, returns a NfcTag object. The NfcTag object contains meta data about the tag UID, technology, size. When an NDEF tag is read, the NfcTag object contains a NdefMessage.
NdefMessage
A NdefMessage consist of one or more NdefRecords.
The NdefMessage object has helper methods for adding records.
ndefMessage.addTextRecord("hello, world"); ndefMessage.addUriRecord("");
The NdefMessage object is responsible for encoding NdefMessage into bytes so it can be written to a tag. The NdefMessage also decodes bytes read from a tag back into a NdefMessage object.
NdefRecord
A NdefRecord carries a payload and info about the payload within a NdefMessage.
Specifications
This code is based on the "NFC Data Exchange Format (NDEF) Technical Specification" and the "Record Type Definition Technical Specifications" that can be downloaded from the NFC Forum.
Tests
- Unit tests from original repo work. Load them to arduino and look for success.
Usage
Arduino Usage
The library is not yet published in the Library Manager so you must treat it as a private library.
- Read Arduino Libraries for how to use private library.
Particle Usage
The library is published and can easily install, but unfortunately there is a conflict between constants in
Arduino.h and
spark_wiring_arduino_constants.h which requires a little extra effort.
In file included from ./inc/Arduino.h:27:0, from .../lib/NDEF-MFRC522/src/Ndef.h:9, from .../lib/NDEF-MFRC522/src/MifareUltralight.h:4, from .../src/school-tag-station-particle.ino:6: ../wiring/inc/spark_wiring_arduino_constants.h:152:18: error: conflicting declaration 'typedef uint32_t word'
- Open your project in Particle Workbench
- Install
NDEF-MFRC522
- Open
spark_wiring_arduino_constants.hin your particle library installed
- MacOS:
/Users/{you}/.particle/toolchains/deviceOS/{version}/firmware-{version}/wiring/inc/
- comment out
typedef uint32_t word;
- Should compile and install without an error.
Releases
See Releases for the latest.
Steps to release:
- Update library.properties with the correct semantic version
- Update both in the version and the URL reference for explicit src reference
- Merge PR into master
- Create a Release named with the version in library.properties
particle library publishto update particle
- Arduino users rely on the github repo
Known Issues
This software is in development. It works for the happy path. Error handling could use improvement. It runs out of memory, especially on the Uno board. Use small messages with the Uno. The Due board can write larger messages. Please submit patches.
- Read and Write in the same session fails
- Consider breaking NDEF files (NFC.h/Ndef.h) out from I/O files (MifareUltralight.h)
- Not all examples are converted to MFRC522 yet.
- Conflict between Particle and Arduino constants
typedef uint32_t word;
Book
Need more info? Check out my book
Beginning NFC: Near Field Communication with Arduino, Android, and PhoneGap
.
License
BSD License (c) 2013-2014, Don Coleman BSD License (c) 2019, Aaron Roller
Browse Library Files | https://docs.particle.io/reference/device-os/libraries/n/NDEF-MFRC522/ | 2022-06-25T10:40:17 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.particle.io |
Following is the shortcode for member listing you can use this in your post and pages:
[members-listing]
Here is the way you can specify options
[members-listing option_name=option_val option_name2=option_val2 and so on]
Example: [members-listing title=Member_List object=Groups per_page=2].
The accepted parameters:-
- title (string):– What should be the title of the member’s section.
- type (string):- Sort order. Accepts ‘active’, ‘random’, ‘newest’, ‘popular’, ‘online’, ‘alphabetical’. Default: ‘active’.
- per_page (int|bool):- Number of results per page. Default: 20.
- max (int|bool):- Maximum number of results to return. Default: false (unlimited).
Example: [members-listing max=2]
- include (int|string):- Limit results by a list of user IDs. Accepts a single integer, a comma-separated list of IDs (to disable this limiting). Accepts ‘active’, ‘alphabetical’,’newest’, or ‘random’. Default: false.
- exclude (int|string):- Exclude users from results by ID. Accepts a single integer, a comma-separated list of IDs (to disable this limiting). Default: false.
- user_id (int):- If provided, results are limited to the friends of the specified user. When on a user’s Friends page, defaults to the ID of the displayed user. Otherwise defaults to 0.
- member_type (string):- Can be a comma-separated list. (Note: BuddyPress itself does not register any member types. Plugins and themes can register member types using the bp_register_member_type() or the bp_register_member_types() function.)
- include_member_role (string):- Can be a comma-separated list of members role.
- exclude_member_role (string):- Can be a comma-separated list of members role.
- member_type__in (string):- list only these members type ( Note: This parameter will work when member types exist.)
- member_type__not_in (string):- not list these members type ( Note: This parameter will work when member types exist.)
- search_terms (string):- Limit results by a search term. Default: value of `$_REQUEST[‘members_search’]` or `$_REQUEST[‘s’]`, if present. Otherwise false.
Example: [members-listing search_terms=’zoya’].
- meta_key (string):- Limit results by the presence of a user meta key. Default: false.
- meta_value (string):- When used with meta_key, limit results by matching the user meta value. Default: false. Requires meta_key.
- container_class (string):- Default ‘members’. Allows changing the class of the shortcode contents wrapper.
This shortcode can take the following parameters:
Examples:-
[members-listing per_page=3]
How many items you want to shows. The default is 10.
[members-listing title=Wbcom_Design_Member]
This shortcode uses to show the title to the member list.
[members-listing include_member_role=’subscriber, admin’]
This shortcode used to displays the member list. and it accompanies a lot of parameters to customize which member is appeared and how they are spread out.
| https://docs.wbcomdesigns.com/docs/shortcode-for-buddypress-pro/member-parameters/what-are-accepted-parameters-for-members-listing-shortcodes/ | 2022-06-25T10:01:07 | CC-MAIN-2022-27 | 1656103034930.3 | [array(['https://wbcomdesigns.com/wp-content/uploads/2019/12/Screenshot-5.png',
None], dtype=object)
array(['https://wbcomdesigns.com/wp-content/uploads/2019/12/member_listinfg.png',
None], dtype=object)
array(['https://wbcomdesigns.com/wp-content/uploads/2019/12/Untitled.png',
None], dtype=object)
array(['https://wbcomdesigns.com/wp-content/uploads/2019/12/Screenshot-9.png',
None], dtype=object)
array(['https://wbcomdesigns.com/wp-content/uploads/2019/12/member_listinfg2.png',
None], dtype=object)
array(['https://wbcomdesigns.com/wp-content/uploads/2019/12/Screenshot-10.png',
None], dtype=object)
array(['https://wbcomdesigns.com/wp-content/uploads/2019/12/Screenshot-6.png',
None], dtype=object) ] | docs.wbcomdesigns.com |
Business analyst resume – V2 All Docs Business analyst resume – V2 Read History Click on a revision date from the list below to view that revision. Alternatively, you can compare two revisions by selecting them in the 'Old' and 'New' columns, and clicking 'Compare Revisions'. Old New Date Created Author Actions Old New October 30, 2019 at 3:46 pm Themo Saurus | https://buddypress-docs.armadon-theme.com/docs/business-analyst-resume-v2/history | 2022-06-25T10:42:11 | CC-MAIN-2022-27 | 1656103034930.3 | [] | buddypress-docs.armadon-theme.com |
This section specifies the syntax and semantics for the setup command. Supported for all feature sets.
The syntax for setup is shown below.
<SetupAction> ::= 'Setup' [ '[' <SetupOptionSpec> ']' ] '=' <SetupCriteriaSpec> <SetupOptionsSpec> ::= <SetupOption> [ ';' <SetupOptionSpec> ] <SetupOption> ::= ( 'NUMANode' '=' <NUMANode> ) | ( 'minHostBufferSize' '=' <MinHostBufferSize> ) | ( 'State' '=' <State> ) | ( 'TxDescriptor' '=' <TxDescriptor> ) | ( 'TxPorts' '=' <PortNumberSpec> ) | ( 'RxPorts' '=' <PortNumberSpec> ) | ( 'TxPortPos' '=' <32-bit decimal value> ) | ( 'TxIgnorePos' '=' <32-bit decimal value> ) | ( 'RxCRC' '=' <TrueFalseValue> ) | ( 'TxMetaData' '=' 'TimeStamp' ) | ( 'UseWL' '=' <TrueFalseValue> ) <SetupCriteriaSpec> ::= ( 'StreamId' '==' <StreamIdSpec> ) <NUMANode> ::= '0' | '1' | '2' | ... | <Number of NUMA Nodes - 1> <MinHostBufferSize> ::= '16' | '17' | '18' | ... | '131072' <State> ::= 'Active' | 'Inactive' <TxDescriptor> ::= 'PCAP' | 'NT' | 'DYN'
The Setup command defines a set of global properties and state for the given streams. The 'Setup' command should be called before 'Assign' commands as the properties of the stream define how a host buffer is selected and the state of the stream. Calling the Setup command, on a stream which is already assigned, with different NUMA node or increased minimum host buffer size parameters, will throw an error. The NUMA node setting for all streams is reset when issuing a "Delete = All" command. Calling the Setup with different state, to activate or inactivate a stream, can be done both before and after an Assign command.
The Setup command is also used to configure inline transmission, where packets received on a stream, can be re-transmitted out of an adapter. Reception and transmission need not take place on the same adapter. There are two transmission scenarios: static and dynamic. In a static setup, packets are only transmitted on one specific port. In a dynamic setup, a bit-field in the packet specifies which port the packet should be transmitted on. In a similar fashion it is possible to specify the position of a single bit in the packet, that indicates whether or not the packet should be transmitted. In order for inline transmission to work, there needs to be an application attached to the streams, to release the individual packets for transmission.
The following parameters controls the inline transmission behaviour:
- TxDescriptor: The descriptor "class" the incoming packets are prepended with.
- TxPorts: The ports that packets can be transmitted on. If a port range is specified (dynamic inline), all ports must be located on the same physical adapter.
- RxPorts: Ports of the receiving adapter(s). If not specified, it is all adapters.
- TxPortPos: Dynamic transmission port selection field offset.
- TxIgnorePos: Dynamic ignore bit offset.
- RxCRC: Specify whether incoming packets include FCS. Defaults to True.
- TxMetaData: Transmitted packets will have meta-data + a new FCS appended to them.
- UseWL: Specify whether wire-length should be used rather than cap-length for length calculations. Requires a descriptor type containing the wire-length field, such as Std or Dyn3.
A static scenario is configured by only specifying one transmission port. A dynamic scenario is configured by specifying multiple transmission ports (e.g. TxPorts=(1..3)) in conjunction with a port selection field position (e.g. TxPortPos=20). Please note and obey the limitation, that all transmission ports must be located on the same physical adapter for dynamic inline.
The "RxPorts" parameter controls what receiving adapters the transmission applies to. This is mostly useful in a qpi-bypass setup.
When using the 'TxMetaData = TimeStamp' option, the receiving timestamp is appended to the packet along with a new FCS, before inline transmission. The appended timestamp is in native UNIX nanosecond format.
Transmission cannot be configured for a stream on a receiving adapter if the stream is already in use on the receiving adapter. Transmission properties are cleared when the "Delete = All" command.
NOTE: When inline transmission is configured for a set of streams, it directly affects how filters can be assigned. It is not possible to assign a filter that distributes packets to both inline and non-inline streams. This must be done in two separate filters.
The "RxCRC" option will affect the slicing of packets captured by filters that distribute packets to inline streams. If "RxCRC" is set to "True", no slicing is allowed and an error will be thrown, if a filter is assigned with slicing. If set to "False", an implicit slicing recipe to remove the FCS on packets will be configured if not explicitly specified in the assign statement. This is however not true if the Key matcher is used to distribute packets to streams. In this case the user must take care to ensure that slicing is correctly configured.
Setup Examples
This section describes some examples of using the setup command.
Setting up the NUMA Node for a Stream
This section describes an example of setting up the NUMA node to use a specific stream and
setting a stream up to use a minimum host buffer size. The example illustrates how to set up stream 12 to use NUMA node 1.
The 'Setup' NTPL example is shown below.
Setup[NUMANode=1] = StreamId==12
The example illustrates how to set up stream 12 to use a minimum host buffer size of 32 MBytes.
The 'Setup' NTPL example is shown below.
Setup[minhostbuffersize=32] = StreamId==12
The minimum host buffer size used by stream 12 will be the smallest host buffer that is equal or larger than 32 MBytes. If no host buffers are found that fulfill this requirement, the assignment will fail.
Setting a Stream Active or Inactive
By default a stream is active and hence can receive data. This can be changed by changing the stream state. The following example sets stream 12 to 'Inactive' state and hence stops the data reception on that stream.
Setup[State=Inactive] = StreamId==12
The stream can later be reactivated again with the following command:
Setup[State=Active] = StreamId==12
Simple Static Re-transmission
This example shows how to set up inline transmission to port 1 of all packets coming in on port 0:
Setup[TxDescriptor=Dyn;TxPorts=1] = StreamId == 42
Assign[StreamId=42;Descriptor=Dyn3] = Port == 0
Dynamic Inline Transmission
This example demonstrates how to set up transmission of packets to ports based on a field in the packet descriptor. This field can be set by the application when iterating the incoming packets. Alternatively, by using the color field of a descriptor as port selection field, filters could set the color value to indicate the destination port. This example also uses tail slicing to remove the FCS from incoming packets.
Setup[TxDescriptor=Dyn; TxPorts=(0..3); TxPortPos=128; RxCRC=False] = StreamId == 42
Assign[StreamId=42;Slice=EndOfFrame[-4];Descriptor=Dyn3] = Port == (0..3)
TxPortPos is set to 128 which is the offset in bits to the "color_hi" field in the dynamic descriptor 3. The packet will be transmitted on the port number set in the bits at this offset. Due to the dynamic inline limitation of transmission ports, as described earlier, it is a prerequisite that all the transmission ports, 0 to 3, are located on the same physical adapter and if not, an error is thrown. | https://docs.napatech.com/r/Reference-Documentation/Setup-Command | 2022-06-25T09:58:10 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.napatech.com |
Where can I go for help if I do not have access to paid support?
SingleStore Forums. To ask questions and share knowledge with other members of the community, join our public forums at. SingleStore employees will also monitor the forums during business hours in California.
Ask StackOverflow. We and other SingleStore DB experts are active on the StackOverflow community. If you can’t find what you’re looking for in our documentation, just ask a question. Make sure to tag it with
#singlestore! | https://docs.singlestore.com/db/v7.8/en/support/faqs/how-support-works/where-can-i-go-for-help-if-i-do-not-have-access-to-paid-support-.html | 2022-06-25T11:36:36 | CC-MAIN-2022-27 | 1656103034930.3 | [] | docs.singlestore.com |
OnGridCreated
Sys.EventArgs OnGridCreated Property
note after the grid is created.
Example:
<telerik:RadGrid <ClientSettings> <ClientEvents OnGridCreated="GridCreated" /> </ClientSettings> </telerik:RadGrid>
function GridCreated(sender, eventArgs) { alert("Grid with ClientID: " + sender.get_id() + " was created"); } | https://docs.telerik.com/devtools/aspnet-ajax/controls/grid/client-side-programming/events/ongridcreated | 2017-11-17T20:54:35 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.telerik.com |
TOC & Recently Viewed
Recently Viewed Topics
Remove NNM from macOS
Steps
-
Delete the following directories (including subdirectories) and files with either sudo root or root privileges using the command line:
# rm /Library/LaunchDaemons/com.tenablesecurity.NNM*
# rm -r /Library/NNM
# rm -r /Library/PreferencePanes/NNM*
# rm -r /Applications/NNM
NNM is removed from your macOS system. | https://docs.tenable.com/nnm/Content/RemoveNNMmacOS.htm | 2017-11-17T21:03:47 | CC-MAIN-2017-47 | 1510934803944.17 | [] | docs.tenable.com |
10. Payment Providers¶
Payment Providers are simple classes, which create an interface from an external Payment Service Provider (shortcut PSP) to our django-SHOP framework.
Payment Providers must be aggregates of a Payment Cart Modifier. Here the Payment Cart Modifier computes extra fees when selected as a payment method, whereas our Payment Provider class, handles the communication with the configured PSP, whenever the customer submits the purchase request.
In django-SHOP Payment Providers normally are packed into separate plugins, so here we will show how to create one yourself instead of explaining the configuration of an existing Payment gateway.
A precautionary measure during payments with credit cards is, that the used e-commerce implementation never sees the card numbers or any other sensible information. Otherwise those merchants would have to be PCI-DSS certified, which is an additional, but often unnecessary bureaucratic task, since most PSPs handle that task for us.
10.1. Checkout Forms¶
Since the merchant is not allowed to “see” sensitive credit card information, some Payment Service Providers require, that customers are redirected to their site so that there, they can enter their credit card numbers. This for some customers is disturbing, because they visually leave the current shop site.
Therefore other PSPs allow to create form elements in HTML, whose content is send to their site during the purchase task. This can be done using a POST submission, followed by a redirection back to the client. Other providers use Javascript for submission and return a payment token to the customer, who himself forwards that token to the shopping site.
All in all, there are so many different ways to pay, that it is quite tricky to find a generic solution compatible for all of them.
Here django-SHOP uses some Javascript during the purchase operation. Lets explain how:
10.1.1. The Purchasing Operation¶
During checkout, the clients final step is to click onto a button labeled something like “Buy Now”.
This button belongs to an AngularJS controller, provided by the directive
shop-dialog-proceed.
It may look similar to this:
.. code-block:: html
<button shop-dialog-proceed ng-click=”proceedWith(‘PURCHASE_NOW’)” class=”btn btn-success”>Buy Now</button>
Whenever the customer clicks onto that button, the function
proceedWith('PURCHASE_NOW') is
invoked in the scope of the AngularJS controller, belonging to the given directive.
This function first uploads the current checkout forms to the server. There they are validated, and
if everything is OK, an updated checkout context is send back to the client. See
shop.views.checkout.CheckoutViewSet.upload() for details.
Next, the success handler of the previous submission looks at the given action. In
proceedWith,
we used the magic keyword
PURCHASE_NOW, which starts a second submission to the server,
requesting to begin with the purchase operation (See
shop.views.checkout.CheckoutViewSet.purchase()
for details.). This method determines he payment provider previously chosen by the customer. It
then invokes the method
get_payment_request() of that provider, which returns a Javascript
expression.
On the client, this returned Javascript expression is passed to the eval() function and executed; it then normally starts to submit the payment request, sending all credit card data to the given PSP.
While processing the payment, PSPs usually need to communicate with the shop framework, in order to
inform us about success or failure of the payment. To communicate with us, they may need a few
endpoints. Each Payment provider may override the method
get_urls() returning a list of
urlpatterns, which then is used by the Django URL resolving engine.
class MyPSP(PaymentProvider): namespace = 'my-psp-payment' def get_urls(self): urlpatterns = [ url(r'^success$', self.success_view, name='success'), url(r'^failure$', self.failure_view, name='failure'), ] return urlpatterns def get_payment_request(self, cart, request): js_expression = 'scope.charge().then(function(response) { $window.location.href=response.data.thank_you_url; });' return js_expression @classmethod def success_view(cls, request): # approve payment using request data returned by PSP cart = CartModel.objects.get_from_request(request) order = OrderModel.objects.create_from_cart(cart, request) order.populate_from_cart(cart, request) order.add_paypal_payment(payment.to_dict()) order.save() thank_you_url = OrderModel.objects.get_latest_url() return HttpResponseRedirect(thank_you_url) @classmethod def failure_view(cls, request): """Redirect onto an URL informing the customer about a failed payment""" cancel_url = Page.objects.public().get(reverse_id='cancel-payment').get_absolute_url() return HttpResponseRedirect(cancel_url)
Note
The directive
shop-dialog-proceed evaluates the returned Javascript expression inside
a chained
then(...)-handler from the AngularJS promise framework. This means that such a
function may itself return a new promise, which is resolved by the next
then()-handler.
As we can see in this example, by evaluating arbitrary Javascript on the client, combined with HTTP-handlers for any endpoint, django-SHOP is able to offer an API where adding new Payment Service Providers doesn’t require any special tricks. | http://django-shop.readthedocs.io/en/latest/reference/payment-providers.html | 2017-11-17T21:00:15 | CC-MAIN-2017-47 | 1510934803944.17 | [] | django-shop.readthedocs.io |
Destinations are groups or teams of support staff that serve visitor enquiries from your channels.
Set how your support team serves and manages visitor and customer enquiries. The destination type can be Staff (using the maaiiconnect dashboard or mobile app), a PSTN phone, or a SIP trunk.
To create a new destination:
- From the navigation menu, go to Destination.
- In the Destination page, click Create Destination.
Create New Destination
- In Destination Name, enter the name of the support team that will handle visitor and customer enquiries.
- In Support Language and Support Location, select the language(s) and location(s) your support team covers.
Destination Rules
- In Destination Rules, select the preferred destination type (Staff list, PSTN or SIP Trunk).
- Add the endpoint according to the selected destination type: maaiiconnect Staff for Staff list, or the phone number for PSTN Phone.
You can add up to 30 destination endpoints. All endpoints will simultaneously receive a notification whenever there is an open enquiry.
- Click Create. The newly created destination will be listed in the destination list and can be used in your channel routing.
Destination List
runai logs
Description
Show the logs of a Job.
Synopsis
runai logs <job-name> [--follow | -f] [--pod string | -p string] [--since duration] [--since-time date-time] [--tail int | -t int] [--timestamps] [--loglevel value] [--project string | -p string] [--help | -h]
Options
<job-name> - The name of the Job to run the command with. Mandatory.
--follow | -f
Stream the logs.
--pod | -p
Specify a specific pod name. When a Job fails, it may start a couple of times in an attempt to succeed. The flag allows you to see the logs of a specific instance (called 'pod'). Get the name of the pod by running
runai describe job <job-name>.
--instance (string) | -i (string)
Show logs for a specific instance in cases where a Job contains multiple pods.
--since (duration)
Return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. The flags since and since-time cannot be used together.
--since-time (date-time)
Return logs after specified date. Date format should be RFC3339, example:
2020-01-26T15:00:00Z.
--tail (int) | -t (int)
# of lines of recent log file to display.
--timestamps
Include timestamps on each line in the log output.

The command shows the logs of the first process in the container. For training Jobs, this would be the command run at startup. For interactive Jobs, the command may not show anything.
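A few example invocations (the Job name my-train-job is illustrative; replace it with your own Job's name):

runai logs my-train-job -f                        # stream the logs
runai logs my-train-job --tail 100 --timestamps   # last 100 lines, with timestamps
runai logs my-train-job --since 15m               # only logs from the last 15 minutes
runai logs my-train-job --pod <pod-name>          # logs of a specific pod, as shown by 'runai describe job my-train-job'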
See Also
- Training Workloads. See Quickstart document: Launch Unattended Training Workloads. | https://docs.run.ai/Researcher/cli-reference/runai-logs/ | 2021-05-06T07:42:12 | CC-MAIN-2021-21 | 1620243988741.20 | [] | docs.run.ai |
Whether to use a Light's color temperature when calculating the final color of that Light."
Enable to use the correlated color temperature (abbreviated as CCT) for adjusting light color. CCT is a natural way to set light color based on the physical properties of the light source. The CCT is multiplied with the color filter when calculating the final color of a light source. The color temperature of the electromagnetic radiation emitted from an ideal black body is defined as its surface temperature in degrees Kelvin. White is 6500K according to the D65 standard. Candle light is 1800K.
If you want to use lightsUseColorTemperature, lightsUseLinearIntensity has to be enabled to ensure physically correct output.
See Also: GraphicsSettings.lightsUseLinearIntensity, Light.ColorTemperature. | https://docs.unity3d.com/es/2017.4/ScriptReference/Rendering.GraphicsSettings-lightsUseColorTemperature.html | 2021-05-06T05:50:34 | CC-MAIN-2021-21 | 1620243988741.20 | [] | docs.unity3d.com |
Getting the source
Source code is hosted on GitHub. You can get it using a git client:
$ git clone
Installation
You can install it via pip:
$ pip install pyexcel
For individual Excel file formats, please install the corresponding plugin packages as needed:
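For example, to handle the xls and xlsx formats used in the example below, the pyexcel-xls and pyexcel-xlsx plugins can be installed with pip (a minimal sketch; install only the plugins for the formats you actually need):

$ pip install pyexcel-xls pyexcel-xlsx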
Please import them before you start to access the desired file formats:
from pyexcel.ext import plugin
or:
import pyexcel.ext.plugin
Usage
Suppose you want to process the following excel data:
Here are the example usages:
>>> import pyexcel as pe
>>> import pyexcel.ext.xls   # import it to handle xls file
>>> import pyexcel.ext.xlsx  # import it to handle xlsx file
- Sheet: Data manipulation
- Sheet: Data filtering
- Work with data series in a single sheet
- Work with multi-sheet file
- Sheet: Data conversion
- How to obtain records from an excel sheet
- How to get an array from an excel sheet
- How to save an python array as an excel file