Smartech provides a mobile app SDK that helps you track, analyze, and engage with your app customers.
Quick Summary of Smartech App SDK capabilities
- User and Event Tracking
- Customer Engagement Channels
- App Push Notifications
- In-app Messaging
- Product Experience
Integration Guides
This guide will help you get started with integrating the SDK in your app.
Netcore's GitHub repo is available here.
Requirements
This topic summarizes the requirements for installing Tanium Core Platform servers.
For the host system requirements of the Tanium Client, see Tanium Client Management User Guide: Tanium Client and Client Management requirements.
Installation package and license files
Tanium provides the following installation package files and license file required to install the Tanium Server, Tanium Module Server, and Tanium Zone Server:
- SetupServer.exe
- SetupModuleServer.exe
- SetupZoneServer.exe
- tanium.license
The installation package for each of these three servers must have the same build number (for example, all must have build number 7.4.5.1200). To complete the procedures in this guide, be sure you can copy these files to, and between, the host computers.
The license is bound to the hostname you assign to the Tanium Server. In high availability (HA) deployments, the license must specify the hostnames of both Tanium Servers. Contact Tanium Support if the server hostnames change.
Server version and host system requirements
Table 1 summarizes basic requirements for Tanium Core Platform and database servers that are installed on customer-provided Windows infrastructure. For detailed version specifications and sizing guidelines, see Reference: Host system resource guidelines.
Tanium solutions (modules and shared services) might have additional requirements for Tanium Core Platform servers. Table 2 provides links to the user guide sections that list these requirements.
The Standard, Enterprise, and Datacenter editions of the following Windows Server platforms are supported. The Server Core and Nano Server options are not supported.
Click the links in the following table to see the minimum Tanium Core Platform version (Tanium dependencies) and other platform server requirements for each Tanium module and shared service.
For supported version combinations, see Tanium Core Platform server and client compatibility.
Internet access, network connectivity, and firewall
Tanium components use TCP/IP to communicate over IPv4 and IPv6 networks. Tanium Core Platform 7.2 and earlier supports only IPv4. Contact Tanium Support if you need IPv6 support in version 7.3 or later. You must work with your network administrator to ensure that the Tanium components are provisioned with IP addresses and can use DNS to resolve host names.
During installation and ongoing operations, the Tanium Server and the web browser that you use to access the Tanium Console must be able to connect to the Internet to import updates to Tanium Core Platform components and modules. The Tanium Server might need to connect to additional URLs based on the components you import. For a list of the required URLs, see Tanium Core Platform Deployment Reference Guide: Internet URLs required.
The Tanium Server must be able to connect to the Tanium database server and Module Server. In an HA deployment, the Tanium Servers must be able to connect to each other over a reliable Ethernet connection. All these connections require a minimum throughput of 1 Gbps and a maximum round-trip latency of 30 ms.
If your enterprise network environment requires outbound Internet connections to traverse a proxy server, you can configure the proxy settings as described under Tanium Console User Guide: Configuring proxy server settings.
Table 3 summarizes the Tanium processes and default values for ports used in Tanium Core Platform communication. Host and network firewalls might require configuration to allow the specified processes to send and receive TCP data over the listed ports. The Tanium installer opens required ports in the Windows host firewall. You must work with your network security administrator to ensure the platform components can communicate through any security barriers (such as firewalls) in their communication path. For a detailed explanation, see Tanium Core Platform Deployment Reference Guide: Network ports.
Configure firewall policies to open ports for Tanium traffic with TCP-based rules instead of application identity-based rules. For example, on a Palo Alto Networks firewall, configure the rules with service objects or service groups instead of application objects or application groups.
Your security administrator might also need to create rules to exempt or exclude Tanium processes that run on the host computers from blocking by antivirus or processing by encryption or other security and management stack software. For details, see Tanium Core Platform Deployment Reference Guide: Host system security exceptions.
The following figure illustrates how the Tanium Core Platform uses these ports in an HA deployment on Windows infrastructure.
SSL/TLS certificates
SSL/TLS certificate and key exchanges secure connections to the Tanium™ Console or Tanium™ API, as well as connections between the Tanium Server and Tanium Module Server. When you run the server installation wizards, they prompt you to generate a self-signed certificate or specify the location of a certificate that was issued by a commercial certificate authority (CA) or your own enterprise CA. As a best practice to facilitate troubleshooting, use the self-signed certificates during initial installation and replace them with CA-issued certificates later. This practice enables you to separate potential installation issues from TLS connection issues. For details, see Tanium Core Platform Deployment Reference Guide: Securing Tanium Console, API, and Module Server access.
Administrator account permissions
Work with your Microsoft Active Directory (AD) administrator to provision the accounts needed during Tanium Core Platform installations or upgrades and for post-installation or post-upgrade activities.
Administrator accounts for installations and upgrades
The following table lists the administrator accounts required to install or upgrade Tanium Core Platform servers, create Tanium databases, or deploy Tanium Clients. You can use a single service account to install the Tanium Server and to create databases on the SQL or PostgreSQL server, as long as the account has all the required group memberships and permissions for those servers. You can also use a single service account to install the Zone Server and Zone Server Hub. You must use a separate service account to install the Module Server.
Administrator accounts for post-installation/upgrade activities
The following table lists the administrator accounts required for regular, ongoing operations performed after installations or upgrades, including running the services for Tanium Core Platform servers and Tanium Clients, and accessing Tanium databases. If you reuse the accounts used for installations and upgrades, first reduce the account permissions to those specified in the following table. You can use a single service account to run the Tanium Server service and access the Tanium databases. You can also use a single service account to run the Zone Server and Zone Server Hub services. You must use a separate service account to run the Module Server service.
IGEL Client 20.1.200
This release includes bug fixes and changes for consistency with other deviceTRUST 20.1 components, and will be built into a forthcoming IGEL OS release. Alternatively, binaries are available from [email protected].
IGEL and Custom Properties
The previous Custom category of properties has been removed and replaced with the following:
- DEVICE_IGEL_ENVIRONMENT_COUNT has been added, and represents the number of IGEL environment variables.
- DEVICE_IGEL_ENVIRONMENT_X_NAME has been added, and represents the name of the IGEL environment variable.
- DEVICE_IGEL_ENVIRONMENT_X_VALUE has been added, and represents the value of the IGEL environment variable.
- Environment variables beginning with DEVICETRUST were ignored in previous releases; they are now included.
Location Properties
The changes to the location properties are as follows:
- DEVICE_LOCATION_LATITUDE and DEVICE_LOCATION_LONGITUDE have been replaced with the DEVICE_LOCATION_POSITION property.
- DEVICE_LOCATION_COUNTRY_CODE and DEVICE_LOCATION_COUNTRY have been combined into a single DEVICE_LOCATION_COUNTRY property representing the ISO-Alpha 2 country code.
- DEVICE_LOCATION_PROVIDER has been added, and is set to Third Party when location properties are provided.
- DEVICE_LOCATION_SOURCE has been added, and is set to WiFi when location properties are provided.
- DEVICE_LOCATION_STATE_DISTRICT has been removed.
Network Properties
The changes to the network properties are as follows:
- DEVICE_NETWORK_X_IP has been removed.
- DEVICE_NETWORK_X_IPV4_ENABLED has been removed.
- DEVICE_NETWORK_X_IPV6_ENABLED has been removed.
- DEVICE_NETWORK_X_IPV4 has been added, and represents a semi-colon separated list of IPv4 addresses bound to the network adapter.
- DEVICE_NETWORK_X_IPV6 has been added, and represents a semi-colon separated list of IPv6 addresses bound to the network adapter.
- DEVICE_NETWORK_X_IPV4_SUBNET has been added, and is set to the IPv4 address masked with the subnet mask, with the subnet prefix length appended in CIDR notation. For example, an IP address of 192.168.10.12 with subnet 255.255.255.0 produces the value 192.168.10.0/24. This new value provides an accurate check of whether a device is part of a specific network (see the sketch after this list).
- DEVICE_NETWORK_X_IPV6_SUBNET has been added, and is set to the IPv6 address masked with the subnet and with the subnet prefix length appended in CIDR notation.
- DEVICE_NETWORK_X_STATUS has been updated for consistency with other platforms, and is now set to Up, Down or Unknown.
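For illustration only, the following minimal C sketch shows how such a masked CIDR value can be derived from an address and netmask. It is not part of the deviceTRUST client; the address and mask literals are simply the example values from the note above.

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const char *addr_str = "192.168.10.12";  /* example address from the note above */
        const char *mask_str = "255.255.255.0";  /* example subnet mask */
        struct in_addr addr, mask, network;
        char buf[INET_ADDRSTRLEN];
        uint32_t m;
        int prefix = 0;

        if (inet_pton(AF_INET, addr_str, &addr) != 1 ||
            inet_pton(AF_INET, mask_str, &mask) != 1)
            return 1;

        /* Mask the address with the subnet mask. */
        network.s_addr = addr.s_addr & mask.s_addr;

        /* Count the leading one bits of the mask to get the CIDR prefix length. */
        for (m = ntohl(mask.s_addr); m & 0x80000000u; m <<= 1)
            prefix++;

        inet_ntop(AF_INET, &network, buf, sizeof(buf));
        printf("%s/%d\n", buf, prefix);  /* prints 192.168.10.0/24 */
        return 0;
    }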
Bug Fixes
- Various performance and reliability improvements.
- Fixed an occasional crash shortly after establishing a Citrix session. This crash occurred any time the client was loaded, even if the deviceTRUST Host did not request properties from the client.
- Fixed an issue where the DEVICE_HARDWARE_BIOS_RELEASEDATE was not formatted as an ISO 8601 date/time string.
Compatibility
The deviceTRUST IGEL Client 20.1.200 will automatically convert Custom, Location and Network properties to their previous values when connecting to a 19.4 or earlier deviceTRUST Host. | https://docs.devicetrust.com/docs/releases-igel-20.1.200/ | 2021-02-24T21:04:11 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.devicetrust.com |
Configuring surveys in the BMC Remedy ITSM Requester console
Use the Survey Question Configuration form to set up survey questions for your requesters. Surveys give the Business Service Manager or IT Manager an indication of customer satisfaction levels and how the BMC Service Desk is performing.
You can configure a survey for a specific company or select Global to make the survey available to all companies.
Note
You must have Requester Console Config or Requester Console Master permissions to access this form.
If your environment is running the BMC Service Request Management application, the Request Entry console feature of that application replaces the BMC Remedy ITSM Requester console. For information about surveys in BMC Service Request Management, see Setting up surveys and viewing results.
To configure surveys for the BMC Remedy ITSM Requester console
- From the Application Administration Console, click the Custom Configuration tab.
- From the Application Settings list, choose Requester Console > Configuration > Survey Configuration, and then click Open. The Survey Question Configuration form appears.
- Select the company to which this survey applies, or select Global to make this survey available to all companies.
- For Request Type, select Change or Incident. The survey is specific to the specified request type, either change or incident. (The application must be installed for the appropriate request type to appear.)
- If users select a summary definition that is mapped to the change request type, they receive the survey if the Request Type is set to Change.
- If users select a summary definition that is mapped to the incident request type, they receive the survey if the Request Type is set to Incident.
- In the four Question fields, enter questions to which you want requesters to respond. Only one question is required, but you can enter up to four questions.
- Click Save.
Note
The Remedy Mid Tier URL for survey notifications must be customized for your environment. For further information, see Customizing the out-of-the-box survey notification URL.
Point 2 should be:
From the Application Settings list, choose Service Request Management > Advanced > Survey Configuration, and then click Open. The Survey Question Configuration form appears.
Hi Amr,
Thanks for your comment, and sorry for the delayed response.
I think the procedure in this topic is for the Requester Console, which appears in BMC Remedy ITSM if BMC Service Request Management is not installed. See the second part of the Note near the top.
Regards,
Cathy
The 'Working with Surveys' link in the top Note box is invalid. The correct link for SRM 9.1 is
Thanks, Fred. I have fixed the link.
Regards,
Cathy | https://docs.bmc.com/docs/itsm91/configuring-surveys-in-the-bmc-remedy-itsm-requester-console-608491244.html | 2021-02-24T21:18:05 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.bmc.com |
The Kinderpedia chat module allows you to send and receive real-time messages, initiate and participate in group conversations, and easily transmit documents and images. Because we value the dialogue as much as you do, we have made the chat module more attractive and easier to use.
All conversations in one place, for easy navigation
Kinderpedia users now communicate easily with the new messaging module interface. The left column shows the list of private conversations, with each user's profile picture, their status (green for online, gray for offline), a preview of the latest message, and the time it was sent.
The same column also shows group conversations. When you select a group conversation, the top of the window displays the name of the person who started the conversation and the group they are communicating with, for example: Denisa Lithium (educator) > Administrators.
Easily find any conversation in the archive
With the search function you can now find an older conversation with someone in the list of conversations, and you can also search for a word or phrase within a conversation. This makes it easy to find an older discussion with a parent if you remember an important keyword.
To start a new conversation with a person or group of people, click the pencil icon located above the search button.
Communication is more fun with emoji
We have added a library of emoticons (emoji) that makes conversations more enjoyable and fun. From the same place you can attach images or files with a single click, and the other person can download or view them in real time.
If you want to know when a message you read later was sent, simply hover over that message and you will see the date and time it was sent.
Using the TF Heat Template

Date: 2020-03-31
Heat is the orchestration engine of the OpenStack program. Heat enables launching multiple cloud applications based on templates that consist of text files.
Introduction to Heat

Starting with TF Release 3.0.2, TF Heat resources and templates are autogenerated from the TF schema, using Heat Version 2 resources. TF Release 3.0.2 is the minimum required version for using Heat with TF in 3.x releases. The TF Heat Version 2 resources are of the following hierarchy: OS::ContrailV2::<ResourceName>.
The generated resources and templates are part of the TF.
Heat Version 2 with Service Templates and Port Tuple Sample Workflow

With TF service templates Version 2, the user can create ports and bind them to a virtual machine (VM)-based service instance, by means of a port-tuple object. All objects created with the Version 2 service template are directly visible to the TF.
The following is an example of how to create a service template using Heat.
Define a template to create the service template.
service_template.yaml

    heat_template_version: 2013-05-23

    description: >
      HOT template to create a service template

    parameters:
      name:
        type: string
        description: Name of service template
      mode:
        type: string
        description: service mode
      type:
        type: string
        description: service type
      image:
        type: string
        description: Name of the image
      flavor:
        type: string
        description: Flavor
      service_interface_type_list:
        type: string
        description: List of interface types
      shared_ip_list:
        type: string
        description: List of shared ip enabled-disabled
      static_routes_list:
        type: string
        description: List of static routes enabled-disabled

    resources:
      service_template:
        type: OS::ContrailV2::ServiceTemplate
        properties:
          name: { get_param: name }
          service_mode: { get_param: mode }
          service_type: { get_param: type }
          image_name: { get_param: image }
          flavor: { get_param: flavor }
          service_interface_type_list: { "Fn::Split" : [ ",", Ref: service_interface_type_list ] }
          shared_ip_list: { "Fn::Split" : [ ",", Ref: shared_ip_list ] }
          static_routes_list: { "Fn::Split" : [ ",", Ref: static_routes_list ] }

    outputs:
      service_template_fq_name:
        description: FQ name of the service template
        value: { get_attr: [ service_template, fq_name ] }
Create an environment file to define the values to put in the variables in the template file.
service_template.env

    parameters:
      name: contrail_svc_temp
      mode: transparent
      type: firewall
      image: cirros
      flavor: m1.tiny
      service_interface_type_list: management,left,right,other
      shared_ip_list: True,True,False,False
      static_routes_list: False,True,False,False
Create the Heat stack by launching the template and the environment file, using the following command:
heat stack-create stack1 -f service_template.yaml -e service_template.env
OR use this command for recent versions of OpenStack
openstack stack create -e <env-file-name> -t <template-file-name> <stack-name> | https://docs.tungsten.io/en/latest/tungsten-fabric-installation-and-upgrade-guide/heat-template-vnc.html | 2021-02-24T20:35:39 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.tungsten.io |
Dataset not configured correctly during normalization
Use this information if, during normalization, the dataset is not configured correctly to create or update instances. Verify that the dataset settings are not preventing the common data model update.
To troubleshoot the dataset configuration
- Click Dataset Configuration, and verify that the dataset is not set to Trusted.
If the dataset is marked as Trusted, the data model changes are not normalized.
- Verify the setting for Inline error handling:
- If set to Reject, the CMDB is not updated if there is a normalization error. Enable logging to see the error messages.
- If set to Accept, the CMDB is updated even if there is a normalization error.
- If the Normalization mode is set to Continuous, verify that the event or time is configured.
2.3. Lesson: Navigating the Map Canvas: Basic Navigation Tools
As you zoomed in and out, notice that the Scale value in the Status Bar changes. The Scale value represents the Map Scale. In general, the number to the right of : represents how many times smaller the object you are seeing in the Map Canvas is to the actual object in the real world.
No, not yet.
The Proton C is based on ARM, and offers some advantages over the Pro Micro and Elite C - especially better performance and more flash memory, which enables you to work with your firmware more freely if you're a power user.
However, the part of QMK that lets split keyboards work does not yet fully work with ARM. It's being worked on and the current estimate seems to be that it should be ready by Q3 2020.
In the meantime, my advice is to use an Elite C if you wish to use USB C. If you wish to upgrade to a Proton C in the future, you should consider socketing your controller with Mill Max Sockets so it can more easily be replaced.
The QMK Proton C, image courtesy of qmk.fm.
On April 28, 2021, Google is changing the required permissions for attaching IAM roles to service accounts. If you are using IAM roles for your Google service accounts, please see Changes to User Management.
A reference dataset is a reference to a recipe's outputs that has been added to a flow other than the one where the recipe is located. When a reference dataset is selected, details are available in the context menu.
NOTE: A reference dataset is a read-only object in the flow where it is referenced. You cannot select or use a reference dataset until a reference has been created in the source flow from the recipe to use. For more information, see View for Recipes.
To add a reference dataset:
- From the source flow, select the reference object and click Add to Flow in the Details options.
- From the Add Untitled to dialog, where Untitled is the name of the reference object, select the required flow or click Create new flow. For more information, see Create Flow Page.
Figure: Reference Dataset icons
Icon context menu
The following menu options are available when you select the plus icon next to the referenced dataset:
- Add new recipe: Add a new recipe extending from the current recipe. This new recipe operates on the outputs of the original recipe.
- Add Join: Add a join step as the new last step to the recipe. For more information, see Join Window.
- Add Union: Add a union step as the new last step to the recipe. For more information, see Union Page.
Details options
The following options are available in the details context menu when you select a referenced dataset.
- Add:
- Recipe: Add a recipe for this dataset.
- Join: Join this dataset with another recipe or dataset. If this dataset does not have a recipe for it, a new recipe object is created to store this step. See Join Window.
- Union: Union this dataset with one or more recipes or datasets. If this dataset does not have a recipe for it, a new recipe object is created to store this step. See Union Page.
- View in library: Review details on the flows where the dataset is used.
- Go to original reference: Open in Flow View the flow containing the original dataset for this reference.
- Remove from Flow: Remove the dataset from the flow.
All dependent flows, outputs, and references are not removed from the flow. You can replace the source for these objects as needed.
NOTE: References to the deleted dataset in other flows remain broken until the dataset is replaced.
Tip: You can also right-click the referenced dataset to view all the menu options.
Update the user name and password in the customizations for the hosts in the shared edge and compute cluster so that the hosts remain compliant, because the host profile does not contain credential information.
Procedure
- Log in to vCenter Server by using the vSphere Client.
- Open a Web browser and go to.
- Log in by using the following credentials.
- Update the sfo01-w01hp-comp01 host profile.
- From the Home menu, select Policies and Profiles and click Host Profiles.
- Right-click sfo01-w01hp-comp01, and select Copy Settings from Host.
- Select sfo01w01esx01.sfo01.rainpole.local, and click OK.
- Edit the sfo01-w01hp-comp01 host profile.
- On the Host Profiles page, right-click sfo01-w01hp-comp01, and select Edit Host Customizations.
The Edit Host Customizations wizard appears.
- On the Select hosts page, select all hosts and click Next.
- On the Customize hosts page, in the Path column, click the filtering icon and enter active directory.
- For the User Name and Password properties, enter the following values.
- Click Finish.
- Verify compliance and remediate the hosts.
- On the Host Profiles page, click sfo01-w01hp-comp01, and click the Monitor tab.
- Click Compliance, click Actions, and select Check Host Profile Compliance.
On the Host profile page, in the Host Profile Compliance column, the first host shows as Compliant, and the other hosts show as Not Compliant.
- Select all hosts that are not compliant and click Remediate.
- In the Remediate dialog box, select Automatically reboot hosts that require remediation.
- Click OK.
All hosts show as Compliant. | https://docs.vmware.com/en/VMware-Validated-Design/5.0/com.vmware.vvd.sddc-deploya.doc/GUID-525D01DD-F57E-46D6-960E-E74C399847F4.html | 2021-02-24T20:28:01 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.vmware.com |
The UEM Managed Content repository refers to a location where administrators with the appropriate permissions have complete control over the files that are stored within it. Using the VMware Workspace ONE Content app, the end users can access the added content from the repository labeled UEM Managed but cannot edit the content.
Features
Managed Content repository provides the following features:
- Uploading of files manually
- Options to configure and provide permissions for individual files
- Sync options to control content accessed on end-user devices
- List View for advanced file management options
Security
To protect the content that is stored and synced from the repository to end-user devices, the following security features are available:
- SSL encryption secures data during transit between the UEM console and end-user devices.
- Roles with the security pin for controlled access to the content.
Deployment
The UEM Managed repository content is stored in the Workspace ONE UEM database. You can choose to host the database in the Workspace ONE UEM cloud or on-premises, depending on your deployment model. For more information, see Configure the AirWatch Managed Content Category Structure. | https://docs.vmware.com/en/VMware-Workspace-ONE-UEM/2011/MCM/GUID-AWT-OVERV-AWMC.html | 2021-02-24T21:45:00 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.vmware.com |
Displays information about powered-off VMs, idle VMs, snapshots, and orphaned disks. This information helps you identify the amount of resources that can be reclaimed and provisioned to other objects in your environment, or the potential savings that can be achieved each month.
The types of VMs are ranked in the order of their importance in a reclamation action. A VM whose attributes match more than one VM type is included with the higher-ranking VM type. Grouping the VMs this way eliminates duplicates during calculations. As an example, powered-off VMs are ranked higher than snapshots, so that a powered-off VM that also has a snapshot appears only in the powered-off VM group.
If you exclude a given type of VM, all VMs matching this type are included with the next lower-ranked group they match. For example, to list all snapshots regardless of whether their corresponding VMs are powered-off or idle, deselect the Powered-off VMs and Idle VMs check boxes.
Further, you can configure how long a given class of VMs must be in the designated state - powered-off, for example, or idle - to be included in the reclamation exercise. You also can choose to hide the cost savings calculation. | https://docs.vmware.com/en/vRealize-Operations-Manager/8.3/com.vmware.vcom.core.doc/GUID-7DBBF4C4-D4A0-4212-80DE-31E8EA1BB4F0.html | 2021-02-24T21:48:26 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.vmware.com |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::Amplify::Types::DeleteBranchRequest
- Defined in:
- (unknown)
Overview
Note:
When passing DeleteBranchRequest as input to an Aws::Client method, you can use a vanilla Hash:
{ app_id: "AppId", # required branch_name: "BranchName", # required }
The request structure for the delete branch request.
Instance Attribute Summary collapse
- #app_id ⇒ String
The unique ID for an Amplify app.
- #branch_name ⇒ String
The name for the branch.
Instance Attribute Details
#app_id ⇒ String
The unique ID for an Amplify app.
#branch_name ⇒ String
The name for the branch. | https://docs.amazonaws.cn/sdk-for-ruby/v2/api/Aws/Amplify/Types/DeleteBranchRequest.html | 2021-02-24T20:19:21 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.amazonaws.cn |
%Net.DB.DataSource
abstract class %Net.DB.DataSource
This class implements the IRIS Native API for ObjectScript DataSource interface. At this time, that interface consists solely of the CreateConnection() method.
IRIS Native API for ObjectScript.
Method Inventory (Including Private)
Methods (Including Private)
classmethod CreateConnection(host As %String(MAXLEN="")="127.0.0.1", port As %Integer = 51773, namespace As %String(MAXLEN="")="USER", user, pwd, timeout As %Integer, logfile As %String(MAXLEN=255)="") as %Net.DB.Connection [ Language = objectscript ]
CreateConnection() accepts host, port, namespace, user, and pwd parameters. Refer to %Net.DB.Connection for more information on these parameters. CreateConnection() returns an instance of %Net.DB.Connection.
In this tutorial we will describe the necessary steps to be able to generate monthly invoices for each student using several companies.
This option is useful when a kindergarten uses 2 or 3 companies to manage all kindergarten activities. For example, a child may be billed for a monthly fee using one company, for meals using another company, and for transportation using another company.
Required:
- To make these settings, you will need to know how to set up a tariff plan; all the details can be found here:
- We need to have saved several companies:
Once we have a billing plan created, we can go to Billing > Configuration > Billing Plans > Actions > Edit Billing Plan, then click "Next" until we reach step 5 (picture below).
From here, once we add a product to the invoice using the green + button on the right, we can select whether that product will be included in the main invoice or not. If we select another company, the system will generate a separate invoice for that product every month, automatically, according to the tariff plan settings.
How to add products: | https://docs.kinderpedia.co/en/articles/4824865-automatic-invoicing-using-multiple-companies-to-bill-students-how-we-generate-invoices-using-several-companies | 2021-02-24T20:56:58 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.kinderpedia.co |
Registering Interfaces
This section presents a detailed discussion of the process of registering an RPC interface.
The information in this section is presented in the following topics:
- Interface Registration Functions
- Entry-Point Vectors
- Manager EPVs
- Registering a Single Implementation of an Interface
- Registering Multiple Implementations of an Interface
- Rules for Invoking Manager Routines
- Dispatching a Remote Procedure Call to a Server-Manager Routine
- Supplying Your Own Object-Inquiry Function
Interface Registration Functions
Servers register their interfaces by calling the RpcServerRegisterIf function. Complex server programs often support more than one interface. Server applications must call this function once for each interface they support.
Also, servers can support multiple versions of the same interface, each with its own implementation of the interface's functions. If your server program does this, it must provide a set of entry points. An entry point is a manager routine that dispatches calls for a version of an interface. There must be one entry point for each version of the interface. The group of entry points is called an entry point vector. For details, see Entry-Point Vectors.
In addition to the standard function RpcServerRegisterIf, RPC also supports other interface registration functions. The RpcServerRegisterIf2 function extends the capabilities of RpcServerRegisterIf by enabling you to specify a set of registration flags (see Interface Registration Flags), the maximum number of concurrent remote procedure call requests the server can accept, and the maximum size in bytes of incoming data blocks.
The RPC library also contains a function called RpcServerRegisterIfEx. Like the RpcServerRegisterIf function, this function registers an interface. Your server program can also use this function to specify a set of registration flags (see Interface Registration Flags), the maximum number of concurrent remote procedure call requests the server can accept, and a security callback function.
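To make the shape of these calls concrete, the following hedged C sketch registers a hypothetical hello interface with RpcServerRegisterIfEx. The interface name, the MIDL-generated handle hello_v1_0_s_ifspec, the chosen flag, and the callback logic are illustrative assumptions, not values taken from this documentation.

    #include <windows.h>
    #include <rpc.h>
    #include "hello.h"   /* hypothetical MIDL-generated header for a 'hello' interface */

    /* Optional security callback that the run-time library invokes when a client
       binds to the interface; return RPC_S_OK to allow the call. */
    RPC_STATUS RPC_ENTRY IfSecurityCallback(RPC_IF_HANDLE InterfaceUuid, void *Context)
    {
        /* A real callback would inspect the client binding supplied in Context. */
        return RPC_S_OK;
    }

    RPC_STATUS register_interface(void)
    {
        /* Same IfSpec/MgrTypeUuid/MgrEpv parameters as RpcServerRegisterIf, plus
           registration flags, a limit on concurrent calls, and the callback. */
        return RpcServerRegisterIfEx(hello_v1_0_s_ifspec,
                                     NULL,                            /* nil manager type UUID */
                                     NULL,                            /* default manager EPV   */
                                     RPC_IF_ALLOW_SECURE_ONLY,        /* registration flags    */
                                     RPC_C_LISTEN_MAX_CALLS_DEFAULT,  /* max concurrent calls  */
                                     IfSecurityCallback);
    }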
The RpcServerRegisterIf, RpcServerRegisterIfEx, and RpcServerRegisterIf2 functions set values in the internal interface registry table. This table is used to map the interface UUID and object UUIDs to a manager EPV. The manager EPV is an array of function pointers that contains exactly one function pointer for each function prototype in the interface specified in the IDL file.
For information on supplying multiple EPVs to provide multiple implementations of the interface, see Multiple Interface Implementations.
The run-time library uses the interface registry table (set by calls to the function RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2) and the object registry table (set by calls to the function RpcObjectSetType) to map interface and object UUIDs to the function pointer.
When you want your server program to remove an interface from the RPC run-time library registry, call the RpcServerUnregisterIf function. After the interface is removed from the registry, the RPC run-time library will no longer accept new calls for that interface.
Entry-point Vectors
The manager entry-point vector (EPV) is an array of function pointers that point to implementations of the functions specified in the IDL file. The number of elements in the array corresponds to the number of functions specified in the IDL file. RPC supports multiple entry-point vectors representing multiple implementations of the functions specified in the interface.
The MIDL compiler automatically generates a manager EPV data type for use in constructing manager EPVs. The data type is named if-name**_SERVER_EPV**, where if-name specifies the interface identifier in the IDL file.
The MIDL compiler automatically creates and initializes a default manager EPV on the assumption that a manager routine of the same name exists for each procedure in the interface and is specified in the IDL file.
When a server offers multiple implementations of the same interface, the server must create one additional manager EPV for each implementation. Each EPV must contain exactly one entry point (address of a function) for each procedure defined in the IDL file. The server application declares and initializes one manager EPV variable of type if-name**_SERVER_EPV** for each additional implementation of the interface. To register the EPVs it calls RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2 once for each object type it supports.
When the client makes a remote procedure call to the server, the EPV containing the function pointer is selected based on the interface UUID and the object type. The object type is derived from the object UUID by the object-inquiry function or the table-driven mapping controlled by RpcObjectSetType.
Manager EPVs
By default, the MIDL compiler uses the procedure names from an interface's IDL file to generate a manager EPV, which the compiler places directly into the server stub. This default EPV is statically initialized using the procedure names declared in the interface definition.
To register a manager using the default EPV, specify NULL as the value of the MgrEpv parameter in a call to either the RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2 function. If the routine names used by a manager correspond to those of the interface definition, you can register this manager using the default EPV of the interface generated by the MIDL compiler. You can also register a manager using an EPV that the server application supplies.
A server can (and sometimes must) create and register a non-null manager EPV for an interface. To select a server application–supplied EPV, pass the address of an EPV whose value has been declared by the server as the value of the MgrEpv parameter. A non-null value for the MgrEpv parameter always overrides a default EPV in the server stub.
The MIDL compiler automatically generates a manager EPV data type (RPC_MGR_EPV) for a server application to use in constructing manager EPVs. A manager EPV must contain exactly one entry point (function address) for each procedure defined in the IDL file.
A server must supply a non-null EPV in the following cases:
- When the names of manager routines differ from the procedure names declared in the interface definition
- When the server uses the default EPV for registering another implementation of the interface
A server declares a manager EPV by initializing a variable of type if-name**_SERVER_EPV** for each implementation of the interface.
Registering a Single Implementation of an Interface
When a server offers only one implementation of an interface, the server calls RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2 only once. In the standard case, the server uses the default manager EPV. (The exception is when the manager uses routine names that differ from those declared in the interface.)
For the standard case, you supply the following values for calls to RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2:
Manager EPVs
To use the default EPV, specify a null value for the MgrEpv parameter.
Manager type UUID
When using the default EPV, register the interface with a nil manager type UUID by supplying either a null value or a nil UUID for the MgrTypeUuid parameter. In this case, all remote procedure calls, regardless of the object UUID in their binding handle, are dispatched to the default EPV, assuming no RpcObjectSetType calls have been made.
You can also provide a non-nil manager type UUID. In this case, you must also call the RpcObjectSetType routine.
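A minimal sketch of this standard case follows; the interface name hello and its MIDL-generated handle hello_v1_0_s_ifspec are hypothetical placeholders, and error handling is reduced to returning the status.

    #include <windows.h>
    #include <rpc.h>
    #include "hello.h"   /* hypothetical MIDL-generated header for a 'hello' interface */

    /* Register the single implementation of the interface: nil manager type UUID
       and the default EPV that the MIDL compiler placed in the server stub. */
    RPC_STATUS register_default_implementation(void)
    {
        return RpcServerRegisterIf(hello_v1_0_s_ifspec,
                                   NULL,    /* MgrTypeUuid: nil manager type UUID  */
                                   NULL);   /* MgrEpv: use the default manager EPV */
    }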
Registering Multiple Implementations of an Interface
You can supply more than one implementation of the remote procedure(s) specified in the IDL file. The server application calls RpcObjectSetType to map object UUIDs to type UUIDs and calls RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2 to associate manager EPVs with a type UUID. When a remote procedure call arrives with its object UUID, the RPC server run-time library maps the object UUID to a type UUID. The server application then uses the type UUID and the interface UUID to select the manager EPV.
You can also specify your own function to resolve the mapping from object UUID to manager type UUID. You specify the mapping function when you call RpcObjectSetInqFn.
To offer multiple implementations of an interface, a server must register each implementation by calling RpcServerRegisterIf, RpcServerRegisterIfEx or RpcServerRegisterIf2 separately. For each implementation a server registers, it supplies the same IfSpec parameter, but a different pair of MgrTypeUuid and MgrEpv parameters.
In the case of multiple managers, use RpcServerRegisterIf, RpcServerRegisterIfEx or RpcServerRegisterIf2 as follows:
Manager EPVs
To offer multiple implementations of an interface, a server must:
- Create a non-null manager EPV for each additional implementation.
- Specify a non-null value for the MgrEpv parameter in RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2.
Please note that the server can also register with the default manager EPV.
Manager type UUID
Provide a manager type UUID for each EPV of the interface. The nil type UUID (or null value) for the MgrTypeUuid parameter can be specified for one of the manager EPVs. Each type UUID must be different.
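The following hedged sketch shows the shape of such a registration for a hypothetical hello interface with a single procedure: two manager EPVs are registered, and one object UUID is mapped to the non-nil manager type. The names, the UUID variables, and the procedure signature are illustrative assumptions rather than values from this documentation.

    #include <windows.h>
    #include <rpc.h>
    #include "hello.h"   /* hypothetical MIDL-generated header; assumes one procedure, HelloProc */

    /* Two implementations of the same interface procedure (signatures must match the IDL). */
    void HelloProcDefault(handle_t Binding) { /* default behavior */ }
    void HelloProcSpecial(handle_t Binding) { /* alternate behavior */ }

    /* Manager EPVs use the MIDL-generated <if-name>_SERVER_EPV type described above. */
    static hello_SERVER_EPV default_epv = { HelloProcDefault };
    static hello_SERVER_EPV special_epv = { HelloProcSpecial };

    /* Hypothetical UUIDs, assumed to be initialized elsewhere (for example with UuidFromString). */
    static UUID special_type_uuid;     /* manager type UUID for the second EPV     */
    static UUID special_object_uuid;   /* an object that should use the second EPV */

    RPC_STATUS register_both_implementations(void)
    {
        RPC_STATUS status;

        /* EPV 1: nil manager type UUID; receives calls whose object UUID is nil
           or has no type registered with RpcObjectSetType. */
        status = RpcServerRegisterIf(hello_v1_0_s_ifspec, NULL, &default_epv);
        if (status != RPC_S_OK)
            return status;

        /* EPV 2: registered under a non-nil manager type UUID. */
        status = RpcServerRegisterIf(hello_v1_0_s_ifspec, &special_type_uuid, &special_epv);
        if (status != RPC_S_OK)
            return status;

        /* Map one object UUID to that type, so calls that carry it in their
           binding handle are dispatched to special_epv. */
        return RpcObjectSetType(&special_object_uuid, &special_type_uuid);
    }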
Rules for Invoking Manager Routines
The RPC run-time library dispatches an incoming remote procedure call to a manager that offers the requested RPC interface. When multiple managers are registered for an interface, the RPC run-time library must select one of them. To select a manager, the RPC run-time library uses the object UUID specified by the call's binding handle.
The run-time library applies the following rules when interpreting the object UUID of a remote procedure call:
Nil object UUIDs
A nil object UUID is automatically assigned the nil type UUID (it is illegal to specify a nil object UUID in the RpcObjectSetType routine). Therefore, a remote procedure call whose binding handle contains a nil object UUID is automatically dispatched to the manager registered with the nil type UUID, if any.
Non-nil object UUIDs
In principle, a remote procedure call whose binding handle contains a non-nil object UUID should be processed by a manager whose type UUID matches the type of the object UUID. However, identifying the correct manager requires that the server has specified the type of that object UUID by calling the RpcObjectSetType routine.
If a server fails to call the RpcObjectSetType routine for a non-nil object UUID, a remote procedure call for that object UUID goes to the manager EPV that services remote procedure calls with a nil object UUID (that is, the nil type UUID).
Remote procedure calls with a non-nil object UUID in the binding handle cannot be executed if the server assigned that non-nil object UUID a type UUID by calling the RpcObjectSetType routine, but did not also register a manager EPV for that type UUID by calling RpcServerRegisterIf, RpcServerRegisterIfEx or RpcServerRegisterIf2.
The following table summarizes the actions that the run-time library uses to select the manager routine.
The object UUID of the call is the object UUID found in a binding handle for a remote procedure call.
The server sets the type of the object UUID by calling RpcObjectSetType to specify the type UUID for an object.
The server registers the type for the manager EPV by calling RpcServerRegisterIf, RpcServerRegisterIfEx or RpcServerRegisterIf2 using the same type UUID.
Note
The nil object UUID is always automatically assigned the nil type UUID. It is illegal to specify a nil object UUID in the RpcObjectSetType routine.
Dispatching a Remote Procedure Call to a Server-manager Routine
The following tables show the steps that the RPC run-time library takes to dispatch a remote procedure call to a server-manager routine.
A simple case where the server registers the default manager EPV, is described in the following tables.
Interface Registry Table
Object Registry Table
Mapping the Binding Handle to an Entry-point Vector (EPV)
The following steps describe the actions that the RPC server's run-time library take, as shown in the preceding tables, when a client with interface UUID uuid1 calls it.
The server calls RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2 to associate an interface it offers with the nil manager type UUID and with the MIDL-generated default manager EPV. This call adds an entry in the interface registry table. The interface UUID is contained in the IfSpec parameter.
By default, the object registry table associates all object UUIDs with the nil type UUID. In this example, the server does not call RpcObjectSetType.
The server run-time library receives a remote procedure code containing the interface UUID that the call belongs to and the object UUID from the call's binding handle.
See the following function reference entries for discussions of how an object UUID is set into a binding handle:
Using the interface UUID from the remote procedure call, the server's run-time library locates that interface UUID in the interface registry table.
If the server did not register the interface using RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2, then the remote procedure call returns to the caller with an RPC_S_UNKNOWN_IF status code.
Using the object UUID from the binding handle, the server's run-time library locates that object UUID in the object registry table. In this example, all object UUIDs map to the nil object type.
The server's run-time library locates the nil manager type in the interface registry table.
Combining the interface UUID and nil type in the interface registry table resolves to the default EPV, which contains the server-manager routines to be executed for the interface UUID found in the remote procedure call.
Assume that the server offers multiple interfaces and multiple implementations of each interface, as described in the following tables.
Interface Registry Table
Object Registry Table
Mapping the Binding Handle to an Entry-point Vector
The following steps describe the actions that the server's run-time library take, as shown in the preceding tables when a client with interface UUID uuid2 and object UUID uuidC calls it.
The server calls RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2 to associate the interfaces it offers with the different manager EPVs. The entries in the interface registry table reflect four calls of RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2 to offer two interfaces, with two implementations (EPVs) for each interface.
The server calls RpcObjectSetType to establish the type of each object it offers. In addition to the default association of the nil object to a nil type, all other object UUIDs not explicitly found in the object registry table also map to the nil type UUID.
In this example, the server calls the RpcObjectSetType routine six times.
The server run-time library receives a remote procedure call containing the interface UUID that the call belongs to and an object UUID from the call's binding handle.
Using the interface UUID from the remote procedure call, the server's run-time library locates the interface UUID in the interface registry table.
Using the uuidC object UUID from the binding handle, the server's run-time library locates the object UUID in the object registry table and finds that it maps to type uuid7.
To locate the manager type, the server's run-time library combines the interface UUID, uuid2, and type uuid7 in the interface registry table. This resolves to epv3, which contains the server manager routine to be executed for the remote procedure call.
The routines in epv2 will never be executed because the server has not called the RpcObjectSetType routine to add any objects with a type UUID of uuid4 to the object registry table.
A remote procedure call with interface UUID uuid2 and object UUID uuidF returns to the caller with an RPC_S_UNKNOWN_MGR_TYPE status code because the server did not call RpcServerRegisterIf, RpcServerRegisterIfEx, or RpcServerRegisterIf2 to register the interface with a manager type of uuid8.
Supplying Your Own Object-inquiry Function
Consider a server that manages thousands of objects of many different types. Whenever the server started, the server application would have to call the function RpcObjectSetType for every one of the objects, even though clients might refer to only a few of them (or take a long time to refer to them). These thousands of objects are likely to be on disk, so retrieving their types would be time consuming. Also, the internal table that is mapping the object UUID to the manager type UUID would essentially duplicate the mapping maintained with the objects themselves.
For convenience, the RPC function set includes the function RpcObjectSetInqFn. With this function, you provide your own object-inquiry function.
As an example, you can supply your own object-inquiry function when you map objects 100–199 to type number 1, 200–299 to type number 2, and so on. The object inquiry function can also be extended to a distributed file system, where the server application does not have a list of all the files (object UUIDs) available, or when object UUIDs name files in the file system and you do not want to preload all of the mappings between object UUIDs and type UUIDs.
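A hedged sketch of such an inquiry function is shown below. The mapping rule, the type UUID variables, and the way an object number is read out of the UUID are illustrative assumptions only; the callback follows the documented RPC_OBJECT_INQ_FN shape and is installed by passing it to RpcObjectSetInqFn.

    #include <windows.h>
    #include <rpc.h>

    /* Hypothetical manager type UUIDs, assumed to be initialized elsewhere and
       registered with RpcServerRegisterIf for their manager EPVs. */
    static UUID type1_uuid;
    static UUID type2_uuid;

    /* Object-inquiry callback: maps an object UUID to its manager type UUID on
       demand, instead of preloading every mapping with RpcObjectSetType. */
    void __RPC_USER ObjectTypeInquiry(UUID *ObjectUuid, UUID *TypeUuid, RPC_STATUS *Status)
    {
        /* Illustrative rule only: pretend the object number is stored in the last
           byte of the UUID, and map 100-199 to type 1 and 200-299 to type 2. */
        unsigned int object_number = ObjectUuid->Data4[7];

        if (object_number >= 100 && object_number <= 199) {
            *TypeUuid = type1_uuid;
            *Status = RPC_S_OK;
        } else if (object_number >= 200 && object_number <= 299) {
            *TypeUuid = type2_uuid;
            *Status = RPC_S_OK;
        } else {
            *Status = RPC_S_OBJECT_NOT_FOUND;
        }
    }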
Changelog for PDFer
January 20, 2021
v 0.5 (2021-1-20)
FIXED: RSVP PDFing function returned false instead of data in evors_email()
v 0.4 (2019-9-15)
FIXED: RSVP pdf generation errors
REMOVED: unused rsvp template file
REQ: EventON 2.7.3 | https://docs.myeventon.com/documentations/changelog-for-pdfer/ | 2021-02-24T20:26:40 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.myeventon.com |
What is a session?¶
Alongside running batch computational workflows, the REANA platform allows users to open interactive sessions (such as Jupyter notebooks based) on the same workspace.
The opened sessions will have access to the workflow workspace, allowing users to modify the analysis and its assets and quickly debug common problems. | https://docs.reana.io/running-notebooks/what-is-session/ | 2021-02-24T20:12:59 | CC-MAIN-2021-10 | 1614178347321.0 | [] | docs.reana.io |
Overview
This article briefly explains the specifics of RadSpreadStreamProcessing: what spread streaming is, how it works compared to the RadSpreadProcessing library, and when to use it.
The SpreadStreamProcessing is part of Telerik UI for Xamarin, a
professional grade UI component library for building modern and feature-rich applications. To try it out sign up for a free 30-day trial.
You can use RadSpreadStreamProcessing to create and export large amounts of data with a low memory footprint and great performance.
Required references
You have two options to add the required Telerik references to your Xamarin.Forms app in order to use RadSpreadStreamProcessing:
- If you use NuGet packages, to use RadSpreadStreamProcessing you have to install the Telerik.Zip and Telerik.Documents.SpreadsheetStreaming NuGet packages.
Add the references to Telerik assemblies manually, check the list below with the required assemblies for RadSpreadStreamProcessing:
- Telerik.Zip.dll
- Telerik.Documents.SpreadsheetStreaming.dll
Please keep in mind that these assemblies are located in the Portable folder; still, you need to add a reference to them in the Xamarin.Forms project as well as in each of the platform projects (Android | iOS | UWP).
Figure: SpreadStreamProcessing fast export
You can expand or shrink the capacity of a file system when needed.
Total capacity of a file system after expansion ≤ (Capacity quota of the cloud account - Total capacity of all the other file systems owned by the cloud account)
For example, cloud account A has a quota of 500 TB. This account has already created three file systems, SFS1 (350 TB), SFS2 (50 TB), and SFS3 (70 TB). If this account needs to expand SFS2, the new capacity of SFS2 cannot be greater than 80 TB. Otherwise, the system will display a message indicating an insufficient quota and the expansion operation will fail.
For example, cloud account B has created a file system, SFS1. The total capacity and used capacity of SFS1 are 50 TB and 10 TB respectively. When shrinking SFS1, the new capacity cannot be smaller than 10 TB. | https://docs.otc.t-systems.com/en-us/usermanual/sfs/en-us_topic_0060912040.html | 2019-08-17T15:50:27 | CC-MAIN-2019-35 | 1566027313428.28 | [] | docs.otc.t-systems.com |
You can configure Horizon Client to save a server shortcut on the home window after you connect to a server for the first time.
Procedure
- Tap the Option menu in the upper-left corner of the Horizon Client menu bar.If you are connected to a server, you can tap the Option menu in the upper-left corner of the remote desktop and application selection window. If you are connected to a remote desktop or application, you can tap the Option button in the floating menu in the remote desktop or application window and tap Setting.
- Expand the Advanced section and tap to toggle the Save information about recent servers option to On.
If the option is set to Off, Horizon Client does not save recent servers on the home window. | https://docs.vmware.com/en/VMware-Horizon-Client-for-Windows-10-UWP/4.7/horizon-client-windows-10uwp-user/GUID-2DD83954-3A61-4931-A056-8F60F136D683.html | 2019-08-17T14:33:05 | CC-MAIN-2019-35 | 1566027313428.28 | [] | docs.vmware.com |
37.5. EasyDialogs — Basic Macintosh dialogs

The EasyDialogs module contains some simple dialogs for the Macintosh. The dialogs get launched in a separate application which appears in the dock and must be clicked on for the dialogs to be displayed. All routines take an optional resource ID parameter id with which one can override the DLOG resource used for the dialog, provided that the dialog items correspond (both type and item number) to those in the default DLOG resource. See source code for details.

Note

This module has been removed in Python 3.x.

The EasyDialogs module defines the following functions:
EasyDialogs.Message(str[, id[, ok]])
Displays a modal dialog with the message text str, which should be at most 255 characters long. The button text defaults to “OK”, but is set to the string argument ok if the latter is supplied. Control is returned when the user clicks the “OK” button.

EasyDialogs.AskString(prompt[, default[, id[, ok[, cancel]]]])
Asks the user to input a string value via a modal dialog. Returns the string entered, or None in case the user cancelled.
EasyDialogs.AskPassword(prompt[, default[, id[, ok[, cancel]]]])
Asks the user to input a string value via a modal dialog. Like AskString(), but with the text shown as bullets. The arguments have the same meaning as for AskString().
EasyDialogs.AskYesNoCancel(question[, default[, yes[, no[, cancel[, id]]]]])
Presents a dialog with prompt question and three buttons labelled “Yes”, “No”, and “Cancel”. Returns 1 for “Yes”, 0 for “No” and -1 for “Cancel”. The value of default (or 0 if default is not supplied) is returned when the RETURN key is pressed. The text of the buttons can be changed with the yes, no, and cancel arguments; to prevent a button from appearing, supply "" for the corresponding argument.
EasyDialogs.ProgressBar([title[, maxval[, label[, id]]]])
Displays a modeless progress-bar dialog. This is the constructor for the ProgressBar class described below. title is the text string displayed (default “Working...”), maxval is the value at which progress is complete (default 0, indicating that an indeterminate amount of work remains to be done), and label is the text that is displayed above the progress bar itself.
EasyDialogs.
GetArgv([optionlist[ commandlist[, addoldfile[, addnewfile[, addfolder[, id]]]]]])¶
Displays a dialog which aids the user in constructing a command-line argument list. Returns the list inExitexception.
EasyDialogs.
AskFileForOpen( [message] [, typeList] [, defaultLocation] [, defaultOptionFlags] [, location] [, clientName] [, windowTitle] [, actionButtonLabel] [, cancelButtonLabel] [, preferenceKey] [, popupExtension] [, eventProc] [, previewProc] [, filterProc] [, wanted] )¶
Post a dialog asking the user for a file to open, and return the file selected or
Noneif the user cancelled. message is a text message to display, typeList is a list of 4-char filetypes allowable, defaultLocation is the pathname,
FSSpecor
FSRefof the folder to show initially, location is the
and subtypes thereof are acceptable.
For a description of the other arguments please see the Apple Navigation Services documentation and the
EasyDialogssource code.
EasyDialogs.
AskFileForSave( [message] [, savedFileName] [, defaultLocation] [, defaultOptionFlags] [, location] [, clientName] [, windowTitle] [, actionButtonLabel] [, cancelButtonLabel] [, preferenceKey] [, popupExtension] [, fileType] [, fileCreator] [, eventProc] [, wanted] )¶
Post a dialog asking the user for a file to save to, and return the file selected or
Noneif the user cancelled. savedFileName is the default for the file name to save to (the return value). See
AskFileForOpen()for a description of the other arguments.
EasyDialogs.
AskFolder( [message] [, defaultLocation] [, defaultOptionFlags] [, location] [, clientName] [, windowTitle] [, actionButtonLabel] [, cancelButtonLabel] [, preferenceKey] [, popupExtension] [, eventProc] [, filterProc] [, wanted] )¶
Post a dialog asking the user to select a folder, and return the folder selected or
Noneif the user cancelled. See
AskFileForOpen()for a description of the arguments.
See also
- Navigation Services Reference
- Programmer’s reference documentation for the Navigation Services, a part of the Carbon framework.
37.5.1. ProgressBar Objects¶
Progress:
ProgressBar.
curval¶
The current value (of type integer or long integer) of the progress bar. The normal access methods coerce
curvalbetween
0and
maxval. This attribute should not be altered directly.
ProgressBar.
maxval¶
The maximum value (of type integer or long integer) of the progress bar; the progress bar (thermometer style) is full when
curvalequals
maxval. If
maxvalis
0, the bar will be indeterminate (barber-pole). This attribute should not be altered directly.
ProgressBar.
set(value[, max])¶
Sets the progress bar’s
curvalto value, and also
maxvalto max if the latter is provided. value is first coerced between 0 and
maxval. The thermometer bar is updated to reflect the changes, including a change from indeterminate to determinate or vice versa.
ProgressBar.
inc([n])¶
Increments the progress bar’s
curvalby n, or byis coerced between 0 and
maxvalif incrementing causes it to fall outside this range. | https://docs.python.org/2/library/easydialogs.html | 2016-10-21T00:31:48 | CC-MAIN-2016-44 | 1476988717959.91 | [] | docs.python.org |
Paid AMIs
With a paid AMI, your customers:
Must be signed up to use Amazon EC2 themselves
Important
The process they go through to purchase your AMI product prompts them to sign up for Amazon EC2 if they aren't already signed up. However, to reduce any possible confusion, we encourage you to inform your customers on your site that they must be signed up for Amazon EC2 to purchase your product.
Buy your paid AMI and then launch instances from it
Use their own AWS credentials when launching instances, not your credentials
Pay the price you set for the paid AMI, and not the standard rates for Amazon EC2
You can associate your DevPay product code with more than one AMI. However, a single AMI can be associated with only one product code. If you plan to sell multiple AMIs, you could sell them all under a single product code, or different product codes (by registering multiple DevPay products). For information about why you might choose a single product code or multiple product codes, see Selling Multiple AMIs.
Each customer's usage for the paid AMI is displayed on their Application Billing page. For more information, see Where Customers Get Information About Their Bills.
At any time, you can confirm the customer is still currently subscribed to your product. For more information, see Confirming a Customer's Subscription Status.
Topics
Current Limitations
With the current implementation of Amazon DevPay:
Your paid or supported AMIs must be backed by Amazon S3. Paid or supported AMIs backed by Amazon Elastic Block Store are currently not supported. Therefore, your paid AMIs cannot run Windows 2008 Server or SQL Server 2008 at this time.
You can't use Elastic Load Balancing (either by itself or in conjunction with Auto Scaling) with instances of paid or supported AMIs.
The discounts from Amazon EC2 Reserved Instances don't apply to paid or supported AMIs. That is, if you purchase Reserved Instances, you don't get the lower usage price associated with them when your customers launch your paid or supported AMIs. Also, if your customers purchase Reserved Instances, and they use your paid or supported AMIs, they continue to pay the price you specified for the use of your paid or supported AMIs.
Your customers can't make Spot Instance requests for your paid or supported AMIs; if they do, Amazon EC2 returns an error.
For more information about any of the preceding Amazon EC2 features, go to the Amazon EC2 product page.
Process for Creating and Using Paid AMIs
The following table summarize the basic flow for creating and using paid AMIs.
Managing Your Paid AMI
To manage your paid AMIs, you can complete the following tasks using the Amazon EC2 command line interface (CLI) tools.
Topics
Associating a Product Code with an AMI
Each AMI can have a single product code associated with it. You must be the owner of an AMI to associate a product code with it. You can associate a single product code with more than one AMI. For more information, see Selling Multiple AMIs.
Important
You can associate a product code only with Amazon S3-backed AMIs (those using an instance store root device), and not Amazon EBS-backed AMIs (those using an Amazon EBS root device).
To associate a product code with an AMI
Use the
ec2-modify-image-attribute command as follows:
PROMPT>
ec2-modify-image-attribute ami-5bae4b32 --product-code 774F4FF8productCodes ami-5bae4b32 productCode 774F4FF8
To verify the product code is associated with the AMI
Use the
ec2-describe-image-attribute command as follows:
PROMPT>
ec2-describe-image-attribute ami-5bae4b32 --product-codeproductCodes ami-5bae4b32 productCode 774F4FF8
You can't change or remove the
productCodes attribute after you've set it. If you
want to use the same image without the product code or associate a different product code with the
image, you must reregister the image to obtain a new AMI ID. You can then use that AMI without a
product code or associate the new product code with the AMI ID. For more information, see Inheriting the Product Code of your Paid AMI.
Sharing Your Paid AMI with Select Users or the Public
If you're creating a paid AMI, after you associate the product code with the AMI, you need to share the AMI with select customers or the public.
To share an AMI with the public
Use the
ec2-modify-image-attribute command as follows:
PROMPT>
ec2-modify-image-attribute ami-5bae4b32 --launch-permission -a alllaunchPermission ami-5bae4b32 ADD group all
Even though you've shared the AMI, no one can use it until they sign up for your product by going to the purchase URL. After customers sign up, any instances of the paid AMI they launch will be billed at the rate you specified during product registration.
Confirming that an Instance is Launched from an AMI with a Product Code
If you have created a product for others to use with their AMIs (the supported AMI scenario), you might want to confirm that a particular AMI is associated with your product code and a particular instance is currently running that AMI.
Note
You must be the owner of the product code to successfully call
ec2-confirm-product-instance with that product code.
Because your customers don't own the product code, they should describe their instances to confirm their instances are running with your product code.
To confirm an instance is running an AMI associated with your product code
Use the
ec2-confirm-product-instance command as follows:
PROMPT>
ec2-confirm-product-instance 774F4FF8 -i i-10a64379774F4FF8 i-10a64379 true 111122223333
If the AMI is associated with the product code,
true is returned along with the AMI
owner's account ID. Otherwise,
false is returned.
Getting the Product Code from Within an Instance
A running Amazon EC2 instance can determine if a DevPay product code is attached to itself. The instance retrieves the product code like it retrieves its other metadata. For information about the retrieval of metadata, go to the "Instance Metadata" section of the Amazon Elastic Compute Cloud Developer Guide.
The instance retrieves the product code by querying a web server with this REST-like API call:
GET
The following is an example response.
774F4FF8
Inheriting the Product Code of your Paid AMI
Associating a product code with an AMI turns it into a paid AMI that your customers must sign up for to use. What happens if your customer launches an instance from the paid AMI and then creates a new AMI from that instance?
Linux
If you give a customer root access to your paid Linux/UNIX AMI, the customer can rebundle it. If your customer uses AWS tools to rebundle the AMI, the rebundled AMI inherits the product code. When launching instances of the rebundled AMI, the customer is still billed for usage based on your price. However, if the customer doesn't use the AWS tools when rebundling, the rebundled AMI won't inherit the product code, and the customer will pay the standard Amazon EC2 rates and not your price.
When a customer contacts you for support for a paid AMI, you can confirm your product code is associated with the AMI and the customer's instance is currently running the AMI. For more information, see Confirming that an Instance is Launched from an AMI with a Product Code.
If you have software installed on the AMI, the software can retrieve the instance metadata to determine if the product code is associated with the instance. For more information, see Getting the Product Code from Within an Instance.
Keep in mind that the preceding methods for confirming the association of the product code with the instance are not foolproof because a customer with root access to the instance could return false information indicating the product code is associated with the instance.
Windows
When you associate a product code with a Windows AMI, the association is permanent. Therefore, we recommend you keep a separate, base copy of the AMI that has no product code associated with it.
Anyone who purchases a Windows AMI can rebundle it. The product code is automatically transferred to the rebundled AMI. When your customers launch instances of the rebundled AMI, they pay the rates you set when you registered your DevPay product. In turn, you're charged for their usage of Amazon EC2. | http://docs.aws.amazon.com/AmazonDevPay/latest/DevPayDeveloperGuide/PaidAMIs.html | 2016-10-21T00:32:24 | CC-MAIN-2016-44 | 1476988717959.91 | [] | docs.aws.amazon.com |
This Online Advertising Agreement is a short contract that sets up an agreement between a website owner/host and a person or business that wishes to advertise on that website for a term of one week. This document contains language that is essential for executing this type of agreement. This document in its draft form contains standard clauses commonly used in these types of agreements; however, additional language may be added to allow for customization to ensure the specific terms of the parties agreement are addressed. This document is useful to individuals wanting to enter into an agreement with an online host for the purpose of advertising a good or service for a period of one week.
Get Unlimited Access to Our Complete Business Library
Plus | http://premium.docstoc.com/docs/10650522/Online-Advertisement-Agreement | 2013-12-05T04:08:18 | CC-MAIN-2013-48 | 1386163039753 | [] | premium.docstoc.com |
Help Center
Local Navigation
Search This Document
BlackBerry Router
The BlackBerry® Router connects to the wireless network. It sends data to and receives data from the BlackBerry® Infrastructure for a BlackBerry® Enterprise Server.
The BlackBerry Router also sends data to and receives data from BlackBerry devices that are connected to the BlackBerry Enterprise Server using the BlackBerry® Device Manager.
You can install the BlackBerry Router on a computer that is separate from the computer that hosts the BlackBerry Enterprise Server to route data between the BlackBerry Infrastructure and one or more BlackBerry Enterprise Server instances.
Next topic: BlackBerry Web Desktop Manager
Previous topic: BlackBerry Policy Service
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/admin/deliverables/20998/BB_Router_572521_11.jsp | 2013-12-05T04:08:34 | CC-MAIN-2013-48 | 1386163039753 | [] | docs.blackberry.com |
changes.mady.by.user Rolly Noel
Saved on Jul 06, 2011
Here are some samples of working with files
import System.IO
import System.Reflection
testfile = "newtestfile.txt"
try:
if File.Exists(testfile):
File.Delete(testfile)
//"using" will dispose of (and close) the file stream for you
using out = StreamWriter(testfile):
out.WriteLine(" Some text for this file ")
out.WriteLine("# ignore this line")
out.WriteLine(" Some more text ")
using input = StreamReader(testfile): //or you can use File.OpenText
for line in input:
line = line.Trim()
if len(line) > 0 and line[0] != char('#'):
print line
//an example using enumerate and no "using"
fileinput = File.OpenText(testfile)
for index as int, line as string in enumerate(fileinput):
print "line $index:", line.ToUpper()
fileinput.Close()
except e:
print "Error", e.ToString()
//An example of constructing file paths
//Assembly.GetExecutingAssembly().Location won't work in booi because you are executing a
//dynamic assembly in memory, you have to compile using booc first
rsppath = Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), "boo.rsp")
print Path.GetFileNameWithoutExtension(rsppath)
See also:
Powered by a free Atlassian Confluence Open Source Project License granted to Codehaus. Evaluate Confluence today. | http://docs.codehaus.org/pages/diffpages.action?pageId=8356560&originalId=228170924 | 2013-12-05T04:07:01 | CC-MAIN-2013-48 | 1386163039753 | [] | docs.codehaus.org |
Before.
Also, If you see an error like this:
Warning: is_dir() [function.is-dir]: open_basedir restriction in effect. File(/) is not within the allowed path(s): ...
this is because of a restriction of your hosting account. | http://docs.joomla.org/index.php?title=Installing_an_extension&diff=70065&oldid=13759 | 2013-12-05T04:00:53 | CC-MAIN-2013-48 | 1386163039753 | [] | docs.joomla.org |
Message-ID: <928301672.764410.1386216423086.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_764409_377880493.1386216423086" ------=_Part_764409_377880493.1386216423086 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
=20
Each node hosts a Group Communication component, abstracted by:=20
Group communication components see each others and can dispatch messages= , more accurately org.codehaus.wadi.group.Envelope, to each others.=20
WADI provides four implementations of the Group Communication API, defin= ed in wadi-group:=20
Service Spaces are components building on top of the above group communi= cation infrastructure. They provide a logical group communication service, = which restricts the view that clients have of the cluster to the sub-set of= the nodes hosting a given Service Space.=20
For instance, Service Space 1 is hosted by Node 1 and Node 3. Clients on= Node 1 using the logical group communication service of Service Space 1 on= ly see Node 3. Also, they can only dispatch messages to Node 3.=20
Service Spaces are used to share the physical group communication servic= es of a node between multiple applications, e.g. Web-applications. | http://docs.codehaus.org/exportword?pageId=9764983 | 2013-12-05T04:07:03 | CC-MAIN-2013-48 | 1386163039753 | [] | docs.codehaus.org |
Introduction
One.
CPU Usage
CPU Usage
CPU Usage.:
- offering of an efficient and robust caching solution leveraging the core infrastructure services validated by the tests of this report; and
- more complete integration with Geronimo. | http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=18186271&selectedPageVersions=11&selectedPageVersions=10 | 2013-12-05T03:55:26 | CC-MAIN-2013-48 | 1386163039753 | [] | docs.codehaus.org |
can accept arguments. Such entries would look like:
org.sonar.l10n.<key of the plugin to translate>_<language>.rules.<repository key>package.
org.sonar.l10n.<plugin key>_<language>package. Backward compatibility is ensured for l10n plugins which use this old location.
Here is what you need to know about conventions for keys: :
The component
org.sonar.api.i18n.I18n is available for server extensions. Batch extensions are not supported yet and can not load bundles.
A Language Pack defines bundles for the SonarQube platform and for the SonarQube community plugins.
The easiest way to create a new pack is to copy the French Pack and adapt it to your language.
In the pom file, set the versions of the plugins you want to translate:
When it's time to update your language pack for a new version of SonarQube or a plugin, the easiest way to see what keys are missing is to run: :
The default bundle is mandatory, and must be the English translation. For example the plugin with key "mysonarplugin" must define the following files in order to enable the French translation: | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=213942440 | 2013-12-05T03:54:22 | CC-MAIN-2013-48 | 1386163039753 | [] | docs.codehaus.org |
View tree
Close tree
|
Preferences
|
|
Feedback
|
Legislature home
|
Table of contents
Search
Up
Up.
Tax 14.02(10)
(10)
Person claimed as a dependent.
Under s.
71.53 (2) (d)
, Stats., a person does not qualify for a homestead credit if the person is claimed as a dependent for federal income tax purposes during the year to which the claim relates, unless the person claiming a homestead credit is 62 years of age or older as of December 31 of the claim year. However, a person is not disqualified if any of the following apply:
Tax 14.02(10)(a)
(a)
The person is improperly claimed as a dependent on a federal income tax return.
Tax 14.02(10)(b)
(b)
The person qualifies to be claimed as a dependent on a federal income tax return but is not claimed.
Tax 14.02(10)(c)
(c)
The person is properly claimed as a dependent on a federal income tax return but on a later amended federal income tax return is not claimed.
Tax 14.02(11)
(11)
Deceased claimant.
Under s.
71.53 (1) (b)
, Stats., a person must be alive at the time a homestead credit claim is filed. A claim completed and signed but not filed until after a person's death shall be denied.
Tax 14.02 Note
Note:
The qualification for a homestead credit of a person who becomes married or divorced during a claim year or occupies a separate dwelling from his or her spouse for any part of a claim year is described in s.
Tax 14.06
.
Tax 14.02 Note
Note:
Section
Tax 14.02
interprets ss.
71.52 (1)
,
(2)
and
(7)
,
71.53 (1) (b)
and
(c)
and
(2) (d)
and
71.58 (1) (b)
, Stats.
Tax 14.02 History
History:
Cr.
Register, February, 1990, No. 410
, eff. 3-1-90; r. (2) (c) and am. (5), (9), (10) and (11),
Register, July, 2000, No. 535
, eff. 8-1-00.
Tax 14.03
Tax 14.03
Household income and income.
Tax 14.03(1)
(1)
Purpose.
This section clarifies the meaning of"household income" and "income" includable in household income as the terms apply to homestead credit claims.
Tax 14.03(2)
(2)
Definitions.
In this section:
Tax 14.03(2)(a)
(a)
"Household income" has the meaning specified in s.
71.52 (5)
, Stats.
Tax 14.03(2)(b)
(b)
"Income" has the meaning specified in s.
71.52 (6)
, Stats.
Tax 14.03(3)
(3)
Deduction for dependents.
Tax 14.03(3)(a)
(a)
Under s.
71.52 (5)
, Stats., a deduction of $250 is allowed for each of the claimant's dependents, as defined in s. 152 of the internal revenue code, who have the same principal abode as the claimant for more than 6 months during the calendar year to which a claim for homestead credit relates. A claimant may multiply the number of dependents with the same principal abode for more than 6 months by $250 and subtract the result from the total of the income items to arrive at household income.
Tax 14.03 Note
Example: A claimant and the claimant's spouse claim 3 dependents on their 1997 federal income tax return, and all 3 dependents have the same principal abode as the claimant for the entire year. Household income items include Wisconsin adjusted gross income of $10,500, depreciation of $1,500 and unemployment insurance of $500.
Tax 14.03 Note
Total household income is $11,750, consisting of the total of the income items listed, $12,500, minus the dependent deduction of $750, which is $250 times 3 dependents.
Tax 14.03(3)(b)
(b)
A dependent is considered to have the same principal abode as the claimant during temporary absences from the claimant's homestead for reasons such as school attendance, illness, vacations, business commitments or military service.
Tax 14.03(3)(c)
(c)
In the following situations, a dependent who does not have the same principal abode as the claimant for more than 6 months during the calendar year to which a claim for homestead credit relates is nonetheless considered to have the same principal abode for more than 6 months if during that year:
Tax 14.03(3)(c)1.
1.
The dependent is born or dies, and the dependent has the same principal abode as the claimant during the entire time the dependent is alive during that year.
Tax 14.03(3)(c)2.
2.
The dependent is adopted by the claimant, is placed with the claimant for adoption or becomes the stepchild of the claimant, and the dependent has the same principal abode as the claimant from that time to the end of that calendar year.
Tax 14.03(4)
(4)
Items includable in income.
Under s.
71.52 (6)
, Stats., income includes the sum of:
Tax 14.03(4)(a)
(a)
"Wisconsin adjusted gross income" as defined in s.
71.01 (13)
, Stats., for the calendar year to which a claim for homestead credit relates.
Tax 14.03(4)(b)
(b)
The following amounts to the extent not included in Wisconsin adjusted gross income:
Tax 14.03(4)(b)1.
1.
Maintenance payments, not including foster care maintenance and supplemental payments excludable under s. 131 of the internal revenue code.
Tax 14.03(4)(b)2.
2.
Court-ordered support payments, including support for dependents under
ch.
49
, Stats.
Tax 14.03(4)(b)3.
3.
Cash public assistance and county relief, including the following:
Tax 14.03(4)(b)3.a.
a.
Aid to families with dependent children, or "AFDC."
Tax 14.03(4)(b)3.b.
b.
Wisconsin works, or "W-2" payments.
Tax 14.03(4)(b)3.c.
c.
Non-legally responsible relative, or "NLRR" AFDC payments or kinship care payments under s.
48.57
, Stats. These are payments received as a relative other than a parent, for caring for a dependent child in the claimant's homestead.
Tax 14.03(4)(b)3.d.
d.
Cash benefits paid by counties under s.
59.53 (21)
, Stats.
Tax 14.03(4)(b)3.e.
e.
Reimbursement from a governmental agency for amounts originally paid for by the recipient, not including cash reimbursements for home energy assistance or for services under Title XX of the federal social security act and community options program, or "COP" payments under s.
46.27
, Stats.
Tax 14.03(4)(b)3.f.
f.
Adoption assistance payments under Title IV-E of the federal social security act or from another state, or payments by the Wisconsin department of children and families under s.
48.975
, Stats., to adoptive parents of children having special needs as described in
s.
DCF 50.03 (1) (b)
.
Tax 14.03(4)(b)3.g.
g.
Veterans administration payments for reimbursement of services purchased by the recipient.
Tax 14.03(4)(b)3.h.
h.
Federal housing and urban development, or "H.U.D." payments for housing.
Tax 14.03(4)(b)3.i.
i.
Disaster relief grants under the federal disaster relief act of 1974.
Tax 14.03(4)(b)4.
4.
The gross amount of a pension or annuity, including:
Tax 14.03(4)(b)4.a.
a.
Railroad retirement benefits.
Tax 14.03(4)(b)4.b.
b.
Veterans' disability pensions.
Tax 14.03(4)(b)4.c.
c.
Any amounts withheld by the payor.
Tax 14.03(4)(b)4.d.
d.
Nontaxable recoveries of cost.
Tax 14.03(4)(b)4.e.
e.
Disability income exclusions from taxable income.
Tax 14.03 Note
Example: Gross amount of a pension. A claimant was entitled to a pension of $8,000 during the year but received only $5,600 after $2,400 was withheld by the payor for payment of health insurance premiums for the claimant. Of the $8,000 pension, $2,000 was a return of the claimant's contribution.
Tax 14.03 Note
The gross pension of $8,000 must be included in income.
Tax 14.03(4)(b)5.
5.
Except as provided in
subd.
3. e.
, all payments received for the benefit of a claimant or a member of the claimant's household under the federal social security act, including:
Tax 14.03(4)(b)5.a.
a.
All federal social security retirement, disability or survivorship benefits.
Tax 14.03(4)(b)5.b.
b.
Lump sum death benefits.
Tax 14.03(4)(b)5.c.
c.
Medicare premiums deducted from social security benefits received by all members of a household.
Tax 14.03(4)(b)5.d.
d.
Supplemental security income, or "SSI" benefits received by persons over 65 years of age, or blind or disabled.
Tax 14.03(4)(b)5.e.
e.
Supplemental security income - exceptional needs, or "SSI-E" payments under s.
49.77 (3s)
, Stats.
Tax 14.03(4)(b)6.
6.
Compensation and other cash benefits received from the United States for past or present service in the armed forces.
Tax 14.03(4)(b)7.
7.
Payments made to surviving widows, widowers or parents of veterans by the United States, but not including insurance proceeds received by beneficiaries of National Service Life Insurance.
Tax 14.03(4)(b)8.
8.
Proceeds from a personal endowment insurance policy or annuity contract purchased by the recipient.
Tax 14.03(4)(b)9.
9.
The gross amount of "loss of time" insurance proceeds.
Tax 14.03(4)(b)10.
10.
Nontaxable interest received from the federal government or any of its instrumentalities, or from state or municipal bonds.
Tax 14.03(4)(b)11.
11.
Scholarship and fellowship gifts or income and other educational grants, not including student loans.
Tax 14.03(4)(b)12.
12.
Unemployment insurance, including railroad unemployment compensation.
Tax 14.03(4)(b)13.
13.
Workers' compensation.
Tax 14.03(4)(b)14.
14.
Capital gains not included in Wisconsin adjusted gross income, but not including a nonrecognized gain from an involuntary conversion under s. 1033 of the internal revenue code.
Tax 14.03(4)(b)15.
15.
A gain on the sale of a personal residence excluded under s. 121 of the internal revenue code. A gain on the sale of a personal residence which would be reportable under the installment sale method if taxable may be reported either in full in the year of sale or each year as payments are received.
Tax 14.03(4)(b)16.
16.
Dividends not included in Wisconsin adjusted gross income.
Tax 14.03(4)(b)17.
17.
Income of a nonresident or part-year resident married to a full-year resident of Wisconsin.
Tax 14.03(4)(b)18.
18.
A housing allowance provided to a member of the clergy.
Down
Down
/code/admin_code/tax/14
true
administrativecode
/code/admin_code/tax/14/02/10/c
administrativecode/Tax 14.02(10)(c)
administrativecode/Tax 14.02(10)? | http://docs.legis.wisconsin.gov/code/admin_code/tax/14/02/10/c | 2013-12-05T03:53:38 | CC-MAIN-2013-48 | 1386163039753 | [] | docs.legis.wisconsin.gov |
Azure Analysis Service deployment
Visual Studio Team Service deploy task that will deploy a Azure Analysis Service Model to an existing Azure Analysis Service.
NOTE: At this moment the task only supports 1 SQL Server connection Support for more types of connection is in development
Parameters
Azure Details:
- Azure Connection Type - Only Azure Resource Manager is supported
- Azure RM Subscription - Which Azure Subscription (Service Endpoint) should be used to connect to the datafactory
- Resource Group - To which Resource Group is the Azure Analysis Service model deployed
Analysis Service Details:
- Analysis Service name - The name of the Azure Analysis Service server
- Login type - Type of Azure Analysis Service login: Named user or Service Principal
If Login type option is 'ervice Principal':
- Azure AD TenantID - Azure ID Tenant ID
- Application ID - Application ID of the Service Principal
- Application Key - Key of the Application ID
If Login type option is 'Named User':
- Analysis Services Admin - The admin user use to connect to the Azure Analysis Service instance
- Analysis Services Admin Password - The password of the admin user use to connect to the Azure Analysis Service instance
Data Source Connection Detailss:
- Data Source Type - Type of the first data source defined in the model. SQL is for now the only option.
- Source Azure SQL Server Name - The servername of the Azure SQL database server
- Source Database Name - The database name
- Source User Login - The username used for the connection by the model for trhe connection to the source database
- Source Password - The password for the given username
Firewall:
- Specify Firewall Rules Using - Auto Detect adds the IP address of the agent to the firewall rules. With the option 'IP Address Range' a start and end IP address of a range needs to be provided
- Start IP Address - Start IP address of the range
- End IP Address - End IP address of the range.
- Delete Rule After Task Ends - Delete the firewall rule at the end of the tasks
Advanced:
- Overwrite - Option to overwrite existing model with the new one.
- Remove - Option to remove the old model before deploying a new one.
Tested configuration
At this moment the following configuration are tested and working:
- Model 1400 and a single SQL Server database as datasource
More configuration will follow. Feel free to contact me for a specific configuration.
Release notes
1.2.0
- Add support for service principal deployments
- Add support for adding firewall rules
1.1.2
- Model files are readed with UTF8 encoding
1.1.0
- New: AAS return messages (error/warning) are used for the tasks logging
- Bugfix: Better logging when exceptions are thrown
1.0.0
- Initial public release
Feedback
If you have any comment related to the documentation, like corrections, unclear features or missing documentation, feel free to leave feedback below via GitHub. Or correct it yourself and submit a PR; see CONTRIBUTING.md for more details. GitHub account required. | https://azurebi-docs.jppp.org/vsts-extensions/azure-analysis-service-deploy.html | 2020-02-17T04:25:57 | CC-MAIN-2020-10 | 1581875141653.66 | [array(['images/aas-screenshot-2.png',
'Screenshot of Azure Analusis Service deploy task'], dtype=object)] | azurebi-docs.jppp.org |
Configuring Authentication for Cloudera Navigator
Cloudera Manager Server has an internal authentication mechanism, a database repository of user accounts that can be used to create user accounts. As an alternative to using the internal database, Cloudera Manager and Cloudera Navigator can be configured to use external authentication mechanisms.
Cloudera Manager Server and Cloudera Navigator each have their own user role schemes for granting privileges to system features and functions. Cloudera Manager user roles can be applied to user accounts as they are created in the internal repository. Cloudera Navigator user roles are applied to groups defined in the external system for use by Cloudera Navigator. The only user role that can be effectively applied to an account created in the Cloudera Manager internal repository is that of Navigator Administrator, which grants the user account privileges as a Full Administrator on the Cloudera Navigator services (Navigator Metadata Server, Navigator Audit Server).
In other words, assigning Cloudera Navigator user roles to user accounts requires using an external authentication mechanism. | https://docs.cloudera.com/documentation/enterprise/6/6.0/topics/cn_authentication.html | 2020-02-17T03:56:02 | CC-MAIN-2020-10 | 1581875141653.66 | [] | docs.cloudera.com |
Pushing Docker Images to AWS Elastic Container Registry (ECR)#
Pushing images to your AWS ECR is straight forward. Your workflow
simply needs to call the appropriate
aws command to login to the
Docker registry. Then
docker push works as expected. First, create a
secret to configure AWS access key environment variables.
Creating the Secret#
sem create secret AWS \ -e AWS_ACCESS_KEY_ID=<your-aws-key-id> \ -e AWS_SECRET_ACCESS_KEY=<your-aws-access-key>
Now add the secret to your pipeline and authenticate
Configuring the Pipeline#
This example authenticates in the
prologue. This is not
strictly required, it's just an example of covering all jobs in
authentication.
# .semaphore/pipeline.yml version: "v1.0" name: First pipeline example agent: machine: type: e1-standard-2 os_image: ubuntu1804 blocks: - name: "Push Image" task: env_vars: # TODO: change as required - name: AWS_DEFAULT_REGION value: ap-southeast-1 - name: ECR_REGISTRY value: 828070532471.dkr.ecr.ap-southeast-1.amazonaws.com/semaphore2-ecr-example secrets: - name: AWS prologue: commands: # Install the most up-to-date AWS cli - sudo pip install awscli - checkout # ecr get-login outputs a login command, so execute that with bash - aws ecr get-login --no-include-email | bash jobs: - name: Push Image commands: - docker build -t example . - docker tag example "${ECR_REGISTRY}:${SEMAPHORE_GIT_SHA:0:7}" - docker push "${ECR_REGISTRY}:${SEMAPHORE_GIT_SHA:0:7}" | https://docs.semaphoreci.com/use-cases/pushing-docker-images-to-aws-elastic-container-registry-ecr/ | 2020-02-17T03:56:50 | CC-MAIN-2020-10 | 1581875141653.66 | [] | docs.semaphoreci.com |
changes.mady.by.user Carlos Sanchez
Saved on Sep 06, 2007
changes.mady.by.user James William Dumay
Saved on Apr 10, 2008
As of Maven 2.0.9 it is no longer nessesary to add the build extension to your project.
The simplest way to enable webdav deployment is to add a build extension to your project:
...
Powered by a free Atlassian Confluence Open Source Project License granted to Codehaus. Evaluate Confluence today. | http://docs.codehaus.org/pages/diffpages.action?pageId=228168774&originalId=78348311 | 2014-04-16T11:03:44 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.codehaus.org |
XSL-FO (XSL Formatting Objects) is a powerful stylesheet language for formatting XML documents. XSL-FO expresses the semantics of the bounded form of paper and print, where the dimensions are fixed, in contrast to HTML, which represents the semantics of the unbounded form of a browser window with variable dimensions. XML documents formatted by XSL-FO are mostly used to generate PDF files. XSL (Extensible Stylesheet Language) is a set of feature-complete W3C technologies designed for the formatting and exchange of XML documents, and XSL-FO is one part of this language; XSLT and XPath are the other parts of XSL.
The proposed workflow is that XML documents are first transformed into XSL-FO and then rendered; PDF is a typical example of this: results are rendered by running XSLT first and then an XSL-FO formatter. In this fashion, arbitrary XML documents can be formatted. XSL-FO takes advantage of Cascading Style Sheet (CSS) properties and extends them wherever necessary for print formatting, and it also provides page templates, which are called page masters in the terminology of XSL-FO. XSL-FO also handles fairly sophisticated documents and supports index generation.
History and Basic Concepts
The Working Draft of XSL-FO was last updated in January 2012, and its Working Group was closed in November 2013. An XSL stylesheet specifies the presentation of a class of XML documents by describing how an instance of the class is transformed into an XML document that uses the formatting vocabulary. XSL-FO is a purely presentational language and has none of the semantic markup that is used in HTML. Moreover, an XSL-FO document stores all of the document's data within itself, contrary to CSS, which alters the default settings of an external HTML or XML document.
The general pattern of using XSL-FO is that the user writes a document in an XML vocabulary rather than writing in FO directly. An XSLT transform is then applied, which is responsible for converting the XML into XSL-FO. As soon as the XSL-FO document is generated, it is handed over to an application called an FO processor, which is responsible for turning this document into a readable as well as printable document. PDF and PostScript (PS) files are the most common outputs of XSL-FO, but this does not mean an FO processor can only produce these two formats: some FO processors can output RTF files, or even open a window in the user's GUI that displays the page sequence and its contents.
An XSL-FO document is different from a PDF or a PS file in the sense that it does not ultimately define the layout of the text on the pages. Instead, it styles the pages and determines the places where content should be displayed, and the FO processor arranges the text within the boundaries specified by the FO document. The specification even permits different FO processors to behave differently with respect to the pages they produce. An example of such behavior is hyphenation: some FO processors hyphenate words in order to save space when a line breaks, while others do not choose this option, and processors may select different hyphenation algorithms, which can be very simple or quite complex. In some situations the XSL-FO specification explicitly grants FO processors a degree of choice in layout.
This variation among FO processors produces varying outcomes, which is often of little concern, because the general focus of XSL-FO is to produce paged and printed documents. XSL-FO documents themselves typically act as intermediaries, their main function being to generate either PDF files or a printed document to be distributed. Distributing the PDF end result rather than the formatting-language input means that recipients remain unaffected by the variability produced by differences among interpreters of the formatting language. On the other hand, there is then no easy way for a document to fulfill the different needs of recipients, e.g. a variable page size or a desired font size, or other tailoring of the output for page or print.
XSLFO File Format
XSL-FO documents are basically XML documents, but they do not follow any schema. Instead, they follow the syntax defined in the specification of the language itself. Two sections are required in each XSL-FO document:
- A section that specifies a list of labeled page layouts.
- A section with all the details of document data, with markup, that determines the display of contents on different pages through various page layouts.
The page layouts specify the properties of the pages, which can define the organization of the text to comply with the conventions of a specific language. Furthermore, the size of the pages, their margins, and the sequences of pages (which allow different properties for odd and even pages) are also defined by the page layouts.
The data portion of the document is divided into a series of flows, where each flow is connected to a page layout. Each flow encloses a list of blocks, and this list of blocks may contain inline markup, plain text data, or both at the same time. The margins of the document may also display page numbers or chapter headings. The behavior of blocks and inline elements is essentially the same as in CSS, although some padding and margin rules differ between FO and CSS.
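For illustration, a minimal sketch of an XSL-FO document containing both required sections, one labeled page master and one flow of blocks, could look like this (page size, margins, and text are placeholder values):

  <?xml version="1.0" encoding="UTF-8"?>
  <fo:root xmlns:
    <!-- Section 1: the labeled page layouts (page masters) -->
    <fo:layout-master-set>
      <fo:simple-page-master
        <fo:region-body
      </fo:simple-page-master>
    </fo:layout-master-set>
    <!-- Section 2: the document data, flowed into the layout above -->
    <fo:page-sequence
      <fo:flow flow-
        <fo:block font-Hello, XSL-FO!</fo:block>
      </fo:flow>
    </fo:page-sequence>
  </fo:root>

An FO processor such as Apache FOP can turn a file like this directly into a PDF.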
The page orientation direction is fully specified for the extension of blocks and inlines, which lets FO documents work for languages other than English; for this reason the FO specification uses the words start and end rather than left and right when describing directions. XSL-FO's basic content markup and cascading rules are taken from CSS. The XSL-FO language provides for the following features.
Multiple columns
A page can have multiple columns, and blocks can extend from one column to another by default. Different pages are allowed to have different widths and numbers of columns. All FO features respect the boundaries of a multi-column page.
Lists
An XSL-FO list is established by two sets of blocks arranged side by side. Conceptually, the block on the left holds a number, a bullet, or a string of text, while the block on the right works as expected for the item body. Numbering of XSL-FO lists is usually done by XSLT.
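A short sketch of the corresponding markup, with arbitrary indents, is shown below; each list item pairs a label block with a body block:

  <fo:list-block provisional-distance-between-
    <fo:list-item>
      <fo:list-item-label end-
        <fo:block>&#x2022;</fo:block>
      </fo:list-item-label>
      <fo:list-item-body start-
        <fo:block>First item text</fo:block>
      </fo:list-item-body>
    </fo:list-item>
  </fo:list-block>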
Tables
An FO table is similar to an HTML/CSS table. The user can specify the rows of data, styling information, and a background color for each individual cell. Using distinct styling information, the user can mark the first row as a table header row. The FO processor can be told explicitly how much space to give each column, or it can auto-fit the text in the table.
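A compact sketch of the corresponding markup (column widths and colors are arbitrary):

  <fo:table>
    <fo:table-column
    <fo:table-column
    <fo:table-header>
      <fo:table-row>
        <fo:table-cell><fo:block font-Name</fo:block></fo:table-cell>
        <fo:table-cell><fo:block font-Price</fo:block></fo:table-cell>
      </fo:table-row>
    </fo:table-header>
    <fo:table-body>
      <fo:table-row>
        <fo:table-cell background-<fo:block>Apples</fo:block></fo:table-cell>
        <fo:table-cell><fo:block>2.50</fo:block></fo:table-cell>
      </fo:table-row>
    </fo:table-body>
  </fo:table>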
Indexing
XSL-FO 1.1 has features that help to generate an index through referencing of properly marked-up elements.
Benefits
- Appropriate for content-based publishing
- Ease of use
- Low cost | https://docs.fileformat.com/page-description-language/xslfo/ | 2020-07-02T13:24:07 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.fileformat.com |
The out_exec TimeSliced Output plugin passes events to an external program. The program receives the path to a file containing the incoming events as its last argument. The file format is tab-separated values (TSV) by default.
out_exec is included in Fluentd's core. No additional installation process is required.
<match pattern>
  @type exec
  command cmd arg arg
  <format>
    @type tsv
    keys k1,k2,k3
  </format>
  <inject>
    tag_key k1
    time_key k2
    time_format %Y-%m-%d %H:%M:%S
  </inject>
</match>
Please see the Config File article for the basic structure and syntax of the configuration file.

Example: the following configuration runs a small FizzBuzz pipeline, passing each flushed buffer chunk to an external fizzbuzz.py script:

  <match fizzbuzz>
    @type exec
    command python /path/to/fizzbuzz.py
    <buffer>
      @type file
      path /path/to/buffer_path
      flush_interval 5s # for debugging/checking
    </buffer>
    <format>
      @type tsv
      keys fizzbuzz
    </format>
  </match>
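A minimal sketch of what fizzbuzz.py could look like is shown below. The original script is not reproduced in full here, so the logic is an illustrative assumption: it treats each line of the chunk file as a natural number and prints the corresponding FizzBuzz value.

  #!/usr/bin/env python
  import sys

  # out_exec appends the path of the flushed buffer chunk as the last argument.
  with open(sys.argv[-1]) as chunk:
      for line in chunk:
          # Each line holds the single field written by the tsv formatter;
          # we assume it carries a natural number to run FizzBuzz against.
          n = int(line.strip())
          if n % 15 == 0:
              print("fizzbuzz")
          elif n % 3 == 0:
              print("fizz")
          elif n % 5 == 0:
              print("buzz")
          else:
              print(n)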
The @type tsv and keys fizzbuzz in the <format> section tell the plugin to write only the fizzbuzz field of each record, in TSV format, into the chunk file that is handed to the command. Run against the natural numbers 1 through 15, the example produces the familiar FizzBuzz output:

  1
  2
  fizz
  4
  buzz
  fizz
  7
  8
  fizz
  buzz
  11
  fizz
  13
  14
  fizzbuzz
Supported modes: Asynchronous. See Output Plugin Overview for more details.
@type: The value must be exec.
command: The command (program) to execute. The exec plugin passes the path of the flushed buffer chunk as the last argument.
If you set the command parameter like below:
command cmd arg arg
actual command execution is:
cmd arg arg /path/to/file
If cmd doesn't exist in PATH, you need to specify the absolute path, e.g. /path/to/cmd.
Command (program) execution timeout.
See Format section configurations for more details.
The format used to map the incoming events to the program input.
Overwrite default value in this plugin.
See Inject section configurations for more details.
Overwrite default value in this plugin.
Overwrite default value in this plugin.
See Buffer section configurations for more details.
Overwrite default value in this plugin.
Release 3.0.0
The release spans the period between 2018-09-27 and 2018-10-11. The following tickets are included in this release.
- Navigate directly to Platform Instances
- Group multiple platform instances of the same type under a location
- Optionally fail replication of SAML IdP and Roles if not found initially
- Filters usage reports for a billable flag
- Fix locations flicker enabled/disabled after creating project
- Asynchronous Service Bindings in Marketplace
Ticket Details
Navigate directly to Platform Instances
Audience: Users | Component: panel
Description
Users can now directly navigate to cloud platform instances from the home screen as well as from the navigation drop-downs in the nav-bar. Tools for working with cloud resources directly from meshPanel (e.g. for Cloud Foundry, OpenStack) are also no longer "aggregated" on the same screen and menu for a mesh location. Instead, users can now navigate directly to these platforms.
How to use
In the nav-bar, select a project and then select a platform instance. The platform instances are grouped by Location and display their type in an icon. meshStack operators should re-evaluate their location and platform instance naming strategies in light of these changes to ensure they provide good guidance to users.
Group multiple platform instances of the same type under a location
Audience: Operators | Component: meshfed
Description
Operators can now add multiple platform instances of the same type to a single location. This is useful to e.g. provide a next-generation version of a platform to users under the same geographic location. When using this new feature, operators should ensure sensible naming of platform instances and locations to provide good guidance to end-users.
Optionally fail replication of SAML IdP and Roles if not found initially
Audience: Operator | Component: Replication (AWS)
Description
This option allows the AWS replicator to fail if no suitable IdP or user role is found, instead of creating them.
How to use
In the configuration files for the replicator, add a flag under the platform configuration with the value:
replicator-aws.platforms[?].failIfSAMLIdpOrRoleNotFound: true
Filters usage reports for a billable flag
Audience: Customer, Partner | Component: Billing
Description
Inside the Partner section and the Customer Billing section, the usage reports can now optionally be filtered by whether they require billing or not. Partners can set their customer to 'Verified' instead of 'Billable'; this will prevent the generation of billable usage reports.
How to use
In the partner area go to customers, search for the customer and click on the status badge. There you can choose to make the customer billable or verified.
Fix locations flicker enabled/disabled after creating project
Audience: Users | Component: panel
Description
When creating a project, the locations selector state no longer flickers between enabled/disabled after project creation is completed and before navigating back to the project list.
Asynchronous Service Bindings in Marketplace
Audience: Customers
Description
If the creation of a binding takes longer, the user experience is now better: the initial binding creation command returns quickly, and the status information of the binding is afterwards updated automatically on the screen.
Change in Lync 2013 Taskbar Icon
Update: | https://docs.microsoft.com/en-us/archive/blogs/dodeitte/change-in-lync-2013-taskbar-icon | 2020-07-02T13:47:58 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.microsoft.com |
Identity Manager Hybrid Reporting in Azure
If you have an Azure subscription, you can now easily create a report of events that are both on-premises and in the cloud. The reports can then be viewed in the Azure portal. Better yet, the reports are combined with the Azure Active Directory activities. With MIM Hybrid Reporting, Azure AD management portal can display identity management activity reports for both cloud and on-premises activities. This reporting capability provides the following:
Your experience is unified: unified reports for IAM activities, for both cloud and on-premises environments
Your cost is reduced: Eliminating the need for on-premises reporting data-warehouse infrastructure
Your data is yours: The reporting data can be easily exported from the on-premises Identity Manager or from Azure AD and can be used to generate custom view reports
What is Azure AD Hybrid Reporting?
With Hybrid reporting, the Azure AD management portal can display unified identity management activity reports, regardless of where the activity was carried out, Identity Manager or Azure AD. For example, if you want to know who registered for self-service password reset (SSPR) in the last month, you can see it all in the Azure AD management portal. In this report you will see users who registered for SSPR in both the applications access panel (myapps.microsoft.com) and Identity Manager.
[Figure: Self-service password reset activity report in the Azure AD portal]
Why should I use Identity Manager Activity Reports in Azure AD?
Hybrid reporting helps IT professionals address some common identity management reporting challenges.
Report identity management activities that were performed in different systems: Now you can see identity management reports from activities on Azure AD and Identity manager in Azure AD management portal.
Export reporting data and create custom reports: In addition to reports in Azure AD, with this new capability we have added windows events that reflect the Identity Manager activity. This makes is much easier than before to integrate to SIEM systems, view the Identity Manger activity and create custom reports.
Minimize the reporting system infrastructure cost: deploying this new capability will require a few minutes of your time. All you have to do is to install a reporting agent on the Identity Manager server.
The reporting agent is downloaded from the Azure AD management portal, in the directory configuration screen:
[Figure: Downloading the reporting agent from the directory configuration screen in the Azure AD management portal]
How does it work?
After the reporting agent is installed, the activity data of Identity Manager is sent to the Windows Event Log. The reporting agent processes the events and uploads them to Azure. In Azure the activity data is stored, currently for one month. When retrieving the report, the activity events are parsed and filtered for the required reports. Finally, the Azure management portal retrieves the reporting data and renders this as the activity report.
[Figure: Hybrid reporting architecture: the reporting agent uploads Identity Manager activity data to Azure]
'MIM Hybrid Reporting Password Reset MIM Hybrid Reporting Password Reset'],
dtype=object)
array(['images/mt148517.03316521-c50d-4306-bff0-236fc61e66c4(ws.11',
'MIM_hybrid_download MIM_hybrid_download'], dtype=object)
array(['images/mt148517.6a70de65-7e41-4816-b896-17a53c07c29c(ws.11',
'MIM_Hybrid_networkdiagram MIM_Hybrid_networkdiagram'],
dtype=object) ] | docs.microsoft.com |
SydStart–THE event for Startups in Sydney!
- where startups meet talent and change the world. SydStart is a community-run event for startups to hear from successful leaders in the startup community - those who are leading by example with great recent success such as exits, investment, or new industry initiatives - as well as hear pitches from a variety of new businesses looking for co-founders, staff, awareness, feedback, partners, investment and more.
The event is this Thursday March 31st and registration starts at 12 noon - Federation Conference Centre Surry Hills - very accessible from Central or a longer walk from Town Hall station.
Grab a coffee from single origin across the road and settle in for nearly 30 speakers helping make Sydney the place to be with the best startup ecosystem.
Register here:
I’ll be going – be sure to say hello to me as well if you’re going!!
| https://docs.microsoft.com/en-us/archive/blogs/ceibner/sydstartthe-event-for-startups-in-sydney | 2020-07-02T13:03:15 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['https://evbdn.eventbrite.com/s3-s3/eventlogos/1393488/screenshot20100327at3.12.44pm.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/01/11/12/metablogapi/2134.wlEmoticon-smile_5EFBBA18.png',
'Smile'], dtype=object) ] | docs.microsoft.com |
IAsyncOperation<TResult> Interface
Definition
public interface class IAsyncOperation : IAsyncInfo
template <typename TResult> __interface IAsyncOperation : IAsyncInfo
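For orientation, here is a brief C++/WinRT sketch of producing and consuming such an operation with coroutines; ComputeAsync is a hypothetical example function, not a member of this interface:

  #include <winrt/Windows.Foundation.h>
  using namespace winrt;
  using namespace Windows::Foundation;

  // A hypothetical coroutine that produces an IAsyncOperation<int32_t>.
  IAsyncOperation<int32_t> ComputeAsync()
  {
      co_return 42;
  }

  fire_and_forget CallerAsync()
  {
      // co_await suspends the caller until the operation completes and yields its result.
      int32_t result = co_await ComputeAsync();
      (void)result;
  }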
Download
You can use the Download API to download content from a URL.
To use the Download API, include the <download.h> header file in your application.
The downloading library needs no initialization prior to the API usage.
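Putting the prerequisites together, a minimal sketch of starting a download could look like the following; the function names come from the native Download API, while the URL and the omitted error handling are illustrative only:

  #include <download.h>

  static void start_simple_download(void)
  {
      int download_id = 0;

      /* Create a download request handle. */
      if (download_create(&download_id) != DOWNLOAD_ERROR_NONE)
          return;

      /* Set the URL of the content to retrieve (example URL). */
      download_set_url(download_id, "http://example.com/sample.mp4");

      /* Start the transfer; progress and completion are reported through callbacks. */
      download_start(download_id);

      /* When the transfer is finished, release the handle with download_destroy(download_id). */
  }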
Downloading Content from a URL
- When clicking a completed notification message, the proper application for playing the downloaded content is launched. If there is no proper application, an error message is shown at the status tray.
- When clicking a failed notification message, the client application requesting the download is launched.
Related Information
- Dependencies
- Tizen 2.4 and Higher for Mobile | https://docs.tizen.org/application/native/guides/connectivity/download/ | 2020-07-02T13:50:48 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['../media/user_scenario.png', 'User scenario'], dtype=object)
Brokers
The Brokers feature provides a succinct view of essential Apache Kafka® metrics for brokers in a cluster:
- Throughput for production and consumption
- Partitions and partition replica status
- Apache ZooKeeper™ status
- Disk usage and distribution
- System metrics for network and request pool usage
Brokers overview page
The Brokers overview page conveys at a glance the health of brokers (nodes) in a Kafka cluster. View a summary of broker health and metrics.
To access the Brokers overview page, select a cluster from the navigation bar and click Brokers from the cluster submenu.
The clickable summary cards allow you to drill into detailed metrics charts for:
- production
- consumption
- broker uptime
- partition replicas
- system network and request pool usage
- disk usage
To navigate back to the Brokers overview page after drilling into the Metrics dashboard, either click the Brokers submenu again or click the Brokers Overview > breadcrumb at the top of the page.
The cards show a red sidebar for any issues that require operator attention.
Tip
You can set alerts on many of these metrics, such as: production request latency, under-replicated partitions, out of sync replica count, ZooKeeper status, and more. Send alert action notifications through email, Slack, or PagerDuty. For details, see Alerts.
The Brokers table at the bottom of the page lists all brokers by ID. Use this table to:
- Sort a column by clicking in the column title cell.
- View throughput (bytes in and out per second) and latency (fetch and produce) percentile metrics.
Brokers metrics page
To access the Metrics page for brokers, click any summary card on the Brokers overview page. All metrics panels are conveniently located on one page. Clicking a particular card on the Brokers overview page brings the relevant metrics panel into focus:
- Production metrics panel
- Consumption metrics panel
- Broker uptime metrics panel
- Partition replicas metrics panel
- System usage panel
- Disk usage panel
Timeframe selector
Click the timeframe selector to select the granularity for viewing data. The timeframe default is the last 4 hours for the current date.
Filter brokers
In a multiple broker environment, use the interactive broker selection controls to view the metrics charts for multiple brokers. Click Deselect all or Select all, or individually select each broker you want to view on the applicable charts. If all brokers are deselected, each panel displays the No Brokers Selected message.
Note
Not applicable to Broker uptime panels.
Customize the dashboard for brokers metrics¶
Drag and drop to rearrange the order of the brokers metrics dashboard panels.
Note
The settings only persist across the same browser and computer.
Click Customize Dashboard on the Brokers Overview > Metrics page.
Drag the panels into the order you want.
Click Save.
Request latency selector¶
Select a percentile from the menu for viewing production or consumption request latency.
Tip
Click on a point of the line graph in the Request latency panels to view details on Production or Consumption request latency.
Inspection cursor¶
Hover on any point in any chart graph line to view details for a specific point in time.
Production metrics panel¶
To access the Production metrics panel, click the Production summary card from the Brokers overview page.
The Production panel shows throughput, request latency, and any failed production requests.
Click on a point of the line graph in the Request latency panel to view details on Production request latency.
Consumption metrics panel¶
To access the Consumption metrics panel, click the Consumption summary card from the Brokers overview page.
The Consumption panel shows throughput, request latency, and any failed consumption requests.
Click on a point of the line graph in the Request latency panel to view details on Consumption request latency.
Broker uptime metrics panel¶
To access the Broker uptime panel, click the Broker uptime summary card from the Brokers overview page.
The charts show uptime for available brokers, active controllers, and ZooKeeper.
Note
Broker uptime metrics are cluster-wide, and do not apply to individual brokers. The Filter broker controls are not applicable to the Broker uptime panel.
Partition replicas metrics panel¶
To access the Partition panel, click the Partitions summary card from the Brokers overview page.
The charts show the total number of partitions, in sync, out of sync, and under-replicated partitions.
System usage panel¶
To access the System panel, click the System summary card from the Brokers overview page.
The System panel shows network and request pool usage.
Network pool usage is the average network pool capacity usage across all brokers; that is, the percentage of time that the network processor threads are not idle.
Request pool usage is the average request handler capacity usage across all brokers; that is, the percentage of time that the request handler threads are not idle.
Disk usage panel¶
To access the Disk panel, click the Disk summary card from the Brokers overview page.
The Disk panel shows maximum usage, minimum usage, and distribution. | https://docs.confluent.io/current/control-center/brokers.html | 2020-07-02T13:06:14 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.confluent.io |
The stdout output plugin prints events to stdout (or logs if launched with daemon mode). This output plugin is useful for debugging purposes.
out_stdout is included in Fluentd's core. No additional installation process is required.
<match pattern>
  @type stdout
</match>
Please see the Config File article for the basic structure and syntax of the configuration file.
A sample output is as follows:
2017-11-28 11:43:13.814351757 +0900 tag: {"field1":"value1","field2":"value2"}
where the first part shows the output time, the second part shows the tag, and the third part shows the record.
Non-Buffered
Synchronous
The value must be stdout.
See Buffer section configurations for more details.
Overwrite default value in this plugin.
Overwrite default value in this plugin.
Overwrite default value in this plugin.
See Format section configurations for more details.
The format of output.
This is the option of stdout format. Configure the format of record (third part). Any formatter plugins can be specified.
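For example, a minimal sketch that emits the record part as JSON using the built-in json formatter (the time and tag parts are still printed as usual) could look like:

<match pattern>
  @type stdout
  <format>
    @type json
  </format>
</match>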
See Inject section configurations for more details.
If this article is incorrect or outdated, or omits critical information, please let us know. Fluentd is a open source project under Cloud Native Computing Foundation (CNCF). All components are available under the Apache 2 License. | https://docs.fluentd.org/output/stdout | 2020-07-02T12:33:35 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.fluentd.org |
The Barcode feature allows you to use barcodes to extract the metadata and store them as extended attributes of the document.
Once you have selected the document template, you can then select one of the available barcode templates or create a new one by clicking on New. You may define multiple barcode templates for the same document template.
Just click on the New button and give the name of the new template.
Add a Pattern
A.
Conditional processing
For each pattern you can optionally define inclusion/exclusion expressions and format declarations to process or skip a specific barcode.
In Include and Exclude you can write a regular expression, while in the Format you specify a set of possible barcode formats.
When a barcode is detected it is processed only if:
- the format is one of the specified ones and
- the value does not match the exclusion expression and
- the value matches the inclusion expression
In order to write correct regular expressions for Include and Exclude, please familiarize yourself with the regular expression syntax.
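For instance (the values below are purely illustrative, and the available format names depend on your LogicalDOC version), a pattern could be limited to Code 128 invoice barcodes with settings such as:

Include: ^INV-\d{6}$
Exclude: (leave empty)
Format: CODE_128

With these settings, a barcode is processed only when it is a Code 128 symbol whose value looks like INV-123456; all other barcodes are skipped.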
Processing queue
In this panel you can see all documents not already processed. You can make unprocessable a document by right clicking on the item and then selecting the Mark as unprocessable option.
| https://docs.logicaldoc.com/en/document-metadata/barcodes | 2020-07-02T11:17:14 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['/images/stories/en/barcode/barcode-select.png', None],
dtype=object)
array(['/images/stories/en/barcode/barcodes.png', None], dtype=object)
array(['/images/stories/en/barcode/barcode-new.png', None], dtype=object)
array(['/images/stories/en/barcode/barcode-edit.png', None], dtype=object)
array(['/images/stories/en/barcode/barcode-queue.png', None], dtype=object)] | docs.logicaldoc.com |
Indicators and score profiles for assessing applications
Application indicators are business metrics that help derive application scores. You can create the indicators and score profiles based on which you can assess your applications. Application Portfolio Management is integrated with key applications in the ServiceNow platform to provide a deep insight into the applications. These integrations help you:
- Identify cost saving opportunities
The IT Chart of Accounts.
The assessment framework calculates the application score for each application on a scale of 1–10, where 10 is a good score and 1 is a low score. Assessments are based on the indicators you configure. Each of these indicators periodically captures the related application data, which is used to derive the application score. These indicators, with their respective values (weightage), are added to an application profile. The application is then associated with the application profile, which calculates the application score.
Use the preconfigured indicators, or create indicators, to assess applications along dimensions such as cost, quality, technical risk, investments, user satisfaction, and business value. Preconfigured indicators are sourced from Financial Management, IT Service Management, project portfolio management, surveys, assessments, SQL queries, performance analytics, and custom scripts. | https://docs.servicenow.com/bundle/london-it-business-management/page/product/application-portfolio-management/concept/applications-assessment-overview.html | 2020-07-02T13:38:56 | CC-MAIN-2020-29 | 1593655878753.12 | [] | docs.servicenow.com
Grid Layout
GridLayout is a grid box for the two dimensional layout. It constraints the x and y position, width, and height of the child actors.
Children are positioned in a grid, and the cells are of uniform size based on the first child added to the parent View.
You can set the number of columns; the rows automatically increase to hold the children. After the available space is used, the remaining rows become invisible by default.
Column
View layoutView = new View();
var gridLayout = new GridLayout();
gridLayout.Columns = 2;
layoutView.Layout = gridLayout;
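As a rough sketch (the child views and sizes below are only examples), children added to the view are then placed into the grid cells, with the first child determining the uniform cell size:

var child1 = new TextLabel("Cell 1");
child1.WidthSpecification = 200;
child1.HeightSpecification = 100;
layoutView.Add(child1);
layoutView.Add(new TextLabel("Cell 2"));
layoutView.Add(new TextLabel("Cell 3"));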
Related Information
- Dependencies
- Tizen 5.5 and Higher | https://docs.tizen.org/application/dotnet/guides/nui/grid-layout/ | 2020-07-02T12:02:09 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['../media/columnLayout.png', 'Column'], dtype=object)] | docs.tizen.org |
Does FIRMM store DICOMs?
Any DICOMs used by FIRMM will be automatically deleted from the FIRMM computer after two days. Therefore, it is very important that users never count on FIRMM for long term data storage.
How do I get data from my scanner to FIRMM?
To use FIRMM effectively, DICOMs need to be transferred as fast as possible to the incoming DICOM directory on the FIRMM host computer.
- SIEMENS: Set up and run
ideacmdtool or the FIRMM start/stop shortcuts on your scanner. Instructions for this are available in our ideacmdtool README or shortcuts README, respectively.
- GE: Please email us for help.
- PHILIPS: Please email us for help.
When the FIRMM installation script is run, it makes two Windows batch files on the FIRMM machine. They are called
FIRMM_session_start.bat and
FIRMM_session_stop.bat. Getting these to the scanner from the FIRMM host PC will allow SIEMENS users to use DICOM streaming start/stop shortcuts. Read our shortcuts documentation for more information.
Does FIRMM write out motion data?
FIRMM writes out a CSV file with FD and motion numbers (with and without respiration filter) for each session. Assuming your FIRMM user home directory is something like
FIRMM_USER_HOME=/home/firmmproc, the CSV can be found in the following directory on the FIRMM computer:
${FIRMM_USER_HOME}/FIRMM/v3.2.5b/sessions/FIRMM_logs.
Does FIRMM work with GE or PHILIPS scanners?
FIRMM is designed to work with any scanner as long as DICOM data can be sent to a SAMBA shared network directory on the FIRMM host computer. All of our documentation and DICOM streaming shortcuts, etc., are currently built for ease of use with SIEMENS scanners and SIEMENS' real-time DICOM transfer. With the release of version 3.2.5 we have added support for GE scanners. We are still working on support for a PHILIPS mode for FIRMM.
How do I test FIRMM?
After connecting to the FIRMM host computer via
ssh -X firmm_host (where
firmm_host is your FIRMM Linux system's name), run
FIRMM -t. This will start FIRMM on the
firmm_host computer and copy a few test DICOM series to the incoming DICOM directory specified in the settings. Remember to click Start FIRMM in the browser window. FIRMM will close automatically a little after the test is finished.
How do I change the FD thresholds?
The FD thresholds can be adjusted before beginning a session by using the settings tab in the FIRMM GUI. See our Usage documentation for more information.
Can I revert to a previously installed version of FIRMM?
Users can revert to a previous version of FIRMM if the minor version is the same (e.g. reverting from 2.1.1 to 2.1.0). This would normally occur only if a bugfix that was introduced within the minor version caused problems running FIRMM on the user's system.
To revert to a previous version, find the previous version of the
run.sh file that was created by the FIRMM automatic updater. It will be stored in the same directory as your current
run.sh and its name will contain the previous version number (e.g.
run.sh.2.1.0). Re-save that previous file under the name
run.sh, replacing or removing your current file. Then you can run FIRMM commands as normal and the desired version will be used.
Does FIRMM work with structural data?
Not currently, but we plan to add this capability in an upcoming release.
Where has FIRMM been tested?
As of the creation date of this document, FIRMM has been tested on the following systems/scanners:
- Intel Xeon E5-2640v3 (16GB RAM, HDD) with Siemens Prisma scanner
- Dell Optiplex with i5 processor (4GB RAM, HDD) with Siemens Skyra scanner
- Core i7-4790K (16GB RAM, SSD) with Siemens Prisma scanner
What is the FIRMM FD filter?
New changes in MRI acquisition procedures bring new opportunities and challenges to BOLD imaging. One of the most drastic changes in acquisition procedures in recent years is the introduction of multiband imaging. However, an unintended consequence of the improved temporal and spatial resolution that accompanies multiband imaging is artifacts in motion estimates from post-acquisition frame alignment procedures, caused primarily by chest motion during respiration. Chest motion, secondary to respiration, changes the magnetic field (B0) and 'tricks' any frame-to-frame alignment procedure used in real-time motion monitoring into correcting a 'head movement' even though no actual head movement existed. In the newest version of FIRMM, an optional band-stop (or notch) filter to remove such respiration-related artifacts from motion estimates is available, thus giving a more accurate real-time representation of motion. For more detail, see our upcoming publication.
Why did FIRMM stop receiving DICOMs after the scanner upgrade to VE11C?
FIRMM uses a SAMBA network mount to communicate between the SIEMENS scanner computer and the FIRMM computer. If you had previously used a SAMBA configuration file (located at
/etc/samba/smb.conf) that had the line
guest ok = yes, then with VE11C that will not work anymore. The solution is to replace the
guest ok = yes line with
valid users = username where
username should be the FIRMM username. You will then need to have an administrator run the command
sudo smbpasswd -a username where
username is that same FIRMM user and then enter the FIRMM user's password when prompted for it. Lastly, update your DICOM streaming shortcut on the scanner with this FIRMM username and password and you should be ready to DICOM stream once again.
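For reference, a minimal share definition in /etc/samba/smb.conf might look like the following (the share name, path, and username here are only examples; use your own incoming DICOM directory and FIRMM user):

[firmm_dicom]
   path = /home/firmmproc/incoming_dicom
   valid users = firmmproc
   read only = no

After saving the file, set the SAMBA password with sudo smbpasswd -a firmmproc, as described above.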
How do I check if the scanner computer can see the FIRMM computer's SAMBA share?
Use
net view IP_ADDRESS on the scanner computer in a Command Prompt where IP_ADDRESS is your FIRMM computer's IP address on the network. To get a Command Prompt on the scanner computer, log into advanced user mode and run
cmd. The
net view IP_ADDRESS command should show you any shared SAMBA network directories on the FIRMM computer. Otherwise, authentication or networking problems are happening. | https://firmm.readthedocs.io/en/latest/FAQ/ | 2020-07-02T12:16:40 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['../img/FirmmLogo.png', 'Logo'], dtype=object)] | firmm.readthedocs.io |
Other useful articles: Promote your Loyalty Cards on Business Cards
Tent cards are a great way to promote your loyalty program in your physical location. Place them on tables or by your cashier for customers to easily scan to join your loyalty program.
1 - Download this Photoshop template.
2 - Customize the front and back panels of your tent card. You'll see exactly which images and text you should replace in the layers panel on the right. Simply double click and add your own images and text.
3 - Export your front and back panels as a PNG or JPG.
4 - Download this print template in Photoshop or Illustrator. Place your front and back panel images (from step 3) in the dotted area as outlined.
5 - Save this document and send it to your print shop for printing! The dimensions for this tent card are 4 x 6 inches.
In addition to, or instead of tent cards have you considered using business cards? Check this article out for more information, including a free Adobe Illustrator template file. | http://docs.loopyloyalty.com/en/articles/1138445-how-to-promote-your-loyalty-program-on-a-tent-card | 2020-07-02T13:05:42 | CC-MAIN-2020-29 | 1593655878753.12 | [array(['https://downloads.intercomcdn.com/i/o/33706428/076c63b1ef9a16e371978f76/loopy-loyalty-tent-card-mockup.jpg',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/33706855/2af143aa26bd2c1a6ee2ce8b/front.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/33707008/6479ddfe5451906b00eb619b/print.png',
None], dtype=object) ] | docs.loopyloyalty.com |
check_point.mgmt.cp_mgmt_host module – Manages host objects on Check Point devices over Web Services API.
New in version 2.9 of check_point.mgmt
Synopsis
Manages host objects on Check Point devices including creating, updating and removing objects.
All operations are performed over Web Services API.
Parameters
Examples
- name: add-host
  cp_mgmt_host:
    ip_address: 192.0.2.1
    name: New Host 1
    state: present

- name: set-host
  cp_mgmt_host:
    color: green
    ipv4_address: 192.0.2.2
    name: New Host 1
    state: present

- name: delete-host
  cp_mgmt_host:
    name: New Host 1
    state: absent
Return Values
Common return values are documented here, the following are the fields unique to this module:
Collection links
Issue Tracker Repository (Sources) | https://docs.ansible.com/ansible/latest/collections/check_point/mgmt/cp_mgmt_host_module.html | 2022-09-24T19:24:42 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.ansible.com |
Creating a new collection
Starting with Ansible 2.10, related modules should be developed in a collection. The Ansible core team and community compiled these module development tips and tricks to help companies developing Ansible modules for their products and users developing Ansible modules for third-party products. See Developing collections for a more detailed description of the collections format and additional development guidelines.
Existing license requirements still apply to content in ansible/ansible (ansible-core).
Content that was previously in ansible/ansible or a collection and has moved to a new collection must retain the license it had in its prior repository.
Copyright entries by previous committers must also be kept in any moved files.
Before you start coding
This list of prerequisites is designed to help ensure that you develop high-quality modules that work well with ansible-core and provide a seamless user experience.
Read through all the pages linked off Developing modules, paying particular attention to Contributing your module to an existing Ansible collection.
We encourage PEP 8 compliance. See PEP 8 for more information.
We encourage supporting Python 2.6+ and Python 3.5+.
Look at Ansible Galaxy and review the naming conventions in your functional area (such as cloud, networking, databases).
With great power comes great responsibility: Ansible collection maintainers have a duty to help keep content up to date and release collections they are responsible for regularly. As with all successful community projects, collection maintainers should keep a watchful eye for reported issues and contributions.
We strongly recommend unit and/or integration tests. Unit tests are especially valuable when external resources (such as cloud or network devices) are required. For more information see Testing Ansible and the Testing Working Group.
Naming conventions
Fully Qualified Collection Names (FQCNs) for plugins and modules include three elements:
- the Galaxy namespace, which generally represents the company or group
- the collection name, which generally represents the product or OS
- the plugin or module name
  - always in lower case
  - words separated with an underscore (_) character
  - singular, rather than plural, for example, command not commands
For example, community.mongodb.mongodb_linux or cisco.meraki.meraki_device.
It is convenient if the organization and repository names on GitHub (or elsewhere) match your namespace and collection names on Ansible Galaxy, but it is not required. The plugin names you select, however, are always the same in your code repository and in your collection artifact on Galaxy.
Speak to us
Circulating your ideas before coding helps you adopt good practices and avoid common mistakes. After reading the “Before you start coding” section you should have a reasonable idea of the structure of your modules. Write a list of your proposed plugin and/or module names, with a short description of what each one does. Circulate that list on IRC or a mailing list so the Ansible community can review your ideas for consistency and familiarity. Names and functionality that are consistent, predictable, and familiar make your collection easier to use.
Where to get support
Ansible has a thriving and knowledgeable community of module developers that is a great resource for getting your questions answered.
In the Ansible Community Guide you can find how to:
Subscribe to the Mailing Lists - We suggest “Ansible Development List” and “Ansible Announce list”
#ansible-devel - We have found that communicating on the #ansible-devel chat channel (using Matrix at ansible.im or using IRC at irc.libera.chat) works best for developers so we can have an interactive dialogue.
Working group and other chat channel meetings - Join the various weekly meetings meeting schedule and agenda page
Required files
Your collection should include the following files to be usable:
- an __init__.py file - An empty file to initialize namespace and allow Python to import the files. Required
- at least one plugin, for example, /plugins/modules/$your_first_module.py. Required
- if needed, one or more /plugins/doc_fragments/$topic.py files - Code documentation, such as details regarding common arguments. Optional
- if needed, one or more /plugins/module_utils/$topic.py files - Code shared between more than one module, such as common arguments. Optional
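If you are starting a brand-new collection, one convenient way to get this layout (shown here only as an illustration; the namespace and collection name are placeholders) is to generate a skeleton and then add your plugins to it:

ansible-galaxy collection init my_namespace.my_collection

my_namespace/my_collection/
├── galaxy.yml
├── README.md
├── plugins/
│   ├── modules/
│   ├── module_utils/
│   └── doc_fragments/
└── ...

The exact skeleton generated by ansible-galaxy varies between versions, so treat the tree above as a sketch rather than the literal output.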
When you have these files ready, review the Contributing your module to an existing Ansible collection again. If you are creating a new collection, you are responsible for all procedures related to your repository, including setting rules for contributions, finding reviewers, and testing and maintaining the code in your collection.
If you need help or advice, consider joining the #ansible-devel chat channel (using Matrix at ansible.im or using IRC at irc.libera.chat). For more information, see Where to get support and Communicating with the Ansible community.
New to git or GitHub
We realize this may be your first use of Git or GitHub. The following guides may be of use: | https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_in_groups.html | 2022-09-24T20:16:43 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.ansible.com |
Implement a user profile service
Use a User Profile Service to persist information about your users and ensure variation assignments are sticky. Sticky implies that once a user gets a particular variation, their assignment won't change.
In the React SDK, there is no default implementation. Implementing a User Profile Service is optional and is only necessary if you want to keep variation assignments sticky even when experiment conditions are changed while it is running (for example, audiences, attributes, variation pausing, and traffic distribution). Otherwise, the React SDK is stateless and relies on deterministic bucketing to return consistent variation assignments.
import { createInstance } from '@optimizely/react-sdk';

// Sample user profile service implementation
const userProfileService = {
  lookup: userId => {
    // Perform user profile lookup
  },
  save: userProfileMap => {
    // Persist user profile
  },
};

const optimizelyClient = createInstance({
  datafile: window.datafile, // assuming you have a datafile at window.datafile
  userProfileService, // Passing your userProfileService created above
});
Implement asynchronous user lookups with experiment bucket map attribute
You can implement
attributes.$opt_experiment_bucket_map to perform asynchronous lookups of users' previous variations. The SDK handles
attributes.$opt_experiment_bucket_map the same way it would
userProfileService.lookup, and this allows you to do an asynchronous lookup of the experiment bucket map before passing it to the Activate method.
Note
attributes.$opt_experiment_bucket_map will always take precedence over an implemented
userProfileService.lookup.
The example below shows how to implement consistent bucketing via attributes.
import React from 'react';
import {
  createInstance,
  OptimizelyProvider,
} from '@optimizely/react-sdk'

const optimizelyClient = createInstance({
  datafile: window.datafile, // assuming you have a datafile at window.datafile
});

// In practice, this could come from a DB call
const experimentBucketMap = {
  123: { // ID of experiment
    variation_id: '456', // ID of variation to force for this experiment
  }
}

const user = {
  id: 'myuser123',
  attributes: {
    // By passing this $opt_experiment_bucket_map, we force that the user
    // will always get bucketed into variation_id='456' for experiment id='123'
    '$opt_experiment_bucket_map': experimentBucketMap,
  },
};

function App() {
  return (
    <OptimizelyProvider
      optimizely={optimizelyClient}
      user={user}
    >
      {/* … your application components here … */}
    </OptimizelyProvider>
  )
}
You can use the asynchronous service example below to try this functionality in a test environment. If you implement this example in a production environment, be sure to modify
UserProfileDB to a real database.
import React from 'react';
import {
  createInstance,
  OptimizelyProvider,
} from '@optimizely/react-sdk'

// This is here only as an example; in a production environment this could access a real datastore
class UserProfileDB {
  constructor() {
    /* Example structure
     * {
     *   user1: {
     *     user_id: 'user1',
     *     experiment_bucket_map: {
     *       '12095834311': { // experimentId
     *         variation_id: '12117244349' // variationId
     *       }
     *     }
     *   }
     * }
     */
    this.db = {}
  }

  save(user_id, experiment_bucket_map) {
    return new Promise((resolve, reject) => {
      setTimeout(() => {
        this.db[user_id] = { user_id, experiment_bucket_map }
        resolve()
      }, 50)
    })
  }

  lookup(userId) {
    return new Promise((resolve, reject) => {
      setTimeout(() => {
        let result
        if (this.db[userId] && this.db[userId].experiment_bucket_map) {
          result = this.db[userId].experiment_bucket_map
        }
        resolve(result)
      }, 50)
    })
  }
}

const userDb = new UserProfileDB()

const userProfileService = {
  lookup(userId) {
    // In our case we will not implement this function here. We will look up the attributes for the user below.
  },
  save(userProfileMap) {
    const { user_id, experiment_bucket_map } = userProfileMap
    userDb.save(user_id, experiment_bucket_map)
  }
}

const client = createInstance({
  datafile: window.datafile, // assuming you have a datafile at window.datafile
  userProfileService,
})

// React SDK supports passing a Promise as user, for async user lookups like this
const user = userDb.lookup('user1').then((experimentBucketMap = {}) => {
  return {
    id: 'user1',
    attributes: { $opt_experiment_bucket_map: experimentBucketMap },
  }
})

// The provider will use the given user and optimizely instance.
// The provided experiment bucket map will force any specified variation
// assignments from userDb.
// The provided user profile service will save any new variation assignments to userDb.
function App() {
  return (
    <OptimizelyProvider
      optimizely={client}
      user={user}
    >
      {/* … your application components here … */}
    </OptimizelyProvider>
  )
}
Updated about 2 years ago | https://docs.developers.optimizely.com/full-stack/docs/implement-a-user-profile-service-react | 2022-09-24T20:16:10 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.developers.optimizely.com |
Usability Features
Dseqr has multiple usability features that help you understand what buttons and inputs do.
Tips on hover
Most buttons show a tooltip on hover:
Hovering over input values can also provide useful information like full gene names:
Page tours
Click the information icon at the top of any page for a guided tour of the main features:
Interactive scatterplots
Scroll to zoom and drag to pan on scatterplots:
| https://docs.dseqr.com/docs/general/usability/ | 2022-09-24T20:44:56 | CC-MAIN-2022-40 | 1664030333455.97 | [array(['https://docs.dseqr.com/docs/general/usability/tooltips.png',
'Tooltips'], dtype=object)
array(['https://docs.dseqr.com/docs/general/usability/title.png',
'Titles'], dtype=object)
array(['https://docs.dseqr.com/docs/general/usability/tour.png', 'Tour'],
dtype=object)
array(['https://docs.dseqr.com/docs/general/usability/zoom-sc.png',
'Zoom Single-Cell'], dtype=object) ] | docs.dseqr.com |
The group met 0 times this quarter.
Recent Activities
First Activity
The award was advertised by committee members and CAM, resulting in three qualified applicants.
Meets LITA’s strategic goals for Member Engagement
Second Activity
In the course of launching the award process this year, we discovered that the OCLC representative that normally works with the award committee had left the company. Thanks to Jenny’s efforts a new rep has been appointed and the process will continue uninterrupted.
Meets LITA’s strategic goals for Organizational Stability and Sustainability
What will your group be working on for the next three months?
Reviewing the 7 qualified applicants (three from this year and 4 from previous years) and selecting one applicant for the award
Is there anything LITA could have provided during this time that would have helped your group with its work?
n/a
Please provide suggestions for future education topics, initiatives, publications, resources, or other activities that could be developed based on your group’s work.
n/a
Submitted by Aimee Fifarek on 01/08/2019 | https://docs.lita.org/2019/01/frederick-g-kilgour-award-committee-december-2018-report/ | 2022-09-24T19:55:55 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.lita.org |
method backtrace
Documentation for method backtrace assembled from the following types:
class Exception
(Exception) method backtrace
Defined as:
method backtrace(Exception:D:)
Returns the backtrace associated with the exception in a Backtrace object or an empty string if there is none. Only makes sense on exceptions that have been thrown at least once.
try die "Something bad happened";
with $! { say .backtrace }
| https://docs.raku.org/routine/backtrace | 2022-09-24T19:37:58 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.raku.org
Bad printing quality through USB
Not all printers are built the same and this means that you can experience different print quality from printer to printer, not just because of the mechanics of the printer but also the printer's way of receiving information. Most printer motherboards are not built to receive printer commands through the USB port very well. They can easily print from this port but it can be limited, how many commands it can handle at a time, and thus how fast you can print. The printer will often start showing irregularities like patches of plastic all over the model. These patches occur because the printer's memory becomes full and therefore cannot take in any more information and the print head will momentarily stop. It is still unknown exactly what the solution to this problem is but there are some suggestions to try out.
Bad cable
There are several types of USB cables and they are not all built equally. Many Micro USB cables are made solely to charge gadgets like phones and headsets. This prevents the cable from transmitting data that the printer needs. If the cable is very thin, it is a good sign that it does not have all the conductors needed to transfer data.
When using a cable that needs to transfer data around a 3D printer, the problem with the cable can occur due to all the motors being under high voltage. They create magnetic fields that can interfere with the signal through the cable, and therefore we suggest that you use a shielded cable. These cables have a layer of metal around the conductors inside the cable and are generally a lot thicker than the "normal" cables. This keeps the motors and wires from interfering with the data moving through the cable.
Incorrect USB port (Pi 4)
The new Raspberry Pi 4 has 2 types of USB ports you can connect your printer to. There are 2 blue and 2 black ports. The blue ports are USB 3.0 and are a newer standard than the old USB 2.0 as the black ports would be. Some older printers have trouble talking to the USB 3.0 standard. Try plugging the USB connector into one of the black USB ports and see if that solves the problem. On the other hand, the blue ports are good for attaching cameras
Try a higher baud rate
When the printer and your Raspberry Pi have to talk to each other, they need to agree on the speed at which they will do that. This is called the "baud rate" and is set when the connection is established. It will typically be the motherboard in your 3D printer that sets the limit on how fast they can communicate, but if you have the option to set the baud rate higher, give it a try. To set the baud rate, go to your SimplyPrint printer settings and turn off “Automatic printer connection” (if you do not do this, you will not be able to adjust the baud rate before OctoPrint connects to the printer again). Go into OctoPrint and open the “Connection” tab on the left. Press "Disconnect", raise the baud rate one step at a time, and see if it will connect.
Disable bigger plugins
The problem may also lie with your Raspberry Pi, especially if you are running a slightly older version (Pi 2 and 3B). The small computer has to handle a lot of things while your printer is running and it can become too much for the thing. If it is running too many things at once, it may have a hard time keeping the printer fed with information. If you have some bigger plugins, you can try by disabling them while performing a test print. You can disable plugins by opening OctoPrint and finding the wrench at the top of the page. Find “plugin Manager” on the list on the left. Here you can turn off plugins by clicking on the small icon on the right which looks like a small slider.
Start OctoPrint in safe mode
To test if it is your raspberry that is causing problems, try starting it up in "Safe mode" which deactivates everything but the pure OctoPrint install. Open OctoPrint and find the "on / off" button at the top of the page and press "Restart OctoPrint in safe mode". Run a test print directly through OctoPrint as SimplyPrint cannot talk to OctoPrint with safe mode enabled. If this does not provide any improvement, the issue may not be OctoPrint related.
Install arc-welder
Since the problem occurs because too many commands are sent to printers at once, it makes sense to remove some of the many commands. The OctoPrint plugin “Arc-welder” reduces the amount of data by combining several commands into one, without reducing the quality of the print very much. This can increase how fast your printer can print through SimplyPrint and OctoPrint. See the article called “How to install an OctoPrint plugin” for a guide on how to install this.
Firmware edits
The firmware may limit the amount of data that can be stored on the printer at one time. If everything else fails, firmware can be updated on the machine where the buffer may be made a little bigger and thus keep the printer fed with commands. This will only work for marlin firmware which is fortunately very common.
Find the firmware you need. If you do not know which firmware you have on your printer, you can usually search for the name of your printer, followed by "firmware". Once you have found the version you want, open the file called “Configuration_adv.h” in a program like Arduino IDE, search for “BLOCK_BUFFER_SIZE”, and set the value that comes right after it to 128. Then search for “BUFSIZE” and set the value that comes after it to 32, as in the sketch below.
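A rough sketch of the two edited lines (default values and the exact surrounding code differ between printers and Marlin versions, so change only the numbers, not the names):

#define BLOCK_BUFFER_SIZE 128
#define BUFSIZE 32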
Save the file and transfer the firmware to your printer. A guide on how to flash new firmware on your printer will come in the near future. | https://docs.simplyprint.io/article/bad-printing-quality-through-usb | 2022-09-24T19:21:15 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.simplyprint.io
Performance considerations
Make sure you understand the performance considerations in your installation.
Contact ThoughtSpot Support to learn more.
Aggregated Worksheets and Joins
To join an aggregated worksheet with a base table, you must configure your installation to allow this behavior.
The aggregated worksheet cannot use more than 5 component tables.
The number of rows in the final aggregated worksheet cannot be greater than 1000.
Row-level Security Boundaries
Maximum number of unique RLS rules with search data suggestions should not exceed 15K. | https://docs.thoughtspot.com/software/7.0/performance | 2022-09-24T19:29:42 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.thoughtspot.com |
System monitoring
Admin Console
The ThoughtSpot application includes an Admin Console center, where you can easily monitor usage, alerts, events and general cluster health. Navigate to the Admin Console by selecting Admin from the top navigation bar.
Only users with administrative privileges can view the Admin Console. However, administrative users can present the information in the Admin Console to others.
Administrators can also create their own, custom pinboards that reflect system data in ways that are meaningful to specific departments or groups. For more information, see the following documentation:
Much of the data presented by these boards is also available through tscli commands; see tscli Command Reference.
Log files
Many of the administration commands output logging information to log files. The logs get written into the fixed directory
/export/logs/, with a sub-directory for each subsystem. The individual log directories follow:
. | https://docs.thoughtspot.com/software/7.0/system-monitor | 2022-09-24T19:53:45 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.thoughtspot.com |
Applying Wholesale Discounts to Individual Customers
Wholster allows you to apply specific wholesale discounts to individual customers' accounts.
To apply a wholesale discount to a customer account, select Customers from your Wholster navigation.
Select the Customer that you would like to apply the wholesale discount to.
And in the customer dashboard, simply ‘toggle on’ the wholesale discount that you would like to apply to their account.
*To remove this wholesale discount from their account, simply ‘toggle off’ the discount at any time.
Congratulations, you have now applied this wholesale discount to your customer! | https://docs.wholster.com/article/applying-wholesale-discount-to-individual-customers/ | 2022-09-24T19:54:22 | CC-MAIN-2022-40 | 1664030333455.97 | [] | docs.wholster.com |
Status: Stable
This builder optimizes .html files using html-minifier.
When to Use It
When you want your HTML source files to be minified so they are as small as possible (without any semantic changes) in production.
How to Use It
We'll define an index.html file:
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Document</title>
  </head>
  <body>
    <h3>This is very long</h3>
  </body>
</html>
Then we'll define our build step to optimize our HTML files:
{
  "version": 2,
  "builds": [{ "src": "*.html", "use": "@now/html-minifier" }]
}
When we deploy with now, the resulting deployment URL will be live.
Next, use curl to query the deployment URL; you will notice the HTML has been compressed:
▲ curl <!doctype html><html lang=en><meta charset=UTF-8><title>Document</title><h3>This is very long</h3>
The example deployment above is open-source and you can view the code for it here.
Technical Details
Entrypoint
The entrypoint is always a
.html file you want to optimize. | https://docs-560461g10.zeit.sh/docs/v2/deployments/official-builders/html-minifier-now-html-minifier/ | 2019-08-17T13:59:54 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs-560461g10.zeit.sh |
Static Inventory Page
Broadsign Direct's static inventory page displays a list of all non-digital advertising faces on your network. You can filter, sort, and view your static inventory in list view or map view.
Once you filter and sort your faces, you can perform a few tasks:
- Create specialized proposals containing faces selected "à la carte". See Access the Proposal Builder and Proposal Builder.
- Build packages that your sales team can target at a later time. See Package Builder.
The filter panel enables you to narrow down your static inventory. As a result, you can better match your clients' requirements when building packages (see About Packages) or creating proposals (see About Proposals).
Filter Panel Sections
The face inventory in list view displays a list of all faces on your network. You can narrow the list using the filter panel (see The Filter Panel).
Also, each face has an information card showing details of that particular screen. Your Admin can associate an image with the screen (see Assign Images to Screens).
Note: If you apply filters to your faces while in List View, and then select Map View, the same filters apply. In other words, those faces that appear in List View will also appear in Map View.
Static Inventory - List View - Components
The static inventory in map view displays the locations of your faces on a map. If you have yet to narrow down your list, all faces in your inventory will appear. As you apply filters or select individual faces, those faces remaining will appear.
Note: If you apply filters to your faces while in Map View, and then select List View, the same filters apply. In other words, those faces that appear in Map View will also appear in List View.
Note: Broadsign Direct integrates with Google Maps APIs and uses its UI controls.
Screen Inventory - Map View - Components
As you filter your results and select individual faces, this panel displays a summary of your selections.
The Summary Panel Sections
The Booking level of faces widget displays the availability of faces in your narrowed list (see "The Filter Panel" and "The Ad Flight Selector", above). You can select to display information "per day" or "per week".
For a booking level widget that applies to all faces on your network, see Booking Levels of Screens Widget.
Note: Once you have booked a proposal, it will appear as dark green (see below); however, your scheduling team still needs to confirm the proposal before it is considered fully booked.
light green area: Represents held proposals on a specific day, from your narrowed list. Hover over the light green area to see the percentage of held proposals for that day.
dark green area: Represents booked proposals (campaigns) on a specific day, from your narrowed list. Hover over the dark green area to see the percentage of booked proposals (campaigns) for that day. | https://docs.broadsign.com/broadsign-direct/static-inventory-page.html | 2019-08-17T13:42:53 | CC-MAIN-2019-35 | 1566027313259.30 | [array(['Resources/Images/static-inventory-filter-panel.png', None],
dtype=object)
array(['Resources/Images/static-summary-panel.png', None], dtype=object)] | docs.broadsign.com |
What-if analysis statistics.. | https://docs.microsoft.com/en-us/business-applications-release-notes/October18/service/field-service/resource-scheduling-optimization-rso/what-if-analysis-statistic-ui | 2019-08-17T14:18:58 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
Contents Security Operations. On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-security-management/page/administer/list-administration/concept/c_ListConfiguration.html | 2019-08-17T13:38:32 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.servicenow.com |
Quickstart¶
If", "z column")
demo_data() gives us a mix of categorical and numerical variables:
In [4]: data
Out[4]: {]), 'z column': array([ 2.26975462, -1.45436567, 0.04575852, -0.18718385, 1.53277921, 1.46935877, 0.15494743, 0.37816252])}
In [5]: dmatrices("y ~ x1 + x2", data)
Out[5]:
In [6]: outcome, predictors = dmatrices("y ~ x1 + x2", data)
In [7]: betas = np.linalg.lstsq(predictors, outcome)[0].ravel()
In [8]:
In [9]: dmatrix("x1 + x2", data)
Out[9]:
In [10]: d = dmatrix("x1 + x2", data)
In [11]:
In [12]: dmatrix("x1 + x2 - 1", data)
Out[12]:
In [13]: dmatrix("x1 + np.log(x2 + 10)", data)
Out[13]:
In [14]: new_x2 = data["x2"] * 100
In [15]: dmatrix("new_x2")
Out[15]:
In [16]: dmatrix("center(x1) + standardize(x2)", data)
Out[16]:
In [26]: dmatrix("0 + a", data)
Out[26]:
In [27]: dmatrix("a", data)
Out[27]:
In [28]: dmatrix("0 + a:b", data)
Out[28]:
In [29]: dmatrix("a + b + a:b", data)
Out[29]:
In [30]: dmatrix("a*b", data)
Out[30]:
In [31]: dmatrix("C(c, Poly)", {"c": ["c1", "c1", "c2", "c2", "c3", "c3"]})
Out[31]:
In [32]: dmatrix("a:x1", data)
Out[32]:
In [33]: dmatrix("x1 + a:x1", data)
Out[33]:
In [34]: dmatrix("C(a, Poly):center(x1)", data)
Out[34]:
| https://patsy.readthedocs.io/en/v0.4.1/quickstart.html | 2019-08-17T13:18:48 | CC-MAIN-2019-35 | 1566027313259.30 | [] | patsy.readthedocs.io
Resource.HyperlinkSubAddress property (Project)
Gets or sets the address of a location within the target document. Read/write String.
Syntax
expression.
HyperlinkSubAddress
expression A variable that represents a Resource object.
Support and feedback
Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback. | https://docs.microsoft.com/en-us/office/vba/api/project.resource.hyperlinksubaddress | 2019-08-17T14:29:26 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
your requirements. More information: Create new metadata or use existing metadata
Part of the name of any custom field you create is the customization prefix. This is set based on the solution publisher for the solution you’re working in. If you care about the customization prefix, make sure that you are working in an unmanaged solution or the default solution where the customization prefix is the one you want for this entity. For information about how to change the customization prefix, see Solution publisher.
You can access fields in the application in several ways:
From the solution explorer you can expand the entity and choose the Fields node. From the list of fields, click New to create a new field or double-click any of the fields on the list to edit them.
Expand the entity and choose the Forms node. Open a form in the form editor and below the Field Explorer click New Field to create a new field. For any field already added to the form you can double-click the field to display the Field Properties. On the Details tab, click Edit.
- Another way to go to the form editor is to use the Form command on the command bar for any entity record.
If you use the metadata browser tool, use the Entity Metadata Browser page to view details about a specific entity, and then click the Attributes button. If a field is editable, you can click the Edit Attribute button to edit the field. More information: Use the metadata browser
All fields have the following properties:
Any of the fields that provide direct text input have an IME Mode. The input method editor (IME) is used for East Asian languages like Japanese. IMEs allow the user to enter the thousands of different characters used in East Asian written languages using a standard 101-key keyboard.
Create or edit entity fields
Create new fields to capture data when existing system entities don’t have fields that meet your requirements. After you create new fields, be sure to include them on the forms and views for the entity so that they are available from the relevant Customize the System.
Under Components, expand Entities, and then expand the entity you want.
Select Fields.
To add a new field, on the Actions toolbar, select New, and enter a Display Name to generate the Name.
- OR -
To edit one or more fields, select the field or fields (using the Shift key) you want to modify and then on the Actions toolbar, select Edit. You can make changes to the following fields:
For Field Requirement, select whether it’s optional, recommended, or required.
In Searchable, select whether to include this field in the list of fields shown in Advanced Find for this entity and also in the field available for customizing the find columns in the Quick Find view and the Lookup view.
For Field Security, enable or disable the feature for this field.
For Auditing, enable or disable the feature for this field.
Note
When you select multiple fields to edit, the Edit Multiple Fields dialog appears. You can edit Field Requirement, Searchable, and Auditing.
For new fields, under Type, enter the required information for the specified type. For existing fields, you cannot modify the type, but you can modify the settings for the Types of fields.
Select the Field type, Format, and Maximum length of the field.
Select the IME mode for this attribute.
Note
This specifies whether the active state of an input method editor (IME) is enabled. An IME lets you enter and edit Chinese, Japanese, and Korean characters. IMEs can be in an active or inactive state. The active state accepts Chinese, Japanese, or Korean characters. The inactive state behaves like a regular keyboard and uses a limited set of characters.
For a new field, be sure to add a Description of the field – this provides instructions to your users on how to use the new field.
Click Save and Close.
Publish your customization.
To publish your changes for one entity, under Components, select Entities, and then the entity that you made changes to. On the Actions toolbar, select Publish.
To publish all changes you have made to multiple entities or components, on the Actions toolbar, select Publish All Customizations.
Note
Installing a solution or publishing customizations can interfere with normal system operation. We recommend that you schedule a solution Dynamics CRM Online 2016 Update 1.
In previous releases, several out-of-the-box entities in Dynamics 365, such as the Case, Lead, and Opportunity entities, included a special kind of lookup field that represented a customer. Using this lookup field you could choose between two entities: Account or Contact. With this new capability, you can add the Customer field to any system or custom entity. You can use the Customer field in more entities to track the customer's information in the same way you've used the Customer field in the Case, Lead, and Opportunity entities.
Let's look at the following business scenario. Your company is an insurance provider. You use Dynamics 365 to manage your customer interactions and standardize business processes. It’s important for you to know if a recipient of policies or claims is an individual or a company. To address this business requirement, you can create two custom entities: Policies and Claims. To get and track the customer information you want, add the Customer lookup field to the Policies entity and the Claims entity, by using the new Customer field capability.
Single line of text format options
The following table provides information about the format options for single line of text fields.
Whole number format options
The following table provides information about the format options for whole number fields.
Using the right type of number
When choosing the correct type of number field to use, the choice to use a Whole Number or Currency type should be pretty.
Using currency fields
Currency fields allow for an organization to configure multiple currencies that can be used for records in the organization. When organizations have multiple currencies, they typically want to be able to perform calculations to provide values using their base currency. When you add a currency field to an entity that has no other currency fields, two additional fields are added:
A lookup field called Currency that you can set to any active currency configured for your organization. You can configure multiple active currencies for your organization in Settings > Business Management > Currencies. There you can specify the currency and an exchange rate with the base currency set for your organization. If you have multiple active currencies, you can add the currency field to the form and allow people to specify which currency should be applied to money values for this record. This will change the currency symbol that is shown for the currency fields in the form.
Individuals can also change their personal options to select a default currency for the records they create.
A decimal field called Exchange Rate that provides the exchange rate for a selected currency associated with the entity with respect to the base currency. If this field is added to the form, people can see the value, but they can’t edit it. The exchange rate is stored with the currency.
For each currency field you add, another currency field is added with the prefix “_Base” on the name. This field stores the calculation of the value of the currency field you added and the base currency. Again, if this field is added to the form, it can’t be edited.
When you configure a currency field you can choose the precision value. There are essentially three options as shown in the following table.
Different types of lookups
When you create a new lookup field you are creating a new Many-to-One (N:1) entity relationship between the entity you’re working with and the Target Record Type defined for the lookup. There are additional configuration options for this relationship that are described in Create and edit entity relationships. But all custom lookups can only allow for a reference to a single record for a single target record type.
However, you should be aware that not every lookup behaves this way. There are several different types of system lookups as shown here.
Image fields
Use image fields to display a single image per record in the application. Each entity can have one image field. You can add an image field to custom entities but not to system entities. The following system entities have an image field. Those marked with an asterisk are enabled by default.
Even though an entity has an image field, displaying that image in the application requires an additional step. In the entity definition the Primary Image field values are either [None] or Entity Image. Click Entity Image to display the image in the application. More information: Create and edit entities
When image display is enabled for an entity, any records that don’t have an image will display a placeholder image. For example, the Lead entity:
People can choose the default image to upload a picture from their computer. Images must be less than 5120 KB and must one of the following formats:
jpg
jpeg
gif
tif
tiff
bmp
png
When the image is uploaded, it will be converted to a .jpg format and all downloaded images will also use this format. If an animated .gif is uploaded, only the first frame is saved.
When an image is uploaded, it will be resized to a maximum size of 144 pixels by 144 pixels. People should resize or crop the images before they upload them so that they will display well using this size. All images are cropped to be square. If both sides of an image are smaller than 144 pixels, the image will be cropped to be a square with the dimensions of the smaller side. | https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/administering-dynamics-365/dn531187(v=crm.8) | 2019-08-17T13:00:45 | CC-MAIN-2019-35 | 1566027313259.30 | [array(['images/dn531187.3f5f194b-0536-4f57-92e4-eacd8a2d32f1%28crm.8%29.jpeg',
'Placeholder image for Lead entity form in CRM Placeholder image for Lead entity form in CRM'],
dtype=object) ] | docs.microsoft.com |
Unified Communications Managed API 3.0 Workflow SDK Documentation
Documentation updated Thursday, December 1, 2011
Use Microsoft Unified Communications Managed API (UCMA) 3.0 Workflow SDK to create and deploy communication workflow applications that are developed with the Microsoft .NET Framework-supported C# language.
In This SDK
See Also
Other Resources
Microsoft Online Privacy Notice
Accessibility in Microsoft Products
Legal Information (Unified Communications Managed API 3.0 Workflow SDK)
Unified Communications Workflow Sample Applications | https://docs.microsoft.com/en-us/previous-versions/office/developer/lync-2010/gg421021%28v%3Doffice.14%29 | 2019-08-17T14:20:28 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
SPEventReceiverBase Class
Provides methods for event receivers in the Microsoft SharePoint Foundation object model and serves as the base class for creating list items, lists, Webs, and sites.
Inheritance Hierarchy
System.Object
Microsoft.SharePoint.SPEventReceiverBase
Microsoft.SharePoint.SPItemEventReceiver
Microsoft.SharePoint.SPListEventReceiver
Microsoft.SharePoint.SPWebEventReceiver
Microsoft.SharePoint.Workflow.SPWorkflowEventReceiver
Namespace: Microsoft.SharePoint
Assembly: Microsoft.SharePoint (in Microsoft.SharePoint.dll)
Available in Sandboxed Solutions: Yes
Available in SharePoint Online
Syntax
'Declaration <SubsetCallableTypeAttribute> _ Public Class SPEventReceiverBase 'Usage Dim instance As SPEventReceiverBase
[SubsetCallableTypeAttribute] public class SPEventReceiverBase
Remarks
The SPEventReceiverBase class should not be instantiated but provides methods for receiver classes deriving from it that are listed in the Inheritance Hierarchy section. Override one of the derived classes below to create a custom event handler, and register the handler by using the SPEventReceiverDefinition class.
Examples
The following code example shows how to register a custom event receiver that traps the delete event on the Web site.
Dim webSite As SPWeb = New SPSite("").OpenWeb()
Dim newReceiver As SPEventReceiverDefinition = webSite.EventReceivers.Add()
SPWeb oWebsite = new SPSite("").OpenWeb();
SPEventReceiverDefinition newReceiver = oWebsite.EventReceivers.Add();
oWebsite.Dispose();
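The definition returned by EventReceivers.Add() still needs its Type, Assembly, and Class properties set before it takes effect, and it must point at a receiver class derived from one of the classes in the inheritance hierarchy above. The following C# sketch shows both parts; the assembly and class names are placeholders, not values from the original example.

using Microsoft.SharePoint;

// Hypothetical receiver that traps and cancels the WebDeleting event.
public class MyWebEventReceiver : SPWebEventReceiver
{
    public override void WebDeleting(SPWebEventProperties properties)
    {
        properties.Cancel = true;
        properties.ErrorMessage = "Deleting this site is not allowed.";
    }
}

// Registration, run once (for example from a feature receiver or console tool).
using (SPWeb web = new SPSite("").OpenWeb())
{
    SPEventReceiverDefinition def = web.EventReceivers.Add();
    def.Name = "Web Deleting Event";
    def.Type = SPEventReceiverType.WebDeleting;
    def.Assembly = "MyReceivers, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef";
    def.Class = "MyReceivers.MyWebEventReceiver";
    def.Update();
}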
SPEventReceiverBase Members
Microsoft.SharePoint Namespace | https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/ms453647%28v%3Doffice.14%29 | 2019-08-17T14:18:03 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
Usage
Once you link a GitLab project, we will deploy every git-push related to that project. If there is no now.json file within the project, we will deploy the project statically.

Using a now.json file, you can configure your projects to deploy as dynamic serverless functions.
Default Behavior
A Deployment for Each Push
Now for GitLab will deploy each push by default. This includes pushes to any branch and any merge requests made from those branches. This allows those working within the project to preview the changes made before they are pushed to production.
With every push, if Now is already building a previous commit, it will cancel that current build to start the most recent commit, ensuring you always have the latest changes deployed as quickly as possible.
Aliasing the Default Branch
If an alias is set within the now.json file, pushes and merges to the default branch (commonly "master") will be aliased automatically and made live to those defined aliases with the latest deployment made with a push.

For example, the following now.json configuration will make Now for GitLab alias each push to the default branch to the configured domain:
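A minimal sketch of such a configuration is shown below; the alias domain and the static build source are placeholders rather than values from the original article:

{
  "version": 2,
  "alias": "my-project.example.com",
  "builds": [{ "src": "index.html", "use": "@now/static" }]
}

With a file like this committed to the repository, pushes to the default branch are deployed and then aliased to the configured domain.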
Read Next
Learn more about deploying your apps with Now using the following resources: | https://docs-560461g10.zeit.sh/docs/v2/integrations/now-for-gitlab/ | 2019-08-17T12:34:49 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs-560461g10.zeit.sh |
CreateBackup.
Request Syntax
{ "Description": "
string", "ServerName": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- Description
A user-defined description of the backup.
Type: String
Required: No
- ServerName
The name of the server that you want to back up.
Type: String
Length Constraints: Minimum length of 1. Maximum length of 40.
Pattern:
[a-zA-Z][a-zA-Z0-9\-]*
Required: Yes
Response Syntax
{ "Backup": { "BackupArn": "string", "BackupId": "string", "BackupType": "string", "CreatedAt": number, "Description": "string", "Engine": "string", "EngineModel": "string", "EngineVersion": "string", "InstanceProfileArn": "string", "InstanceType": "string", "KeyPair": "string", "PreferredBackupWindow": "string", "PreferredMaintenanceWindow": "string", "S3DataSize": number, "S3DataUrl": "string", "S3LogUrl": "string", "SecurityGroupIds": [ "string" ], "ServerName": "string", "ServiceRoleArn": "string", "Status": "string", "StatusDescription": "string", "SubnetIds": [ "string" ], "ToolsVersion": "string", "UserArn": "string" } }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
Errors
For information about the errors that are common to all actions, see Common Errors.
- InvalidStateException
The resource is in a state that does not allow you to perform a specified action.
HTTP Status Code: 400
- LimitExceededException
The limit of servers or backups has been reached.

HTTP Status Code: 400
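For reference, the same operation can be invoked through the AWS CLI; the server name and description below are placeholders:

aws opsworks-cm create-backup \
    --server-name "my-chef-server" \
    --description "Pre-maintenance backup"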
Add the component within Root with this line:
<Delivered.InviteButtons render={props => <Delivered.InviteButtonsUI {...props} />} />
You can invite a user to join a chat either by their user ID or by having them in your contacts.
If the user is in your contacts, they will show up below the buttons:
Then, you can simply press the button with the plus sign to send them an invitation. | https://docs.delivered.im/react/defaultUI/invite-buttons.html | 2019-08-17T12:42:52 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.delivered.im |
LibreOffice » scaddins
View module in: cgit Doxygen
Extra functions for calc.
These provide UNO components that implement more exotic calc functions. If you want to do the same, this can be a good place to start.
See also:
Generated by Libreoffice CI on lilith.documentfoundation.org
Last updated: 2019-08-11 15:14:38 | Privacy Policy | Impressum (Legal Info) | https://docs.libreoffice.org/scaddins.html | 2019-08-17T12:39:40 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.libreoffice.org |
Get-AddressRewriteEntry
Applies to: Exchange Server 2007 SP1, Exchange Server 2007 SP2, Exchange Server 2007 SP3
Use the Get-AddressRewriteEntry cmdlet to view an existing address rewrite entry that rewrites sender and recipient e-mail addresses in e-mail messages that are sent to and from an e-mail organization.
Syntax
get-addressrewriteentry [-Identity <AddressRewriteEntryIdParameter>] [-DomainController <Fqdn>]
Detailed Description

To run the Get-AddressRewriteEntry cmdlet on a computer that has the Edge Transport server role installed, you must log on by using an account that is a member of the local Administrators group on that computer.
Parameters
Input Types
Return Types
Errors
Exceptions
Example.
Get-AddressRewriteEntry

Get-AddressRewriteEntry "Address rewrite entry for contoso.com" | Format-List
TextPointer.Parent Property
Microsoft Silverlight will reach end of support after October 2021. Learn more.
Gets the logical parent that contains the current position.
Namespace: System.Windows.Documents
Assembly: System.Windows (in System.Windows.dll)
Syntax
'Declaration
Public ReadOnly Property Parent As DependencyObject
public DependencyObject Parent { get; }
Property Value
Type: System.Windows.DependencyObject
The logical parent that contains the current position. Can return the RichTextBox when at the top of the content stack.
Version Information
Silverlight
Supported in: 5, 4
Silverlight for Windows Phone
Supported in: Windows Phone OS 7.1
Platforms
For a list of the operating systems and browsers that are supported by Silverlight, see Supported Operating Systems and Browsers.
See Also | https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/ms522417%28v%3Dvs.95%29 | 2019-08-17T14:32:25 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
pfsspy is a python package for carrying out Potential Field Source Surface modelling. For more information on the actual PFSS calculation see this document.
Note
pfsspy is a very new package, so elements of the API are liable to change with the first few releases. If you find any bugs or have any suggestions for improvement, please raise an issue here:
Improving performance¶
pfsspy automatically detects an installation of numba, which compiles some of the numerical code to speed up pfss calculations. To enable this simply install numba and use pfsspy as normal.
Citing¶
If you use pfsspy in work that results in publication, please cite the archived code at both
Citation details can be found at the lower right hand of each web page.
Code reference¶
For the main user-facing code and a changelog see
for usage examples see
and for the helper modules (behind the scenes!) see
Using the MeshDataTool¶
The MeshDataTool is not used to generate geometry. But it is helpful for dynamically altering geometry, for example if you want to write a script to tessellate, simplify, or deform meshes.
The MeshDataTool is not as fast as altering arrays directly using ArrayMesh. However, it provides more information and tools to work with meshes than the ArrayMesh does. When the MeshDataTool is used, it calculates mesh data that is not available in ArrayMeshes such as faces and edges, which are necessary for certain mesh algorithms. If you do not need this extra information then it may be better to use an ArrayMesh.
Note
MeshDataTool can only be used on Meshes that use the PrimitiveType Mesh.PRIMITIVE_TRIANGLES.
As an example, let’s walk through the process of deforming the mesh generated in the ArrayMesh tutorial.
Assume the mesh is stored in an ArrayMesh named mesh. We then initialize the MeshDataTool from mesh by calling create_from_surface(). If there is already data initialized in the MeshDataTool, calling create_from_surface() will clear it for you. Alternatively, you can call clear() yourself before re-using the MeshDataTool.
var mdt = MeshDataTool.new()
mdt.create_from_surface(mesh)
create_from_surface() uses the vertex arrays from the ArrayMesh to calculate two additional arrays, one for edges and one for faces.
An edge is a connection between any two vertices. Each edge in the edge array contains a reference to the two vertices it is composed of, and up to two faces that it is contained within.
A face is a triangle made up of three vertices and three corresponding edges. Each face in the face array contains a reference to the three triangles and three edges it is composed of.
The vertex array contains edges, faces, normals, color, tangent, uv, uv2, bones, and weight information connected with each vertex.
To access information from these arrays you use a function of the form get_****():
mdt.get_vertex_count() # returns number of vertices in vertex array
mdt.get_vertex_faces(0) # returns array of faces that contain vertex[0]
mdt.get_face_normal(1) # calculates and returns face normal
mdt.get_edge_vertex(10, 1) # returns the second vertex comprising edge at index 10
What you choose to do with these functions is up to you. A common use case is to iterate over all vertices and transform them in some way:
for i in range(mdt.get_vertex_count()):
    var vert = mdt.get_vertex(i)
    vert *= 2.0 # scales the vertex by doubling size
    mdt.set_vertex(i, vert)
Finally, commit_to_surface() adds a new surface to the ArrayMesh. So if you are dynamically updating an existing ArrayMesh, first delete the existing surface before adding a new one.
mesh.surface_remove(0) # delete the first surface of the mesh
mdt.commit_to_surface(mesh)
Below is a complete example that creates a pulsing blob complete with new normals and vertex colors.
extends MeshInstance

var sn = OpenSimplexNoise.new()
var mdt = MeshDataTool.new()

func _ready():
    sn.period = 0.7
    mdt.create_from_surface(mesh, 0)

    for i in range(mdt.get_vertex_count()):
        var vertex = mdt.get_vertex(i).normalized()
        # Push out vertex by noise
        vertex = vertex * (sn.get_noise_3dv(vertex) * 0.5 + 0.75)
        mdt.set_vertex(i, vertex)

    # Calculate vertex normals, face-by-face
    for i in range(mdt.get_face_count()):
        # Get the index in the vertex array
        var a = mdt.get_face_vertex(i, 0)
        var b = mdt.get_face_vertex(i, 1)
        var c = mdt.get_face_vertex(i, 2)
        # Get vertex position using vertex index
        var ap = mdt.get_vertex(a)
        var bp = mdt.get_vertex(b)
        var cp = mdt.get_vertex(c)
        # Calculate face normal
        var n = (bp - cp).cross(ap - bp).normalized()
        # Add face normal to current vertex normal
        # this will not result in perfect normals, but it will be close
        mdt.set_vertex_normal(a, n + mdt.get_vertex_normal(a))
        mdt.set_vertex_normal(b, n + mdt.get_vertex_normal(b))
        mdt.set_vertex_normal(c, n + mdt.get_vertex_normal(c))

    # Run through vertices one last time to normalize normals and
    # set color to normal
    for i in range(mdt.get_vertex_count()):
        var v = mdt.get_vertex_normal(i).normalized()
        mdt.set_vertex_normal(i, v)
        mdt.set_vertex_color(i, Color(v.x, v.y, v.z))

    mesh.surface_remove(0)
    mdt.commit_to_surface(mesh)
SP Namespace
Applies to: SharePoint Foundation 2010
Provides a subset of types and members in the Microsoft.SharePoint namespace for working with a top-level site and its lists or child Web sites.
Classes
The following table lists classes in the SP namespace that are supported for public use in Microsoft SharePoint Foundation 2010. | https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2010/ee557057%28v%3Doffice.14%29 | 2019-08-17T14:15:14 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
Multiple Selection
RadGridView allows the user to select more than one item at a time from the displayed data. By default, this functionality is disabled and in order to turn it on, you have to set the MultiSelect property to true.
Multiple row selection
In order to enable multiple row selection, after setting the MultiSelect property to true, you have to set the SelectionMode to GridViewSelectionMode.FullRowSelect:
radGridView1.MultiSelect = true; radGridView1.SelectionMode = GridViewSelectionMode.FullRowSelect;
RadGridView1.MultiSelect = True RadGridView1.SelectionMode = GridViewSelectionMode.FullRowSelect
When these settings are applied, you have several options to make a multiple selection:
Press Ctrl + A to select all rows in RadGridView.
Hold the Ctrl key and click the rows that you want to select.
In order to mark a block selection, mark the first row of the desired selection, hold Shift and click on the last row of the desired selection.
All the selected rows are available in the RadGridView.SelectedRows collection
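Once the user has made a selection, you can work with that collection in code. A minimal sketch is shown below; the "Name" column is a placeholder for one of your grid's columns:

foreach (GridViewRowInfo row in radGridView1.SelectedRows)
{
    // Read a cell value from each selected row.
    Console.WriteLine(row.Cells["Name"].Value);
}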
Multiple cell selection
In order to enable multiple cell selection, after setting the MultiSelect property to true, you have to set the SelectionMode to GridViewSelectionMode.CellSelect:
radGridView1.MultiSelect = true; radGridView1.SelectionMode = GridViewSelectionMode.CellSelect;
RadGridView1.MultiSelect = True RadGridView1.SelectionMode = GridViewSelectionMode.CellSelect
Once you have applied these settings, the options for selection are:
Press Ctrl + A to select all cells in RadGridView.
Hold the Ctrl key and click the cells that you want to select.
In order to mark a block selection, mark the first cell of the desired selection, hold Shift and click on the last cell of the desired selection. Please note that this will select all the cells in the rectangle between the first and the second selected cell.
All the selected cells are available in the RadGridView.SelectedCells collection
If the MultiSelect property is enabled, you can make a multiple selection by holding the left mouse button down and moving the mouse making a rectangle. This will select all rows (cells) in the created rectangle.
CurrentRow and CurrentCell when multiple selection is used
When multiple row (cell) selection is used, the current row (cell) will be equal to the last row (cell) clicked when a selection is made.
Stateful transforms¶
There’s a subtle problem that sometimes bites people when working with formulas. Suppose that I have some numerical data called x, and I would like to center it before fitting. The obvious way would be to write:
y ~ I(x - np.mean(x)) # BROKEN! Don't do this!
or, even better we could package it up into a function:
In [1]: def naive_center(x):  # BROKEN! don't use!
   ...:     x = np.asarray(x)
   ...:     return x - np.mean(x)
   ...:
and then write our formula like:
y ~ naive_center(x)
Why is this a bad idea? Let’s set up an example.
In [2]: import numpy as np

In [3]: from patsy import dmatrix, build_design_matrices, incr_dbuilder

In [4]: data = {"x": [1, 2, 3, 4]}
Now we can build a design matrix and see what we get:
In [5]: mat = dmatrix("naive_center(x)", data)

In [6]: mat
Out[6]:
DesignMatrix with shape (4, 2)
  Intercept  naive_center(x)
          1             -1.5
          1             -0.5
          1              0.5
          1              1.5
  Terms:
    'Intercept' (column 0)
    'naive_center(x)' (column 1)
Those numbers look correct, and in fact they are correct. If all we’re going to do with this model is call dmatrix() once, then everything is fine – which is what makes this problem so insidious.
Often we want to do more with a model than this. For instance, we might find some new data, and want to feed it into our model to make predictions. To do this, though, we first need to reapply the same transformation, like so:
In [7]: new_data = {"x": [5, 6, 7, 8]}

# Broken!
In [8]: build_design_matrices([mat.design_info.builder], new_data)[0]
Out[8]:
DesignMatrix with shape (4, 2)
  Intercept  naive_center(x)
          1             -1.5
          1             -0.5
          1              0.5
          1              1.5
  Terms:
    'Intercept' (column 0)
    'naive_center(x)' (column 1)
So it’s clear what’s happened here – Patsy has centered the new data, just like it centered the old data. But if you think about what this means statistically, it makes no sense. According to this, the new data point where x is 5 will behave exactly like the old data point where x is 1, because they both produce the same input to the actual model.
The problem is what it means to apply “the same transformation”. Here, what we really want to do is to subtract the mean of the original data from the new data.
Patsy’s solution is called a stateful transform. These look like ordinary functions, but they perform a bit of magic to remember the state of the original data, and use it in transforming new data. Several useful stateful transforms are included out of the box, including one called center().
Using center() instead of naive_center() produces the same correct result for our original matrix. It’s used in exactly the same way:
In [9]: fixed_mat = dmatrix("center(x)", data)

In [10]: fixed_mat
Out[10]:
DesignMatrix with shape (4, 2)
  Intercept  center(x)
          1       -1.5
          1       -0.5
          1        0.5
          1        1.5
  Terms:
    'Intercept' (column 0)
    'center(x)' (column 1)
But if we then feed in our new data, we also get out the correct result:
# Correct!
In [11]: build_design_matrices([fixed_mat.design_info.builder], new_data)[0]
Out[11]:
DesignMatrix with shape (4, 2)
  Intercept  center(x)
          1        2.5
          1        3.5
          1        4.5
          1        5.5
  Terms:
    'Intercept' (column 0)
    'center(x)' (column 1)
Another situation where we need some stateful transform magic is when we are working with data that is too large to fit into memory at once. To handle such cases, Patsy allows you to set up a design matrix while working our way incrementally through the data. But if we use naive_center() when building a matrix incrementally, then it centers each chunk of data, not the data as a whole. (Of course, depending on how your data is distributed, this might end up being just similar enough for you to miss the problem until it’s too late.)
In [12]: data_chunked = [{"x": data["x"][:2]},
   ....:                 {"x": data["x"][2:]}]

In [13]: builder = incr_dbuilder("naive_center(x)", lambda: iter(data_chunked))

# Broken!
In [14]: np.row_stack([build_design_matrices([builder], chunk)[0]
   ....:               for chunk in data_chunked])
Out[14]:
array([[ 1. , -0.5],
       [ 1. ,  0.5],
       [ 1. , -0.5],
       [ 1. ,  0.5]])
But if we use the proper stateful transform, this just works:
In [15]: builder = incr_dbuilder("center(x)", lambda: iter(data_chunked))

# Correct!
In [16]: np.row_stack([build_design_matrices([builder], chunk)[0]
   ....:               for chunk in data_chunked])
Out[16]:
array([[ 1. , -1.5],
       [ 1. , -0.5],
       [ 1. ,  0.5],
       [ 1. ,  1.5]])
Note
Under the hood, the way this works is that incr_dbuilder() iterates through the data once to calculate the mean, and then we use build_design_matrices() to iterate through it a second time creating our design matrix. While taking two passes through a large data set may be slow, there’s really no other way to accomplish what the user asked for. The good news is that Patsy is smart enough to make only the minimum number of passes necessary. For example, in our example with naive_center() above, incr_dbuilder() would not have done a full pass through the data at all. And if you have multiple stateful transforms in the same formula, then Patsy will process them in parallel in a single pass.
And, of course, we can use the resulting builder for prediction as well:
# Correct!
In [17]: build_design_matrices([builder], new_data)[0]
Out[17]:
DesignMatrix with shape (4, 2)
  Intercept  center(x)
          1        2.5
          1        3.5
          1        4.5
          1        5.5
  Terms:
    'Intercept' (column 0)
    'center(x)' (column 1)
In fact, Patsy’s stateful transform handling is clever enough that it can support arbitrary mixing of stateful transforms with other Python code. E.g., if center() and spline() were both stateful transforms, then even a silly a formula like this will be handled 100% correctly:
y ~ I(spline(center(x1)) + center(x2))
However, it isn’t perfect – there are two things you have to be careful of. Let’s put them in red:
Warning
If you are unwise enough to ignore this section, write a function like naive_center above, and use it in a formula, then Patsy will not notice. If you use that formula with incr_dbuilders() or for predictions, then you will just silently get the wrong results. We have a plan to detect such cases, but it isn’t implemented yet (and in any case can never be 100% reliable). So be careful!
Warning
Even if you do use a "real" stateful transform like center() or standardize(), you still have to make sure that Patsy can "see" that you are using such a transform. Currently the rule is that you must access the stateful transform function using a simple, bare variable reference, without any dots or other lookups:
dmatrix("y ~ center(x)", data) # okay asdf = patsy.center dmatrix("y ~ asdf(x)", data) # okay dmatrix("y ~ patsy.center(x)", data) # BROKEN! DON'T DO THIS! funcs = {"center": patsy.center} dmatrix("y ~ funcs['center'](x)", data) # BROKEN! DON'T DO THIS!
Builtin stateful transforms¶
There are a number of builtin stateful transforms beyond center(); see stateful transforms in the API reference for a complete list.
Defining a stateful transform¶
You can also easily define your own stateful transforms. The first step is to define a class which fulfills the stateful transform protocol. The lifecycle of a stateful transform object is as follows:
- An instance of your type will be constructed.
- memorize_chunk() will be called one or more times.
- memorize_finish() will be called once.
- transform() will be called one or more times, on either the same or different data to what was initially passed to memorize_chunk(). You can trust that any non-data arguments will be identical between calls to memorize_chunk() and transform().
And here are the methods and call signatures you need to define:
- class patsy.stateful_transform_protocol¶
- __init__()
It must be possible to create an instance of the class by calling the constructor with no arguments.
- memorize_chunk(*args, **kwargs)¶
Update any internal state, based on the data passed into memorize_chunk.
- memorize_finish()¶
Do any housekeeping you want to do between the last call to memorize_chunk() and the first call to transform(). For example, if you are computing some summary statistic that cannot be done incrementally, then your memorize_chunk() method might just store the data that’s passed in, and then memorize_finish() could compute the summary statistic and delete the stored data to free up the associated memory.
- transform(*args, **kwargs)¶
This method should transform the input data passed to it. It should be deterministic, and it should be “point-wise”, in the sense that when passed an array it performs an independent transformation on each data point that is not affected by any other data points passed to transform().
Then once you have created your class, pass it to stateful_transform() to create a callable stateful transform object suitable for use inside or outside formulas.
Here’s a simple example of how you might implement a working version of center() (though it’s less robust and featureful than the real builtin):
class MyExampleCenter(object):
    def __init__(self):
        self._total = 0
        self._count = 0
        self._mean = None

    def memorize_chunk(self, x):
        self._total += np.sum(x)
        self._count += len(x)

    def memorize_finish(self):
        self._mean = self._total * 1. / self._count

    def transform(self, x):
        return x - self._mean

my_example_center = patsy.stateful_transform(MyExampleCenter)
print(my_example_center(np.array([1, 2, 3])))
But of course, if you come up with any useful ones, please let us know so we can incorporate them into patsy itself! | https://patsy.readthedocs.io/en/v0.3.0/stateful-transforms.html | 2019-08-17T13:13:04 | CC-MAIN-2019-35 | 1566027313259.30 | [] | patsy.readthedocs.io |
Assign Default Home Pages Based on the Lightning App
Why: Assign a custom Home page for different apps, so that users that live in different apps don’t use the same layout. This additional layer of assignment makes designing a Home page simpler, so you can specify different customizations for different app users.
For example, the Home page for your org’s inside sales app can have a different layout and different components than the Home page for your field sales app, each focused on connecting reps in those areas with the tools and info they need.
Setting an app default Home page doesn’t impact existing profile assignments. Existing profile assignments are nested beneath the layer of app assignments. As a result, users with different profiles can see different Home pages within the same app.
How: To assign a Home page as an app default, from Setup, enter Home in the Quick Find box, and then select Home. You can also assign specific Home pages from the Lightning App Builder.
| https://docs.releasenotes.salesforce.com/en-us/spring19/release-notes/rn_home_assign_app.htm | 2019-08-17T13:37:39 | CC-MAIN-2019-35 | 1566027313259.30 | [array(['release_notes/images/home_activation.png',
On the Header of Martfury theme, there is a special menu that directs users to the Product Categories, and it is called: Shop by Department Menu
To change its setting, please go to Customize ▸ Header ▸ Department Menu, at the end of the panel, you will see settings for Shop by Department Menu.
Department Text
This is the text that shown on the header bar. You can change or translate this phrase in this setting.
Department Link
This is where you enter the link of your Department page.
Department Menu on Homepage
Here you can see two options: Open and Close.
1. Open
Note: If you select “Open” option, the Shop by Department Menu will not close when you clicking it.
2. Close
Choose the Close option to close the Department Menu when users do not use it.
Editing menu
To edit menu items in the Shop By Department menu, please go to Appearance ▸ Menus ▸ Edit Menus ▸ Select a menu to edit ▸ Shop By Department.
| http://docs.drfuri.com/martfury/7-shop-by-department-menu/ | 2019-08-17T13:36:45 | CC-MAIN-2019-35 | 1566027313259.30 | [array(['http://docs.drfuri.com/martfury/wp-content/uploads/sites/7/2018/08/shop-by-departmenu-1024x510.png',
OmniLight¶
Inherits: Light < VisualInstance < Spatial < Node < Object
Category: Core
- by right-clicking the curve.
The light’s radius.
- ShadowDetail omni_shadow_detail
See ShadowDetail.
- ShadowMode omni_shadow_mode
See ShadowMode. | https://docs.godotengine.org/en/latest/classes/class_omnilight.html | 2019-08-17T13:18:15 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.godotengine.org |
Setting up Azure Key Vault
The private/public key pairs used by Tessera can be stored in and retrieved from a key vault, preventing the need to store the keys locally.
This page details how to set up and configure an Azure Key Vault for use with Tessera.
The Microsoft Azure documentation provides much of the information needed to get started. The information in this section has been taken from the following pages of the Azure documentation:
-
-
Creating the vault¶
The Key Vault can be created using either the Azure Web Portal or the Azure CLI.
Using the portal¶
- Login to the Azure Portal
- Select Create a resource from the sidebar and choose Key Vault
- Fill out the necessary fields, including choosing a suitable name and location (the list of possible locations can be found using the Azure CLI, see below), and click Create
Using the CLI¶
Login to Azure using the Azure CLI
az login
Create a resource group, choosing a suitable name and location
az group create --name <rg-name> --location <location>
To view a list of possible locations use the command
az account list-locations
Create the Key Vault, choosing a suitable name and location and referencing the resource group created in the previous step:

az keyvault create --name <kv-name> --resource-group <rg-name> --location <location>

A Key Vault has now been created that can be used to store secrets.
Configuring the vault to work with Tessera¶
Azure uses an Active Directory system to grant access to services. We will create an ‘application’ that we will authorise to use the vault. We will provide the credentials created as a result of this to authenticate our Tessera instance to use the key vault.
In order for the vault to be accessible by Tessera, the following steps must be carried out:
- Log in to the Azure Portal
- Select Azure Active Directory from the sidebar
- Select App registrations, then New application registration, and complete the registration process. Make note of the Application ID.
- Once registered, click Settings, then Keys, and create a new key with a suitable name and expiration rule. Once the key has been saved make note of the key value - this is the only opportunity to see this value!
To authorise the newly registered app to use the Key Vault complete the following steps:
- Select All services from the sidebar and select Key vaults
- Select the vault
- Select Access policies, then Add new
- Search for and select the newly registered application as the Principal
- Enable the Get and Set secret permissions
Enabling Tessera to use the vault¶
Environment Variables¶
If using an Azure Key Vault, Tessera requires two environment variables to be set:
AZURE_CLIENT_ID: The Application ID

AZURE_CLIENT_SECRET: The application registration key
Both of these values can be retrieved during the application registration process as outlined above.
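As an illustrative sketch, when launching Tessera from a shell the two variables can simply be exported beforehand; the values and the config file name below are placeholders:

export AZURE_CLIENT_ID="<Application ID from the app registration>"
export AZURE_CLIENT_SECRET="<key value noted when the key was created>"
java -jar tessera-app-<version>-app.jar -configfile config.json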
Dependencies¶
The Azure dependencies are included in the tessera-app-<version>-app.jar. If using the tessera-simple-<version>-app.jar then azure-key-vault-<version>-all.jar must be added to the classpath.
Application Class
Definition
Provides static methods and properties to manage an application, such as methods to start and stop an application, to process Windows messages, and properties to get information about an application. This class cannot be inherited.
public ref class Application sealed
public sealed class Application
type Application = class
Public NotInheritable Class Application
- Inheritance
-
Examples

ref class Form1: public System::Windows::Forms::Form
{
private:
   Button^ button1;
   ListBox^ listBox1;

public:
   Form1()
   {
      button1 = gcnew Button;
      button1->Left = 200;
      button1->Text = "Exit";
      button1->Click += gcnew EventHandler( this, &Form1::button1_Click );
      listBox1 = gcnew ListBox;
      this->Controls->Add( button1 );
      this->Controls->Add( listBox1 );
   }

private:
   // The body of this handler was garbled in the source; it is reconstructed
   // here to mirror the Visual Basic version of the same example below.
   void button1_Click( Object^ /*sender*/, EventArgs^ /*e*/ )
   {
      int count = 1;
      // Check to see whether the user wants to exit the application.
      // If not, add a number to the list box.
      while ( MessageBox::Show( "Exit application?", "", MessageBoxButtons::YesNo ) == DialogResult::No )
      {
         listBox1->Items->Add( count );
         count++;
      }
      // The user wants to exit the application. Close everything down.
      Application::Exit();
   }
};

int main()
{
   // Starts the application.
   Application::Run( gcnew Form1 );
}
Public Class Form1
   Inherits Form

   <STAThread()> _
   Shared Sub Main()
      ' Start the application.
      Application.Run(New Form1)
   End Sub

   Private WithEvents button1 As Button
   Private WithEvents listBox1 As ListBox

   Public Sub New()
      button1 = New Button
      button1.Left = 200
      button1.Text = "Exit"
      listBox1 = New ListBox
      Me.Controls.Add(button1)
      Me.Controls.Add(listBox1)
   End Sub

   Private Sub button1_Click(ByVal sender As Object, _
         ByVal e As System.EventArgs) Handles button1.Click
      Dim count As Integer = 1
      ' Check to see whether the user wants to exit the application.
      ' If not, add a number to the list box.
      While (MessageBox.Show("Exit application?", "", _
            MessageBoxButtons.YesNo) = DialogResult.No)
         listBox1.Items.Add(count)
         count += 1
      End While
      ' The user wants to exit the application.
      ' Close everything down.
      Application.Exit()
   End Sub
End Class
Remarks. | https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.application?view=netframework-4.7.2 | 2019-08-17T13:51:24 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.microsoft.com |
Rocky Series Release Notes¶
7.0.0-12¶
Deprecation Notes¶
Deprecated the generate_iv option name. It has been renamed to aes_gcm_generate_iv to reflect the fact that it only applies to the CKM_AES_GCM mechanism.
7.0.0¶
New Features¶
Added new options to the PKCS#11 Cryptographic Plugin configuration to enable the use of different encryption and hmac mechanisms. Added support for CKM_AES_CBC encryption in the PKCS#11 Cryptographic Plugin.
Remap the order:put to orders:put to align with language in the orders controller.
Upgrade Notes¶
(For deployments overriding default policies) After upgrading, please review Barbican policy files and ensure that you port any rules tied to order:put are remapped to orders:put.
Deprecation Notes¶
Deprecated the p11_crypto_plugin:algoritm option. Users should update their configuration to use p11_crypto_plugin:encryption_mechanism instead.
Bug Fixes¶
By default barbican checks only the algorithm and the bit_length when creating a new secret. The xts-mode cuts the key in half for aes, so for using aes-256 with xts, you have to use a 512 bit key, but barbican allows only a maximum of 256 bit. A check for the mode within the _is_algorithm_supported method of the class SimpleCryptoPlugin was added to allow 512 bit keys for aes-xts in this plugin.
Fixed the response code for invalid subroutes for individual secrets. The API was previously responding with the incorrect code “406 - Method not allowed”, but now responds correctly with “404 - Not Found”. | https://docs.openstack.org/releasenotes/barbican/rocky.html | 2019-08-17T13:17:28 | CC-MAIN-2019-35 | 1566027313259.30 | [] | docs.openstack.org |
OAuth 2.0 tutorial - create an OAuth provider and profile

Set up the Google service as an OAuth provider in ServiceNow by entering your client information, Google API URLs, and configuring the OAuth profile.

Before you begin
Role required: oauth_admin
You must have configured the Google service as an OAuth provider and recorded your Client ID and Client Secret values.

Procedure
1. Navigate to System OAuth > Application Registry.
2. Click New.
3. Select Connect to a third party OAuth Provider.
4. Enter a Name for the OAuth provider. For this example, use Google.
5. Enter the Client ID and Client Secret that you obtained from Google.
6. Set the Default Grant type to Authorization Code.
7. In the Authorization URL field, enter the Google authorization endpoint URL.
8. In the Token URL field, enter the Google token endpoint URL.
9. In the Redirect URL field, enter https://<instance>.service-now.com/oauth_redirect.do. This URL must match the redirect URL provided to Google.
10. In the Token Revocation URL field, enter the Google token revocation URL.
11. Right-click the form header and select Save. A new OAuth Entity Profile record is created.
12. In the OAuth Entity Scopes embedded list, add a new row with the Name and OAuth scope values set to the Google contacts read-only scope.
13. Right-click the form header and select Save.
14. In the OAuth Entity Profiles embedded list, select the automatically-created profile.
15. In the OAuth Entity Profile Scopes embedded list, add a new row and select the Google contacts API read-only scope.
16. Click Update.
Controlling Access for Cost Explorer
How you manage access to the information in Cost Explorer depends on how your AWS account is set up. Your account might be set up to use the AWS Identity and Access Management (IAM) service to grant different levels of access to different IAM users. Your account might be part of consolidated billing in AWS Organizations, in which case it is either a master account or a member account. For information about managing access to Billing and Cost Management pages, see Controlling Access. For more information about consolidated billing, see Consolidated Billing for Organizations.
Granting Cost Explorer Access
You can enable Cost Explorer only if you are the owner of the AWS account and you signed in to the account with your root credentials. If you are the owner of a master account in an organization, enabling Cost Explorer enables Cost Explorer for all the organization accounts. In other words, all member accounts in the organization are also granted access. You can't grant or deny access individually.
Cost Explorer and IAM Users
An AWS account owner who is not using consolidated billing has full access to all Billing and Cost Management information, including Cost Explorer. After you enable Cost Explorer, you should interact with Cost Explorer as an IAM user. If you have permission to view the Billing and Cost Management console, you can use Cost Explorer.
An IAM user must be granted explicit permission to view pages in the Billing and Cost Management console. With the appropriate permissions, the IAM user can view costs for the AWS account to which the IAM user belongs. For the policy that grants the necessary permissions to an IAM user, see Controlling Access.
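Purely as an illustration (the linked topic contains the exact policy recommended by AWS), a minimal IAM policy of this shape grants read access to billing and cost data; the action names shown are examples:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["aws-portal:ViewBilling", "ce:Get*"],
      "Resource": "*"
    }
  ]
}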
Consolidated Billing Considerations
The owner of the master account in an organization has full access to all Billing and Cost Management information for costs incurred by the master account and by member accounts, and can view all costs in Cost Explorer. The owner of a member account in an organization can see costs for the member account, but can't see costs for any other account in the organization. For more information, see Consolidated Billing for Organizations. | http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-explorer-access.html | 2017-07-20T18:41:01 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.aws.amazon.com |
Publicising your talk¶
Oxford Talks will automatically compile listings of talks in subject areas and can pull together all talks belonging to a department or a division. The more information you can give about a talk, the wider it will be publicised.
Adding your talk to topic listings¶
We use Topics to group talks into subject areas. Assigning one or more topics to your talk will mean that it will have a better chance of being discovered and readvertised by specialist communities.
Go to the Topics field on the Add talk form. Start typing and you will be offered options from the Library of Congress Subject Headings. Once you’ve found and selected a topic it will be highlighted in blue. To remove it, just click on the ‘X’.
Please start with broader topics first e.g.: ‘Neuroscience’ or ‘Ancient History’, and then add narrower topics in the specialist area of the talk e.g.: ‘Molecular Neurobiology’ or ‘Naval Warfare’.
There are some further hints in the Identifying suitable topics section of this guide
Viewing topic listings¶
To see a listing of talks for a specific topic, type the topic name into the site search box and then use the Topic filters on the left hand side of the results page. Alternatively, click on the topic name when viewing a talk.
The talks site will create a listing for any topic if you provide the FAST Topic URI e.g.:
You can search for the FAST Topic URI here:
look for a phrase like this in the results:
"uri":""
Adding your talk to department listings¶
We use the University’s complete list of units, buildings and locations, Oxpoints, to specify the department or unit a talk belongs to.
Start typing the name in the Organising department field in either the Series or the Talk editing form.
Once you’ve assigned a department or unit to a Series it will be automatically assigned to any Talks you then add to the Series.
If you choose a sub-department or unit, then the talk will also appear in the parent department and division listings.
Adding an abstract¶
Keywords in the talk abstract will be used for searching, so please add the abstract if you have it.
Public Collections¶
Note
There have been changes to this part of Oxford Talks in the latest release (May 2016)
As well as topic and department listings, there may be some more ad hoc listings you would like your talk to be included in. You can set up a Public Collection to collect together talks relevant to a particular theme, enterprise or project within the University - Athena Swan is a good example.
- Follow the instructions in the User Guide - Collect talks you are interested in - to create and publish a list
- You can also allow other Talks Editors to add talks to your list - see the Sharing Editing section for more details | http://talksox.readthedocs.io/en/latest/user/talk-editors/publicizing-your-talk.html | 2017-07-20T18:29:22 | CC-MAIN-2017-30 | 1500549423320.19 | [array(['../../_images/adding-your-talk-to-topic-listings.png',
Under plugin settings, the Installed Plugins tab lists which plugins are currently installed in the Karaf instance selected in the Karaf instance data panel. System Plugins cannot be uninstalled through the UI. (The Plugin Manager is itself a system plugin). Non-system plugins can be reinstalled or removed from the system. Each plugin has metadata associated with it which is used to identify and describe the plugin.
The Plugin Manager obtains a list of available plugins from the Available Plugins server.
The Available Plugins server can be part of an externally hosted plugin shopping cart or it can simply be a URL serving the internal list of available plugins as described in the section on Internal Plugins.
In order for externally downloaded plugins to be installed, the Available Plugins server must have a related maven repository from which Karaf can download the feature. By default, feature download is not enabled in OpenNMS HORIZON. To enable Karaf external feature download, the address of the maven repository should be entered in the org.ops4j.pax.url.mvn.cfg file in the OpenNMS HORIZON /etc directory.
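As an illustrative sketch only, an entry in that file typically takes the following form; the repository URL and id are placeholders, not values from this guide:

org.ops4j.pax.url.mvn.repositories = \
    https://plugins.example.org/maven2@id=external-plugins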
Alternatively the Plugin Manager can list the available plugins which have been installed on the local machine as bundled Plugin KARs (using the Karaf KAR deploy mechanism) along with any internal plugins bundled with OpenNMS HORIZON. In this case, the Plugin Server URL should be pointed at http://localhost:8980/opennms.
The admin username and passwords are used to access the Available Plugins Server. If a shopping cart is provided for obtaining licences, the URL of the shopping cart should be filled in.
The Available Plugins panel lists the plugins which are available and listed by the Available Plugins server. These can be directly installed into the selected Karaf instance or can be posted to a manifest for later installation. If a plugin is installed, the system will try and start it. However if a corresponding licence is required and not installed, the features will be loaded but not started. You must restart the feature if you later install a licence key.
The Plugins Manifest for a given Karaf instance lists the target plugins which the Karaf instance should install when it next contacts the licence manager. If the Plugin Manager can communicate with the remote server, then a manifest can be selected for installation. A manual manifest entry can also be created for a feature. This can be used to install features which are not listed in the Available Features list. | http://docs.opennms.eu/latest/plugin-manager/installing-available-plugins.html | 2017-07-20T18:31:58 | CC-MAIN-2017-30 | 1500549423320.19 | [] | docs.opennms.eu |
Daylight saving time help and support
This article describes the Microsoft policy in response to daylight saving time (DST) and time zone changes.
Note
Subscribe to the Microsoft Daylight Saving Time & Time Zone Blog to receive the latest updates on changes around the world.
Applies to: Windows 10 - all editions
Original KB number::
Windows 10 update history
Windows 8.1 and Windows Server 2012 R2 update history
Windows 7 SP1 and Windows Server 2008 R2 SP1 update history
Windows Server 2012 update history
Windows Server 2008 SP2 update history | https://docs.microsoft.com/lt-LT/troubleshoot/windows-client/system-management-components/daylight-saving-time-help-support | 2021-07-24T09:09:01 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.microsoft.com |
Protecting user accounts using hashed passwords
In Smartsite 5, the HashPasswords boolean setting can be set to 1 to skip (encrypted) password-saving altogether. Smartsite will then hash the passwords on update using the MD5 algorithm and compare the hash with the one generated when a user/visitor logs on:
The setting enforces that User- and Visitor passwords are not stored inside the site database. Instead, only an MD5 hash is stored.
No longer will anyone with access to the database be able to retrieve passwords and decrypt them.
In iXperion 1.4+, the feature has been reimplemented in the standard .NET Membership Provider (SqlMembershipProvider).
Config Editor
To enable the feature, use the Smartsite Config Editor. Open the appropriate site, select Unlock and switch to Advanced Mode (using View menu). Then in the right pane within the Security section you have the option to set the Password Format. Select Hashed as the new value.
When saving this change, the Config Editor will update the passwordFormat attribute of the Sql Membership Provider within the web.config:
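The resulting provider entry looks roughly like the following sketch; only the passwordFormat attribute is the point here, and the other attribute values are placeholders:

<membership defaultProvider="SqlMembershipProvider">
  <providers>
    <add name="SqlMembershipProvider"
         type="System.Web.Security.SqlMembershipProvider"
         connectionStringName="SmartsiteDB"
         passwordFormat="Hashed" />
  </providers>
</membership>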
It will also check/set the above mentioned HashPasswords registry setting.
Furthermore, when the Password Format is set to Hashed, the Config Editor will verify the passwords already stored within the database. If not all passwords are in hashed format, the Config Editor offers the option to convert all passwords to hashed format:
| https://docs.seneca.nl/Smartsite-Docs/Install-Config/Security/Membership_and_Role_Providers/Protecting-user-accounts-using-hashed-passwords.html | 2021-07-24T08:14:41 | CC-MAIN-2021-31 | 1627046150134.86 | [array(['/Images/Security/HashPasswords.png', 'Hash Passwords CMS setting'],
Organic results & sitelinks
"organic": [ { "pos": 1, "url": "", "desc": "Welcome to adidas Shop for adidas shoes, clothing and view new collections for adidas Originals, running, football, training and much more.", "title": "adidas Official Website | adidas US", "sitelinks": { "expanded": [ { "url": "", "desc": "Shoes - Shop Women - Tops - Women's Tights & Leggings - ...", "title": "Women" }, { "url": "", "desc": "Shop our collection of men's sneakers, shoes & slides at ...", "title": "Men's Shoes" }, { "url": "", "desc": "Men's Shoes - Men's Shop - Men's Clothing - Running Shoes - Tops", "title": "Men" }, { "url": "", "desc": "Browse adidas women's shoes for running, working out, casual ...", "title": "Women's Shoes & Sneakers" } ] }, "url_shown": "› ...", "pos_overall": 1 }
TYPO3 Explained¶
The content of this document is related to TYPO3 CMS, a GNU/GPL CMS/Framework available from
Official Documentation
This document is included as part of the official TYPO3 documentation.
If you find an error or something is missing, please create an issue or make the change yourself. You can find out more about how to do this in Feedback and Contribute.
Core Manual
This document is a Core Manual. Core Manuals address the built in functionality of TYPO3 CMS
- Extension Development
- Introduction
- Extension management
- System and Local Extensions
- Files and locations
- composer.json
- Declaration File (ext_emconf.php)
- Configuration Files (ext_tables.php & ext_localconf.php)
- Naming conventions
- Extension configuration (ext_conf_template.txt)
- Extending the TCA array
- Choosing an extension key
- Creating a new extension
- Creating a new distribution
- Adding documentation
- Publish your extension
- Other resources
- Custom Extension Repository
TYPO3 A-Z
- Ajax
- Assets (CSS, JavaScript, Media)
- Authentication
- Autoloading
- Backend access control (Users & Roles)
- Backend modules
- Backend routing
- Backend user object
- Checking user access
- Checking access to current backend module
- Checking access to any backend module
- Access to tables and fields?
- Is “admin”?
- Read access to a page?
- Is a page inside a DB mount?
- Selecting readable pages from database?
- Saving module data
- Getting module data
- Getting TSconfig
- Getting the Username
- Get User Configuration Value
- Bootstrapping
- Broadcast channels
- Caching
- Coding Guidelines
- Configuration
- Constants
- Content Elements & Plugins
- Context API and Aspects
- Context Sensitive Help (CSH)
- Crop Variants for Images
- Custom file processors
- Database (Doctrine DBAL)
- Debugging
- Dependency injection
- Deprecation
- File abstraction layer (FAL)
- Directory structure
- Enumerations & BitSets
- Environment
- Error and exception handling
- Events, signals and hooks
- Extension scanner
- Flash messages
- Fluid
- FormEngine
- Form protection tool
- HTTP request library / Guzzle / PSR-7
- Icon API
- Internationalization and localization
- JavaScript in TYPO3 Backend
- LinkBrowser
- Locking API
- Logging Framework
- Mail API
- Mount points
- Namespaces
- Page types
- Pagination
- Password hashing
- Request handling (Middlewares)
- Rich text editors (RTE)
- Routing - “Speaking URLs” in TYPO3
- Security guidelines
- Search engine optimization (SEO)
- Services
- Session handling in TYPO3
- Site handling
- Basics
- Creating a new site
- Base variants
- Adding Languages
- Error handling
- Writing a custom page error handler
- Static routes
- Using environment variables in the site configuration
- Using site configuration in TypoScript
- Using site configuration in conditions
- Site settings
- CLI tools for site handling
- PHP API: accessing site configuration
- Extending site configuration
- Soft references
- Symfony Console Commands (cli)
- Symfony expression language
- System Categories
- System registry
- TCE (TYPO3 Core engine) & DataHandler
- Testing
- Upgrade wizards
- Versioning and Workspaces
- XCLASSes (Extending Classes) | https://docs.typo3.org/m/typo3/reference-coreapi/master/en-us/Index.html | 2021-07-24T08:38:50 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.typo3.org |
Mixer has been designed with simplicity in mind and to make impactful textures as quickly as possible. This section is a step by step guide to help you make your first Mix.
You can move through the following topics to help you install mixer, discover activation and sign-in procedures, and jump right into a quick start where you will explore Mixer’s UI and its 2D/ 3D workflows.
Installing Mixer: Discover Mixer’s installation options, the folder setup to organize numerous assets, and how Mixer is closely integrated with Bridge.
Activation: Explore the power of Mixer as part of Quixel’s ecosystem, all for free.
Quick Start: Explore Mixer’s interface and discover the specific workflows for both 2D and 3D texturing.
Home Button and Menu Icon
Home button, located in the upper left corner, takes you to the start page of Codejig ERP.
Menu icon, located next to the Home button, folds in and out the Main menu.
By default, the Main menu is folded, and you can only see the icons instead of names of modules available in Codejig ERP. To fold it out:
- Click the Menu icon (pic) next to the Home button.
As a result, the Main menu is permanently folded out.
If the Main menu is unfolded, you see the name of your application next to the Home button.
More information | https://docs.codejig.com/en/entity2305843015656313960/view/4611686018427399521 | 2021-07-24T08:40:15 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.codejig.com |
specialised type of Node that provides uniqueness consensus by attesting that, for a given transaction, it has not already signed other transactions that consume any of the proposed transaction's input states.
- For all Corda Enterprise Network Manager release notes, see the Corda Enterprise Network Manager release notes page.
Corda Enterprise
Corda Enterprise is a commercial edition of the Corda platform, specifically optimised to meet the privacy, security and throughput demands of modern day business. Corda Enterprise is interoperable and compatible with Corda open source and is designed for organisations.
Corda Enterprise vs Corda open source: feature comparison
More details on Corda Enterprise features compared to Corda open source features follow below. | https://docs.corda.net/docs/corda-enterprise/4.6.html | 2021-07-24T06:51:06 | CC-MAIN-2021-31 | 1627046150134.86 | [] | docs.corda.net |
The add-ons are independent applications, but they run under a common executable: mender-connect. Every add-on, with configure being one notable exception, requires Mender Connect to function.

Once you have installed mender-connect, there are no other items needed to get the add-ons on your device. All you need is a proper configuration.
The following table shows a brief summary of the add-ons.
Mender Connect is loosely coupled with the Mender Client. The main information passed between mender-client and mender-connect is the device authorization status. Since only accepted devices can interact with the Mender Server, the Mender Client passes over DBus the authorization token which Mender Connect uses to establish a Websocket connection to the server. We use the well-known and well-defined open APIs, which makes the solution flexible and portable.
Please refer to the following sections for the Mender Connect installation:
After installation, please refer to the add-ons subsections for the configuration options, including the enabling and disabling of the features.
Please note, that you have to enable DBus in the Mender client for most of the add-ons to function.
We describe the specific add-ons configuration in the following sections. All Mender Connect based add-ons can share the same configuration file /etc/mender/mender-connect.conf, which, for example, can look like this:

{
  "ClientProtocol": "https",
  "HttpsClient": {
    "Certificate": "/certs/cert.pem",
    "Key": "/keys/key.pem"
  },
  "ServerCertificate": "/certs/hosted.pem",
  "ServerURL": "wss://192.168.1.1"
}
The mechanism for providing the configuration file and specifying the configuration values will depend on your choice of OS distribution or build system.
If you have already built an Artifact containing the rootfs, have a look at modifying a Mender Artifact.
Allows you to configure a client certificate and number of seconds to wait between consecutive reconnection attempts.
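A sketch of such a configuration fragment is shown below; note that the reconnection key name (ReconnectIntervalSeconds) is our assumption here and should be checked against the add-on reference for your Mender version:

{
  "HttpsClient": {
    "Certificate": "/certs/cert.pem",
    "Key": "/keys/key.pem"
  },
  "ReconnectIntervalSeconds": 5
}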
Evenly round to the given number of decimals.
Notes
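A quick illustration of typical usage (the input values are chosen arbitrarily):

>>> import numpy as np
>>> np.around([0.37, 1.64])
array([ 0.,  2.])
>>> np.around([0.37, 1.64], decimals=1)
array([ 0.4,  1.6])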